One of my favorite talks from Strange Loop... Sussman has amazing breadth across several domains (math, physics, electrical engineering, computing) as well as a long perspective on where we've been and where we can go. He and Julie were a true joy to have at Strange Loop this year.
In an older talk at HP, Sussman argued that we are at the alchemy stage of understanding computer science, which is why programmers are so often compared to wizards. I guess not much has changed in 20 years.
Has anybody noticed how closely this, coming from the programmer's point of view, is related to what Bret Victor has been talking about from the UI perspective? Take http://worrydream.com/#!/ScrubbingCalculator, or any of the "Kill Math" or "Explorable Explanations" articles, for example.
Combining a good UI with the propagators Sussman explains in this video would make a disruptive product in any field where decision making is needed. (Not that such tools don't already exist in the hands of some, of course.)
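For anyone curious what the propagator idea looks like in code, here's a minimal sketch. This is a toy simplification, not Sussman's actual implementation: real propagator systems merge partial information, track provenance, and handle contradictions far more gracefully.

```python
# Toy propagator network: cells hold information, propagators are
# stateless procedures that wake up when their input cells change.

class Cell:
    def __init__(self, name):
        self.name = name
        self.content = None      # None means "no information yet"
        self.neighbors = []      # propagators to re-run on change

    def add_content(self, value):
        if value is None or value == self.content:
            return               # nothing new to learn
        if self.content is not None:
            raise ValueError(f"contradiction in cell {self.name}")
        self.content = value
        for run in self.neighbors:
            run()

def propagator(inputs, output, fn):
    """Attach fn so it fires whenever every input cell has content."""
    def run():
        vals = [c.content for c in inputs]
        if all(v is not None for v in vals):
            output.add_content(fn(*vals))
    for c in inputs:
        c.neighbors.append(run)
    run()

# A bidirectional Fahrenheit/Celsius constraint: information flows
# whichever way data arrives, unlike a one-way function call.
f, c = Cell("F"), Cell("C")
propagator([f], c, lambda fv: (fv - 32) * 5 / 9)
propagator([c], f, lambda cv: cv * 9 / 5 + 32)

c.add_content(100)
print(f.content)   # 212.0
```

The point for UI work is exactly this bidirectionality: drag either quantity in a Victor-style scrubbing interface and the network settles the rest.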
I have got the same feeling. It is this idea of moving away from the pencil-and-paper operating system for modeling complex systems. Victor is more about making computing explorable and accessible to the masses, whereas Sussman is more about figuring out new ways to compute things. However, both are basically arguing that our current abstractions are not enough anymore. That is, we need to revisit some of the assumptions that were made 40 years ago about how programs should be structured.
CNMAT's o.dot library for Max has a similar notion. As messages pass through the system and affect new results, the resulting message preserves its entire history.
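The general idea of a message that carries its own provenance can be sketched in a few lines. To be clear, this is not the o.dot API, just an illustration of the pattern:

```python
# A value that records every transformation applied to it
# (illustrative only; not how o.dot actually represents messages).

from dataclasses import dataclass

@dataclass(frozen=True)
class Message:
    value: float
    history: tuple = ()   # (step_name, value_before_step) pairs

    def apply(self, name, fn):
        # Return a NEW message; the old one is never mutated,
        # so every intermediate state stays recoverable.
        return Message(fn(self.value), self.history + ((name, self.value),))

m = Message(440.0)
m = m.apply("halve", lambda v: v / 2)
m = m.apply("add_3", lambda v: v + 3)
print(m.value)     # 223.0
print(m.history)   # (('halve', 440.0), ('add_3', 220.0))
```

Keeping the history immutable is what makes it useful for debugging signal chains: you can always replay how a result came to be.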
It's too bad we only saw his face and not his interaction with the slides; watching him explain the circuit and math examples live was like watching an Olympic gymnast.
He originally asked to use an old-school transparency projector and we had a camera-enabled one ready for him but he ended up creating actual slides. I kind of wish he'd actually had the transparencies!
The 1GB for a human thing, whether or not it's off by a factor of 10, is the cost to build an infant and dismisses the real complexity of a human. The cost to build a high functioning adult is vastly higher. I don't know what it would cost to build me now (complete with screwed up kidney!), but I'm sure that it's quite a bit higher than 1GB: I've learned English, Spanish, love (or so my wife would say), loveV2 (or so my kids would say), basically every computer language, how to catch a football, how to WALK, how to have sex, how to have a conversation over cocktails, etc.
The magic of computers is that once N programs have run through the process of learning to do something, we can clone it. Getting to "how to speak English" is going to be hard; building 1e9 machines to do it will be relatively easy.
meh... He referred to the 1GB a couple of times and I didn't hear him talk about how humans are much more complicated. Also [as an atheist], my point is not to argue against Sussman, but to point out that calculating the storage requirements of a human is complicated. Putting aside all the guts, walking, eating, etc., we store an immense amount of information. What is incredible is that 1GB is what's required to specify the creation and evolution of the data structure necessary to become an adult (and to die).
I really didn't like his 1GB example in the beginning. You can't compare humans to computer programs in terms of capability. Computer programs do very different things. Sure we can spot the missing triangle faster, but we can't sort 1M names very quickly.
And on flexibility of the human source code -- sure a small change results in a cow. But a small change in Windows (default registry settings) can make Windows start in its standard shell, command line, safe mode, Media Center mode, etc...
While I can be in awe of the complexity and power of living organisms, I don't think it's all that useful to compare them to programs -- at least not based on our current understanding of biology.
I think you're missing the point. Computer programs are processes. There's a whole lot of amazing biological processes out there. Why are current human formulations about the nature of process (programs) so brittle compared to these other processes we observe? It's a humbling talk. Our notions of computation are somewhere at the primordial soup phase.
They're more brittle because they can be. For example, in computers, loading and execution of a program generally occur without transcription errors.
With that said, there are things about computer programs that are hard problems in human biology (and note the analog is really more of an OS to an animal -- the animal is a set of processes, not just one process). I can easily use libraries in my current program. Transplants are still non-trivial in humans. I can kill my shell and it will come back. I can even hit an unrecoverable error, reboot, and things will usually still work fine. I can probably remove half of the files on my computer and it will still work fine. I can hibernate my computer, store the state, and send it to a different piece of hardware. I can take an image of my machine and clone it to 100 other machines. I can add new features and upgrade my OS -- you generally can't do that to your body, at least not in any satisfactory way.
Sure, there are some animals with non-essential components, just like in operating systems. But remove the heart or the lungs or the brain from most animals and they'll die. Cancers can kill most animals -- there's no real equivalent for operating systems. There is generally nothing that will flat-out kill an OS.
There are plenty of programs written on the assumption that there will be tons of memory errors -- systems that can not only detect problems automatically but try a range of solutions to fix the hardware problems without intervention. But it's more a question of cost to develop vs. deploy. If you're sending a probe to Saturn, or fielding 100 million devices to monitor power transmission without interruption for years, you build a very different system than servers that can be monitored by people.
PS: Don't forget your DNA is a single program that's been running continuously for over 3 BILLION years, because at no point did any of your ancestors die before having offspring.
The biggest problem behind human transplants is that our defense mechanisms (which is a hard problem in computing!) will kill the foreign material. So it's a trade-off, not a hard problem.
In any case, the real difference between biological DNA and programs is the latter is designed to be modified in a _directed_ way. You could think of DNA as a highly compressed program which is modified _in its compressed form_. In evolution, changes are made randomly, so this isn't really a problem - if anything, the magnification effect is a good thing. But in computer programs, we know what we want to change, and don't want to have to make several million random changes to try to find one that brings us closer to the goal. And so computer programs are more brittle - small changes have small, predictable effects - while human DNA is more flexible, but at the expense of predictability.
The OS is analogous to electrical impulses in the animal brain. The real thing that separates animal brains from computers is the lack of persistent storage media that retain information when you remove power or other operating components.
This seems vaguely like the kind of thing Hofstadter was working on with Copycat etc. (particularly if you think of Hofstadter's version as being more complex because he didn't have the computing power to run many things in parallel, and so needed higher-level control to allocate resources).
You can get it with Chrome by launching with the iPad user agent and looking at the source. Here's a gist with an alias for launching Chrome on OS X with the iPad user agent.
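An alias along these lines would do it. The alias name and the exact UA string below are my own stand-ins, not the contents of the linked gist:

```shell
# Launch a new Chrome instance on OS X spoofing an iPad user agent.
# Alias name and UA string are illustrative; swap in whatever UA you need.
alias ipad-chrome='open -na "Google Chrome" --args \
  --user-agent="Mozilla/5.0 (iPad; CPU OS 5_0 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9A334 Safari/7534.50"'
```

The `-n` flag to `open` forces a new instance so the `--user-agent` switch actually takes effect (Chrome ignores command-line switches passed to an already-running instance).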