This post is basically a repackaging of Nick Bostrom's book Superintelligence, a work suspended somewhere between the sci-fi and non-fiction aisles.
As a philosopher of the future, Bostrom has successfully combined the obscurantism of Continental philosophy, the license of futurism and the jargon of technology to build a tower from which he foresees events that may or may not occur for centuries to come. Nostradamus in a hoody.
Read this sentence:
"It looks quite difficult to design a seed AI such that its preferences, if fully implemented, would be consistent with the survival of humans and the things we care about," Bostrom told Dylan Matthews, a reporter at Vox.
Notice the mixture of pseudo-technical terms like “seed AI” and “fully implemented”, alongside logical constructs such as “consistent with” -- all leading up to the phobic beacons radiating at the finale: “the survival of humans and the things we care about.”
It's interesting which technical challenges he feels optimistic about and which he feels pessimistic about. For reasons best known to himself, Bostrom has chosen to be optimistic that we can solve AI (some of the best researchers are not, and they are very conservative about the present state of research). It may well be the hardest problem in computer science. But he's pessimistic that we'll make it friendly.
Bostrom’s tower is great for monologs. The parlor game of AI fearmongering has entertained, rattled and flattered a lot of people in Silicon Valley, because it is about us. It elevates one of our core, collective projects to apocalyptic status. But there is no dialog to enter, no opponent to grapple with, because no one can deny Bostrom's pronouncements any more than he can prove them.
Superintelligence is like one of those books on chess strategy that walk you through one gambit after the other. Bostrom, too, walks us through gambits; for example, what are the possible consequences of developing hardware that allows us to upload or emulate a brain? Hint: It would make AI much easier, or in Bostrom’s words, reduce “recalcitrance.”
But unlike the gambits of chess, which assume fixed rules and pieces, Bostrom’s gambits imagine new pieces and rules at each step, substituting dragons for knights and supersonic albatrosses for rooks, so that we are forced to consider the pros and cons of decreasingly likely scenarios painted brightly at the end of a line of mights and coulds. In science fiction, this can be intriguing; in a work of supposed non-fiction, it is tiresome.
How can you possibly respond to someone positing a supersonic albatross? Maybe Bostrom thinks it will have two eyes, while I say three, and that might make all the difference, a few more speculative steps into the gambit.
In the New Yorker article The Doomsday Invention (http://www.newyorker.com/magazine/2015/11/23/doomsday-invent...), Bostrom noted that he was "learning how to code."
We might have expected him to do that before he wrote a book about AI. In a way, it's the ultimate admission of a charlatan. He is writing about a discipline that he does not practice.
Your core claim seems to be that the future of AI is impossible to predict for anyone, including Bostrom. If that's the case, it seems like that should inspire more caution, not less.
(There's also some DH2-level stuff (http://paulgraham.com/disagree.html) about the terms Bostrom chooses to use to make his argument... I'm not sure there's anything to be said about this, except that Bostrom's book seems more accessible to me than most academic writing (http://stevenpinker.com/why-academics-stink-writing), and I'd hate for him to receive flak for that. It's the ideas that matter, not the pomposity with which you communicate them. I also don't understand the implied disregard for anything that seems like science fiction -- what's the point in trying to speculate about the future if all the speculation will be ignored when decisions are being made?)
If an Oxford philosophy professor is not enough for you, here's a succinct explanation of the AI safety problem & its importance from Stuart Russell (Berkeley CS professor & coauthor of Artificial Intelligence: A Modern Approach):
My core claim is that Bostrom doesn't know his arse from his elbow. And a professorship in philosophy at Oxford is not, in itself, a great support for his authority on technical matters, or on the behavior of intelligent species yet to exist. That is, in fact, a topic on which no one speaks with authority.
I have nothing against science fiction, but I object to any fiction that disguises itself as non-fiction, as Bostrom's often does. Nor do I think that the impossibility of predicting the future of AI is, in itself, a reason for undue caution.
Bostrom is performing a sort of Pascalian blackmail by claiming that the slight chance of utter destruction merits a great deal of concern. In fact, he is no different from a long line of doomsday prophets who have evoked fictional and supposedly superior beings, ranging from deities to aliens, in order to control others. The prophet who awakens fear puts himself in a position of power, and that's what Bostrom is doing.
Regardless of Bostrom's motives, we as humans face an infinite number of possible dangers that threaten total destruction. These range from the angry Judeo-Christian god to the gray goo of nano-technology to Peak Oil or Malthusian demographics. In each case, the burden of proof that we should be concerned is on the doomsday prophet -- we should not default to concern as some sort of reasonable middle ground, because if we do, then we will worry uselessly and without pause. There is not enough room in one mind to worry about all the gods, aliens, golems and unforeseen natural disasters that might destroy us. And you and I both know, most if not all of those threats have turned out to be nonsense -- a waste of time.
I do not believe that Bostrom carries the burden of proof well. His notion of superintelligence is based on a recursively self-improving AI that betters itself infinitely. Most technological advances follow S-curves, moving slow, fast and slow. Bostrom does not seem to grasp that, and cites very little evidence of technological change to back up his predictions about AI. He should be, first and foremost, a technological historian. But he contents himself with baseless speculation.
We are in danger, reading Bostrom or Russell or Good, of falling into language traps, by attempting to reason about objects that do not exist outside the noun we have applied to them. The risk is that we accept the very premise in order to fiddle with the details. But the very premise, in this case, is in doubt.
>That is, in fact, a topic on which no one speaks with authority.
Agreed.
>Nor do I think that the impossibility of predicting the future of AI is, in itself, a reason for undue caution.
Sure.
>Bostrom is performing a sort of Pascalian blackmail by claiming that the slight chance of utter destruction merits a great deal of concern. In fact, he is no different from a long line of doomsday prophets who have evoked fictional and supposedly superior beings, ranging from deities to aliens, in order to control others. The prophet who awakens fear puts himself in a position of power, and that's what Bostrom is doing.
Consider: "Previous books published by authors hailing from Country X contained flaws in their logic; therefore since this book's author came from Country X, this book must also have a logical flaw." It's not a very strong form of argument: you might as well just read the book to see if it has logical flaws. Similarly even if a claim seems superficially similar to the kind of claim made by non-credulous people, that's far from conclusive evidence for it being an invalid claim.
It would be a shame if religious doomsayers have poisoned the well sufficiently that people never listen to anyone who is saying we should be cautious of some future event.
>And you and I both know, most if not all of those threats have turned out to be nonsense -- a waste of time.
Sure, but there have been a few like nuclear weapons that were very much not a waste of time. Again, you really have to take things on a case by case basis.
>His notion of superintelligence is based on a recursively self-improving AI that betters itself infinitely. Most technological advances follow S-curves, moving slow, fast and slow. Bostrom does not seem to grasp that, and cites very little evidence of technological change to back up his predictions about AI. He should be, first and foremost, a technological historian. But he contents himself with baseless speculation.
I can see how this could be a potentially fruitful line of reasoning and I'd encourage you to pursue it further, since the future of AI is an important topic and it deserves more people thinking carefully about it.
However, I don't see how this does much to counter what Bostrom wrote. Let's assume that AI development will follow an S-curve. This by itself doesn't give us an idea of where the upper bound is.
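To make that concrete, here's a minimal sketch in Python (purely illustrative, with made-up parameters): two logistic S-curves with identical shape and growth rate, differing only in the ceiling they level off at. Granting the S-curve shape tells you nothing about where that ceiling is.

    import math

    def s_curve(t, ceiling, rate=1.0, midpoint=0.0):
        # Logistic growth: slow, then fast, then slow, flattening out at `ceiling`.
        return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

    # Same shape and rate, wildly different ceilings (arbitrary illustrative numbers).
    for t in range(-6, 7, 3):
        print(t, round(s_curve(t, ceiling=1.0), 3), round(s_curve(t, ceiling=1_000_000), 1))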
Estimates suggest that human neurons fire at a rate of at most 200 times per second (200 hertz). Modern chips run in the gigahertz... so the fundamental operations of chips happen at something like a million times the speed of the fundamental operations of neurons. (Humans are able to do a ton of computation because we have lots of neurons and they fire in parallel.)
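As a rough back-of-the-envelope check on that ratio (a sketch using the figures above, which are loose estimates rather than measurements):

    neuron_firing_rate_hz = 200   # rough upper estimate for how often a neuron fires
    chip_clock_rate_hz = 1e9      # a modern chip clock at the low end, ~1 GHz
    ratio = chip_clock_rate_hz / neuron_firing_rate_hz
    print(f"{ratio:,.0f}")        # 5,000,000 -- the "something like a million times" above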
And there are lots of arguments one could make along similar lines. Human brains are limited by the size of our skulls; server farms don't have the same size limitations. Human brains are a kludge hacked together by evolution; code written for AIs has the potential to be much more elegant. We know that the algorithms the human brain is running are extremely suboptimal for basic tasks we've figured out how to replicate with computers like doing arithmetic and running numerical simulations.
Even within the narrow bounds in which humans do differ, we can see variation from village idiots to von Neumann. Imagine a mind V' that's as inscrutable to von Neumann as von Neumann is to a village idiot, then imagine a mind V'' that's as inscrutable to V' as V' is to von Neumann, etc. Given that AI makers do not need to respect the narrow bounds within which human brains differ, I don't think it would be surprising if the "upper bound" you're discussing ends up being equivalent to V with a whole bunch of prime symbols after it.
> Consider: "Previous books published by authors hailing from Country X contained flaws in their logic; therefore since this book's author came from Country X, this book must also have a logical flaw." It's not a very strong form of argument: you might as well just read the book to see if it has logical flaws. Similarly even if a claim seems superficially similar to the kind of claim made by non-credulous people, that's far from conclusive evidence for it being an invalid claim.
You're right, that's not a very strong form of argument. But that is not my argument. Your analogy is false: the connection between Bostrom and previous doomsayers is not as arbitrary as geography. It lies in the structure of their thought -- the way they falsely extrapolate great harm from small technological shifts.
> It would be a shame if religious doomsayers have poisoned the well sufficiently that people never listen to anyone who is saying we should be cautious of some future event.
I think many people do not realize the quasi-religious nature of AI speculation. Every religion and charismatic mass movement promises a "new man." In the context of AI, that is the singularity, the fusion of humans with intelligent machines. The power we associate with AI, present and future, leads us to make such dramatic, quasi-religious predictions. They are almost certainly false.
>Sure, but there have been a few like nuclear weapons that were very much not a waste of time. Again, you really have to take things on a case by case basis.
Nuclear weapons are fundamentally different from every other doomsday scenario I cited because they were designed explicitly and solely as weapons whose purpose was to wreak massive destruction at a huge cost to human life. The potential to destroy humanity defined nuclear weapons. That is not the case for other technological, demographic and resource-related doomsday scenarios, which are much more tenuous.
It sounds to me as though you believe that because religious nuts have prophesied doomsday since forever, we can rule out any doomsday scenario as "almost certainly false" (a very confident statement! Prediction is difficult, especially about the future. Weren't you just explaining how hard it is to predict these things?) But no actions on the part of religious doomsayers are going to protect us from a real doomsday-type scenario if the universe throws one at us.
The fact that a claim bears superficial resemblance to one made by wackos might be a reason for you to believe that it's most likely not worth investigating, but as soon as you spend more than a few minutes thinking about a claim, the weight of superficial resemblances is going to be overwhelmed by other data. If I have been talking to someone for three hours, and my major data point for inferring what they are like as a person is what clothes they are wearing, I am doing conversation wrong.
"Nuclear weapons are fundamentally different from every other doomsday scenario I cited because they were designed explicitly and solely as weapons whose purpose was to wreak massive destruction at a huge cost to human life. The potential to destroy humanity defined nuclear weapons. That is not the case for other technological, demographic and resource-related doomsday scenarios, which are much more tenuous."
Sure, and in the same way nuclear bombs were a weaponization of physics research, there will be some way to weaponize AI research.
Yes, Bostrom is all over the place, but the development curve upon which he bases his doomsday predictions is an exponential one in which an AGI recursively improves itself.
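For what it's worth, here's a toy version of that recursion in Python (arbitrary numbers, not a model anyone has actually proposed): if each round of self-improvement adds a fixed fractional gain, capability compounds exponentially; if the gains shrink round over round, the very same recursion levels off.

    def self_improve(capability, gain):
        # One round of "self-improvement": capability grows by a fraction `gain`.
        return capability * (1.0 + gain)

    constant, diminishing = 1.0, 1.0
    for n in range(1, 21):
        constant = self_improve(constant, gain=0.5)               # fixed returns each round
        diminishing = self_improve(diminishing, gain=0.5 / n**2)  # returns shrink each round
    print(round(constant, 1), round(diminishing, 2))              # ~3325.3 vs ~2.0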
An AGI has been 20 years away for the last 80 years, in the opinion of many computer scientists. The reason these predictions have little value is the same reason software is delivered late: the people involved often don't know the problems that need to be solved. In the case of AGI, the nature and complexity of the problems are far greater than those of a simple CRUD app.