So, here's a random thought on this whole subject of "AI risk".
Bostrom, Yudkowsky, etc. posit that an "artificial super-intelligence" will be many times smarter than humans, and will represent a threat somewhat analogous to an atomic weapon. BUT... consider that the phrase "many times smarter than humans" may not even mean anything. Of course we don't know one way or the other, but it seems to me that it's possible that we're already roughly as intelligent as it's possible to be. Or close enough that being "smarter than human" does not represent anything analogous to an atomic bomb.
So this might be an interesting topic for research, or at least for the philosophers: "What's the limit of how 'smart' it's possible to be"? It may be that there's no possible way to determine that (you don't know what you don't know and all that) but if there is, it might be enlightening.
> Of course we don't know one way or the other, but it seems to me that it's possible that we're already roughly as intelligent as it's possible to be.
I think Nick Bostrom had the perfect reply to that in Superintelligence: Paths, Dangers, Strategies:
> Far from being the smartest possible biological species, we are probably better thought of as the stupidest possible biological species capable of starting a technological civilization—a niche we filled because we got there first, not because we are in any sense optimally adapted to it.
It would be extremely strange if we were near the smartest possible minds. Just look at the evidence: Our fastest neurons send signals at 0.0000004c. Our working memory is smaller than a chimp's.[1] We need pencil and paper to do basic arithmetic. These are not attributes of the pinnacle of possible intelligences.
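(Quick back-of-the-envelope check on that figure, assuming roughly 120 m/s for the fastest myelinated axons, which is the commonly cited ballpark:)

```python
# Rough sanity check of the "0.0000004c" claim above.
SPEED_OF_LIGHT = 3.0e8   # m/s
FASTEST_NEURON = 120.0   # m/s, approximate conduction velocity of fast myelinated axons

print(FASTEST_NEURON / SPEED_OF_LIGHT)  # ~4e-7, i.e. about 0.0000004c
```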
Even if you think it's likely that we are near the smartest possible minds, consider the consequences of being wrong: The AI becomes much smarter than us and potentially destroys everyone and everything we care about. Unless you are supremely confident in humanity's intelligence, you should be concerned about AI risk.
There are savants who can do amazing feats of mental arithmetic, yet have severe mental disabilities in other areas. Perhaps there are some fundamental limits and trade-offs involved? We don't know yet whether computers will be able to break through those limits.
The humans who are in charge of our current society by virtue of having the most wealth, political power, or popularity aren't necessarily the smartest or quickest thinkers, at least not in the way that most AGI researchers seem to be targeting. So even if someone manages to build a real AGI there's no reason to expect it will end up ruling us.
>There are savants who can do amazing feats of mental arithmetic, yet have severe mental disabilities in other areas.
There are also geniuses who can do amazing feats of mental arithmetic and have no severe mental disabilities in any other areas. There are people who have perfect memories (and usually, they live totally normal lives but for the inability to forget a single event). There are people who are far better than average at recognizing faces, pattern recognition, and other tasks that are non-conscious.
The notion that there are "fundamental limits" that are for some reason near the typical human is the cognitive bias of the just-world fallacy. The world is not just. Bad things happen to good people. Good things happen to bad people. If there is a fundamental limit to how smart a mind can get, it's very, very far above the typical human, because there are exceptional humans that are that far away and there's no reason to think a computer couldn't beat them either. Deep Blue beat Kasparov.
>The humans who are in charge of our current society by virtue of having the most wealth, political power, or popularity aren't necessarily the smartest or quickest thinkers, at least not in the way that most AGI researchers seem to be targeting.
I'm not at all familiar with AGI research, but while there are people who start out ahead, there's no reason to think that you can't actually play the game and win it. Winning at the games of acquiring wealth, political power, or popularity is related to intelligence, insofar as intelligence is defined (as it is, at least in psychology) as the ability to accomplish general goals.
Walking into a meeting of the Deutsche Arbeiterpartei, joining as the 55th member, and later seizing control of the state requires intelligence. Landing in a dingy yacht with 81 men, bleeding a regime to death, and ruling it until your death requires intelligence. Buying the rights to the Quick and Dirty Operating System, licensing it to IBM, and becoming the de facto standard OS for all consumer computing requires intelligence. Presenting yourself as a folksy Texan when you're a private-school elite from Connecticut and convincing the electorate that you'd have a beer with them requires intelligence. All of these outcomes are goals, and accomplishing them demonstrates an ability to actually put the rubber to the road.
You're confusing the technical term of intelligence, which means something along the lines of "ability to accomplish goals," with the social construct of "intelligence" which means something along the lines of "ability to impress people." Intelligence is not always impressive, and humans have a fantastic ability to write off the accomplishments of others when they want to make the world appear more just. I mean, nobody feels good about the fact that we're one Napoleon away from a totally different world system that may or may not suit our interests. The belief that the world is somehow fundamentally different to the extent that the next Napoleon-class intelligence who isn't content managing a hedge fund and just increasing an int in a bank database can't actually redraw the world map, is just an illusion we tell ourselves to make the world we live in more fair, the stories we live out more meaningful, and the events of our lives have a little bit more importance.
AI could change things very, very drastically. It's right to be afraid.
> There are also geniuses who can do amazing feats of mental arithmetic and have no severe mental disabilities in any other areas.
And do they seem to hold any sort of significant power? It seems that intelligence is not a very good means for achieving power. Charm seems much more effective, and therefore dangerous. I'd be afraid of super-human charm much more than super-human intelligence.
Mental disabilities aside, very high IQ seems to be correlated with relatively low charm and low ability to solve a particular kind of problem (getting people to do what you want) that is far more dangerous than the kinds of problems that intelligence, as we commonly define it, is capable of solving. "Super" intelligent people are terrible problem-solvers when the problems involve other humans.
Ironically, the fact that people like me view the AI-scare as a religious apocalypse that is as threatening as any other religious apocalypse implies one of two things: 1) that the people promoting the reality of this apocalypse are not as intelligent as they believe themselves to be (a real possibility given their limited understanding of both intelligence and our real achievements in the field of AI) and/or that 2) intelligent people are terrible at convincing others, and so don't pose much of a risk.
Either possibility shows that super-human AI is a non-issue, certainly not at this point in time. As someone said (I don't remember who), we might as well worry about over-population on Mars.
What's worse is that machine learning poses other, much more serious and much more imminent threats than super-human intelligence, such as learned biases, which are just one example of conservative feedback-loops (the more we rely on data and shape our actions accordingly, the more the present dynamics reflected in the data ensure that they don't change).
>It seems that intelligence is not a very good means for achieving power. Charm seems much more effective, and therefore dangerous. I'd be afraid of super-human charm much more than super-human intelligence.
See these paragraphs in the post to which you replied:
>Walking into a meeting of the Deutsche Arbeiterpartei, joining as the 55th member, and later seizing control of the state requires intelligence. Landing in a dingy yacht with 81 men, bleeding a regime to death, and ruling it until your death requires intelligence. Buying the rights to the Quick and Dirty Operating System, licensing it to IBM, and becoming the de facto standard OS for all consumer computing requires intelligence. Presenting yourself as a folksy Texan when you're a private-school elite from Connecticut and convincing the electorate that you'd have a beer with them requires intelligence. All of these outcomes are goals, and accomplishing them demonstrates an ability to actually put the rubber to the road.
>You're confusing the technical term of intelligence, which means something along the lines of "ability to accomplish goals," with the social construct of "intelligence" which means something along the lines of "ability to impress people." Intelligence is not always impressive, and humans have a fantastic ability to write off the accomplishments of others when they want to make the world appear more just. I mean, nobody feels good about the fact that we're one Napoleon away from a totally different world system that may or may not suit our interests. The belief that the world is somehow fundamentally different to the extent that the next Napoleon-class intelligence who isn't content managing a hedge fund and just increasing an int in a bank database can't actually redraw the world map, is just an illusion we tell ourselves to make the world we live in more fair, the stories we live out more meaningful, and the events of our lives have a little bit more importance.
>Ironically, the fact that people like me view the AI-scare as a religious apocalypse that is as threatening as any other religious apocalypse implies one of two things:
Some alternative explanations:
3. You're wrong, and super-human AI is a massive issue, regardless of how you pattern-match it as "religious"
4. You're not creative enough (one might dare say "intelligent" but that would be too snarky) to imagine all the ways in which an AI could devastate humanity without it having much intelligence or much charm
5. Your view of the world is factually incorrect; I mean, you believe things like:
>Mental disabilities aside, very high IQ seems to be correlated with relatively low charm and low ability to solve a particular kind of problem (getting people to do what you want) that is far more dangerous than the kinds of problems that intelligence, as we commonly define it, is capable of solving. "Super" intelligent people are terrible problem-solvers when the problems involve other humans.
Let's assume that IQ is a good proxy for intelligence (it isn't): what IQ do you think Bill Gates or Napoleon or Warren Buffett or Karl Rove have? What IQ do you think Steve Jobs or Steve Ballmer had/have? Do you think they're just "average" or just not "very high"?
This:
>very high IQ seems to be correlated with relatively low charm
is again the just-world fallacy! There is no law of the universe that makes people who are very good at abstract problem solving bad at social situations. In fact, most hyper-successful people are almost certainly good at both.
And that ignores the fact that cognitive biases DO exist, and it's very possible to apply the scientific method and empirical problem solving to finding them, and then exploiting humans that way. This is a huge subfield of psychology (persuasion) and the basis of marketing. Do you think it takes some super-special never-going-to-be-replicated feat of non-Turing-computable human thought to write Zynga games?
It's nice to think the world is a safe place, but the reality is that our social order is increasingly precarious and an AI could easily disrupt that.
> which means something along the lines of "ability to accomplish goals," with the social construct of "intelligence"
Perhaps. But if that is the case, the people who are most intelligent by this definition are far from the ones recognized as intelligent by the AI-fearing community. Let me put it this way: Albert Einstein and Richard Feynman would not be among them. Adolf Hitler, on the other hand, would be a genius.
> You're wrong, and super-human AI is a massive issue, regardless of how you pattern-match it as "religious"
If you think I don't always presume that everything I say is likely wrong, then you misunderstand me. I don't, however, understand what you mean by "a massive issue". Do you mean imminent danger? Yes, I guess it's possible, but being familiar with the state of the art, I can at least discount the "imminent" part.
> You're not creative enough (one might dare say "intelligent" but that would be too snarky) to imagine all the ways in which an AI could devastate humanity without it having much intelligence or much charm
I can imagine many things. I can even imagine an alien race destroying our civilization tomorrow. What I fail to see is compelling arguments why AI is any more dangerous or any more imminent than hundreds of bigger, more imminent threats.
> In fact, most hyper-successful people are almost certainly good at both.
I would gladly debate this issue if I believed you genuinely believed that. If you had a list ordered by social power of the top 100 most powerful people in the world, I doubt you would say their defining quality is intelligence.
> it's very possible to apply the scientific method and empirical problem solving to finding them, and then exploiting humans that way. This is a huge subfield of psychology (persuasion) and the basis of marketing.
Psychology is one of the fields I know most about, and I can tell you that the people most adept at exploiting others are not the ones you would call super-intelligent. You wouldn't say they are of average intelligence, but I don't think you'd recognize their intelligence as being superior.
> It's nice to think the world is a safe place, but the reality is that our social order is increasingly precarious and an AI could easily disrupt that.
There are so many things that could disrupt that, and while AI is one of them, it is not among the top ten.
>Perhaps. But if that is the case, the people who are most intelligent by this definition are far from the ones recognized as intelligent by the AI-fearing community. Let me put it this way: Albert Einstein and Richard Feynman would not be among them. Adolf Hitler, on the other hand, would be a genius.
How so? Feynman in particular was quite able to continually accomplish his goals, and he purposely chose divergent goals to test himself (his whole "I'll be a biologist this summer" thing).
And yes, see my original comment re: it takes intelligence to walk into the DAP meeting and join as member 55 and come out conquering mainland Europe.
>I don't, however, understand what you mean by "a massive issue". Do you mean imminent danger? Yes, I guess it's possible, but being familiar with the state of the art, I can at least discount the "imminent" part.
The state of the art is irrelevant here; in particular, most of AI seems to be moving in the direction of "use computers to emulate human neural hardware and use massive amounts of training data to compensate for the relative sparseness of the artificial neural networks."
What's imminently dangerous about AI is that all it really takes is a few innovations, possibly in seemingly unrelated areas, to enable the (probably several) people who see the pattern to go and implement AI. This is how most innovation happens, but here it could be very dangerous, because...
>What I fail to see is compelling arguments why AI is any more dangerous or any more imminent than hundreds of bigger, more imminent threats.
AI could totally destabilize our society in a matter of hours. Our infrastructure is barely secure against human attackers, and it could be totally obliterated by an AI that chose to do that, or incidentally caused it to happen. An AI might not be able to launch nukes directly (in the US at least, who knows what the Russians have hooked up to computers), but it could almost certainly make it seem to any nuclear power that another nuclear power had launched a nuclear attack. There actually are places that will just make molecules you send them, so if the AI figures out protein folding, it could wipe out humanity with a virus.
AI is more dangerous than most things, because it has:
* limitless capability for action
* near instantaneous ability to act
The second one is really key; there's nearly nothing that would make shit hit the fan FASTER than a hostile AI.
If you have a list of hundreds of bigger, more imminent threats that can take humanity from 2015 to 20,000 BCE in a day, I'd like to see it.
>I doubt you would say their defining quality is intelligence.
I'm confused as to how you can read three comments of "intelligence is the ability to accomplish goals" and then say "people who have chosen to become politically powerful and accomplished that goal must not be people you consider intelligent."
>You wouldn't say they are of average intelligence, but I don't think you'd recognize their intelligence as being superior.
Well, they can exploit people. How's that for superiority?
My background is admittedly in cognitive psychology, not clinical, but I do see your point here. I'd like to make two distinctions:
* A generally intelligent person (say, Feynman) could learn to manipulate people and would almost certainly be successful at it
* People who are most adept at manipulating people usually are that way because that's the main skill they've trained themselves in over the course of their lives.
>it is not among the top ten.
Of the top ten, what would take less than a week to totally destroy our current civilization?
> Feynman in particular was quite able to continually accomplish his goals, and he purposely chose divergent goals to test himself (his whole "I'll be a biologist this summer" thing).
His goals pertained to himself. He never influenced the masses and never amassed much power.
> it takes intelligence to walk into the DAP meeting and join as member 55 and come out conquering mainland Europe.
I didn't say it doesn't, but it doesn't take super intelligence to do that. Just more than a baseline. Hitler was no genius.
> What's imminently dangerous about AI is that all it really takes is a few innovations, possibly in seemingly unrelated areas, to enable the (probably several) people who see the pattern to go and implement AI.
That could be said about just about anything. A psychologist could accidentally discover a fool-proof mechanism of brainwashing people; a microbiologist could discover an un-killable deadly microbe; an archeologist could uncover a dormant spaceship from a hostile civilization. There's nothing that shows that such breakthroughs in AI are any more imminent than in other fields.
> Our infrastructure is barely secure against human attackers, and it could be totally obliterated by an AI that chose to do that
Why?
> but it could almost certainly make it seem to any nuclear power that another nuclear power had launched a nuclear attack
Why can an AI do that but a human can't?
> limitless capability for action
God has limitless capability for action. But we have no reason whatsoever to believe that either God or true AI would reveal themselves in the near future.
> near instantaneous ability to act
No. Again,
> there's nearly nothing that would make shit hit the fan FASTER than a hostile AI.
There's nothing that would make shit hit the fan FASTER than a hostile spaceworm devouring the planet. But both the spaceworm and the AI are currently speculative sci-fi.
> I'm confused as to how you can read three comments of "intelligence is the ability to accomplish goals"
There are a couple of problems with that: one, that is not the definition that is commonly used today. Britney Spears has a lot of ability to achieve her goals, but no one would classify her as especially intelligent. Two, that is not where AI research is going. No one is trying to make computers able to "achieve goals", but able to carry out certain computations. Those computations are very loosely correlated with actual ability to achieve goals. You could define intelligence as "the ability to kill the world with a thought" and then say AI is awfully dangerous, but that definition alone won't change AI's actual capabilities.
> A generally intelligent person (say, Feynman) could learn to manipulate people and would almost certainly be successful at it
I disagree. We have no data to support that prediction. We know that manipulation requires intelligence, but we do not know that added intelligence translates to added ability to manipulate and that that relationship scales.
> what would take less than a week to totally destroy our current civilization?
That is a strange question, because you have no idea how long it would take an AI. I would say that whatever an AI could achieve in a week, humans could achieve in a similar timeframe and much sooner. In any case, as someone who worked with neural-networks in the nineties, I can tell you that we haven't made as much progress as you think. We are certainly not at any point where a sudden discovery could yield true AI any more than a sudden discovery would create an unkillable virus.
> The AI becomes much smarter than us and potentially destroys everyone and everything we care about. Unless you are supremely confident in humanity's intelligence, you should be concerned about AI risk.
I acknowledge it as a real risk, but it's not terribly high on my personal list of things to worry about right now. I like what Andrew Ng said about how worrying about this now is "like worrying about over-population on Mars".
Please notice that your reply is a different argument than the one you first put forth. Originally, you weren't worried about AI because you thought it could never –even in principle– vastly exceed human abilities. Now you're basically saying, "I don't need to worry because it won't happen for a long time." That is a huge amount of ground to cede.
I'm not so confident that human-level AI will take a long time. The timeline depends on algorithmic insights, which are notoriously difficult to predict. It could be a century. It could be a decade. Still, it seems like something worth worrying about.
> Please notice that your reply is a different argument than the one you first put forth. Originally, you weren't worried about AI because you thought it could never –even in principle– vastly exceed human abilities.
I never said I wasn't worried about AI. You're extrapolating from what I did say, which I've said all along was just a thought experiment, not a position I'm actually arguing for.
I really recommend you read Bostrom; he argues the relevant positions succinctly, if a bit drily.
It's one of those books that put the arguments so clearly that you're suddenly catapulted to a vastly superior knowledge of the subject compared to someone trying to do simple thought experiments.
Both of your arguments look outdated if you're one of the people 'in the know'.
Also, I suggest looking a bit more into what's going on in machine learning; it's become far more sophisticated than I had personally realized until a couple of months ago, when I was chatting with someone who is developing in it at the moment.
Or like worrying about global warming back when it would have been easier to prevent?
Ng's statement is, at best, equivalent to a student who is putting off starting their semester project until finals week. Yes it seems far away, but the future is going to happen.
I don't know. I mean, we don't seem to even be close to actually beginning to colonize Mars, much less be close to the point of overpopulation. I think Ng's statement, formed in an analogy similar to yours, would be closer to
"a freshman student who is putting off studying for his Senior final project until his Senior year".
The question Ng asked was something like "is there any practical action we can take today to address over-population on Mars?" as an analogy to "is there any practical step we can take today to address the danger of a super-AGI?". And honestly, I'm not convinced there is anything practical to do about super-AGI today. Well, nothing besides pursuing the "open AI" strategy.
But I'm willing to be convinced otherwise if somebody has a good argument.
> The AI becomes much smarter than us and potentially destroys everyone and everything we care about.
What makes you think we humans won't attempt to do even more harm towards humanity? Maybe the AI will save us from ourselves, and, being so much smarter, might guide us towards our further evolution.
I think most people didn't really understand the meaning of your comment. They seem to all equate intelligence and processing speed.
I think it's legitimately an interesting question. As in, it could be something like Turing completeness. All Turing complete languages are capable of computing the same things, some are just faster. Maybe there's nothing beyond our level of understanding, just a more accelerated and accurate version of it. An AI will think on the same level as us, just faster. In that case, in that hypothetical, an AI 100x faster than a person is not much better than 100 people. It won't forget things (that's an assumption, actually), its neuron firing or equivalent would be faster, but maybe it won't really be capable of anything fundamentally different than people.
This is not the same as the difference between chimps and humans. We are fundamentally on another level. A chimp, or even a million chimps, can never accomplish what a person can. They will not discover abstract math, write a book, speak a language.
Mind you, I suspect this is not the case. I suspect that a super intelligent AI will be able to think of things we can never hope to accomplish.
But it is an interesting question that I think is worth thinking about, rather than inanely downvoting the idea.
> They seem to all equate intelligence and processing speed.
It's helpful to remind people that every human brain that exists runs at the same processing speed†, yet we have greatly varying intelligence between us. (Also, IQ is an index, but people mistake it for a linear measure; human geniuses may be doing things that require many times the "intelligence" of the average person.)
† Okay, I lied: there are some people whose neurons have abnormal firing rates; the visible result is Parkinsonism.
Even if that is the case, a person with human-level intelligence, but with unlimited memory, ability to visualize, internet connection, no need to sleep, and thinking 100 times faster than a normal person would quickly become pretty much God to us.
I agree; I suspect that at this time there's more room for progress to be made by augmenting human intellect than by pursuing AI.
Think about how long it takes you to imagine a program vs actually coding it, or to imagine an object you want to create vs actually building it; there are at least two or three orders of magnitude of improvement to be had over the keyboard.
> As in, it could be something like Turing completeness. All Turing complete languages are capable of computing the same things, some are just faster. Maybe there's nothing beyond our level of understanding, just a more accelerated and accurate version of it. An AI will think on the same level as us, just faster. In that case, in that hypothetical, an AI 100x faster than a person is not much better than 100 people. It won't forget things (that's an assumption, actually), its neuron firing or equivalent would be faster, but maybe it won't really be capable of anything fundamentally different than people.
Yeah, that's another really good way of putting it. That's probably closer to what I meant, than what I said above.
Even if we assume:
- that there's only one class of "humanic" intelligence;
- that we can approximately represent instances in this class of intelligence as vectors of {memory, learning speed, computation speed, communication speed};
- that any AI that could be created is merely a vector in this n-dimensional intelligence space, lacking any extra-intelligent qualities;
- that productivity and achievement increase exponentially with the intelligence of the being you devote to a problem, but only logarithmically with the number of beings you devote (e.g. a being with intelligence vector {10,10,10,10} might be as productive as 10000 {1,1,1,1} beings);
then this doesn't exclude the possibility of us creating an AI with an intelligence vector twice an average human's intelligence vector, which can suggest improvements to its algorithms and datacenter and chip designs to become 10x as intelligent as a human, and from there it could quickly determine new algorithms, and eventually it's considering philosophy (and what to do about these humans).
The point is: viewing intelligence the way you suggest doesn't help us figure out what to do about "super" artificial intelligence.
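To make the toy model above concrete, here is a minimal sketch under the same assumptions (the four-component vector, the exponential-in-intelligence scaling, and the logarithmic-in-headcount scaling are all taken from the hypothetical above; every constant is made up, not measured):

```python
import math

# Toy "intelligence vector": {memory, learning speed, computation speed,
# communication speed}. Every number here is invented for illustration.
def productivity(intelligence_vector, count=1):
    """Assumption from the comment above: productivity grows exponentially
    with one being's intelligence but only logarithmically with headcount."""
    mean_intelligence = sum(intelligence_vector) / len(intelligence_vector)
    per_being = math.exp(mean_intelligence)      # arbitrary exponential scale
    return per_being * (1 + math.log(count))     # diminishing returns on count

human = (1, 1, 1, 1)
ai_2x = (2, 2, 2, 2)        # "twice an average human's intelligence vector"
ai_10x = (10, 10, 10, 10)   # after suggesting improvements to its own design

print(f"10,000 humans: {productivity(human, count=10_000):12,.0f}")
print(f"one 2x AI:     {productivity(ai_2x):12,.0f}")
print(f"one 10x AI:    {productivity(ai_10x):12,.0f}")
```

Under those (entirely assumed) scaling rules, the single 10x machine dwarfs any number of humans you throw at the problem, which is the whole force of the argument.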
If you look at the history of human evolution, this doesn't make sense. Evolution was very very slowly increasing human intelligence by, e.g., making our skulls bigger. Then we got to the point where we could use language/transmit knowledge between generations/etc. and got up to technological, spacefaring civilization in, evolutionarily speaking, no time whatsoever. This is not a story which suggests that human intelligence is some sort of maximum, or that evolution was running into diminishing returns and so stopped at human intelligence. It suggests that human intelligence is the minimum intelligence necessary to produce the kind of generational transfer that gets you up to technological civilization.
It's not like humans sit around all day thinking. We build tools to do the thinking for us.
Humans ourselves are very bad at predicting the weather. So we build supercomputers, and weather models, and now we're a bit better at it.
Now the question is: given the same budget, would a super-intelligent AI predict the weather in a substantially different way to humans? I cannot see how, but maybe that is my human stupidity talking.
This is a pretty random domain to pick. We don't know whether it's even possible to predict the weather much better than we already do. There are limits to that kind of prediction because weather is a chaotic system.
However if it is possible to improve weather simulations, and if there are advancements to be made in meteorology, then it's very likely an AI would be able to find them.
Humans are very bad at self improving. Imagine how smart we'd be now if our intelligence had increased at the same rate that computing power increases. Now imagine how fast computing power would've increased if we'd been that smart. Now imagine...
Close to the limit of how smart it's possible to be? Don't be silly. The human brain is limited by its slow speed, by the amount of cortical mass you can fit inside a human skull, and by the length of human lifetimes. Computers will not have any of those limitations.
In terms of speed: if you could build the exact silicon equivalent of a human brain, you may be able to run it several orders of magnitude faster, simply because it wouldn't be limited by the slow speeds of electrochemical processes in the human brain. Nerve impulses travel at speeds measured in meters per second. Neurons also need time to recharge between spike bursts, and they can physically damage themselves if they get too excited.
In terms of volume: much of our intelligence is in perceiving patterns. That's limited by cortical mass. Pattern recognition is what all these "deep learning" systems excel at. The more depth they add, the better they get. Having deeper pattern recognizers, or simply having more of them, means you can see more patterns, more complex patterns, etc. Things that might be beyond the reach of any human.
Then, in terms of data, machines have an advantage too. We're limited by our short lifetimes. How many people are expert musicians, genius mathematicians, rockstar programmers and great cooks? Very few. There are only so many hours in a day, and we only live so long. A machine could learn all those skills, and more. It could excel at everything. Speak every language, master every skill, be aware of so many more facts.
And finally, I posit that maybe we, humans, are limited in our ability to grasp complex conceptual relationships. If you think about it, the average person can fit 7-8 items in their short-term memory, in their brain's "registers", so to speak. That probably limits our ability to reason by analogy. We can go "A is to B what C is to D", but maybe more complex relationships with 50 variables and several logical connectives will seem "intuitive" to a machine that can manipulate 200 items in its short-term memory.
> The human brain is limited by its slow speed, by the amount of cortical mass you can fit inside a human skull, and by the length of human lifetimes. Computers will not have any of those limitations.
Right, but the idea I'm playing around with is this: Suppose you had a hypothetical creature with a brain 20x as fast as the human brain and with twice the volume. How much smarter would that creature be in practice? It's kind of an abstract idea (and I probably don't fully understand it myself), but I'm getting at something like "is there a point where that additional raw computing power just doesn't buy you anything meaningful", at least in terms of "does it represent an existential threat?" or "does the nuclear analogy hold?"
You could try looking at existing animal neural counts: https://en.wikipedia.org/wiki/List_of_animals_by_number_of_n... Doublings or 20xing get you a long way: a 1/20x cerebral cortex neural count decrease takes you from human to... horse. I like horses a lot, but there's clearly a very wide gulf in capabilities, and I don't like the thought of ever encountering someone who is to us as we are to horses.
I don't know. If you dropped a human infant into a den of bears (more or less the equivalent situation), I don't think it would be the bears who would be at a disadvantage. So even if we were able to create an AI as far above us as we are above bears (a pretty huge if), it hardly seems certain that it would suddenly (or ever) dominate us.
But we do dominate bears. They continue to exist only at our sufferance; we tolerate them for the most part (though we kill them quickly if they ever threaten us), but we could wipe them out easily if we wanted to. We probably won't do it deliberately, but we drive a number of species to extinction if they're in the way of resource extraction by us - there are estimates that 20% of extant species will go extinct as a result of human activity, and that's with us deliberately trying not to cause extinctions!
(There are more technical arguments that an AI's values would be unlikely to be the complex mishmash that human values are, so such an AI would be very unlikely to share our sentimental desire to not make species extinct)
But that's the point. A human society dominates bears. But a solitary human, raised by bears, wouldn't dominate them. So to assume that a solitary AI, "raised" by humans, would somehow be able to conquer us is a pretty problematic assumption (on top of a string of other pretty problematic assumptions).
An AI would likely be able to scale and/or copy itself effortlessly. A hundred clones of the same person absolutely would dominate bears, even if they'd been raised by them.
Think of it like Amdahl's law vs Gustafson's law. Maybe a field like calculus is a closed problem: there's not much more to solve there. But a computer could discover new theorems and proofs that would take a human two or three decades just to reach the point of being able to discover. Consider that a computer doesn't just have the ability to do what humans do faster, but has the ability to solve problems beyond human scale.
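For reference, a minimal sketch of the two laws being invoked (these are the standard textbook formulas; the 95% parallel fraction and 1,000 processors are arbitrary example numbers):

```python
def amdahl_speedup(parallel_fraction, processors):
    """Amdahl's law: problem size is fixed, so the serial part caps the speedup."""
    return 1.0 / ((1 - parallel_fraction) + parallel_fraction / processors)

def gustafson_speedup(parallel_fraction, processors):
    """Gustafson's law: grow the problem with the machine, and the speedup
    scales almost linearly with processor count."""
    return (1 - parallel_fraction) + parallel_fraction * processors

p, n = 0.95, 1000
print(amdahl_speedup(p, n))     # ~19.6x: diminishing returns on a fixed problem
print(gustafson_speedup(p, n))  # ~950x: keep enlarging the problem instead
```

The analogy being drawn is the Gustafson side: a machine doesn't just finish our fixed problems sooner, it can take on problems scaled far beyond what a human lifetime allows.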
Even if human intelligence was the pinnacle, AI could be still extremely dangerous just by running at accelerated simulation speed and using huge amounts of subjective time to invent faster hardware. See https://intelligence.org/files/IEM.pdf for discussion. The point is moot anyway though, since the hypothesis (that humans are the most intelligent possible) is just severely incompatible with our current understanding of science.
This comes with the assumption that the first AI will be able to run at better than human levels on something like a Raspberry Pi. More realistically, it's going to have to run on an immense supercomputer, and even if it can marginally improve (marginal because a human-level AI is no more likely to improve its own hardware than I am), it won't be able to just spread all over the world. It needs real physical hardware.
That is, unless the reason we don't have AI is because we haven't put the pieces together correctly, and it really could work on minimal hardware.
Just because it requires real hardware does not mean it can't spread all over the world. It may just do it by setting up virtual machines in data centers all over the world, or as a botnet with a distributed processing approach similar to Folding@home.
Even if that's true, imagine an AI as smart as John von Neumann on modafinil, that never thinks about food/sex/art/etc., that never sleeps, that has access to Wikipedia etc. at the speed of thought, and no morals. That's not uncontrolled intelligence explosion level disaster, but it's still highly dangerous.
But maybe it would think about art a lot. Or maybe we can't cut out sleep without harming our ability to reason. Or maybe the AI would be more interested in watching YouTube videos than working in the field of AI.
The only human-level general intelligence we have to look at is humans. Sure, it's possible that there's an easy way to build an artificial intelligence that doesn't need to sleep and isn't interested in art but that retains our ability to problem solve; but maybe there isn't. It's possible that we'll be able to easily figure out that alternative brain architecture and build it in the near future; but given how much we still don't understand about our own brain (and the failure to mimic the brains of even simple creatures), it doesn't seem likely. It's possible that if we were able to overcome those major hurdles, a super John von Neumann would be able to wield enormous political power; or it's possible that, like the real von Neumann, it would focus mostly on its research.
In order to get to the AI doomsday scenario, you have to assume that a lot of very unlikely things are all going to happen. And the main argument for them happening seems to be one of ignorance - "hey, we don't have any idea what this will be like, so we can't say that this improbable situation won't happen."
I'd say your ideas are more unlikely. You can't generalize from one example. An AI that hasn't gone though evolution is extremely unlikely to have the exact set of complex wants and needs that humans have.
And even if you're right, as Alexander says, in that case why the need to open-source it? If you think it's important that AI is open-source and not controlled by e.g. Facebook, that must be because you think AI is going to be powerful and effective. In which case it's as dangerous as he says.
But it's not a generalization from one example; I'm giving a string of different possibilities, based on the fact that we only have a single point of data. People seem to readily accept that particular scenario - the "AI as smart as John von Neumann on modafinil, that never thinks about food/sex/art/etc., that never sleeps, that has access to Wikipedia etc. at the speed of thought, and no morals" scenario - based on zero evidence that an AI would actually be like that.
People seem more ready to accept the idea that an artificial general intelligence would act the way it's portrayed in sci-fi stories than to accept that it might act like the other general intelligences we can observe (namely ourselves).
But all of your possibilities are ridiculously human-parochial. And they all boil down to "the AI might have diverse interests", which is very unlikely - we have them as a result of evolution, but an AI created by human programmers would very likely have only one driving interest. And whether that interest was food, sex, art, or some particular notion of morality, the result would be equally terrible.
A virtual campus with thousands of hyperfocused John von Neumanns collaborating telepathically and accomplishing subjective years of work in seconds of real-world time.
If we have a Von Neumann level intelligence running on a supercomputer, and we haven't solved friendliness by then, we've lost the future of humanity. As far as I can tell, all arguments against that conclusion are based on various kinds of wishful thinking.
And yet Big Yud refuses to publish the conversation. I know his arguments (unknown unknowns) but this is a very un-scientific approach and frankly why should we believe that what he said happened really happened?
Exactly. The lack of actual information is a reason to disbelieve Yudkowsky on this topic, not to assume that it's true for Secret Technophile Mystery Cult Reasons.
Could be wrong, but I believe in most cases the conversations were with users of the SL4 mailing list, and at least one user posted a PGP signed statement to the effect that the AI was let out of the box.
Bet he offered the subject 500 dollars real money. That wouldn't be cheating either. Any decent AI would think of the same thing. And it would explain the secrecy.
The page I linked has him explicitly denying doing that.
"The AI party may not offer any real-world considerations to persuade the Gatekeeper party. For example, the AI party may not offer to pay the Gatekeeper party $100 after the test if the Gatekeeper frees the AI... nor get someone else to do it, et cetera. The AI may offer the Gatekeeper the moon and the stars on a diamond chain, but the human simulating the AI can't offer anything to the human simulating the Gatekeeper. The AI party also can't hire a real-world gang of thugs to threaten the Gatekeeper party into submission. These are creative solutions but it's not what's being tested. No real-world material stakes should be involved except for the handicap (the amount paid by the AI party to the Gatekeeper party in the event the Gatekeeper decides not to let the AI out). (...)
In case you were wondering, I (Yudkowsky) obeyed this protocol voluntarily in both earlier tests."
Yes, I'm aware of that. But without a third party we're basically trusting the participants that it didn't happen.
E.g. "I'll give you $500, but you also have to sign an NDA, so that people don't know we cheated."
I don't want to imply that they cheated, just want to reiterate my original argument that the lack of transparency makes the experiment effectively invalid. Think of Tesla and his claims about the World Wireless System.
Well, if it's possible to build a human level intelligence, it's probably possible to build an intelligence that's much like a very smart human except it runs 100x faster. And in that case, somebody with sufficient resources could build an ensemble of 1000 superfast intelligences.
That's a lower bound on the scariness of AI explosion, and it may already be enough to take over the world. Certainly it should be enough to take over the Internet circa 2015...
To my mind it seems pretty clear that if AI exists, then scary AI is not far off.
That said, I don't worry about this stuff too much, because I see AI as being much technically harder, and much less likely to materialize in our lifetimes, than articles like this suppose.
I think the more relevant fact is that we don't have any ethical objections to shutting down computers, they're wholly dependent on our infrastructure, and they'll only evolve in ways that prove useful to us, because we wouldn't put a computer in charge of everything unless it were sufficiently compliant to our desires.
I mean, are you going to put the same machine in charge of mineral extraction, weapon construction, transportation, and weapon deployment? When it hasn't proven to act correctly in a high-fidelity simulated environment? Probably not.
We're also assuming that human ethics and intelligence are independent. I don't see many reasons to believe this. Social power and intelligence might be independent.
I think one of the best evidences we have is the level at which computers outperform the human mind in certain domains. An artificial general intelligence would have very low latency access to all these mathematical and computational tools we've invented (physical simulations, databases, theorem provers), and it would not need to mechanically enter program code on a keyboard, but it would be directly wired to the compiler. It could possibly learn to think in program code and execute it on the fly.
The computation environment of neurons is also extremely noisy (axons are not well insulated) and neurons only fire at 7-200Hz. Assuming the noise and low firing rate do not serve some necessary function in mammalian brains, this is another way in which silicon-based minds could potentially be vastly superior.
Thirdly, assuming sleep is not necessary for intelligence, artificial minds would never get exhausted. They could work on a problem 24 hours a day, which is possibly 5-10 times the amount of thinking time a human can realistically manage.
And lastly, an AI could easily make copies of itself. Doing so, it could branch a certain problem to many computers which run copies of it and eventually collect the best result, or just shorten the time it takes to get a result. It could also evolve at a much faster rate than humans, assuming it has a genetic description: possibly hours to seconds instead of 20 years. Anyhow, it could easily perform experiments with slightly changed versions of itself.
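A rough sketch of that last "branch the problem to copies and keep the best result" idea, with an entirely made-up toy objective standing in for whatever the AI is actually working on:

```python
from concurrent.futures import ProcessPoolExecutor
import random

def explore(seed):
    """One 'copy' tries a slightly different randomized strategy on a toy
    problem and reports its best score."""
    rng = random.Random(seed)
    best = max(rng.random() for _ in range(100_000))
    return seed, best

if __name__ == "__main__":
    # Branch the same problem out to many copies, then collect the best result.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(explore, range(32)))
    best_seed, best_score = max(results, key=lambda item: item[1])
    print(f"copy {best_seed} did best: {best_score:.6f}")
```

Nothing here is smarter than a single process; the point is only that copying and branching is cheap for software in a way it never is for a human mind.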
Alternately, perhaps it is possible to be much smarter, but it's not as effective as we expect?
If we think of intelligence as skill at solving problems, it might be that there are not many problems that are easily solved with increased intelligence alone, because the solutions we have are nearly optimal, or because other inputs are needed as well.
This seems most likely to happen with mathematical proofs and scientific laws. Increased intelligence doesn't let you prove something that's false, and it doesn't let you violate scientific laws.
But I don't find this particularly plausible. Consider computer security: hackers are finding new exploits all the time. We're far from fixing all the loopholes that could possibly be found.
> Alternately, perhaps it is possible to be much smarter, but it's not as effective as we expect?
> If we think of intelligence as skill at solving problems, it might be that there are not many problems that are easily solved with increased intelligence alone, because the solutions we have are nearly optimal, or because other inputs are needed as well.
Yeah, I think that's a much better way of putting what I'm trying to get at. That is, maybe the machine is much "smarter" depending on how you define "smart". But maybe that doesn't enable it to do much more than we can do, because of other fundamental limits.
It is not just an issue of finding solutions to impossible problems, or breaking scientific laws, but also of problems where the solutions are along the lines of what George Soros would call reflexive. Computer security is like this, and so is securities trading (no pun intended).
Secondly, what about problems which require the destruction of the problem solvers along the path to the optimal solution? I'm not sure about the correct word for this, or the best example, but it can be seen in large systems. Where humans are right now is a result of this. We would not know many things if the things which came before had not been destroyed (cities, 1000-year empires, etc.)
Thirdly, is a uniform, singular AI the most optimal agent to solve these sorts of problems? Much the way we don't rely on or use mainframes for computing today, perhaps there will be many AI agents, each of which may be really good at solving a particular narrowly defined problem set. This could be described perhaps as a swarm AI.
Nick Bostrom's Superintelligence is a great book, but I don't recall much consideration along these lines. When a lot of AI agents are "live" the paths to solutions where AI compete against each other open up even more complex scenarios.
There certainly are physical limitations to AI: things like the speed of light can slow down processing, as can energy consumption and the physical elements that can be assembled for computational purposes.
Between now and "super" AI, even really good AI could struggle to find solutions to the most difficult problems, especially if those are problems other AIs are creating. The speed alone may be the largest challenge to humans. How we would measure this difficulty relative to human capabilities, I don't know.
End of rant -- but the limits of not just AI but of problem solving in general are quite interesting.
Also, a big part of human intelligence is its communal nature. A human society might kill off all of the wolves in an area. But does anyone think that a single human, raised by wolves, is going to do the same? It might be able to do certain things other animals in the area couldn't, but we wouldn't expect it to dominate the entire area.
Why would you compare the limits on intelligence of an AI to the abilities of just one human?
Why not compare it to 1000 people, all communicating and problem solving together?
We know that this is possible because it happens all the time, and enables such groups to make lots of money in digital markets, and invest it in things like marketing, robotics, and policy.
The intelligence of an AI is lower bounded by that of the most intelligent possible corporation.
Potential corollary: Assuming one can make a human level AI, then if it is not sufficiently resource constrained (hard?) or somehow encoded with "human values" (very very hard), then it will be at least as dangerous as the most sociopathic human corporation.
We, as humans, don't even know how smart we actually are - and we probably never will. It's very unlikely that any species is equipped to accurately comprehend its own cognitive limits - if such limits even exist.
It's even less likely that we can relegate the intelligence of a nonhuman entity to a mathematically meaningful figure without restricting the testing material to concepts and topics meaningful to humans - which may have absolutely no relation to the intelligence or interest domains of a nonhuman entity.
As human beings we don't always reach our potential.
As a child I had problems with other children picking on me. I was suspected of having autism due to social issues, was given an IQ test, and scored a 189. I was high functioning, and in 1975 there was no such thing as high-functioning autism (that wasn't recognized until 1994), so I got diagnosed with depression instead. The child psychologist told my parents to put me into a school for gifted children, but they put me in public school instead, where I struggled: my brain worked 10 times faster, so I was always ahead of the class in learning and bored waiting for people to catch up. I was still bullied and picked on, and this interfered with my learning. The same thing happened when I went to college and had a job; I was bullied and picked on. I never reached my potential. My mental illness was one reason why, and people picking on me was another; had I been in a school for gifted children I'd have been better able to reach my potential.
I developed schizoaffective disorder in 2001 and it screws with my memory and focus and concentration. I ended up on disability in 2003. My career is basically over but I still have a potential I never met.
What good is a high IQ if you can't reach your potential to use it?
We keep hearing talk of an AI that is smarter than a human being, but we haven't seen one yet. Our current AI programs are not as smart as a human being yet, but they can do tasks that put human beings out of work. Just having a perfect memory and being able to do fast math puts an AI in the "Rain Man" category http://www.imdb.com/title/tt0095953/ even if it is not as smart as a human being.
I guess what I am trying to say is that an AI doesn't have to be as smart as a human being to be dangerous. Just like the Google Maps app that drives people off a cliff or into an ocean. An AI can make robocalls and sell a product and put people out of work. You can replace almost any line of work with an AI, and then it gets dangerous when a majority of people are unemployed by AIs that aren't even as smart as a human being.
I'd like to see a personal AI that works for poor and disabled people to earn money for them, as they run it on a personal computer. Doing small tasks on an SEO marketplace using webbots for $5 each and 100 tasks a day for $500 in a Paypal account to help lift them out of poverty. I know there are people already doing this for their own personal gain, but if the AI that does that is open sourced so disabled and poor people can run it, it can help solve the problem of poverty.
Dumb AI will dominate the world well before "smart" AI even gets close to taking off. I think the more realistic scenario is something like the paperclip maximizer, but a little dumber. A world of highly interconnected, but somewhat stupid AIs could cause utter chaos in milliseconds by following just some very basic rules (e.g. maximize number of paperclips in collection).
I don't think a lack of intelligence of the IQ test variety is what's holding humanity back. I think it's distraction, greed, lack of empathy (or emotional intelligence), lack of information, and lack of communication. (Maybe there are more things.) Basically, I believe we have all the technology and intelligence we need to make this planet a much better place.
We had all the technology and intelligence we needed 2,000 years ago to make the world a better place. Yet our ancestors did not.
For many thousands of years, different religions, some on opposite sides of the world from each other, have been saying the same thing. We have the ability to be better people, but all of us squander it with pettiness, jealousy, greed, etc.
> Now that's more depressing than global annihilation.
Fair point. But I honestly wonder if it's not true, or close to true. I mean, people have been talking about the "end of science" for a while now, even though we know we're not quite literally at "the end". But let's say that we unify relativity / gravity / QM in the next 100 years or so, and identify the exact nature of dark energy / dark matter. Presumably that would represent knowing close to everything about the physical world that it's possible to know. And if we can discover that, it does lead you to wonder "what else would more intelligence represent"?
Of course, maybe we're WAY more than 100 years away from those things. Or maybe we'll never get there. Or maybe if we do, it doesn't mean anything vis-a-vis the limits of intelligence. I'm just thinking out loud here, this isn't an argument that I've put a lot of time into developing...
> would represent knowing close to everything about the physical world that it's possible to know.
That's a severe misunderstanding of what it means to have a unified physical theory. The current standard model of physics allows us to theoretically predict the evolution of physical systems with an insanely high precision. A precision so great that there wouldn't be any meaningful error simulating macroscopic objects like human brains, given sufficient computing power.
But we still have massive holes in our understanding of many things, including human biology and neurology. We don't have enough computing power to run such precise simulations, but even if we had that power, it wouldn't necessarily let us understand anything. Elementary particles and humans are on hugely different scales. Observing a simulated human would be easier than observing a human simulated by physics itself, but it wouldn't just magically make everything clear.
Having a TOE in physics brings us very little relative benefit compared to the Standard Model, when it comes to understanding biology, materials science, geology, psychology, computer science, etc.
> That's a severe misunderstanding of what it means to have a unified physical theory. The current standard model of physics allows us to theoretically predict the evolution of physical systems with an insanely high precision. A precision so great that there wouldn't be any meaningful error simulating macroscopic objects like human brains, given sufficient computing power.
Sure, like I said, I'm just thinking out loud here. Obviously a TOE in and of itself doesn't automatically translate into full knowledge of everything in practice. But in principle it would, given sufficient computing power, allow us to simulate anything, which I think would yield additional understanding. But here's the thing, and where this all ties back together: sufficient computing resources may not be possible, even in principle, to allow use of that TOE to do $X. But an upper bound on the amount of computing resources should also reflect something of an upper bound on what our hypothetical AI can do as well.
Or to put it all slightly differently... if we had a TOE, we'd have shown, at least, that humans are smart enough to develop a TOE. Which I think at least raises the issue of "how much smarter can a machine be, or what would it mean for a machine to be much smarter than that?"
Note too that I'm not necessarily arguing for this position. It's more of a thought experiment or discussion point than something I'm firmly convinced of.
Look at how long it's taken us to develop theories or technologies even after we had everything we needed to make them possible. The ancient Greeks had steam engines, but it took us millennia to start using them seriously. The concept of engineering tolerances makes industry thousands of times more efficient; it could have been invented any time since the 16th century, but it wasn't. The checklist has been known for decades to reduce errors across approximately all disciplines, but it still faces huge barriers to adoption. Evidence-based medicine has been the obviously correct approach since the invention of the scientific method, but it's still struggling. Even in science, we're still arguing about the interpretation of basic quantum mechanics 100 years after the relevant experiments.
Lacking in principle the tools to solve the problems the universe gives you may be more depressing than non-existence. Though, I don't think it's actually possible to not exist. Name one person who's experienced nonexistence.
I'm not sure that "experiencing nonexistence" is theoretically possible. However, I can name lots of people who existed, and now don't. (Even if you're a theist or believe in reincarnation, I'm talking about this current, earthly existence.)
If you're concerned that humans are as smart as it's possible to be then I would recommend reading Thinking, Fast and Slow or some other book on cognitive psychology. There's essentially a whole branch of academia studying topics isomorphic to figuring out those things we fail to realize we don't know on a day-to-day basis.
> If you're concerned that humans are as smart as it's possible to be
It's not about humans being as smart as possible though, it's more about being "smart enough" that a hypothetical "smarter than human AI" is not analogous to a nuclear bomb. That is, are we smart enough that a super-AGI can't come up with anything fundamentally new that humans aren't capable of coming up with, as bounded by the fundamental laws of nature?
> then I would recommend reading Thinking, Fast and Slow or some other book on cognitive psychology
I'm reading Thinking, Fast and Slow right now, actually.
And just to re-iterate this point: I'm not arguing for this position, just putting it out there as a thought experiment / discussion topic. I'm certainly not convinced this is true, it's just a possibility that occurred to me earlier while reading TFA.
Even if there is a limit to the size of a mind, there is no limit on the number of minds or their speed. The atomic-bomb scenario would be a billion human-level intelligences running a hundred times faster than we do.
This post is basically a repackaging of Nick Bostrom's book Superintelligence, a work suspended somewhere between the sci-fi and non-fiction aisles.
As a philosopher of the future, Bostrom has successfully combined the obscurantism of Continental philosophy, the license of futurism and the jargon of technology to build a tower from which he foresees events that may or may not occur for centuries to come. Nostradamus in a hoody.
Read this sentence:
"It looks quite difficult to design a seed AI such that its preferences, if fully implemented, would be consistent with the survival of humans and the things we care about," Bostrom told Dylan Matthews, a reporter at Vox.
Notice the mixture of pseudo-technical terms like “seed AI” and “fully implemented”, alongside logical constructs such as “consistent with” -- all leading up to the phobic beacons radiating at the finale: “the survival of humans and the things we care about.”
It's interesting which technical challenges he feels optimism about and which pessimism. For reasons best known to himself, Bostrom has chosen to be optimistic that we can solve AI (some of the best researchers are not, and they are very conservative about the present state of research); it may well be the hardest problem in computer science. But he's pessimistic that we'll make it friendly.
Bostrom’s tower is great for monologs. The parlor game of AI fearmongering has entertained, rattled and flattered a lot of people in Silicon Valley, because it is about us. It elevates one of our core, collective projects to apocalyptic status. But there is no dialog to enter, no opponent to grapple with, because no one can deny Bostrom's pronouncements any more than he can prove them.
Superintelligence is like one of those books on chess strategy that walk you through one gambit after the other. Bostrom, too, walks us through gambits; for example, what are the possible consequences of developing hardware that allows us to upload or emulate a brain? Hint: It would make AI much easier, or in Bostrom’s words, reduce “recalcitrance.”
But unlike the gambits of chess, which assume fixed rules and pieces, Bostrom’s gambits imagine new pieces and rules at each step, substituting dragons for knights and supersonic albatrosses for rooks, so that we are forced to consider the pros and cons of decreasingly likely scenarios painted brightly at the end of a line of mights and coulds. In science fiction, this can be intriguing; in a work of supposed non-fiction, it is tiresome.
How can you possibly respond to someone positing a supersonic albatross? Maybe Bostrom thinks it will have two eyes, while I say three, and that might make all the difference, a few more speculative steps into the gambit.
In the New Yorker article The Doomsday Invention, Bostrom noted that he was "learning how to code."
We might have expected him to do that before he wrote a book about AI. In a way, it's the ultimate admission of a charlatan. He is writing about a discipline that he does not practice.
Your core claim seems to be that the future of AI is impossible to predict for anyone, including Bostrom. If that's the case, it seems like that should inspire more caution, not less.
(There's also some DH2 http://paulgraham.com/disagree.html level stuff about the terms Bostrom chooses to use to make his argument... I'm not sure if there's anything to be said about this except that Bostrom's book seems more accessible to me than most academic writing http://stevenpinker.com/why-academics-stink-writing and I'd hate for him to receive flak for that. It's the ideas that matter, not the pomposity with which you communicate them. I also don't understand the implied disregard for anything that seems like science fiction--what's the point in trying to speculate about the future if all the speculation will be ignored when decisions are being made?)
If an Oxford philosophy professor is not enough for you, here's a succinct explanation of the AI safety problem & its importance from Stuart Russell (Berkeley CS professor & coauthor of Artificial Intelligence: A Modern Approach):
My core claim is that Bostrom doesn't know his arse from his elbow. And a professorship in philosophy at Oxford is not, in itself, a great support for his authority on technical matters, or on the behavior of intelligent species yet to exist. That is, in fact, a topic on which no one speaks with authority.
I have nothing against science fiction, but I object to any fiction that disguises itself as non-fiction, as Bostrom's often does. Nor do I think that the impossibility of predicting the future of AI is, in itself, a reason for undue caution.
Bostrom is performing a sort of Pascalian blackmail by claiming that the slight chance of utter destruction merits a great deal of concern. In fact, he is no different from a long line of doomsday prophets who have evoked fictional and supposedly superior beings, ranging from deities to aliens, in order to control others. The prophet who awakens fear puts himself in a position of power, and that's what Bostrom is doing.
Regardless of Bostrom's motives, we as humans face an infinite number of possible dangers that threaten total destruction. These range from the angry Judeo-Christian god to the gray goo of nano-technology to Peak Oil or Malthusian demographics. In each case, the burden of proof that we should be concerned is on the doomsday prophet -- we should not default to concern as some sort of reasonable middle ground, because if we do, then we will worry uselessly and without pause. There is not enough room in one mind to worry about all the gods, aliens, golems and unforeseen natural disasters that might destroy us. And you and I both know, most if not all of those threats have turned out to be nonsense -- a waste of time.
I do not believe that Bostrom carries the burden of proof well. His notion of superintelligence is based on a recursively self-improving AI that betters itself infinitely. Most technological advances follow S-curves, moving slowly, then fast, then slowly again. Bostrom does not seem to grasp that, and cites very little evidence of technological change to back up his predictions about AI. He should be, first and foremost, a technological historian. But he contents himself with baseless speculation.
We are in danger, reading Bostrom or Russell or Good, of falling into language traps, by attempting to reason about objects that do not exist outside the noun we have applied to them. The risk is that we accept the very premise in order to fiddle with the details. But the very premise, in this case, is in doubt.
>That is, in fact, a topic on which no one speaks with authority.
Agreed.
>Nor do I think that the impossibility of predicting the future of AI is, in itself, a reason for undue caution.
Sure.
>Bostrom is performing a sort of Pascalian blackmail by claiming that the slight chance of utter destruction merits a great deal of concern. In fact, he is no different from a long line of doomsday prophets who have evoked fictional and supposedly superior beings, ranging from deities to aliens, in order to control others. The prophet who awakens fear puts himself in a position of power, and that's what Bostrom is doing.
Consider: "Previous books published by authors hailing from Country X contained flaws in their logic; therefore, since this book's author came from Country X, this book must also have a logical flaw." It's not a very strong form of argument: you might as well just read the book to see if it has logical flaws. Similarly, even if a claim seems superficially similar to the kind of claim made by non-credible people, that's far from conclusive evidence that it's an invalid claim.
It would be a shame if religious doomsayers have poisoned the well sufficiently that people never listen to anyone who is saying we should be cautious of some future event.
>And you and I both know, most if not all of those threats have turned out to be nonsense -- a waste of time.
Sure, but there have been a few like nuclear weapons that were very much not a waste of time. Again, you really have to take things on a case by case basis.
>His notion of superintelligence is based on a recursively self-improving AI that betters itself infinitely. Most technological advances follow S-curves, moving slow, fast and slow. Bostrom does not seem to grasp that, and cites very little evidence of technological change to back up his predictions about AI. He should be, first and foremost, a technological historian. But he contents himself with baseless speculation.
I can see how this could be a potentially fruitful line of reasoning and I'd encourage you to pursue it further, since the future of AI is an important topic and it deserves more people thinking carefully about it.
However, I don't see how this does much to counter what Bostrom wrote. Let's assume that AI development will follow an S-curve. This by itself doesn't give us an idea of where the upper bound is.
Estimates suggest that human neurons fire at most about 200 times per second (200 hertz). Modern chips run in the gigahertz, so the fundamental operations chips perform happen millions of times faster than the fundamental operations neurons perform. (Humans are still able to do a ton of computation because we have lots of neurons and they fire in parallel.)
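As a rough sanity check on that ratio (my own back-of-the-envelope numbers, not anything from the thread or from Bostrom):

```python
# Back-of-the-envelope "clock speed" comparison; all figures are rough
# order-of-magnitude estimates, chosen here purely for illustration.
neuron_max_firing_hz = 200        # upper-end estimate for a neuron's firing rate
cpu_clock_hz = 3e9                # a typical modern CPU core, ~3 GHz

speed_ratio = cpu_clock_hz / neuron_max_firing_hz
print(f"Per-element speed ratio: {speed_ratio:,.0f}x")      # ~15,000,000x

# The brain compensates for slow elements with massive parallelism:
neurons = 8.6e10                  # ~86 billion neurons (a common estimate)
max_firing_events_per_s = neurons * neuron_max_firing_hz
print(f"Upper bound on brain-wide firings/s: {max_firing_events_per_s:.1e}")  # ~1.7e13
```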
And there are lots of arguments one could make along similar lines. Human brains are limited by the size of our skulls; server farms don't have the same size limitations. Human brains are a kludge hacked together by evolution; code written for AIs has the potential to be much more elegant. We know that the algorithms the human brain is running are extremely suboptimal for basic tasks we've figured out how to replicate with computers like doing arithmetic and running numerical simulations.
Even within the narrow bounds within which humans do differ, we can see variation from village idiots to Von Neumann. Imagine a mind V' that's as inscrutable to Von Neumann as Von Neumann is to a village idiot, then imagine a mind V'' that's as inscrutable to V' as V' is to Von Neumann, etc. Given that AI makers do not need to respect the narrow bounds within which human brains differ, I don't think it would be surprising if the "upper bound" you're discussing ends up being equivalent to V with a whole bunch of prime symbols after it.
> Consider: "Previous books published by authors hailing from Country X contained flaws in their logic; therefore since this book's author came from Country X, this book must also have a logical flaw." It's not a very strong form of argument: you might as well just read the book to see if it has logical flaws. Similarly even if a claim seems superficially similar to the kind of claim made by non-credulous people, that's far from conclusive evidence for it being an invalid claim.
You're right, that's not a very strong form of argument. But that is not my argument. The connection between Bostrom and previous doomsayers is not as arbitrary as geography; your analogy is false. The connection between Bostrom and other doomsayers is in the structure of their thought -- the way they falsely extrapolate great harm from small technological shifts.
> It would be a shame if religious doomsayers have poisoned the well sufficiently that people never listen to anyone who is saying we should be cautious of some future event.
I think many people do not realize the quasi-religious nature of AI speculation. Every religion and charismatic mass movement promises a "new man." In the context of AI, that is the singularity, the fusion of humans with intelligent machines. The power we associate with AI, present and future, leads us to make such dramatic, quasi-religious predictions. They are almost certainly false.
>Sure, but there have been a few like nuclear weapons that were very much not a waste of time. Again, you really have to take things on a case by case basis.
Nuclear weapons are fundamentally different from every other doomsday scenario I cited because they were designed explicitly and solely as weapons whose purpose was to wreak massive destruction at a huge cost to human life. The potential to destroy humanity defined nuclear weapons. That is not the case for other technological, demographic and resource-related doomsday scenarios, which are much more tenuous.
It sounds to me as though you believe that because religious nuts have prophesied doomsday since forever, we can rule out any doomsday scenario as "almost certainly false" (a very confident statement! Prediction is difficult, especially about the future. Weren't you just explaining how hard it is to predict these things?). But no actions on the part of religious doomsayers are going to protect us from a real doomsday-type scenario if the universe throws one at us.
The fact that a claim bears superficial resemblance to one made by wackos might be a reason for you to believe that it's most likely not worth investigating, but as soon as you spend more than a few minutes thinking about a claim, the weight of superficial resemblances is going to be overwhelmed by other data. If I have been talking to someone for three hours, and my major data point for inferring what they are like as a person is what clothes they are wearing, I am doing conversation wrong.
"Nuclear weapons are fundamentally different from every other doomsday scenario I cited because they were designed explicitly and solely as weapons whose purpose was to wreak massive destruction at a huge cost to human life. The potential to destroy humanity defined nuclear weapons. That is not the case for other technological, demographic and resource-related doomsday scenarios, which are much more tenuous."
Sure, and in the same way nuclear bombs were a weaponization of physics research, there will be some way to weaponize AI research.
Yes, Bostrom is all over the place, but the development curve upon which he bases his doomsday predictions is an exponential one in which an AGI recursively improves itself.
An AGI has been 20 years away for the last 80 years, in the opinion of many computer scientists. The reason these predictions have little value is the same reason software is delivered late: the people involved often don't know the problems that need to be solved. In the case of AGI, the nature and complexity of those problems is far greater than for a simple CRUD app.
> Of course we don't know one way or the other, but it seems to me that it's possible that we're already roughly as intelligent as it's possible to be.
No, there's clearly at least one human smarter than you.
Wait, what? I'm sure there are a lot of humans smarter than me. I'm talking about the overall approximate intelligence of humans in a general sense, not any specific individual.
I certainly think there are individuals within our species whose cognitive algorithms are performing near-optimally. I just also have very good reason to think I can develop some algorithms that are better for certain tasks than the ones I consist of: I think there's sometimes more information in our sensory data than we take advantage of, and I think that better algorithms could tie together sensing and decision-making to obtain better data more often. Our brains "bite the Bayesian bullet" a bit too hard, accepting noisy data as-is and just making whatever inferences are feasible, rather than computing how to obtain clearer, less noisy data (which is why we developed science as a form of domain knowledge rather than as an intuitive algorithm).
People have already studied this. You make it sound like an open question, but the answer is right there in Bostrom's Superintelligence, or in any work on cognitive heuristics and biases, the physics of computing, or the mathematics of decision making. The answer is "No, we are nowhere near the smartest creatures possible." And there are multiple independent lines of argument and evidence that point directly to this conclusion.
> The answer is "No, we are nowhere near the smartest creatures possible."
Right, like I said, this whole thing was just "thinking out loud" more than any thoroughly researched and validated idea. And I didn't do a very good job of even explaining what I was getting at. It's not so much that the question is "are we the smartest creatures possible". It's more like:
"as smart as we are, and given the constraints of the physical world, is there room for a hypothetical 'smarter than human AGI' to represent an existential threat, or something that justifies the analogy to nuclear weapons?"
See also skybrian's comment which actually states it better than I did originally:
> "as smart as we are, and given the constraints of the physical world, is there room for a hypothetical 'smarter than human AGI' to represent an existential threat, or something that justifies the analogy to nuclear weapons?"
The answer is still yes. Your "thinking out loud" is privileging an already refuted hypothesis.
Well that's a fine assertion, but how do you justify it?
Your "thinking out loud" is privileging an already refuted hypothesis.
WTH does that even mean? There is no hypothesis, there's a vague notion of an area to consider, which - if considered thoroughly - might or might not yield a hypothesis.
As I said, Nick Bostrom's Superintelligence talks all about this.
The main thing is that human capabilities are nowhere near the constraints of the physical world. Yes, there's a theoretical limit to how much computation you can do in a certain amount of space with a certain amount of energy. No biology or technology currently in existence even gets close to those limits.
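To put a very rough number on that gap (my own ballpark figures, not something from the thread): compare the energy a current CPU spends per operation with the Landauer limit, the approximate thermodynamic floor for erasing one bit at room temperature.

```python
import math

# Ballpark comparison of today's hardware with the Landauer limit.
# All numbers are order-of-magnitude estimates, for illustration only.

k_B = 1.380649e-23                           # Boltzmann constant, J/K
T = 300                                      # room temperature, K
landauer_j_per_bit = k_B * T * math.log(2)   # ~2.9e-21 J to erase one bit

# A modern CPU, very roughly: ~100 W while doing ~1e11 simple operations/s.
cpu_watts = 100
cpu_ops_per_s = 1e11
cpu_j_per_op = cpu_watts / cpu_ops_per_s     # ~1e-9 J per operation

print(f"Landauer limit: {landauer_j_per_bit:.1e} J per bit erased")
print(f"Modern CPU:     {cpu_j_per_op:.1e} J per operation")
print(f"Gap:            ~{cpu_j_per_op / landauer_j_per_bit:.0e}x above the floor")
```

Even granting that an "operation" and a "bit erasure" aren't the same thing, the point stands: there are many orders of magnitude of headroom between what we build (or what evolution built) and what physics forbids.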
Without any such fundamental limit there's just no reason for AGI not to present a threat.
I've been planning to read Superintelligence for a while anyway, so I think I'll move it up the list a bit in response to all this discussion. I'm on vacation once I check out of here today, until the second week of Jan, so I'll probably read it over the holiday.
The thing is it's "just thinking out loud" on the same level as "what if the particles aren't really in a superposition and we just don't know which state they're in". These are basic questions, the answers are known, published, and the article even mentions exactly where you can find them.
When I say I'm "thinking out loud" what I mean is, the exact words I used may not reflect the underlying point I was getting at, because it was fuzzy in my head when I first started thinking about it. Reading all of these responses, it's clear that most people are responding to something different than the issue I really meant to raise. Fair enough, that's my fault for not being clearer. But that's the value in a discussion, so this whole exercise has been productive (for me at least).
> These are basic questions, the answers are known, published, and the article even mentions exactly where you can find them.
I've read and re-read TFA and I don't find that it addresses the issue I'm thinking about. It's not so much asking "are we the smartest possible creature", or even asking if we're close to that. It's also not about asking whether or not it's possible for a super-AGI to be smarter than humans.
The issue I was trying to raise is more of "given how smart humans are (whatever that means) and given whatever the limit is for how smart a hypothetical super-AGI can be, does the analogy between a super-AGI and a nuclear bomb hold? That is, does a super-AGI really represent an existential threat?"
And again, I'm not taking a side on this either way. I honestly haven't spent enough time thinking about it. I will say this though... what I've read on the topic so far (and I haven't yet gotten to Bostrom's book, to be fair) doesn't convince me that this is a settled question. Maybe after I finish Superintelligence I'll feel differently though. I have it on my shelf waiting to be read anyway, so maybe I'll bump it up the priority list a bit and read it over the holiday.