The human brain is limited by its slow speed, by the amount of cortical mass you can fit inside a human skull, and by the length of human lifetimes. Computers will not have any of those limitations.
Right, but the idea I'm playing around with is this: suppose you had a hypothetical creature with a brain 20x as fast as the human brain and with twice the volume. How much smarter would that creature be in practice? It's kind of an abstract idea (and I probably don't fully understand it myself), but I'm getting at something like "is there a point where that additional raw computing power just doesn't buy you anything meaningful?", at least in terms of "does it represent an existential threat?" or "does the nuclear analogy hold?"
You could try looking at existing animal neuron counts: https://en.wikipedia.org/wiki/List_of_animals_by_number_of_n... Doublings or 20xings get you a long way: cutting cerebral cortex neuron count by a factor of 20 takes you from human to... horse. I like horses a lot, but there's clearly a very wide gulf in capabilities, and I don't like the thought of ever encountering someone who is to us as we are to horses.
I don't know. If you dropped a human infant into a den of bears (more or less the equivalent situation), I don't think it would be the bears who would be at a disadvantage. So even if we were able to create an AI as far above us as we are above bears (a pretty huge if), it hardly seems certain that it would suddenly (or ever) dominate us.
But we do dominate bears. They continue to exist only at our sufferance; we tolerate them for the most part (though we kill them quickly if they ever threaten us), but we could wipe them out easily if we wanted to. We probably won't do it deliberately, but we do drive species to extinction when they're in the way of our resource extraction - there are estimates that 20% of extant species will go extinct as a result of human activity, and that's with us deliberately trying not to cause extinctions!
(There are more technical arguments that an AI's values would be unlikely to be the complex mishmash that human values are, so such an AI would be very unlikely to share our sentimental desire not to drive species extinct.)
But that's the point. A human society dominates bears. A solitary human, raised by bears, wouldn't. So assuming that a solitary AI, "raised" by humans, would somehow be able to conquer us is a pretty problematic assumption (on top of a string of other pretty problematic assumptions).
An AI would likely be able to scale and/or copy itself effortlessly. A hundred clones of the same person absolutely would dominate bears, even if they'd been raised by them.
Think of it like Amdahl's law vs Gustafson's law. Maybe a field like calculus is a closed problem - there's not much more to solve there. But a computer can discover new theorems and proofs that it would take a human two or three decades of study just to reach the point of discovering. Consider that a computer doesn't just have the ability to do what humans do faster; it has the ability to solve problems beyond human scale.
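To make the analogy concrete, here's a minimal sketch (my own illustration, with an assumed parallel fraction p = 0.9 and raw speedup factor s): Amdahl's law assumes a fixed problem, so the serial part caps the gain no matter how much compute you add, while Gustafson's law assumes the problem grows with the compute, so the gain keeps scaling.

    # Rough sketch of Amdahl's vs Gustafson's law (illustrative only).
    # p = fraction of the work that benefits from extra compute
    # s = raw speedup factor of that fraction (e.g. "20x faster brain")

    def amdahl_speedup(p: float, s: float) -> float:
        """Fixed problem size: the serial part limits the overall gain."""
        return 1.0 / ((1.0 - p) + p / s)

    def gustafson_speedup(p: float, s: float) -> float:
        """Scaled problem size: extra compute keeps paying off."""
        return (1.0 - p) + p * s

    if __name__ == "__main__":
        for s in (2, 20, 200):
            print(f"s={s:>3}: Amdahl={amdahl_speedup(0.9, s):6.2f}  "
                  f"Gustafson={gustafson_speedup(0.9, s):6.2f}")

With p = 0.9, a 20x speedup gets you only about 6.9x under Amdahl but about 18x under Gustafson - the point being that if a smarter mind can take on bigger problems rather than just finishing our problems faster, the raw power doesn't stop buying you anything.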