Furthermore, there is little or no defense against a full-scale nuclear attack, but a benevolent AI should be a sufficient defense against a hostile AI.
I think the true fear is that in an AI age, humans are not "useful" and the market and economy will look very different. With AI growing our food, clothing us, building us houses, and entertaining us, humans don't really have anything to do all day.
"Hackers aren't a problem because we have cybersecurity engineers". And yet somehow entire enterprises and governments are occasionally taken down.
What prevents issues in red-team/blue-team dynamics is having teams invested in the survivability of the people their organization works for. That breaks down a bit when all it takes is one biomedical researcher whose wife just left him to have an AI help him craft a society-ending infectious agent. Force multipliers are somewhat tempered when put in the hands of governments, but not so much with individuals. That is why individuals are allowed to own firearms in some countries and not in others, but in no country are individuals legally allowed to possess or manufacture WMDs. Because if everyone had equal and easy access to WMDs, advanced civilization would end.
I mean, hackers aren’t such a problem that we restrict developing new internet apps to licensed professionals by executive order, so that kinda proves the parent poster’s point?
The difference is the scale at which a hacker can cause damage. A hacker can ruin a lot of stuff but is unlikely to kill a billion or two people if he succeeds at a hack.
With superintelligent AI, you likely need every deployed use and every end user to get it right, airtight, every time, forever.
Yes, but the AI that is watching what every biomedical researcher is doing will alert the AI counselors and support robots, and they will take the flawed human into care. Perhaps pair them up with a nice new wife.
Can you imagine how much harder it would be to protect against hackers if the only cybersecurity engineers were employed by the government?
And the best path to a benevolent AI is to do what? The difficulty here is that making an AGI benevolent is harder than making an AGI with unpredictable moral values.
Do we have reason to believe that giving the ingredients of AGI out to the general public accelerates safety research faster than capabilities research?