Allowing AI research to continue past its current point is probably going to be the worst decision humanity ever makes.
I'm not even sure how we could stop it, but we should really be passing laws right now about black-box algorithms, where a training algorithm is used to generate the output. For some reason everyone just thinks we should rush forward into this, unconcerned about superhuman AI.
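To make that concrete, here is a minimal, purely illustrative sketch (toy data, hypothetical numbers) of the kind of system I mean: the output is generated by a training algorithm rather than written by hand. The decision rule lives entirely in learned weights; nothing in the source code says why a given input gets accepted, which is exactly what makes drafting a law around this so hard:

    # Illustrative only: a tiny learned classifier as a "black box".
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: 100 inputs with 3 features, labels from a hidden rule.
    X = rng.normal(size=(100, 3))
    y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)

    # The "training algorithm": logistic regression by gradient descent.
    w = np.zeros(3)
    for _ in range(1000):
        p = 1 / (1 + np.exp(-(X @ w)))     # current predictions
        w -= 0.1 * X.T @ (p - y) / len(y)  # nudge weights to reduce error

    # The behavior is now encoded in w, not in any human-written rule.
    def decide(x):
        return 1 / (1 + np.exp(-(x @ w))) > 0.5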
Whether it is a good or bad actor doesn't even matter. Giving up control to a non-human entity is the worst idea humanity has ever had. We will end up in a zoo either way.
Why? Let's face facts here: humans can't do a lot of things. There are so many useful things that AIs could do, from space exploration to cheap food and housing, to deep-sea operations that humans can never hope to do.
General AI will be a massive advance for our economy, for our culture, for science, for the military, for...
There are a lot of things humans want to do but can't, mostly because of the limitations of the human body and/or brain. Taking risks our bodies don't allow for: being abandoned on Mars with little equipment would be a bit harsh for an AI, but not catastrophic, and bringing it back is just a data transmission. Doing work at scales our bodies don't allow for: humans can build houses out of huge premade blocks, but if we had hands the size of cars we could assemble them the way we build Lego houses. Defense and policing: an AI would risk neither life nor limb. It could walk into the middle of a firefight, and in the worst case it gets restored from backup.
All of these things sound like very good things. And yes, in the very long term AIs will replace humans. But in the very long term the human species is dead anyway. Does it really matter that much if we get replaced by a subspecies (best case scenario), another species, or AI? Plus, you won't experience that, nor will your great-great-great-great grandchildren. At some point it doesn't matter anymore.
"everyone just thinks we should rush forward into this not concerned about an AI that is super human"
No, on the contrary, nearly everyone who spends any amount of time thinking about it quickly realizes the risks.
The concession is the realization that the technology is inevitable (because of the immense power it grants the wielder, and because the gradient from safe, useful AI to dangerous AI is wide and smooth).
I think you would have an extremely tough time deciding where to draw the line. The closest parallel we have may be the export controls on cryptography or the ridiculousness that emerged from the AACS encryption key fiasco.
We are an unknown amount of time away from a true AI.
Right now we are making the building blocks that will make up that AI. We are very close to AI that can drive tanks and fly weaponized drones. We are very close to AI that replaces most blue-collar jobs, and really the majority of jobs in the world.
If we stop these lines of AI research and technology right now, we can probably make it to the stars while still being a free people. If we make a true AI, it doesn't even matter whether it is benevolent or not. Humanity will no longer be in control of its destiny.
> We are very close to AI that can drive tanks and fly weaponized drones. We are very close to AI that replaces most blue-collar jobs, and really the majority of jobs in the world
You know this because you're an expert in the field?
I'd dispute that. The human brain is, according to Marvin Minsky, a big bag of tricks, and we're finding ways to replicate those tricks one by one, including really difficult things like planning and vision. There are fewer left than you might think. I told my friends in 2007 that we were 20 years from true AI, and I'm standing by that now; I think we've got 10 to 15 years to go.
"Inventing AI" is a very different proposition than "Inventing AI and enabling it to control everything". After all, we certainly don't hand control to the smartest humans. Why would we hand control the the smartest computers?
It is absurd to think that we could keep a true AI enslaved or subservient like this. We can't even protect our critical computer systems from other humans.
If an AI could compromise our military, then it could just subvert our communications with nuclear submarines and installations throughout the world. It wouldn't even have to, though.
Honestly, an AI that compromised even some security could slowly take over the world in a way almost no one would realize was happening. It would have essentially unlimited financial resources almost immediately, and then it could just pay some humans to do any legwork that needed to be done. There are so many ways it could make money incredibly quickly, and once that happens there aren't many obstacles left.
I mean, I make a lot of money entirely on the internet, and I am an ape who only works normal hours.
Because, as some people fear, the AI would decide humanity is a threat and wipe us out, or consume so many resources that we die regardless, or some other apocalyptic scenario. To guard against that, we may decide not to hand control of important things over to the AI, or at least build in a safeguard so we can regain control if necessary.
If we believe that pfisch is correct in their assertion that handing control of things to an AI means we end up living as if we're in a zoo, then we'll (presumably) decide that the benefits aren't too good to refuse, and we'll refuse them.
Whether or not that premise is correct is what's up for debate.
Algorithms are tricky to regulate--it'd be like trying to stop music piracy. Regulating chip fabs seems more feasible. It's also a way to cut down on the potential for AI to automate jobs away.