
Almost no one who actually works or has done serious research in ML is genuinely concerned about "malevolent AI". We are so, so, so far away from anything remotely close to that. Please stop trying to gin up fear and listen to the experts, who uniformly agree that this is not something to be concerned about.


"AI: A Modern Approach (...) authors, Stuart Russell and Peter Norvig, devote significant space to AI dangers and Friendly AI in section 26.3, “The Ethics and Risks of Developing Artificial Intelligence.”

https://intelligence.org/2013/10/19/russell-and-norvig-on-fr...

In addition, there is the AI Open Letter, which is signed by many "who actually works or has done serious research in ML", including Demis Hassabis and Yann LeCun. From the letter:

"We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do. "

http://futureoflife.org/ai-open-letter/

Are the experts concerned about a "Skynet scenario"? No. But there is certainly genuine concern from the experts.


Slight nitpick at "devote significant space"... AI: A Modern Approach is a 1000+ page book; 3.5 pages is a footnote by comparison.


The issue is not "malevolent AI": producing an AI with "malevolent goals" by random chance is so much less likely than producing any AI at all that it's not even worth worrying about.

The problem is any AI whose goals are not positively aligned with, or respectful of, human society's values. In terms of Asimov's Laws of Robotics meme, the problem is not robots harming humans, but robots allowing humans to come to harm as a side effect of their actions. This is an important ethical issue that, IMHO, technologists in general are failing to address.

Unethical AIs are only a special case in the sense that they are believed to be able to cause much more chaos and destruction than spammers, patent trolls or high frequency trading firms.


Personally, I think the 'immediate' problem will be AI being used as an assistant by unscrupulous humans. Imagine being targeted by semi-sentient ransomware.

Edit: typos


I was going to quip something about metadata, drone strikes and automated targeting... but then I realized that the most likely immediate threat to life from AI is probably in trading and logistics algorithms. As in efficient privatization of water resources, to the point where those too poor to be a "good" market have to go without, or the possible drastic results of errors in such algorithms (e.g. massive job loss as a result of bankruptcy). And then there's AI-augmented health insurance - mundane things that might ultimately take human life - and perhaps no one will even notice.


The fact that they aren't concerned is exactly the problem. They should be. And in fact many are. See the SSC link posted below (http://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-ri...), or this survey (http://www.nickbostrom.com/papers/survey.pdf). Certainly I am worried about it. There is far from "uniform agreement" that AI is safe. And appeals to authority would not be comforting even if there were an authority to appeal to.


Couple of things

1. I take issue with the researchers' lack of concern being cited as exactly the reason people should be concerned; I dislike circular reasoning.

2. There is no uniform agreement that AI is safe; at least in German hacker circles the mood is quite the opposite. But as far as I can see, the problems that AI risk evangelists emphasize are the wrong ones. Taking over the world, paperclips, etc. are not going to happen; AI is too stupid for that, for now, and the rapid takeoff scenarios are not realistic. But ~30% of all jobs are easily replaceable by current AI technology, once the laws and the capital are lined up. AI is also making more and more decisions, legitimizing the biases with which it was trained or programmed, because it is "AI" and thus supposedly more reliable (/s) (see the sketch below). And there is a great cargo cult of "data" "science" in development (separate scare quotes intended).

3. I am starting to dislike any explicit mention of fallacies, especially when they were used just sentences earlier by the mentioner.
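A minimal, purely illustrative sketch of the bias point in item 2 (synthetic data and scikit-learn's LogisticRegression; none of this comes from the thread): a model fitted to biased historical decisions reproduces the same disparity, now with the apparent authority of being "AI".

    # Sketch: a classifier trained on biased historical decisions
    # simply learns and reproduces the bias (synthetic data).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # One legitimate feature (skill) and one group label that should be
    # irrelevant to the decision.
    skill = rng.normal(size=n)
    group = rng.integers(0, 2, size=n)

    # Historical decisions were biased: group 1 faced a higher skill bar.
    threshold = np.where(group == 1, 0.5, -0.5)
    hired = (skill > threshold).astype(int)

    # Fit on the biased labels, with the group label as a feature.
    X = np.column_stack([skill, group])
    model = LogisticRegression().fit(X, hired)

    # The learned model legitimizes the historical disparity.
    for g in (0, 1):
        rate = model.predict(X[group == g]).mean()
        print(f"group {g}: predicted hire rate = {rate:.2f}")

Running this prints a noticeably lower predicted hire rate for group 1 than for group 0, even though skill is distributed identically in both groups; the model has only automated the old bias.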


I agree with you, igk. AI is a threat right now and has to be countered. Now the problem is: is there an organization dedicated to countering current AI trends through government regulations to slow down AI research and prevent AI misuse? Or are we just going to talk about it on Hacker News and German hacker meetups?

Because at least OpenAI and other "AI Safety" people are attempting to address this 'risk'. They may fail, but at least they can say they tried to deal with "strong AI". How about those worried about current "weak AI"? If the cargo cult spreads and we do nothing...should we get some of the blame for letting the robots proliferate?

Then again. Maybe it is impossible to stop or slow down these future trends. Maybe AI is destined to eat the world, and our goal should be to save as many people as we can from the calamity.


>I agree with you, igk. AI is a threat right now and has to be countered. Now the problem is: is there an organization dedicated to countering current AI trends through government regulations to slow down AI research and prevent AI misuse? Or are we just going to talk about it on Hacker News and German hacker meetups?

We should not slow it down. We should push forward, educate people about the risks, and keep as much as possible under public scrutiny and in public possession (open source, government grants, out of patents / university patents).

>Because at least OpenAI and other "AI Safety" people are attempting to try to stop this 'risk'. They may fail, but at least they can say they tried to deal with "strong AI". How about those worried about current "weak AI"? If the cargo cult spreads and we do nothing...should we get some of the blame for letting the robots proliferate?

Are they though? All I hear and read (for example, "Superintelligence") is about "runaway AI"; very little is about societal risk.

>Then again. Maybe it is impossible to stop or slow down these future trends. Maybe AI is destined to eat the world, and our goal should be to save as many people as we can from the calamity.

Just like electricity ate the world, and steam before it... slowing down is not an option; making sure it ends up benefiting everybody is the right approach. Pushing for UBI is infinitely more valuable than AI risk awareness, because one of the two does not depend on technological progress to work / give rewards.


No, putting everything in the public domain and handing out a UBI doesn't solve anything at all. It's like worrying about nukes and believing the best solution is to give everyone nukes (AI) and nuclear bunkers (UBI), because "you can't stop progress". And then, let's hand out pamphlets telling people how to use nukes safely, even though we know that most people will not read the pamphlets and (since the field is new) even the people writing the pamphlets may have no idea how to use this tech. Any cargo cult would only grow in number.

Oh and our "free" nuclear bunkers have to be paid for by the government. There is a chance of the bunkers either being too "basic" to help most people ("Here's your basic income: $30 USD per month! Enjoy!") or being so costly that the program will likely be unsustainable. And what if people don't like living in bunkers, etc.?

We are trying to apply quick fixes to address symptoms of a problem...instead of addressing the problem directly. Slowing down is the right option. If that happens, then society can slowly adapt to the tech and actually ensure it benefits others, rather than rushing blindly into a tech without full knowledge or understanding of the consequences. AI might be okay but we need time to adjust to it and that's the one resource we don't really have.

Of course maybe we can do all of this: slow down tech, implement UBI, and have radical AI transparency. If one solution fails, we have two other backups to help us out. We shouldn't put all our eggs in one basket, especially when facing such a complicated and multifaceted threat.

>Are they though? All I here and read (for example "Superintelligence") is about the "runaway AI", very little is about societal risk.

You are right actually. My bad. I was referring to how AI Safety people have created organizations dedicated to dealing with their agendas, which can be said to be better than simply posting about their fears on Hacker News. But I don't actually hear much about what these organizations are actually doing other than "raising awareness". Maybe these AI Safety organizations are little more than glorified talking shops.


First, slowing down is not going to happen. Capitalism is about chasing efficiency, and automating things is currently the pinnacle of efficiency.

Second, there's no magical adjusting. Just as you can't simply adjust to having nukes in the living room.


It is also better to be far ahead of unregulatable rogue states that would continue to work on AI. Secondly, deferring AI to a later point in time might make self-improvement much faster and hence less controllable, since more computational resources would be available due to Moore's Law.



In contrast to popular fiction, my concerns don't lie in a conspiring super-AI. Rather, we should take countermeasures against death by a thousand cuts. Flash crashes in financial markets are a precursor to other systemic risks ahead. Maybe this is a distinction that helps people appreciate AI safety more.


I am actually genuinely concerned about terrorists developing killer robots. There are already designs out there for robotic turrets that can utilize commodity firearms. I even feel uncomfortable thinking of all the nasty stuff you could build.


Yeah, this scenario is a million times more plausible. ML that augments human capabilities to empower bad people to do even more evil things.



