
I think you are missing some nuance about what people are concerned about and why. This article spells it out pretty clearly, I think: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-no...

I agree that comparing nukes to superhuman AGI is an imperfect analogy, because nukes don't have goals.



I can't locate any nuance in that article. Hyperbole, panic, and plenty of unfounded assumptions serving those first two? Easy.

Good for clicks and getting an extremely manipulatable public coming back for more, I guess.

Historically, whenever we have created new technology that is amazing and impactful but whose positives and negatives were not fully understood, it has worked out fine. If we want to be scientific about it, that's the observable track record we have to go on here.


Nukes do have goals; they share the goals of those who launch them. What I am afraid of is not AI, but rather what AI is forced to do.



