What happens if you present 500 people with an argument that AI is risky? (aiimpacts.org)
1 point by subvertify on Dec 23, 2024 | 2 comments


In my opinion, all of the arguments are simple and none really analyze AI in depth. They focus on single outcomes rather than discussing how AI might change society in detail. They single out AI as a separate technology rather than addressing the existing effects of advanced technology and how AI is a force multiplier for those effects.

Unfortunately, the effects of advanced technology are not so simple, and biases are present throughout society: because technological development is tied to sustenance, biases arise to protect people from cognitive dissonance.

Stronger arguments against AI have already been raised by several authors such as David Skrbina ("The Metaphysics of Technology") and Jacques Ellul ("The Technological Society").

In other words, by isolating AI, the issue becomes very unclear. Only by understanding it as the apex of advanced technology is it possible to understand its risks.


The authors' focus here seems to be on considering AI as something with its own goals and intent. However, there's another category of risk which is much closer: what humans will do to humans using AI as a tool.

Consider the difference between "I worry inventing nuclear bombs will make us extinct" versus "I worry inventing the Hypermind from the book Don't Create The Torment Nexus will make us extinct." The former doesn't have agency.



