
No. Corporations are not necessarily like artificial intelligences. They are cooperations of human intelligences, and these two classes of intelligences actually have very little in common if you look past the similarity that they are potentially very powerful and intelligent. Corporations are driven by material profit, but in the end there is a reasonably large possibility that they are shaped by human values (because they are run by humans, and otherwise people would refuse to buy their products). The same cannot be said about AIs with high certainty.


The comparison is very apt - the first AIs will embody corporate values, as corporations will build and be liable for them.

Likely AIs will be shaped by their builders: if corporations build them, they will adhere primarily to the profit motive; if humanitarian hackers build them, they will have human values.

Reputedly the Russian Army has built guard robots; they will just be guns on tanks with a kill radius - no values required. Yet are these less moral than the human-controlled drones? At least an AI can get stuck in a corner or a logic loop and you may effect an escape - with humans you need a whistleblower.

Asimov's robot books are informative: his robots are the most moral actors, obeying their Three Laws, often protecting humans from other humans' decisions.

Certain corporations dehumanise decisions, so while the processing occurs in wetware, the human worker is only a cog and the invisible hand of human values is removed.

Most workers today could be trivially replaced with a near-future neural net.

Of course humans conspire, complain, unionise, strike, work to rule, demand rights, and empathise with their customers - so there is a maximum level of evil a corporation of humans can rise to - but as history has shown, this is an unacceptably high bar.

The corporate board can make decisions based on human values so long as they do not go against the rapacious pursuit of profit, or the CEO will be deposed by the shareholders.

Once a corporation reaches transnational size, nothing can really stop it, or even get it to pay tax if it doesn't want to.

I think that much of what people actually fear about the AIpocalypse is exactly the sort of dehumanising powerlessness and machine-like cruelty they already experience from corporations and governments.

You may be speaking to a human who empathises, but often one suspects they are there to sop up your moans, not to help you.

An AI is an amplifier of what we already are; in fearing robots we rightly fear their creators' motives.


> they will adhere primarily to the profit motive

You are assuming that there will be an obvious way of doing so - an obvious solution to the control problem.



