
Which part makes you think that?


The declarations are very vague as to what will actually be done beyond declaring things, but I get the impression they want to make it more complicated just to put up a chatbot.

I mean stuff like

>We underline the need for a global reflection integrating inter alia questions of safety, sustainable development, innovation, respect of international laws including humanitarian law and human rights law and the protection of human rights, gender equality, linguistic diversity, protection of consumers and of intellectual property rights.

is quite hard to even parse. Does that mean you'll get grief for your bot speaking English because it's not protecting linguistic diversity? I don't know.

What does "Sustainable Artificial Intelligence" even mean? That you run it off solar rather than coal? Does it mean anything?


The whole text is just "We promise not to be a-holes" and doesn't demand any specific action anyway, let alone having any teeth.

Useful only when you're rejecting it. I'm sure that to a culture-war-torn American mind it signals very important things about genitals and ancestry and the industry around this stuff, but to a non-American mind it gives the vibe that the Americans intend to do bad things with AI.

Ha, now I wonder if the people who wrote that were unaware of the situation in the US, or whether that was the outcome they expected.

"Given that the Americans aren't promising not to use this tech for nefarious tasks, maybe Europe should de-couple from them?"


It's also a bit woolly on real dangers that governments should maybe worry about.

What if ASI happens next year and renders most of the human workforce redundant? What if we get Terminator 2? Those might be more worthy of worry than "gender equality, linguistic diversity" etc.? I mean, the diversity stuff is all very well, but it's not very AI-specific. It's like developing H-bombs and worrying about whether they are socially inclusive rather than about nuclear war.


My understanding is that this is about using AI responsibly and not about AGI at all. Not worrying about the H-bomb, but more like worrying about handling radioactive materials in industry or healthcare to prevent exposure, or maybe to stop the radium girls from happening again.

IMHO, from a European perspective, they are worried that someone will install a machine that has a bias against, let's say, Catalan people, and they will be disadvantaged against Spaniards, and those who operate the machine will claim no fault ("the computer did it"), leading to social unrest. They want regulations saying that you are responsible for this machine, and grounds for its removal if it creates issues. All the regulations around AI in the EU are in that spirit; they don't actually ban anything.

I don't think AGI is considered seriously by anybody at the moment. That's a completely different ball game, and if it happens, none of the current structures will matter.



