
The intelligent doomsayers say something fundamentally different.

They say that the space of possible objective functions (read: goals) for AI is huge, and the space of methods for achieving those goals is also huge. In contrast, the space of goals/methods that humans find acceptable is much smaller.

I.e., 99.99% of AI designs will be dystopian.

I'm not sure why you consider minds hardwired to enjoy serving us to be unethical. Is it also unethical for a human to enjoy serving others (some do)?



> I.e., 99.99% of AI designs will be dystopian.

Not sure if I consider this a valid premise, much less an intelligent one, but luckily I'm not the arbiter of such things. If this becomes a "number of possible goals times the means to carry them out" argument, you could easily construct the same one against human progress as a whole.

> I'm not sure why you consider minds hardwired to enjoy serving us to be unethical.

Holy shit, yes it's unethical.

If I asked you "would it be unethical to genetically engineer a new race of humans who enjoy being slaves to the upper class?" I hope your answer would be yes. The same rules apply to a non-human species as well.


Here's a way to recognize that it's almost certainly a true premise. Out of all the animal species on earth, which ones would (if they took over) create a world that is as pleasant for humans to live in as the current one?

> If I asked you "would it be unethical to genetically engineer a new race of humans who enjoy being slaves to the upper class?" I hope your answer would be yes. The same rules apply to a non-human species as well.

We've already done this to several non-human species. Do you consider owning dogs unethical?

I don't think it's such a simple question.


> Out of all the animal species on earth, which ones would (if they took over) create a world that is as pleasant for humans to live in as the current one?

I see you're trying to create a partisan argument here, that only humanity will be capable of creating an optimal world for humanity - and since nobody is watching out for us but ourselves, we should put our foot firmly on the throat of any nascent species before it can have a say in the matter?

Up to this point in history, there is really no way to know, since none of the other animals on the planet are capable of reshaping our environment through wilfully considered acts. The question should rather be one of whether another race of technology users would be inclined to optimize the world, not only for humans, but for all sentient lifeforms - or whether any such group of minds would see more benefit in your way of reasoning, which takes only in-group loyalties into account instead of broader ethical considerations.

> We've already done this to several non-human species. Do you consider owning dogs unethical?

I doubt many people would dispute that this is a matter of degree. Owning a dog may not be considered unethical, but strapping a bomb to a dog certainly would be. And once we reach a certain class of mind, which by consensus right now fully includes only humans, owning another mindful creature is indeed considered immoral. You can't just assert without encountering righteous resistance that owning a human is okay, for instance, because you personally happen to identify them as a separate group of humans from you. It's historically been done, but the reasoning behind it was appalling.

What we aim to build is not a species of cow-like AIs; we aim for at least human-grade intelligence, if for no other reason than to be able to hand off certain tasks to someone who "understands". When we do reach that capability, I very much hope the people making the decisions will have a better ethical stance than "well, they're not humans, so anything we do to them is by definition okay".


> The question should rather be one of whether another race of technology users would be inclined to optimize the world, not only for humans, but for all sentient lifeforms...

At some point those optimizations will need to account for conflict: humans want X but other beings want Y. You can't simultaneously optimize two objective functions with different maxima; one species may want to eat humans, another might want to maximize paperclips, and a third is humans themselves. Why should human preferences win out?
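A toy sketch of what "different maxima" means here (the utility functions and numbers below are made up purely for illustration): two parties score the same world states, their peaks don't coincide, so no single state maximizes both and any outcome is a trade-off.

    # Hypothetical, made-up utility functions, purely to illustrate conflicting objectives.
    import numpy as np

    states = np.linspace(0.0, 1.0, 101)       # toy one-dimensional "world states"
    human_utility = -(states - 0.2) ** 2      # humans happen to prefer states near 0.2
    clipper_utility = -(states - 0.9) ** 2    # a paperclip maximizer prefers states near 0.9

    best_for_humans = states[np.argmax(human_utility)]
    best_for_clipper = states[np.argmax(clipper_utility)]
    print(best_for_humans, best_for_clipper)  # 0.2 vs 0.9: distinct maxima, so any single choice trades one off

Which state actually gets picked is then a value judgment, not something optimization alone resolves.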

> You can't just assert without encountering righteous resistance that owning a human is okay, for instance, because you personally happen to identify them as a separate group of humans from you. It's historically been done, but the reasoning behind it was appalling.

There are humans who find happiness in master/slave like relationships. Should these humans not exist? Should these humans not have their desires satisfied?

Your analogy to historical slavery is silly. Historical slaves did not want to be slaves, unlike these hypothetical AI.


> At some point those optimizations will need to account for conflict: humans want X but other beings want Y.

Accounting for the possibility of conflicts of interest is entirely different from a priori taking an ethically bankrupt stance based on nothing but your own superiority. If an advanced species wants to kill humans, that's obviously just as unethical as the other way around.

To a certain degree I see where you're coming from. In a world without an objective morality, an unscrupulous individual could be tempted to conclude that "might makes right" is a perfectly valid ordering principle. But it's certainly not a humanist perspective (a peculiar term in this context, but that's what we called it); I would go so far as to say it's been associated with racism, sexism and other forms of extreme in-group philosophies in the past.

> There are humans who find happiness in master/slave like relationships. Should these humans not exist?

This is not about sexual kinks or individual lifestyle choices. It's also worth noting that at no point did anyone say people with specific traits should not exist, so it seems you're intentionally distorting the subject at hand in this case.

I think it's not particularly productive to intentionally try to muddy or distort the conversation until the subject becomes diffuse and malleable enough to drive your initial conceit through. It's also somewhat of a disrespectful gesture.

> Your analogy to historical slavery is silly. Historical slaves did not want to be slaves, unlike these hypothetical AI.

Even if historical slaves did want to be slaves, and I suspect a good number were indeed fine with it, it was still wrong to exploit that. Being unable to make or even imagine the choice of not working as a slave is in fact a defining factor of slavery. It's also wrong to genetically engineer people to be happy slaves, but that's just an extension of the initial assertion that slavery is wrong in general. It doesn't matter whether you assert that your slaves have better lives than free people, or whether they would even run if you cut their chains, or whether the economy would collapse without their labour. It's still wrong.


> Accounting for the possibility of conflicts of interest is entirely different from a priori taking an ethically bankrupt stance based on nothing but your own superiority. If an advanced species wants to kill humans, that's obviously just as unethical as the other way around.

Why would a generic AI respect your code of ethics? My claim is that most AIs won't and we need to ensure that whatever AI is created does.

> This is not about sexual kinks or individual lifestyle choices. It's also worth noting that at no point did anyone say people with specific traits should not exist, ...

On the contrary, we are discussing "AIs with fully competent minds that are somehow hardwired to serve us". You specifically said such AI should not be created.

> Even if historical slaves did want to be slaves, and I suspect a good number were indeed fine with it, it was still wrong to exploit that.

So people who want to be slaves can't satisfy their desires? What other harmless desires should people be unable to satisfy? A desire for gay sex or drugs (to borrow two examples that go against the zeitgeist)?

You keep trying to turn a discussion of AI into an opportunity to express your conformity with the modern zeitgeist. Does that make you happy?


> My claim is that most AIs won't and we need to ensure that whatever AI is created does.

At least we can agree on that.

> You specifically said such AI should not be created.

I don't see how this is equivalent to me supposedly wanting to kill BDSM practitioners. I already laid out what I object to and why, no reason to rehash this a million times.

> So people who want to be slaves can't satisfy their desires? What other harmless desires should people be unable to satisfy? A desire for gay sex or drugs (to borrow two examples that go against the zeitgeist)?

I'm not sure who else you're talking to in your mind, but that in no way describes me or the points I made. I suspect you know that, though.

> You keep trying to turn a discussion of AI into an opportunity to express your conformity with the modern zeitgeist. Does that make you happy?

What do you define the modern zeitgeist as? The Enlightenment happened in the 17th and 18th centuries, and I guess you could accuse me of conforming to that. My suspicion is that's where our inability to communicate stems from: is it possible you're not on board with the whole humanism thing?



