
> At some point those optimizations will need to account for conflict: humans want X but other beings want Y.

Accounting for the possibility of conflicts of interest is entirely different from a priori taking an ethically bankrupt stance based on nothing but your own superiority. If an advanced species wants to kill humans, that's obviously just as unethical as the other way around.

To a certain degree I see where you're coming from. In a world without an objective morality, an unscrupulous individual could be tempted to conclude that "might makes right" is a perfectly valid ordering principle. But it's certainly not a humanist perspective (a peculiar term in this context, but that's what we called it); I would go so far as to say it has been associated with racism, sexism and other forms of extreme in-group philosophies in the past.

> There are humans who find happiness in master/slave like relationships. Should these humans not exist?

This is not about sexual kinks or individual lifestyle choices. It's also worth noting that at no point did anyone say people with specific traits should not exist, so it seems you're intentionally distorting the subject at hand in this case.

I think it's not particularly productive to intentionally try to muddy or distort the conversation until the subject becomes diffuse and malleable enough to drive your initial conceit through. It's also somewhat of a disrespectful gesture.

> Your analogy to historical slavery is silly. Historical slaves did not want to be slaves, unlike these hypothetical AI.

Even if historical slaves did want to be slaves, and I suspect a good number were indeed fine with it, it was still wrong to exploit that. Being unable to make or even imagine the choice of not working as a slave is in fact a defining factor of slavery. It's also wrong to genetically engineer people to be happy slaves, but that's just an extension of the initial assertion that slavery is wrong in general. It doesn't matter whether you assert that your slaves have better lives than free people, or whether they would even run if you cut their chains, or whether the economy would collapse without their labour. It's still wrong.



> Accounting for the possibility of conflicts of interest is entirely different from a priori taking an ethically bankrupt stance based on nothing but your own superiority. If an advanced species wants to kill humans, that's obviously just as unethical as the other way around.

Why would a generic AI respect your code of ethics? My claim is that most AIs won't and we need to ensure that whatever AI is created does.

> This is not about sexual kinks or individual lifestyle choices. It's also worth noting that at no point did anyone say people with specific traits should not exist, ...

On the contrary, we are discussing "AIs with fully competent minds that are somehow hardwired to serve us". You specifically said such AI should not be created.

> Even if historical slaves did want to be slaves, and I suspect a good number were indeed fine with it, it was still wrong to exploit that.

So people who want to be slaves can't satisfy their desires? What other harmless desires should people be unable to satisfy? A desire for gay sex or drugs (to borrow two examples that go against the zeitgeist)?

You keep trying to turn a discussion of AI into an opportunity to express your conformity with the modern zeitgeist. Does that make you happy?


> My claim is that most AIs won't and we need to ensure that whatever AI is created does.

At least we can agree on that.

> You specifically said such AI should not be created.

I don't see how this is equivalent to me supposedly wanting to kill BDSM practitioners. I already laid out what I object to and why, no reason to rehash this a million times.

> So people who want to be slaves can't satisfy their desires? What other harmless desires should people be unable to satisfy? A desire for gay sex or drugs (to borrow two examples that go against the zeitgeist)?

I'm not sure who else you're talking to in your mind, but that in no way describes me or the points I made. I suspect you know that, though.

> You keep trying to turn a discussion of AI into an opportunity to express your conformity with the modern zeitgeist. Does that make you happy?

What do you define the modern zeitgeist as? The Enlightenment happened in the 17th and 18th centuries, and I guess you could accuse me of conforming to that. My suspicion is that's where our inability to communicate stems from. Is it possible you're not on board with the whole humanism thing?



