
That's the "AI Alignment" problem we've heard so much about recently.


For the past decade, the AI alignment problem referred to making sure AGI doesn't end humanity as a side effect. Now that the mainstream has caught on, the term has been diluted.


Ha, we don't even have "alignment" among humans, in the Kantian sense that humans are an end in themselves, not a means toward the ends of others.


Again, that's a much subtler version of alignment than what the term was originally used for. It used to refer to things like the paper clip maximizer and the strawberry problem, which are much worse and more fundamental.



