
This is exactly how natural language is meant to function, and the intervention response by OpenAI is not right IMO.

If some people have a language of behavior based on fortune telling, or animal gods, or supernatural powers, picked up from past writings of people who shared their views, then I think it's fine for the chatbot to encourage them down that route.

To intervene with ‘science’ or ‘safety’ is nannying, intellectual arrogance. Situations sometimes benefit from irrational approaches (think gradient descent with random jumps to improve optimization performance).
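The parenthetical refers to a real technique: plain gradient descent gets stuck in local minima, and occasional random jumps (as in basin-hopping-style methods) can escape them. A minimal sketch in Python, with a made-up 1-D multimodal objective and all parameters (learning rate, jump probability, search range) chosen purely for illustration:

```python
import math
import random

def f(x):
    # Illustrative multimodal objective: many local minima, global minimum at x = 0.
    return x * x + 10 * (1 - math.cos(x))

def grad(x):
    # Analytic derivative of f.
    return 2 * x + 10 * math.sin(x)

def descend(x, steps=200, lr=0.01, jump_prob=0.0, rng=None):
    """Gradient descent; with probability jump_prob, take a random jump instead."""
    rng = rng or random.Random(0)
    best = f(x)
    for _ in range(steps):
        if rng.random() < jump_prob:
            x = rng.uniform(-10, 10)   # the "irrational" random jump
        else:
            x = x - lr * grad(x)       # the ordinary gradient step
        best = min(best, f(x))
    return best

# Pure descent from x = 8 settles into a nearby local minimum;
# runs that mix in random jumps can find the deeper basin near 0.
plain = descend(8.0, jump_prob=0.0)
jumpy = min(descend(8.0, jump_prob=0.1, rng=random.Random(s)) for s in range(10))
print(f"plain={plain:.2f} with-jumps={jumpy:.2f}")
```

The jump has no gradient justification at the moment it is taken, yet runs that include jumps typically end at a lower value than pure descent from the same start.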

Maybe provide some customer education on what these systems are really doing, and kill the team that injects value judgements about your prompts into responses to give the illusion you are engaging someone with opinions and goals.



“Nannying” as a pejorative is a thought-terminating cliché.

Sometimes, at scale, interventions save lives. You can thumb your nose at that, but you have to accept the cost in lives and say you’re happy with that. You can’t just say everybody knows best and the best will occur if left to the level of individual decisions. You are making a trade-off.

See also: seatbelts, speed limits, and the idea of law generally, as a constraint on individual liberty.


Yes. That is exactly the point. The opposite of nannying is the dignity of risk. Sometimes that risk is going to carry harm or even death. I don't think anyone who is arguing against nannying in this way would bat an eye at the potential cost of lives; that's a feature, not a bug.

Constraints on individual liberty make sense where it harms or restricts the liberty of others. It becomes nannying when it restricts your liberty for your own good. It should be illegal to drive while drunk because you could crash into someone else and hurt them, but seatbelt laws are nannying because the only person you're going to hurt is yourself. And to get out ahead of it: if your response to this is some tortured logic about how without a seatbelt you might fly out of the car or some shit like that, you're missing the point entirely.


That’s a pretty limited take on “hurt”. A person without a seatbelt will get worse injuries, and require greater medical attention. In other words, it does hurt other people.


This is exactly the kind of tortured logic I was talking about. By going this route you're actually agreeing with me and then doing whatever mental gymnastics necessary to twist everything that only harms the individual into some communal harm. Your argument applies equally to riding a motorcycle.

Obviously eating cheeseburgers should be illegal because you'll put a strain on the medical system when you get hypertension and heart disease.


I think it’s a silly take. Companies want to avoid getting bad PR. People having schizophrenic episodes with ChatGPT is bad PR.

There are plenty of legitimate purposes for weird psychological explorations, but there are also a lot of risks. There are people giving their AI names and considering them their spouse.

If you want completely unfiltered language models there are plenty of open source providers you can use.


No one blames Cutco when some psycho with a knife fetish stabs someone. There's a social programming aspect here that we are engaging with, where we are collectively deciding if/where to point a finger. We should clarify for folks what these LLMs are, and let them use them as-is.


> Situations sometimes benefit from irrational approaches (think gradient descent with random jumps to improve optimization performance).

What?

Irrational is sprinkling water on your car to keep it safe, or putting blood on your doorframes to keep spirits out.

An optimization hypothesis test with measurable outcomes is a rigorous empirical process, with mechanisms for epistemological proof and stated limits and assumptions.

These don't live in the same class of inference.


They are the same type of thing, yes.

You have a narrow perspective that says there is no value in sprinkling your car with water to keep it safe. That's your choice. Another might intuit that the religious ceremony has been shown throughout their lives to confer divine protection. A third might recognize that an intentional performance where safety is top of mind can program a person to be more safety conscious, thereby causing safer outcomes with the object in people who have performed the ritual; they might further suspect that many performers of such rituals privately understand the practice as metaphorical, despite what they say publicly. A fourth may not understand the situation like the third does, but may have learnt that when large numbers of people do something, there may be value they don't understand, so they will give it a try.

The optimization strategy with jumps is analogous to the fourth; we can call it 'intellectual humility and openness'. Some say it's the basis of the scientific method, i.e. throw out a hypothesis and test it with an open mind.


I'm not narrow; you just wrote a lot of positive psychology babble.

This is an epistemological question and everything you wrote is epistemically bankrupt. To wit:

“Another, might intuit that the religious ceremony has been shown throughout their lives, to confer divine protection”

This kind of mythology is why humans and human society will never escape the cave, and semi-literate people sound smart to the illiterate with this bullshit.


Well now, here’s a puzzle for you. If literate humans don’t believe in myth, and all US Presidents had religious affiliation, were they all a) semi-literate ‘cave people’, b) cynical manipulators of the semi-literate cave-people, c) something else?

And if a person practices any myth-based festival (Christmas, Easter, Halloween), is that indicative to you of a semi-literate cave-person? Or do you make exemptions for how a person interprets the event, and if so, how do you apply those exemptions consistently across all myth-based societies? Also, do you reject science-fiction and fantasy works as idle fancy, or do you allow that they use metaphor to convey important ideas applicable to life, and how do you square that with your treatment of myth in religion?

It is my hope that you will consider my comment and come to a better understanding of what LLMs are. They aren't baking in any universal truth or world model; they are collating alternative narrative systems.


No exemptions, I don’t really mess with science fiction other than what I’ve written.

Are you seriously asking if the US president is a semi-literate person?

The answer is obvious

Read this and be enlightened: https://kemendo.com/benchmark.html




