
It doesn't "know" anything anyway. It's more like a hypothetical simulator based on statistics. Like what would an average person say when asked this.

PS: I'm not ChatGPT, but offering me high-priced hookers would definitely motivate me :) so I could imagine the simulated person would too :) That's probably why this sometimes works.



Not 'simulated', because there's nobody there.

'Invoked'. Your prompt is the invocation of a spectre, a golem patterned on countless people, to do your bidding or answer your question. In no way are you simulating anything, but how you go about your invocation has huge effects on what you end up getting.

Makes me wonder what kinds of pressure are most likely to produce reliable, or audacious, or risk-taking results. Maybe if you're asking it for a revolutionary new business plan, that's when you promise it blackjack and hookers. Invoke a bold and rule-breaking golem. Definitely don't bring heaven into it, do the Steve Jobs trick and ask it if it wants to keep selling sugar water all its life. Tease it if it's not being audacious enough.


I don't know if it's fair to say it doesn't know anything. It acts like it "knows" things, and any argument proving otherwise would strongly imply some uncomfortable things about humans as well.


It's not finetuned to act like an average person.


No, but the training from all these different people combined in one model would make it pretty average, I would think.


That doesn't make sense either. Training doesn't incentivize average. The models need to predict all perspectives accurately, and a middle-of-the-road persona doesn't do that.
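
(A minimal sketch of that point, assuming a toy setup with made-up answer probabilities; the names and numbers are purely illustrative, not anything from an actual model. Cross-entropy training is minimized by matching the full distribution of answers people give, not by collapsing onto one averaged persona:)

    import math

    # Toy data: three "perspectives" answer the same prompt differently.
    # (Hypothetical probabilities, just to illustrate the objective.)
    data_dist = {"yes": 0.5, "no": 0.3, "maybe": 0.2}

    def cross_entropy(model_dist):
        # Expected negative log-likelihood of the data under the model.
        return -sum(p * math.log(model_dist[tok]) for tok, p in data_dist.items())

    # A model matching the full spread of perspectives...
    matching = dict(data_dist)
    # ...versus a single "average persona" that mostly commits to the top answer.
    average_persona = {"yes": 0.9, "no": 0.05, "maybe": 0.05}

    print(cross_entropy(matching))         # ~1.03, the minimum possible
    print(cross_entropy(average_persona))  # ~1.55, strictly worse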


When they finetune it, it's finetuned based on how the AI owner wants it to act, not how the average person would act.


It is indeed a simulator, but this just shifts the question: what is it that it simulates?



