It doesn't "know" anything anyway. It's more like a hypothetical simulator based on statistics: what would an average person say when asked this?
PS: I'm not ChatGPT, but offering me high-priced hookers would definitely motivate me :) so I can imagine the simulated person would too :) That's probably why this sometimes works.
'Invoked'. Your prompt is the invocation of a spectre, a golem patterned on countless people, to do your bidding or answer your question. In no way are you simulating anything, but how you go about your invocation has huge effects on what you end up getting.
Makes me wonder what kinds of pressure are most likely to produce reliable, or audacious, or risk-taking results. Maybe if you're asking it for a revolutionary new business plan, that's when you promise it blackjack and hookers. Invoke a bold and rule-breaking golem. Definitely don't bring heaven into it, do the Steve Jobs trick and ask it if it wants to keep selling sugar water all its life. Tease it if it's not being audacious enough.
I don't know if it's fair to say it doesn't know anything. It acts like it "knows" things, and any argument proving otherwise would strongly imply some uncomfortable things about humans as well.
That doesn't make sense either. Training doesn't incentivize averageness; the models need to predict all perspectives accurately, and a middle-of-the-road persona doesn't do that.