
Why is that surprising?

There is no logical reasoning happening; it has no concept of right and wrong, let alone the ability to force a specific percentage of wrongness.



> Why is that surprising?

You tend to get responses tailored to whatever role you assign in the prompt. This is well documented. Here's a quick example from search results:

https://www.ssw.com.au/rules/give-chatgpt-a-role/

"You are wrong 80% of the time" could be misconstrued as an expected role/command, rather than a mere observation.

> let alone that it can force a specific percentage of wrongness.

Ah, I see what you're saying here. I agree. Maybe I should have said that, given the prompt, I'm surprised it doesn't give intentionally incorrect answers, full stop.



