
Does more computation mean a better answer? If I ask it who was the king of England in 1850, the answer is a single name; everything else is completely useless.


You just proved yourself incorrect by picking a year when there was no king, completely invalidating "a single name; everything else is completely useless".


Makes me wonder if, when forcing it to do structured output, you should give it the option of saying "error: invalid assumptions" or something like that.
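
A minimal sketch of what that escape hatch could look like, as a hypothetical Pydantic schema (the model and field names are made up, not any particular API): making "error" a legal field lets the structured-output mode refuse a bad premise instead of inventing a king.

    # Hypothetical schema for a structured-output request. Giving the
    # model an explicit "error" field makes "invalid assumptions" a
    # legal completion instead of forcing it to fabricate a name.
    from typing import Optional
    from pydantic import BaseModel

    class MonarchAnswer(BaseModel):
        # Filled in when the question's premise holds.
        name: Optional[str] = None
        # Filled in instead when the premise is wrong, e.g.
        # "England had a queen, not a king, in 1850."
        error: Optional[str] = None

    # The JSON schema is what you'd hand to a structured-output mode.
    print(MonarchAnswer.model_json_schema())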


It's potentially a problem for follow-up questions. The whole conversation, up to a limited number of tokens, is fed back into the model to produce each new token (ad infinitum), so being terse leaves less room to find conceptual links between words, concepts, and phrases, because there are fewer of them being parsed for every new token requested. This isn't black and white, though: being terse can sometimes avoid unwanted connections being made and tangents being unnecessarily followed.
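
A rough sketch of that mechanism (the budget and the whitespace-word "token" counting are stand-ins for illustration, not how any real tokenizer works): each reply is generated from the concatenated history, truncated to a fixed budget, so every word one turn spends is a word another turn can't.

    # Rough sketch: the next reply is generated from the concatenated
    # history, truncated to a fixed token budget. The budget and the
    # word-based "token" count are stand-ins for illustration only.
    MAX_TOKENS = 4096

    def build_context(history: list[str], budget: int = MAX_TOKENS) -> str:
        kept: list[str] = []
        used = 0
        # Walk backwards so the most recent turns survive truncation.
        for turn in reversed(history):
            n = len(turn.split())
            if used + n > budget:
                break
            kept.append(turn)
            used += n
        return "\n".join(reversed(kept))

    history = [
        "user: Who ruled England in 1850?",
        "assistant: Victoria.",
        "user: What was her relationship to her predecessor?",
    ]
    print(build_context(history, budget=50))

A terse turn leaves more of the budget for everything else; a verbose one gives the model more of its own words to attend to, which is exactly the trade-off described above.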


King Victoria. Does that not benefit from a few clarifying words? Or is your whole point that "Victoria" is sufficient?


It gives better results with "chain of thought".
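
The difference is just in the prompt; here's a hypothetical pair (no real API calls, just the strings you would send):

    # Two ways to ask the same question. The second spends more tokens,
    # but each generated token can attend to the intermediate reasoning.
    terse_prompt = "Who ruled England in 1850? Answer with one name."

    cot_prompt = (
        "Who ruled England in 1850? Think step by step: first work out "
        "whether the monarch was a king or a queen, then name them, "
        "then state the final answer on its own line."
    )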


I mean in the general case. I have my instructions for brevity gated behind a key phrase, because I generally use ChatGPT as a vibe-y computation tool rather than a fact-finding tool. I don't know that I'd trust it to spit out just one fact without a justification unless I didn't actually care much about the validity of the answer.
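
In case anyone wants to copy the setup, a sketch of gating brevity behind a key phrase (the phrase and wording here are invented, not my actual instructions):

    # Hypothetical custom-instruction text plus a client-side check.
    SYSTEM_PROMPT = (
        "By default, explain your reasoning before giving an answer. "
        "If my message contains the phrase 'terse mode', reply with "
        "the bare answer and no justification."
    )

    def wants_terse(user_message: str) -> bool:
        return "terse mode" in user_message.lower()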



