
My hypothesis is that including supporting information in the LLM's prompt shifts the task from open-ended text generation, which is very hallucination prone, to something closer to summarization or reformulation with some reasoning, which is much less likely to hallucinate.

That has been my personal experience in general with ChatGPT as well as LLaMA 1 and 2.
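
For concreteness, here is a minimal Python sketch of what I mean by "including information in the prompt": the supporting passage is pasted into the prompt and the instruction restricts the model to it. The helper name and example text are just illustrative, not any particular library's API; you'd pass the resulting string to whatever model client you use.

    # Ground the model by putting the supporting passage directly in the prompt,
    # so the task is closer to summarization/reformulation than free generation.
    def build_grounded_prompt(question: str, context_passages: list[str]) -> str:
        """Prepend known-good text so the model answers from it."""
        context = "\n\n".join(f"- {p}" for p in context_passages)
        return (
            "Answer the question using only the information below. "
            "If the answer is not present, say so.\n\n"
            f"Context:\n{context}\n\n"
            f"Question: {question}\nAnswer:"
        )

    if __name__ == "__main__":
        prompt = build_grounded_prompt(
            "When was the city library founded?",
            ["The city library opened to the public in 1923."],
        )
        print(prompt)  # feed this string to the LLM client of your choice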



A friend and colleague of mine just tried this and the first results are quite promising.




