My hypothesis is that including supporting information in the LLM's prompt shifts the task from open-ended text generation, which is highly hallucination-prone, toward summarization or reformulation with some light reasoning, which is far less likely to hallucinate.
That matches my general experience with ChatGPT as well as LLaMA 1/2.
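To make the idea concrete, here is a minimal sketch of what "including information in the prompt" looks like in practice. The helper name and prompt wording are my own illustration (the actual model call is omitted, since any chat LLM would do): the point is that the instruction plus pasted context turns the question into a grounded reformulation task.

```python
def build_grounded_prompt(question: str, passages: list[str]) -> str:
    """Turn an open-ended question into a summarization/reformulation
    task by pasting supporting passages into the prompt."""
    # Number the passages so the model can refer back to them.
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using ONLY the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_grounded_prompt(
    "When was the library founded?",
    ["The library was founded in 1901.",
     "It moved to its current site in 1955."],
)
print(prompt)
```

With the answer already present in the context, the model mostly has to extract and restate it rather than generate a fact from its weights, which is where hallucination tends to creep in.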