Subversion and lying are human behaviours projected onto erroneous AI output; the AI simply produces errors, with no intention to lie or subvert.
Unfortunately, casually throwing around terms like prediction, reasoning, and hallucination only confuses matters, because their meanings in everyday language are not the same as their meanings in the context of AI output.