I sometimes give LLMs random "easy" questions. My assessment is still that they all need the fine print: "bla bla can be incorrect."
You should either already know the answer or have a way to verify it. If neither, the matter should be inconsequential, like simple childlike curiosity. For example, I wonder how many moons Jupiter has... It could be 58, it could be 85, but either answer won't alter anything I do today.
I suspect some people (who need to read the full report) dump thousand-page reports into an LLM, read the first ten words of the response, and pretend they know what the report says. That is scary.
Fortunately, as devs, this is our main loop: write code, test, debug. And it's why people who fear AI-generated code making its way into production and causing errors make me laugh. Are you not testing your code? Or even debugging it? Like, what process are you using that prevents bugs from happening? Guess what? It's the exact same process with AI-generated code.
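To make that concrete, here is a minimal sketch. The `slugify` function and its tests are hypothetical, standing in for any code an LLM might hand you; the point is that the test suite neither knows nor cares who wrote the implementation:

```python
import re
import unittest


def slugify(title: str) -> str:
    """Turn a title into a URL slug. Imagine an LLM wrote this."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")


class TestSlugify(unittest.TestCase):
    """The same gate applies whether a human or an LLM wrote slugify."""

    def test_basic(self):
        self.assertEqual(slugify("Hello, World!"), "hello-world")

    def test_collapses_separators(self):
        # Runs of punctuation and whitespace become a single dash.
        self.assertEqual(slugify("a  --  b"), "a-b")

    def test_empty_input(self):
        self.assertEqual(slugify(""), "")


if __name__ == "__main__":
    unittest.main()
```

If the LLM's version fails `test_collapses_separators`, you catch it exactly the way you'd catch your own bug: the test goes red, you debug, you fix.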