What are the specific prompts you're using? You might get those answers when you're not being specific enough (or are using models that aren't state of the art).
"Shit in, shit out" as the saying goes, but applied to conversations with LLMs where the prompts often aren't prescriptive enough.
"Shit in, shit out" as the saying goes, but applied to conversations with LLMs where the prompts often aren't prescriptive enough.