The "attach the entire file" part is very critical.
I've watched a junior dev paste an error message into ChatGPT, apply its suggestions, then paste the next error message back in. They ended up applying fixes for three different kinds of bugs that didn't exist in the codebase.
---
Another cause, I think, is that they didn't try to understand any of it (neither the suggested solutions nor the problems those solutions were supposed to fix). If they had, they would have realized the solutions didn't match what they were actually seeing.
There's a big difference between using an LLM as a tool and treating it like an oracle.
This is why in-IDE LLMs like Copilot are really good.
I just had a case where I was adding stuff to two projects, both open at the same time.
I added new fields to the backend project, then switched to the front-end side, and the LLM autocomplete suggested exactly what I wanted to add there.
Similar spot-on completions happen for me every day.
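To give a concrete sketch of the kind of completion I mean (all names and types here are hypothetical, made up for illustration):

```typescript
// backend/src/models/user.ts (hypothetical backend project)
// The two fields I had just added by hand:
export interface UserDto {
  id: string;
  email: string;
  avatarUrl: string;   // new
  lastLoginAt: string; // new, ISO-8601 timestamp
}

// frontend/src/types/user.ts (hypothetical front-end project, open in the same IDE)
// With both projects in the editor, the autocomplete proposed the matching
// fields here as soon as I started typing inside the interface:
export interface User {
  id: string;
  email: string;
  avatarUrl: string;
  lastLoginAt: Date; // parsed from the DTO's ISO string
}
```

The point is that the suggestion comes from cross-file context the in-IDE model can actually see, not from guessing in a vacuum.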
I really don't understand the people who complain about "AI slop". What kind of projects are they writing?