I find LLMs useful for quickly building mental models of unfamiliar topics. Instead of beating my head against the wall trying to figure out the mental model, I can beat my head against the wall on the next steps, like learning the lower-level details or the higher-level implications. Whatever is lost by not struggling through the mental model myself is easily outweighed by being able to spend that time applying myself elsewhere.
I've had some success trying to explain something to an LLM, having it correct me with its own explanation that isn't quite right either, correcting it with a revised explanation, and going round and round until I think I get it.
Sort of like the Feynman method, but with an LLM rubber duck.