I don't have any prompt customisations and am constantly amazed by the quality of responses. I use it mostly for help with Python and Django projects, and sometimes a solution it provides "smells bad" - I'll look at it, and think: "surely that can't be the best way to do it?". So I treat my interactions with ChatGPT as a conversation - if something doesn't look right, or if it seems to be going off track, I'll just ask it "Are you sure that's right? Surely there's a simpler way?". And more often than not, that will get it back on track and will give me what I need.
This is key for me as well. If I think about how I put together answers to coding questions, I'm usually looking at a couple of SO pages, maybe picking ideas from lower-down answers. Just like in a search engine, it's never the first result; it's a bit of a dig. You just have to learn how to dig in a different way. But then at that point I'm like, is this actually saving me time?
My sense is that over time, LLM-style “search” is going to get better and better at these kinds of back-and-forth conversations, until at some point in the future the people who have really been learning how to do it will outpace people who stuck with trad search. But I think that’ll be gradual.
Assistants work the other way: you say "do this task, and please ask any needed follow-up questions if the task is unclear or you get stuck." Then they go off and do it, and mostly you trust the result.