Sure, but for an expert user, it doesn't take long to figure out when you have a wrong answer. And in a case like this, the wrong answer came about because the model was given a question that doesn't make sense in the first place - ripgrep is explicitly described as not being a library.
This kind of objection seems to me to be in the same general category as saying that you can use a programming language to write incorrect code, which is not a meaningful objection to programming languages.
Yeah, but it really defeats the purpose if you need to be an expert to recognise when it's telling you something dead wrong and sending you on a wild goose chase.
> I'm quite happy to have a tool that can assist me rather than replace me.
Another way to look at it might be that you assist the machine, rather than the machine assisting you.
* Human engineers a prompt, i.e. preprocesses the messy reality.
* AI does the "creative core" of the task and comes up with a solution.
* Human post-processes the AI output back to reality, validating and actuating it.
Basically, humans as a thin API layer. Who's helping who? (A toy sketch of that loop follows below.)
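For concreteness, here's that loop as a minimal Python sketch. It's purely illustrative: every function name is made up, and `call_model` is a stand-in for whichever LLM API you actually use.

```python
# Toy sketch of the human-AI loop described above. Nothing here is a
# real API; call_model is a stand-in for whatever LLM you use.

def engineer_prompt(messy_reality: str) -> str:
    # Human: preprocess messy reality into a well-posed question.
    return f"Given this situation, propose a solution:\n{messy_reality}"

def call_model(prompt: str) -> str:
    # AI: the "creative core" of the task - stubbed out here.
    return f"[model's proposed solution for: {prompt!r}]"

def postprocess(draft: str) -> str:
    # Human: validate the output and adapt it back to reality.
    return draft.strip()

def the_loop(messy_reality: str) -> str:
    # Humans sit on both sides of the model, as the thin API layer.
    return postprocess(call_model(engineer_prompt(messy_reality)))

print(the_loop("tests pass locally but fail on CI"))
```

Seen this way, the human contributions are the glue at the edges, while the model sits in the middle of the call chain.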
Right now, the prompt engineering is motivated (ultimately) by the human desire for reproduction and survival. But that's just incidental; the loop may eventually be closed.
That’s not what it looks like at the moment, although it may in future.
The AI isn't yet coming close to doing the creative core of technical tasks - in fact, that's precisely what it still can't do.
Instead, it’s acting as a powerful interface to a large knowledge store - a bit like a search engine on steroids, but one that can usefully tailor the answers to queries, rather than just copying what someone else once wrote on the subject.
Besides, as long as AIs aren’t conscious, there’s not really any question about who’s helping who. If that changes, then it’ll become much like any paid service exchange between humans, including e.g. ordering a hamburger. Both parties are supposed to benefit, although there’s often an imbalance.
Right. People need to realise that ChatGPT shouldn't be used the same way as Google or a question on SO. It doesn't have the same capabilities or drawbacks.
In particular, because it doesn't understand the big picture, it won't point out that you're asking the wrong question, the way an expert human would. (Does it ever?)
> it won't point out that you're asking the wrong question, the way an expert human would. (Does it ever?)
I don't think it will - its design doesn't really allow for that. It generates responses to the prompts it's given based on its trained knowledge, but it doesn't have (enough of?) the kind of meta-reasoning required to step back, understand the context of the question, and propose a different solution entirely.