In my experience, ChatGPT is more useful than StackOverflow in 9 out of 10 cases because it can generate tailor-made code for your exact use case. Sure, it might not be "correct" in the ultra-pedantic StackOverflow sense of "100% generalizably correct", but it's usually 90+% correct for your use case and only needs slight alteration to be fully correct for it.
Such generated code is not a "threat" or "misinformation" or whatever the author's point is. This is going to be a productivity multiplier, so that programmers can do more, faster, than ever before!
It's also more willing to confidently invent wrong answers than StackOverflow. Try asking it to generate a Rust function that uses the `ripgrep` crate to search text with regular expressions. The ripgrep crate doesn't expose an interface to do that (as far as I can tell), but ChatGPT is happy to generate some plausible-sounding but totally incorrect code.
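For what it's worth, ripgrep itself isn't a library, but its internals are published as separate crates (`grep-regex`, `grep-searcher`), so the correct version of that task looks roughly like this. This is a sketch from memory of those crates' APIs, so double-check it against the docs:

```rust
use grep_regex::RegexMatcher;
use grep_searcher::{sinks::UTF8, Searcher};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // ripgrep's regex matching, exposed via the grep-regex crate.
    let matcher = RegexMatcher::new(r"fn \w+")?;
    let haystack = b"fn main() {}\nlet x = 1;\nfn helper() {}\n";

    // The UTF8 sink calls the closure once per matching line.
    let mut hits: Vec<(u64, String)> = Vec::new();
    Searcher::new().search_slice(
        &matcher,
        haystack,
        UTF8(|line_num, line| {
            hits.push((line_num, line.trim_end().to_string()));
            Ok(true) // keep searching
        }),
    )?;

    for (num, line) in &hits {
        println!("{num}: {line}");
    }
    Ok(())
}
```

In my attempts, ChatGPT instead invents a `ripgrep::search`-style API that simply doesn't exist.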
I'm reasonably used to investigating a handful of Stack Overflow answers before finding the one that actually works. This doesn't seem that much different.
Sure, but for an expert user, it doesn't take long to figure out when you have a wrong answer. And in a case like this, the wrong answer came from a question that doesn't make sense in the first place - ripgrep is explicitly described as not being a library.
This kind of objection seems to me to be in the same general category as saying that you can use a programming language to write incorrect code, which is not a meaningful objection to programming languages.
Yeah, but it really defeats the purpose if you need to be an expert in order to tell whether it's giving you something dead wrong and sending you on a wild goose chase.
> I'm quite happy to have a tool that can assist me rather than replace me.
Another way to look at it might be that you assist the machine, rather than the machine assists you.
* Human engineers a prompt = preprocesses the messy reality.
* AI does the "creative core" of the task, comes up with a solution.
* Human post-processes the AI output back to reality, validating and actuating it.
Basically humans as a thin API layer. Who's helping who?
Right now, the prompt engineering is ultimately motivated by the human desire for reproduction and survival. But that's just incidental - the loop may be closed.
That’s not what it looks like at the moment, although it may in future.
The AI isn’t yet coming close to doing the creative core of technical tasks - in fact that’s pretty much precisely what it can’t do, yet.
Instead, it’s acting as a powerful interface to a large knowledge store - a bit like a search engine on steroids, but one that can usefully tailor the answers to queries, rather than just copying what someone else once wrote on the subject.
Besides, as long as AIs aren’t conscious, there’s not really any question about who’s helping who. If that changes, then it’ll become much like any paid service exchange between humans, including e.g. ordering a hamburger. Both parties are supposed to benefit, although there’s often an imbalance.
Right. People need to realise that ChatGPT shouldn't be used in the same way as Google or asking a question on SO. It doesn't have the same capabilities and drawbacks.
In particular, because it doesn't understand the big picture, it won't point out that you're asking the wrong question like an expert human would. (Does it ever?)
> it won't point out that you're asking the wrong question like an expert human would. (Does it ever?)
I don't think it will - its design doesn't really allow for that. It's generating responses to the prompts it's given based on its trained knowledge, but it doesn't have (enough of?) the kind of meta-reasoning required to step back, gain an understanding of the context of the question, and propose a different solution.
The interesting thing is that on SO, the good stuff gets upvoted, while the bad/incorrect stuff gets pushed down or out completely. And it's not rare to see poor/wrong code snippets there. Thanks to this mechanism, it actually shouldn't be harmful to have auto-generated responses on the site, because they get reviewed and curated by humans. Isn't the combination of AI legwork with human review the best thing we have so far?
StackOverflow and Wikipedia both depend on a particular dynamic. It's costly to come up with nonsense that looks right enough to pass, and it's easy to hit revert or downvote that content if it's fishy.
AI generation of nonsense flips that - it's easy and limitlessly scalable to come up with "rightish" stuff, while cleanup still takes human effort that scales with the number of engaged users.
It works really well if slapping together something vaguely close to correct is good enough. This covers a significant fraction of all programming work.
Starting from subtly incorrect code makes it much harder to come to the correct solution.
To use a classic example, the `A = (B + C) / 2` midpoint line that fails from integer overflow (surfacing as an out-of-bounds index) is quite literally worse than useless, because you basically need to understand why it's wrong before you notice it's wrong. The eyes just slide over the common idiom, thinking in terms of "average" rather than what the code actually does.
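For anyone who hasn't seen it, here's a minimal Rust sketch of that bug and the usual fix - the variable names and the binary-search framing are my own:

```rust
// The idiomatic-looking midpoint everyone's eyes slide over:
fn mid_buggy(low: usize, high: usize) -> usize {
    (low + high) / 2 // the sum can overflow before the division happens
}

// The standard fix: compute the offset first, so no intermediate
// value ever exceeds `high`.
fn mid_safe(low: usize, high: usize) -> usize {
    low + (high - low) / 2
}

fn main() {
    // With small inputs both versions agree, which is why reviews miss it:
    assert_eq!(mid_buggy(2, 10), mid_safe(2, 10));

    // Near the top of the range, the naive sum overflows:
    let (low, high) = (usize::MAX - 2, usize::MAX);
    // mid_buggy(low, high) panics in debug builds and silently wraps in release.
    assert_eq!(mid_safe(low, high), usize::MAX - 1);
}
```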
Code reviews most easily reveal a very different kind of error. "Why does this code look more complicated than it should be?" is a great hint that something is fishy. However, ChatGPT is basically engineered to pass the smell test, not to work.
Yeah, I'm surprised how many people are of the opinion that code reviews are easy. Errors like the one you mention are especially hard to catch unless you know what you're looking for.