Hacker News

What's the point of posting AI-generated babble on Stack Overflow without checking it? If I wanted that kind of potentially useful answer, I could have just asked ChatGPT myself.

But if the user has the necessary expertise to make sure that what ChatGPT generated is actually correct before posting it, is there really an issue? It would save them some time and allow more questions to be answered.



Farming Internet points.


And? If the answers are getting upvotes then they’re by definition helpful. What’s the problem?


If the answers are getting upvotes, then in practice they merely look reasonable to clueless newbies who copy-paste them. Or they are "funny". This nonsense post received 66 upvotes and 3 downvotes before it was removed 11 days ago: <https://web.archive.org/web/20220410125443/https://stackover...>. Getting an upvote earns you 10 points; getting a downvote loses you 2 points. You need to have earned 15 points to cast an upvote; you need 125 points to cast a downvote, and each one costs you 1 point. Do the math. Upvotes aren’t much more meaningful there than on Hacker News.
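Doing that math with the vote values above (a back-of-the-envelope sketch, not an official Stack Overflow calculation):

```python
# Reputation arithmetic for the linked answer, using the values quoted above:
# an upvote earns the author 10 points, a downvote loses them 2.
UP_REP = 10
DOWN_REP = -2

upvotes, downvotes = 66, 3
net_rep = upvotes * UP_REP + downvotes * DOWN_REP
print(net_rep)  # 654 points for the author before the answer was deleted
```

So even a joke answer that eventually gets removed can net its author a few hundred points along the way.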


So your theory is that people go around upvoting posts based solely on them looking somewhat correct? I’ve never upvoted anything that didn’t help me solve a problem. I don’t imagine random upvotes are very common.

Also, to your point about humor, in my experience GPTs are very bad at it. If a post is funny it’s most likely not AI generated. Your expectation that all talking meat should maintain a consistently somber decorum while online is, needless to say, unrealistic.


I am not claiming GPT is any good at humor. I am claiming this is the sort of superficial quality that gets you upvotes on Stack Overflow. GPT is pretty good at superficial things.

> Your expectation that all talking meat should maintain a consistently somber decorum while online is, needless to say, unrealistic.

Everywhere on the Internet, probably not. In Stack Overflow answers, it wouldn’t be half bad. It’s what they are for. But I wouldn’t even go that far: for example, another answer’s joke that "That's a very complicated operator, so even ISO/IEC JTC1 (Joint Technical Committee 1) placed its description in two different parts of the C++ Standard." is fine in my book, as that answer is otherwise pretty informative. Unlike the one I linked to before, which is just a confusing mess of random-access humor (superfluous xkcd: <https://xkcd.com/1210/>).


I’d say humor gets upvotes precisely because it’s not superficial. Humor arises from presenting ideas in a simple but still surprising new way. It takes insight into both the subject and the audience. Humor is practically the antithesis of dry AI content.


This is what the now-deleted answer said:

> For larger numbers, C++20 introduces some more advanced looping features. First to catch i we can build an inverse loop-de-loop and deflect it onto the std::ostream. However, the speed of i is implementation-defined, so we can use the new C++20 speed operator <<i<< to speed it up. We must also catch it by building wall, if we don't, i leaves the scope and de referencing it causes undefined behavior. To specify the separator, we can use:

Do you feel informed by this? Do you think a newbie would be? What kind of enlightening insight flows from this? This is about as funny as the output of a Markov chain: extremely hilarious… for about 15 minutes, after which it just becomes boring.


It was probably upvoted for the “inverse loop-de-loop”. That’s a great line.


So you admit it was for superficial reasons. Thank you for conceding my point.


I concede nothing. That clever play on words is comfortably outside the output distribution of a modern GPT. When the relationship between tokens can perplex a 175B-parameter model, it’s no longer superficial.




