I'm sceptical of the premise. ChatGPT doesn't watermark its answers, and there's no reliable way to detect what is "OpenAI garbage" and what is not. One of the comments says: "I detect such answers by the fact that they simply make no sense, although they seem well-written." That sounds like a case of survivorship bias: you only notice the bad ChatGPT answers. Would the commenter be able to tell a good ChatGPT answer from a human-written one?
A separate question: if quality is the goal, why is there still so much crap in SO's questions and answers? There are plenty of low-effort and incorrect answers written by real people that aren't penalised in any way.
You can ask all of these questions while still empowering moderators to use the tools at their discretion. Refusing to allow moderation while offering no alternative is the worst of all options.