Please don't post shallow dismissals. It ruins what this site is for.
Interesting that the message content is not what's being adjudicated by downvotes here. If a mod says it: all good. If a co-commenter says it: very bad.
I suspect most of us would like that, but it doesn't seem feasible. Detection of AI text is incredibly difficult, and false positives would be a huge stain on the user base. Can you imagine posting a thoughtful comment, then having your account banned for a false positive calling you out as an AI? I would find that quite offensive, and I don't know if anything could be done to reverse the negative effect it would have on how I view HN.
Posting 100% AI generated content should be against the rules. (Outside of exceptions where it is relevant.)
But where should the line be drawn when a user collaborated with an AI on a comment? As an English-as-a-second-language speaker, I've been using tools like Grammarly or Hemingway App for years to improve my writing. I will gladly use a GPT-based proofreader/editor browser plugin eventually, why not?
I agree, but the alternative is the end of HN plus the end of the rest of the open internet in a year or five.
When you will soon only meet bots that are trying to manipulate you or sell you something, the value for everyone goes to zero pretty quickly.
I'm not sure how this will be solved, besides most people ditching the open internet and engaging only in tiny groups of people whose mental capacities they already know.
Christ, this really is the end of the "social internet" where you could find inspiration and new perspectives isn't it?
It makes me want to revisit the Web of Trust[1], and apps like Keybase where users have a cryptographically verified social graph made up entirely of people who were verified by another human who knows them. That whole idea goes directly against anonymity, though, so maybe that will become a more pronounced way to split the internet: verifiable human identities on one side, and anonymous bots and humans on the other.
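A toy sketch of the web-of-trust idea above: each edge in the graph means "this person verified that person and signed their key", and you trust someone if a short chain of such verifications connects you to them. The names, graph, and hop limit here are all made up for illustration; real systems like PGP or Keybase back every edge with an actual cryptographic signature, which is abstracted away here.

```python
from collections import deque

# Hypothetical attestation graph: "A has verified B in person and signed
# B's key". In a real web of trust each edge would carry a signature;
# here the graph itself stands in for those proofs.
attestations = {
    "alice": {"bob", "carol"},
    "bob": {"dave"},
    "carol": {"dave", "erin"},
    "dave": set(),
    "erin": set(),
}

def trusted(me: str, target: str, max_hops: int = 3) -> bool:
    """BFS over attestations: target is trusted if a chain of
    verifications of length <= max_hops connects me to them."""
    seen, queue = {me}, deque([(me, 0)])
    while queue:
        user, hops = queue.popleft()
        if user == target:
            return True
        if hops < max_hops:
            for peer in attestations.get(user, ()):
                if peer not in seen:
                    seen.add(peer)
                    queue.append((peer, hops + 1))
    return False

print(trusted("alice", "dave"))   # True: alice -> bob -> dave
print(trusted("dave", "alice"))   # False: attestations are one-way
```

Note that trust is directional: dave never verified anyone, so nothing is reachable from him even though others vouch for him.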
This is a solution against botnets, but not against humans who use AI to enhance/write their comments for them, like the ancestral poster was accused of doing.
Might well be. It's also an opportunity to study (meta-study?) the behavior of populations under these changes. It's a lot like an A-life experiment writ large, and played out in real life.
I was under the impression that someone did the math a few years back on the US government making long-term/indefinitely-kept recordings of every phone call. Not every phone call for a calendar date, or for a city... but all of them, going forward, forever.
It was deemed expensive, but feasible given current pricing and technology. Especially when the cost would be amortized out over the next 15 or 20 years... it might even fit in a black ops slush fund budget.
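The "expensive but feasible" claim above is easy to sanity-check with a back-of-envelope calculation. Every input below is an assumption I'm making up for illustration (call volume, codec bitrate, drive pricing), not a sourced figure; the point is only that the result lands in "slush fund" territory rather than "impossible".

```python
# Back-of-envelope estimate; all constants are assumptions, not sourced.
CALL_MINUTES_PER_DAY = 5e9    # assumed total US daily call volume
CODEC_BYTES_PER_SEC = 1_000   # ~8 kbps compressed voice, roughly 1 KB/s
COST_PER_TB_USD = 15          # assumed raw hard-drive cost per terabyte

bytes_per_day = CALL_MINUTES_PER_DAY * 60 * CODEC_BYTES_PER_SEC
tb_per_year = bytes_per_day * 365 / 1e12

print(f"~{tb_per_year:,.0f} TB/year")
print(f"~${tb_per_year * COST_PER_TB_USD:,.0f}/year in raw drives")
```

With these assumptions it comes out to roughly a hundred thousand terabytes a year, and low single-digit millions of dollars in raw drive cost, i.e. well within a large agency's budget even before amortizing over 15 or 20 years.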
Maybe I misunderstand, but the technical obstacle is gone. Only legislative obstacles remain, supposing they were ever obstacles at all.