
> I think it boils down to a simple fact that trying to police user-generated content is always going to be an uphill battle and it doesn't necessarily reflect on the company itself.

I think it boils down to the simple fact that policing user-generated content is completely possible; it just requires identity verification, which is a very unpopular but completely effective idea. It's almost as if we've rediscovered, on the internet, the same problems that require identity in other areas of life.

I think you will also see a push for it in the years ahead. Not necessarily because of some crazy new secret scheme, but because robots will be smart enough to beat most CAPTCHAs and other techniques, and AI will be too convincing, causing websites to be overrun. Reddit is already estimated to be somewhere between 20% and 40% robots. Reddit was also caught with its pants down by a recent study in which AI bots on r/changemyview racked up ridiculous amounts of karma undetected.



I'm not convinced that will fix the problem. Even in situations where identity is well known, such as work or school, we commonly have bad actors.

It's also pretty unpopular for a good reason.

There is a chilling effect that would go along with it. Like it or not, a lot of people use these social platforms to be their true selves when, for safety reasons, they can't in real life. Unfortunately, for some people that "true self" is pretty trashy. But it's a slippery slope to put restrictions (like ID verification) on everyone just because of a few bad actors.

Granted, I'm sure there's some way we could do that while maintaining moderate privacy, but it's technologically challenging, and I'm not alone in wanting tech companies to have less of my personal information, not more.
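
For what it's worth, the sketch people usually point to here is blind signatures, the idea behind anonymous-token schemes in the spirit of Privacy Pass: an issuer verifies your ID once and signs a blinded token, and a site can later check the token without anyone being able to link it back to you. Here's a toy Python illustration with insecure textbook-RSA parameters (the token, key sizes, and variable names are all made up for the example, not any real system):

    import hashlib
    import secrets
    from math import gcd

    # Toy textbook-RSA blind signature, purely illustrative: tiny primes, no
    # padding, no real security. The point is the flow: the issuer verifies
    # identity and signs a *blinded* value, so it can't later link the
    # unblinded token a site sees back to the user it verified.

    # --- Issuer keypair (a real deployment would use a proper RSA key) ---
    p, q = 61, 53
    n = p * q                            # public modulus
    e = 17                               # public exponent
    d = pow(e, -1, (p - 1) * (q - 1))    # private exponent

    def h(msg: bytes) -> int:
        return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

    # --- User: after passing ID verification once, blind a random token ---
    token = secrets.token_bytes(16)
    m = h(token)
    r = secrets.randbelow(n - 2) + 2
    while gcd(r, n) != 1:                # blinding factor must be invertible
        r = secrets.randbelow(n - 2) + 2
    blinded = (m * pow(r, e, n)) % n     # the issuer only ever sees this

    # --- Issuer: signs the blinded value, learns nothing about `token` ---
    blind_sig = pow(blinded, d, n)

    # --- User: unblind to get an ordinary signature on the token ---
    sig = (blind_sig * pow(r, -1, n)) % n

    # --- Any site: check the token is issuer-approved, with no identity attached ---
    assert pow(sig, e, n) == m
    print("token accepted; the site never learns who the issuer verified")

A real deployment would use a proper blind-signature or anonymous-credential construction, rate-limit how many tokens one verified identity can mint, and deal with revocation, which is roughly where the "technologically challenging" part lives.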



