I always find it very strange that we don't apply spam filters and recent machine learning techniques to the problem of filtering comments.
If I looked at my email unfiltered, I'd think one in three of the people sending me messages in the past day were unusually concerned about the state of my penis.
From a not-adding-anything-to-the-discussion perspective, dead baby pictures and random threats of rape or acts of violence are even less useful than those ads telling me how to work from home. From a cost-benefit perspective for the people involved, it's very easy for someone relatively unimportant to gain an outsized influence on social interactions if they can easily reach people who actually matter with harassing comments, at virtually no cost to themselves.
So why don't we use tools to shape the online discussion and feedback mechanisms, the same way we do with our email inboxes?
Perhaps I'm just too out of the loop to see it being done.
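For what it's worth, the spam-filter approach translates almost directly to comments. Below is a minimal naive Bayes sketch of the idea; the training comments and the tokenization are invented toy examples for illustration, and a real filter would need far more data and smarter preprocessing:

```python
import math
from collections import Counter

# Toy training data, invented for illustration only.
ok_comments = ["thanks for the video", "interesting point", "i disagree because reasons"]
abusive_comments = ["die in a fire", "i will kill you", "you deserve to die"]

def tokenize(text):
    return text.lower().split()

def word_counts(docs):
    return Counter(w for d in docs for w in tokenize(d))

ok_counts = word_counts(ok_comments)
bad_counts = word_counts(abusive_comments)
vocab = set(ok_counts) | set(bad_counts)

def log_likelihood(text, counts):
    # Add-one smoothing so unseen words don't zero out the probability.
    total = sum(counts.values())
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in tokenize(text))

def looks_abusive(text):
    return log_likelihood(text, bad_counts) > log_likelihood(text, ok_counts)

print(looks_abusive("die you asshole"))       # True on this toy data
print(looks_abusive("thanks for the video"))  # False on this toy data
```

This is exactly the machinery email spam filters started with, which is what makes its absence from comment systems surprising.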
Changing the incentives of a situation, such as raising the effort required to get a harassing message through the filters, isn't a "purely technical solution" any more than most economic strategies are.
I'd argue it's using technology to implement a people-oriented solution: people do less of what's harder to do, and seeing less garbage, even if you still see some, makes the place feel more pleasant.
> I'd like to think we can do better than "sorry about the rape/murder threats, we'll try to hide most of them from you"
The best we can ever do, without swerving way out into super repressive regimes, is affect the rate at which it happens, the way other people respond, and so on - not whether it happens at all.
Agreed. Applying filters hides the symptoms without treating the underlying illness, which is a society that is infested with sexism. We can treat the symptoms by filtering but at the end of the day we need strategies for making society less sexist.
> Applying filters hides the symptoms without treating the underlying illness, which is a society that is infested with sexism.
This doesn't follow from the symptoms being cited, which are that a small subset of people say incredibly offensive and inappropriate things online.
If a million people see a video, 1% of them respond, and 3% of the responders are crazy, that's 300 crazy people responding. If each crazy person sends 15 emails before finding a new thing to obsess about, that's 4,500 threatening or otherwise crazy emails in response - which certainly feels like a flood to the person on the receiving end - but it doesn't actually tell us anything about the audience at large, except that it contains roughly the same percentage of crazy people as society does.
Continuing the hypothetical: if you assume a normal person sends only 2 emails, then you have just 19,400 normal emails to 4,500 crazy ones, because crazy people are more talkative. At 1.2 emails per normal person, it's 11,640 to 4,500, which makes crazy emails over a quarter of all messages received in response - a vastly disproportionate number considering they came from 3% of respondents.
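Plugging the hypothetical rates above into a quick script (the rates are the ones assumed in this thought experiment, not real data):

```python
# Working through the hypothetical: a small, prolific minority
# can dominate an inbox out of all proportion to its size.
viewers = 1_000_000
respondents = int(viewers * 0.01)   # 1% respond -> 10,000
crazy = int(respondents * 0.03)     # 3% of those are crazy -> 300
normal = respondents - crazy        # 9,700

crazy_emails = crazy * 15           # 15 emails each -> 4,500

for per_normal in (2.0, 1.2):
    normal_emails = normal * per_normal
    share = crazy_emails / (crazy_emails + normal_emails)
    print(f"{per_normal} emails per normal person: "
          f"{normal_emails:.0f} normal vs {crazy_emails} crazy "
          f"({share:.0%} of the inbox is crazy)")
```

At 1.2 emails per normal person, 3% of respondents end up producing roughly 28% of the inbox.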
One thing that automated filters can provide is sentiment analysis and grouping of related messages, to pull out the underlying distribution of people responding (and hence to see that it's just people being people), rather than just the surface-level "We have 25% crazy emails!"
I would have to see a much more detailed analysis of the emails, responses, and distribution of them to conclude that it's a widespread problem, rather than a small crazy contingent being amplified above their importance by just posting a lot.
My apologies, I phrased things poorly. I don't want to say a majority of society is bad. I just think that even a small contingent, if loud enough, can make society seem more toxic than it needs to be.
How would you feel if a government agency or ISP or some other adversary used such technology to censor all your online communications -- so you couldn't ever contact anyone, through any protocol -- because it considered you to be undesirable/objectionable/dangerous for its own reasons?
That would be both terrible and dramatically different from what I was talking about. The context here is private websites filtering comments. If you are in a private restaurant expressing some horribly offensive opinion it is appropriate for the owner to ask you to leave. It would not be appropriate for the police to come in and gag you.
How would I feel if every website on the internet decided that I was so toxic that they banned me? It's hard to say considering that has happened to a total of 0 people ever.
You have the right to free speech, but you do not have the right to be listened to. If you are on private property the owner can ask you to leave. If I wrote a script to send you an obnoxious email every 5 minutes I imagine you would set up a filter so you would not have to read them or even be aware of their existence. You don't have the right to make other people read your (hypothetically) obnoxious comments any more than I have the right to make you read my obnoxious emails.
> It's hard to say considering that has happened to a total of 0 people ever.
It's what you're proposing to do to other people, so you have the ethical burden of considering how you'd feel if it were done to you.
So how would you feel? Would you feel that it were ethical, even if it were somehow legally justified? Would you accept it? Would you consider it a good, just, humane use of technology? Stop evading the question and answer it already.
You and I are still talking about very different things. I am not proposing banning people from the internet. I am proposing that private websites should filter out particularly obnoxious hateful and unreasonable comments.
How would I feel if a website removed my profane, unreasonable, and offensive comment? Ashamed. Considering a website has no obligation to display my comment I think it would be ethical (and of course legal) for them to not display horrible comments. I would accept it. I see nothing inhumane or unjust about a website declining to distribute my hate speech.
We are talking about the exact same thing and you keep trying to evade the question.
How would you feel if websites colluded to prevent you from communicating at all, based on what they consider to be undesirable, not based on what you consider to be undesirable? By definition, you're already negatively predisposed to what you consider to be undesirable, so your proposed scenario never comes up. Mine does.
It doesn't need to have anything to do with profanity. It could be something as simple as you being a pro-censorship piece of shit. Or something else. Who knows. It doesn't matter. What matters is them oppressing you the way you'd like to oppress others, based on criteria you don't necessarily agree with, and possibly don't even know. How would you feel about that?
Stop trying to evade the question. If you evade it one more time, I will consider you to have conceded.
"How would you feel if websites colluded to prevent you from communicating at all"? They don't have that ability; a website cannot reach through my computer screen and gag me. Since the question you are literally asking is absurd, I guessed you really meant to ask how I would feel if a website removed my comment, which I answered. Now I think you are asking how I would feel if every single website agreed to a common blacklist and put me on it. This is still quite far-fetched and drastically different from what I am talking about, but I will answer it anyway. I would feel annoyed and embarrassed.
What I am talking about (removing vile comments) is already done manually, so your notion that it "never comes up" is strange. If your right to force others to distribute rape threats is so important to you, then I doubt we will come to an agreement.
Lastly, there is no need to call me a "piece of shit". If you stoop to this level again, I will consider you to have conceded.
Yes, they do. What stops private parties from colluding? Nothing. What stops your local ISP (in most locations, in a position of local monopoly) from denying you access, even without participation from websites? Nothing. What stops them from doing this for their own reasons, which you might not necessarily agree with, and possibly don't even know? Nothing.
And the actual point that you keep evading -- what stops other parties from approving of ideas and speech that you disapprove of, and vice versa? Nothing. What stops them from oppressing you, the way you want to oppress others, in a way that enforces this diametrically opposed configuration of approval and disapproval? Nothing. What stops them from considering every idea and form of speech that you've ever perpetrated, and every idea and form of speech that you stand for, to be vile, and to do everything in their power to censor, silence, and oppress you as a result? Nothing.
You have evaded once again, and lost by default. Thanks for your time, anyway.
P.S. I didn't call you a piece of shit, the hypothetical oppressive parties in the hypothetical scenario did. You lose by default for putting words in my mouth, as well.
You said "prevent you from communicating at all". I pointed out that websites clearly and obviously do not have the power to do what you are literally suggesting, and asked what you actually meant. You ignored my response and my question. At this point I think you are being purposely obtuse. Either that, or you did not comprehend my comment.
I have already answered the question you think I am evading.
Get over yourself. You should see the things people say to me in Facebook chess, let alone playing StarCraft. It's not because I'm male or female; it's because people are assholes, but we don't need some sort of mind police to 'solve the problem'.
Where do I argue in favor of mind police? Don't put words in my mouth, thank you. I look at how attitudes towards things like race have changed in the past 50 years. That is not a result of regulation imposed on society; it is a result of changing attitudes and culture in society. IMHO the same thing ought to happen with respect to sexism: we should hope that the attitudes and culture of society as a whole begin to shift.
I would love to see an analysis, on a site-by-site basis, of what each site considers trolls and what it considers highly-rated comments. I suspect, however, that the ratings would align more closely with in-group/out-group dynamics than with content.
It seems likely, but the whole point of censoring extreme comments is to make those commenters an out-group. Comment filtering would have edge cases, just like spam and porn filtering do. Do the dangers outweigh the value of pushing people who speak like this into an out-group?
I don't know how you interpreted Torgo's comment as worrying that inappropriate/extreme comments would be treated as an out-group. He/she said that he/she suspects that human rating of comments would be based more on their (dis)agreement with the popular view (particularly on controversial topics) than on the extremeness/inappropriateness of the content. There's a massive difference between a comment that is 100% civil in its dissent, as in "I'm not convinced that you can attribute those causes to that effect, given XYZ", and "DIE YOU ASSHOLE YOU'RE WRONG I'LL KILL YOU". Torgo's suspicion is that filtering would be done based on disagreement rather than objectively offensive/uncivil content.
My point is that a comment filtering system is consciously based on the identification and exclusion of a group, which we'll call the out-group. Of course its limits will be debatable, just as the definition of spam is debatable.
If what you're excluding is political speech of a sort, then someone is going to be able to accuse the system of censorship, no matter where the line is drawn. It's inherent in the system's goals.
Personally, I think that's a point where any system is going to be rightfully challenged and examined critically, but I think it's still worth doing on a greater-good basis. Spam filtering works; it makes my inbox a more tolerable place. It's a good idea to extend it to other types of crappy speech that make other people's lives worse.
I have just noticed that in many online communities, you couldn't easily filter on extremism because, based on the community's values, expressing the wrong opinion is "trolling" while responding to that opinion with "die in a fire you fucking asshole" is a top-rated comment. I don't even think these are outliers; a lot of communities are run like this.
Also, possibly this comment would be a false positive in some places because I included extreme text as an example.
Your point is very well taken. The choice of whose community values to use is an important one. If your goal is to try and exclude some actually popular but perhaps embarrassing opinions, then you are likely to need to incorporate some of the values of other communities where this sort of speech would be filtered.
Applying r/SRS standards to r/RedPill would be an extreme example of this, and a good thought experiment. Could you model what kind of karma score a RedPill post would get on SRS? How would it change what a discussion page looked like?
China has developed a lot of technology and expends a lot of man-hours on censoring what it considers to be offensive opinions, and yet, every day, people find new and interesting ways to express those same offensive opinions.
It's as much of a lost cause as DRM. You can't silence people until and unless you're willing to kill them, and possibly not even then.