We need to fix this. Things like this happen whenever someone feels they are in a position of power. That power can come from anonymity (e.g., the internet), from a position of authority (a boss), or sometimes simply from belonging to a dominant in-group facing a much smaller out-group.
What's happening here seems like a mix of the power of anonymity on the internet and the fact that women are underrepresented in game development. The former lets people harass, taunt, and threaten, often with very few consequences, while the latter leads to favoritism toward other members of the in-group and derogation of the out-group.
So what can we do? Unfortunately, the issues are larger than I can fully grasp. But since we are on Hacker News, I'll offer a few technological ideas, sketched in a few minutes, that may help:
1. We already have spam filters for email -- what if those filters also watched for hate words? If a message crossed a certain threshold, it would not be shown with the main article. We could let readers mark emails/comments as "hateful"; once a comment accumulated enough flags, it would be hashed and added to a shared repository that future comments could be checked against. People would try to game it the way they game spam filters (I H.A.T.E Y.O.U., etc.), but with enough training I feel a good deal could be filtered out.
2. HR departments should be moved to unbiased third-party services. I suspect many employees are afraid to report discrimination or harassment because their HR coordinator often knows the people involved. With a neutral third party, you could report pseudo-anonymously (the service knows who reported, but doesn't tell the company until it has confirmed the complaint or needs to follow up) without worrying that whoever handles the report is biased toward anyone involved.
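The hash-and-threshold scheme in point 1 could be sketched roughly as follows. This is a toy illustration, not a real moderation system: the `normalize` step, the `FLAG_THRESHOLD` value, and the in-memory `repository` are all my own invented stand-ins for whatever a real service would use.

```python
import hashlib
import re

FLAG_THRESHOLD = 3  # hypothetical: flags needed before a comment enters the repository

reports = {}        # comment hash -> number of "hateful" flags so far
repository = set()  # hashes of comments confirmed hateful by the community

def normalize(text):
    """Strip punctuation/spacing tricks like 'I H.A.T.E Y.O.U' before hashing."""
    return re.sub(r"[^a-z0-9]", "", text.lower())

def comment_hash(text):
    """Hash the normalized text so trivially disguised copies collide."""
    return hashlib.sha256(normalize(text).encode()).hexdigest()

def flag(text):
    """Record one 'hateful' flag; promote to the shared repository past the threshold."""
    h = comment_hash(text)
    reports[h] = reports.get(h, 0) + 1
    if reports[h] >= FLAG_THRESHOLD:
        repository.add(h)

def is_blocked(text):
    """Check a new comment against the shared repository."""
    return comment_hash(text) in repository
```

Because the hash is taken over normalized text, `"I hate you"` and `"I H.A.T.E Y.O.U"` collide, so the simplest evasion trick in point 1 is already caught; any rewording, of course, produces a fresh hash, which is why the filtering layer would still be needed.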
There needs to be far more thought put into fixing this than the few minutes I spent above, but these might be a good first step.
EDIT: modified for brevity's sake. (I'll replace with the original again if people want to read it)
I'd argue it's the opposite. People send threats because they feel they have no power and no other recourse. It's a last-ditch effort to change something. Upvoted you, btw, because you don't deserve to be downvoted.
what if those filters also watched for hate words?
I'd love to know what a "hate word" entails. It sounds like it would gratuitously sweep up all negative opinions, including ones that aren't hurtful. Especially since you give the word "hate" itself as an example, which is ludicrous.
Again, I'm not saying it's a perfect solution. Perhaps a naive Bayes classifier, given enough examples, could distinguish negative opinions from hate speech. Maybe it couldn't -- it might turn out to be as hard as building a sarcasm detector[1]. I'm just hoping to get the ball rolling in terms of discussion: even if I'm wrong, we're at least talking about ways to fix the problem.
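For the curious, a naive Bayes text classifier is small enough to sketch from scratch. This is a minimal multinomial version with add-one smoothing; the four training examples and the "hate"/"negative" labels are invented toy data purely to show the mechanics, and a real classifier would need far more (and far more carefully labeled) examples.

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Minimal multinomial naive Bayes over whitespace-split tokens."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> token frequency counts
        self.label_counts = Counter()            # label -> number of training docs
        self.vocab = set()

    def train(self, text, label):
        tokens = text.lower().split()
        self.label_counts[label] += 1
        self.word_counts[label].update(tokens)
        self.vocab.update(tokens)

    def classify(self, text):
        tokens = text.lower().split()
        total_docs = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.label_counts:
            # log prior + log likelihoods with add-one (Laplace) smoothing
            score = math.log(self.label_counts[label] / total_docs)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for t in tokens:
                score += math.log((self.word_counts[label][t] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Toy, invented training data -- just enough to exercise the code.
nb = NaiveBayes()
nb.train("you should die you worthless idiot", "hate")
nb.train("i will find you and hurt you", "hate")
nb.train("this game has terrible controls", "negative")
nb.train("i really disliked the story pacing", "negative")

print(nb.classify("the controls are terrible"))  # "negative" on this toy data
```

Even this toy version separates the two classes on vocabulary overlap alone, which is both the appeal and the weakness: it has no notion of context, so sarcasm, quoting, and creative misspellings would all fool it.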
How would you feel if a government agency or ISP or some other adversary used such technology to censor all your online communications -- so you couldn't ever contact anyone, through any protocol -- because it considered you to be undesirable/objectionable/dangerous for its own reasons?