I really don't see why Facebook is blamed for this. That ethnic tension was present way before Facebook even existed. It was kept in check by an authoritarian military dictatorship and then bubbled up in the transition to more democracy, which really shouldn't have been unexpected. Facebook was just there at the wrong time, and happened to be the way people were communicating at that time; Aung San Suu Kyi has also gotten a lot of criticism for her role in this, even though she has been mostly powerless to do anything about it either.
All Facebook can do is stop the propagation on their network. Even with China-level censorship, they would probably need serious manual effort, or even a complete network shutdown, to stop being a medium. And the same problems will still occur through other channels; maybe the hate won't move as fast, but it will still move fast enough to cause plenty of damage.
Facebook is riding in on its royal horses trying to connect (read: own the primary means of communication) the world like all the classically imperialist westerners before them.
The first thing they teach you in international business and/or marketing is that cultural considerations will destroy you if you do not devote effort to addressing them. If, in building a hyperlocal product (like facebook), you do not take the time to understand your local markets, then they will move in ways that you don't understand and you may not be able to control. Usually at your peril.
edit: as an example, an international trade professor told us a story about how one of his expat factory managers had an affair with a local girl in a small town in a strongly Buddhist East Asian country. When word of this got out, the woman's family raised royal hell over the factory and its operations, and the situation threatened to run the company out of town. The professor said the manager was given a choice: either marry the girl to assuage the locals or leave. (He chose to leave.)
Wonder how it would have gone over if Facebook had just said from the outset, “We’re not entering Asian markets yet because they’re too culturally different than us.” Bet people would have really liked that.
That does amusingly sound very much like the stereotypical Japanese corporation, with its annoyingly common practice of ignoring viable overseas markets and demands out of a misguided sense of exceptionalism.
Isn't it their products? Why should they sell them to markets they don't want to sell them to? Because of those markets' misguided sense of entitlement?
you're playing games with words. The various East India companies were imperialist because they dominated the countries they "traded" with. It was more of a militant occupation than an act of mutually beneficial exchange.
But those were different times, when companies could get away with militant operations. To say nothing of strike-breakers, African rubber plantations, or what-have-you. Most companies we know of (esp. American) can't get away with those sorts of things now, at least not in the public eye.
Soft power is the new militant occupation. Facebook is taking cues from Pablo Escobar (minus the violence, we hope).
so in this new world where "soft power" (which I assume means any power imbalance) across borders is imperialism, how can a company be international and not be "imperialist"? Because if they can't then that's the kind of "imperialism" I couldn't care less about.
Sounds like the communist usage of imperialism, where anything not part of the nation, them, or their fellow travelers is imperialist, even if it is better for the people and noncoercive, merely because it disrupts the native practice. Even if it is, say, actual honest non-gouging medical care displacing hedge practitioners who were themselves exploitative. While outsiders looking to make money have a long, unfortunate history of doing bad things, it is absurd to call them all imperialist, or even all bad.
Brings to mind a related observation that the USSR and most 20th century nominal communisms were fascism in communism's skin in practice, but that is highly contentious at least at the two extreme ends of sentiment on the subject.
> Brings to mind a related observation that the USSR and most 20th century nominal communisms were fascism in communism's skin in practice
They were totalitarian, a property shared with fascism. Also, China and the Soviet Union both claimed to be socialist revolutions on the way to communism, not actual communists, and along the way the dictatorship of the proletariat/vanguard party became "corrupted". Real communism (which is inconsistently defined between different theories anyway) does not and never has existed, and seems like it won't be possible before we reach post-scarcity anyway.
Both Soviet (and Chinese) parties in power were (and are) nominally "Communist", not "Socialist".
My favorite example of communism implemented in real life is a hermit / monastic retreat, a skete. People devote themselves to voluntarily living an ascetic life, aspiring to the absolute, caring about the common good, and eschewing things like money from daily life.
From the number of hermits in a society you can estimate how popular this mindset might be, and how realistic it is to involve an entire population in it. Post-scarcity won't solve it, because it's not going to make people automatically care about the common good as a way of self-actualization. This is why the idealist Communist leaders of early Soviets spoke so much about "raising a new man", basically hoping to change the nature of humans.
> Both Soviet (and Chinese) parties in power were (and are) nominally "Communist", not "Socialist".
I have to disagree with this, especially given that the CPC claims that China is in the "primary stage" of socialism: https://en.wikipedia.org/wiki/Primary_stage_of_socialism. Communism in its original Marxist form is classless and stateless, which the PRC and the USSR certainly are/were not. Both countries were/are run by communist parties, but said communists recognise/d that a conversion from an agrarian, nominally capitalist society to a communist one would take a long time and require transition through a socialist state, as Marx originally laid out.
> My favorite example of communism implemented in real life is a hermit / monastic retreat, a skete.
The benefit to small communities is that it is easier to police for attitude and morality. Even more so if it's a religious community, though I'm sure Marx is rolling in his grave that you would call a religious community communist given communism's materialist philosophy.
> From the number of hermits in a society you can estimate how popular this mindset might be, and how realistic it is to involve an entire population in it.
That's putting the cart before the horse somewhat. I know of many people who retreat from society in a hermit-like fashion for reasons other than asceticism or self-actualisation.
> Post-scarcity won't solve it, because it's not going to make people automatically care about the common good as a way of self-actualization.
But it will remove the primary need for an economy: effective distribution of scarce resources. It would also reduce the need for political hierarchy because property rights would become largely unnecessary to enforce.
> This is why the idealist Communist leaders of early Soviets spoke so much about "raising a new man", basically hoping to change the nature of humans.
They spoke about a lot of things that are purely based in ideology and do not gel with how the world works. Lysenkoism and the Four Pests Campaign strike me as good examples.
> The benefit to small communities is that it is easier to police for attitude and morality. Even more so if it's a religious community, though I'm sure Marx is rolling in his grave that you would call a religious community communist given communism's materialist philosophy.
See my comment above on how monasteries came from epicurean communes.
See the title or Marx's PhD thesis: The Difference Between the Democritean and Epicurean Philosophy of Nature
Not so far-fetched.
It's true that communism as envisioned by Karl Marx had a materialistic sense, but so did Epicurus's philosophy.
Communism is Karl Marx's scaling attempt at Epicurean communes. Not a very good idea IMO.
that's an interesting piece of philosophical history I didn't know about, thanks for sharing that. I always thought there were strong parallels between ancient Greek philosophy and the primary strands of modern politics.
> Communism is Karl Marx's scaling attempt at Epicurean communes. Not a very good idea IMO.
I tend to think that the idea wasn't bad, but people trying to force it to occur faster than the natural societal evolution end up killing a lot of people. The debates we're seeing now about how to deal with automation of large swathes of the job market are similar to Marx's theory that socialism would automate production to the point of creating a post-scarcity society. Of course, that doesn't account for the environment, but then global warming wasn't on the menu in the mid 1800s.
The ironic thing is that it's capitalism that is doing the automating.
> I tend to think that the idea wasn't bad, but people trying to force it to occur faster than the natural societal evolution end up killing a lot of people.
Exactly, although I don't believe we can predict to what point society would naturally evolve. If it did, the probable transition would be from scattered communes to a commune federation, the drawback being that it would be difficult to maintain political cohesion (not that cohesion itself is bad, but its absence is easily exploited by neighboring states).
> The ironic thing is that it's capitalism that is doing the automating.
Post scarcity wouldn't matter anyway. You'd still have the problems of who watches the watchers and the idea that some of us are more equal than others... There is no Star Trek utopia.
> You'd still have the problems of who watches the watchers
Could you expand on what you mean by this? If you remove the concept of material wealth (as a distinction between individuals), the cause for conflict is greatly reduced.
> the idea that some of us are more equal than others
I don't see how this would continue to cause conflicts once people's material needs and wants are all met. People tend to hate on exceptional people because they have access to more things.
> There is no Star Trek utopia.
Certainly not now, but it's a good goal to aim for.
Actually it's very apt: whether or not the intention was imperialist, the result inevitably is, since a single company from a single culture/country controls an important communications channel globally.
Besides, the intention is very much imperialist - to conquer the global markets. So much so that "world domination" is a standard part of business parlance.
that's a pretty loose definition of imperialism. Conquering markets is not coercive, it's not an act of one group dominating another (beyond the realm of permissible competition that is the whole point of markets existing). Every company wants to reach the top of its market so at what point does the average company become "imperialist"? When it starts trading overseas?
It weakens the concept of imperialism to the point of almost meaninglessness - it essentially becomes the same concept as globalisation. The nasty element of old-school mercantilist imperialism was its subjugation of the native people of other countries. If you remove the harmful element from the equation but continue to use the word for effect, you're playing games with words to cast something as bad without due cause.
If you want to suggest that, say, oil companies moving operations into African nations (I forget exactly which ones, but read about it a few years ago), hiring military contractors, and reaping another country's natural resources is imperialism, then yes I totally agree. Putting something on the internet and letting people from other nations use it? Not even slightly.
>that's a pretty loose definition of imperialism. Conquering markets is not coercive, it's not an act of one group dominating another (beyond the realm of permissible competition that is the whole point of markets existing).
Well, conquering markets also imposes one group's values (e.g. FB censorship based on what Americans deem offensive) and national interests (e.g. surveillance) on others.
>It weakens the concept of imperialism to the point of almost meaninglessness - it essentially becomes the same concept as globalisation.
Well, globalization is a form of imperialism. It's not equal access to the global markets and influence for all countries: it's a few countries dominating.
> Well, conquering markets also imposes one group's values (e.g. FB censorship based on what Americans deem offensive) and national interests (e.g. surveillance) on others.
That's a great pro-free-speech and privacy argument. Certainly interacting within the framework of a company with different values will mean you have to abide by those values. But the thing is - you aren't forced to. The various peoples of colonies usually didn't have a choice to go back to living how they did before their foreign occupiers landed.
> Well, globalization is a form of imperialism. It's not equal access to the global markets and influence for all countries: it's a few countries dominating.
As far as I'm aware, the living standard of all nations is going up. Nigerians have access to a safer banking system because they can use modern phones. China is dominating the world's manufacturing. Sure, the US and Europe have an outsized impact on the market - do you think that is more the case now than in the 1800s? I'd much rather people in developing nations have access to the international markets, because trade is mutually beneficial. Big companies make money, their customers also gain something. That's not imperialist in any sense, that's just free exchange of goods and services. Or perhaps we should all stick to trading within our own borders? Because that sounds like protectionist nationalism, which certainly isn't going to give people living in poorer nations access to goods and services they may benefit from.
If you believe the highest possible good is stopping hate speech cold, no matter the means, then any approach is viable. If it happens to set a precedent you can use to do the same thing elsewhere, well, that'd be pretty convenient!
Would Facebook being used in WWII Germany to discourage violence against the state be right or wrong?
The idea that everything is fine as long as it goes along with one's worldview is dangerous. Using powerful tools to manipulate society is something that needs to be done carefully and more often than not, not at all.
Companies acting as carriers for our social exchanges and turning that into profit is probably more evil than good. I think I'd rather be (back) in a world where, if you published something, you did it yourself, and the only people controlling it were doing so because you broke a law (in a society where freedom of expression is valued).
> I really don't see why Facebook is blamed for this. That ethnic tension was present way before Facebook even existed.
I think it's because they were given ample, clear warnings that their platform was being used to incite ethnic hatred and genocide, but chose to do nothing for years. I don't think they would be getting nearly so much blame if they hadn't been so utterly negligent about the problem. They didn't even bother to translate their objectionable content reporting tools into Burmese until a few months ago, long after the persecution started.
When you are repeatedly notified by human rights groups and the local government that your essentially unmoderated social network is being used to organize lynch mobs, you should either shut it down or hire enough moderators to control the problem.
If the lynch mobs were being organized over the telephone, would it be the telephone company's responsibility to intercept the communications and stop it?
If not, what makes Facebook different? Do we want Facebook to be the arbiter of what we are allowed to say to each other, or do we want them to be a neutral carrier for communications?
The content in question isn't private messages, it's public (and shareable!) posts. We would absolutely expect, for example, the New York Times to disallow classified ads organizing lynch mobs.
Hmm, with a specific crime, like if someone wanted to print a classified ad that said "Hey everyone, let's go lynch Joe at 5:30pm this Thursday", yes I think we'd expect the NY Times to report it to the authorities. Who are the authorities in this case? The Myanmar government? Why aren't they doing anything? They obviously know the lynchings are happening. It's their responsibility to maintain order and prevent or punish crimes in their country, not Facebook's.
If it were just a general hateful statement like "I hate the Jews" that someone wanted to post in a classified ad, I don't know that the NY Times would necessarily stop that, nor that they should if we're going to have free speech.
> It's their responsibility to maintain order and prevent or punish crimes in their country, not Facebook's.
Insofar as people are doing things on Facebook's platform that they themselves disagree with, it's absolutely their responsibility to stop it. How ridiculous does this sound: "Alice says she's going to buy this knife from me to stab Bob. I don't like stabbings, but I'm going to sell it to her anyway, because it's the government's job to stop stabbings, not mine."
> nor that they should if we're going to have free speech.
It's incumbent on states to recognize freedom of speech, not private entities. The NYT deciding whether or not to publish something has zero impact on that.
Facebook isn't selling knives. The bits in Facebook's database are not killing anyone. The people doing the lynching are.
Yes, it is ultimately the state's responsibility to uphold the freedom of speech. That doesn't mean that any private actor is prohibited from providing a platform that upholds freedom of speech, only that they are not obligated to do so. You can do something even if you're not legally obligated to, you know.
> Insofar as people are doing things on Facebook's platform that they themselves disagree with, it's absolutely their responsibility to stop it.
So you mean like, if there were a candidate for US President that Facebook didn't like, then Facebook should refuse to run ads for that candidate. And if people used Facebook to talk about or promote that candidate, then Facebook should delete their posts right? I mean, if Facebook is responsible for stopping everything they disagree with, that should follow then.
> Facebook isn't selling knives. The bits in Facebook's database are not killing anyone. The people doing the lynching are.
The knife isn't killing Bob, Alice is! The point being, both the knife-seller in the analogy and Facebook in real life bear a nonzero amount of responsibility for known consequences of their actions.
We can actually make this analogy a little clearer: would you sell a knife to someone knowing they planned to stab you, because it's the government's job to prevent stabbings?
And obviously Facebook isn't prohibited from allowing these posts, or we wouldn't be having this discussion. The claim is that they have a moral obligation to take down this content, any legal obligations notwithstanding.
Regarding your Presidential campaigns analogy, it's not that Facebook (or any entity) should censor speech they disagree with. They should prevent behavior they disagree with, in a moral sense; those can be the same or different. It's not incoherent to believe that political discourse is valuable even when you disagree with the ideas, and that hate speech falls outside the bounds of acceptable discourse.
It's not like people are approaching Facebook and asking "Please may I make a post to organize a murder". Facebook doesn't learn about the content of the posts until after the fact, so the analogy with the knife seller is not really valid.
> We can actually make this analogy a little clearer: would you sell a knife to someone knowing they planned to stab you
Absolutely. This would be an excellent opportunity to sell them a fake knife. Also, now I'm aware that someone is trying to kill me, so I can do something about it. Kind of a best case scenario for me.
> They should prevent behavior they disagree with
Voting for a particular candidate would be an example of behavior that Facebook might try to prevent. I find it highly disturbing that anyone would want to assign Facebook the power of controlling our behavior. That is a power we grant to governments, and then reluctantly, and only with appropriate checks and balances. Not something we should hand out to private companies with no accountability.
The difference is that Facebook is not, and has never been, a common carrier. They are not obligated by law to serve the public.
Secondly, telephones, unlike Facebook, are designed primarily for person-to-person, not group communication. Communication dynamics are very different between the two models.
> Do we want Facebook to be the arbiter of what we are allowed to say to each other, or do we want them to be a neutral carrier for communications?
Facebook is a private entity that can do as they please. It most certainly is not a neutral carrier for communications, and to treat it as such is foolish.
Why can’t it be neutral? The data flows over the same public rights-of-way and spectrum as voice communication, and the phone companies are also private.
> If the lynch mobs were being organized over the telephone, would it be the telephone company's responsibility to intercept the communications and stop it?
If the cost and effectiveness were comparable, and the government showed no interest in stopping it, yes.
> Do we want Facebook to be the arbiter of what we are allowed to say to each other, or do we want them to be a neutral carrier for communications
If the alternative is genocide, I don't want Facebook to be a neutral carrier for communications.
I recently reported a post made in a regional language (a language for which I'm sure Facebook doesn't have any translator/interpreter) as hate speech.
I got a response from Facebook saying the post doesn't violate Facebook's community guidelines.
they only care if they can get sued for it: that's why, for example, anti-white hate speech is not actioned. People love to think it's a global conspiracy, but the reality is much simpler: they don't define hate speech, laws do. Unless they are at risk of litigation, they won't remove posts.
I disagree with this assessment. They are a business, and they are going to treat it as a business decision. If we allow the speech in question, will it get us more eyeballs, or will it drive away eyeballs? Will the speech in question threaten our advertising market? This is how they are going to make their decisions.
1. It's not a trivial problem to be able to identify what is hate speech in a regional dialect automatically
2. It sets a problematic precedent if Facebook throws hundreds of hard-to-find, hard-to-pay, and hard-to-support native-language local reviewers at every place where Facebook could be abused.
I'm no Facebook apologist; however, I recognize how hard a technical problem this is, and how it's equally a personnel-management, prioritization, and scaling problem.
Facebook risks chasing every group that makes a loud enough noise, and then being criticized for it irrespective of what they do. No win situation.
> It sets a problematic precedent if Facebook throws hundreds of hard-to-find, hard-to-pay, and hard-to-support native-language local reviewers at every place where Facebook could be abused.
In what way is that a problematic precedent? Their negligence contributed to ethnic cleansing and genocide. If a multi-billion dollar company is too cheap to hire local staff to watch its platform, it shouldn't operate a localized version for that place. They need to act responsibly. Think of it like product safety testing.
Um, wait: you don't need a localised version of Facebook to type in a local language, you just need unfiltered internet... Or do you mean FB should prohibit sign-ups from arbitrary countries?
Just yesterday I watched a Chinese woman on a flight from Beijing to Moscow struggle with the onboard entertainment system because it was only available in Russian and English, and I realized how much shared cultural context is necessary to operate even simple apps in a foreign language.
If Facebook didn't have a version localized to Burmese, it would never have gotten widespread enough in Myanmar to be a significant part of the problem.
> It's not a trivial problem to be able to identify what is hate speech in a regional dialect automatically
This sentiment, which I read repeatedly, really irritates me. Most of us program here. It's not that hard a problem! When social media execs claim it is, what they really mean is "it's a hard problem to solve for us, because we are unwilling to risk losing any percentage of account sign-ups ever, under any circumstances."
The irony is that they are so timid about their precious sign-ups that they act against their own interest. For example, YouTube won't risk a short term dip in user retention metrics to scrub garbage from their comments... even though doing so would likely greatly increase sign-ups in the long term.
> natural language processing is like one of the hardest problems that's left to crack
You don't need to "crack it" or come anywhere remotely near to a perfect solution. False positives are okay, providing you accept that some percent of your users won't tolerate the errors, and will leave your platform.
And we're talking about social networks, so there's a wealth of additional information (i.e. who you follow, previous behavior, who has reported you, account age) beyond plain text to augment this. And there's the setting of policy (e.g. how come I can post within an hour of signing up on most social networks?).
The point is, what we have now on most social networks is not geared toward long-term customer satisfaction or the public good. It's geared toward medium-term account retention at any cost - regardless of whether the accounts are bots, sock puppets, or trolls.
A combination of things. There's a wealth of information available if you're running a social network. Start by just banning text like "n-----" and "(((". Then give users a reputation, and do a PageRank-style analysis, so that if someone with a good reputation reports another user, a moderator doesn't need to be involved. Take account age into consideration: you can't post publicly the instant you get your account. Be okay with some % of users quitting because of your false positives.
It's not hard to think of ways to aggressively moderate a social network. It's just that some users (certainly the most toxic ones) will leave.
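To make that concrete, here's a minimal sketch of the kind of scoring such a system might do. Every name, weight, and threshold here is invented for illustration; a real system would tune them against labeled data and use per-language blocklists.

```python
import re

# Illustrative blocklist only; a real deployment would use per-language lists.
BLOCKLIST = re.compile(r"\(\(\(|n-----", re.IGNORECASE)

def moderation_score(post_text, account_age_days, reporter_reputations):
    """Combine a few cheap signals into one risk score (made-up weights)."""
    score = 0.0
    if BLOCKLIST.search(post_text):
        score += 1.0                 # direct blocklist hit
    if account_age_days < 7:
        score += 0.5                 # brand-new accounts are riskier
    # Reports from high-reputation users count for more (the PageRank-ish idea).
    score += 0.3 * sum(min(rep, 1.0) for rep in reporter_reputations)
    return score

def action_for(score):
    """Map a score to an action; clear-cut cases never reach a human moderator."""
    if score >= 1.5:
        return "hide_pending_review"
    if score >= 0.5:
        return "queue_for_moderator"
    return "allow"
```

The point isn't the specific numbers: it's that a blocklist hit from a week-old account, reported by a trusted user, can be hidden automatically, while borderline cases go to a human queue.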
First, because the word can be used in a positive or a negative sentence, and thus can either agree or disagree with your "censorship rule". You need a deeper NLP analysis to disambiguate.
Second, because - if online fora with this kind of ban have taught us anything - it's pretty easy to work around it by using punctuation, replacement characters, or other style variations to write the word in a way that it won't be caught by the regex.
Third, this method ends up banning legit words. There was a forum where I couldn't write "booby trap". I don't need to explain.
WRT reputation, networks that allow "unlimited registrations" are easy targets for https://en.wikipedia.org/wiki/Sybil_attack - One can use account age to mitigate this, of course, but given the number of long-lived, active fake accounts on the market (look at Fiverr marketing offers), I wouldn't expect age to be that big a help.
It's a "poor method" if you're seeking perfection. We don't need perfection.
> You need a deeper NLP analysis to disambiguate.
Ixnay. Only if you have zero tolerance for false positives.
If users really can't understand why their website won't let them type "n-----" in a public message, even in good faith... well, I just don't know what to add.
> it's pretty easy to work around it by using punctuation, replacement characters, or other style variations to write the word in a way that it won't be caught by the regex
Still an improvement. Easier to ID problem users. Not the only line of defense.
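As a sketch of what one such line of defense might look like: fold common substitutions and Unicode tricks into a canonical form before running the blocklist. The fold table here is tiny and made up; real systems use much larger confusable-character maps.

```python
import unicodedata

# Tiny illustrative fold table; production systems map far more lookalikes.
FOLD = str.maketrans({
    "0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "7": "t",
    "@": "a", "$": "s",
})

def normalize(text):
    """Reduce common character-substitution evasions before matching."""
    # NFKD strips many width/accent tricks (e.g. fullwidth letters).
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    text = text.lower().translate(FOLD)
    # Drop punctuation inserted between letters ("b.a.d" -> "bad").
    return "".join(c for c in text if c.isalnum() or c.isspace())

print(normalize("B@d w0rd"))  # -> "bad word"
```

It's obviously incomplete - determined users will find new variants - but combined with reputation and reporting it raises the cost of evasion, which is the actual goal.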
> "booby trap". I don't need to explain.
Sure. I can live with that. You realize the alternative is the Youtube comments section. I don't feel comfortable posting there ever. I can live with finding synonyms for "booby trap" etc. I might actually enjoy commenting on my favorite YouTube videos, an activity that feels utterly meaningless with the current milieu of lunatics.
> networks that allow "unlimited registrations" are easy targets for https://en.wikipedia.org/wiki/Sybil_attack
I imagine this could be handled much better than the social networks do now: time-stamped accounts, data mining, user reporting. There has been ZERO incentive to get rid of ANY accounts until the past year, because bots, sock puppets, and trolls still "help" a company's user growth, on paper.
The problem isn't that solutions are hard to think of, it's that the incentives are horrible for these companies. They don't really care about anything but growth.
Facebook isn't the state. In any event, I try not to think of problems that way. Like most things, state censorship is neither universally good nor bad. It depends on the state, and the censorship. The universe doesn't care that it's easier for humans to have hard and fast rules about things.
It's a problem that can't be reliably solved by machines yet, so they end up having to hire moderators. Which is a pretty miserable job and also hard to standardise.
They never really entered the Myanmar market in a serious way before recently: they probably did the localization work long before Myanmar had widespread internet (which was recent, I think in 2014 or 2015?), focusing mostly on some Burmese expats. It was probably considered to be a minor thing and they didn't follow up aggressively when the political situation in Myanmar changed rapidly, thinking they could just continue supporting Burmese language like they were doing for 10 years previously.
Ya, that is neglectful, but accusing them of aiding in inciting racial hatred and genocide is disingenuous.
Sadly this is just the way of the world. If Facebook completely blocked all access to users in Myanmar, there would still be equivalent horrors happening, just mobilized elsewhere.
People like to blame/target the largest entity around as though its job is to be mediator and police. That's taking the bad with the good. You see a similar story with the US Government often: groups will condemn US intervention in some places while chastising lack of intervention in others. Certainly different circumstances, but the mindset of the critics is similar.
Your perspective is an interesting distillation of a popular meme.
In its abstract form, the idea is something like "Anything I do is justified, because if I didn't do it, someone else would."
From the way I see the world, it seems completely insane but it obviously seems like common sense to a huge part of the population.
I just don't get it! How many other Facebook-sized companies, with Facebook's staff of engineers, willing to completely disregard the ethical consequences of their behavior, do you think are competing for Facebook's market?
If a company is able to censor pictures of breastfeeding mothers, or even worse, iconic historical pictures[1], on an industrial scale, they should damn well be able to moderate speech that leads to genocide and mass murder, about which they were warned multiple times by a number of organisations.
Fucking hypocrites doesn't even start to describe such behavior.
Which is why the first thing that occurs to basically anyone ethical is that it would be very hard to prevent a social media platform from being used for harm.
But being informed in 2013 that your platform is a conduit for mass killings and not doing much about it for five years unless it becomes a PR issue makes you culpable to the atrocity.
I don't think Facebook could win that one. If they purged all instances of it, they would be accused of concealing evidence of the genocide after the message got out - which it certainly would have. Like Syrian atrocity footage being taken down for policy violations. If they shut down during that period, they would be guilty of concealing and stopping pleas for help. Large-scale moderation is a mug's game that ends only in being Kafkatrapped once it gets remotely political.
There has been a big problem in India with lynch mobs being formed in Whatsapp groups, over various unsubstantiated rumours, including apparent child snatchers travelling through villages [1].
This raises an interesting cross-section of moral issues. Obviously end-to-end encryption is good for a lot of reasons. But end-to-end encryption is one of the reasons that these lynchings have been able to happen. These kinds of lynchings have been going on forever, but Whatsapp and mobile phones have allowed these rumours to spread faster than ever before, and for mobs to form much more easily.
Does the right to privacy outweigh people's right to travel without fear of being lynched?
Assuming that you agree that the government should not have the right to install mandatory CCTV cameras in private residences, where do you draw the line between that and eavesdropping on people's digital communications?
What is it about digital communications that makes it reasonable for the government to listen in? I don't see any difference between the government CCTV in your bedroom and the Whatsapp case, except that Whatsapp could be spied upon much more cheaply and efficiently (were it not encrypted).
Actually, in many places, if you suddenly gather a group of 20 angry people, there is a high chance you WILL be listened to attentively by the State for security reasons.
This is actually an interesting question, from a legal standpoint anyways. I don't think Myanmar has a safe harbor exemption like the US does, they actually have some pretty heavy handed censorship laws on the books. However, Facebook doesn't even have an office in Myanmar, they aren't officially there, and without a GFW Myanmar can't really block Facebook (nor would the people be ok with that, as Facebook has already become the internet to many).
Probably why Facebook won't be opening an office in Myanmar anytime soon, at least until they can sort these issues out and have some strong legs to stand on.
Linking this explicitly to safe harbour like you've done is important -- the knee-jerk response online is often to condemn Facebook for not policing content but to defend content hosts for not being able to filter out all instances of copyright violations.
Maybe those positions are held by different people, and maybe there's some good-faith "make an effort" middleground, and maybe the standards should be different because nobody dies when someone pirates Westworld, but they're clearly analogous.
If we do have different standards we should be able to back it up with argument, and try hard not to let how much we like the companies colour our judgement.
The middle ground is a great place to be if you want to take fire from everyone. In terms of legal and regulatory risk, it's an incredibly hazardous place to be because it has few defenders and many attackers.
Personally, I'm uncomfortable with the idea of demanding that megacorporations do censorship for us on whatever arbitrary standards they can be convinced are good.
Interesting that the project codename is “Honey Badger.” To me that connotes not caring at all — you might remember this YouTube viral video from a few years ago:
We've asked you many times to please stop breaking the guidelines, so we've banned the account. We're happy to unban accounts if you email us at hn@ycombinator.com and we believe you'll start using the site as intended.