> Justin Osofsky – ‘Twitter launched Vine today which lets you shoot multiple short video segments to make one single, 6-second video. As part of their NUX, you can find friends via FB. Unless anyone raises objections, we will shut down their friends API access today. We’ve prepared reactive PR, and I will let Jana know our decision.
What's really striking is how user-hostile this conversation is. Forget about whether the user wants to share their data or not - it's all about what Facebook wants to do. In this case, that's snuffing competition by denying access, in the case of Cambridge Analytica, it's sharing data for purposes of shady data "research".
Yeah, that seemed unexpectedly flippant/dismissive. But a couple things:
1. If you look at the docs, that's from Exhibit 44 which indicates that it's actually an excerpt from a messenger discussion, not email
2. Twitter had previously blocked both Instagram and Tumblr in the same way
3. Facebook had previously blocked Twitter in the same way
4. In some of the other docs here, you can see that there was much more discussion about what their policy should be around reciprocity and apps competing with facebook's features
5. The first line of that indicates that there was likely discussion/planning about this before that conversation
There's nothing that Facebook did here that any other company, in tech or otherwise, wouldn't have done.
And for the record, Facebook did not "share data with Cambridge Analytica for shady research purposes". A rogue third party developer created one of those shitty quiz apps for Facebook, and then proceeded to get users to sign up for it; several million did, which allowed said developer to harvest data thanks to the very permissive APIs that Facebook provided at the time. He then sold this data to Cambridge Analytica. Facebook has a responsibility in what happened there, but "Facebook sold data to Cambridge Analytica" is a wildly misconstrued story.
> There's nothing that Facebook did here that any other company, in tech or otherwise, wouldn't have done.
This isn't true. Lots of companies wouldn't steal users' call logs - eg, Mozilla, Signal, and plenty of boring, normal ones who make TODO list apps or whatever.
It also isn't relevant. See how that argument flies in criminal court. "Anybody else would have stolen that car."
What we see here (again) is that FB does nasty things and it's in the public interest to stop them - along with "any other company" who does the same things.
Facebook's business is car stealing. They tell you that upfront: we want your car, and if you park it in our garage we're going to take it.
All this anger over Facebook is ridiculous.[1] Now, if you want to talk about Android and Google's decision to make it more difficult not only to control but even to know what data apps (especially theirs!) will take, that's a different matter....
[1] Especially from geeks, and particularly geeks from the 1990s and earlier, when we were told that unless we promoted non-centralized publication models we'd see the very constellation of centralized, user-antagonistic, profiteering services we now have. Whenever someone says, "but why would you want to host your own e-mail/web/chat server?", my head wants to explode. It's always "why would you" (or "why would you, it'll never be as good as GMail/Facebook/Twitter/etc."); never "maybe I should promote and help work on projects that make it easier".
We are all rogue third party developers, and clearly, the priority, much like most businesses is to make a profit, and the customers/ethics come second.
I love the fact that you get bent out of shape that Facebook didn't sell it though, it's a theme I've seen with Facebook employees: "But we didn't sell it!"
I'm not sure if they are lamenting the fact they didn't sell it, but I sure as hell can tell what they give a damn about. If you want to hide under "anybody would have done it", let's take a trip down history's gravestones and figure out whether or not we should bother trying to do the right thing because it is the right thing to do.
If exfiltration of user information and data was not the explicit purpose of FB's API policies, they soundly rejected the principle of least privilege, which dates back 45 years and is no doubt incorporated into FB's own systems.
> thanks to the very permissive APIs that Facebook provided
Facebook improved this years ago and you can see the discussion surrounding this change in the released emails. These days a Facebook app can't ask for your entire friends list, instead, it only gets to see your friends that have also authorized that app. Also, user IDs now have a per-app namespace so they can't be (easily) correlated between different apps.
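The per-app ID namespacing mentioned above can be illustrated with a toy sketch. This is only an HMAC-based illustration of the concept; Facebook's actual derivation scheme isn't public, and the IDs and secrets here are made up:

```python
import hashlib
import hmac

def app_scoped_id(real_user_id: str, app_secret: bytes) -> str:
    """Derive a stable, per-app pseudonymous ID from a real user ID.

    Each app sees a different ID for the same user, so records leaked
    from two different apps can't be trivially joined on user identity.
    """
    return hmac.new(app_secret, real_user_id.encode(), hashlib.sha256).hexdigest()

# The same user gets different IDs in different apps...
id_in_app_a = app_scoped_id("1234567890", b"secret-for-app-a")
id_in_app_b = app_scoped_id("1234567890", b"secret-for-app-b")
assert id_in_app_a != id_in_app_b

# ...but a stable ID within any single app.
assert id_in_app_a == app_scoped_id("1234567890", b"secret-for-app-a")
```

The key property is that the mapping is deterministic per app but unlinkable across apps without the secrets.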
The discussion revealed in this release is pretty fascinating. For example, you can see that at some point Zuck's friends authorized 31 apps and 76% of those apps had "read_stream" access giving access to their entire newsfeed.
Through one lens this is Facebook locking down their API in an anti-competitive way, which is somewhat true, but mostly this feels like an API change making privacy improvements for users. (The Cambridge Analytica data came from an older app that was running before these changes were made...)
> Facebook improved this years ago and you can see the discussion surrounding this change in the released emails...
This is the same elision they use. My question was, in the face of almost two generations of awareness of the principle of least privilege (almost typed "lead" again!), why did they design the API so that it gave away so much information and data in the first place?
> Through one lens this is Facebook locking down their API in an anti-competitive way, which is somewhat true, but mostly this feels like an API change making privacy improvements for users. (The Cambridge Analytica data came from an older app that was running before these changes were made...)
Read the "Whitelisting" section. The only change they mention is turning off the ability to request permission to access the now-problematic data and information (let's say "D&I"). Of course, we also know that this is selectively applied. That's not "somewhat" anticompetitive, it's not necessarily different that the CA problem, and at any rate is only a marginal privacy improvement for users because there's (my estimate) no way in hell they're going to tell us who still has access to the APIs.
I don't think the Facebook API gives you access to your friends' emails... but agreed, there are still ways to correlate this (a hash of profile photos, for example?).
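A content hash is one plausible correlation channel: even with app-scoped IDs, identical profile-photo bytes yield identical digests across apps. A hypothetical sketch (the IDs and photo bytes are made up; in practice resizing or re-encoding would break an exact hash, so a perceptual hash would be needed):

```python
import hashlib

def photo_fingerprint(photo_bytes: bytes) -> str:
    """Exact content hash of a profile photo."""
    return hashlib.sha256(photo_bytes).hexdigest()

# Hypothetical: both apps are served the same avatar bytes for one user.
avatar = b"\x89PNG...identical bytes served to both apps"

record_in_app_a = {"scoped_id": "a9f3c2", "photo": photo_fingerprint(avatar)}
record_in_app_b = {"scoped_id": "77c10d", "photo": photo_fingerprint(avatar)}

# App-scoped IDs differ, but the photo hashes collide and act as a join key.
assert record_in_app_a["scoped_id"] != record_in_app_b["scoped_id"]
assert record_in_app_a["photo"] == record_in_app_b["photo"]
```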
The "permissive api" is facebook changing what words mean over time. People signed up, shared things, and then facebook changed default behaviors without communicating the change WELL.
That's not true man. Some companies were "allowed" to use/get the data even after it was shutdown and the API was created in the first place to entice the masses.
It's worth remembering that Facebook did share data in violation of their own terms of use with the Clinton campaign - in fact, that policy came about because Obama "abused" Facebook to collect contact information for friends of people who liked or followed his campaign. Despite this policy change, Facebook allowed Clinton to do the same. They claimed it was by mistake, but even after the mistake was revealed, they didn't change it or cut off Clinton. Clinton's campaign manager speculated it was because "they agreed with us", but also thought that the Trump campaign had similar access (so far, no evidence of that has emerged).
Facebook is not a good actor, any way you look at it. They are selling data to first or second parties, who are using it to damage our country.
If you own a grocery store and there's a guy on the other side of town who is cheaper, the people who come to your store because it's more convenient would love for you to be forced to just give away half your space to your competitor. Doesn't mean it's "user hostile" to refuse to do so.
But being forced to give away half your physical retail space is hardly the same thing as just letting them keep using an API that you provide explicitly for such use.
Also, more broadly: one would have quite a hard time making the case that Facebook isn’t nakedly, gleefully, and rapaciously user-hostile.
I don't agree, but it's probably not worth making the case. Facebook has billions of users. I assume you think that they want to leave, but they "can't", or that they just don't know how hostile Facebook is towards them.
I think a lot of people who hate Facebook just have a hard time believing that most people just don't care about the same things you do, or to the same degree. They're still on Facebook and Instagram and Whatsapp because they see the world differently from you.
> But being forced to give away half your physical retail space is hardly the same thing as just letting them keep using an API that you provide explicitly for such use.
In this case, Facebook was deprecating the API and declined to provide special whitelist access to a competitor.
So they just heard about Vine, and decided to deprecate the API the same day? That doesn't sound right to me. That conversation seems to indicate they just wanted to block them ASAP (same day), nothing to do with deprecation?
> So they just heard about Vine, and decided to deprecate the API the same day?
No? Where are you getting this read from? The documents clearly show them discussing it from a year prior to shutting down Vine's API access, and planning on announcing it publicly ~6 months prior.
I can't find anything from a quick google search on when the API deprecation actually took effect, but assuming the timeline from Exhibit 43 is accurate, Twitter actually had whitelisted access for over 3 months before being shut down.
Nothing says evil more than preparing reactive PR to bury your competitors. And the nonchalance of his response sends chills down my spine. These people will suffocate innovation just to win.
CEOs and executives are the closest equivalent of royalty in the United States. Their media coverage is often hagiographic as a result. They are humanized and puffed up in the press to an extent that foreign press would never think to do about business leaders in their own countries.
Column inch after column inch is dedicated to their morning habits, favourite TV shows, fashion choices, and other fluff content to make them "relatable" to the average joe/jane. This is especially magnified when it comes to SV execs because they wear hoodies and t-shirts instead of bespoke suits.
And that's what leads to reactions like "I can't believe he'd be so callous to users", as if the person in question is a hard working bootstrapper and not a billionaire looking to maximize market share and profit.
As the news coverage of the time pointed out, Facebook did this to Twitter a few months after Twitter themselves did the same thing to Instagram (which was already owned by Facebook at that point) and Tumblr: https://www.theverge.com/2013/1/24/3913082/facebook-has-appa... All of the big social networks were and still are like this.
> And the nonchalance of his response sends chills down my spine.
CEOs of massive companies don't have time to write long and explanatory emails. They put people in charge that they trust, so they can just say one word or sentence and know that it'll get handled.
I am definitely no fan of Zuck, but on the subject of Elon Musk, this is the same guy who tried to use his high media profile to call an innocent man a pedophile just because he criticized Musk's crazy plan.[1]
So that's another one off the "CEO billionaire but not a sociopath" list.
I'd suggest you find a better source than The Guardian for news about Elon Musk. They have run a relentless smear campaign against him for years now. Just one more reason to loathe that publication, in my book.
> The apology comes after the spelunker, Vern Unsworth, who was involved in the early days of efforts to save the now-rescued boys’ soccer team, threatened legal action against the billionaire executive over the comment.
> Musk said on Twitter late Tuesday that he had made the claim out of “anger” because Unsworth had criticized his idea to rescue the boys with a “mini-submarine” made out of a SpaceX rocket part.
I have a hard time feeling bad for Twitter getting API access pulled out from under them. How many times have they done that to products/services that depended on Twitter APIs?
This was known for a very long time. The "Dumb fucks" comments were brought to the public attention many years ago. The problem is that Silicon Valley gave Facebook a pass on all of the ethical transgressions for years (most likely since they minted many millionaires and billionaires in the valley)
>Michael LeBeau – ‘Hey guys, as you know all the growth team is planning on shipping a permissions update on Android at the end of this month. They are going to include the ‘read call log’ permission, which will trigger the Android permissions dialog on update, requiring users to accept the update. They will then provide an in app opt in NUX for a feature that lets you continuously upload your SMS and call log history to Facebook to be used for improving things like PYMK, coefficient calculation, feed ranking etc. This is a pretty high risk thing to do from a PR perspective but it appears that the growth team will charge ahead and do it.’
>Yul Kwon - ‘The Growth team is now exploring a path where we only request Read Call Log permission, and hold off on requesting any other permissions for now. ‘Based on their initial testing, it seems this would allow us to upgrade users without subjecting them to an Android permissions dialog at all.
This is huge, doesn't this make google guilty as well?
>‘It would still be a breaking change, so users would have to click to upgrade, but no permissions dialog screen.
Now remember that Facebook has made agreements with phone manufacturers to have fb installed by default and made un-uninstallable, with all the default permissions to share the users data whether they ever log in and use the app or not!
Sidenote: I've noticed via umatrix that Netflix on pc, during a show, is attempting to load fb js... Netflix wtf!
fb.js is Facebook’s standard JS base, with things like polyfills/ponyfills to ensure certain features in a browser environment. It’s imported by React, Relay, etc.
So this might be what you’re seeing, but normally it’s included in a precompiled JS application bundle.
I believe you're thinking of FBJS (https://github.com/facebook/fbjs), which is a library as you describe, whereas the comment above is referring to loading the Facebook SDK from Facebook's servers. Among other things, Netflix offers Facebook login, which would need the SDK loaded.
Taking advantage of everyone having it already cached on their machine maybe? Or it could just be standard ad retargeting - not unreasonable that Netflix would want to stream behavior data to facebook ads for targeting / lookalike purposes
As of a few years ago, Android asks users to agree to categories of permissions rather than individual permissions, and adding permissions from the same category doesn't count as a new permissions grant. Based on that description it doesn't sound like Facebook was abusing this since they still required users to opt in. (Though I seem to recall from older discussions of this that in actual fact, the opt-in process they implemented was sleazy and high-pressure.)
If their application only needed to run on newer Android, I think they could rely on runtime permissions and not request this permission at all unless the user actually turns the feature on - but even now about a third of Android devices in use are on versions too old to support this.
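The grouping behavior described above can be modeled in a few lines. This is a toy model, not real framework code, and it reflects the pre-Android-9 grouping in which READ_CALL_LOG sat in the same PHONE group as READ_PHONE_STATE: once any permission in a group is user-granted, further permissions from that group are granted without a new dialog.

```python
# Toy model of Android's old permission-group behavior: once one permission
# in a group has been granted, additional permissions from the same group
# can be granted without showing the user a new dialog.
PERMISSION_GROUPS = {
    "READ_PHONE_STATE": "PHONE",
    "READ_CALL_LOG": "PHONE",   # moved to its own group in Android 9
    "READ_SMS": "SMS",
}

def needs_dialog(requested: str, already_granted: set) -> bool:
    """True if granting `requested` would show the user a permission dialog."""
    granted_groups = {PERMISSION_GROUPS[p] for p in already_granted}
    return PERMISSION_GROUPS[requested] not in granted_groups

# A user who once granted READ_PHONE_STATE gets no dialog for READ_CALL_LOG:
assert not needs_dialog("READ_CALL_LOG", {"READ_PHONE_STATE"})
# A first request from an untouched group still prompts:
assert needs_dialog("READ_SMS", {"READ_PHONE_STATE"})
```

This is what made adding `read call log` quiet for users who had already granted a phone-related permission.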
> This is huge, doesn't this make google guilty as well?
I'm not sure I follow. An app can request permissions, and the user can allow or deny them. I don't understand how this puts guilt on Google. Can you elaborate?
This seems like a hole in their design: additional access is being granted without the user really knowing what is going on, and they are deliberately keeping the user out of the loop.
At least, that is how I am interpreting it; it seems the software is not functioning in the 'spirit' of what it is supposed to be doing.
In essence, Android's permissions system has (had?) a vulnerability that Facebook exploited, and Google is responsible to a small extent as the maintainer of the vulnerable software.
Google is very culpable because the various problems with Android's permission system were raised hundreds of times by security experts, both internal and external, and they didn't consider it a high priority to fix.
Even when they added a sane permission model in Android $VERSION, developers were allowed to bypass it for years by just building apps targeting Android $VERSION - 1 instead.
Google's web security may be the best in the world, but Android security is a disgrace and they should be called on it. (Fuchsia may put them on top of the world if they ever switch Android to that, but we'll have to see whether that happens.)
> Facebook had been aware that an update to its Android app that let it collect records of users' calls and texts would be controversial. "To mitigate any bad PR, Facebook planned to make it as hard as possible for users to know that this was one of the underlying features," Mr Collins wrote
So did this change? I installed Messenger recently and this is pretty much the first thing it requests (no thanks). It also asks to let people search for you by number (no thanks) and to sync with your contacts (no thanks, smells like LinkedIn).
I have zero permissions enabled for Messenger, so I guess it would then ask before uploading my call logs?
Public Relations - how they are perceived by the public. This was seen as a risky move because it had the potential (which Facebook realised and decided to press ahead with anyway) to anger a lot of people.
Public relations -- bad press. Here's some more context from the BBC article:
> Facebook had been aware that an update to its Android app that let it collect records of users' calls and texts would be controversial. "To mitigate any bad PR, Facebook planned to make it as hard as possible for users to know that this was one of the underlying features,"
I worked at Facebook for 5 years on Workplace and Internal Tools. I am typically very critical of the company nowadays, but even so find the discussion here difficult to digest.
People are complex. They are more complex than an action, or even a group of actions. To take a person and alias them into being "good" or "bad" based on an action or a series of actions is to explicitly dehumanize them for the sake of making the world simpler. It is a poor model, and in a Dale-Carnegie way it leads to poor outcomes, as you close off the dialogue with that person that would allow you to change opinions and outcomes. It is the same with groups or companies: some parts of groups do good from some vantage point, some do bad.
I found myself inspired by a lot of what Facebook did. I loved working inside Infrastructure there, and I was amazed by what people were innovating on every day. Features like charitable causes have raised a lot of money for charity. I've seen the Are You Safe feature reduce so much stress during disasters. I keep meaningful dialogues going with friends I don't get to see often on FB and Instagram. It makes me really happy to see my friends thriving.
One of Napoleon's great gifts was in compartmentalizing pieces of his life. His tumultuous and frankly soul-crushing personal life (which affected him deeply) with Josephine never got in the way of his military victories. I wonder if that's a good model, up-to-a-point for people and groups. With people, by compartmentalizing some unsavory perspective someone has, you have the ability to change it later on through discussion.
... in groups: if everyone good leaves organizations because they're "bad," well, those organizations will just be filled with the worst of us soon enough.
> People are complex. They are more complex than an action, or even a group of actions. To take a person and alias them into being "good" or "bad" based on an action or a series of actions is to explicitly dehumanize them for the sake of making the world simpler.
Correct. But we're not talking about people here, we're talking about a corporate entity as a whole. It's a bit more complex than a person. It has an incentive model and a set of norms (culture) which enables people to do good/bad based on the situation to maximize their personal gains (whichever those are - personal, material, spiritual, you name it).
If this consistently enables people to do what is perceived externally as unethical, we have a problem.
Nice spin on it, but Facebook is still a bad actor (despite what you say). I held this opinion since the beginning, before all the leaks and all the scandals but nobody believed me. Very smart people were incentivized to join using the entourage (come work with other smart people) and money, then slightly brain-washed in a cult-like manner (us vs them, wartimes, etc..). This enabled them to ignore blatant unethical behavior. I've seen all of these first-hand and it's bad. Bad bad bad.
> Nice spin on it, but Facebook is still a bad actor (despite what you say).
This needs to be a conclusion, not a premise. I do not have a firm belief about whether or not this is true, but I keep seeing people list bad things about FB and draw a straight line to “thus it is evil”. Ideally, we would list the good/bad it does and assign weights to these points to determine if it is net harmful.
I want to be convinced, HN, but when I read comments like the ones in this thread (FB has no positive value whatsoever for its users, FB sells users’ data), it makes it hard for me to appraise.
At its foundation, Faceboot is a man-in-the-middle attack dressed up to attract users. The standard "web 2.0 defense" was that these companies would remain benevolent in the interests of their users to maintain their own profits. The onus was really on them to demonstrate that trusting third parties with our communications is a reasonable thing to do. The more time passes, the more blatant the evidence gets that this is not true.
> The onus was really on them to demonstrate that trusting third parties with our communications is a reasonable thing to do.
In general, I agree, but the context here is dozens of HN commenters claiming that FB is a bad actor/net negative for the world. These commenters cite lists of gripes about the company, many legitimate, some illegitimate, but get hand-wavy when someone asks about the "net" in "net negative".
I'm not rejecting the conclusion, here, by the way! If the company is ultimately bad for society, this line of questioning should embolden the consensus HN opinion.
The metric of "net (negative)" is nonsensical on a multidimensional question, due to varying utility functions.
Faceboot directly optimizes to increase time spent on their site, wasting human potential.
Faceboot also provides an effortless way to check in with loved ones after a disaster, easing human suffering.
Two people will look at both of these facts, value each one differently, and come up with a different "net".
Furthermore, it is not appropriate to give Faceboot all the credit for either facet! People could check in with text messages, and people would be in dopamine loops even with software purely under their control.
So the only thing we can really do is discuss each facet independently, in the context of first principles / morals.
I personally think much of what is wrong with Faceboot is due to the conflict of interest from a third party mediating social relationships, and the inhuman scale of centralization. But the more that comes out about inherent social media narcissism, I also see that there is no silver bullet.
I agree with much of what you wrote, but disagree that the question is nonsense. From a specific utilitarian perspective, "wasted human potential" only matters in so much as it impacts human suffering. If we want to maximize pleasure and minimize suffering, the units are still the same in both opposing points you mentioned - it is just really, really hard to measure.
In practical terms, coming up with qualitative points and weighting them in a good-faith-but-ultimately-arbitrary-manner is good enough, and something that everyone does constantly. For example, I suspect the average HNer weights "wasted human potential from FB" more than "benefit of disaster reporting from FB" (which I also agree with - however "benefit of disaster reporting" almost certainly outweighs "detriment from nebulous analytics firm scandal").
> If we want to maximize pleasure and minimize suffering, the units are still the same in both opposing points you mentioned - it is just really, really hard to measure.
It's not hard - it's impossible. Even if we fully agree on a specific metric, a situation still has to be weighed against possible alternatives and integrated on timescales longer than our lives. A very basic example of this is a company optimizing for "profits", but actually optimizing for short term profits at the expense of going out of business two years later. The future is unknowable in the same exact sense that every NP-hard problem is. Heuristics are the only way to tackle this.
> coming up with qualitative points and weighting them in a good-faith-but-ultimately-arbitrary-manner is good enough, and something that everyone does constantly
Of course, which is why I'm referring to "Faceboot" even while recognizing that people get utility out of it. I just think that focusing on arbitrary pairs of specific facets is already headed down the path of madness, which is why I stated my initial critique in terms of basic principles.
The heuristic is the sum of what individuals are willing to put up with or not -- not something you can calculate and then just prescribe for everybody.
How do you weight the individual criticisms, or individual benefits to achieve an objective positive/negative score? How do you weigh "installing spyware to see what companies we might buy" against "share photos with nan"? How does the Facebook "only visible to people you interact with enough" algorithm affect the previous question's balance?
Without the weighting I think we've easily reached the point where the number of negative revelations is incessant and the benefits declining (thanks in part to that previously mentioned feed algorithm).
I think that may be the point: to move people from acting on what they've had enough of, to "let's talk some more first".
Such discussion wasn't required of anyone before they were allowed to express support or praise, so why would it be required from anyone who reached the point of boycott or criticism, before they are granted that they have reached that point?
I mean, it's fine to ask people why they think what they think of Facebook, but why refuse to accept others are moving for good reasons, maybe for better reasons than others have for not moving -- and ask them questions while letting them pass, without attempting to delegitimize them until they "explained themselves"?
What makes you think it's a premise and not a conclusion?
> Ideally, we would list the good/bad it does and assign weights to these points to determine if it is net harmful.
I have 11 years of data points and experiences about FB, and I'm not going to enumerate all of them whenever there is another. I'll just say "typical fucking Facebook".
At this point, I don't even feel obligated to remember it all -- I can trust myself enough. We do that with "evil" people in our lives, too. We don't remember every dirty detail. We remember that there were a bunch of things, and that overall, we'd had it at some point. I save the conclusion and the checksums and that's enough.
If you think I'm operating on a premise, instead of having come to a conclusion, how is that not you operating on a premise?
> I want to be convinced
Maybe, maybe not. What you are doing is delegitimizing even the conclusions others arrived at, by simply calling all of that mere premises. You saw a bunch of posts that struck you as knee-jerk, so all of it is knee-jerk.
You have to form your own opinion either way, that burden is not on others. Do you also expect anyone who says anything positive to give some kind of thorough, 1000-page assessment of all the benefits and cons? No, of course not. Same goes for criticism.
I for one don't care about the "evilness" of people I never met. For me the harm done through ignorance or fear or "evil" (which is just another form of weakness really) or not caring enough is the same harm.
> At this point, I don't even feel obligated to remember it all -- I can trust myself enough. We do that with "evil" people in our lives, too. We don't remember every dirty detail. We remember that there were a bunch of things, and that overall, we'd had it at some point. I save the conclusion and the checksums and that's enough.
> If you think I'm operating on a premise, instead of having come to a conclusion, how is that not you operating on a premise?
A mental conclusion can be a discussion premise. It doesn't invalidate your conclusion to say it should be a conclusion not a premise, because you're asserting a premise in a discussion which you are not (yet) supporting.
Also consider that you have seen eleven years of data points and experiences from the point of view of a small subset of users; there could perhaps be an equivalent cache of positive datapoints which tend to be significantly less interesting to report on.
Thus, supporting your point with concrete examples is how you contribute to a discussion, because then you and any adversaries can challenge you on the merits of your argument.
That's what the conclusion/premise separation is about.
> Also consider that you have seen eleven years of data points and experiences from the point of view of a small subset of users; there could perhaps be an equivalent cache of positive datapoints which tend to be significantly less interesting to report on.
You know what a thief can be like? 99.99% of the time, they don't steal. They sleep, they brush their teeth, they do all sorts of stuff, and every 2 weeks they take all the savings from an elderly woman.
How often do you need to see someone doing that to consider them a thief? Would you really care about any positive stories after seeing what you saw?
> That's what the conclusion/premise separation is about.
You can't speak for that other person. Let them respond for themselves.
> You can't speak for that other person. Let them respond for themselves.
Sounds like you're more interested in competing with someone than talking about ideas.
> Would you really care about any positive stories after seeing what you saw?
...yes? Of course? I don't automatically dehumanize that hypothetical person for their deeds, whether I approve or not, or believe there should be consequences. Like, doesn't Facebook collaborate with law enforcement in tracking down predators and scammers and the like? It's not as simple as "bad. go away."
You should remember enough to make a proper argument, dude. A solid conclusion needs solid support.
> Sounds like you're more interested in competing with someone than talking about ideas.
No, I want to talk about the idea they expressed, not what you read into it. I can only do that with them.
> ...yes? Of course? I don't automatically dehumanize
Who's talking about dehumanizing? How is considering someone a thief dehumanizing?
edit2: Facebook is a company. It can't be dehumanized; it's not a person in the first place. People in it are responsible for what they do. Someone who fought shitty decisions and then left is different from someone who, say, hires a firm to smear critics. That goes without saying as far as I'm concerned. But my thief example refers to Facebook, you see? Just because my argument apparently isn't easy for everyone to follow doesn't mean it doesn't stand.
So, where is the dehumanization? Who is being dehumanized when someone comes to the conclusion that FB is on the whole "bad"? Because we're not appreciating all the good, supposedly? When a person is a thief, or a murderer, or a company is, then all the fantastic qualities they may have are interesting to their personal friends. But not to the police, judges, or wider society. They know that the person probably has a lot of reasons for how they became that way, and nice sides to them, but they already have their own friends; it's simply completely out of scope of the subject at hand, unless it's directly related to the "crime".
> Like, doesn't Facebook collaborate with law enforcement in tracking down predators and scammers and the like?
Yes, and that thief who sometimes robs elderly women who then freeze to death outside, also has child, and he's very great with that child, and he's singing in a choir, and all sorts of great things. But you don't judge a meal by the freshest ingredients, but by the most spoiled. You judge a person by their worst deeds, and likewise a company. Again, we're talking about judgement with a capital justice here, not being friends, thinking we're better, or thinking they're evil and we're good, or any of that.
> You should remember enough to make a proper argument, dude.
I think my argument is just fine, and it even seems to get to you a little.
edit: And what post of mine are you even referring to? Where did I make an argument without examples? I was responding to someone else complaining that everyone who thinks Facebook is "evil" (let's just say bad) is operating on a premise. I was responding to that general point, I'm not decebalus1, who in turn didn't have "Facebook is evil" as their main point either.
Their main point, if you would follow the guidelines, hasn't been addressed by anyone. Their main point is the first two paragraphs, the rest is bonus. How come you are trying to teach how to "make a proper argument, dude", but didn't notice that?
Oh, and clicking buttons instead of reasoning kinda gives away who is interested in discussion, and who is interested in dehumanization and censorship.
Conducting psychology experiments on people without their consent (or awareness of it)
Documented evidence that Facebook as it is now decreases people's quality of life, but they don't want to mess with their formula because that's what brings the dollars in.
There are many more, but these two are the biggest I can think of off the top of my head.
There are certain forums in which the groupthink makes discussion of certain topics in good faith impossible. Sadly, HN is one of those when it comes to Facebook being anything other than cartoonishly evil.
> I wonder if that's a good model, up-to-a-point for people and groups. With people, by compartmentalizing some unsavory perspective someone has, you have the ability to change it later on through discussion.
Compartmentalization is part of the problem, not the solution. It's why some people can exploit and hurt a hundred thousand other human beings in the morning, enjoy their lunch break, fuck up the environment a bit more in the afternoon, and come home to a happy evening with their spouse. We shouldn't be encouraging more people to separate their private happiness from their interactions with society.
> ... in groups: if everyone good leaves organizations because they're "bad," well, those organizations will just be filled with the worst of us soon enough.
The hope is that if enough people leave, the organization won't be able to keep functioning. Or, if it survives and becomes an evil-people-filled cesspool, it'll be easier to direct regulatory actions against it and just shut it down.
Also, Napoleon was famous, but not exactly a paragon of morality.
"Compartmentalization is part of the problem, not the solution."
This, this and this again. We as people are absolutely allowed to be complex, contradicting beings, but it is because of this richness of breadth and depth in our characteristics and beliefs that we ought to consider deeply the consequences of our actions.
Compartmentalization acts effectively in the opposite direction of that.
Like notacoward's post below, this one misses the point. The shaming is not about how good or bad or complex you are.
Some people have determined that Facebook is an organization that's producing some bad results. Socially shaming employees, removing the status they might have hoped to acquire and spiking attrition are tools being deployed to change Facebook's behavior and the behavior of Facebook's competitors.
You might claim that this pressure will never work, or that it will cause Facebook to shut down the "Are You Safe" feature, or gut the charity tools, or avoid developing these kinds of projects in the future. But we should avoid hand-wavy claims that Facebook and the people who work there are just too complicated to influence.
> you close the dialog with that person that allows you to change opinions and outcomes.
So if we're nicer to facebook, maybe it'll decide to not spy on people quite so much, undo the engineered-for-addiction notifications and feed, and curb its anti-competitive practices (but not give back any market advantage it has already gained through them, of course)? The executives will voluntarily decide to make the company earn less money, so they can be more ethical?
Is there any precedent for a large corporation acting in this way?
It's not that you're wrong. You're not. People are complicated, and everyone's the hero of their own story. There are plenty of people out there who give millions to charity but steal from the poor. As a general rule, though, when someone accuses you of being bad and your best argument against it comes down to "what is bad anyway?," you're probably on the wrong side.
Is Facebook universally bad? No. It's done lots of good things. And lots of good people work there. But Facebook is also doing lots of bad things. Your policy chief hired a PR firm to push antisemitic conspiracy theories about George Soros. "People are complicated" isn't an excuse.
While I agree that humans are complex and should not be generalized by one or a few actions, Facebook as a whole has repeatedly demonstrated they are not good stewards of the data entrusted to them by users. The actions of their leaders are in complete conflict with the messaging they portray.
From the ad campaign run by Facebook in 2018:
"...From now on, Facebook will do more to keep you safe and protect your privacy, so we can all get back to what made Facebook good in the first place: friends. Because when this place does what it was built for, then we all get a little closer."
To anyone looking closely at the actions of executives and living outside the SV echo chamber, this statement is laughable.
While the actions of many inside the company have been noble, they too have been taken advantage of, just like those who used the platform for so many years.
> One of Napoleon's great gifts was in compartmentalizing pieces of his life. His tumultuous and frankly soul-crushing personal life (which affected him deeply) with Josephine never got in the way of his military victories. I wonder if that's a good model, up-to-a-point for people and groups. With people, by compartmentalizing some unsavory perspective someone has, you have the ability to change it later on through discussion.
Also, the Napoleonic Wars killed 3 to 6 million people.
Over-humanizing people is also a bad way to look at these things, as it diminishes their contribution towards a more problematic group and class structure. While yes, most billionaires are probably fine people by many individual metrics, the billionaire class can only exist by extracting value off the backs of others.
Facebook as an entity might be run by great people who love their families and support their communities, but the entity has done tons of very sketchy things and anyone who willingly or knowingly takes a part within that, especially for personal benefit is responsible for the actions of the group as a whole.
Humanity might transcend class and social status, but it doesn't excuse anything.
> and anyone who willingly or knowingly takes a part within that, especially for personal benefit is responsible for the actions of the group as a whole.
And that is on top of "The Responsibility of Intellectuals"
> if everyone good leaves organizations because they're "bad," well, those organizations will just be filled with the worst of us soon enough.
Is this not potentially an optimal outcome?
If companies who have a track record of doing unethical things suffer for it by being depleted of all employees who are not either unethical, incompetent, or both, this should dramatically compromise the effectiveness of the company. If this happens repeatedly, it provides a disincentive for unethical corporate behavior.
The 76 year old woman I know who seems to be addicted to facebook and gets harassed by people reporting her posts and whatnot just wants to talk to people and share photos. The people and the photos and the emotions and the words people bring themselves -- Facebook just adds grief to that.
At best it's invisible, at worst it just kicks people out with no recourse, because some troll reported them, and because they're not a FB employee or a celebrity. These people are people, too. You barge into the public and drag it onto your platform, and then you don't just dehumanize people on it, you completely remove them. The rest either is friend or not friend, blocked or not blocked, posts are liked or not liked. Yes, it's a very poor model -- don't assume everybody else is using it, too.
> It makes me really happy to see my friends thriving.
That's the equivalent of "it's fun with friends" for game reviews. Every multiplayer game has that. Every site that allows sharing of photos and text has that. Because people have that.
> With people, by compartmentalizing some unsavory perspective someone has, you have the ability to change it later on through discussion.
When later? When was Facebook ever up for candid discussion? How come you blame it on those who, in absence of any say, ever, now think badly of Facebook?
> if everyone good leaves organizations because they're "bad," well, those organizations will just be filled with the worst of us soon enough.
An interesting analogy... The 10th amendment to the Bill of Rights was intended to allow each state to be diverse in its laws. If people in state x want the laws in state y to change, a constitutionalist would say that you should move to that state and work on changing the laws from the inside, via the 10th Amendment and the state's own Constitution. If you don't want to put in the work of changing the state from the inside, and would rather change the laws from the outside (i.e. via federal authority), then you're probably an authoritarian.
I guess the only point I'm trying to make, is yeah, it's hard. Thanks for your contribution.
As opposed to when? When you worked there? When was this period that Facebook was a beacon of ethical behaviour?
> to explicitly dehumanize them for the sake of making the world simpler.
Like aliasing people into a point on a social graph and a wallet? Dehumanising behaviour and monetising that is the main ethical problem with Facebook. It's a bit rich to accuse opponents of over simplification.
>It is a poor model
FB market cap seems to disagree. Pidgeon-holing people for profit is an exceptionally lucrative model.
> Projects like charitable causes have raised a lot for charity
And the trains in Italy ran on time!
You worked there for five years and never had a clue. Maybe you're not the right person to provide a judgement on judgement.
So... we should be nice to Facebook, so as not to distract them during their advance on Prussia?
...and because treating them differently based on their actions may cause good people to apply their skills to endeavors other than growing Facebook’s wealth of private data?
>>To take a person and alias them into being "good" or "bad" based on an action or a series of actions is to explicitly dehumanize them
The only reliable method for evaluating someone’s values is to assess the choices they make. What values do their actions suggest? Do these choices affect your ability to trust these people?
I fundamentally disagree that it dehumanizes to do this, and I also fundamentally disagree that anyone intends to dehumanize them with this approach. I find it troubling that anyone expresses such a black and white assessment of people trying to determine where others should fit in the herd.
Put another way:
the only way to judge someone’s ethics is by evaluating their action and comparing it against their words.
> if everyone good leaves organizations because they're "bad," well, those organizations will just be filled with the worst of us soon enough.
This is just complete rubbish and it appears you are still rationalising the time you spent working for the company. If you are trying to achieve 'good' inside a company like Facebook then you are a complete sucker or, as Zuckerberg would say, just a 'dumb fuck'. You're being played. Facebook is not interested in doing good; that has been abundantly clear since almost day 1.
If what people are taking away from their time working for FB is, "you know what? Napoleon was pretty cool!" then that really buttons things up tidily, though probably not in the way you intend. Too bad he had the sads about his partner though, I guess.
To take a person and alias them into being "good" or "bad" based on an action or a series of actions is to explicitly dehumanize them for the sake of making the world simpler
Baruch Spinoza more or less invented the concept of ethics in the 17th century, which (partially, and simplifying) is premised on the effect of "an action"[1] being good or bad. If this doesn't reach, let's go back 2,000 more years to Jainism, in which karma is comprised of particles in the universe that stick to a person through their actions (good/bad/undefined).
tl;dr: If your argument is that a person's actions should have no bearing on a sense of "good" or "bad," know that significant parts of the entirety of recorded history contradict you. [insert your own Godwin reference here]
Point being, calling someone "bad" or "good" only speaks to the preponderance of evidence one way or another, and it's absolutely within all of our rights to have an opinion on what FB/Zuck/SS have done with the aspects of our lives to which they have had access, down to discrete decisions. I mean, I don't think anybody would say that FB has obviated the concept of reputation, and having an effect on society doesn't only refer to the good. And really, I don't see anybody saying "Zuck = bad" nearly as much as I see "Zuck = good" and "Zuck has done bad things and made bad decisions." He's the face of the company, this is how society works when you're not moving fast and breaking things.
It's not about people being complex, it's about people shitting in the world's PII sink at the party. Specific people, with witnesses. These were actual choices made under consultation with many people, all of whom are extremely highly-compensated, to reduce the control you have over information about your actual life (what focus groups pay big money for). It's not outlandish to say that the ability to trade pictures with and keep up with people has not been a fair trade.
Forgive me if I discount the hosting of online charity facilities (and really: [3]), and the supplanting of phone calls and neighborly phone trees in times of disaster or crisis, but in the words of Nelson Muntz, "If you hadn't done it, some other loser would have. So quit milkin' it,"[4] and maybe that other loser would have protected their users more. (and please don't insult us with anything along the lines of "personal information doesn't have value until someone sells it")
To conclude with an even larger shadow, FB has made a shitton of money from these policies, money they use to hire people and pay them enough not to work at other companies that also pay a lot, and this money has been used to drive up housing prices and rent for their employees' convenience (nobody overbids for fun). Thus this infrastructure of data scams affects the ability of teachers to live near the schools in which they teach, police to live among those they patrol, and low wage workers to not have to commute 2+ hours each way.[5] I don't know if working at FB inures you to these examples, but it seems clear that FB leadership does not display the maturity that I personally would like to see in those who have the ability to make these decisions. Zuck didn't personally raise rents, but he built machinery that did so. But hey, it ain't his fault his exploding knife-gun hurt someone, he painted it pink!
To give the benefit of the doubt I'm not sure if OP was objecting to the first step of placing a person or company along a good-bad continuum or the second step of reducing that continuum to a binary label.
In either case though, no amount of complexity makes it unreasonable to consider whether a company like Facebook is a net positive or negative force on society -- which is a binary decision that requires reduction.
I'd argue that reduction is required for qualitative decisionmaking and so is no barrier to legitimate opinion.
First, because we can never know everything that is going on outside of investigations like this one and public statements by FB.
Second, language is a lossy codec for thought, so the investigations and public statements by FB are themselves reductive. It's turtles all the way down.
This applies to both examples then: thinking FB is good or bad, and being required to eliminate reduction in order to make that decision.
Not sure what point you're trying to make about Napoleon, he wasn't exactly someone to look up to. He was directly responsible for millions of deaths, basically a 19th century version of Stalin (and the other "great leaders" of the 20th century).
I would recommend educating yourself on a contemporary Napoleon biography (I really loved Napoleon: A Life) before making such a strong claim. Napoleon is a very complex figure and your kind of blanket statement here is the kind I was rebelling against in my original comment.
The issue here is that writing a good account on Napoleon is very difficult. His enemies disseminated nonsense about him, and he was very careful about his public image even as a very young man.
I could get into the history: but Napoleon was fighting feudal leaders who aimed to restore the monarchy in France. They declared war on him many more times than he on them. The Napoleonic Code is one of the most influential documents in history, and through it Napoleon helped spread the ideals of the French Revolution throughout Europe. Of course there are many negative things to say about Napoleon, but he is a very interesting person who deserves a look.
>>Projects like charitable causes have raised a lot for charity.
Probably would have been raised by a company FB bought or crushed. Or a company in another field.
>>I've seen the Are you Safe feature reduce so much stress during disasters.
Not exactly charity, it gets people to use FB more.
FB's problem (for the world) is that it is big and as such it can be used to manipulate opinions or mess with people's psychology. Either by others or by FB for $$$. Same would happen if Fox News or CNN were watched by a billion plus people. They would have the power to move certain people to a different direction. Google is another one, a little bias on search /news results and maybe millions of votes would move one way or another.
Sometimes, even amidst the Brexit madness, I love our Parliament. Publishing and seizing documents like this is a move that proves politicians have the spine to look after people's interests. Time to get serious, act on this, and break up Facebook forever.
This is our committee system working as intended. I wish it still applied to other areas of governance like local government funding, lobbying, MPs in government and so on.
Late Edit: I must add to that list admission of ministerial responsibility. The last resignation with honour, rather than for political point scoring, was Lord Carrington resigning as Foreign Secretary in 1982. Since then it's a dead concept.
Seizing documents for a valid inquiry is one thing.
But does that follow that publishing them is OK too?
From the outside, it looks like these politicians are frustrated that a foreign CEO ignored their demand to appear before them (because he has no legal obligation to do so), and have decided to retaliate by releasing embarrassing private internal documents obtained during an investigation in the hopes that Facebook will be politically and financially damaged.
I get that people hate Facebook, but does that justify any level of bad behavior as long as it harms them?
All Parliamentary committees publish online, whether evidence is written or oral.
You can ask for an exception for part of your evidence, if you fully explain why, which the committee considers. The usual reasons for discretion apply. It's almost unheard of for some evidence not to be published at all, though it has happened. 1980s I think was the last case.
No idea what dusty precedent or procedures apply when someone refuses to attend or documents are seized. That doesn't happen much.
Maybe no one asked. Maybe this is the redacted for sensitivity version as we have no idea the amount seized in the first place. I think we'd have to wait for the report to know.
It's getting hilarious to see people more offended by FB's information being exfiltrated than the exfiltration of everybody's data that FB aided and promoted. Ah, but those are just your fellow people, not the money man.
The potential problem I have here is abuse of power by government.
I’d be pretty upset if Congress subpoenaed user data from Facebook and then selectively published embarrassing info on their political enemies. Even if I hated those people.
I’d be pretty upset if Congress subpoenaed user data from Facebook and then selectively published embarrassing info on their political enemies. Even if I hated those people.
I think it's more like this, which if you've been paying attention has been going on for a long time. If you want to bet that the US hasn't snatched data they want when they know someone is in the country even temporarily, I'll take that bet.
Among people who have a problem with the Six4Three situation, I feel like the issue is really just that we found out about it. The powers are already there, waiting to be used.
The government is invading our privacy with surveillance and border searches, agreed. No defense or denial of that here.
But I don’t see stories where some congressperson is then publishing some of the fruits of that surveillance or searches just to embarrass political rivals. That would be especially beyond the pale.
> I get that people hate Facebook, but does that justify any level of bad behavior as long as it harms them?
Personally, given the level of harm Facebook has helped to inflict on the world these last 2-3 years, yeah, I reckon I'm perfectly happy with people inflicting harm on the corporation, probably even to the point of ruination.
Seems like a pretty dangerous road to go down, personally.
Looking at some of our politicians in the US, I don't really want them to have unfettered power to misuse the law and their elected position to destroy any person or organization who raises their ire. Even if I don't like that person or organization and want to see them destroyed.
I believe Parliament has decided to limit its own powers - the Human Rights Act, membership of the EU, devolution of powers to Welsh and Scottish parliaments etc.
Of course these can be changed by Parliament... ;-)
I think this is probably something that needed to happen, for the furtherance of justice. What seems wrong to me is that politicians are essentially carrying out summary justice on particular people / a particular corporation, rather than fact finding to make informed decisions on legislation. If there's an argument that this publishing is necessary for an informed democracy, then I think that should be considered by a court.
That said, the UK has a very weak conceptualisation of separation of powers, thanks to parliamentary supremacy.
I don't think this move achieved much of anything, other than to encourage business executives to use ephemeral messaging for their confidential internal communications.
Indeed, kudos to them having the gall to do such a thing. Heavily lobbied congressmen would never do such a thing or even implement something like GDPR here.
> What laws did Facebook break that warranted seizing and publishing internal documents?
Parliament is sovereign. Facebook ignored Parliament. That is tantamount to blowing off an American court order.
More pointedly, Facebook has broken their agreements on keeping WhatsApp and Facebook data separate. These e-mails further show Onavo and Facebook conspiring to hide their intent around data collection from users, which likely breaks British privacy and honest trade law.
Parliament is not into enforcing laws - it writes them. Parliamentary sovereignty means it can publish internal documents[1] if they decide it's in the public interest, if not, the MP will be appropriately censured by their peers.
1. In the US, the president (executive) can declassify any classified information, the DOJ or judiciary may publicize (discovered) internal documents for trials/indictments before guilt is established (note: IANAL). In the UK, Parliament is supreme to the executive and judiciary.
It didn't break a law, but it also didn't comply with our sovereign parliament's right to investigate matters of interest to its members. In a manner of speaking in this sense, they are the law.
The law against screwing your users and being anti-competitive. The latter is actual law; the former is the kind of thing that leads to street justice.
I'm glad to have the documents, but I have to admit its really sketchy for a government to step in and do that. People are saying 'the sovereign body wanted it', but that's a worrying way to run things.
It set a pretty awful precedent when Facebook spread misinformation so effectively that my country voted to cripple itself. We are going to have justice: no foreign intervention or dark money will stop us. I rarely feel pride like I did today, because for all their faults it is clear to me that the current crop of Parliamentarians will hunt down those responsible for weaponising disinformation in our politics.
Quick textual analysis: in a pithy 623-word statement, Zuck manages to mention "shady", "sketchy" or "abusive apps" no less than 7 times. 8 times, if you include the time he mentioned Cambridge Analytica without using a sketchy adjective.
Notice the spin as Facebook the White Knight protecting the public from the evils of sketchy apps. Unclear how this will play out given public losing trust in Facebook itself.
[Reference]
1. "some developers built shady apps that abused people's data"
2. "to prevent abusive apps"
3. "a lot of sketchy apps -- like the quiz app that sold data to Cambridge Analytica"
4. "Some of the developers whose sketchy apps were kicked off our platform sued us"
5. "we were focusing on preventing abusive apps"
6. "mentioned above that we blocked a lot of sketchy apps"
7. "We've focused on preventing abusive apps for years"
8. "this was the change required to prevent the situation with Cambridge Analytica"
> Facebook used data provided by the Israeli analytics firm Onavo to determine which other mobile apps were being downloaded and used by the public. It then used this knowledge to decide which apps to acquire or otherwise treat as a threat
> there was evidence that Facebook's refusal to share data with some apps caused them to fail
Stuff like this should trigger the EU Commissioner for Competition to withdraw the authorization that “allowed” FB to acquire Whatsapp and should force a split between the two entities. A fine (no matter how big) will be seen by FB and its investors just as “cost of doing business”. Facebook in its current form needs to be split back up.
I have no particular love for FB or what they've done with data, but I don't understand why either of these points is particularly controversial or even unusual.
> Facebook used data provided by the Israeli analytics firm Onavo to determine which other mobile apps were being downloaded and used by the public. It then used this knowledge to decide which apps to acquire or otherwise treat as a threat
Checking out your competition is pretty standard among all businesses, as is buying out the ones you can't beat.
> there was evidence that Facebook's refusal to share data with some apps caused them to fail
Sharing or not sharing data with another app is likewise not an unusual decision. FB is not a public utility - they can decline to work with other apps for any reason, or no reason at all. And this is a particularly ironic thing to point out, considering that the main thrust is complaining that they _did_ share data with other apps. So it's bad if they do share data, but also bad if they don't?
> Sharing or not sharing data with another app is likewise not an unusual decision. FB is not a public utility - they can not work with other apps for any reason, or no reason at all. And this is a particularly ironic thing to point out, considering that the main thrust is complaining that they _did_ share data with other apps. So it's bad if they do share data, but also bad if they don't?
To me, this would be Facebook taking advantage of their stronghold on the market to continue to dominate the market.
Let's remember this is not Facebook's data, this is their users' data. It should be up to the user who their data is shared with, but they weren't given the choice to use their data with an app because Facebook decided they didn't like the way the other business competes with them in some aspect, or how Facebook isn't making money out of the third party app. An example: Facebook won't grant influencer marketing companies access to their API even though the user (the influencer) wants them to have the data so they can get paid based on the views, shares, comments, and likes of the posts they've been contracted for. The user has a legitimate desire to share that data, to the point where they will resort to screenshots, GDPR requests, etc. to fulfil the need for that data to be shared. Facebook's reason for not allowing access is simple: they don't make any money out of the deal.
So if they refuse to allow you to share your data with people you want to share it with, that is bad. Sharing your data with companies you don't want to share with is also bad.
> Let's remember this is not Facebook's data, this is their users' data.
No, it's not - short of a change in law it is Facebook's data. Facebook collected, maintained, and analyzed the data. It's their data; the fact that their users allowed Facebook to collect it does not change that.
GDPR allows users to view the data Facebook collected about them. If users want to manually request their data from Facebook and provide it to competing apps, they can. But this is the user's prerogative, not Facebook's.
> If users want manually request their data from Facebook and provide it to competing apps they can.
Actually, for a lot of it they can't. The GDPR downloads are incomplete and are missing lots of data like many other GDPR downloads.
And in some cases these apps aren't competing, they just aren't making Facebook money, so Facebook denies them access and tells them what they need to do to add Facebook to the process so they can get access.
And if I can tell Facebook to delete something and they have to do it, they do not own it. Facebook are not in control of the data; they have access to it, but they can't share it without permission, they can't sell it without permission, they can't do a lot of things. This, to me, means they do not own the data.
The requirement to make GDPR downloads available was never well thought out. For instance, say you requested a copy of your Facebook chat logs. Should Facebook provide this without redacting the recipients' identities? I'd say no; providing this data without that redaction is a violation of the recipients' privacy. Especially for a service that is all about networking and sharing, it's difficult to walk the line between giving users the data they requested and respecting the privacy of other users.
The ability to tell Facebook to delete your data does not grant you ownership of that data. You do not have the right to tell Facebook to let some 3rd party access it using Facebook's infrastructure (which seemed to be the claim made by the original comment I responded to). The fact that the government forces companies to abide by certain regulations does not mean that the company is not the owner of the things being regulated. A company might have its interview records audited to investigate discrimination. Does that mean the company does not actually own or possess those interview records?
>Should Facebook provide this without redaction of the recipients' identities? I'd say no, providing this data without that redaction is a violation of the recipients' privacy.
How is it a violation of privacy to tell a user with which other users they have communicated in the past?
Do you want your Facebook usage published because any one of the hundreds (thousands?) of the people you interacted with requested their data under GDPR?
It's not just telling the user. Once the info is published, said user can turn around and tell it to the rest of the world. Some people have already published their requested data, and it seems unlikely that people publishing said data face consequences.
Sure, nothing is stopping people from scrolling through chat logs and screenshotting them. But that's difficult to do at scale, so there are de facto limits on how much info can leak this way. Also consider the situation in which you unfriend someone, thus preventing them from seeing past chat logs (I think; I haven't used Facebook in a long time). If that person does a GDPR request, should the company deliver data that the user is otherwise prohibited from viewing? If GDPR does mandate this, it seems like a legally mandated side-channel attack.
> No, it's not - short of a change in law it is Facebook's data.
Er, actually it is. Under GDPR, and under previous data protection rules that lacked meaningful enforcement, users own their data. Companies at most get to "borrow" and/or keep it safe for them.
Alas, this fundamental fact seems to not have quite made it everywhere yet..
What about data that Facebook generates about you, but you did not provide? Is that information required by the GDPR to be provided to you, or does the GDPR only state that any information you provided to them must be returned? There's no way I'm reading the GDPR nor would I even understand all of it if I did.
We know that FB makes connections between you and other users even if you've never made those connections yourself. We've heard about the shadow profiles, etc. These are all data points that FB generated on their own. It seems like this is the information that is valuable. Information gleaned from ML analysis of images posted would also be their own. That has way more value than the image itself would. FB is doing way more under the hood than just slurping in the data users posted, and then displaying it back to them.
It doesn't matter where the data came from - uploaded, downloaded, generated. If it is PI it is the property of the identified user, not Facebook's. If they slurp 100 images of my face then that data is mine and it is my right to have it deleted and told who they shared it with and for what.
Possible reason: It is actually intentionally avoided to use the concept of "ownership", since those advocating for "data ownership" generally mean things like "if people own their data, that means they can give it to companies and then that company can do what it wants with the data, since it is its property", which is more or less the opposite of what the goal of this privacy regulation is. The term is used sometimes in more general material, but it is problematic framing. People have rights to their data and how it is used, but they don't derive from ownership.
So in that particular sense, it is even stronger than ownership, at least in terms of preventing facebook from ever becoming the owner. (Not in the sense of having all of the rights usually associated with ownership, which include selling)
The article linked doesn't back up the claim that GDPR makes the user the owner of the data companies collect. Being entitled to obtaining a copy of a company's records about the user is a very different thing than making the user the owner of that data.
I'm not finding where in the GDPR text the user is stated to be the owner. The text frequently refers to the user as the "data subject". Ownership is not mentioned. The article you link to claims that the EU resident owns the data, but never backs up this claim with references to the GDPR text.
Seriously, it's mind-blowing to me that people can't make this distinction. I should be able to ask the library for a record of the books I've checked out. But that doesn't imply that they should hand those records out to whoever wants them, or that they need to make them available for anyone but me, even if I think it'd be more convenient for someone else to pick them up for me.
Even if it was the User's data and not FB's (which is argued downthread a bit...) there's nothing that requires Facebook to make it accessible to other companies. There is real cost in creating an API, running it, managing it, etc.
One could also argue that the User already has all their own data, they just didn't collect it very well. So really they're trying to use FB as their data collection platform - which FB is free to do or not, as they choose.
> Facebook won't grant Influencer Marketing Companies access to their API even though the user (The Influencer) wants them to have the data so they can get paid based on the views, shares, comments, and likes of the posts they've been contracted for. User has a legit desire to share that data to the point where they will take screenshots, GDPR deletions, etc to fulfil the need for that data to be shared. Facebook's reason for not allowing them access is simple, they don't make any money out of the deal.
This seems so ludicrously backwards to me. How is my decision to upload my data to Facebook a meaningful claim that they should run their platform in a particular way for my convenience, even if there's no business justification?
Allow me a tortured metaphor :)
Imagine you own a very popular art museum and you let any artist display their work there for free. Visiting requires a free membership, but visitors must apply for it. Competing museums and galleries are always trying to get a membership so they can come through and scope out which artists are most popular and then woo them away. You reject those memberships.
How on earth is this unfair to anyone?
Now, letting those competing museums in is obviously better for the artist, but does that give them any right to demand it? You're providing this space for them for free! You're providing a huge audience to them, for free! What possible justification do they have for demanding that you let your competition come strolling through and hurt your business just so it'll make their life a little better? If you don't like it, you can pull your work from the museum at any time.
So many of the critiques of companies like Facebook and Apple come down to people wanting all the benefits that these platforms provide, but without any terms, inconvenience, restrictions, or tradeoffs. Ridiculous.
Popular art museums generally allow entry without requiring membership. They make up for that by collecting donations or an entry fee, running a comparatively expensive cafe, and ensuring everyone leaves via the gift shop. They might heavily promote memberships, employ people to sell them individually, or give free membership in exchange for monthly gift-shop and offers spam. One or two might be a little too heavy-handed in promoting membership over a simple visit.
They don't plant a bug in your pocket, without your knowledge or consent, to learn what other art museums and galleries you are visiting so they can assess whether those are worthy targets of restriction or takeover. They don't insist on knowing your entire contact list and SMS history just to look at the pictures. They don't ban employees of other galleries and museums from visiting. Not least because 9 times out of 10 they would not know if someone worked for a different city's gallery.
That's enough torturing. ;)
One of the above is a fair exchange, that can be freely and knowingly chosen. The other is using undisclosed and underhand methods to get extra leverage. That, by definition, cannot be fair. Turns out it's illegal too.
This is one of the very rare cases where I wish the UK had a little more of the US's litigious culture. In my 50 odd years it's the first and only case. Normally I wish for the reverse. :)
If the museum waived the entry fees for users who opted into having their info tracked and the data monetized, there are probably people who would prefer to compensate the museum that way. It'd probably open up access to the museum to poorer people who couldn't afford the ticket. Heck, I'd probably do it. My contact info has almost certainly been collected already, so for me it's basically instant savings. Your main criticism is that the data collection is undisclosed. I'm sure there's debate to be had over whether Facebook was transparent about its data collection, but they always did tell users about it in some capacity.
How much would Facebook cost if users had to pay cash? I think the Economist did a survey on this topic with Google search, and the average price people were willing to pay was $1,500 a year. If Google switched from monetizing user data and advertising to charging money, how much human productivity would be lost because some people can't afford a good search engine? How much would this exacerbate the academic difficulties of poorer students, given that their classmates could search online information more effectively because their parents can afford Google search?
The lack of tangibility of paying for products with personal data instead of money can be irksome, but it has created an unprecedented ability to build large, complex services while delivering them free of (monetary) cost to the end user.
That might be the average price Economist readers would pay. I suspect it would be a lot lower for most users.
That aside - FB never gave users that choice. Where many media services - including AOL, streaming media companies, and newspapers (and the Economist) survive by charging a subscription, FB has never attempted to use that business model.
Why? Because FB has always been a data harvesting and user monitoring/profiling operation that happens to operate a social media front, rather than vice versa.
Ditto Google for search.
And "telling users about it in some capacity" is very different to giving users a list of buyers and full details describing what their personal data was used for.
Realistically, no one outside the industry - and not that many people in the industry - understand where this data ends up and what it's used for.
Pretending otherwise is casuistry and special pleading. There is no way users can estimate the true value of their data, either individually or in aggregate - because the value is determined by buyers who remain hidden and unnamed, and neither FB nor the buyers are obliged to explain any part of the process.
There is no informed consent here. It's a perfect corporate asymmetry, and very much designed that way.
Facebook probably would charge users directly if it were acceptable to do so. Unfortunately (in my opinion) the optics around providing that choice would be even worse than not providing that choice at all. People would portray this as akin to racketeering, making users cough up the dough if they don't want to be tracked. Ironically, concern over Facebook's data collection inhibits their ability to explore direct monetization methods.
Also, the value of this information isn't secret: Google and Facebook are public companies and publish their revenue numbers, no? It's not hard to divide by the number of active users to get an average value per user.
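The divide-revenue-by-users estimate is trivial to sketch. Here is a minimal Python version using made-up illustrative figures (assumptions for demonstration, not anyone's actual reported numbers):

```python
# Back-of-the-envelope average revenue per user (ARPU):
# total annual revenue divided by active users, as the comment suggests.
# Both figures below are illustrative assumptions, not reported numbers.

annual_revenue_usd = 70e9       # assumed annual ad revenue
monthly_active_users = 2.4e9    # assumed monthly active users

arpu = annual_revenue_usd / monthly_active_users
print(f"Average value per user: ${arpu:.2f} per year")
```

Since public companies report both figures quarterly, anyone can redo this with current numbers; keep in mind it yields a global average, and per-user value varies a lot by region.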
I added the membership bit just to make the whole "rejection of competitors" part make more sense.
> They don't plant a bug in your pocket without knowledge, or consent to learn what other art museums and galleries you are visiting to assess if they are a worthy target of restriction or takeover. They don't insist on knowing your entire contact list and SMS history just to look at the pictures. They don't ban employees of other galleries and museums from visiting. Not least because 9 times out of 10 they would not know if they worked for a different city's gallery.
I'm not defending all of Facebook's practices. In particular, any surreptitious attempt to collect sensitive user data without permission, or in violation of permission, is terrible. We might differ on what constitutes "sensitive user data" or "permission" but whatever.
Regardless, that's not what we're actually talking about here, and if the museum in question did ask for user consent to a full cavity search before going through, I don't think that actually changes the calculus of what's fair for them to provide to their competitors.
Your metaphor is good, but misses the part where this art museum has either bought or forced downsizing/closure on all other competitive museums. That forces artists to choose this museum or one that it owns if they want any recognition/income from the work they do (this museum certainly isn't paying them for the work).
That would be a fair concern if it were true. But Facebook is hardly the only way to build an audience. What about Twitter, or Snapchat, or just building and hosting your own site and using Facebook as just another promotion channel?
Look at the T&Cs sometime. Once you upload something, it's theirs. Once they sniff something off your phone, it's theirs. Once you look at something in another tab while FB is open, it's theirs.
My understanding is that GDPR still doesn't say that it's your data and not Facebook's; it just gives you certain rights to that data, such as the right to have it deleted.
I’m pretty sure Facebook has made a substantial investment in structuring their data flow & server locations with the specific intention of avoiding the reach/responsibilities of GDPR.
> Once you upload something, it's theirs. Once they sniff something off your phone, it's theirs. Once you look at something in another tab while FB is open, it's theirs.
According to Facebook TOS, you own the data but grant them an open-ended licence to use said data.
As a Facebook user I'm glad they don't grant access to influencer marketing companies. Those companies drive the creation of trash content which would harm the Facebook user experience.
"Sharing or not sharing data with another app is likewise not an unusual decision. FB is not a public utility"
[the following paragraph does not assert that FB is a monopoly, it is agnostic]
The law says that a company with a monopoly needs to act a bit like a public utility. Recall what Microsoft did to Netscape -- was there anything wrong with that? Is a company allowed to take action against a competitor? The normal answer is "Yes". But different rules apply if the government can prove that a company has monopoly power in some market. Actions that are allowed among normal competitors are no longer allowed once a company has near-monopoly power. I think fast-growing tech companies get into trouble because the founders don't realize how much the company has grown. Bill Gates could use aggressive tactics during the 1980s because Microsoft was still small. He got into trouble in 1994 because he didn't realize how much Microsoft had grown, and how much the rest of the world suddenly regarded it as a behemoth. Therefore he was no longer allowed to pursue the aggressive tactics that he'd previously been free to use.
Arguably, the same reality has now caught up with Facebook.
> I don't understand why either of those points is particularly controversial or even unusual
Facebook is unusually large and lawless. Their favoring industry incumbents decreases the economy’s dynamism. If you want to compete with Airbnb, now, you must not only launch a better product. You must also curry favor with Facebook. That’s a lot of trust for society to put in a company with a track record of terrible judgement. (This is also why we regulate monopolies.)
> Sharing or not sharing data with another app is likewise not an unusual decision. FB is not a public utility
I think this point is, at least partially, arguable. Take "political media." FB is so dominant and powerful in this arena that excluding individuals or parties is very close to denying people their rights to free speech and/or association.
Idk if you can make that case about apps, because their app platform is not that important, but you could make it about Android or iOS... maybe AWS.
> Checking out your competition is pretty standard among all businesses, as is buying out the ones you can't beat.
To be clear -- they marketed Onavo as a VPN service to protect your personal data from spying, and then they spied on your personal data to figure out what products you were using.
Agreed. Facebook purchased third-party data to guide their acquisition strategy. Any other company could hire this firm and made the same decisions.
The selective access for competing apps, based solely on this information, is an antitrust issue though. If the apps violated rules that apply to everyone, that's different, but that doesn't appear to be the case?
Edit: turns out Onavo had access to specific, proprietary data, acquired through its own shitty apps, which Facebook acquired. This is pretty shady.
> Did Onavo have access to data that other public BI firms don't?
Yes. Onavo built “consumer-facing apps to help optimise device and app performance and battery life on iOS and Android devices” [1] while piping their users’ data to Facebook.
So they were basically the spamware apps like flashlight and other 'performance enhancers'? That makes this kind of worse, especially if they continued it after acquisition.
Not really, they built an actual data compression VPN for users with limited data plans and did traffic analysis on the data that went through that pipe.
I agree that the first point seems pretty routine, but the second one is a problem. Look at Microsoft in the late 90s. They changed their system APIs and refused to publish the changes for the express purpose of killing Netscape. That was later deemed to be illegal. Facebook using their market dominance to kill other apps seems very similar.
I think they were mostly using their market dominance to kill apps that decided to live on the Facebook platform in the first place.
If you build an app that relies solely on Facebook login and access to Facebook data as key parts of your business model then are you really a competitor?
Well, if you build an app that runs on Windows like Netscape did, are you really a competitor of Microsoft? Turns out, you might be, if Microsoft want to make their own app. Similarly, Facebook might want to introduce or change native functionality, or monetize their platform, in a way that competes with your app.
That's a good point of comparison but the difference I think is at the time Windows was a dominant OS, and most people's only option. For them to come out and compete with Netscape and prevent Netscape from operating on their OS would be a death sentence for Netscape because it is not reasonable for Netscape's response to be "Fine we'll make our own OS!".
There's a huge difference between "build your own OS" and "build your own website login".
It's not a very tall order to create a social website that doesn't use FB login. Social websites should not by default have a right to all of facebook's users. They should have to build that user base themselves.
I agree with the conclusion, but I don't think it should be done through the current antitrust framework.
Your first point and most of the others may or may not be illegal. If they are legal, what the MPs need to do is their jobs: make laws. But most are not directly related to company size and/or market share.
Your second point does sound like a trust and I think you're right to link this to the WhatsApp sale.
But... Regardless of the FB/WhatsApp conclusion, that is a one-off that won't lead to much systemic change whether it's a fine or an order to de-merge.
What we need (imo) is a whole new approach to antitrust that doesn't hinge on a definition of monopoly.
Companies beyond a certain size should just be put under a different set of obligations than smaller ones. They should be assumed to have market power by merit of their size.
When it comes to GDPR and such, these need to be written differently for large companies. Basically, no more equality before the law for companies. What we get in return is rule of law.
In theory and on paper, that is. Making that into a functional legal system hasn't happened.
In any case, half my point is that pricing power is not the operative definition of market power anymore. ExxonMobil can and does have all sorts of influence on governments, job markets and such. They can do tax stuff a restaurant can't.
What I am proposing is that the definition of a monopoly is not useable, for policy. It's fine as an economic & academic concept, not as a legal one.
Large companies aren't bad just because they tend to monopolize the market they operate in. They can also lobby far more effectively, then there's the issue of the job market competitiveness etc.
A world in which companies are not allowed to selectively withhold proprietary data from some competitors, and form partnerships with others, is a world that suboptimally promotes innovation and economic growth.
That's a statement that requires evidence and proof.
Market fundamentalists are so used to advancing arguments like this without being challenged that they have become almost completely divorced from the scientific method entirely.
Maybe being forced to share this kind of proprietary data, in this specific type of circumstance, will actually promote a more diverse marketplace with higher innovation. Maybe it won't.
You're not a "fundamentalist" because you mention the fundamentals of a field of study. It's impossible to have a discussion about anything if some things are not taken as axiomatic.
And given that the person you're responding to is stating something that most economists would agree with, I think the impetus is on you to tear it down with "the scientific method" if you think the prevailing wisdom is wrong.
I have a degree in economics and I disagree with it.
If you were in fact an economist yourself, perhaps you would have recognized market fundamentalism as a term with a specific meaning, often used by economists:
No, you're a fundamentalist if you stick to an ideology (e.g. trickle-down) in spite of all evidence suggesting it is highly flawed.
> It's impossible to have a discussion about anything if some things are not taken as axiomatic.
You mean you personally find it impossible to have a discussion with someone who disagrees with you on the axioms of economics because it means you can't frame the argument in your own terms, which would be that there is no alternative to what you are espousing.
You don't actually know my views or what I'd espouse, so I'm curious about your hostility here :)
My only point was that you may have better success railing against the conventional wisdom by providing an actual argument that it's wrong, not by ranting and raving about the existence of prevailing wisdom itself, and demanding that everyone provide a detailed proof every time they reference it.
You should check out the works of Milton Friedman, you might find them interesting!
A world in which data monopolies and platform companies selectively withdraw much smaller companies' access to their services based on whether they wish those companies to fail or not probably isn't one which optimally promotes innovation and economic growth though.
Facebook is not a data monopoly, unless you take "data monopoly" to mean "has a set of a data that no one else does", in which case everyone is a data monopoly.
Comparable dataset of what, exactly? Of Facebook's internal user data?
Think about how much data Amazon has about all the suppliers and products they've sold over the last twenty years. Should they be forced to hand it over to anyone who wants it just because of the fact that its size and exclusivity makes it very valuable?
What about the governments databases on everything under the sun? Should all of that be public?
Maybe the answer to the above questions is yes, I don't know, but I don't think it's axiomatic.
Nobody is suggesting everyone has a right to Facebook or any other company's data (just like nobody seriously suggested MS shouldn't be able to impose any restrictions on OEM Windows installations). They're suggesting that if a company sets itself up as a provider of data and other platform services to encourage everyone else in the market to become dependent upon it then selectively pulls the plug on the most successful companies or forces them into deeper partnerships, it's definitely harming the ecosystem. And that if Facebook has done that, it might also have fallen foul of laws which are there to stop that sort of practice.
I definitely see those arguments being made (in this thread, in fact), but that may not be the argument you're making.
I think the issue I have is with the idea that Facebook is successful, and therefore can no longer prevent their competitors from blatantly taking advantage of them. But I'm not sure where to draw the line.
If Visa and Mastercard decided for whatever reason that they wanted to kill some small banking chain and not allow payments for them or their customers to go through their network, that would be bad. But I don't think that means that they have to let an upstart competitor credit card startup have access to the huge multi-billion-dollar network they've built up, does it? Even if they might extend that network to other entities that don't compete with them?
I'd be pretty concerned if Visa introduced a Visa Mobile Payments platform and withdrew functionality from the fastest growing payment apps because the mobile payments sector broadly competes with the credit card sector. I think that's a closer analogy than either of your examples
Well, that'd be the only use case for that platform, right? Maybe a better example would be if they explicitly told companies who joined the platform that they couldn't compete with Visa's core business, and then Mastercard signed up because they have a mobile payments division. Would Visa be justified in kicking them off?
I'm not advocating for anything here, but isn't hurting innovation and economic growth exactly what FB is doing by snuffing out promising products based on analytics?
I think when you choose to sell your products on Amazon, or host your videos on YouTube or use FB login for your app, you're necessarily taking a risk by depending on a third-party for your success. It's possible to create your own login system or host your own videos or own your own e-commerce site and there are even a whole host of valid competitors for all these services too.
I know you're being facetious, but it might actually be?
In nature, random mutation leads to greater genetic diversity and speciation. In some mathematical optimizations, introducing random noise can help avoid getting stuck in local maxima.
Using analytics to restrict apps based on their popularity - if that's what Facebook did - is definitely much more anti-competitive behaviour and even more likely to hurt growth than simply being arbitrary or completely random in their decision making. But yes, a non-arbitrary policy of allowing developers access if they followed a transparent set of rules would obviously be better.
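The optimization aside can be made concrete. Below is a toy simulated-annealing sketch (my own illustration, with an arbitrary function and parameters, not anything from the thread) of how accepting occasional worse moves lets a search escape a local maximum:

```python
import math
import random

def anneal(f, x0, steps=20_000, temp0=2.0, step_size=0.5, seed=42):
    """Toy simulated annealing that maximizes f(x).

    Worse moves are sometimes accepted, with a probability that
    shrinks as the "temperature" cools, so the search can hop out
    of a local maximum instead of getting stuck in it.
    """
    rng = random.Random(seed)
    x = x0
    best_x, best_val = x0, f(x0)
    for step in range(1, steps + 1):
        temp = temp0 / step                      # simple cooling schedule
        candidate = x + rng.gauss(0, step_size)  # random perturbation
        delta = f(candidate) - f(x)
        if delta >= 0 or rng.random() < math.exp(delta / temp):
            x = candidate                        # accept, possibly a worse move
        if f(x) > best_val:
            best_x, best_val = x, f(x)
    return best_x, best_val

# A bumpy example: a local maximum near x = -0.9 and the global
# maximum near x = 1.1 (value roughly 2.06).
bumpy = lambda x: -x**4 + 2 * x**2 + x
```

Starting near the local maximum (say x0 = -0.9), a purely greedy climber with tiny steps would stay put, while the noisy version can cross the valley. The same intuition motivates injecting randomness in many other search and learning methods.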
Hidden in a lot of these economic growth arguments is an unstated assumption of the effectiveness of trickle-down economics which itself has been shown to be ineffective many times (see - https://www.thebalance.com/trickle-down-economics-theory-eff...)
And hidden in a lot of these anti-economic growth arguments is an unstated assumption of the effectiveness of planned economies, which itself has been shown to be ineffective many times :)
Planned economies need not be communist, and no, but I was attempting (probably poorly) to draw attention to the ridiculousness of the argument that being pro-economic growth requires believing in trickle down economics.
> A world in which companies are not allowed to selectively withhold proprietary data from some competitors, and form partnerships with others, is a world that suboptimally promotes innovation and economic growth.
Maybe economic growth, but how is innovation promoted by limiting access?
This may be true as a rule of thumb. Different rules apply to monopolies (in most developed countries anyway), as different dynamics come into play and hinder innovation.
That's a rehash of the argument against the existence of proprietary tech (and patents).
It also has the same limitations:
If there's no competitive advantage to having proprietary data, then what's the point of developing the tech and services to try and obtain it?
I can't answer your question but i think you should be more specific.
Your phrasing obviously comes with the interpretation that if a split-up would bankrupt a part, it shouldn't be done.
Why should this be taken into consideration?
Secondly, it has value.
A different party could invest and keep it going.
Why is it important that it sustains itself?
If the objective is to increase competition, by making WhatsApp into the Facebook competitor they currently aren't, you need them to stay in business.
Of course, if you just wanted to punish facebook by destroying shareholder value, or you're certain WhatsApp users will move to an FB competitor rather than to FB itself, you might not care if WhatsApp stays in business post-breakup.
I guess it's not like anyone is employed to work on Whatsapp or anything. It's not like millions of people use it to stay in contact with their families every day.
Whatsapp has enough of a userbase and a usecase that enough people will be willing to pay to sustain it.
What _will_ happen is that other free services will pop up and be used, and some users will migrate to those. Which is a good thing. We'll have actual competition on the field instead of a single dominant player.
I've long held a theory that many of the "basic infrastructure" pieces of technology would best be put forward by non-profits. The profit imperative can be an insidious corrosive influence.
When the WhatsApp acquisition happened while they had only a few dozen employees and were reportedly profitable off of their $1 per year business model, it was clear to me that the days when a small non-profit could run the world’s chat infrastructure weren’t far off.
That the Signal Foundation is now basically funded by interest earned on money made from that very WhatsApp sale is quite poetic.
OT: similarly for healthcare. In the UK you can see that once some parts were privatised, more and more got privatised. Not surprisingly, these companies have a duty to grow.
Can you really blame FB for not providing API access to competitors?
If you want make Farmville with FB login go ahead, but if you try to make "FB 2.0" with FB Login and all friend connections preserved but keep all the ad revenue yourself obviously Facebook is gonna put their foot down.
That's 5 years revenue. $100 per user, sounds reasonable. Facebook can put in an easy "claim your $100" button on their site. Would that just be a "cost of doing business"?
Interesting challenge: how to design that button to make it conform to mandated guidelines on its appearance, size, position and wording whilst also making it look enough like the sort of probable-scam advert users generally ignore to limit the number of Facebook members who actually claim it. Facebook probably has data to help them with that too :D
Just to underline the obvious point the lawmaker is making, perhaps if Zuckerberg would show up to hearings that lawmakers invite him to, then he would be able to provide the context that Facebook says is missing.
Translation: if you turn down our "invitation", we'll misuse our legal power to seize your internal documents and then publish them to embarrass you.
Accurate?
To be clear, I'm not saying that they can't seize and investigate or even punish if the law was broken. That's the job of government. But this just seems like a petty attempt to embarrass. However, I'm American and haven't been following the story closely, so I may be missing context.
You are missing context: look up "Parliamentary Privilege" and "Parliamentary Sovereignty" which do not have US equivalents. The "invitation" was effectively a subpoena.
But can the legal ramification for ignoring the invitation be anything they can think of? If a British citizen did the same, could they claim that they were investigating them, barge into their home unannounced and seize all devices, and then later publish any nude photos found, under the justification of "hey, you should have come when we invited you"?
Why not just prosecute for whatever law was broken?
No, since the person whose documents were seized was in the UK on a trip. Except for special cases where immunity is granted, physical presence in a country means that one is subject to the laws and jurisdiction of that country. In this case, the UK Parliament had the authority to compel the production of the documents in question, by threat of imprisonment if necessary.
A question for any legal scholars out there: the seizure of the documents would be contempt of court in the US, would it not? The person who was threatened with arrest has the defense of duress, so could a US court charge the MPs who ordered the seizure? I don't think the UK would extradite, but if the MPs were to visit the US without immunity, could they be arrested for violating US law (even though their actions are apparently within their rights under UK law)?
Is it misuse, or does it serve their ultimate purpose, which is to act in the best interest of their constituents? You could easily take the stance that parliament was being generous to give fb the opportunity to become part of the dialog beforehand (provide that context that was supposedly lacking), and that zuck was foolish to turn them down.
they cannot misuse their legal powers, as parliament is Sovereign in the UK.
Facebook took the risk by not complying with the wishes of a sovereign state, and that state then takes legal action against Facebook.
Facebook's arrogance in this matter is staggering, and by some could even be seen as a direct confrontation with Parliament.
If Parliament did not respond, it would look weak and in some sense even delegitimize its sovereignty. If they backed down in the face of Facebook's apparent arrogance, what would stop other companies from trying the same?
Let's not forget Mark Zuckerberg was summoned before Parliament. Of course he could decline (mainly because he is not a UK citizen), but being summoned is quite a big deal, considering they (the UK Parliament) are literally asking you to come over and explain what you're doing.
Don't be surprised there are ramifications when a state asks for your presence and you arrogantly decline.
> they cannot misuse their legal powers, as parliament is Sovereign in the UK.
Wow. So anything they do can’t be a misuse of power as long as it’s technically legal? And if it’s not legal, can they just change the laws and then do it?
My issue is not with retaliation, but if Facebook did something wrong, why not investigate and prosecute? Instead they’re going to just try and embarrass them. Which indicates that they don’t actually have any legal case.
> "As you know all the growth team is planning on shipping a permissions update on Android at the end of this month. They are going to include the 'read call log' permission... This is a pretty high-risk thing to do from a PR perspective but it appears that the growth team will charge ahead and do it."
Meanwhile, at the Growth Team™ office: "I think implanting these chips into our customers' brains is high risk, but we're going to charge ahead anyway."
Exhibit 79 – linking data access to spending on advertising at Facebook
Email from Konstantinos Papamiltidas [FB] to Ime Archibong [FB]
18 September 2013 – 10.06am
From email about slides prepared for talk to DevOps at 11am on 19 September 2013
'Key points: 1/ Find out what other apps like Refresh are out that we don't want to share
data with and figure out if they spend on NEKO. Communicate in one-go to all apps that
don't spend that those permission will be revoked. Communicate to the rest that they need
to spend on NEKO $250k a year to maintain access to the data.'
I think Facebook is terrible. So I don't use it. I have a choice.
These same MPs would likely howl in fury if their secret communications were stolen and published by an entity like Wikileaks (for example). Be careful what you wish for.
Of course the difference here is that one is "legal".
I don't think your choice to not use Facebook alters by very much the amount of data they collect about you. I think other steps are needed if that is your goal.
Let them howl. When they've been involved in shady stuff that's against the public interest, I will have no sympathy for them, just as I don't when FB is the "victim". I am glad they didn't decide to sit on it because of any fears about "I don't want this happening to me, so I won't snitch".
By the way, these emails were not obtained via hacking/phishing, so the Wikileaks comparison doesn't make any sense.
How many of these MPs show up in Panama/Paradise Papers?
This was a valid question. When we're talking about them obtaining and publishing the FB files, asking whether they would want their own dirty laundry aired, and whether they show up in those releases, is both relevant and obvious.
Exhibit 170 – Mark Zuckerberg discussing linking data to revenue
Mark Zuckerberg email – dated 7 October 2012
'I've been thinking about platform business model a lot this weekend…if we make it so devs
can generate revenue for us in different ways, then it makes it more acceptable for us to
charge them quite a bit more for using platform. The basic idea is that any other revenue
you generate for us earns you a credit towards whatever fees you own us for using plaform.
For most developers this would probably cover cost completely. So instead of every paying
us directly, they'd just use our payments or ads products. A basic model could be:
Login with Facebook is always free
Pushing content to Facebook is always free
Reading anything, including friends, costs a lot of money. Perhaps on the order of
$0.10/user each year.
For the money that you owe, you can cover it in any of the following ways:
Buys ads from us in neko or another system
Run our ads in your app or website (canvas apps already do this)
Use our payments
Sell your items in our Karma store.
Or if the revenue we get from those doesn't add up to more that the fees you owe us, then
you just pay us the fee directly.'
Seems pretty reasonable to me? Especially in the context of tossing ideas around.
Note that I don't read this as "let's get devs on board and then yank the rug out from under them later". Rather, it seems like they're trying to find a way to make it win/win, where devs can either pay for the platform directly, or use it for free if they can do so with a business model that helps Facebook make money elsewhere.
> devs can generate revenue for us in different ways, then it makes it more acceptable for us to charge them quite a bit more
Generate revenue for us, so we can charge them quite a bit more.
> The basic idea is that any other revenue you generate for us earns you a credit towards whatever fees you own us for using plaform. For most developers this would probably cover cost completely.
So devs can get credit (which they can't convert to money, I guess?), and Facebook gets more money. That's only win/win if you don't consider the audience FB "generates revenue" from. And it would also depend on the ratio: would you care about saving, say, $10 while generating $50 for FB, versus saving $10 while generating $500, and what would those figures actually be?
Money isn't "made", it's always shifted around. Value can be generated, not money. So making money for Facebook means taking it from somewhere.
I have been involved with companies that have wanted to do things with Facebook non-personal data (eg. pages) and could never get anybody to talk. Some of these were little companies in the "flyover" states, one of these was pretty good sized in L.A.
When you see them playing favorites you see another thing that S.V. will struggle with for years.
One reason NYC is so important for finance is that people go have lunch and trade insider information without leaving a paper trail.
In the same way S.V. companies circle jerk each other giving each other special privileges, staging fake acquisitions so sons of investment bankers can make it look like they were successful startup founders, etc. Sometimes they even get a stooge to come in from a place like Saudi Arabia or Japan to buy them out so they can tell the people who put money in their fund that they made money. Those folks will lose a lot of their A.U.M. but they probably get paid off in some other way.
S.V. doesn't have any problems that wouldn't be solved by opening offices in the flyover states. But there are two things about those people.
Facebook has released a statement[1] accusing Six4Three of a "cherrypicked" document dump; it is mostly just denials with no supporting evidence. If only they hadn't undermined their own trustworthiness a few weeks ago, by denying that senior executives were culpable in their relationship with Definers and then doing a news dump right before a major US holiday admitting that their first response was a lie, people might believe their unsupported assertions.
I worked for a company that was bought by blackberry and then split up. A large portion of our developers and engineers went to Facebook in Ireland and Seattle/California.
This is my personal experience, so I don't know how widespread this is, but I suspect it's common.
I stayed friendly with many ex-coworkers on Facebook for a while, and saw some attitudes move towards a harsh alt-leftist view, with some becoming extremely violent. Anti-Israel statements started popping up, and some Jewish Facebook engineers started to self-censor and talk in personal messages instead of posting, asking me whether I was also seeing this attitude in posts.
Could just be how divided facebook made people, amplifying echo-chambers and enforcing views that "their side" is correct, but many of my old co-workers show a large political divide and a few just are wildly hardcore political in very violent tones. How this bleeds into their job, only time will tell.
I finally had to make facebook just for family and close friends and removed the app from my phone, and only use it in a web browser. I'm guilty of enjoying a good meme or political cartoon, but things are definitely more divided now. I would not doubt the documents reflect more political views seeping into their products.
The documents[0] contain market research done by Onavo[1]. This data may have been bought from them, and if so it was probably bought under a non-disclosure agreement, as is typical for those arrangements (the reasoning being that Onavo can't sell the data more than once if it is released publicly). This could pose a problem for Facebook if Onavo decides that another release like this is likely to happen again. I'd be interested if anyone can weigh in on what they think the liability situation will be here.
Facebook apparently owns Onavo, but to answer your question more generally, liability is probably very low. In my experience, contracts that license or protect proprietary, confidential, or non-public data often have an explicit exception for following lawful government orders. Which makes sense, because in general, private contracts can't overrule the law. Even if the contract did not have an explicit carve-out in the language, it's hard to believe a lawsuit would succeed. Companies and people have to follow the law, even if they don't like it.
That said, companies are free to choose who they will do business with, so a data provider could always decide a certain customer is too risky for future deals. I wouldn't necessarily call that liability, though.
The bigger problem is that, as I understand it, all of these documents were obtained from someone who had access to them as part of a lawsuit against Facebook, under a court order that required them to be kept confidential.
The law was followed to get access to these files. What are you talking about? Regardless of your agreement with the manner of procurement, nothing nefarious was done here. This is what happens when CEOs ignore governments.
Which law? They were seized by the British government under British law, but the court order which gave the person they were seized from access to them and imposed conditions on how they were used was a US one.
That makes me wonder if a confidentiality agreement should ever include sharing that information with the government? Maybe NDAs from publicly traded companies should always exempt sharing with elected officials?
Might work under US law, but not under UK law. In the UK Parliament is sovereign.
Yesterday, Parliament forced the Government to publish its confidential legal advice on the impact of the proposed Brexit deal - they had previously pushed through a motion of a type called a "humble address" - basically a petition to the Queen (who is the head of state, and nominally in control of the Government), asking for the documents to be published. When the Government tried to wriggle out of it, there was a debate, followed by a vote on a motion that the Government was in contempt of Parliament (kind of like Contempt of Court, but more so). The Government lost, and immediately published the papers.
So be in no doubt, the UK Parliament (and a bunch of others), are all really p*ssed off with FB and other tech companies giving them the runaround. They are fighting back with the weapons they have, which turn out to be pretty toothy.
I guess that if you broke an NDA saying "You may not release this data to a government" you might get sued in a US court, but would a jury convict?
Keep in mind this is all punishment by tptb for Facebook execs getting out of line and thinking they had a seat at the table. Not enough dues paid yet.
I think that's a bit harsh. You may think Facebook is terrible, but it's not so clear that everyone ought to agree with you.
If someone was working for organised crime, everyone should agree that's a real social ill and should be discouraged. There's enough evidence about it on the table, accumulated over many years.
The FB story is still unfolding. We're still in the process of hearing exactly what was done, and arguments about how good or bad it was.
Changing jobs is a somewhat big decision. Let people have a little think about things before you shame them for not making the leap when you - a guy with no skin in that game - decide FB is untouchable.
Yes, I do work at Facebook. I'm not going to get into some protracted argument about whether that makes me a bad person, or already was, but I'd like to give you something to think about.
Let's say, for the sake of argument, that we can neatly divide tech workers into good and bad. Likewise for companies. All categories are non-empty, and certain to remain so. What's the optimal assignment of workers to companies? Is it never beneficial for a good person to work at a bad company, perhaps making it less bad? Is it never harmful for a bad person to work at a good company, perhaps making it less good? Is it better to have all of the bad people concentrated at a few companies, or distributed throughout the industry?
Once you start thinking about it, I think you'll see that the Manichaean "entirely and immutably good or entirely and immutably bad" model just isn't very useful. Maybe we should talk about good vs. bad behaviors instead of demonizing (or for that matter idolizing) people - especially large groups of people in aggregate.
I don't work at FB and wouldn't (nor do I even have an account), but even I can recognize that to many the benefits of FB outweigh the negatives. In a tech-centric board such as this one coupled with media/government-driven narratives to the contrary, it can appear to be obvious that there are way more negatives than positives, the world would be better w/out FB, and you should quit. But in a non-tech-environment the view is sometimes the complete opposite, and people just can't fathom why people want to use FB (as opposed to them being "forced" due to network effects and lack of alternatives).
This is a microcosm of our political environment where when one subjectively determines the weight of harm, they attempt to objectively apply it to others. Sure, FB does immoral things. But why can't we simply recognize that people disagree about whether harm exists, whether value exists, and what weights are applied to each? As you say, this is not clear good vs bad, this is some good vs some bad and at what levels.
> This is a microcosm of our political environment where when one subjectively determines the weight of harm, they attempt to objectively apply it to others. Sure, FB does immoral things. But why can't we simply recognize that people disagree about whether harm exists, whether value exists, and what weights are applied to each?
This is a really important point. Political movements often attribute to malice that which is adequately explained by a differing value system, but that doesn't rile up your base like proclaiming the other side is evil.
> Political movements often attribute to malice that which is adequately explained by a differing value system
Evil is only meaningful in the context of a value system.
Malice is will for things which are evil.
Valuing something positively is willing for it to transpire.
Valuing something negatively is seeing it as evil.
Valuing something positively that others value negatively is, therefore, exactly malice in the context of the others’ value system (it's not wrongly attributed to malice.)
> you'll see that the Manichaean "entirely and immutably good or entirely and immutably bad" model just isn't very useful
Agreed. But these same arguments could be made by many terrible people throughout history. Associating with and assisting those who do bad things is bad behaviour.
Society is incentivised to restrict people who show a tendency towards assisting destructive behaviour. We aren’t at the point where everyone voluntarily at Facebook should be seen as dirty, but we’re closer to that Enron moment than many there would like to think.
This misses the point completely. This shaming is not about how good or bad you are.
Some people have determined that Facebook is an organization that's producing bad results. Socially shaming employees, removing the status they might have hoped to acquire and spiking attrition are tools being deployed to change Facebook's behavior and the behavior of Facebook's competitors.
You might claim that this strategy can't positively affect Facebook's incentives, operations, or effects. But nobody should be distracted by navel-gazing or fancy philosophizing.
I think it's going to boil down to whether a good person going to work at a bad company is actively working to change it for the better. In my experience, people who do this don't tend to last long.
If there are people who've found a good strategy for changing companies for the better without being exceptionally senior, I'd love to hear about it.
Not sure if I'd have the courage to do it, but whistle-blowing. You could even do that a while after you've left, and there are options such as media outlets with anonymous communication methods, or foreign governments/data protection authorities.
Whistle-blowing is fine for actual illegal activities (and many industries have legal protection for whistle-blowers), but for unethical acts (where ethics aren't protected by law) you don't really have the same options.
The real question is, can a moral person do more good by working at Facebook, or by refusing to work at Facebook? I would argue that Facebook is a bad actor and harmful to society, so the only people who should work at Facebook, morally speaking, are incompetent people who will damage the company.
If I were to divide large tech companies between "good" and "bad", the only one I would classify as "bad" is Facebook. Most are neutral-to-good. Facebook has a huge societal and human cost and has hardly any benefits to show for it (unlike, say, fossil energy companies, which are a threat to civilization but at least serve a useful role by fulfilling our energy needs).
Facebook is a democracy-threatening, attention-wasting, extractive institution, with a deeply unethical leadership. Most tech companies are neutral -- Amazon, Microsoft, Uber etc., I assume they're doing business for profit and not social good, but they do provide value and largely play by the rules. Some other companies I would classify as "good" as they provide considerable value to society as a by-product of doing business, like Apple and Google (disclaimer: I work for Google, and I am quite happy about that).
Facebook is the only large tech company I can think of that is just plain evil. There's no other like it. It's in a league of its own.
I would easily throw Uber into the evil category. The "contractor" loophole to evade minimum wage (minimum wage being already pretty pathetic), Greyball, and a general culture of knowingly ignoring laws cultivated by its founder.
>> Is it never beneficial for a good person to work at a bad company, perhaps making it less bad?
That's an interesting thought. If carried to its logical conclusion, the 'good people' had many, many, many years to influence the direction of the company, and whatever little 'less bad' the 'good people' are doing is being completely negated by the 'more bad' that the not-so-good people are doing. So I think its fair to conclude the 'good people' are completely inept when it comes to their ability to effect any kind of meaningful change from inside of FB, and should probably leave en masse. After all, given such across-the-board ineptness, what's going to change tomorrow that couldn't have already been changed yesterday?
Facebook is going to face a massive recruitment problem from here on. To paraphrase Richard Hamming, "the only folks left there are the moral dregs".
> the 'good people' had many, many, many years to influence the direction of the company
You might be surprised. After only a year and a half, I'm rapidly approaching a point where I've been there longer than 2/3 of my peers. There has been a huge influx of new people with new perspectives and new voices. Admittedly that's weighted a bit toward the low end of the individual-influence scale, but collectively all those people can make a difference.
Considering how downvoted this comment was within 10 minutes of posting, I wonder if people actually read it or just did a knee jerk downvote based on the first sentence.
Oh, I'm sure a couple downvoted it even before that, as soon as they saw my name. ;) That's OK, though. The important thing is to get people thinking about moral issues in a more possibly-productive way, and I haven't been disappointed.
Pretty sure you're right, but I figure at least a few people might be able to realize that "you should be ashamed" just doesn't lead in any productive direction. What if I was ashamed? Would leaving be the only positive reaction to that? As great as I think I am, I don't think Facebook would miss me. They'd just replace me. I'd just carry the taint to my next company, while my replacement also becomes tainted. Good job. Big thumbs up. In that hypothetical, is it not even possible that somebody could do more good by staying?
Leaving is by definition going to hurt them - if it wouldn’t, they would have already replaced you. It’ll either cost them (interviews, higher pay for someone else), leave them with a marginally worse developer, or both.
You've gone through some serious mental gymnastics to justify your lack of action, but it's clear that you know that Facebook is making you worse than you're making Facebook better. Rest assured that there are companies where you can work on interesting problems without facilitating genocide.
Whether or not that is true, my experience on here is that the amount of effort and self-doubt involved in this kind of response by the average HNer is so low that I suspect jealousy and schadenfreude are often involved. For example, consider how the root comment above ends.
I don't see anything shocking or surprising here. That said, I'm glad I don't have shares in Facebook.
Here's one finding:
> Facebook used Onavo to conduct global surveys of the usage of mobile apps by customers,
> and apparently without their knowledge. They used this data to assess not just how many
> people had downloaded apps, but how often they used them. This knowledge helped them to
> decide which companies to acquire, and which to treat as a threat
Here's Onavo's Wikipedia page, last edited in October:
> Facebook has leveraged Onavo's analytics platform to monitor the performance of its competitors,
> target companies for acquisition, and make other business decisions.
Hmm, looking at this with a glance, I am surprised the coverage around Onavo hasn't been bigger.
If I am following this correctly, it would be similar to Google creating a Firefox extension to enable "extra privacy," but secretly pumping the data back to Google? Or if Windows Defender sent all your web usage data and keystrokes to Microsoft?
I'm not sure how this doesn't fit the definition of Spyware?
Google got all of this data first-party, as search logs are very informative about which websites/services are popular.
I'm sure that things like that factored into the decision to buy YouTube, for example.
Onavo was a company that tried to sell VPN access to reduce data costs, found out that didn't make money, and pivoted to selling reports to publishers and advertisers.
Then FB bought them and treated them like an internal survey organisation. It's a shame really; Onavo could have been the Nielsen of the mobile age, in another timeline.
But yeah, people use data (especially private data) to make decisions about their business and others. News at 11.
Hasn't AppAnnie accomplished being "the Nielsen of the mobile age" in some ways? I'm not sure how AppAnnie collects its data; my guess would be that it is buying data from Foursquare, etc. I don't have much of a problem with that. I suppose I'm more opposed to a large company offering an app that promises privacy yet delivers the exact opposite.
So AppAnnie is directionally accurate, but not much more than that. Onavo (as far as I understand it) did lots of bias correction (standard survey/panel adjustments, really) that tended to give them better population-level accuracy, which is what you normally care about.
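For anyone wondering what "standard survey/panel adjustments" means in practice, here is a minimal post-stratification sketch. All the age brackets, counts, and rates are invented for illustration; this is not Onavo's actual methodology, just the generic textbook technique the parent comment is gesturing at.

```python
# Post-stratification: reweight a biased panel so its demographic mix
# matches known population shares. All numbers below are invented.

def poststratify(sample_counts, population_shares):
    """Per-stratum weight = population share / sample share."""
    total = sum(sample_counts.values())
    return {
        stratum: population_shares[stratum] / (count / total)
        for stratum, count in sample_counts.items()
    }

# A panel that over-represents young users (as a VPN app's user base might).
sample_counts = {"18-29": 600, "30-49": 300, "50+": 100}
population_shares = {"18-29": 0.25, "30-49": 0.40, "50+": 0.35}
weights = poststratify(sample_counts, population_shares)

# Observed app-usage rate within each stratum of the panel.
usage_rate = {"18-29": 0.80, "30-49": 0.50, "50+": 0.20}

# Naive panel average vs. the weighted (population-corrected) average.
naive = sum(usage_rate[s] * c for s, c in sample_counts.items()) / sum(
    sample_counts.values()
)
corrected = sum(
    usage_rate[s] * sample_counts[s] * weights[s] for s in weights
) / sum(sample_counts[s] * weights[s] for s in weights)

print(round(naive, 2))      # what the raw, youth-skewed panel suggests
print(round(corrected, 2))  # estimate after matching the population mix
```

The gap between the two numbers is the panel bias being corrected: a young-skewed panel wildly overstates usage of an app popular with young people, and reweighting to census-style population shares pulls the estimate back down.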
I believe 99.99...% of interactions on Facebook are positive. They are friends and family keeping in touch, posting life updates, organizing events, hobby/support communities etc.
Most ads are normal ads from legitimate businesses including a lot of small and local businesses who find it very valuable for reaching their customers and growing their business.
The recent rhetoric and news coverage acts as if Facebook is entirely full of fake news and echo chambers and political manipulation when the truth is while that stuff is happening it's a minuscule percentage of the billions of posts per day.
>I believe 99.99...% of interactions on Facebook are positive
Ah, the old Sundar Pichai maneuver:
>"Yet, a little more than a week later, Google CEO Sundar Pichai attempted to invoke an engineering defense by arguing that Google would not need to censor 'well over 99 percent' of queries."
Didn't they try to push some rather questionable licensing terms with React, or am I confusing it with a different open source library? If so, its users almost became another victim of Facebook testing how far it can go in abusing its position.
They did have a license with a patent clause allowing them to retaliate legally or restrict access, but they changed it to a more permissive one (MIT). Also, even if they had tried anything, no one was stopping you from replacing React with Inferno or Preact, which have compatible APIs.
This is clearly intended for actual staff, not support positions like building services and maintenance. Those workers aren't employees; in fact they rarely are anywhere anymore. It's all outsourced to maintenance companies, which are cheaper and not held to the same level of scrutiny for how they treat their employees.
It's garbage. Companies should employ them and pay a fair wage, but they don't.
Facebook and other companies are more like bacteria that infect the wound of digital illiteracy. I think it's time we start seeing that as a wound, not as an unavoidable or even desirable state of things. An information age requires digital literacy, otherwise you will have a priest caste, and that will abuse its power. I see no way around it, and criticizing priests for being bad priests to me enforces the idea that there even should be a priest caste.
I can't quite tell you what digital literacy is, but I know that responses along the line of "do you know how a modern microprocessor works in detail?" are silly, because many people can read and write without being historians or etymologists. And even people who can't read or write often can speak and understand a whole lot. Contrast that with some priests mumbling in Latin, with people being subject to things they aren't allowed to understand.
So maybe knowing what files are, what memory is, what a program is, having their own webspace and email, stuff like that would be a great start. And the start to that is to stop pretending that's unattainable, or that "people don't want that". If they knew what it entails in the long run, most of them would want that. And people do much more complicated things than running a website. People raise children, they drive cars, they raise and train dogs, they work in the garden, they remember all sorts of stuff about sports, and so on. Most jobs require a lot of complicated edge-case knowledge, too. So knowing what a URL or a file is, and maybe some HTML, is trivial compared to that.
If people could get 20% off on all T-Shirts for the rest of their life if they had their own website, most people on this planet would have a website before the end of the year. I exaggerate, but by how much I'm really not sure.
Let's not shame workers at Facebook. If you do, you have to shame anyone who works at any big tech firm for the decisions of its leaders. Google, Apple, Amazon, Microsoft, Oracle included.
Then shame everyone: the cleaners, anyone using Facebook as a user, investors, government officials, and every American, since the government officials are chosen by them. Include the press for promoting them through stories or by setting the discussion points for each election.
> If you do, you have to shame anyone who works at any big tech firm for the decisions of its leaders. Google, Apple, Amazon, Microsoft, Oracle included.
And? If you're not proud of the impact your work is having on the world, then you shouldn't be doing it. You only have one life. Why waste it on making the world a worse place?
I had an easy and well paying job working on an IT project at DHS. I left after the child separation thing started, even though I wasn't working on anything related to that. It just disgusted me to go to work every day.
>No shame in putting food on the table for your family.
Balderdash. Don't construct situations where people just shrug their shoulders and go along with the morally indefensible because they're getting paid for it.
Besides, no one's throwing up their hands and saying "Gosh, I really messed up this time! I guess the only options left are working at Facebook or sucking dick under a bridge!"
H1B visa holders for example. To change job you need an entire new visa application which is not guaranteed to win the lottery. Even Facebook employees can have limited options.
And it’s not just engineers. Cleaners, cooks, maintenance people.
> To change job you need an entire new visa application which is not guaranteed to win the lottery.
Nope - visa transfers when changing employer aren't subject to the visa cap and don't have to go through the lottery. At present premium processing is suspended so it can take up to 6 months, but that's not always the case.
I have sympathy for those cleaning floors at Facebook HQ, but it's disingenuous to suggest that the engineers (with compensation packages many multiples the local median, getting daily LinkedIn recruiter spam) "have no choice" about where to work.
Wow, that seems crazy even for lawyerese.. I mean, if I have an "all you can eat" restaurant, do I sell food, or do I just sell access to food? What about a buffet where a person can fill their plate once, but can pick among many things?
Not really. It's like buying a space on the wall to advertise inside the all you can eat restaurant.
The restaurant isn't telling you who goes into the restaurant. They're just selling you access to those people through that ad.
-----
The other problem, the Cambridge Analytica problem, is a bit different.
It's like saying that the restaurant requires you to tell them your name and other personal information in order to eat at the restaurant. Maybe it's to send flyers or deals, doesn't matter.
The problem happens when the restaurant also asks for your friends personal information, and you give it to them without asking your friends.
That's basically what Facebook is under fire about. Allowing your friends to very easily give your information out without you knowing.
I think "giving information about friends" is kind of a stretch. Facebook already has that information. Clicking something so that Facebook does something with it isn't quite the same as, say, entering all that info into a text field yourself. Maybe it is legally, but morally, from a common-sense perspective, FB gives the info out. Allowing API access is giving info out. We all know how servers work: you can't "take" something from a server; it always gets sent.
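The point about servers can be made concrete with a toy sketch. Everything here is hypothetical (the profiles, the `friends_list` scope, the handler) and is not Facebook's actual Graph API; it only illustrates that the server, not the client, decides what data leaves.

```python
# Toy sketch: a client can only ask; the server chooses what to emit.
# All names and scopes here are hypothetical, not Facebook's real API.

PROFILES = {
    "alice": {"email": "alice@example.com", "friends": ["bob", "carol"]},
    "bob": {"email": "bob@example.com", "friends": ["alice"]},
}

def handle_request(user, resource, granted_scopes):
    """Return data only if the server decides the scope permits it."""
    if resource == "friends" and "friends_list" in granted_scopes:
        return PROFILES[user]["friends"]
    return None  # scope not granted: nothing is sent

# An app that was granted the hypothetical "friends_list" scope gets the
# friend list only because the server chooses to send it:
print(handle_request("alice", "friends", {"friends_list"}))  # ['bob', 'carol']
print(handle_request("alice", "friends", set()))             # None
```

In other words, "harvesting" data through a permissive API is still the platform choosing, in code, to send it.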
That's why I said "very easily". I'm not saying it's necessarily the friend's fault, but the trigger is one of your friends clicking "Accept" on one of these apps.
I'm also just clarifying what happened, because people don't seem to understand how any of it works. "Facebook sells your data" is very far from reality, and yet the reality can be considered just as bad.
You’d think that the fact that it is a commonly held belief that FB literally sold users’ data on Hacker News might cause some of us to reconsider our worldview with regards to FB and its practices.
It’s terrifying that people want to use the government to regulate a free website that nobody even has to use and which has not actually caused anyone harm.
GDPR tyrants: get a life and stop using Facebook. Seriously.
That means that any site that allows two people to communicate can be complicit -- be it forums, discussion boards or chat/messages. Criminals can easily adopt code words, or encrypt their plaintext communications in ways that a website would be unable to police. How about email? Are email providers also required to detect all criminal activity occurring in their emails?
Ultimately it's a massive burden placed on companies if they are required to read and sort through all communication occurring on their platform and it also necessarily removes any and all privacy from every communication platform. Do we do the same thing for cell phone manufacturers or telecom providers? What about monitors that display the messages or keyboards that let people type the messages soliciting illegal activity?
I think it's worth considering just how much influence Facebook has that many of the other entities you mentioned do not. I also think it's worth considering possible solutions to problems that Facebook may or may not be the direct cause of before jumping directly to "Welp! Slippery slope, cant do anything about it i guess!".
So, FB gave some companies preferential access after removing it for devs, surreptitiously collected phone data after being given permission to, used cross-business analytics for M&A or competition info, and was choosy and chatty about who got what data and its value.
I agree there are clear immoralities at play, especially with collecting phone call info, but I'm not sure it is worthy of a government inquiry of this size. Much of this is standard business and worthy of being decried as other immoral business acts might be. Would we expect such an inquiry for any other company, or are there politics at play? I feel the Cambridge Analytica issue is more witch hunt than a real-harm-exacted issue. How would one quantify the harm? Is this vengeance for Brexit, Trump, and/or Zuck's no-show?
All I've learned is to be cautious when having email conversations on sensitive subjects (no paper trail) and to be cautious when flying execs to the UK and to be cautious having any real business there for fear of having confidential information raided and published. Obviously it's not standard practice but, as with other government-imposed internet tactics of late, as a business owner it just marks an increase on my "riskometer" that I use to evaluate where to do business.
You need to understand that this is a highly unusual situation.
Part of the UK Parliamentary system is that there are "Select Committees" of MPs, whose job is to review and hold to account the performance of government bodies and departments. They may also review areas of national interest within their remit, and invite witnesses to give evidence to them.
For example, there have been significant problems with a rail timetable change recently, and the Transport Select Committee invited the heads of various rail companies to give evidence. You don't not turn up - an invitation from a select committee to give evidence has the same sort of weight as a court summons to give evidence.
So Zuck was invited to give evidence to the Digital Culture, Media and Sport Select Committee (DCMS), investigating the impact of Fake News on politics. Twice (at least). Both times he sent a minion.
DCMS then set up a Grand Committee hearing, with parliamentarians from a bunch of other countries (Argentina, Belgium, Brazil, Canada, France, Ireland, Latvia, Singapore and UK), and invited Zuck. Again, no show.
So basically, since Zuck gave a whole bunch of governments the finger, the UK government used its powers to collect relevant evidence.
I would suggest that unless you are
a) the CEO of a Unicorn, or
b) Currently suing a Unicorn and
c) that Unicorn is giving the finger to a government
I think you'll find under the British constitution, and according to Erskine May, the definitive guide to Parliamentary practice, that it does.
"any act or omission which obstructs or impedes either House of Parliament in the performance of its functions, or which obstructs or impedes any Member or officer of such House in the discharge of his duty, or which has a tendency, directly or indirectly, to produce such results, may be treated as a contempt even though there is no precedent of the offence"
The government itself was found in contempt of Parliament over Brexit just this week. As the linked article notes, this applies just as much to refusing to appear before a Parliamentary committee.
It's unclear. Trouble is none of the powers seem to have been necessary in about a century, so it probably needs exploring in court.
Or formalising powers in some new legislation, which will eventually result if people start to treat committee summons as optional. This is probably well overdue. :)
Strictly, you may be right - it is not a criminal offence to ignore an invitation from a select committee.
I believe (but may be wrong) that the committee can request Parliament to consider the matter, and at that point the Serjeant at Arms (who does have legal powers) can fetch you to appear at the bar of (the entrance to) the House of Commons chamber.
Practically speaking, no-one in their right mind, based in the UK, ignores an invitation to attend a Select Committee hearing. Very bad form :-)
"If a Select Committee wishes to require the attendance of a witness, an informal request is issued. If the witness is unwilling, a period of negotiation usually follows. If it is clear that the witness is not willing to attend, and the Committee wishes to insist, an order for attendance is made by the Committee, signed by the Chair, and then served upon the witness by the Serjeant at Arms or the Serjeant’s representative."
that may be the case, but snubbing reasonable requests from an institution with essentially unlimited power in its own domain is unlikely to end well for you if you want to remain operating there
Surely the only difference - from a UK perspective - is that one is a court, appointed (mumble mumble) by the Queen, and the other is the government, appointed (mumble mumble) by the Queen.
So yes, if you've been summoned by the Queen's representatives to appear, and you refuse, that's a pretty big snub.
Of course, with a court summons, there are well defined consequences - and everyone knows it - if you refuse to attend, because it's a common enough occurrence. I don't know what the consequences would be legally speaking if you were based in the UK and refused a select committee summons - I doubt it's a particularly common occurrence.
> I don't know what the consequences would be legally speaking if you were based in the UK and refused a select committee summons - I doubt it's a particularly common occurrence.
This is the problem - nobody knows what the consequence would be, so there really isn’t one.
The response to ‘you must attend’, ‘or what?’ is ‘or we’re not sure - please just attend’.
I can understand it's highly unusual. I don't have an absolute fear or no fear of doing business in the UK, it just ups my "riskometer" I mentioned. Surely it is clear that a vindictive government raiding a visitor and publishing confidential business documents should give one pause about the government's ideals? And I think we can make judgements about how decisions are reached instead of assuming that the exact steps that led to the decisions are the only reason they may go to such lengths. It is common for those dismissing chilling effects to say certain situations are special, but the effects are about ideals of the powerholders as much as their actions.
What? OP literally just described how skipping a request for info is comparable to skipping a court summons in the US. If you skip a court summons 4 times, the government comes after you. How is this situation different?
> publishing
This certainly does raise questions.
> government's ideals
Investigating companies who might be complicit in crimes? (The government is investigating fake news, most of which is shared via Facebook. FB probably didn't commit any crime, but they are closely involved, and failing to cooperate raises red flags. Fake news isn't a crime, but there were questions about Brexit advertising funds, election funds, etc.)
Why leave off the part of the quote that said "and publishing"? I assume an accident, but if you include that in the quote, you will understand how the situation is different. The concern about ideals is how they chose the target and what they are doing with the documents. So many people naively get caught up in the overarching issue of guilt that they are eager to overlook improprieties by the investigators.
> If you skip a court summons 4 times, the government comes after you. How is this situation different?
They didn't come after Facebook to obtain the documents. They went after someone who was suing Facebook and who was visiting England, and threatened him with jail unless he stole Facebook documents for them (which he had restricted access to as part of the discovery processes of his lawsuit against Facebook).
If Zuck had given evidence it would be in the public domain. He didn't, they have gained access to the documents, and are putting the bits they believe relevant to their inquiry into the public domain.
Thx for the link. Every time I read the word 'whitelist', I think: haven't they caught up with the times? That's racist, 'whitelist'. Why does white suggest good?
"White" as symbolizing purity or goodness and "black" for contamination or evil is used by a wide variety of cultures around the world, only a minority of which happens to consist primarily of people with un-melanined skin. If you think "whitelist" and "blacklist" is racist, just wait until you find out about Yin and Yang.
We're changing instances of whitelist and blacklist to "allowlist" and "blocklist" similar to the conversion of master and slave to "primary" and "replica".
Edit: it's interesting that I'm being downvoted. What a weird world this is.
> We're changing instances of whitelist and blacklist to "allowlist" and "blocklist" similar to the conversion of master and slave to "primary" and "replica".
Are you going to change the whole dictionary as well? Blackout? Black hole? Dark energy? Dark matter? Dark side of the moon? And what not? Might as well deem the whole English language offensive, since it's the language of slavers?
Maybe you should focus on real problems instead of Silicon Valley-style slacktivism? And I say that as someone of African descent: the great majority of us don't care about the master/slave dichotomy in source code. Stop being offended on our behalf; there are real problems in the real world that need fixing, and this is not one of them.
Changing these takes next to zero effort. "Blacklist" and "whitelist" are more confusing to new engineers than "allowlist" and "blocklist", since the latter terms speak for themselves.
Language changes over time. These are just words. I don't understand why we have to hold onto them like they're our precious children. If we can be more inclusive with the labels we choose, that's a good thing.
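Whatever one thinks of the motivation, the claim that the rename is low-effort is easy to check: it's a mechanical identifier change with no behavior change. A sketch with hypothetical setting names:

```python
# Sketch of the rename in a hypothetical config module; behavior is
# unchanged, only the identifiers differ from the older WHITELIST/BLACKLIST.

ALLOWLIST = {"alice@example.com"}   # formerly WHITELIST
BLOCKLIST = {"spam@example.com"}    # formerly BLACKLIST

def is_permitted(sender: str) -> bool:
    """Block anything on the blocklist; otherwise require the allowlist."""
    if sender in BLOCKLIST:
        return False
    return sender in ALLOWLIST

print(is_permitted("alice@example.com"))     # True
print(is_permitted("spam@example.com"))      # False
print(is_permitted("stranger@example.com"))  # False
```

The new names also describe the check directly (allow vs. block), which is the "speak for themselves" point above.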
> If we can be more inclusive with the labels we choose, that's a good thing.
You are claiming these words are "not inclusive" in the first place, which is purely a matter of ideology, not semantics. Words have different meanings in different contexts. If you can't understand that, of course you will find everything "offensive".
Starting with the word "inclusive" itself, which is basically newspeak at this point. You are right, language changes over time, and it's being manipulated by people like you to stir controversy and division, to deem this or that racist or sexist, because you willfully ignore context for purely political goals. There is nothing confusing about "blacklist" or "whitelist": anybody can look up these words in a dictionary, and if your "engineers" are incapable of doing that, maybe you should hire better ones. "Allowlist", on the other hand, is a useless neologism driven only by the need to control the language and force your ideology on others, under threat of being deemed "not inclusive".
This is a political stunt, nothing more, you are asserting control over others, in the name of "inclusiveness" which is rather exclusive as a matter of fact...
> You are claiming these words are "not inclusive" at first place, which is purely a matter of ideology, not semantics. Words have different meaning in different contexts. If you can't understand that, off course you will find everything "offensive".
But "blacklist" could be seen in a negative light in at least some circumstances whereas that's not possible for "blocklist" in any circumstance. So why not use it? What's the harm?
> There is nothing confusing about "blacklist" or "whitelist". anybody can look up these words in a dictionary, if your "engineers" are incapable of doing that, maybe you should hire better ones.
You're misunderstanding their point. They are not saying "whitelist" is too confusing, just that "allowlist" is equally as good and in some cases maybe even slightly better.
> you are asserting control over others
Just because someone is criticizing something you've done doesn't mean they are "asserting control" over you.
Well, who decides what is deemed "not inclusive" now? you?
> So why not use it? What's the harm?
The harm is people like you, with your intimidation and harassment tactics on social media, trying to force your morals and political beliefs on everyone else in every possible circle, and the insane finger-pointing at those who refuse to fall in line because they don't agree with your ideology.
Antirez, the author of Redis, has something to say about it, given the pressure he was put under with the threat of being cast out of the open source community because the verbiage of his project was deemed "non-inclusive". And he is not the only victim. You've gone too far.
> You're misunderstanding their point. They are not saying "whitelist" is too confusing, just that "allowlist" is equally as good and in some cases maybe even slightly better.
A word already exists for this, "whitelist" and it has nothing to do with race. You chose to make it about race, because you subscribe to a specific political framework, not out of "good intentions". You chose to deem it "problematic", because it's just another political battle for you and any victory is good to take. This is madness. Do you want me to list the groups across history that used the exact same dirty tactics? Although I'd rather not.
edit: there is an obvious sophistry in the act of complaining about racism while furthering racist "idioms" by claiming that a word containing "black" is automatically associated with race, or that the idea of "slave" automatically refers to the European slave trade and thus hurts the feelings of people of African descent. This rejection of semantics and context is not "language evolution"; it's straight-out language hijacking. Now of course, you're free to believe whatever you want, but we are past that point: we are now in an era where people who don't subscribe to that same ideology are harassed, intimidated, and coerced into submission.
I don't know why you're strawmanning me into being responsible for every act of suppression by a person who claimed to be acting for social justice. I am not deeming anything any which way. I am just saying that there are obvious conceivable negative interpretations of the terms "whitelist" and "blacklist" that don't exist with the terms "allowlist" and "blocklist". And the work required to change the language that you use in this case is very small, so it's worth considering. That's it, that's the entirety of the argument here. Nobody is demanding suppression of anyone's ideas.
You gave some examples of how social justice overreach can be used as a guise to suppress people. Which is not really what I asked. I asked what the harm would be of you adopting the more sensitive language in your own usage. I didn't ask what the harm would be of you becoming a crusader against anything that could be perceived in a racial way.
> I asked what the harm would be of you adopting the more sensitive language in your own usage.
"sensitive"
Sensitive according to whom? What morals? What ideology? I'm not hurting my own feelings as a black man every time I use "blacklist" or "slave" in a specific context that is even explained in a dictionary. Why are you trying to force your political beliefs (because that's what they are) on me? Why are you constantly patronizing me? That, I do find offensive. I'm not going to change the canonical definition of the words I use to please you.
It really isn't interesting at all. It's actually quite banal at this point. I suppose you're one of the types that thinks that the word "blackboard" is racist too. People like you make the internet experience and life in general an absolute ball-ache to be a part of. Seeing a racial issue in words like "blacklist" and "blackboard" is like seeing an issue of decency in a woman breastfeeding, the commonality being that it is your problem alone.
No, that's much different. "Blackboard" means a board which is black. "Blacklist" means a list of things that are bad (black being a metaphor for bad).
You’re right regarding the meanings. They are two valid uses of the word black. Neither invokes any association to skin colour, but you choose to. Am I banned from falling upon dark times, because some people also have dark skin?
No, and I don't think anybody is suggesting you should be banned or that it's a grievous sin to use such an analogy. The point is just that it's something which could conceivably be seen negatively in some circumstances and it's also only a small amount of work to adopt different language.
This kind of dismissal is basically the same as telling investors in the tech sector that they ought to be investing in curing cancer instead. Nobody is asking you to drop everything and make social justice the #1 priority in your life. It's just a small issue which is worth a small amount of consideration -- certainly not vehement denial, though.
It's not the same. There is a measurable difference between solving cancer and not solving cancer. Your issue is actually not an issue at all, and it isn't worth any consideration.
I've wondered that too. My inexpert, unresearched guess is that people are scared of darkness, because we don't see too well. Hence black is a scary/bad color, and blacklists are the things to avoid.
Brightness is good - you can see, it's usually warm, etc. Hence whitelist.
But like I said, total off-the-cuff guess, take it for what it's worth.
There's plenty of similar cases in IT. Some people find Master/Slave offensive - though I personally believe this is a 'looking for problems'-kind of thing.
Yeah, we'd have to go through all sorts of books and documents on mechanics and electronics to whiteout (Doh! doing it again!!!) reference to those terms.
Actually, "rub out" goes all the way back to Teletype keyboards:
> "This code was originally used to mark deleted characters on punched tape, since any character could be changed to all ones by punching holes everywhere. If a character was punched erroneously, punching out all seven bits caused this position to be ignored or deleted, a computer version of correction fluid."
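The quoted tape-era claim checks out arithmetically: punching all seven holes in a 7-bit column yields 0x7F, which is exactly the code point ASCII assigns to DEL. A quick check:

```python
# Punching all seven holes in a 7-bit tape column sets every bit to 1.
rubbed_out = 0b1111111

assert rubbed_out == 0x7F == 127
# ASCII assigns exactly that code point to DEL, the "delete" control char.
assert chr(rubbed_out) == "\x7f"

# Any erroneously punched character can be turned into rubout by punching
# the remaining holes, because OR-ing with all-ones always gives all-ones:
for wrong_char in range(128):
    assert (wrong_char | 0b1111111) == rubbed_out
```

This is why rubout worked on tape but not on cards: you can always add holes, never fill them back in.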
Historically (e.g. in different religions and such), light was associated with good and dark with bad/evil. You have yin and yang, light and dark, day and night, positive and negative. It has no racial connotations.
> Thx for the link. Every time I read the word 'whitelist', I think: haven't they caught up with the times? That's racist, 'whitelist'. Why does white suggest good?
I can’t tell if this is satire. Which means it’s either really good, or sad times we’re living in.
"white" people aren't actually white-colored, and "black" people aren't actually black-colored. The real problem is that we don't say "fair" and "dark" skinned.
https://www.parliament.uk/documents/commons-committees/cultu...