Over the last few years I have moved my entire digital life to the Apple ecosystem, with a deep belief that Apple would care about "my" privacy for the long term. This move to create a backdoor on the iPhone makes me very worried about relying on the Apple ecosystem.
Apple has long set itself in opposition to user control and autonomy. That approach runs bone-deep in their DNA; it's practically their brand! To me, that DNA makes a move like this entirely unsurprising. To be clear, this isn't exclusively a bad thing: a paternalistic, limiting approach to one's userbase allows for everything from better hardware/software integration to lower risk of malware to Steve Jobs' famous "freedom from porn".
By contrast, their privacy messaging runs much more shallow, and those who uncritically swallowed it always struck me as rather naive. Apple's ostensible pro-privacy stance has nothing to do with their ethos as a company, but was just convenient positioning relative to Google given their respective lines of business. It's fair game to highlight this advantage, but it's a lot like Facebook saying they "prioritize" avoiding abusive manufacturing practices more than Apple...while technically true, it's a function of their market position, and is not very useful for understanding the org's future behavior.
For years I've found it interesting that Apple's privacy stance hasn't really shifted much over the past decade (other than the messaging in their advertising, etc.), while Google's stance has changed so dramatically that, by comparison, Apple's gone from invasive and overbearing to respectful and privacy-valuing.
I don’t want to be mean, but why would you entertain such a “deep belief” about a corporation? I’ve heard similar things from many Apple users. What leads people to have this kind of faith in particular corporations? It’s really alien to me.
I don't know how Apple has managed to get the image of a "privacy friendly" company while painting Google as "privacy unfriendly" at the same time. Apple is better than Google, but mostly because they can't gather as much data about you as Google does (this is not related to iOS vs Android; it's about how pervasive Google is on the WWW), and that's about the only practical difference between the two. And on iOS, Apple is also just one silent App Store update away from being able to exploit your data, not very different from Android.
Even if I consider Google enemy #1 right now, I also don't really see them as privacy unfriendly. They have a shitton of data, but what have they exactly done with it that warrants the moniker? Microsoft seems to me to be much more privacy unfriendly (e.g., LinkedIn).
This one is annoying to me. I'm pretty sure Apple literally paid for that reputation. They just started marketing themselves as privacy-focused a few years ago and everyone slurped it up. What's annoying about it? Well, beyond them milking the privacy wave, I do believe that at a hardware and system-software level Apple did outplay the competition for a very long time: Keychain, Secure Enclave, biometric auth, secure boot, signed binaries, pointer auth, etc. They've pushed the state of the art in getting those technologies into consumer hands and did it better than anybody else. So it's somewhat justified. But then they milked it, and because the left hand doesn't know what the right is doing, they are now doing a complete 180 "for the kids".
With the exception of Keychain and biometric auth, everything you mentioned ultimately benefits the party that controls the trust root on the device (Apple) significantly more than the user of the device.
At least with the right Android device I can flash my own keys to the trust root.
Sure but it also is good security. I too would prefer Apple to let me flash my own trust root. Doesn't mean the tech isn’t valid and security/privacy centric just because Apple uses it to lock their platform down.
No what I’m saying is it’s annoying me because they’ve both earned the reputation and also marketed the hell out of it. Apple seems to want it both ways then they go and do this.
It’s not just marketing fluff though, Apple actually is better from a privacy perspective on many fronts:
- iMessage is E2EE (with a big asterisk here of course)
- Maps data stays on device unless you explicitly enable anonymous aggregated crowd sharing data. “Significant locations” never leaves your device
- Apple Photos machine learning recognition all happens on device, not on Apple servers. This is one of my favorite features of my iPhone
- As of the next iOS update, Siri voice command processing happens on device
- The new Apple Private Relay (or whatever it’s called)
No, they're not perfect, and they could (and should) do a lot more, but there are objective things that they do better from a privacy standpoint than most other tech companies.
But it IS marketing fluff, because you literally have to trust them on their word. They provide no means to verify what they are claiming and even if you could verify they could still silently update the client apps the next day (since they _exclusively_ control all platforms, all clients and all servers) and you would be none the wiser.
Technically Google also "promises" practically the same (E2EE in Chats, only sharing map data if you enable it, local photo recognition, local voice assistant on the new Pixel whatever, etc.). But since you have to trust them on this, all of it is not really much better than just having a reasonable "privacy policy" which we all know is meaningless.
The local voice assistant model is on a fraction of Android devices.
E2EE for Messages literally rolled out last month, relies on spotty RCS support and doesn't work with group messages.
Apple's location tracking is designed so that even when data is shared between your devices via Apple's servers, it's not visible to Apple.
I have literally never seen Google claim Google Photos is using on device classification, I'd love a source for that since nothing about how it works implies that. Maybe you mean it does some very specific type of classification as a pre-processing step?
> But since you have to trust them on this, all of it is not really much better than just having a reasonable "privacy policy" which we all know is meaningless.
You're free to be wrong? You need to trust someone on a claim vs someone not making the claim at all... those are not the same thing.
Apple hasn't shown a reason to lie, and there's no financial incentive equivalent to Google's, unless you think Apple is pulling the equivalent of a fake moon landing and somehow running a 147-billion-dollar data-selling operation no one knows about...
I am not going to enter a discussion on who rolled whatever first, whose userbase has the most up to date OS, or the like. iOS is not going to win any privacy argument here anyway for the simple reason that at least I can de-googlefy Android.
The point is that they make extremely similar promises that you can't verify.
> You need to trust someone on a claim vs someone not making the claim at all... those are not the same thing.
What is the difference between someone saying that your data is not visible to them (oh but you can't verify it!) versus someone saying that they will not touch or store your data after processing it on their privacy policy?
When they control the entire platform, the fact that they claim to be doing something technical to prevent them from accessing the data is absolutely pointless. Even if it were true, they could be able to change it in an instant after a silent update (by themselves, by the government, or even by a third party attack!), and no one would be the wiser.
> Apple hasn't shown a reason to lie, and here's no equivalent financial incentive to Google unless you think Apple
Not sure I understand.
In any case, most companies have already lied multiple times, and _all_ companies share at least one big reason to lie: come tomorrow, some three-letter agency could send them a letter and outright force them to capture your data, and since the only thing that prevents them from capturing your data is empty promises, they could do it in an instant. And would do it. And in fact do it regularly (TFA could in fact be an example of Apple trying to get rid of that).
Btw,
> I have literally never seen Google claim Google Photos is using on device classification,
I have never put a SIM nor network credentials on my only Google Android device that has practically never abandoned my residence and yet it is tagging photos of objects. That's my source. I have not put a SIM either on my iOS devices and they do not tag pictures :)
I stopped reading at "what's the difference between a massive cover-up of subverting the most basic tenets of how your systems are designed and never claiming those tenets in the first place".
That's exactly what I just explained, and I don't enjoy these kinds of circular back and forth.
If you feel like an OS where the biggest comfort is that you can tear out its core works best for you, go ahead.
But I will point out starting your comment with "I'm not going to get into the most important aspects of how secure a modern platform" is not a ringing endorsement.
-
I work on embedded Android for a living, so it's not like I'm afraid of what I don't know or something. iOS is a demonstrably privacy-oriented platform.
A mentality of privacy first even if it's just for the sake of marketing differentiation will always come ahead of a mentality of "data collection so we can sell ads", you're not going to change that for me with baseless insistence.
> I stopped reading at "what's the difference between a massive cover-up of subverting the most basic tenets of how your systems are designed and never claiming those tenets in the first place".
I do not see who is "subverting the most basic tenets of how your systems are designed", and if this is trying to say that the other companies never claimed those tenets in the first place, then that is objectively false. Google, Microsoft, Apple, etc. have claimed to be "privacy first" more times than I can count, and, my point is, they are all making similarly dubious promises regarding policy (like closed-source encryption, no data sent unless you ask for it, etc.).
> If you feel like an OS where the biggest comfort is that you can tear out its core works best for you, go ahead.
This is misleading. The thing is, it is not "tearing out its core", precisely because Google services are actually _not_ at the core of Android, yet. You still cannot do that with iOS (because they do not allow tampering to begin with, unless you figure out a way to avoid leaving a fingerprint in their servers).
> "I'm not going to get into the most important aspects of how secure a modern platform" is not a ringing endorsement.
First, none of the topics mentioned are the "most important aspects of how [to?] secure a modern platform".
Second, I distinguish privacy from security. I.e., it doesn't matter if it is the most secure platform in the world if I have to entrust all the data to an untrustworthy entity to begin with.
In fact, I actually prefer privacy over security. E.g., for messaging, I value using servers _I_ control/trust much more highly than E2EE, which is a decidedly secondary worry. Metadata is a dangerous thing to leak.
> A mentality of privacy first even if it's just for the sake of marketing differentiation will always come ahead of a mentality of "data collection so we can sell ads",
This is basically trying to answer "to whom I put my blind trust?", so it's subjective. But in my experience it is usually the most "privacy marketing" companies that are usually the worst regarding privacy. See most VPN resellers. Most privacy marketing is just bullshit.
I can't tell if you're unintentionally talking past everyone else's points, or if you just think that your personal preferences/thoughts/observations supersede the reality everyone else lives in.
Like
> they are all making similarly dubious promises regarding policy (like, closed-source encryption, no data sent unless you ask for it, etc.).
I just listed multiple of the most common pieces of software where Google and Apple make claims to wildly differing levels of privacy! Does your decision that none of that matters because both companies mention privacy in marketing somehow override reality?
> First, none of the topics mentioned are the "most important aspects of how [to?] secure a modern platform".
You're not sure if "to" is the missing word?
And if you think the number of updated devices is not one of, if not the most important aspect of securing a modern platform, then your opinion on security doesn't matter. And contrary to your implications, privacy and security are not something you prefer over each other, privacy does not exist without security.
> Second, I distinguish privacy from security. i.e. it doesn't matter if it was the most secure platform in the world if I have to trust all the data to a distrustful entity to begin with.
It doesn't matter if you have the most private platform in the world if it's not secure? You can control all the servers you want, if they're not secure your privacy is even worse off than it'd be with a 3rd party at least trying to anonymize it for an ad platform...
> This is basically trying to answer "to whom I put my blind trust?", so it's subjective. But in my experience it is usually the most "privacy marketing" companies that are usually the worst regarding privacy. See most VPN resellers. Most privacy marketing is just bullshit.
Lol so your hunch based on VPNs is supposed to just supersede basic reasoning that if a company can profit off not selling your data wholesale, and is openly designing systems that do benefit your privacy... they're somehow less trustworthy than one that is openly selling your data and requires that they do to exist.
> I just listed multiple of the most common pieces of software where Google and Apple make claims to wildly differing levels of privacy!
Wildly different? Apple claims E2EE, Google claims E2EE, _Facebook_ of all companies claims E2EE! Did the fact that they claim E2EE really change your opinion of WhatsApp? Everyone just laughed and forgot. WhatsApp is going to E2EE your chats right until the moment they don't, without warning, and you have no way to check! Why take Apple's word differently? They have also lied multiple times already! (e.g. Jabber federation)
> And if you think the number of updated devices is not one of, if not the most important aspect of securing a modern platform, then your opinion on security doesn't matter.
I am not sure why my "opinion on security" would be relevant but I for sure think that _ability to verify the security claims_ ranks much higher than anything that has been mentioned so far, including "software updates" of unknown content.
> You can control all the servers you want, if they're not secure your privacy is even worse off than it'd be with a 3rd party
The example is just to show the difference between security and privacy. Your analogy is creating a false association since I can just keep my data offline.
> Lol
And you have the incorrect "hunch" that Apple is not a services company. And that they design systems that "benefit your privacy". They only design systems that tie you to _their_ systems. Wake me up when they design something that doesn't, which would start to look like real privacy.
Like, I spoke about some very specific features, and now you've literally reduced it to "E2EE"... like the entire honking app that it's built around doesn't matter.
If you won't accept that Android's carrier-dependent, 1:1 only RCS app isn't equivalent to iMessage, then there's nothing to talk about.
> The example is just to show the difference between security and privacy. Your analogy is creating a false association since I can just keep my data offline.
How do you keep a messaging app offline.
> And you have the incorrect "hunch" that Apple is not a services company. And that they design systems that "benefit your privacy". They only design systems that tie you to _their_ systems. Wake me up when they design something that doesn't, which would start to look like real privacy.
Do you even know what you're talking about? Apple is 100% a services company; that's the whole point. They're a public company, following the money is easy: they make a lot of money off selling people services and hardware.
Meanwhile Google makes most of its money selling people's data.
It's not rocket science figuring out which is easier to trust, but you just seem hellbent on rationalizing your opinions with more opinions stated as fact and vague paranoia.
That's your right, but don't be surprised if people call you out on it!
> Like, I spoke about some very specific features, and now you've literally reduced it to "E2EE"... like the entire honking app that it's built around doesn't matter.
Okay, choose any other! What else did you mention? Location services? The same.* Local photo classification? The same. And so on and so forth.
* Both claim "anonymous and encrypted", for whatever it's worth, but we all know how "anonymous" and "encrypted" it must be, since they are both using it to build their beacon/SSID location database "to be used for augmenting this crowd-sourced database of Wi-Fi hotspot and cell tower locations", whether you want it or not (source: Apple's privacy policy).
> How do you keep a messaging app offline.
Or accessible to the relevant parties only.
> Meanwhile Google makes most of it's money selling people's data.
Google, Microsoft, Facebook, etc. are also all officially _services_ companies...
> It's not rocket science figuring out which is easier to trust
It's not easy because you have practically nothing material to base your trust on, so you have to resort to fluffy marketing.
Claiming that Google is not that much worse than Apple is hardly "paranoia" material. Hitting a nerve there, I guess...
The only nerve you're hitting is the one that fires when I read poorly informed pseudo intellectual drivel
You're asking why someone would trust someone making a claim over someone not even making the claim.
I could ask a toddler "who is more likely not to eat your lollipop, the man who says he will eat it or the man who claims he won't" and they'd understand, yet you've managed to convince yourself that's a tough question.
Have a good one, good luck with your privately owned servers and home-brew OS. I'm sure your privacy is very well protected by giving the world a fingerprint on your identity.
... Since we are dropping the standards now, I will accuse you of being brainwashed. At one point you said:
> I have literally never seen Google claim Google Photos is using on device classification, I'd love a source for that since nothing about how it works implies that. Maybe you mean it does some very specific type of classification as a pre-processing step?
When Apple puts fluffy marketing claiming that now they are doing photo classification on-device, you immediately assume not only that it is true, but that they are _the first_ to do it, and that everyone else is doing photo classification on some fancy remote service. "Why, if Apple markets it, then everyone else would also have marketed it, otherwise it means they are not doing it!"
The thought that perhaps it was actually the opposite - that the majority of vendors were already doing photo classification on-device, and that it was _only Apple_ who was doing the stupid move of sending your photos to the cloud for tagging - never entered your mind.
This is the power of marketing.
And guess which one is rather likely to be true. I just took a couple of pictures of bananas in my 2018ish Android device with no network connectivity of any kind and after one minute they were tagged as "bananas" and "fruit".
This is precisely what I was complaining about in my original post. Apple's privacy strategy is mostly marketing fluff at best, and yet it is having an unreasonable effect on people like you.
> I could ask a toddler "who is more likely not to eat your lollipop, the man who says he will eat it or the man who claims he won't" and they'd understand
A more correct analogy would be: which of the men in the shady vans is most likely to kidnap your children? The ones who claim to be "experts in not kidnapping children", or the ones who claim to be "experts in not kidnapping children, and those guys at the other van are the real kidnappers"?
> Have a good one, good luck with your privately owned servers and home-brew OS. I'm sure your privacy is very well protected by giving the world a fingerprint on your identity.
Again another ridiculous analogy that does not work.
You do not need a "home-brew OS", and I have in fact mentioned several alternatives during the above conversation (e.g. de-googling).
You'll never get the developers (who are largely fungible) to fully grok the long term consequences of what they are creating because they are blinded by the (entirely natural) fact that the company is keeping them fed.
I think you’re being unnecessarily cynical and actually incorrect here. It’s not like I’m told to implement some tiny thing where the left hand doesn’t know what the right hand is doing. The entire premise of whatever system is being built is clear, as are all the discussions with the privacy folks, the execs, marketing, designers, etc. Privacy is a constant point of discussion, and often times what engineering wants is made far more difficult by privacy, and they get the final word.
A single person can implement a chosen weakening, compromising everything done by lower-level staff. If you don't think this is trivially possible, you know nothing about the fragility of security.
“Messages uses on-device machine learning to analyze image attachments and determine if a photo is sexually explicit. The feature is designed so that Apple does not get access to the messages.”
The asterisk I was referring to is the fact that the E2EE keys are backed up to iCloud Backups along with your messages, and those backups are not E2EE. So the encryption of iMessages is fairly easy to get around if either the sender or the recipient uses iCloud backup.
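That asterisk is worth making concrete with a toy. The sketch below (Python, with a one-time-pad XOR standing in for the real AES-based scheme; the names and the backup layout are hypothetical) shows why end-to-end encryption stops mattering the moment the key rides along in a backup a third party can read:

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # One-time-pad-style XOR: a stand-in for the real symmetric cipher.
    return bytes(a ^ b for a, b in zip(data, key))

message = b"meet at noon"
key = secrets.token_bytes(len(message))   # the "E2EE" key shared by the devices

ciphertext = xor(message, key)            # all the relay server sees in transit

# Without the key, the ciphertext alone tells the server nothing;
# with it, decryption is trivial:
assert xor(ciphertext, key) == message

# But if the device ships the key inside a non-E2EE cloud backup,
# whoever holds the backup can read everything:
backup = {"messages": [ciphertext], "key": key}
recovered = xor(backup["messages"][0], backup["key"])
print(recovered.decode())  # the backup holder reads the plaintext
```

The point is that "E2EE" is a property of the whole key-handling story, not just of the wire format: one non-E2EE backup of either endpoint quietly undoes it.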
> Even if I consider Google enemy #1 right now, I also don't really see them as privacy unfriendly. They have a shitton of data, but what have they exactly done with it that warrants the moniker? Microsoft seems to me to be much more privacy unfriendly (e.g., LinkedIn)
Collecting and having that data to begin with is awful & dangerous. The behavior of collecting it is creepy and gross. The existence of the collected data is threatening.
The reason I consider Google a public enemy is not because they collect data or have a poor privacy policy or whatever (because I don't think they are particularly bad at that). The reason is because I think they are "unavoidable". It's very hard to do anything online without it leaving a fingerprint on some Google server. Even governments seem to be ready to endorse Google services (e.g. recaptcha, Android) for fulfillment of basic services.
Apple, Facebook, etc. may have (or not have) worse privacy policy, but it is much easier to avoid leaving fingerprints on their servers. Amazon & Microsoft are on the other hand particularly dangerous, though.
You hit a nerve. I have to go through two reCAPTCHAs to prove that I am not a robot so that I can pay for my children's school activities. Why would they care if I am a robot that wants to pay them? For some time, checking in for COVID in NSW, Australia took 3 reCAPTCHAs if you were in private mode in your browser. Similar in many government services, where they are pervasive and injected right in the middle of potentially very private workflows.
> Why would they care if I am a robot that wants to pay them?
I’ve seen scammers find obscure payment portals and run through a bunch of card numbers to see if they approve/deny. Recaptcha prevents that. It’s obviously not the only way, but it’s low dev effort and it works.
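For the curious, the server-side half of that defence is small. A hedged sketch in Python: in a real deployment the payment endpoint would first POST the client's token to Google's `siteverify` endpoint; here we only model interpreting the documented JSON response (the `allow_payment` name and the 0.5 threshold are our own choices, not anything Google prescribes):

```python
# Sketch of gating a charge attempt on the reCAPTCHA verdict.
# In production you would POST the token to
# https://www.google.com/recaptcha/api/siteverify with your secret key
# and parse the JSON; this helper only models that last parsing step.

def allow_payment(siteverify: dict, min_score: float = 0.5) -> bool:
    """Decide whether to let the charge attempt through."""
    if not siteverify.get("success", False):
        return False          # token invalid, expired, or replayed
    # reCAPTCHA v3 adds a 0.0-1.0 score; v2 responses carry no score field.
    return siteverify.get("score", 1.0) >= min_score

# A bot hammering card numbers fails verification outright:
print(allow_payment({"success": False, "error-codes": ["timeout-or-duplicate"]}))  # False
# A low-scoring v3 token is refused too:
print(allow_payment({"success": True, "score": 0.1}))   # False
print(allow_payment({"success": True, "score": 0.9}))   # True
```

Which is why it gets bolted onto so many payment forms: a handful of lines server-side kills the cheapest automated card-testing attacks.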
This is an easy fix. Just require being signed in and only allow payment with a valid child attached to the account. You can allow anonymous payments only if you have a valid child account code.
> They have a shitton of data, but what have they exactly done with it that warrants the moniker?
Uh... they've aggressively gathered that data? If I take a bunch of photos of you sitting on the toilet, the concern isn't whether I've publicly done anything with those photos, it's that I took them in the first place.
> I don't know how Apple has managed to get the image of a "privacy friendly" company while painting Google as "privacy unfriendly" at the same time.
I guess that's all about incentives. Since Google is an advertising company, it has most to gain by knowing your deepest personal info, i.e., so that it can "sell" this info to their customers.
> don't know how Apple has managed to get the image of a "privacy friendly" company while painting Google as "privacy unfriendly" at the same time. Apple is better than Google..
Most of the commentators are upset because they don’t understand cloud and what it means. As an individual, once you provide data to a third party, you’ve lost exclusive control, period. How you feel about a company is just a feeling.
With respect to this Apple thing, it’s pretty clear that Apple thinks this is more respectful of privacy than what the other providers do. IMO, this is what allows E2E messaging to actually remain E2E.
The EFF and some HN commenters are conflating the parental-control aspect with the detection-of-illegal-images aspect. They are two very different things.
And when you talk about the reality of privacy, you can trivially turn off the third-party features that trigger the use of cloud services.
With respect to privacy, history shows that Apple keeps adding more and more user control of privacy. Per app location, access to other data, location permission in the browser, tracking, forcing app disclosure of data collection, etc. Can you point me to a privacy related preference they have removed and not replaced with something else? I can't think of one.
> Apple keeps adding more and more user control of privacy
Against other tech companies, sure. This whole debacle is a clear example where customers have no control. They can't even refuse the update, because it's already on their device.
I can actually name a privacy feature that they took a long time to replace: Siri voice-recording accountability. Siri voice recordings were saved in the cloud. They were linked to your identity. If you GDPR'd Apple, they deleted these recordings. They stripped the identifiers; then you can't GDPR them, and it's arguably no more private. LATER they stopped saving them to the cloud (yay, more private again!).
To be clear: the voice recordings can identify you even without identifiers. It's your voice. You might say "How do I get to <work address> or <home address>?" You might say "Hey Siri, tell <secret lover> that I love them" and boom, now that is out there. Lots of reasons people want control there.
Yes, they can identify you. First, you can always delete all of that data at any time in Siri Settings.
Second, starting in iOS 15 with an A12 or newer, the processing is moving on-device. Here is an excerpt from https://developer.apple.com/ios/:
"iOS 15 introduces even more privacy controls to help protect user information. With on-device speech recognition, audio of Siri requests is now processed entirely on iPhone by default, and performance improves significantly. Mail Privacy Protection stops senders from learning whether an email has been opened, and hides IP addresses so senders can’t learn a user’s location or use it to build a profile on them. App Privacy Report offers an overview of how apps use the access that has been granted to location, photos, camera, microphone, and contacts in the last seven days, and which other domains are contacted."
I wouldn't say I have a "deep belief" in Apple, but I trust Apple more than the other available options, because Apple's financial incentives are basically the opposite of Google or Facebook's.
Instead of being motivated to use and/or sell user data, it's in Apple's best interest to keep that data secure and play to its competitive advantage as the company who is not trying to pry into its users' lives.
That said, I really don't understand why Apple would do this, and it is very disconcerting. Obviously child pornography is absolutely horrible, but this move seems to throw out everyone's privacy in exchange for catching what I have to imagine is a tiny minority of users who are involved with that.
> I wouldn't say I have a "deep belief" in Apple, but I trust Apple more than the other available options
That, for me too. From my perspective they basically don't have competition. The only viable alternative to Apple is "do everything yourself with open software, and just accept everything being buggier, jankier, and less helpful, while eating way more of your time". Obviously I'm not in love with that option, but what else? MS? Google? Ha.
If Apple stops being Apple it'll be the elimination of a whole category of products and services, essentially. Just won't exist anymore.
This is all less a product of Apple being wonderful than of user-facing computing everywhere else being an embarrassing shit-show.
While also viewing deep belief in corporations sceptically myself, I don't think it should be the root cause of scepticism toward Apple. The maximisation of proprietariness paired with this "deep belief" makes switching vendors (of whatever product) really hard, locking people in so much that the perceived cost of privacy-breaching software is lower than the cost of switching.
>I’ve heard similar things from many Apple users. What leads people to have this kind of faith in particular corporations?
It is all part of their PR and marketing, and they are very good at it. This happened in Tim Cook's era, not Steve Jobs'. Maybe it was too good and it backfired. Pay close attention to all the Apple news (not some, but ALL): pick the top 10 Apple news sites and read through them for a few years, and it will surely get to most people. And Apple news commands the most sought-after, highest-paid ad revenue in the tech sector, so it is sort of a feedback loop. Worth remembering there are relatively few journalists left, only reporters. No one bothers to fact-check anything any more. That is why we have so much crap that anyone with some domain knowledge on the subject immediately smells as BS.
And their PR tactics are... dull, or should I say predictable? Controlled leaks (I mean, come on, from the same press again?) on the same subject, or using the same allies in testimony (Snapchat again?). Unfortunately most people don't do any analysis like this. (Although I guess that is borderline forensic.)
It probably stems from the well-known history of the start of the company. Who knows who started Dell, McDs, any number of SSD manufacturers, Lenovo, BlackBerry, etc.? I guess we all know about Microsoft and Bill Gates, but their story trajectory went the other way (more corporate). So, I imagine people say that because something Steve Jobs or Ives said or was recorded as saying resonated with them, and thus they share that deep belief.
Cynically, I would say I also had a deep belief that Apple would respect user privacy, because Apple and Google are in a fight for control over the internet and your whole life. Google goes at it by giving away free shit and then making billions off your data; Apple goes at it by making billions off hardware and software. From that point of view, respecting user privacy is Apple's most powerful weapon in the fight against Google, since it's a place that Google simply cannot go.
Because privacy is an obvious goal that Apple can credibly target because of the nature of what they sell. Google is fundamentally an advertising agency; Amazon sells cheap stuff but steals your data in the process. Apple sells premium hardware and services, which means they have the margin to do things that neither Google nor Amazon has. They also make money the old-fashioned way: when you pay them.
Therefore Apple could focus on privacy in a credible manor, because it didn't threaten their business model and it was an argument they could use, but their competition couldn't.
I imagine this is the result of some sort of negotiation with the US government. US government wants backdoors, Apple doesn't want to provide them. Apple also wants to store pictures on iCloud encrypted, without encryption backdoors. The compromise to protect the rest of encrypted things on iCloud was to implement this client side scanning before upload to iCloud to ensure iCloud does not contain any illegal material without exposing backdoors directly to the US government.
In light of that, I then imagine that other providers are already storing photos and data unencrypted and did not object to adding scanning on the server side, and also did not share anything about the scanning with the public.
> In light of that, I then imagine that other providers are already storing photos and data unencrypted and did not object to adding scanning on the server side, and also did not share anything about the scanning with the public.
It is well known that competitors scan on server side, I don't think that's any kind of a secret with Google Drive et al. Outrage is exactly about that -- client side scanning is seen as way more intrusive, for a good reason.
I think Apple sees the writing on the wall in terms of governments soon forcing this kind of oversight onto platforms. They have tried to create a system that will not create false positives and will still protect the 99.99999% of their users who simply want privacy. It's a messy business, fraught with downstream implications, but I think Apple is trying to avoid a future where end-to-end encryption of communications is made illegal by a tech-ignorant congress.
I also think they are simply pissed off that their platforms are being used to shield these activities.
Is being preemptive to avoid imposed regs that might be much worse for their users the right approach? I don't know. Based on the tech papers linked at the end of their announcement, they are trying hard to thread a needle here.
But we've fought this type of bull crap entering via our government in the past. We rule ourselves; we can literally say, "no, government, you can't do that". Why is it different now? Are people getting tired of the recurring battles? Is human trafficking on the rise?
First, yes human trafficking is on the rise. It is a horrible blight affecting millions of people globally. Most people are not aware of how pervasive it is.
Second, what is different today is that pervasive tech has made it possible for people to communicate over distance in a shielded way. Before modern tech, people had to physically stand next to each other to have a truly private conversation. That is new. I think that makes legislators feel that since tech has enabled this capability, tech should be enabling a way to "fix" the issue. The argument isn't completely without merit. Along with all the positive benefits of tech, there are of course some downsides. The question is what role should tech have in mitigating the downsides. This is an attempt by Apple to provide a mitigation. Time will tell if this attempt was a good idea. It could provide some benefit, but will the cost be too high?
No we’ve had the backdoor discussion on and off for the last 30 years at least, during which people have been able to communicate privately over long distances.
I’m interested in reports that human trafficking is on the rise. I wasn't aware this was becoming more of an issue. Is there correlation between the introduction of mobile phones and an increase in human trafficking that would suggest new technology is an enabling factor?
Given the rate of government recognition and response to issues, I would classify the last 30 years as new. However, I'm not sure about the private assertion. The first cell phones were analog and anyone with a scanner could listen in. Any agency with a court order can listen to digital cell phone calls. Email, SMS chats, etc. are all accessible via court sanctioned search permits. End-to-end encryption has enabled truly private communication (ignoring NSA high compute capabilities) and that has not been available in a widespread way until the last few years. I'm not referring to computer knowledgable HN types.
I'm not aware of a study looking at that particular correlation, but there is plenty of data available on the problem itself. I wouldn't be surprised if such a study exists, though. I am no expert, but I have read reports from what I at least consider reputable sources. It's an easy search.
Law enforcement likes convenience as much as the rest of us humans.
The counter argument is that LE did okay before modern tech, and that the technology they already use extends their reach rather than just keeping up with an arms race.
The “next to each other” bit establishes a pattern of behavior that creates an investigative priority that might lead to probable cause. Do they have other sufficient options to do “old fashioned detective work” under the new system?
If so, then as the goal of society is not to make law enforcement as efficient as possible, how much pushback is reasonable and healthy, and when do we just cross into anarchy?
True. Apple is trusting that the National Center for Missing and Exploited Children will provide authentic and verified images. It is also clear from the details that some small number of matching images will not be enough to create a report and any report will be manually verified. A single matching photo will not create a false positive. Given the need to insert a lot of photos combined with manual verification of a flag, it sounds like a lot of people have to be corrupted to lead to a problem for someone. Also, NCMEC is well known. I hope they stay true to their stated mission. Time will tell.
As others have pointed out, the longer term problem is that the capability may tempt governments to put pressure on Apple to work with other image sets. However, Apple has of course had this capability for quite some time and (as far as we know) have not been doing this so if any company is large enough to resist they are. It is a risk, but not much more of a risk than it has been to date. I do think it is a positive sign that they are being open about what they are doing.
In the context of software, beliefs are meaningless, religious issues. You do not need trust when you can have proof (i.e. source code that you can compile and modify yourself).
Apple cares about your privacy as long as it helps their marketing position against Amazon, Google and Facebook. If they decide for a different marketing, you will lose all your privacy. If their bottom line dips they will sell your data in a heartbeat.
I did the same. It's not just concerning that this happened, but that we effectively got no warning at all that it would be starting. It's already running now.
As long as the US government is pushing laws that makes it easier to spy on people, it's not going to get any better.
The only hope is for other countries to (legally) force global companies to abide by local laws, just like EU did with GDPR. So user experience will vary by country of residence.
“Messages uses on-device machine learning to analyze image attachments and determine if a photo is sexually explicit. The feature is designed so that Apple does not get access to the messages.”
Such a Faustian bargain; giving up the ability to store child porn in the cloud for reliable backups and seamless access across devices... I can't believe people put up with this!
And giving up the ability to carry a gun in exchange for safe streets for everyone! Lunacy! /s
If it would 100% eliminate child pornography and not produce a residual erosion of personal privacy no one would be arguing against it. It won't solve the problem. It will erode privacy. It's perfectly reasonable to be not ok with it.
Why does the elimination have to be 100% and the privacy erosion be 0 for it to be worthwhile?
To be clear I'm not saying that Apple are right to do this or that there aren't privacy concerns. What I am saying is that those thresholds seem unreasonable.
Because that's the only way an individual's value system wouldn't come into play. If it's anything less than completely fixed and anything more than no erosion of privacy, then it just depends on how important privacy is to the person. I personally think privacy is super important, and it would take massive changes in how society and our government work before I was willing to give it up for just about anything.
Sure. It's a bad faith argument though. Nowhere in the world, with or without guns, has absolutely safe streets. So where on the sliding scale of safe it becomes worth it is subjective.
What information leaks from the system? That a user isn't storing child porn on Apple servers? That's literally the only information emitted by the system _unless_ you have actual child porn on your system.
One, software sucks and is full of security holes, so whether or not Apple intends to leak any other data is irrelevant, because other data will leak regardless.
Two, you're assuming this is the only thing they are going to intentionally do with your data. That will change. In an infinite future, things change. Every decision is eventually overturned, and it's luck of the draw which direction that change is in.
Three, the U.S. defence apparatus has proven time and again that they will violate the law and use whatever means necessary to spy on who they want, and this is another kitchen window for them to climb through.
> Can you point to a single instance where it was used for ulterior purposes?
That's the sick genius of parallel construction: without leaks, we'd never be able to know. (And yes, there are plenty of instances of abuse, brought to the public despite the risk whistleblowing entails.)
Ask yourself this: even if the "right" people are in control of this system now, could the "wrong" people ever gain control of it? If the answer is yes, maybe the system should not exist.
Then to be consistent, you cannot use closed source, proprietary systems. That's the trade off.
You _NEVER_ know what Apple is doing under the covers. This principle is well understood and has been espoused by RMS for over a decade.
But saying "well Apple could use this system for evil if they want..." is silly. Apple could ALWAYS use their system for evil if they wanted. This addition does nothing to improve that position.
> This system already exists on iCloud. It's been in use for 2 years.
Source? That is not true as far as I'm aware.
> Can you point to a single instance where it was used for ulterior purposes?
What do you expect them to do when they receive a national security letter? Shut down? Go to prison for decades? Of course not. They will comply, as they have always done when the technical means to do so have been available. Why do you think we would even be told about it in situations when it can be kept secret?
> What do you expect them to do when they receive a national security letter?
They can't do _anything_ with an NSL on your data because they don't have the encryption keys. Without this feature, they control the key. With this feature, you control the key and they only gain access when >N tokens cryptographically indicate hits on the CSAM database. Those have to come from _your_ device. And those tokens only give access to the data in question.
Everyone understands that Apple can do anything with unencrypted content, but now Apple can also identify encrypted content because they make the phone hash it before it's encrypted. Apple is essentially lying when they say this can only be used to identify child abuse material because it can clearly be used to identify whatever Apple tells it to identify. The system doesn't care if it's used to target whistleblowers or child abusers. Apple doesn't even need to know what content they are made to target. The government could just give them a hash and demand to know who has a file corresponding to it, with zero risk that an Apple employee will leak anything the government want to suppress.
There is no reason Apple can't give you E2E encryption without any backdoors or spying features. They should act on abuse when it's reported to them by someone who can give them the key, but otherwise not concern themselves with the content that's stored.
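To make the threshold scheme described a few comments up concrete, here is a toy sketch. All names and the threshold value are hypothetical, and this deliberately simplifies: the real system uses perceptual hashes and private set intersection, not plain SHA-256 digests or a locally visible database. The point it illustrates is only that fewer than N+1 matches reveals nothing, while exceeding the threshold triggers review.

```python
import hashlib

# Hypothetical stand-in for the known-CSAM digest database. A real
# system uses perceptual hashes; SHA-256 is defeated by changing one byte.
KNOWN_BAD = {hashlib.sha256(b"bad-image-%d" % i).hexdigest() for i in range(3)}

MATCH_THRESHOLD = 2  # illustrative stand-in for Apple's undisclosed N


def count_matches(photos: list) -> int:
    """Count how many of the photos hash into the known-bad set."""
    return sum(hashlib.sha256(p).hexdigest() in KNOWN_BAD for p in photos)


def should_flag(photos: list) -> bool:
    """Only surface an account for manual review once the number of
    matches exceeds the threshold; a single match reveals nothing."""
    return count_matches(photos) > MATCH_THRESHOLD
```

Under this sketch, an account with two matching photos stays invisible to the operator; only the third match tips it over the threshold.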
WhatsApp is quickly earning back my trust as the most privacy-respecting chat system for laypeople right now.
Obviously, actions matter more than words, but the actions they have been taking as of late are promising. I certainly won't be putting all my eggs in one basket after being burned this many times, but for now I am happy to recommend WhatsApp.
Right, they guard the sanctity of your content fiercely. And make enormous bags of money off of your contact list, frequency and timing of contacts, awake timing, location, etc etc.
And even if you're in a country (only really Germany, AFAIK) where the worst use of metadata for advertising is legally tricky... WhatsApp is the very definition of the fox guarding the henhouse. Do you really trust one of the largest data harvesters and advertising marketplaces in the world to guard your data? Their business model literally depends on violating that trust. Do you honestly think that's a safe long-term bet?
Opposite directions. If you put a clean shirt and a dirty shirt in a bag together, the dirty shirt can rub its dirtiness off on the clean one, but the cleanness can't rub off and make the dirty shirt clean.
I remember the terms of service change and I almost left the platform because of it. But you are also greatly mischaracterizing the change. It actually only allows those things in some pretty specific circumstances which are under the user's control (although that's still not ideal, of course).
I am only trying to convince WhatsApp that the stance they are taking here is the right one. Unless we give credit where it's actually due, what will be the incentive for any bigcorp to take any privacy-protecting actions?
WhatsApp refuses to take action against misinformation or political spammers. They understand that their products are being used to undermine democracy in the whole of Latin America and they refuse to implement one spam detector.
There are large scale implications to these "privacy" features.
Take it as a lesson in critical thinking and trust. I presume this is a lesson you already learned long ago, but clearly it's a lesson some people haven't learned yet, because a lot of people seem to have believed him.
Fool me once, shame on you. Fool me twice, shame on me.
Facebook has made promises, then broken those promises, time and again. They've lied to founders they've acquired, to users, and to governments and regulators.
If anyone trusts Facebook at this point knowing that, I'm not sure what could convince them otherwise.
“Facebook continues to profit from allowing child predators to distribute pornography through WhatsApp, refuses to adopt industry-standard technology to block this”
Apple let Facebook (WhatsApp) grab the high ground on privacy?!
That's funny, in a boggling at a blunder of the powerful kind of way.
It's also sad, if we're reduced to watching a handful of organizations battle over who's the most privacy-respecting, relatively speaking, when their standards are so low.
Unfortunately any time a National Security Letter comes along, whatever it asks for will get implemented and the company will be unable to communicate anything for the gag period. It may already have happened.
Given what we've learned about the NSA over the years, the news that's leaked is consistently _worse_ than expected, not better. Pessimism is warranted.
This policy provides cover for the NSA's PRISM partner Apple to search and exfiltrate data on devices en masse. Apple and the NSA no longer have to do this in a targeted way to remain undetected. It's now done to everyone. Obviously this is a huge problem for privacy.
If Apple doesn't reverse course despite all this backlash, it's reasonable to assume the alternative for Apple must be worse. There must be an explanation for their behavior.
Apple's decision makes sense if the NSA were already doing this on devices with Apple's cooperation, and Apple feared the scope of the operation would be discovered. Then making it public "to catch child predators" seems like far less damage to their brand than appearing to buddy up with the feds.
I can't think of another reason which justifies Apple stomping on their own privacy narrative.
This is the biggest issue, IMO. As I understand it, when governments, particularly the US, wanted Apple to get data off an Apple device, they couldn't, because forcing Apple to write software to do this would be coerced speech, which is forbidden by the US courts' interpretation of the 1st amendment of the US constitution. (Obviously, this doesn't address breaking into the device, which the US did in the San Bernardino shooter case, and which is more akin to law enforcement breaking into a safe or house with a warrant.)
By implementing data harvesters, Apple's 1st amendment protection goes away, since the code is already written. All that must be changed, by court order or what have you, is the configuration for what gets reported back. Again, IMO, it becomes much more like a "give us your files" subpoena, like the ones Apple, Google, etc. get on a regular basis and must comply with. But IANAL.
EFF (where I was working at the time) promoted Apple's first amendment argument in the San Bernardino case, which is a genuine argument in Apple's favor but not one that a court ended up deciding on in that case. There are also other arguments in Apple's favor there.
In general, it hasn't been very clear what the courts will or won't allow law enforcement or spy agencies to commandeer tech companies into doing. As we saw with several other countries' legislation in this area, it's also something that typically comes with strict gag orders and so we don't really even know much about what governments have tried.
I guess my main points would be:
* when these issues actually end up in court, it's not yet clear whether the first amendment issue about software will or won't be the most important legal issue, and
* especially in Australia and the UK, but very possibly also in other jurisdictions, there may already be tech companies that have lost secret fights over related issues and been forced to do things that users (or even most workers in those companies) haven't been told about, and
* if you don't like governments being able to backdoor stuff, maybe try to promote technology architectures where it's harder to hide features and functionality!
That last point reminds me how there's a kind of complicated side argument about how Apple's proprietary technology and paternalistic control most often make it harder for governments to spy on users, because there are comparatively fewer people to suborn or coerce in order to backdoor things, and it's harder to run code on the device that Apple doesn't approve of, and the company is very sensitive to public perception (maybe not this week!?), and has a big legal team to fight back against things. But when Apple does cooperate to increase governments' power in various ways -- e.g. banning specific apps from its app store, like it did with VPN and other anticensorship apps in China, among other cases, or doing this new scanning stuff -- it's also harder for users to override those decisions.
(I'm definitely not speaking for EFF here -- I stopped working there a year ago this week.)
I was about to post this. I am surprised most publications give FB/WhatsApp the moral highground, given what this tweet suggests. Is there an angle I am missing?
I think this is actually a hedging by the author. If someone public is criticizing a CSAM system, they have to admit CSAM is a problem to avoid being considered a sympathizer and damage their rep.
Then we need to damage the rep of the people spouting blatant McCarthyisms. If their retort to serious security concerns is to call someone a pedophile, they aren't worth giving a platform to.
The other day I was texting on Messenger about a stove, and then a stove ad showed up on Facebook Marketplace, so you can bet they keep track of your other inputs as well.
"It's a bit awkward to see a Facebook representative dunking on Apple about privacy. But in the end it doesn't really matter if WhatsApp implements this if the underlying OS does."
Phone software should be considered under constant surveillance by design, given its relatively black-box nature, and I think this is not a surprise to most users. It's more concerning that this is now coming to the desktop. How can we make sure that Windows updates or virus definitions are what they say they are?
A statement like this is meaningless. Changing business, regulatory, and social pressures can cause WhatsApp to change direction tomorrow. Remember how at one time Mark Zuckerberg promised that Instagram and WhatsApp data would remain separate from Facebook[1]?
Exactly. Or when Facebook promised you'd never need a Facebook account to use an Oculus. They've shown over and over again they'll do whatever increases their market power and revenues.
That's what I find so disappointing about Apple. They were the one large tech player that had begun to build their brand on taking privacy seriously.
> They were the one large tech player that had begun to build their brand on taking privacy seriously.
"Brand" is right, because it always was, and always will be marketing and nothing more (unless maybe laws are changed and/or regulators get off their lazy asses).
Apple have built their brand on more than privacy -> they are doing privacy / trust / security etc.
I think you may find that folks outside HN don't take as dim a view of preventing the exploitation of children as the HN crowd does. Apple arguably has the #3 to #1 brand globally at the moment.
Let's give this some time to shake out and see how the android brands and others do (if they become the home of child exploitation and porn) brand wise.
> I think you may find that folks don't take as dim a view on preventing exploitation of children as HN crowd. ... (if they become the home of child exploitation and porn)
The parent comment is anything but 'bull'. To date, every single messenger app with E2E encryption has wound up becoming popular with unsavory groups like ISIS, Atomwaffen, organized criminals, etc.
While I'm personally uncomfortable with the privacy ramifications of what Apple is doing, Apple's decision is sensible in the grand scheme.
That means you need to scan everyone's photos and read everyone's messages, because someone might be saying something bad somewhere?
They'll move elsewhere, and millions of people will lose their privacy forever, and some will get killed, because they'll save a wrong kind of meme, and the local dictator will mark that hash as "bad".
There's basically three approaches a company that implements E2EE can take:
1. Anything-goes. Accept that customers will use it for evil and for good, and justify allowing the former by the latter.
2. Lie to customers. Eg: include a backdoor for law-enforcement.
3. On-device detection of illegal or dangerous content (as Apple is apparently doing)
Regardless of whether approach #1 has merit, I don't see any large company staying with it long term because it's a PR nightmare. No big company wants to have an endless stream of stomach-turning stories in the news. It's easy to predict headlines like "Breaking! World's Largest Human Trafficking Ring Conducts Slave Auctions on Apple Messages. Tim Cook Silent!"
I can call someone using a phone, tell him "bring two pairs of blue trousers and one shirt, and don't forget the bullets for the trousers", and no one will ever blame the phone company for me arranging some illegal deal. I can sell drugs in a McDonald's parking lot, and no one will blame McDonald's. I can take a cab to a future crime scene, and no one blames the taxi... or a bus... or a plane. I can use any of the ISPs to educate myself on how to get rid of a body or make a bomb or whatever, and no one blames Verizon. I can buy a Polaroid to take illegal photos, or a Canon or Nikon or whatever, and no one blames them. I can operate an illegal business from a hotel room, and no one blames the hotel. I can even send an encrypted printed-out message, and no one blames the post office. I can pimp out prostitutes on IKEA beds, and no one blames IKEA.
This is just another case of governments wanting to snoop on their people, and companies letting them... and for good PR they just wrap it in child abuse and terrorism.
There's actually another approach - no E2EE. Period.
Do folks really think AWS keys in China are not under the control of the government?
Do folks really think MMS and other messaging in apps like TikTok between Chinese users is not accessible by the government?
Zoom?
Normal phone calls?
Folks understand that Apple's Chinese data centers are run by the state-owned Guizhou-Cloud Big Data Industry (GCBD)? That Apple has had to agree to keep the encryption keys local to those data centers, under the control of GCBD employees? Why else would Apple have to change how it does encryption for Chinese users and make sure keys are locally available?
Make an actual argument. HN is getting a bit pathetic. Address the issues / topics raised.
What content does your comment add? Nothing.
I am pointing out that there already are efforts around total encrypted privacy. Tor network is doing it. So are others. As they are taken over by bad actors - the rest of the world starts cutting them off and blocking them.
When folks talk about this hurting apples brand - you could not be more wrong. Seriously, address the points I'm making.
Apple is currently providing relatively strong E2E encryption (a leader actually in that). They are now going to match that with screening on probably the #1 issue that is out there around encrypted networks, the issue that often forces folks to backdoor things.
They also are going to be offering what seem like thoughtful features for users. Parents in particular with kids accounts have concerns (reasonable) about tech - and they will help to address that.
For all the "brand damage" HN goes on about here, in the larger world this may be popular and or very accepted.
And yes, if Android becomes known as the home of the pedos, that's not going to be an Apple brand problem but an Android one. Surprisingly often we've seen others COPY Apple, usually after loudly pointing out that they won't.
> Make an actual argument. HN is getting a bit pathetic.
Did you make an actual argument? What's pathetic is accusing anyone who disagrees with you of supporting child exploitation. None of the people I care about who were victims of some kind of abuse would have been helped by having spyware on phones. I don't make a habit of swearing on HN, nor of engaging with people who argue and accuse in bad faith, so I'll leave it at that.
They could prove it by publishing the code and making the builds reproducible so they have the same checksum as the binaries in the app store.
I consider whatever they say lip service until proven otherwise. How anyone can call their app privacy oriented without at least showing the code is insane. Use Signal, etc., etc.
Publishing the code and making the builds reproducible doesn't prove anything on its own. This can be used for evil just by manipulating the set of hashes that it matches against, even with all of the code doing exactly what they said it will do.
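As a minimal sketch of what the reproducible-builds proposal two comments up would enable (the artifact names are hypothetical): anyone rebuilding the published source with a deterministic toolchain can compare their digest against the shipped binary, though, as the rebuttal above notes, a verified binary still says nothing about the hash list it is fed at runtime.

```python
import hashlib


def sha256_digest(artifact: bytes) -> str:
    """Digest of a build artifact's bytes, as published alongside a release."""
    return hashlib.sha256(artifact).hexdigest()


def builds_match(local_build: bytes, store_binary: bytes) -> bool:
    """With reproducible builds, compiling the published source must yield
    byte-identical output, so matching digests confirm the store binary
    came from that source; a mismatch means the binary differs."""
    return sha256_digest(local_build) == sha256_digest(store_binary)
```

In practice you would hash the downloaded app bundle and your own build output; the comparison itself is this trivial, and the hard part is making the toolchain deterministic.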
I don't think it's meaningless. The head of WhatsApp publicly opposing it affects the conversation and social pressures. This matters because politics is downstream from culture. Certainly it's better than saying nothing or being in favor of it.
> This matters because politics is downstream from culture.
I don't think this is true, at least not most of the time. We had a summer of "defund the police" protests and BLM signs seem to be everywhere, yet police budgets in most cities have only increased[1].
Good lord, these are the same folks who had to get rid of the WhatsApp founder because he wouldn't go along with their ad-targeting and integration plans (all of which they had promised not to pursue).
That’s not what they said. They’re just pointing out that statements do not always hold true over time and history tells us that a Facebook division might not be the best at holding promises.
They also did not mention iMessage. There are several other popular messaging applications out there to choose from.
iMessage isn't spying on you. There are two aspects to the scanning. One is if you use iCloud. The other is if you send photos to a minor via iMessage.
If you get flagged by the iCloud scanning, you could be reported to the authorities. But you can opt out of iCloud scanning by not using iCloud.
If you get flagged by the iMessage system, you are not reported to the authorities. The parents of that person may receive a notification. But if you are sending photos to a kid on a device set up by their parents, you should pretty well expect that photos you send may be seen by their parents.
I understand the privacy concerns raised by Apple's about-face here. But we should be clear about what is and isn't happening.
WhatsApp comes off looking like a defender of privacy by saying "we won't do what Apple has done" but in reality the most objectionable thing Apple is doing relates to cloud-based photo storage, not messaging. The messaging piece is a new type of parental controls and does not involve the authorities in any event.
If you read GP's comment, it conflated these two issues as if they both applied to iMessage. The point is that you don't have to opt out of iMessage to avoid being reported to the authorities. You just have to not use iCloud, a system that has many excellent competitors. You can also roll your own. This is not as big a deal as if one of the major messaging systems were snooping through all your messages and photos. That is what GP made it sound like.
How do you know? Apple makes a lot of claims about their security but these claims probably don't preclude Apple spying on you somehow. Especially if a warrant compels them.
I'm sure he's an intelligent executive, but can we stop congratulating people solely for being at the top of the org chart? What has Zuckerberg, personally and without aid or advice, done to prove himself a "master of pivoting"?
> Apple, not you, controls what you can install on mobile devices
> Apple corporate headquarters keeps a tight lock on the apps available for its mobile operating system (iOS), which is used on the iPhone, iPad, and Apple Watch. Software developers even have to pay a tax to Apple to publish their work in the App Store. With the vice-like grip they have on Macs now thanks to Apple Silicon, new Mac computers now forbid you from installing free software applications or operating systems, due to improper use of code signing.
> Apple also prevents you from changing the operating system on the devices, so there's no way to escape the restrictions. If you try to change the software on your device, Apple's lawyers claim you are a criminal under the Digital Millennium Copyright Act (DMCA). They've done this as recently as December 2019, when they used the DMCA to remove a post to Twitter that revealed an iPhone encryption key.
He will continue to be right until everyone wakes up and uses alternatives. Unfortunately the social inertia has caused people to stay because their friends are using it.
Given that the most overused words in services operated by FAAMG companies are 'privacy', 'security' and 'trust' one should question:
What sort of act or event will it take to get their complete attention in plain layman terms that it is so severe for the majority of users, that they now realise it irreparably violates all three of those words?
It would have to be far worse than the whole PRISM programme and this combined to motivate them to move to other alternatives. When that time comes, let's see who escapes before then.
> What sort of act or event will it take to get their complete attention in plain layman terms that it is so severe for the majority of users, that they now realise it irreparably violates all three of those words?
I think first developers need to wake up. Then users will follow.
However, for developers who make money in the Apple ecosystem this is harder. As the Upton Sinclair line goes: it's hard to get someone to understand something when their salary depends on their not understanding it.