If I have something that spies 24/7 in my home or around me I throw it in the recycle bin.
No more tech talk or implementation talk about closed software and dark-pattern-designed products from any company.
Apple is using its market-leader position in "user convenience" to normalize on-device user behavior analysis, data processing, and active policing. This is madness.
I expect police and three-letter agencies to use their own hacking tools and budget. I will not pay the bill on multiple levels to be monitored and classified. Yes, you can vote with your wallet as a first reaction; as a second, you can push your representatives for a political and legislative response.
Imagine this news in 2010. Some Apple stores would surely have been burned to the ground.
But make no mistake:
If you don't buy products created from the get-go to snoop around and extract your data, you will have more privacy.
The only way into the future for people who want peace of mind is not to buy into smart homes, smart appliances, smart cars, and smart surveillance. Period.
They are called smart for a reason. Because we are becoming stupider.
Start self-hosting and keeping local backups; there are tech-savvy people moving away from the surveillance state and building a more private and sustainable future for themselves.
Apple is the big writing on the wall. They act as a political entity now; the users are their voters, and in a classic political move, they are selling the audience to the highest bidder.
> If I have something that spies 24/7 in my home or around me I throw it in the recycle bin.
Unless you’ve figured out how to eliminate the pervasive use of common client SDKs from popular mobile apps, or forgo mobile apps altogether, this is hyperbole.
Run a MITM proxy while using Android or iOS and you’ll discover a wide array of apps using a ‘usual suspects’ set of client SDKs, all checking in frequently.
Go read the docs for those SDKs. It turns out they’re offering any number of user analytics, registration, or management conveniences. It’s easy to see why devs include them like boilerplate.
Look closer, you’ll find them doing cross app and cross device linking and behavior aggregation, and “sharing with our partners”. Tracing further, the partners are adtech. Look more broadly, you’ll find these used as sources by firms commercially offering user screening, fraud prevention, credit approval, etc. to enterprises that are making increasingly automated decisions about who qualifies and who doesn’t.
We aren’t talking about this*, and that’s weird.
---
* Footnote: We talked about it when weather apps were using some offending location SDKs, Apple got more aggressive on surfacing ongoing user location tracking. We talk about it when one firm such as Facebook does it across their own properties. We don’t talk about it when everyone from indie devs to mega banks are doing it by using these client SDKs and libs, either on purpose or inadvertently.
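The cross-app linking described above can be sketched in a few lines. This is a hypothetical toy example (the app names, event fields, and the `aggregate_events` helper are all invented for illustration, not taken from any real SDK); real adtech backends do essentially this server-side, keyed on a shared identifier such as a device advertising ID.

```python
# Toy sketch of how a shared client SDK enables cross-app linking:
# events reported by unrelated apps are merged under one device ID.
# All names and shapes here are hypothetical, for illustration only.
from collections import defaultdict

def aggregate_events(events):
    """Group events from many apps under one device identifier."""
    profiles = defaultdict(list)
    for event in events:
        profiles[event["device_id"]].append(
            {"app": event["app"], "action": event["action"]}
        )
    return dict(profiles)

events = [
    {"device_id": "ad-id-123", "app": "weather", "action": "opened"},
    {"device_id": "ad-id-123", "app": "banking", "action": "login"},
    {"device_id": "ad-id-999", "app": "weather", "action": "opened"},
]

profiles = aggregate_events(events)
# One device ID now links behavior across unrelated apps.
print(profiles["ad-id-123"])
```

The point of the sketch: neither the weather app nor the banking app needs to know about the other; the shared identifier does the linking for them.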
"Look closer, you’ll find them doing cross app and cross device linking and behavior aggregation, and “sharing with our partners”. Tracing further, the partners are adtech. Look more broadly, you’ll find these used as sources by firms commercially offering user screening, fraud prevention, credit approval, etc. to enterprises that are making increasingly automated decisions about who qualifies and who doesn’t.
We aren’t talking about this*, and that’s weird."
Agree.
I avoid mobile apps like the plague; the only emergency exception is banking apps, and for those I have an old iPhone SE. Other than that, mobile apps are a no-go for me.
Until I have a Linux phone with acceptable UX in my hands, I am out of the whole mobile-app idea.
Banks are among those making telemetry calls they have no idea they’re making. Your bank-app strategy is the same one you have to use to install or update most IoT today, including appliances that shouldn’t be IoT at all. Those too incorporate these SDKs.
Perhaps having a “Linux phone with acceptable UX” in your hands will change product manager and developer behaviors with regards to client-side SDKs. Perhaps Linux will step up and block developers of these SDKs from monitoring Linux users, the way Apple updated their OS (against developer wishes) to block always-on location tracking.
Actually, I am planning to shut down the banking-app emergency scenario (installed only because of the early pandemic situation) and use only web access through a real computer.
There is no absolute in this, but I am certain that in my day to day life I have enough privacy compared to regular users of a smartphone or computer.
This old iPhone is only active for banking transactions, and only when I don't have a laptop around.
So the compromise is acceptable.
Closed markets guarantee income. Apple products are not simply consumer devices; they're used by big-box stores to handle inventory, and many companies require them for specific apps/ecosystems. Apple and universities conspire against students to require the purchase of MacBooks and other Apple hardware, paid for by funny-money student loans, which ultimately means taxpayers footing the bill for Apple hardware.
Apple is a company reliant on public taxpayer dollars, and should be regulated as such.
I worked for a university in the IT department for a few years. Everything we implemented needed to work for Windows and Mac devices. For mobile, both iOS and Android were supported.
Students were free to purchase whatever hardware they wanted. At the end of the day they could get their studies done on either platform. The only course that required an Apple computer was the iOS development class, and with Swift becoming more popular I believe there’s a way to do that on Windows now. Either way, if a student did not have access to an Apple computer, there were entire labs of Apple devices they could use.
Universities are not mandating Apple hardware. Not by a long shot. Maybe your anecdotal experience is different, but I can assure you that is not the norm.
"as second you can push your representatives for political and legislative reaction."
Apple is not doing this for anything other than political reasons. They see the writing on the wall: governments want this capability and are either explicitly asking Apple in secret or implicitly encouraging it with their rhetoric.
There's nothing implicit in what's going on. It's clear that Apple leadership promised the backdoor to people in governmental power and got great monopoly power in return.
I agree with the first part of your sentence (it's 100% speculation on my part), but the fantasy would be thinking that Tim Cook cares about child pornography so much that he would do a 180 on his principles after just 1-2 years.
The fantasy part is thinking that someone in the DOJ is offering up some sort of antitrust amnesty in exchange for reporting potential instances of CSAM to a private organization.
While this collusion theory is unlikely to be true, NCMEC is not just any "private" organization. NCMEC was established by the U.S. Congress in 1984 and has received up to $40 million of funding from the U.S. Congress every year.
Yesterday, while driving my wife to the nearest large town for shopping (one hour each way), I was explaining my thought process on getting an Android phone with an alternative software stack. The most painful part of this proposed transition is that our 2021 Honda Pilot basically becomes whichever of our iPhones is currently synced, and let me tell you, this is damned convenient (pun intended).
Currently I am using the full range of ProtonMail products, and I have my Google, Apple, and device settings as privacy-protective as I can. Google is a good example: I use a range of paid-for Google services, but except for YouTube Premium, I have data collection turned off. I let Google store YouTube data for 30 days, and still get a good user experience.
As others here have said, stay away from any smart home devices.
EDIT: reading this article (and her book Privacy is Power) yesterday is what kicked off our conversation. People who protect their privacy have advantages over people who don’t.
> People who protect their privacy have advantages over people who don’t.
This is the essential thing that we as tech-savvy professionals must communicate to normal users.
The key word is "advantages". We all like having one or two.
Blurring the lines of "ownership" and "renting" is the global corporate plan for the future.
But I have a question:
If data is the new petrol, and we are all the source, why do they expect us not only to give it away for free but also to pay the extraction fee?
If they don't have to pay us for our data, then they won't.
I am pretty much hard-over on privacy, yet I carefully use many fine Google services, keep a Facebook account because I really enjoy my Oculus, etc.
I am conflicted about Apple because my professional and personal digital lives are so smooth with an Apple Watch, and iPhone, iPad, and M1 MacBook Pro. Low friction.
I have several Linux laptops, including a really nice System76. The thought of getting a CalyxOS Android, and just using Linux is a rough decision.
I actually feel this one a lot: Android Auto and its contemporaries are surprisingly great experiences on the road, and it's going to be tough to drop them going forward. As long as my PinePhone ships with Bluetooth, I should be okay.
Yes, and this is a challenge. And nothing will change until your refrigerator reports you to the police for having too much beer in your possession :) Then all the "nothing to hide" people will revolt.
I'm not so worried about the fridge ratting me out. Cars on the other hand have lots of potential. Every traffic light could have a red light camera if it's the car watching and reporting.
In The Netherlands we had this problem with vehicle plate scanners. At first, they were strictly for vehicle tax purposes. Then later it came out they could be used by our three (well, four) letter agency in extraordinary circumstances. Then the police could request access in extreme cases, then less extreme, and now the ‘requesting access’ is nothing more than an automated process that is logged.
Sounds like you’re describing an external device monitoring vehicle actions. The person you’re replying to is describing the vehicle monitoring and reporting its own actions.
In my opinion, people have proven themselves too irresponsible in cars (in a way that results in them regularly killing other people), and while I dislike surveillance, I am for the monitoring/reporting measures that are in some European countries.
It’s because revoking privileges is not an effective means of getting these drivers off the road. It’s not a magic force field that prevents people from sitting behind the wheel. All you’re doing is taking a slip of paper away from people who have already demonstrated themselves to not only be irresponsible but also indifferent toward the law.
In fact, revoking driving privileges can be counter productive because it takes away their ability to be insured while driving.
Right. Like Apple's on-device scanning and reporting, cars could scan and report bad behaviors. Toyotas can already detect when a driver is drowsy based on their driving and prompt them to take a break. How hard would it be to make them call the police when the driving is severely impaired?
Imagine an Apple car, with closed software and 24/7 monitoring.
Hey, if you want convenience and virtue signaling you will pay for the privilege. They will collect the data, feed it to neural nets and you will be happy to be the petrol of advancements. No need for repair and ownership, services, services, services.
Monitoring, monitoring, monitoring.
Hey there is no other way to deliver a product. This is the future.
Smartphones are privacy invasive devices for the individual. Cars will be the automated police and control of the public. Lidars, cameras, lasers and mobility, what a surveillance dream.
Imagine a city with thousands of Apple cars, all connected to the mothership and working for "the man" 24/7. You can collect payments from users and governments as a reliable partner.
I like cars: cars as sculpture on wheels, cars as devices for transportation, cars as an expression of individuality.
Individuality will be damned in the future; if you think for yourself and don't follow the trends, you will be cast out as a lone wolf, a predator, or even a terrorist.
I have chosen the hill on which I will die. Individuality. Land ownership. Energy efficiency of the grid. Skills for survival and manufacturing. FOSS.
And actually instead of buying "the new shitty cars" I will convert my classic car collection to EV https://bit.ly/3kXRe4Z.
Yeah, someone here told me cars have microphones installed somewhere now. At this point I wouldn't be surprised if they also had GPS and wireless internet connection.
>Yeah, someone here told me cars have microphones installed somewhere now. At this point I wouldn't be surprised if they also had GPS and wireless internet connection.
??? I am honestly not sure if this is sarcastic or just entirely unaware of car infotainment system development over the last 15-20 years.
Yes, many cars have a microphone; it's there so that when your cellphone connects to your car via Bluetooth for music/audiobooks/GPS navigation/podcasts etc., you can answer calls via speakerphone.
As for GPS, that has been pretty much standard on upper-mid-range vehicles for at least ten years and an upgrade option for going on 20. Internet too, for over a decade, though usually via phone tethering. Welcome to the 21st century.
I don't care much about cars. I knew they had engine control computers but that's it.
So how many years do we have left before we need to worry about car malware? Am I gonna have to read a privacy policy next time I buy one? Are they gonna collect and sell my GPS data?
Apple is fine with that since they have enough of people hooked.
Google is fine with that too for the same reasons.
Facebook just knows that it's an empty threat, and they have WhatsApp, Instagram, and Oculus to fall back on.
Microsoft just laughs all the way to the bank receiving government and business money.
This is my approach. If sovereign governments don't want to regulate in my interest and protect me from surveillance capitalism, I'm simply going to take every route I can find to opt out of the whole enterprise. I have already started turning off my phone for the majority of the day and have opted out of all the major social networks. Hopefully a sufficient critical mass of people takes this route that it destroys enough economic value to push society in a better direction, with meaningful trust and security guarantees; but in the meantime, as long as the entire digital ecosystem is untrustworthy and actively antagonistic toward me, I'd rather just have nothing to do with it.
> If I have something that spies 24/7 in my home or around me I throw it in the recycle bin.
How do you define "spies"? That could apply to pretty much any phone (smartphone or not; the cellular provider, and almost certainly the government, tracks everywhere you go), and modern TVs, cars, and many modern appliances have some degree of spying built in.
Technically, spying on someone involves the element of the subject not knowing that they are being watched.
In this case, if you haven't been following this CSAM scandal and are using iCloud as usual, then should Apple turn this enigma machine on, you won't know your device is being used to spy on you.
I'm sure Apple will employ the best brains to craft some form of legalese to make sure you tick the "accept" button without fully understanding what it means, in the event they roll it out.
Do you know all the ways your TV is reporting your viewing history? Did you know it can also report your over the air viewing as well as what you play on your DVD player? Do they use the built-in microphone to listen for keywords?
What about your smart refrigerator - you might expect it to report refrigerator data (temperature, door openings,etc), but would you be surprised if it's also reporting your bedtime, wake up time, and when you're on vacation?
Do you know how long the cell phone company retains your location data, or what they do with it? Do most people even know the cell phone company tracks their location?
Maybe true, but I at least know in advance that these devices may be collecting data on me. I'd argue that this is mostly opt-in on my end, since I'm fully aware of the data collection and have a rough idea of its extent. The average Joe down the road, who doesn't care and still actively uses his Facebook account, stands no chance against the spying. He can't see it coming.
Personally, I live as "non-connected" a life as I possibly can. Any service providers I choose to entertain have to make do with the minimum information required to hold the account.
I know exactly how my TV is spying on me. It's not. It's a dumb TV. The streaming dongle, on the other hand, is a snitch reporting back to the mothership. Of course, since the mothership in this case is Amazon, which is also one of the content providers I subscribe to, I can't just block it at the firewall. (I am not concerned about the Atari 2600 or NES phoning home; my old original Xbox, on the other hand, complains and pretends to be gimped if it can't talk to Microsoft every so often.)
The problem here is similar to what happens with the War on Terror.
1) It is noted that governments could, in theory, be spying on everything. Widely dismissed as silly.
2) There is a trigger event (eg, 9/11). Inside government, a standard slips. They start spying on everything.
3) Years later it becomes public knowledge. It is too hard to monitor what is going on now though, so it is difficult to tell who specifically should be outraged about what.
4) The situation persists because it is convenient to the people in charge.
5) (TBD) It gets turned on political dissidents. I'm expecting everyone will agree on this point in 5 or so years; at the moment it is probably more a partisan observation.
Apple's partnering up will follow the same pattern. There just isn't enough transparency about what closed source software does.
This is happening with Extinction Rebellion in the UK now. The Home Secretary is using the police to harass groups she disagrees with and changing the law to stop protests she doesn’t like. I think having “infinite surveillance” with someone like this in charge is a terrifying idea.
Ever since Theresa May, I've always thought that the Home Secretary is one of the most dangerous people in the UK - they always want more surveillance and anti-privacy measures, it's never enough for them.
A good read covering immigration policies over the years - thanks for pointing me at it.
This quote stood out:
“They are terrified of the thought that any concession by them that can be perceived as soft or liberal or sympathetic, might leak out and be another set of Daily Mail or Sun or Telegraph stories,”
There is a really toxic and dangerous relationship between a large segment of the British media (especially the tabloid press), a large segment of the British public, and the government. It very often feels like the likes of the Daily Mail are the tail that wags the dog, and I just can't see how this long-running status quo can ever end.
Interesting. Personally, I feel that the Guardian, its readers, and government and media elites, are the ones in a toxic and dangerous love triangle. Do either of these ideas really mean anything more than "I disagree with the other side"?
I don't simply mean "I disagree with the other" side though.
I mean it feels like there is something very corrupt about the way the British public is kept whipped up into a FUD-driven frenzy by the tabloid press, and that the government (whether Labour or Conservative) seemingly makes many decisions to pander to that segment, rather than based on what's actually best for the country and all of its people.
Ah, but you see, it feels like there is something very corrupt about the way that elites live in a self-reinforcing bubble padded out by Left media, and making decisions based on that fantasy, rather than what's actually best for the country and all of its people....
Wow, the parallels between that and our Department of Home Affairs here in Australia are striking. I guess it’s not surprising, I think it was modelled on the UK’s Home Office when they put it together six or seven years ago. It has similarly been a disaster, but the people in charge love the power, so it won’t be broken back up into separate departments as it was before…
The US government was spying on everything well before 9/11 [1]. This sort of surveillance has been happening since the dawn of civilization. It's imperative for elites that wish to stay in power.
It was more narrowly focused in the Cold War era, though. NSA was very focused on the USSR, which was quite opaque from the outside. "Team A" worried about the USSR, "Team B", everything else. Huge amounts of effort had to be expended to get basic info like "where are their air bases"? One head of the NSA said during that period that they produced about three pieces of actionable intelligence per day, of which maybe one per week was important enough to be brought to the attention of the President. Much info went into NSA, but not much came out. NSA belonged entirely to DoD, which had no domestic role and wasn't interested in anybody who lacked serious military power.
The "trying to find small numbers of people who might do some terrorist act via SIGINT" is post-9/11. That's when surveillance became dangerous. Now, law enforcement has access to far too much take. Worse, there's a huge US anti-terrorist establishment that's orders of magnitude larger than the number of terrorists and doesn't really have enough to do. So they self-generate work.
Didn't McCarthyism rely on or expand domestic spying? And didn't they extend suspicions and charges against far more people than the evidentiary basis would have supported? According to Wikipedia, the FBI's ranks nearly doubled in size to accommodate all the new investigations being pursued.
Yes. That was Hoover and the FBI. The CIA also did some domestic stuff, more during the 1960s than later. NSA, not so much.
There wasn't much wiretapping in the US before the advent of electronic central offices. Phone switches just weren't equipped for it. Somebody had to physically connect wires to listen in. (Yes, you could listen to a few lines remotely using the test gear. Telcos hated that, because it tied up the automatic line insulation test equipment; COs had only 2 to 4 sets of that, each set took up three racks, and each could only do one line at a time.) In Giuliani's old book about taking down the New York mob, he writes about the government having to pay phone bills to New York Telephone for wiretap lines. Once they didn't pay the bill, and the billing system billed the party being tapped. That's where the pressure for CALEA and built-in wiretapping began.
> That was Hoover and the FBI. The CIA also did some domestic stuff, more during the 1960s than later. NSA, not so much.
Project MINARET was one of the first times the public was made aware of the NSA. It started in 1962 and it involved the wiretapping of US citizens like Martin Luther King Jr and Senator Frank Church.
Yes, and it's almost expected at this point. But a private corporation doing it voluntarily (Apple really doesn't have to implement this; lots of articles on this point) is at least something we can discuss and hopefully vote with our wallets against. You can't do that against a government.
That's the thing, is it really voluntary? I'm not sure if apple has as much leverage as everyone is thinking. The Chinese government certainly forces companies to do this sort of thing, and there is precedent in the US as well with national security letters and warrant canaries. AT&T has basically been an extension of the NSA, with tap rooms built into their centers, and I'm certain all kinds of things have been forced on companies using some sort of leverage behind the scenes.
"Vote with our wallet" is an illusion. Do you really think you can cast a vote by _NOT buying_ an iPhone?
In large companies (like Apple), there is tracking of conversions (saw the ad, bought the product), churn (a client who stopped coming/buying), and interest/reach (who am I targeting with this ad). And that's it. There is no tracking of "people who would have bought an iPhone but chose not to SPECIFICALLY because of the new tracking feature we added." Thinking this kind of metric is relevant and measurable is a complete fantasy, and borderline silly.
Public discussion might influence product decisions, but it has to be hugely controversial to have any effect.
You can't "vote with your wallet." That's it. I have no idea how such a belief made its way into pop culture.
If there's anything that will make a business listen, it's money (or lack of it). When it affects the bottom line, they are forced to listen.
Sure, I don't affect them much by myself. But everyone who mentions Apple in my presence will find out why I am not sponsoring them.
There is also the crappy hardware design, not just the privacy issues (which negate the advertising spending they made the months before). Here is what Louis Rossmann, who repairs Apple devices, says: https://www.youtube.com/watch?v=AUaJ8pDlxi8
Pretty clearly, forty-plus years of sustained warnings and criticisms[1] of increasing surveillance and security failures have failed to have any substantive influence on the industry.
It's not that the concerns aren't justified. It's that they're far too complex to be communicated and grasped by the public as a whole, and they facilitate far too much by way of immediate profit and market advantage to the vendors who will more than happily ignore the warnings.
A boycott by the vanishingly small fraction of the market which does fully grasp the issue is no threat at all.
> Do you really think you can cast a vote by _NOT buying_ an iPhone
Isn't that how any vote works? My ballot is one of several hundred million - statistically meaningless by itself. But it is a vote. And if enough people vote, the millions of statistically meaningless datapoints become meaningful.
And in the end, even if my single wallet-vote is meaningless, I'm no longer affected by what Apple chooses to do with their phones. A successful vote, IMO.
To boycott, a large portion of buyers needs to be on board. That's why "voting with your wallet" alone, as a lone "rational agent," doesn't work. Are you going to vote and convince others to boycott?
You need to correctly assign the churn event to the privacy issue. This is a) very difficult to do and b) not evaluated because churn relates to marketing, not product design.
"Wallet voters" live under the illusion that their intent is known and taken into account by BigCo decision makers. This is NOT the case. They just don't show up in reports. So what do you think they are going to do about it? Nothing.
If you want to influence product design, you can talk about it online and condemn their choice. If that's big and controversial enough, they might do something about it. And you can do that while buying a new iPhone as well, because it doesn't matter.
As stated above, if you want to show up in the churn, many "voters" are needed (as in millions); a few guys mad at Apple who stop buying a product are not even a blip. You need to spread awareness and convince others to join your crusade.
The US undoubtedly forces Apple to hand over anything and everything they want. That's not fine, but I feel like we have no say over it anyway. Apple announcing publicly that they will scan your photos is something they chose to do; the law doesn't require them to do this in their implementation of iCloud.
They probably made a deal with the Justice Dept.: We give you surveillance in exchange for not pursuing an anti-trust investigation. That's the only prize that could justify Apple's "voluntary" behavior.
Apple in 2022: it would be a shame if dissidents could download apps outside the App Store, like Signal, which refuses to implement an encryption backdoor...
Apple has been up front about what they're doing. They're explaining how what they're doing is specifically designed to protect your privacy in practical terms. No, they probably don't have to be doing this, and I would be happy if they decided not to.
But do you really think that any other phone you buy—apart from something ridiculously niche like a Librem—doesn't have exactly the same problems, only they're still keeping it secret from you?
"Voting with your wallet" only works if there are realistic alternatives.
>But do you really think that any other phone you buy... doesn't have exactly the same problems, only they're still keeping it secret from you?
At the moment, ALL other smartphone models (except the iPhone, soon) don't have this problem. There's no reason to bother with client-side scanning given the current server-side implementations, and this scanning can't be hidden for long.
Apple does not spy on your device. They only get information about the data you agree to send them by using their cloud services. It's nothing remotely close to the surveillance we are talking about here.
I think that's a very dishonest take. The scanning takes place on your device before any data is sent to Apple. The fact that you were intending to send that data to iCloud is almost irrelevant to the entire discussion, since Apple has clearly built a system for scanning your files on your device, not in the cloud. There's nothing stopping them from using the exact same technology to scan for literally anything on your phone.
It's like if Apple installed a 360° motorized camera in your home but said "don't worry, it's only pointing at the door!"
Except Apple already turned iCloud on without really making a big deal about it; just bump the storage while making everything automatic, and nobody would have noticed. This outrage just seems silly in that context, because iCloud data is already available on request: https://www.apple.com/legal/privacy/law-enforcement-guidelin...
This is "simply" Apple leveraging people's phones to run perceptual hashes rather than paying for the servers to do it themselves. Which is why the outrage caught them off guard: from their perspective nothing changed; governments have been spying like this since 2011.
And on the bit about it being on your phone (rather than their servers), I think it opens the door to an entirely new phenomenon that either companies or governments want people to begin accepting: you no longer own your phone, but rather, rent it under certain conditions.
Those conditions would undoubtedly be tied to social credit scores, required viewing of ads, required on-demand "checking in" with the government, and direct access to local phone data on demand to catch and prosecute political dissidents sharing critical information and memes.
Example: Australians are now required to install an app that has them take a photo of themselves and send it to authorities to enforce COVID mandates like staying in one location, even if you're not sick.
You never owned an iPhone. You can't read the OS source, you can't install another OS, you can't sideload apps, you can't repair it yourself, you can't buy official spare parts, and you cannot even reuse the parts of your phone.
There are no changes whatsoever in that regard. You never owned your iPhone, and you still don't, no more, no less than before.
I think the idea that Apple is just moving rote hash calculations from the server to the phone - and therefore they were “surprised” anyone cared - does a disservice to the smart people at Apple. They’re not dummies and they didn’t think this would be an easy sell.
The question of whether we can and should build client-side scanning has been a huge and controversial issue in our community since at least 2019 — when AG William Barr wrote an open letter to Facebook requesting these features.
As a cryptographer who works on E2EE I’ve been reading opinion pieces on this, writing pieces and attending Stanford-hosted workshops since that time. Many people attended those workshops from child safety groups, policy think tanks and other tech firms. Apple was also invited and declined to attend. But to imagine they didn’t realize how controversial these systems would be — that defies belief. They would have to be living in a cave, and the Apple security leaders don’t live in a cave.
The Apple system was clearly designed to enable scanning on E2EE backup files. The fact that iCloud isn’t E2EE right now is amusing, but I think it’s safe to assume that they want to enable that feature but felt they needed this coverage. Apple doesn’t share their reasoning, but it’s fairly obvious they felt some strong pressure — largely instigated by government agencies and members of the Senate - to implement some kind of “voluntary compromise” here. This is the one they chose.
Where they were surprised, I think, is that there was so much broad negative reaction beyond the usual suspects like privacy groups. As you said, server-side scanning is already ubiquitous so what’s the big deal? My suspicion is that “server side scanning of your private files is ubiquitous” is something that is only actually well-known to tech experts. Most of the general public is ignorant about all this, and they only really get a chance to react to any of this technology when something big makes it national news: as Apple did this summer. (Putting it on your phone and making it apply to private backup photos rather than shared albums, as some cloud providers have in the past, also didn’t help.)
Encryption at rest is easy, but E2EE isn’t on the table simply because they don’t trust people to keep secure backups of their own private keys. Thus Apple needs a way in, and if Apple has a way in they are required to let the FBI in.
As to scanning on device: that alone isn't how the child porn system works, because the list being compared against is kept secret. So the system inherently requires an upload of each image for comparison. Further, it needs to actually upload an image for comparison, not just a hash.
I think you are wrong about this. Android has already enabled E2EE backups using their Titan HSMs. While Apple will certainly have some concerns about UX, they already protect user data like passwords E2E. I anticipate an (at very least) opt-in E2E backup system to be available within a year.
Your phone is the camera. It can already read and write everything that's inside because that's how an operating system works.
There is zero breach of privacy unless the results of the scan are sent without your consent. Just like everything else that's on your phone, scanned or not.
> "The fact that you were intending to send that data to iCloud is almost irrelevant to the entire discussion since Apple has clearly built a system for scanning your files on your device"
This is a very dishonest take. They already have a system which runs on your device and processes all photos in the photo library, tagging categories like faces and pets and food and so on. They could have added this new capability into that one with much less engineering and design effort, using a system which scans all your files. They didn't; they went to a lot of extra effort not to. They already have system services, e.g. location awareness or device monitoring and diagnostics; they could have built this new system as something similar, which would run 24/7, access everything, and be a simpler design. And they didn't.
What they did was put it into the iCloud upload feature to be matched by additional code on the iCloud server side. Something which took them more time, more effort, more complexity, and gave the whole thing less power and less flexibility.
If this system is as dreadful as all the complaints say, there should be plenty of ways to show that without having to make stuff up. If describing it accurately isn't scary enough, maybe it's not as scary as y'all want people to think.
You could argue that the public at large "agrees" to total surveillance by:
1. Claiming such things are conspiracy theories, despite an abundance of evidence
2. Not giving a shit, even during the (very brief) period where it was covered (very briefly) in the news
3. Buying into all the thinly-veiled gov and media FUD about terrorism, paedophiles, organised criminals and communists
4. Not protesting en-masse
5. Not voting for a party that runs on privacy and anti-surveillance policies
6. Not starting such a party as in (5), since AFAIK it doesn't even exist...
Assange is being punished by almost all western countries. Sweden had a big role in it too and so does UK. And Australia is doing absolutely nothing to prevent it from happening to their citizen.
This. It literally happened to me yesterday.
The internet was blocked altogether, and one SIM card service was allowed after a long time, so I did something interesting.
I asked a friend in another country to log into an old Reddit account; I dictated the text I wanted to post to him and he pushed it.
I had kept that Reddit account dormant for over 7 months for this very purpose, because I remembered reading somewhere about Reddit's 90-day history policy or something.
This is the level of threat I am looking at.
I was willing to dictate a PNG in base64, character by character, so that the other person could reconstitute it, but they reopened the internet the next day. This is the level of threat facing a casual internet user.
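For the curious, the workaround described above, turning a binary file into dictatable text, is just base64 encoding. A minimal sketch (the byte string here is a placeholder, not a real PNG):

```python
import base64

# Encode binary data as ASCII text that can be read aloud character by
# character, then decode it on the other side to recover the exact bytes.
png_bytes = b"\x89PNG\r\n\x1a\n...image data..."  # placeholder, not a real PNG
spoken = base64.b64encode(png_bytes).decode("ascii")
recovered = base64.b64decode(spoken)
assert recovered == png_bytes  # lossless round trip
```

Base64 inflates the data by about a third, so dictating anything beyond a small image this way would be a serious undertaking, which underlines the point being made.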
I’m really sorry to know this. And I’d like to apologise for not being able to convince some of my close friends to not vote the fascists in, in the last national elections.
The upcoming Mao Zedong or Xi is the one in charge of India. That is what he/she is apologising for. This person is saying they did not know in 2014 that the country would crumble like this.
So this is step 1 then and we’re basically waiting for an event for step 2, because “privacy-preserving” surveillance system is production ready at this point and is going to be deployed in near term.
Just a minor update away from ML-powered thought-crime prevention as step 3.
Sadly, “privacy-preserving” surveillance is already a thing. After the Snowden revelations the NSA didn't stop collecting everyone's data, they just refined the legal justification. They claim that it isn't illegal surveillance if they don't look at the data.
So if you become a person of interest they get a warrant and then trawl through your digital life since forever.
So… await elites making a big pedophile catch (perhaps recycling Epstein’s story, which gained a lot of popularity among the youths), and using it as a launchpad for “see, we needed an automatic iPhone search”.
Per number 5, even sticking to just the verified stuff it is well known Obama used these powers to spy on Trump’s campaign, and Trump used them to spy on Biden/force Apple to be silent giving up Biden-people’s iCloud data via NSL gags.
Even if it's well-known, and I'm not convinced it is since this is news to me, it won't be forever. Sharing some sources would help a lot both for people now and people finding this post 10, 20, 30 years from now.
"...force Apple to be silent giving up Biden-people’s iCloud data via NSL gags."
I terribly misread that as SNL gags.
My brain was awfully (in hindsight, hilariously) confused for a good two minutes... trying to figure out how exactly Apple was sidestepping the imposed silence, coalmine-canary-style, by creatively leaking some data to Biden's people via the medium of Saturday Night Live gags.
To be fair, (5) has been professed for decades and is yet to happen. I've been hearing it all my life and my hair is greying.
The worst hit democracy has taken was not from sinister NSA bureaucrats but from kooks calling themselves citizen journalists, real estate developers, one-book activists and Crossfit trainers.
I've been wondering for decades if privacy advocates are ever going to go back and examine the dystopian proclamations they've been making to see how many actually turned out to be true.
This sort of reminds me of the whole thing with lockdowns
1) It is noted that governments could, in theory, be requiring and enforcing lockdowns for reasons other than public health. Widely dismissed as silly and "anti-science".
2) Some scientists come out in opposition of lockdowns, evidence from other countries shows that lockdowns don't help, both get banned/censored or painted as lunacy.
3) Years later, it becomes public knowledge that lockdowns did not in fact help, and countries that had zero lockdown policy did just the same as those that did. It is also too hard to know what went through exactly and to hold people accountable, technically everyone was complicit.
4) The situation persists because it is convenient to the people in charge.
5) It gets turned on political dissidents. If a government wants to get rid of your noise, they can paint you as a danger to public health and suggest to the public to "cancel" you, by banning you or painting you as a "conspiracy theorist".
Australia is following this plan almost to the letter at the moment. People getting arrested for not wearing a mask while they're with their daughters and government officials "advising" the public not to talk to their neighbors and to avoid interacting with them socially, even in public, and even if wearing a mask.
People are still treating this like "oh, we don't know what's going on" or "it's still dangerous", while in a few decades or so, the way we're headed (if nothing changes) seems to be a form of soft occupation by CCP ideology. It'll be too late to change anything, and then people will wonder how the Australian government can pass a bill that allows police to add, delete, and modify your data during an investigation.
I know there is a case to be made as to whether lockdowns are effective, but as someone who is not trained in epidemiology, it seems like something that you can try that's well-intentioned and within your available set of tools. This would be especially true during the early phases of a pandemic, when you don't necessarily have great models for disease transmission, preventing more precise restrictions.
If the goal is to single out and persecute some specific opposition, this seems like an incredibly clunky and inefficient way to do so. Surely it's easier to tie them up by having some proxies bury them in court actions or plant some drugs on them, than to throw the entire country into rictus. If nothing else, they're having to personally pay for the economic blowback of declaring a lockdown and all the inevitable egg-on-their-face moments when they themselves break it.
I am really missing something in this whole thing. Everyone is already partnered with the government, and governments have access to all the servers. So that doesn't explain this outrage; Microsoft, Google, and Apple will all comply with warrants asking for access to cloud-stored data.
The outrage was: Apple were spying ON the device. But then they explained, they were hash matching so they didn’t need to give unfettered access to your cloud data.
The device doesn’t notify police, it has protections from false positives and is no MORE access than any company already has.
This seems like outrage with no substance behind it. Gmail scans your mail, Facebook your messages (on and off device). Why is Apple suddenly terrible for doing something that seems to honestly improve the situation not make it worse?
Because it's my device, it belongs to me and should obey me instead of gigantic corporations or governments. I don't care about some government agenda trying to improve whatever situation. I want my computer out of it. They can go "honestly improve the situation" elsewhere, hopefully as far away from me as possible. And we don't need to provide any other justification either. They're the ones who need damn good reasons and warrants before stepping foot in my property.
A user can have more or less control over their device. There are many shades of gray between "I fully control my device" and "I don't control my device." When a change to the device software, like the introduction of on-device scanning, reduces the level of agency the user has over their data on their device, it's an adverse change that the user is justified to protest.
How do you distinguish this from any other optional network-based service that does some of the processing client side? If you choose to use a particular client/server based service aren't you telling your device to do the client side processing that service requires, and hence the device is obeying you?
> How do you distinguish this from any other optional network-based service that does some of the processing client side?
The line is drawn at the network layer: the point at which my computer talks to their computer. If I don't like what their client is doing, I can reverse engineer their network communications and make my own client that lies to the server.
Like online video games. The game client ultimately doesn't matter. I can trash it, make my own client and automate the whole game if I want.
The device belongs to you, but the software belongs to Apple. If you want your device to fully belong to you, you need to find a device supporting software which you can control. Otherwise you're at the corporation's mercy when it comes to privacy and other sensitive topics.
I agree this is effectively how it is today in some countries, but I don't consider this an acceptable stance. Software cannot be exempted from normal rules of physical ownership of sold goods.
You sold me a device, which also means you sold me the software within because the device is useless without it. The software is not separate from the device, it is the device.
Of course, that doesn't mean I can do whatever I see fit with the software; I cannot redistribute it under my own terms. But insofar the actions I take with the software strictly pertain to my own usage of my own device, I should be allowed to do it. And it also means you (the company) shouldn't be able to do with the software installed on my device as you see fit.
> Otherwise you're at the corporation's mercy when it comes to privacy and other sensitive topics.
Despite HN’s desire, the vast majority of touch points in our daily lives are NOT open source. I’d rather laws be setup to safeguard my privacy versus relying on “FOSS” as if that’s a panacea.
Tell me you’re joking.
* It’s not your device. *
Ok, What does it mean to own something? To hold it? To be able to destroy it? To use the services it provides? Maybe the receipt makes you the owner? If that’s your definition, in a very limited sense, you own it. But you don’t control it, because you can’t understand it fully, because you didn’t design it. In fact no one person designed it, it’s the product of the group, and advanced technological products like the iPhone serve the interests of the group, first. It just so happens in most cases the interests of the group align and benefit the user. After all we are very similar to one another and so this is a mutually beneficial arrangement. You want to buy, they want to sell. You need a service, they want to serve and so on.
With a product like the iPhone you’re purchasing access to an iteration of a design. Delivering that to you in a white shrink wrapped box is a clever marketing illusion. You can’t see the many huge factories full of calibrated robotic pick and place machines or the lithography setup used to create the chips or the lines of workers that assembled the phone one tiny piece at a time. You don’t see the endless cell towers on the horizon and all the engineers that maintain them. And we haven’t even begun to discuss the human labor that forged the design. There’s easily 500 straight years of skilled engineering labor in the modern iPhone. Perhaps even double that.
Here’s my point already. There’s some brain damaged, mentally ill, psychotic people who like to rape children, take photos of it and share it with their friends. Then what happens is those kids go kill themselves. This problem is happening at scale and it’s not in the interests of the group to let it continue. If the group believes it’s interests and your own diverge, you’ll quickly discover who really owns that device. They control the product they designed and manufactured and they can do whatever they want via signed software update, and you can do exactly nothing about it. That’s a fact.
A $1200 iPhone might be the best deal ever offered to a human being in recorded history. But you’re buying a service, leasing a disposable product. You’re a guest. That’s the truth.
It’s a really ugly image I have of the simple type of person who’s having a hard time choosing between email privacy and stopping a child predator.
I'm buying a phone. I'm not buying a service, an experience, a design or whatever. I'm buying a phone. How is it even possible to twist this?
Buying a phone is really just a fancy way of saying I'm buying a computer. And if this world was just, this computer would not execute a single instruction without my consent. I don't really care what "the group" wants, they're not gonna use my computer for it.
> It’s a really ugly image I have of the simple type of person who’s having a hard time choosing between email privacy and stopping a child predator.
I don't have a hard time choosing between anything. I will choose privacy every single time. If authorities want to stop crime, they can go out there and do real police work. They don't get to surveil everyone on earth just because it would make things easier for them.
If you have such an easy time giving up your privacy and freedoms, then you should be the first to be scanned. Who are you anyway? You're not even posting under your real name, threshold.
You don’t want to be a part of the group and reap the benefits of collaboration and economies of scale? Well then your $1200 isn’t going very far. But you can have a lump of aluminum, a pile of sand and a sheet of glass. And there’s enough left over for a huge pile of books so you can learn how to fab chips, make circuit boards, write your own operating system and build your own phone network. Good luck
This idea that you’re somehow special and exist in a vacuum is privileged and arrogant. You’re relying on other people to do everything for you. They feed you and look after you when you’re sick. And they build these fantastic technologies and work long hard hours and die early just to make it affordable. You got a great deal, the best any living human ever was offered. But the group asks you to run a program you can’t even see so their children don’t get raped and suddenly you’ve been cheated and violated and wronged.
That's a very unreasonable take. Privacy is not reserved for "special" people, it's for everyone. A $1,200 smartphone is generally not considered a "great deal" and definitely not "the best deal ever offered to a human being in recorded history". And users are absolutely justified to oppose on-device surveillance introduced on their smartphone, regardless of whether you consider it a "product" or a "service".
Imagine you use some OS, say Ubuntu or Windows and the company behind it canonical/Microsoft creates an application that scans your files for illegal shit and reports you to the government. This application is optional you download and install it if you want it.
Who in the right mind would download and install such an application that runs in background , uses your resources and reports you to the police, no advantages for you.
So the equation is clear, the user has nothing to gain, a lot to lose if the system triggers and the user also needs to blindly trust(no way to check the hashes or the binaries) that the system will not be abused.
If you are a person who strongly thinks Apple should handle CSAM, then ask them why they did nothing until now. They collaborate with the FBI/NSA on other stuff.
> Imagine you use some OS, say Ubuntu or Windows and the company behind it canonical/Microsoft creates an application that scans your files for illegal shit and reports you to the government.
You don't have to imagine it. This has literally been happening for the past decade.
>The system that scans cloud drives for illegal images was created by Microsoft and Dartmouth College and donated to NCMEC. The organization creates signatures of the worst known images of child pornography, approximately 16,000 files at present. These file signatures are given to service providers who then try to match them to user files in order to prevent further distribution of the images themselves, a Microsoft spokesperson told NBC News. (Microsoft implemented image-matching technology in its own services, such as Bing and SkyDrive.)
Google has been scanning everything in your account for the past decade as well.
>a man [was] arrested on child pornography charges, after Google tipped off authorities about illegal images found in the Houston suspect's Gmail account
Apple will only scan photos you post to iCloud Photos, and their system is set up in such a way that they can't see the scan results until a threshold of 30 matches is met, to protect you from being flagged to the government over a few false positives. (They do have a human verify there is a real issue, and that you haven't just somehow posted more than 30 false positives to iCloud.)
Google and Microsoft's systems have no such protection.
Anyone who can issue a subpoena can maliciously use false positive data against you.
We've seen the data trove companies like Google retain used to imprison innocent people before.
>Innocent man, 23, sues Arizona police for $1.5million after being arrested for murder and jailed for six days when Google's GPS tracker wrongly placed him at the scene of the 2018 crime
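The server-side pipelines described in this thread reduce to set membership: hash each uploaded file and check the result against a provider-held signature list. A rough sketch, assuming an exact SHA-256 match for brevity (real systems like PhotoDNA use robust perceptual hashes so that resized or re-encoded copies still match; the byte strings below are placeholders):

```python
import hashlib

# Hypothetical provider-side check. KNOWN_SIGNATURES stands in for the
# signature list that organizations like NCMEC distribute to providers.
KNOWN_SIGNATURES = {
    hashlib.sha256(b"known-bad-file-bytes").hexdigest(),
}

def scan_upload(file_bytes: bytes) -> bool:
    """Return True if the uploaded file matches a known signature."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_SIGNATURES

print(scan_upload(b"known-bad-file-bytes"))  # True: flagged for review
print(scan_upload(b"holiday-photo-bytes"))   # False: ignored
```

The key property of this design, and the one under debate in the thread, is that the provider sees every file in the clear in order to hash it on the server.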
>You don't have to imagine it. This has literally been happening for the past decade.
You either do not understand or are trying to avoid the question.
Is there a TurnMeToThePolice app for Linux or Windows? And who installs it for free?
Server side makes 100% sense: I use Facebook, Gmail, and Dropbox for free, so I expect they will scan for stuff. For sharing files, I also expect that cloud providers will scan and respond to copyright violations; makes sense. For stuff you post on Facebook, I also expect they scan for porn or other things that are against their TOS.
But it would be foolish to install something that scans my local files and reports me to the police if I had nothing to gain from it. So: you already have the iPhone, you paid for it; if Apple wants you to allow them to install this, you should think: why? What do I have to gain? Do I trust Apple and my government? What about the future government and the future Apple? The equation is that you gain nothing and the future is unclear, so it is 100% a FACT that this is a net negative for the user.
Still, nobody has answered why Apple did not scan iCloud for CSAM before, and why they are starting now with this "think different" way of doing it. If you are a "think of the children" person, you should denounce the fact that they did nothing so far; if you are a privacy kind of person, you should ask why Apple is giving up on privacy now.
Apple prefers to keep that sort of data private, and a kiddie porn false positive is not the sort of data that should be vulnerable to misuse by anyone who issues a dragnet warrant.
Google's data hoards are already widely abused today.
>Google says geofence warrants make up one-quarter of all US demands
Another example would be that Google scans all your photos to identify the people in them and retains that data on their servers, while Apple chooses to perform those sorts of actions on your own device without uploading it to their servers.
"Please don't post insinuations about astroturfing, shilling, brigading, foreign agents and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data."
> You either do not understand or are trying to avoid the question
I understand perfectly.
Google and Microsoft have been scanning everything in your account for the past decade and turning customers over to the police.
Apple is set to start scanning iCloud Photos and only iCloud Photos in the future.
They waited until they found a way to preserve your privacy unless they were reasonably certain there was a real issue with what you were uploading to your online account, and not just a false positive.
With their system, Apple cannot see the scan results until your device informs them that you have crossed the threshold of 30 images flagged as kiddie porn and hands over the encryption key for the scan results. They can't turn over scan result data to the feds before that, because they can't read it.
Google and Microsoft, on the other hand, would have to hand over every account with a single false positive to anyone who issues a dragnet warrant, since they carry out their scans on the server.
Apple have the key to read your photos; they probably aren't even end-to-end encrypted, since you can view them from a browser. So this is fake news. They could have scanned the images on the server at the moment you uploaded them, they could have reported CSAM to the authorities and caught the bad guys.
I want the proof that Apple is unable to scan on the server, that they don't have the keys and the only solution is on device scanning.
Of course Apple can read data you have uploaded to iCloud.
Every cloud service can read the data you upload to it.
How do you think Google and Microsoft are scanning everything in your account?
What Apple cannot do is see the results of a scan that was not conducted on its servers, since those results are encrypted with a key Apple doesn't have until the threshold of 30 images matching known kiddie porn is reached.
1. Apple can read data you have uploaded to iCloud.
2. What Apple cannot do is see the results of a scan.
What makes (2) impossible? If (1) is true, then Apple can generate the scan and check the hashes on the server, so (2) is possible.
What is the advantage? Apple will also have to push the on-device scanning to laptops too, and what do you do with devices that don't get updated? Instead of a simple solution that works for all images in iCloud, you get a privacy-invasive solution that only works on some hardware running the latest software. It seems to me that this implementation is not solving the problem of "prevent CSAM from being uploaded to iCloud" but some other problem, and CSAM is a cover.
The scan is not conducted on Apple's servers, so they don't have access to the scan results the way Google and Microsoft do when they conduct the same scan on their server.
Apple designed the system that way because it is more private to keep potentially damaging information from being readable on their server, where it can be misused by anyone who can issue a warrant.
They can't decrypt the scan result until the device tells them that 30 images have matched kiddie porn, whereupon the device hands over the decryption key.
The theory is that you are unlikely to have 30 false positives, but the next step is to have a human make sure that's not what happened.
Since Google refuses to hire expensive human beings when a poorly performing algorithm is cheaper, I have no doubt Google is turning in people for as little as a single false positive.
>They can't see the scan result until the device tells them that 30 images have matched kiddie porn
Isn't this FALSE? The device hashes the images, but it does not have the database, so the hashes are sent to the server, and Apple's servers compare your hashes with the secret database; so Apple knows how many matches you have.
Your argument would make sense ONLY IF your images were encrypted and Apple had no way to decrypt them, so that the only way to compute the hash is with the creepy on-device code.
The system is designed as if iCloud photos is already E2EE. It's not currently, so Apple could have simply done mass decryption server side and scanned there.
But the way the CSAM system is designed works exactly as described. It's technically pretty cool. Each matching hash builds part of a key. Only when the key is complete (~30 matches) can the matches and only the matches be decrypted for review. This also only works on photos destined for iCloud, and actually makes it harder for LE to show up and say 'here is a warrant to scan all photos for X' since the matching hash db is included in the iOS release.
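The "each matching hash builds part of a key" design can be sketched with plain Shamir secret sharing: a key is split so that any 30 shares reconstruct it, while 29 reveal nothing. Apple's real protocol layers this with private set intersection and synthetic vouchers, so treat this only as an illustration of the threshold property:

```python
import random

# Sketch of the threshold idea: each positive match releases one share
# of a decryption key. Below the threshold the server learns nothing;
# at the threshold it can reconstruct the key. Parameters illustrative.
P = 2**127 - 1   # Mersenne prime used as the field modulus
THRESHOLD = 30   # ~30 matches needed, per Apple's stated design

def make_shares(secret, n_shares, k=THRESHOLD):
    """Split `secret` into n_shares points; any k reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n_shares + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x=0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

key = random.randrange(P)
shares = make_shares(key, 40)            # one share per matching photo
assert reconstruct(shares[:30]) == key   # 30 matches: key recoverable
```

With only 29 shares, interpolation yields an unrelated value, which is the mathematical basis for the claim that Apple "can't read the results" below the threshold.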
>The system is designed as if iCloud photos is already E2EE. It's not currently,
and it will never, ever be E2EE because of US laws, or if it is ever encrypted it will use backdoored NSA crypto (and Apple PR didn't even try to hint at it to calm the waters).
I agree the algorithm is pretty clever, but it feels like it is not designed to solve the CSAM problem but to look good on someone's CV.
Now you have the worst of both worlds: Apple has access to your photos on the server (and if they respected the law, they should already scan them for CSAM, since they are responsible for what they store and share, I mean when you share stuff), and Apple has a scanning program inside your phone.
> it feels like it is not designed to solve the CSAM problem but to look good on someone's CV
It feels like it's designed to protect customers from being accused of having kiddie porn (by prosecutors who issue a dragnet warrant of everyone who had a single positive result)
Dragnet warrants on location data have become very common.
>Google says geofence warrants make up one-quarter of all US demands
The solution to resisting these warrants is to never have access to the scan results, until you are reasonably sure there is a real problem.
By setting a threshold of 30 positive results before you can see any of the scan results, customers are much more protected from the inevitable false positives.
I would just like to point out, without diverging or causing too much of a "flame war", that this same line of thinking is the one we Libertarians point to regarding "voluntary" things like taxes. I.e., if you asked people to voluntarily sign up for taxes, "who in their right mind would" do so?
My overall point being: to the government, this is the same. It's a power imbalance, and they don't see anything different because they think they have the right. We opened up that can of worms a long time ago, and it's important for us to realize it instead of continuously living in pretend-land where we think we have a "choice" over what government does.
> I.e., if you asked people to voluntarily sign up for taxes, "who in their right mind would" do so?
Those who would naturally cooperate in the prisoner's dilemma, those who believe society is better off having a government instead of going full anarchy, those who think humanity is better off if the strong protect the weak.
So only the tiny subset of people who care about people instead of those who adhere to the "fuck you, I got mine" point of view.
I really didn't want to get into this, I just wanted to point out how we've already "breached" that whole "if you make this optional will people choose it" line. You're now attaching a moral argument about how only the "fuck you, I got mine" people will say no to taxes.
And as a side note: I, as a libertarian/an-cap, would be perfectly fine with paying for all manner of things (with taxes) if I could choose how it was spent to some nominal degree. Here we are in a discussion about government surveillance, which would probably be a non-issue if we all just had the ability to slide that slider next to the "NSA Contribution" heading in our tax-contribution forms.
You gain something from taxes, like police to protect your big pile of money and your villa. But from this scanning you gain nothing. Apple could have scanned iCloud and caught the evil guys already, if they exist, so you can't claim that society benefits from this when it could have been implemented exactly like Google did it a few years ago.
The gain is that an Apple user can assume Apple is not scanning their photo library for CSAM, because the only time Apple starts scanning is if the device gets 30 positive hash matches.
Whereas if they used another service, all their photos would be scanned all the time.
Hashes are generated for all images (I think a thumbnail is also generated), then sent to Apple, which checks each hash against the secret database; the if (matches >= 30) check runs on the server too.
I would say the scan is on the device and the check is on the server, to make sure the secret database of hashes does not get leaked. The image is never encrypted, and if the FBI (or the corresponding national agency) comes with a warrant, your iCloud data will be fully readable.
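The flow described above can be sketched roughly like this. It's a toy model: the real system uses NeuralHash (a perceptual hash) plus a private set intersection protocol so neither side learns about non-matches; the SHA-256 hashing, database contents, and function names here are illustrative only.

```python
import hashlib

MATCH_THRESHOLD = 30  # Apple's stated threshold before any human review

# Hypothetical server-side secret database of known-CSAM hashes.
known_hashes = {hashlib.sha256(b"known-bad-%d" % i).hexdigest() for i in range(5)}

def device_side(images):
    """On-device step: derive one hash per image; pixels never leave the phone."""
    return [hashlib.sha256(img).hexdigest() for img in images]

def server_side(hashes):
    """Server-side step: count matches against the secret database and only
    flag the account once the threshold is crossed."""
    matches = sum(1 for h in hashes if h in known_hashes)
    return matches >= MATCH_THRESHOLD

library = [b"holiday-photo-%d" % i for i in range(100)]
print(server_side(device_side(library)))  # False: no matches, nothing flagged
```

The point of the split is visible in the sketch: the device only ever emits hashes, while the comparison and the threshold logic live on the server next to the secret database.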
The image is never scanned either, unless you consider generating a hash to be scanning.
Given that hash comparison is how we treat passwords (an extremely sensitive piece of information), I don't see how anyone can make a compelling case that generating a hash of an image in some way compromises the privacy of the owner of the image or the contents of the image.
>The image is never scanned either, unless you consider generating a hash: scanning.
You are really torturing the language and logic.
Example: some malware lists all the files on your system and sends the file names to Disney. Your logic says that is not scanning your device, it is just listing the file names, and your logic implies the scanning happens on Disney's servers.
Checking an image against perceptual hashes is a little more than a string comparison, but that's moot. Also, "scanning" does not require anything be leaked for someone to call it that, which, again, is almost certainly what the person you replied to meant when they used the word. Your insistence here that the word doesn't fit that situation is peculiar.
Reporting filesystem stats is simply a bad analogy, it's not like checking image contents against perceptual hashes.
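To make the perceptual-hash point concrete, here is a toy average hash with a Hamming-distance comparison. Real systems like PhotoDNA or NeuralHash are far more elaborate; the pixel values and function names here are made up for illustration.

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if the pixel is at or
    above the image's mean brightness."""
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p >= mean)

def hamming(a, b):
    """Number of differing bits; small distance means 'perceptually close'."""
    return bin(a ^ b).count("1")

img_a = [10, 240, 20, 230, 15, 245, 25, 235]  # original "image"
img_b = [12, 238, 22, 231, 14, 246, 24, 236]  # slightly re-encoded copy
img_c = [200, 10, 210, 15, 220, 5, 215, 12]   # unrelated image

h_a, h_b, h_c = map(average_hash, (img_a, img_b, img_c))
print(hamming(h_a, h_b))  # 0: the near-duplicate still matches
print(hamming(h_a, h_c))  # 8: the unrelated image is far away
```

A byte-exact comparison (or a cryptographic hash, as used for passwords) would treat img_a and img_b as completely different, which is exactly why matching against perceptual hashes is more than a string comparison.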
Yes, Google and Apple are both capable of writing code to scan your device for anything. Only Apple (to our knowledge) has written this code. It's a distinction with meaning.
For the moment, yes, they are only scanning things that will eventually be stored on their servers. It's just another minor policy change to scan things that you are keeping locally.
My hope is that they will delay this until they can announce E2EE for iCloud, which would then at least be give-and-take on privacy, potentially a net win.
There is almost zero potential for abuse… its scan workflow ends in a human verifying that it isn't a false positive.
Not whataboutism; the alternative to Apple's solution is companies openly scanning your entire photo library. That's what's happening right now. People are upset because Apple suggested: "how about we only scan when we suspect there is something not right?"
Here's an interesting thought experiment for anyone who disagrees that the proposed system is problematic.
Let's say the police install a new device in your house. This device could be capable of seeing or hearing anything that happens in your entire home. However, the police claim that the device is only watching a specific door in your house and forwarding encrypted low-resolution photos to their servers "just in case" some criteria are met and they need to take a look. Never mind that it would only take a policy change and an OTA update that you have no visibility into, or chance of blocking, before it's watching your entire house in real time and sending practically anything to their servers. After all, they did release a carefully worded statement regarding what the device supposedly does.
But hey, you have several other doors to enter or exit your house from so what's the big deal right? You could seal that door and never be subjected to the system _as advertised_.
Would you trust the police in that scenario, regardless of whether you have anything to hide?
It’s not just your house, it’s everyone’s house. Our smartphones and computers are our digital homes. When something is installed in those homes which monitors everyone for potential crimes, that is what I would label as an unreasonable search and a violation of privacy. It doesn’t matter what they’re looking for, our personal devices should be inviolate unless there is a suspicion of crime, and even then a court order should be involved.
What apple doesn’t understand is that scanning on the device is worse for privacy than scanning on the server, not better.
The analogy is missing a key part of the system as designed today - putting something on someone else's property. The house analogies are terrible, but closer to what's happening is as follows.
I'm your neighbor and I want you to store a box for me. It's taped up so you don't really want to open it, but would you store it for me without knowing what's in it? You could be like other cloud providers, rip the tape off and rummage through looking for whatever.
Apple has designed a method for me to scan the box at my house only for CSAM (agreed upon by the intersection of multiple databases), then hand it to you with a note that says 'this is not CSAM'. Now you can store the box with some confidence in knowing you're not storing CSAM. It's also more private because you don't need to go rummaging through my box - even though you currently can if you want.
Of course we're starting from the basis that you have decided you don't want to store any CSAM. If you don't care about what you store it doesn't matter.
And you're right that Apple could make a policy change to scan my whole house and upload that somewhere, but guess what? iOS users (smart phone users in general) have always been one policy change away from Apple/Google doing something like uploading/sharing all the face ML data (which IMO is way more valuable than some hash matches). So nothing has really changed.
Analogies are always misleading. A cloud provider is no neighbor. They facilitate a specific service, and it is clear that the package is owned by the client. The cloud provider shouldn't have any liability for the data he stores, especially if it's encrypted and not shared to the public, nobody should ever care what's inside the package. If the authorities have a good reason to think that you have drugs or CP in your package, then they can force you to open it.
IMHO, no combination of ones and zeros should ever be illegal. The act of distributing them to others or creating them in the real world should be. The energy stored on a flash drive doesn't harm anybody, human action does.
iCloud servers are not your property, they are Apple's property. Even the neighbor terminology is accurate, since there is an implied trust higher than with a random stranger.
I don't trust anyone even though I have nothing to hide.
If Apple was serious about privacy maybe they'd put some of their lobbying / peer pressure efforts towards that instead of trendy social fads at least this once.
To continue the physical home analogy, let's say you're a renter and your landlord has the ability and access to install something in your home at any time without you knowing. Maybe they include it as part of the terms for your upcoming lease renewal, maybe they don't.
Does that mean you should just accept it when they decide to install one of those devices in your home?
There is a giant difference. If you are a developer you might understand this better: it is much easier to have an application's behavior set up from a config file or from a database than from code. So you have a config that looks like
scan_everything = false; and an update would flip this to true for everyone or for special users. Also, with the hashes, the check is not done locally, so the government will ask Apple "hey, check those hashes you have stored against this secret database too, and if you find even one match send it to us; here is a judge's order, so shut up and do it".
So after this, the job of governments around the world is easy: they throw hashes at Apple (and conveniently for Apple, they can say they had no idea what the original photos' content was),
whereas before, the government would have needed to ask Apple to write new software, and Apple could have said that it can't do it without starting a shit storm.
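The config-flip worry can be sketched in a few lines. Everything here is hypothetical: the flag name, the policy dict, and the selection function are illustrative, not Apple's actual code.

```python
# Hypothetical policy shipped alongside the scanner. The code path for
# scanning everything already exists; enabling it is a one-flag change.
policy = {
    "scan_everything": False,    # today: only photos queued for iCloud
    "extra_hash_databases": [],  # today: only the agreed CSAM set
}

def files_to_scan(local_files, icloud_queue, policy):
    """Select which files get hashed, driven purely by config."""
    if policy["scan_everything"]:
        return list(local_files)  # a pushed update could flip this silently
    return list(icloud_queue)

local = ["a.jpg", "b.jpg", "c.jpg"]
queued = ["a.jpg"]
print(files_to_scan(local, queued, policy))  # ['a.jpg']

policy["scan_everything"] = True             # the feared one-flag update
print(files_to_scan(local, queued, policy))  # ['a.jpg', 'b.jpg', 'c.jpg']
```

No new code ships in this scenario; the scope of the scan is a data value, which is the whole point of the "one policy change away" objection.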
Exactly, and given the amount of information Apple knows about you and your device (home country, current location, etc), testing those hashes against different datasets specific to each governmental jurisdiction is trivial.
But in reality it's you installing this device by turning on cloud sync. And it's the law that requires photos in the cloud to be scanned, which is done by all cloud providers, because the photos are unencrypted. How are we going to change that?
Now change that a little bit. Imagine that you're renting this place, and it's not a police requirement but a homeowner's requirement; the homeowner wants to prevent undesirable behaviour to protect everyone (or so he says).
That would be a better analogy. You don't own your iPhone at all. You own a few dozen grams of steel, glass and silicon. You don't even own parts of it; they're sealed, DRMed and non-functional without proprietary firmware. And of course you hardly own the entire device, as you can't control the software which runs on it.
That is nothing like this… you've just made an argument against having smartphones at all, not against Apple's improvement to protect the privacy of your photos.
Because the alternative is to scan all the images on the server. With the hash scanner only hash matches are scanned. All the other images are kept private.
The ability to avoid decrypting and mining your entire photo library in order to comply with US law. Instead, performing a hash match on the device gives their compliance team a partial key to decrypt only the images in question, to confirm it wasn't a false positive.
Meanwhile, on a Google Pixel, Google will just scan your entire library.
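The "partial key" mechanism is, per Apple's published technical summary, threshold secret sharing: each safety voucher carries a share, and only once the server holds 30 of them can it reconstruct the key for the matched images. Here is a toy Shamir secret-sharing sketch of that property; the field size, share counts, and names are illustrative, not Apple's parameters.

```python
import random

PRIME = 2**127 - 1   # prime field for the arithmetic (toy choice)
THRESHOLD = 30       # Apple's stated voucher threshold

def make_shares(secret, n, k=THRESHOLD):
    """Split `secret` into n shares; any k reconstruct it, fewer reveal nothing."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    poly = lambda x: sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = random.randrange(PRIME)
shares = make_shares(key, n=100)                   # one share rides in each voucher
print(reconstruct(shares[:THRESHOLD]) == key)      # True: 30 vouchers unlock the key
print(reconstruct(shares[:THRESHOLD - 1]) == key)  # False: 29 shares are useless
```

Below the threshold the server holds shares that are information-theoretically useless; at 30 it can decrypt only the matched images for human review, which is the claimed privacy advantage over bulk server-side scanning.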
Could you source Apple’s explanation, that you read?
It’s pretty common knowledge that most of iCloud (including Photos) is not E2E encrypted; in fact, here’s the proof right from Apple - https://support.apple.com/en-ae/HT202303.
> Mean while on a google pixel, google will just scan your entire library.
Google doesn’t scan anything on Pixel phones. Only photos and files that you upload to Google Photos or Drive. This is standard industry practice, and apparently Apple wasn’t doing that. So now they built this client side system that nobody else has, and created local surveillance.
> This device could be capable of seeing or hearing anything that happens in your entire home
This device is your smartphone. It can already watch and listen to everything you do. It can already read and write everything that's inside, because that's how an operating system works. The fact that it can now scan for CP makes zero difference privacy-wise.
If you don't trust your smartphone to do what they advertise, you don't use one.
If you believe Apple can make the iPhone send data to them or the governments without your consent and without even telling you, you shouldn't use Apple software in the first place because you'd have no reason to believe that such a feature hasn't already been implemented.
Your device scanning its content is not a breach of privacy until that information is sent without your consent, whether it is for malware or for child porn. Claiming that this is a slippery slope is like claiming that building a bridge is a slippery slope because the workers could place a bomb inside at any time.
> Apple has abandoned its once-famous commitment to security and privacy.
The cat's out of the bag now that this has never been more than a marketing gimmick. You can rely on Apple to protect your data from their competition, but other than that they'll be at least as happy to implement dystopian surveillance nightmares as the next megacorp. This time they even won the prize for the worst idea ever.
And why is this normal? Why can't they offer zero knowledge cloud storage? Encrypt before upload and decrypt after download. The less they know about the data, the less liable they are for anything. Data is and should be a massive liability for companies. They should aim to know as little as possible.
That’s changing the conversation isn’t it? The reality is cloud providers have been doing this for over a decade. This is the norm for better or worse.
Apple just now implemented this in the most privacy conscious way possible and it seems like they’re taking the brunt of the PR damage.
If anything this tells me Apple are trying to do the right thing whereas the other companies didn’t even bother trying.
That’s why it’s confusing when people are swearing off Apple and switching to . . . Google? Samsung?
Mega.io does CSAM compliance in a more privacy-friendly way: they scan their cloud day and night, but it is end-to-end encrypted, so the scan always returns exactly 0 results.
At this point I feel unsurprisingly betrayed. As others have pointed out, there is no going back. With little to no visibility into what the phone, the iPad, the Mac mini are doing, are they safe? I hear there's no device-level scanning; has that been confirmed by someone other than Apple?
What are you guys also doing? Is there a website that helpfully details how to get yourself out of the eco system?
I can tell you what I have done so far, but it's very much work in progress.
- Removed all photos from iCloud and turned off all iCloud stuff.
- Backed up phones to my Mac (encrypted); will transfer to Nextcloud and NAS.
- Built a NAS and now storing photos there; looking at running PhotoPrism, but haven't made that work yet.
- Transferred useful cloud stuff to Hetzner's hosted Nextcloud and enabled E2EE on certain folders.
- I use NordVPN but I always have done.
I realise it's a lot of effort to go to so I can privately store pictures of me eating an ice cream in Rome ... but as we all know, that's not really the point.
> With little to no visibility on what the phone, the ipad, the mac-mini is doing, are they safe?
No. You can't trust proprietary software. Even if it's not doing anything bad now, that could change tomorrow.
All this Apple polish is a mirage that distracts us from the truly important stuff. Real computing is built on top of open software. Software that works for us instead of them. In the end that's the only thing we can actually put some trust in.
Ya, but now you're flagged as someone who has something to hide, possibly trying to clear off devices before the change was implemented.
Hmm, maybe that's their game. You don't need to scan, just see who deleted all their photos over the last few days. A higher-than-random hit rate, I'd guess.
I'd rather be flagged as someone who gives a shit about privacy and autonomy than passively accept the further degradation of those things. If we all treat this as a tragedy-of-the-commons situation and impotently accept whatever these companies do in order not to raise a fuss and draw attention to ourselves, then the future is inevitable and the world will go further down this path than it already has. The presumption of guilt is fucking ridiculous and I won't be a party to it, nor will I run and hide.
And what about all the people like me: imminently following the same path but never having used iCloud photos?
You're suggesting they are faking one violation of the 4th amendment for a much more egregious violation of the 4th amendment. They (AAPL) have nothing to gain by doing this.
Just sayin', it's not a _completely_ unimaginable situation. If the FBI was somehow able to access those delete logs..
Anecdotal: One year my brother was coming back from spring break and there was a sign that said "drug checkpoint 1 mile." right before an exit. Everyone that got off at the exit got searched.
The majority of these discussions are starting in the wrong place, I think. Instead of asking whether this specific feature is good or bad, we should start by asking whether Apple should be doing anything to try to stop the sexual exploitation of children.
If the answer is "no, it's not Apple's job." then of course these plans are terrible.
If the answer is "yes, Apple should try to use its immense power and wealth to try to stop a terrible evil", then the obvious follow-up question is "How?".
They could do what every other company is doing and just look at all of your data, but Apple didn't want to do that. So they came up with a system that they felt was a good compromise between privacy and trying to stop the spread of CSAM. If you want to quibble about the implementation details, that's fine. But if your position is that they shouldn't be looking at your data at all, then you have to go back to the first question and say "Apple should not be trying to stop the exploitation of children."
Now you might say, "Yes, but 'what about the children?' has been used to justify terrible things." Granted. But... "what about the children?" is a very important question to ask and shouldn't be used as a cliche to dismiss Apple's attempt to make the world a better place in this area.
Apple isn't law enforcement. Of course it's not their place to be stopping the sexual exploitation of children. They're selling storage devices. Has anyone ever suggested that closet and basement makers should install covert scanners to make sure no one is hiding kidnapping victims, that automakers should put scanners in the trunks to make sure nobody is hauling off murder victims to bury, that makers of filing cabinets should install scanners to ensure you're not storing photos of known exploited children in your filing cabinet? Why is this topic treated so differently when the devices become networked? We could just as easily tap every phone line in the world and zoom in on some region where a child was just reported kidnapped and flag all people who say their name over a phone line, but we don't do that, and no one in their right mind would find that acceptable. The police need probable cause and a warrant before they can install a listening device on your phone line. They don't pre-surveil everyone to flag potential criminals before they become known by other means. I don't understand why software services are held to such a lower privacy standard than every other form of storage and communication we have.
And to specifically answer the question of what they can do: when the police come with a warrant because they have good reason to suspect some customer X is storing and/or transmitting child porn, then transmit a covert over-the-air update to that customer's device and that customer's device only, scanning photos on-device before they get encrypted for iCloud backup. Don't push the surveillance update to every single customer on the planet, before they're under any suspicion and that suspicion has been certified by a judge, just in case.
>But if your position is that they shouldn't be looking at your data at all, then you have to go back to the first question and say "Apple should not be trying to stop the exploitation of children."
You're getting pretty far into "affirming the consequent" territory here.
I don't think donating to a private yet government funded, nontransparent and unaccountable organization is a good idea. In fact, I don't think that having such an organization is a good idea at all.
> But... "what about the children?" is a very important question to ask and shouldn't be used as a cliche to dismiss Apple's attempt to make the world a better place in this area.
I've never seen a single person ask this question in good faith. It never stops. Every single day we have a new crisis involving children and the solution is surveillance, more government control, less freedom, whatever.
Nothing but emotional manipulation. None of this will ever stop crime. None of this will ever undo the suffering already inflicted. All of this is mainly justified by the debatable notion that demand for CSAM causes child molestation, the actual crime authorities should be trying to prevent. Instead of doing that they use children in order to establish the surveillance tools needed for government oppression. Using children as a political weapon should be considered child abuse as well.
>They could do whatever ever other company is doing and just looking at all of your data, but Apple didn't want to do that.
Note that Apple doesn't do end-to-end encryption, there's little evidence they are even thinking about it, and there are other issues that would stand in their way if they had tried to. Under the current system, we get client-side scanning and server-side scanning whenever Apple feels like it. Right now, there's no privacy benefit other than the made-up one some Apple fans invented from thin air.
>So they came up with a system that they felt was a good compromise between privacy and trying to stop the spread of CSAM.
It's a terrible 'compromise'. In return for a bad precedent, they get a system that catches way less cases than the typical server-based ones. Unless Apple expands the system further to catch these cases too, in which point one should ask how much privacy is really kept.
Cloud scanning would be better, because there’s a clear way to opt out - don’t back up to the cloud. Scanning in device takes a hard technical restriction where they can’t scan files they don’t have access to and turns it into a policy restriction where they say they won’t scan files that are not being uploaded to iCloud.
How long do you expect that policy to realistically hold when some three-letter-agency comes knocking?
If they don‘t upload there is no information anywhere, but on your device. The system very much depends on the cloud, because the keys of known photos aren‘t stored locally.
Photos are already scanned locally for the keyword search and faces, and that could easily be used to search for anything, but so far no one had an issue with that.
If you say, 'Yes, apple should be doing something, but not this." I'd love to hear your ideas about how Apple can do more to stop CSAM while better protecting your privacy.
Detecting CSAM on a phone is not stopping exploitation of children. Just as detecting images of violence isn't stopping violence. Violence and abuse have existed before phones and cloud storage, and will continue to be ever after.
And Apple cannot stop exploitation of children. Anything they try is trivial to circumvent, a massive breach of privacy, or both.
I keep seeing China getting blamed for this move that was only targeting US phones. While it's definitely possible China would have tried to also force implementation in China, if a government deserves blame for this move it's the US government.
I can't understand why so many people see a US company try to spy on US citizens and say China must be at fault.
> I can't understand why so many people see a US company try to spy on US citizens and say China must be at fault.
This is extremely common - i.e. blame the immigrants stealing jobs, blame the muslims for eroding our religious institutions, blame literally anyone but "us". That's because it's so much easier to blame someone else for our own societal problems.
Australia is pushing for client-side scanning too:
"We know there are a number of solutions that would ensure illegal activity online can be addressed, without weakening encryption and still allowing lawful access to information needed in serious criminal investigations. Solutions include: using certain types of encryption that allow proactive tools to function, implementing proactive detection tools at transmission, rather than on receipt, moving AI and proactive technical tools to the device level."[1]
The main problem with client-side scanning is that to have any effectiveness it requires a situation where "Trusted Computing" becomes a legal requirement for sale and use of computing or communications devices, where only registered operating systems can be installed and software can only be installed from registered application stores. And in terms of the Internet, registered operating systems would have to only permit access to registered web services, and registered web services would find themselves quickly unregistered if they allowed end users to execute their own scripts within a browser session. It'd require the end of "General Purpose Computing"[2] e.g. along the lines of regulations such as [3].
Criminals are being pushed increasingly down the Phantom Secure [4] path (developing their own technology that doesn't implement backdoors, client-side scanning, etc), and in response, public policy is increasingly seeking to outlaw[5] "General Purpose Computing" and the current decentralised model of communications. It'll take a long time, but perhaps over the course of the next 20 years we'll see the end of "General Purpose Computing"?
Previously there was no precedent of an on-device content scanning protocol proposed by the top phone manufacturer themselves, and even deemed privacy-preserving and secure by them.
Now please individually hash every word of every message on the phone. Then give us all illegal messages that contain an _exact_ hash match with the dangerous word "Tiananmen". Government abuse is impossible, though: our team of Apple employees here at iCloud-China will manually review them before passing the list to the firing squads. Thanks!
It will likely be a fuzzy match on thought crimes. But every citizen will get their budget of 30 or so "security" vouchers, so we can all vent a bit once in a while.
Oh, and they will also double-check it against a "totally independent" blacklist on the server before running it by a party person last.
This approach just guarantees one thing: all the privacy talk from them is just marketing. You cannot be serious about privacy and do this at the same time...
This is so frustrating, I generally expected the Hacker News community to observe a more sophisticated take on this topic.
Let's get it straight: the other Big Tech companies have done this CSAM scanning for many, many years. Their users have been reported. Arrests have been made. Microsoft had this sorted in 2009 with PhotoDNA, and made it available to everyone else in 2014[1]. I doubt very much this has stopped any of you from working at Microsoft or the FAANGs when given the chance.
The ONLY distinctions here are that Apple has opted to do the mundane hashing part on the device prior to upload, has been up front about the plan, has taken comments from the public, and is saying that it will work out a more satisfying scheme.
Apple even appears to be responsive to the shady claims that a 'reverse-engineered (old) version of NeuralHash' might have resulted in some novel collisions, and has spent the money to improve their current implementation.
The other big tech concerns did not do these things; they simply imposed it all and started informing the police.
As far as I can see, the only real misstep Apple has taken is being explicit about their plans and assuming people could understand the scope of the subject at hand. Instead they've stepped into a bear-trap of outraged hysteria, a moral panic from people who should know better.
The take away lesson here is "Do not announce stuff like this, the general public will flip out if you try to keep them informed". Is that anyone's goal here?
2) Everyone else is missing something. That includes organizations like the EFF.
In terms of missing nuance: what Facebook is doing is absolutely and completely different from what Apple is doing. You should take time to understand why Apple's approach is so utterly repulsive, while I'm 100% in support of Facebook's.
There are distinctions other than your "ONLY distinctions." You should take time to understand what those are, and why they're important. As a hint, those have to do with where users might or might not have expectations of privacy, with due process, and with who owns what.
Yours is a vague and disrespectful response that does not really contribute to the conversation.
You suggest that I'm uninformed, yet do not specify what I've missed or provide any sources. You've spoken down to me and offered vague generalities as to why your hysterical repulsion is justified but can't actually articulate the issues - instead only appealing to the EFF and offering a paltry 'hint' as if your perspective were somehow self evident.
My position stands. However yours needs some clarification. You seem to suggest that expectations of privacy, due process, and 'who owns what' (you could just say 'ownership' here) somehow doesn't apply to Facebook? Or that they have somehow satisfied those concerns? Facebook? That point is plainly absurd, they are among the most abusive in this regard - as supported by hundreds of articles on the much vaunted EFF's site[1].
If Apple's approach, which I've characterized as 'performing the mundane hashing on-device' is really and truly repulsive, please do elaborate. I'm sincere in saying that I want to know. And if you can be bothered, please also clear up the matter of how the other Big Tech organizations have managed to accomplish this same law enforcement obligation to your satisfaction.
"the Hacker News community" is not "sophisticated" "on this topic" and is engaged in "a bear-trap of outraged hysteria, a moral panic from people who should know better," while I have "hysterical repulsion."
I responded to this position.
Since you're now asking for a clarification: If I am running a social media platform for the sharing of photos, I am responsible for the content on that platform. It is my platform. I'd better have scanning for not just CSAM, but hate speech, misinformation, harassment, criminal activity, and a slew of other things. If Facebook becomes a platform for the sharing of child pornography, or the collapse of our democracy, or polarization, Facebook has done something deeply unethical, and should be held accountable, whether through the courts of law or the courts of public opinion. On the other hand, users have little expectation of privacy on Facebook.
In contrast, if I paid for a phone or a computer, it is MY private device. I bought it, and I should have an expectation of privacy there. I should be able to trust that my device isn't out to get me. Even a space like AWS or GCE, where I am paying for a machine, I should have some reasonable expectation of privacy.
It's the difference between running a shopping mall (or an exclusive club) and selling a private home. If I'm running a business, I'm responsible for what happens on my turf. If I'm selling you a product, you're responsible for what you do with it. Your home's developer has no right to install spy cameras in your home, even if they will only call the police if they're 100% sure you're a racist terrorist raping underage [taboo] [taboo] [taboo] ....
There are plenty of other distinctions, such as around evidence and due process, but I don't want this turning into an essay. You can use a search engine for yourself; plenty has been written on this, and insulting people isn't a good lead-in to asking them to save you that effort.
And the EFF is right that Facebook does plenty of nasty things. This just doesn't happen to be one of them.
If scanning is done in the cloud, I can disable cloud uploads and reasonably trust no filter list will get me reported. If it’s done on-device, I have to rely on Apple to uphold their policy of only scanning material uploaded to iCloud. That’s a big difference.
>If scanning is done in the cloud, I can disable cloud uploads and reasonably trust no filter list will get me reported
Or that it uploads local copies to be scanned in the cloud and doesn't save them to your account, but may trigger an LE response... Though I suppose that would cause concern about the massive network traffic, whereas no one would notice Apple scanning local photos (without some sort of advanced disassembly to see how much the on-board ML chip is used).
I’m thinking more about what legal defence a phone manufacturer has against inevitable LE requests for access. Not having the technical capability is much stronger than not being willing, even though it’s obvious that a targeted “software update” is within the capabilities of Google and Apple alike.
Apple painted itself into a corner by refusing to do iCloud scanning in order to maintain its differentiated stance versus its competitors, so the only way it COULD comply with government pressure was to do the scanning at the device level. Except that turns out to be far more intrusive and unpalatable, because the surveillance takes place on a device the user paid for and therefore expects a higher level of privacy on than for what is stored in the cloud.
Apple is stuck between a rock and a hard place. They don't want to be like Google or MS, but their solution is worse from the public's perspective. The optics aren't looking great at this point, because they are also dragging Google and MS through the mud by effectively saying, "Well, they've been doing it too!"
The main argument here is that we are afraid governments can demand, through the courts, that Apple add hashes other than the ones they start with.
What is preventing those same governments from going through the same legal process to demand that Apple implement such a system from scratch?
Elections. If people start demanding maximum security, it can become the popular opinion. That can definitely happen, but at the moment putting listening devices on everyone is unpopular.
It's useful to remember that organisations are made of people with individual aspirations. Some would really like listening devices on everyone so that they can do their jobs better (usually the intelligence people, the police, etc.), others will want it for less "acceptable" reasons, but at the end of the day the careers of the people who can allow or disallow it are determined by the general population in a very public manner.
Of course, you can have coups, you can have wars, etc. That's why people should watch out to keep democracy intact, go vote, and maintain a healthy level of interest in the political process.
The problem: at least 90% of the population doesn't care about privacy, and scandals (see Facebook etc.) have no influence on this whatsoever, so awareness campaigns are not likely to work either.
Generally the US government can't compel work to be done through subpoenas. Adding a hash is probably not considered work, but writing a bunch of code is. Typically the government can only compel parties to divulge information - such as "use your existing systems to tell us which users have this image on their phone".
Forcing citizens to perform work would be reserved for something that's part of a punishment for a crime conviction, such as prison labor. This would be "design, implement, test, release a brand new system to tell us which users have this image on their phone."
There's some academic debate over whether the All Writs Act can compel work/action/speech, but that's the only avenue I'm aware of, and it's on pretty shaky ground for something like this.
If the system doesn't exist, Apple could fight an All Writs Act-type request. If the system does exist, they probably can't effectively fight a FISA order/subpoena to add some hashes.
If they had never invented it, they might have been able to tell China "it's not technically feasible" and potentially convince Chinese leadership that it's even true. But at this point that game is up, and China could definitely compel Apple to finalize and release this.
In this kind of situation I'd say the more realistic "risk" isn't subpoenas or court orders or whatever, at least not standing alone. Instead you just have the government mandate that vendors provide this kind of capability through positive law. Then later legal processes like subpoenas or other orders simply use the capabilities you mandated.
Think basically CALEA, which required telecom providers to change their systems to better enable law enforcement wiretaps, as existing methods weren't quite keeping up with telecom digitization. You don't need to use the wiretap orders themselves to make AT&T build in your access points.
Content matching by government mandate for various things feels kind of inevitable... It's already out there in terms of plans for requiring it in the copyright sphere, and obviously for CSAM.
Anyway, I don't know that whether Apple actually deploys this system makes that much of a difference in this sense: the idea is already out there. Which is basically what you said.
Yeah, it's legislation that Apple is trying to get ahead of here. Recall last year's conversation about government-mandated backdoors, where Graham threatened Apple: either do something, or legislation will [1].
There's also the emails that came out where Apple thinks they have a CSAM problem in iCloud photos and they don't want them there [2].
I think a lot of people are missing the larger playing field when discussing this issue. Apple is going to do something either by choice or by force.
Super interesting. But what about the case where a hypothetical government develops the system themselves and requires Apple to add it to their product? Of course, Apple could refuse and possibly be blocked from operating in that market. I just find the whole situation fascinating.
That's a very wide selection of services you're offering your customers, and they're very seamlessly integrated, and sealed off from open development (looking at you, PWAs in Safari). It'd be a shame if they needed to be broken up or opened up with new legislation.
Nobody has brought up this argument, but isn't it odd that, despite all the self-imposed backdoors in China, Apple is targeting the US market for the pedophilia stuff? Why is that?
No such thing for Android.
Must be pressure from the government, I suppose. Or did they somehow think this would pass the customers' sniff test and could be sold as an additional safety measure?
The real why was never disclosed, as far as I am aware.
US legislation states that cloud storage providers are liable for storing/transmitting CSAM via their services. They are legally bound to identify and report it.
I've been pondering the idea of an offline-first experience on a laptop, for my own mental health, for a while now. I feel like computing was just so much better without the internet as a distraction. Who's in?
I'm already there! I loaded up a distraction-free work laptop with Ubuntu, i3, 20 gigs of ambient Spotify playlists, and all the documentation its poor little SSD could hold. I seldom connect that machine to the internet; when I do, it's only to clone a repo or install a package.
Seems like a reasonable response. Apple has now revealed that they were capable of failing to consider the optics on this one which is worrying regardless of what happens next. It's really unbelievable that we now live in a world like this. Yet another reason that I feel like I stumbled into a dystopian novel these past six years or so. Never imagined I would live to see something like it.
Me too. I am moving into the forest to be with animals and insects, because no matter where I go, humans do naughty things and I will not stand for it.
There are still solutions, even though the situation is dire. The problem is, those solutions should not eat up too much of one's time, nor the naughty actors too much of one's energy, preferably.
Of course, one may finance malicious endeavours and hence encourage them; but since this breaks the most basic rule of "do not encourage undesirable behaviour", such a person is being shortsighted, and in fact antisocial.
Is it possible that the poster defends antisocial behaviour while painting the opposite as antisocial?
One has marketed itself as the privacy champion; the other has never pretended. That is the main difference.
I'm still sticking with the iPhone because of that, since the feature was not going to be rolled out where I live anyway, but switching back to Google would not be a good option either. I'm currently looking at the Pro 1X [1] phone, though.
But OP's comment isn't specific. It's quite general.
A specific summary would at least include details about how AOSP is open source and auditable, so if you de-Google a google phone (which is something that is actually possible, unlike Apple phones) you can be more confident that it isn't spying on you. Other relevant details are that Apple phones are entirely closed. No one can be confident of anything that this black box does. Everyone's lowered their guard because Apple says "no no we're on your side". This move shows Apple actually is not on your side.
Little details like that, damned as they may be, are what make this an upsetting topic.
Their point is general because the discourse has been run through analogies instead of facts. They don’t have specific arguments to respond to. Any time you get to the facts, they’re weak. iOS may be closed source, but there are avenues for introspection: jailbreak, security researcher device program, disassemblers, proxies, etc. You’d be hard pressed to design a scanning system which implements behaviors such as “scan your whole phone” without detection, versus alternative server side implementations where the purpose and processing is totally opaque.
>You’d be hard pressed to design a scanning system which implements behaviors such as “scan your whole phone” without detection
On this point I greatly disagree. Apple has control of the entire stack and shares nothing about it. If you jailbreak an iPhone you still don't know what happens when you send bits of data into any block of an SoC or modem. There are computers inside Apple devices that have no avenue for being audited.
Apple having total control of their devices is deeply ingrained in their technical identity. It only works if you trust Apple. Apple has shown that trusting them is a bad idea. That's why this is news.
The point you have to show is why trusting them has been demonstrated to be a bad idea, but instead you take that as a given.
The rest of your argument is very ambiguous. Shares nothing about their stack? The entire controversy began with an extremely transparent announcement about future, planned behavior.
Jailbreaking gives you visibility into the application processor, and the secure element must be trusted on any modern smartphone. If you compare to devices without one, you are comparing apples to oranges. There are plenty of closed source or hard to audit components of every system. Do you trust your baseband?
Like I said, the argument has been run through analogies and the facts are quite contingent.
Pragmatically, though, AOSP isn't relevant to the conversation. OP was comparing the popular smartphone ecosystems that most people are using, and that basically means the Apple phones and the phones which are running with Google's services.
Very few people are actually running AOSP -- they're running Android builds from Samsung or Google which add a lot of binary blobs into the system.
Telling people "you could have a more secure phone if you strip out most of the systems that the apps you use your phone for rely on" isn't very helpful.
I feel that scanning data that is present on a server the company owns is very different to scanning data that is present on a device that they have sold to a user.
Not even Google had the audacity to scan files stored on my physical phone. They can do whatever they want with their own computers, just leave my device alone.
It most definitely isn't "in the same way". Not even "privacy slumlord supreme" Google dared scan files on your device.
Yes, they scan stuff in the cloud (we all know this) as does Microsoft OneDrive / Outlook, Facebook, Dropbox, etc. but none of them dared to add code on your device to look at your private data.
Apple has been running "the phone that respects your privacy" ads in prime-time slots for three years now. A lot of people have chosen Apple devices based on this promise.
You know what you get when you use Google accounts.
What people have so far missed is the nugget buried way down in the small print.
Apple are doing this because they have a child porn problem.
The biggest surprise of this whole brouhaha is that _Facebook_, of all companies, comes out looking better than Apple.
WhatsApp essentially had the same issue: people using it to share nasty stuff. Despite similar encryption constraints, WhatsApp managed to report hundreds of thousands of instances of nasty stuff without putting in client-side filters.
> Apple are doing this because they have a child porn problem.
They're also under relentless pressure. From governments, who demand backdoors and full access. From hackers, who are pwning the entire stack.
Like the EFF, I oppose the image scanning.
But I'd like someone, anyone propose what Apple (Google, Facebook, ...) is supposed to do?
My hunch is that Apple is trying to thread the needle: do the absolute bare minimum to fend off the inevitable governmental policy overreaches, and find some technical solution to mollify all the would-be tyrants.
If Apple's attempts to find a balance don't work, the default position is backdoors, not Matrix-style end-to-end full encryption.
I'm not saying EFF should stfu.
EFF and others should be savvy enough to see the meta game and advocate their own policy (platform) accordingly.
> But I'd like someone, anyone propose what Apple (Google, Facebook, ...) is supposed to do?
I think what Apple should do is follow Facebook's and Google's lead: the "report this message" button that _everyone_ else has.
Obviously Google doesn't encrypt users' stuff, so they do server-side scanning. WhatsApp supposedly relies entirely on reported messages.
Given that iCloud accounts are tied directly to ID, it doesn't seem all that difficult for Apple to get their report rate up just by letting users report stuff.
> EFF and others should be savvy enough to see the meta game and advocate their own policy (platform) accordingly.
Agreed, the "EWWW NO NEVER EVER EVER" approach is counterproductive; it just allows people to paint them as people who hide kiddyfiddlers. The wider public supports Apple's drive a lot more than I would like.
But Apple does need to do something more; they apparently reported fewer than a thousand people for having illegal images.
It's not a problem that can be realistically solved with technology. The problem existed before electricity had been discovered[1][2] and continues to exist[3] in a significant way regardless of the presence of computers and the Internet.
I'd guess that the intent behind client-side scanning is to force child abusers away from mainstream technology towards niche technology[4][5][6], increasing the signal-to-noise ratio for finding and targeting child abusers. At present, there is nothing inherently suspicious about someone using WhatsApp, which means child abusers can hide amongst millions of other users. If CSAM is instead pushed onto Tor and a little-heard-of encrypted messaging app with servers located in eastern Europe, the theory is that it becomes easier to target just these niche users. If they're using Tor, they must have something to hide and therefore further investigation is warranted...?
A key problem with this approach is that, as has happened in other criminal circles, criminals find increasingly clever ways to remain 'boring' and not inviting of further scrutiny, or to otherwise remain hidden in the masses. For example, 9/11 hijackers communicated using boring methods, without encryption, using a coded language to make messages appear innocuous[7]. Even if general purpose computing were prohibited, would it be realistic to prevent users from using spreadsheet formulas to implement encryption and display of images[8][9][10], without impacting other users? Another key problem is that innocent people will likely be targeted and harassed by association with niche technologies, which also wastes resources that could be used to limit child abuse via more effective means.
It would be possible to implement something that looks like North Korea's Kwangmyong[11] and largely solve the CSAM-sharing-via-computers problem, but at that point you've essentially banned computing altogether. This is actually a decision some countries have made[12] (not for CSAM reasons though) but good luck convincing the modern world that it'd be a good idea to eradicate the last 40 years of technological advancement whilst other countries continue to leap ahead technologically with the help of open and unrestricted access to general purpose computing.
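On the point above about implementing encryption with spreadsheet formulas: the primitives really are that basic. A repeating-key XOR cipher (a hypothetical toy, trivially breakable) needs only indexing and bitwise arithmetic, which is why banning encryption without banning general-purpose computation is impractical:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Repeating-key XOR: applying the same function twice with the same
    key round-trips the data. Cryptographically weak; the point is only
    that 'encryption' can be built from trivial arithmetic."""
    if not key:
        raise ValueError("key must be non-empty")
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))


# Round trip: the ciphertext looks like noise, decrypting recovers the text.
secret = xor_cipher(b"meet at noon", b"k3y")
plain = xor_cipher(secret, b"k3y")  # b"meet at noon"
```

The same loop fits in one spreadsheet formula per cell, so any platform with arithmetic is an "encryption platform" in this weak sense.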
I am not, and I don't think anyone here seriously thinks that kiddie fiddling is new.
The issue is that Apple's platform is being used to store and share this abuse. Not by some clever mastermind, but by tech-illiterate, garden-variety nonces. As I have alluded to before, WhatsApp reported hundreds of thousands of images in 2020. Apple reported 256.[1]
This is a legal and PR risk. Storing these images, and/or allowing them to be distributed via a system that you own, carries stiff legal penalties. They have to do something about it. I don't think client-side scanning is the best way.
The crucial point is this: you won't be able to find all of the nonces, but you will be able to find a good number. The idea is that you remove the ability of your standard criminal to use your platform, or at least give them the feeling that they might not be able to operate with impunity.
There will always be sophisticated criminals, always. But if we can keep them to a manageable number, then that will be an improvement.
I am adding my comment for what it's worth. This reckless grab for power is immoral and contradicts Apple's core values. They must stop all attempts at surveilling their users or lose their user-base altogether. Apple, stop this evil plan!
Just exactly why is Apple doing this? I doubt it was due to some huge upsurge in demand from its customers.
The arguments against are easy enough. This kind of tech is not ethically different from adding an all-seeing AI to your home that looks for illegal behavior. If something sketchy occurs, it calls the cops.
'It's for the children' is mockable, but isn't an unreasonable talking point.
...but what is the real driver and who is behind it? Merely guessing the cause is not particularly useful.
As an ill-conceived means of holding both privacy and oversight at the same time?
The main sticking point (as you know) isn’t what it looks like now, but how small a leap is required before the privacy is stripped away and all that’s left is oversight.
Oh well, even if Apple backs off they'll just try something else.
Even though surveillance has always been a thing, the ability to both collect and parse the data is unparalleled. No need for Walsingham's secret police, or at least not so much anymore. The loose coalition that actually runs matters will do whatever it takes.
I'd say that a lot of what drives this at the mass level is the need for revenge; that's a lot more fun than being for something. 9/11 occurs, let's get the bastids. The forces of the State are now arrayed against the evil TrumpWhiteSupremacistGuysWithFlags, regardless of their lack of actual presence on the street. Your enemies need to be ground to dust.
I doubt that boycotting Apple will make a lick of difference. In a time of turmoil your optimal solution is to cut dependencies on the system. It's a battle that should be watched from a distance and the soul-rot that comes through computer screens and cell phones is best kept to a minimum.
We won't ever be free until we can manufacture computers at home just like we can write our own software at home. Right now, manufacturing computer hardware requires billions of dollars. Governments will target these centralized operations. We must decentralize hardware manufacturing.
LTE is quite standardized, although VoLTE is far from it. IMEIs can be spoofed. VoIP numbers are a thing (e.g. voip.ms, or google voice if you can go without emergency services).
I forget if IMEI spoofing is a felony. So much policy on the books it would take me months to know for sure.
My next phone was going to be an iPhone because I'm tired of all the Google tracking (and of Android itself). But now I'm holding off on my purchase until they release it (or, hopefully, abort it).
I'm happy to pay more for a robust and privacy protecting device. I won't pay more for the same bullshit.
There are many threads on HN about contemporary pocket-sized personal assistants with telephony that don't run Google Android or iOS. Some are at an early stage, but some may be usable. It is something most of us will have to move to sooner or later.
Checks and balances.