The problem I have with this approach is that it introduces on-device scanning for images. All that is needed to adapt it to scan for a different kind of image is to connect it to a different database, say, Winnie the Pooh memes featuring the CCP chairman, and boom, jailed dissenters. And the ability to scan all images is but a minor firmware update away.
Server-side scanning makes it clear that the company running the servers has access to your photos. So you can either find a form of encrypted storage, or be okay with that, depending on your privacy stance. Having a device with the ability to scan your photos removes that choice. It is a privacy invasion.
The SWAT team is knocking on your door after you've uploaded multiple instances of child porn to iCloud and those instances have been verified to actually be child porn by a human. That sounds fine to me.
>The SWAT team is knocking on your door after you've uploaded multiple instances of child porn
...or whatever gets sneaked into a database that nobody can take a look at, and whose maintainers have zero obligations to you.
>and those instances have been verified to actually be child porn by a human.
Yeah, SWAT teams doing their homework before shooting people up is precisely why SWATting is a completely innocent thing to do and never puts anyone in danger.
And that also does nothing in the case of a "neural" (aka black-box) hash collision, where the algorithm mistakes a normal picture for CP. The "human" you have in your dreams doesn't have access to the actual file you have on your device, right? (At least, that's the sales pitch for on-device privacy.) They won't know until they get you.
Personally, I would hope that HN people know better than to blindly trust an opaque algorithm running off an opaque database to never make a mistake in where it sends SWAT teams... but here we are.
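For anyone who hasn't looked at how these image hashes work: unlike cryptographic hashes, perceptual hashes are deliberately built so that similar images map to nearby hash values, which is exactly what makes collisions between unrelated images possible. NeuralHash itself isn't public, but the classic average hash shows the shape of the idea. A toy Python sketch (illustrative only, not Apple's algorithm; assumes Pillow is installed):

    # Toy perceptual hash ("aHash"): shrink, grayscale, threshold on the mean.
    # Visually different images can and do end up with identical 64-bit hashes.
    from PIL import Image

    def average_hash(path, hash_size=8):
        img = Image.open(path).convert("L").resize((hash_size, hash_size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        # One bit per pixel: 1 if brighter than the mean, else 0.
        bits = "".join("1" if p > mean else "0" for p in pixels)
        return int(bits, 2)

    def hamming(a, b):
        return bin(a ^ b).count("1")

    # Matching is usually "hamming(a, b) <= threshold", not strict equality,
    # which widens the collision surface even further.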
But Apple only plans to scan photos that are synced with iCloud, don't they? So you could just switch to an E2E encrypted alternative and drop iCloud completely.
I have to remind again that iOS is a black-box, closed-source system. All this speculation applied just as well before they added anything; they might have had this code ready for years already. All we have is what they say. It is already trivial to scan everything on your device and send the metadata off, a few lines of code. When they publicly announce scanning everything on the phone without an opt-out, then we should be worried. Once again, there is no way of telling what they are doing already.
This is not the first time they have run into this: due to the App Store being a walled garden, they are the sole gatekeeper who decides what goes in and what does not. Keeps the users safe and everything. Perfect, right?
Well, until protesters want to use an app in the store to coordinate their protests and the government wants Apple to reject it, so the protesters can't use it:
With users not being able to install the app themselves, Apple is a single point of failure with no plausible deniability, unlike Android (or any sane OS in general). And they did reject the app.
And just a few months before this happened I attended a talk about free software from the FSF, and they mentioned exactly this about iOS: the gatekeeper is a single point of failure that a repressive regime can apply pressure to. Turned out not to be far-fetched at all...
iOS has been running complex neural nets on all your images for years now. They power all of its social features and search.
Apple have always had the capability, and have been advertising it as a central selling point of new versions of iOS for years. That ship sailed a long time ago.
What changed is that Apple just signaled to the governments of the world what it's willing to do toward abusing user privacy, and exactly how it can work. And hey, Apple, if you're willing to do that, why not just go a bit further and do this, because we're asking you to, or else (and now we know you're obviously even more morally flexible than you used to present yourself as).
Before that, Apple put up a front that they would fight for user privacy at every turn. They pitched that over and over and over again as a corporate ethos, a selling point. That was the facade at least, even if one is cynical and wants to pretend it was a lie. Now they're not even presenting the facade, which will open the floodgates dramatically. They went from a supposedly resisting agent to a morally gray and willing agent at a minimum. Apple dumped an enormous vat of blood into the shark-infested waters.
I think I disagree. The current move was an improvement for user privacy compared to what it used to be. The abuse is only speculation, not something that has actually been done.
I think it's more than that. Images sent with iMessage are stored in iCloud, even if the device is not necessarily uploading them.
How else could they show the warnings they describe in their announcement? [1]
And we have seen these systems have their scope/use case changed in the past. [2]
To the point in the other discussion [3]: OP stated that Apple's plans to scan and then upload suspected images are illegal. But I would think that they are only scanning images, client side, that users themselves are attempting to upload (either through attachments, automatic iCloud backups, etc.), which would put Apple in the clear. In this case that would be iCloud images, or those that piggyback on iCloud services like iMessage.
Stop repeating this lie. iMessage photos are not part of this. This is written in the technical document. This covers only photos from iCloud Photos.
It's been debunked, just read this article:
https://daringfireball.net/2021/08/apple_child_safety_initia...
And of course the scope could change tomorrow. Just like the scope of Android could change tomorrow. They could even have changed the scope without doing an announcement!
My comment history clearly shows an effort to parse through the information and seek clarity.
And it's worth noting that iMessage data can be and is backed up to iCloud, and not just via full backups. For many people with multiple devices this is specifically useful.
>And of course the scope could change tomorrow. Just like the scope of Android could change tomorrow. They could even have changed the scope without doing an announcement!
I am pointing out that there is a specific history of this already on record and documented. And their technical documents specifically state their intentions.
"This program is ambitious, and protecting children is an important responsibility. Our efforts will evolve and expand over time"
I don't understand why you find such an observation so offensive. It's pretty clear Apple sees this as a first step toward what will eventually be a much larger program.
How do you know that? It is the black-box paradox: all we have is what they say. They might report CSAM hashes to law enforcement. Any file can be a threat, hence images are included in the scans. Defender also uploads whole files unencrypted if you don't opt out.
...if a human actually gets the file, figures out what type it is, and examines it for themselves, they'd be obligated to report it. With the number of Win10 devices in the world, how big would their security team have to be to hand-review every automatically submitted "suspicious" sample? (For that matter, why would a vanilla JPG get flagged as "suspicious" in the first place?)
> All that is needed to adapt it to scan for a different kind of image is to connect it to a different database, say, Winnie the Pooh memes featuring the CCP chairman, and boom, jailed dissenters.
The CCP have already thoroughly demonstrated that they don't need manufacturers' consent to build these systems.
Look at the Uyghur population in China. They already have their phones scanned on-device for dissident material, not because anyone coerced the manufacturers, but because the population was forced to install a surveillance app, and it was then made illegal to use a phone without it.
Being caught at a checkpoint without the app installed and working is grounds for immediate arrest and re-education.
> The CCP have already thoroughly demonstrated that they don't need manufacturers' consent to build these systems.
It was obviously merely an example for illustration purposes by the parent. To get a point across it's often very helpful to use a stark, clear example.
Few governments will ever have the extraordinary capabilities and resources of the CCP in China.
For the other ~190 governments that will never reach that level of capability, what they might have now is a globe-spanning, billion-device corporation like Apple that is more willing to assist them.
Do mandatory reporter laws work like that? I was under the impression that you had to report something if you saw it, but you had no obligation to be actively scanning or to compromise encryption to do so. For example, I don’t think S3 does any active scanning and you can definitely shove any encrypted blob you want onto their servers with no obligation to give them a decryption key.
IMO this appears to be Apple either a) trying to preempt future criticism or regulation or b) responding to some behind-closed-doors pressure/bargaining with US authorities.
I think you have to be aware of what is happening before you can say that nothing criminal happens. This is where scanning steps in. You can't turn a blind eye.
There is a big jump from reporting criminal activity if you happen to see it, to actively searching it out. It is the jump from police arresting you if they see you smoking a joint to police searching your rooms to make sure you don’t have any cannabis in there.
I read the law and you are correct: it explicitly says that a provider is not required to actively seek out evidence of CSAM.
However, they might be required to comply with the demands of NCMEC if NCMEC asks them to stop redistribution of certain visual depictions by providing hashes. This is where scanning steps in.
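To make concrete what "stop redistribution by matching provided hashes" implies technically: the provider has to compute a hash over the plaintext, which is exactly where this duty collides with E2E encryption. A minimal sketch, using exact SHA-256 matching for simplicity (real systems use perceptual hashes like PhotoDNA, and the digest below is a made-up value):

    import hashlib

    # Hypothetical digests shared by NCMEC (illustrative value only).
    BLOCKED_SHA256 = {
        "deadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeef",
    }

    def may_store(file_bytes: bytes) -> bool:
        # This needs the plaintext. Under E2E encryption the server only
        # ever sees ciphertext, so this check cannot run server-side.
        return hashlib.sha256(file_bytes).hexdigest() not in BLOCKED_SHA256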
> Then HN taught me that any company storing images on their infrastructure in the US must report pedophilic images to the US government.
It's certainly been going on for the past decade.
For example:
>a man [was] arrested on child pornography charges, after Google tipped off authorities about illegal images found in the Houston suspect's Gmail account
There's nothing to report if all you have is some encrypted blob. Alternatively, just don't consume any user data at all. Data is and should be a massive liability.
If you don't want the very dangerous weapon you have thought up to be abused, don't build a physical assembly of it, and don't tell anyone who has a habit of abusing powerful weapons.
Apple's banned-image reporting won't stay iCloud-only. iMessage is next. Maybe all data on your phone. 1) Phone-side scanning is overkill for pics already on their servers. You don't build this and take the PR flak for something you can already do server-side. 2) Even if it's somehow not Apple's plan, they will be forced to use it on iMessage. Congress has been trying to for years. See the EARN IT Act [0].
Apple just erroneously said "it's safe" despite the fact that it clearly can be abused.
It is not the next step; it is already there, if you read the technical papers. An additional encryption layer comes to iCloud photos with this change, and Apple can't see your photos anymore unless the CSAM threshold is reached.
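For those who haven't read the technical summary: the threshold piece is essentially secret sharing. Apple's actual construction layers private set intersection on top, but the core property, that the server learns nothing until it holds N matching vouchers, can be sketched with plain Shamir sharing. A toy Python illustration, not Apple's protocol:

    # Toy Shamir secret sharing over a prime field. The server gets one
    # share per matching photo and can only reconstruct the secret (the
    # decryption key) once it holds at least `threshold` shares.
    import random

    P = 2**127 - 1  # a Mersenne prime; fine for a toy field

    def split(secret, threshold, n_shares):
        # Random polynomial of degree threshold-1 with f(0) = secret.
        coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
        def f(x):
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n_shares + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0.
        secret = 0
        for xi, yi in shares:
            num = den = 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * -xj % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    key = 123456789
    shares = split(key, threshold=10, n_shares=30)
    assert reconstruct(shares[:10]) == key  # 10 matches: key recovered
    # With only 9 shares the interpolation yields garbage (with overwhelming
    # probability), i.e. below the threshold the server learns nothing.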
>Is there some evidence you have of this plan? Sounds like this is just a fear you have.
The EARN IT Act. It may not be Apple's plan; Apple's plan, as you suggest, might only be to scan on encrypted iCloud and exclude encrypted iMessage. But what Apple will be pushed to do after that is pretty clear.
If the government passes a law mandating that encrypted messages be scanned, it won’t be done using this CSAM mechanism, and it won’t only be Apple doing it.
In short, you might be right to be afraid of this outcome, but it has nothing whatsoever to do with CSAM countermeasures.
>That, of course, is the rub: Apple controls the algorithm, both in terms of what it looks for, what bugs it may or may not have, and also the inputs, which in the case of CSAM scanning is the database from NCMEC. Apple has certainly worked hard to be a company that users trust, but we already know that that trust doesn’t extend everywhere: Apple has, under Chinese government pressure, put Chinese user iCloud data on state-owned enterprise servers, along with the encryption keys necessary to access it. What happens when China announces its version of the NCMEC, which not only includes the horrific imagery Apple’s system is meant to capture, but also images and memes the government deems illegal?
>The fundamental issue — and the first reason why I think Apple made a mistake here — is that there is a meaningful difference between capability and policy. One of the most powerful arguments in Apple’s favor in the 2016 San Bernardino case is that the company didn’t even have the means to break into the iPhone in question, and that to build the capability would open the company up to a multitude of requests that were far less pressing in nature, and weaken the company’s ability to stand up to foreign governments. In this case, though, Apple is building the capability, and the only thing holding the company back is policy.
I agree that it could be used to detect image collections (and only image collections) that are not porn that users upload to iCloud Photo Library.
That is the only established abuse case. Apple has categorically denied that they will comply with such demands, just as they refused to help the FBI in the San Bernardino case.
Even if they do end up complying in China because China passed a law, authoritarianism in China is a red herring. This mechanism is of no consequence to the Chinese government.
None of this has anything whatsoever to do with your claim that 'iMessage is next', and the article doesn't support your claim.
From everything that I've read, iCloud Photo Library is currently encrypted on the server, with a key that Apple only uses when presented with a warrant. If I ran the company (disclaimer: I do not) I'd implement this with an airgapped system in a vault somewhere, where a very small number of people have access to bring encrypted images in on a CD-R under two-person control.
That being said, one of two things is true. Either Apple does exactly what they say, in which case they are not able to perform server-side content / fingerprint scanning, or Apple is outright lying about only using their key on behalf of law enforcement. This latter case would open them to all sorts of legal liabilities, like a suit from shareholders for false reports. It would also require the silence of every Apple engineer who has ever been involved in at least their iCloud Photo program, and probably a bunch of server infrastructure as well. Additionally, they'd be legally obligated to report their scan results to the NCMEC but would have to do so in a way that doesn't give away that they're lying about how their systems work.
Because once that functionality is there, it affects everyone, not just people in the US. And it basically means we sell out our democratic principles, or rather allow our tech giants to sell them out. Or force them to do it, like our elected governments are doing. Either way, I don't like the outcome.
The right to secret communication. The right to not be under surveillance. The government cannot open letters without a warrant, but somehow Apple, Google, MS and co. can sniff through electronic communication as they see fit because of a clause in an EULA. No idea how that clause got there, but maybe the days when Stasi surveillance was the poster child of government intrusion into private life are too long gone to be remembered. Or they aren't, and certain people choose to make a shitload of money from the thing.
I’m an American ADHD patient. My doctor made me (and his other patients) come in on random weekends for drug tests. He said the DEA made him report his records.
It's easy to find other people saying they don't have to take drug tests. It seems more likely your doctor is mistaken or lying than many other doctors just ignore a legal requirement.
I don’t know if it’s a legal requirement, I certainly couldn’t find any information on it. My assumption is some lawyer told him it’s a “best practice” to have this information on record in case the DEA audits him or something.
Might as well make it legal for police to search our houses at will, as long as they are looking for child abuse images. Doesn’t sound much like the US any more at that point.
Google is, apparently, checking Gmail for CP. Apple will soon be doing it on your phone. Checking your paper mail for analog CP requires a warrant and can only be done by the police. See the difference?
Apple is only checking images you choose to upload to iCloud photos to see if you are uploading a collection of CSAM. This is entirely optional, and they have publicly explained what they are doing.
They are not sniffing through your communications as they see fit.
One last try, after that I'll stop since you are all over these submissions defending Apple here.
Take traditional mail. It is not opened; it is usually not read, nor is its content checked. It can be, and is, opened in case of a warrant (let's ignore totalitarian regimes here). What Google is doing when it comes to photos, as Apple did before, is opening every envelope containing photos to check whether or not they are CP. Already bad enough, because they still opened your mail. You could avoid that by just using another mail carrier, though.
What Apple is doing now is checking your photos before you put them in the envelope. In case they find too much stuff they don't like, they open all your other photo albums. And they tell the authorities. Without any means for you to prevent that. It's like the postal service looking at your mail before they pick it up.
All that without oversight by courts. Without proper legal and investigative proceedings. Heck, currently even without any law forcing them to do it.
The more recent incidents where that or similar things happened were:
- the USSR
- the DDR with the Stasi
- Nazi Germany
- Western allies during WW2 through dedicated censorship bureaus
All of those were historically deemed unacceptable, if perhaps necessary for the greater good. Now a private entity with global reach does the same thing in principle, even with the technical capabilities to do it on a much larger scale and more thoroughly. And because Apple is private, that is, for some reason, OK with you.
Not sure further discussion with you has a point; I'll just leave it at that.
> One last try, after that I'll stop since you are all over these submissions defending Apple here.
Ad hominem is bad faith. It’s usually a sign that you know your arguments don’t hold up.
> What Apple is doing now is checking your photos before you put them in the envelope.
No, an ‘envelope’ is a totally misleading analogy. This has nothing to do with sending messages.
If you want an analogy try this one: Apple provides a warehouse for people who want to store copies of their precious photos. They give you a copier to make copies of your photos, you give them the copies, and they file them.
Because they don't want a vault full of child porn, they equip the copier with a scanner to detect known child porn while it makes the copy.
That is all that is happening here. No sniffing through communications as they see fit, only a way to prevent you from uploading child porn to their service.
Anyone saying otherwise simply isn’t being truthful.
Traditional mail is under federal jurisdiction from mailbox to mailbox, for obvious reasons.
Storing files on a private company's servers is nothing like that whatsoever.
They could still do the scanning, but if the photo fails the check, the device would just refuse to upload it and display an error informing the user that the photo will not synchronize. There is no reason the results of the scans need to be sent to Apple's servers.
If you read the technical details, the result of the scan is packaged with the photo. So if the upload of the photo fails, the result of the scan is not uploaded either.
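That matches my reading of the summary: the safety voucher travels in the same payload as the encrypted photo, so there is no separate channel that ships scan results on their own. A rough sketch of that shape (my own illustration and field names, not Apple's actual protocol):

    from dataclasses import dataclass

    @dataclass
    class UploadPayload:
        encrypted_photo: bytes
        safety_voucher: bytes  # produced on-device alongside the photo

    def upload(payload: UploadPayload, send) -> bool:
        # One request carries both parts. If the upload fails, neither the
        # photo nor the voucher reaches the server: the scan result never
        # travels on its own.
        return send(payload)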
>if you don't want to store pictures in the clear on your servers
Reading https://support.apple.com/en-us/HT202303 , it seems that Apple may encrypt pictures on their servers, but they have the key. The list of what's actually end-to-end encrypted doesn't include photos. So they may be scanning on your phone, but they could scan on their servers if they wanted to.
I posted more detail upthread but what I've found suggests that Apple does have a key to decrypt pictures but they claim to use it only to respond to a warrant. (They could of course be lying about that, but I don't believe they are.)
But they could, as iCloud Photos is not e2e (Apple can read all of it) and they turn over the user data on over 30,000 users per year to the USG without even a warrant.
They haven’t announced this, but they invest a lot in encryption and privacy, and have stated that user privacy is a value of theirs. They have also expressed that they don’t want access to be able to be forced by law enforcement.
Apple stores user iCloud backups and their encryption keys on Chinese government-controlled servers in China, and gives the Chinese government full access to those servers. And routinely grants the US government warrantless access to those same backups in the US.
So what actions are you referring to that show they won't do any of those scary things?
Right, so presumably you’d agree that the people who are saying that CSAM detection is a problem because China might abuse it are just being silly, right?
As for the US government having access to the backups, that’s required by law.
You can always make the paranoid case that Apple wants to do this because they are somehow lying about their values, or you can make the case that their hand has been forced.
You could also note that they promised to implement e2e backups but haven’t yet, and this is rumored to be because the FBI asked them not to.
If you assume that Apple is doing this stuff because they want to, then of course you’ll see this next move as just another bad thing they are doing.
If on the other hand you consider that they don’t want to do these things but are being forced to until they have a better option, then you can look at this move as a way to get out of a double bind.
Now they can turn on e2e without being accused of creating a safe haven for pedophiles.
Both pathways are plausible, but given the investment in privacy Apple has been making and the consistency with which they state their values and boundaries, I don’t think they want to be creating backdoors.
Encrypted or not, Google will give the backups to the government, along with any keys they have.
I agree that there would be more protection against the government if the backups were encrypted, and I hope this is still Apple’s plan.
Google on the other hand, has been scanning photos for CSAM all along, and collects a massive trove of behavioral data from android and every one of their other properties including search history, all of which are also available to the government.
Apple's own transparency report, under FISA orders. Presumably it includes all subscriber data they can access for the specified accounts, so likely contacts, photos, and device backups (full iMessage chat history, or sync keys to decrypt same).
FISA orders are not warrants and do not require probable cause; the FISA Amendments Act Section 702 spying that goes on (aka PRISM internally to the IC) pulls data directly from cloud provider systems without a search warrant and was cited by Ed Snowden as one of the main reasons he came forward.
I was using iCloud Photo Library before this was announced.
None of my photos have been scanned, nor ever will be unless I choose for them to be. I don’t have to do anything to achieve this. They won’t scan anything unless I decide to go ahead.
That is the very meaning of opt-in.
Opt-out typically means that someone will go ahead with something unless you decline. This is not that.
I do agree that if I don't want on-device scanning in the future, I will need to choose another cloud photo service, but in the meantime, nothing will be scanned without me taking positive action to initiate it.
Encryption does not help; Apple is still responsible.
If Apple intends to let the user store photos in iCloud (or send them by iMessage) encrypted, they either have to keep the keys, so they can decrypt and scan the photos, or they have to keep the user from uploading incriminating content.
Apple found a third way: they will only get to reconstruct the keys if the user uploads too many pictures that trigger alarms.
The US isn't the only government with a stake in this. And countries like China, Saudi Arabia, and the Emirates have a lot of leverage, financially and diplomatically. Heck, Facebook bowed to Myanmar just to get the users there.
Every cloud infrastructure provider is required to do that. Turning a blind eye does not take the duty away; you must be actively pursuing it. Encryption would just set off a flood of new laws.
Tarsnap exists, so either this is legal when done right or Tarsnap is a dead man walking, and I haven't heard anything to that effect from any credible source.
I guess that service falls slightly outside the scope of active scanning, because it is for general backup, not a cloud service specifically for photo sharing and storage.
And that is my point: by tying oneself to the mast, denying oneself the ability to navigate toward the sweet, sweet sound of user data, it becomes possible to sail straight past the sirens.
Today this is less about physically tying up management and putting wax in the crew's ears, and more about technically and legally making oneself unable to touch the juicy, juicy customer data.
Encryption itself is not illegal, but it can make it harder to comply with other legal requirements. I had heard this many times, so now I have read the whole law (curse me). Section 2258A(f) in particular says there is no blanket requirement to seek out evidence of CSAM. However, if NCMEC shares information about specific visual depictions and asks the provider to stop their redistribution, the provider may be required to comply, for example if NCMEC shares hashes of material that should be blocked. To be able to block that data, a search is required, and complying with that under E2E encryption is not possible.
>What other technical approach are people advocating for?
Reduce user data stored in cloud data centres as much as possible. This is the approach taken by WhatsApp, so I'm not surprised they are the ones most vocal against it.
And at the risk of appearing to be supportive of a Facebook product, I think this is the right way to take computing. We don't need a central place to put stuff or to do compute when we can do it on our own devices. We just need orchestration.
It is a bit ironic that WhatsApp is worried about privacy. All message metadata is unencrypted and part of their business model. They don't know the message contents, but they know everything about your social network (whom you message and when, which groups you are part of, etc.). Add cross-app tracking with Facebook APIs and soon they can also categorize your message contents.
How would you do it? With WhatsApp you have the file on the messaging partner's phone. People do not want to share their images over a peer network with random people.
Apple aren't required to decrypt anything. This is why every other server/storage provider isn't also demanding access to everything client-side (or keys to decrypt server-side). It's a red herring for Apple to pretend they're "required" to do this; they're no more required to do so than the post office is required to open your mail on the off chance they might be handling CP...
Then HN taught me that any company storing images on their infrastructure in the US must report pedophilic images to the US government.
At this point, the approach taken by Apple seems like the best one to me, if you don't want to store pictures in the clear on your servers.
What other technical approach are people advocating for?
Another option is to try to change the law, but that is beyond the scope of this conversation.