
Apple has the keys to read your photos; they probably aren't even end-to-end encrypted, since you can view them from a browser. So this is fake news. They could have scanned the images on the server the moment you uploaded them, reported the CSAM to the authorities, and caught the bad guys.

I want proof that Apple is unable to scan on the server, that they don't have the keys, and that the only solution is on-device scanning.



Of course Apple can read data you have uploaded to iCloud.

Every cloud service can read the data you upload to it.

How do you think Google and Microsoft are scanning everything in your account?

What Apple cannot do is see the results of a scan that was not conducted on its servers, since those results are encrypted with a key Apple doesn't have until the threshold of 30 images matching known kiddie porn is reached.
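To make the contrast concrete, here is a toy sketch in Python of the server-side approach attributed to Google and Microsoft above. Real systems use perceptual hashes such as PhotoDNA rather than an exact SHA-256, and the hash list is obviously not public, so the names and details here are illustrative assumptions only:

    import hashlib

    # Hypothetical stand-in for a vetted database of known-CSAM hashes
    # (e.g. supplied by NCMEC); not a real list.
    KNOWN_BAD_HASHES = set()

    def server_side_scan(uploaded_bytes: bytes) -> bool:
        # Once the provider can read the upload, it can hash it and check
        # membership directly on its own servers.
        digest = hashlib.sha256(uploaded_bytes).hexdigest()
        return digest in KNOWN_BAD_HASHES  # True -> flag the account for review

The point of Apple's design is that this check, and its result, never exist in readable form on the server.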


This logic makes no sense

1. Apple can read data you have uploaded to iCloud.
2. What Apple cannot do is see the results of a scan.

What makes 2 impossible? If 1 is true, then Apple can run the scan and check the hashes on the server, so 2 is possible.

What is the advantage? Apple will also have to push the on-device scanning to laptops too, and what do you do with devices that don't get updated? Instead of a simple solution that works for all images in iCloud, you get a privacy-invasive solution that only works on hardware running the latest software. It seems to me that this implementation is not solving the problem of "prevent CSAM from being uploaded to iCloud" but solves some other problem, and the CSAM is a cover.


The scan is not conducted on Apple's servers, so they don't have access to the scan results the way Google and Microsoft do when they conduct the same scan on their servers.

Apple designed the system that way because it is more private to keep potentially damaging information from being readable on their server, where it can be misused by anyone who can issue a warrant.

They can't decrypt the scan result until the device tells them that 30 images have matched kiddie porn, whereupon the device hands over the decryption key.

The theory is that you are unlikely to have 30 false positives, but the next step is to have a human make sure that's not what happened.

Since Google refuses to hire expensive human beings when a poorly performing algorithm is cheaper, I have no doubt Google is turning in people for as little as a single false positive.


>They can't see the scan result until the device tells them that 30 images have matched kiddie porn

Isn't this FALSE? The device hashes the images but it does not have the database, so the hashes are sent to the server, and Apple's servers compare your hashes with the secret database, so Apple knows how many matches you have.

Your argument would make sense ONLY IF your images were encrypted and Apple had no way to decrypt them, so that the only way to compute the hash would be with the creepy on-device code.


The system is designed as if iCloud Photos were already E2EE. It's not currently, so Apple could have simply done mass decryption server-side and scanned there.

But the CSAM system, as designed, works exactly as described. It's technically pretty cool: each matching hash builds part of a key, and only when the key is complete (~30 matches) can the matches, and only the matches, be decrypted for review. This also only works on photos destined for iCloud, and it actually makes it harder for LE to show up and say 'here is a warrant to scan all photos for X', since the matching hash db is included in the iOS release.
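To see how "each matching hash builds part of a key" can work, here is a minimal Python sketch of threshold secret sharing (Shamir's scheme). This shows the general technique only, not Apple's exact construction (which layers it with private set intersection), and the numbers are illustrative:

    import random

    PRIME = 2**127 - 1      # field modulus (a Mersenne prime)
    THRESHOLD = 30          # shares needed before the key can be rebuilt

    def make_shares(secret, n_shares, threshold=THRESHOLD, prime=PRIME):
        # Random polynomial of degree threshold-1 with the secret as the
        # constant term; each share is one point on the polynomial.
        coeffs = [secret] + [random.randrange(prime) for _ in range(threshold - 1)]
        def f(x):
            return sum(c * pow(x, i, prime) for i, c in enumerate(coeffs)) % prime
        return [(x, f(x)) for x in range(1, n_shares + 1)]

    def recover_secret(shares, prime=PRIME):
        # Lagrange interpolation at x = 0 recovers the constant term.
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % prime
                    den = (den * (xi - xj)) % prime
            secret = (secret + yi * num * pow(den, -1, prime)) % prime
        return secret

    key = random.randrange(PRIME)
    shares = make_shares(key, n_shares=40)      # e.g. one share per matched photo
    assert recover_secret(shares[:30]) == key   # 30 shares: key recovered
    assert recover_secret(shares[:29]) != key   # 29 shares: still locked (w.h.p.)

Each matching image contributes one share; below the threshold the server holds shares that tell it nothing about the key, and at the threshold the key (and only the matched vouchers) become readable.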


>The system is designed as if iCloud Photos were already E2EE. It's not currently,

and it will never ever be E2E because of US laws, or if it is ever encrypted it will use backdoored NSA crypto (and Apple PR didn't even try to hint at it to calm the waters).

I agree the algorithm is pretty clever, but it feels like it is not designed to solve the CSAM problem but to look good on someone's CV.

Now you have the worst of both worlds: Apple has access to your photos on the server (and if they respected the law they would already be scanning them for CSAM, since they are responsible for what they store and share, I mean when you share stuff), and Apple has a scanning program inside your phone.


> it feels like it is not designed to solve the CSAM problem but to look good on someone's CV

It feels like it's designed to protect customers from being accused of having kiddie porn (by prosecutors who issue a dragnet warrant for everyone who had a single positive result).

Dragnet warrants on location data have become very common.

>Google says geofence warrants make up one-quarter of all US demands

https://techcrunch.com/2021/08/19/google-geofence-warrants/

The way to resist these warrants is to never have access to the scan results until you are reasonably sure there is a real problem.

By setting a threshold of 30 positive results before you can see any of the scan results, customers are much more protected from the inevitable false positives.
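Back-of-the-envelope, in Python (the per-image false-match rate and library size below are invented for illustration; Apple's stated design target was roughly one-in-a-trillion per account per year, not a per-image figure):

    from math import lgamma, log

    def log10_binom_pmf(n, k, p):
        # log10 of P(X = k) for X ~ Binomial(n, p), computed in log space
        # so tiny probabilities don't underflow.
        ln = (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
              + k * log(p) + (n - k) * log(1 - p))
        return ln / log(10)

    # Hypothetical: 100,000 photos, 1-in-a-million false-match rate per photo.
    n, p = 100_000, 1e-6
    print(log10_binom_pmf(n, 1, p))    # ~ -1.0: one false match is quite plausible
    print(log10_binom_pmf(n, 30, p))   # ~ -62: thirty false matches is effectively impossible

(For k far above the expected count n*p, the tail P(X >= k) is within a small factor of P(X = k), so the single term is a fair order-of-magnitude estimate.)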




