
>They can't see the scan result until the device tells them that 30 images have matched kiddie porn

Isn't this false? The device hashes the images, but it does not have the database; the hashes are sent to the server, and Apple's servers compare your hashes against the secret database, so Apple knows how many matches you have.

Your argument would make sense only if your images were encrypted and Apple had no way to decrypt them, so that the only way to compute the hash would be with the creepy on-device code.



The system is designed as if iCloud Photos were already E2EE. It's not currently, so Apple could simply have done mass decryption server-side and scanned there.

But the CSAM system works exactly as described, and it's technically pretty cool. Each matching hash contributes a share of a key; only when the key is complete (~30 matches) can the matches, and only the matches, be decrypted for review. It also runs only on photos destined for iCloud, and it actually makes it harder for LE to show up and say 'here is a warrant to scan all photos for X', since the matching hash DB ships inside the iOS release.
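Roughly, the key-assembly step is a threshold secret sharing scheme. Here's a minimal sketch in Python assuming a plain Shamir construction; Apple's actual protocol layers this under private set intersection, and the field size, threshold, and function names here are illustrative, not Apple's implementation:

    import random

    PRIME = 2**127 - 1  # field modulus (illustrative choice)
    THRESHOLD = 30      # shares needed to reconstruct, matching the ~30 figure

    def make_shares(secret, n_shares, k=THRESHOLD, p=PRIME):
        # Random degree-(k-1) polynomial with the secret as constant term;
        # any k points determine it, fewer than k reveal nothing.
        coeffs = [secret] + [random.randrange(p) for _ in range(k - 1)]
        f = lambda x: sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
        return [(x, f(x)) for x in range(1, n_shares + 1)]

    def reconstruct(shares, p=PRIME):
        # Lagrange interpolation at x = 0 recovers the constant term.
        secret = 0
        for xi, yi in shares:
            num = den = 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * -xj % p
                    den = den * (xi - xj) % p
            secret = (secret + yi * num * pow(den, -1, p)) % p  # Python 3.8+ modular inverse
        return secret

    key = random.randrange(PRIME)
    shares = make_shares(key, 40)  # one share per matching photo
    assert reconstruct(shares[:THRESHOLD]) == key
    # With fewer than 30 shares, reconstruction yields garbage, not the key.

As described, each positive match effectively uploads one such share, so the server just holds an opaque pile of shares until the threshold is crossed.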


>The system is designed as if iCloud Photos were already E2EE. It's not currently,

and it will never be E2E because of US law, or if it is ever encrypted it will use backdoored NSA crypto (and Apple PR hasn't even tried to hint at E2EE to calm the waters).

I agree the algorithm is pretty clever, but it feels like it was designed not to solve the CSAM problem but to look good on someone's CV.

Now you have the worst of both worlds: Apple has access to your photos on the server (and if it respected the law it would already be scanning them for CSAM, since it is responsible for what it stores and shares, I mean when you share stuff), and Apple has a scanning program inside your phone.


> it feels like it was designed not to solve the CSAM problem but to look good on someone's CV

It feels like it's designed to protect customers from being accused of having kiddie porn (by prosecutors who issue a dragnet warrant for everyone who had a single positive result).

Dragnet warrants on location data have become very common.

>Google says geofence warrants make up one-quarter of all US demands

https://techcrunch.com/2021/08/19/google-geofence-warrants/

The solution to resisting these warrants is to never have access to the scan results until you are reasonably sure there is a real problem.

By setting a threshold of 30 positive results before Apple can see any of the scan results, customers are much better protected against the inevitable false positives.
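To put rough numbers on that, here's a back-of-the-envelope in Python. The per-image false match rate and library size are assumptions for illustration (Apple's published figure is per-account, not per-image), using a Poisson approximation:

    from math import exp, factorial

    n_photos = 100_000        # a large photo library (assumption)
    fp_rate = 1e-6            # per-image false match probability (assumption)
    lam = n_photos * fp_rate  # expected false matches; Poisson approximation

    # Chance an innocent user trips at least one false match:
    p_one = 1 - exp(-lam)                           # ~0.095
    # Chance of reaching the 30-match threshold (leading Poisson term):
    p_thirty = exp(-lam) * lam**30 / factorial(30)  # ~3.4e-63

    print(f"P(>=1 false match):    {p_one:.3f}")
    print(f"P(>=30 false matches): {p_thirty:.1e}")

Under those assumptions, a single-match dragnet sweeps up roughly one in ten large libraries, while the odds of an innocent account crossing the 30-match threshold are astronomically small.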



