
The problem I have with this approach is that it introduces on-device scanning for images. All that is needed to adapt it to scan for a different kind of image is to connect it to a different database, say, Winnie the Pooh memes featuring the CCP chairman, and boom: jailed dissenters. And the ability to scan all images is but a minor firmware update away.

Server-side scanning makes it clear that the company running the servers has access to your photos. So you can either find a form of encrypted storage or be okay with that, depending on your privacy stance. Having a device with the ability to scan your photos removes that choice. It is a privacy invasion.
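The parent's point can be made concrete with a minimal sketch. Everything here is hypothetical (the database names are invented, and Apple's real system uses perceptual NeuralHash matching against a blinded database, not plain SHA-256), but it shows the structural worry: the scanning code is indifferent to what the database contains.

```python
import hashlib
from pathlib import Path

def scan_photos(photo_dir: str, banned_hashes: set[str]) -> list[str]:
    """Flag any photo whose hash appears in the supplied database.

    The scanner logic never changes -- only the database decides what
    counts as 'prohibited'. (Illustrative only: a real system would use
    perceptual hashes, not exact SHA-256 digests.)
    """
    flagged = []
    for photo in Path(photo_dir).glob("*"):
        if not photo.is_file():
            continue
        digest = hashlib.sha256(photo.read_bytes()).hexdigest()
        if digest in banned_hashes:
            flagged.append(photo.name)
    return flagged

# The same function serves either purpose; only the input differs
# (both database names are hypothetical):
#   scan_photos("/photos", csam_hash_db)
#   scan_photos("/photos", dissident_meme_hash_db)
```

Retargeting the system is a data change, not a code change, which is exactly why "connect it to a different database" is the whole attack.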



> ability to scan all images is but a minor firmware update away

iOS has already done on-device ML-based photo categorisation for some time; AFAIK there is no way to turn it off.


And now it's pretty much the same thing, but with a SWAT team knocking your door down when the ML messes up.

Yay progress.


The SWAT team is knocking on your door after you've uploaded multiple instances of child porn to iCloud and those instances have been verified to actually be child porn by a human. That sounds fine to me.
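The flow this comment describes can be sketched as follows. The threshold value and function names are hypothetical (at the time of the announcement Apple had not published an exact number), but the claimed design is that nothing is reported until enough matches accumulate and a human reviewer confirms them:

```python
REVIEW_THRESHOLD = 30  # hypothetical value; not an Apple-published figure

def handle_matches(match_count: int, human_confirms_csam) -> str:
    """Sketch of the described pipeline: threshold gate, then human review.

    human_confirms_csam is a callable standing in for the manual
    review step; no report happens unless it returns True.
    """
    if match_count < REVIEW_THRESHOLD:
        return "no action"          # too few matches to even decrypt vouchers
    if human_confirms_csam():
        return "report filed"       # only after human verification
    return "false positive, dismissed"
```

Whether one trusts the threshold and the reviewers is, of course, the whole dispute in the rest of this thread.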


>The SWAT team is knocking on your door after you've uploaded multiple instances of child porn

...or whatever gets sneaked into a database that nobody can take a look at, and whose maintainers have zero obligations to you.

>and those instances have been verified to actually be child porn by a human.

Yeah, SWAT teams doing their homework before shooting people up is precisely why SWATting is a completely innocent thing to do and has never put anyone in danger.

And that also does nothing in the case of a "neural" (aka black-box) hash collision, where the algorithm mistakes a normal picture for CP. The "human" you have in your dreams doesn't have access to the actual file on your device, right? (At least, that's the sales pitch for on-device privacy.) They won't know until they get you.

Personally, I would hope that HN people know better than to blindly trust an opaque algorithm running off an opaque database to never make a mistake about where it sends SWAT teams... but here we are.


The algorithm doesn't report to the SWAT team. It reports to Apple, who verifies it.


But Apple only plans to scan photos that are synced with iCloud, don't they? So you could just switch to an E2E encrypted alternative and drop iCloud completely.


Yes, but it probably took additional code to only scan pictures that are synced to iCloud.

Changing it to scan every picture would probably not be a monumental task.


I have to remind everyone again that iOS is a black-box, closed-source system. All this speculation applied just as well before they added anything; they might have had this code ready for years already. All we have is what they say. It is already trivial to scan everything on your device and send that metadata off: a few lines of code. If the moment to worry is when they publicly announce scanning everything on the phone without an opt-out, then we should be worried now. Once again, there is no way of telling what they are doing already.


The difference is now every government knows too.

They can't pretend they don't have the capability.

And if they can scan for CP, why can't they scan for "whatever else" instead?


This is not the first time they have run into this: due to the App Store being a walled garden, they are the sole gatekeeper who decides what goes in and what doesn't, making sure the users are safe and everything. Perfect, right?

Well, until protesters want to use an app in the store to coordinate their protests, yet the government wants Apple to reject it so the protesters can't use it:

https://www.applefritter.com/content/teargas-walled-garden-i...

With users not being able to install the app themselves, Apple is a single point of failure with no plausible deniability, unlike Android (and any sane OS in general). And they did reject the app.

And just a few months before this happened, I attended a talk about free software from the FSF, and they mentioned just the same thing about iOS: the gatekeeper is a single point of failure that a repressive regime can apply pressure on. Turned out not to be far-fetched at all...


iOS has been running complex Neural Nets on all your images for years now. It powers all their social features and search.

Apple have always had the capability, and have been advertising it as a central selling point of new versions of iOS for years. That ship sailed a long time ago.


A neural net might be overkill as an example. Antivirus software has existed since the 1980s[1].

[1]: https://en.m.wikipedia.org/wiki/Antivirus_software


Does an AV report its findings to local govs?


What changed is that Apple just signaled to the governments of the world what it's willing to do toward abusing user privacy, and exactly how it can work. And hey, Apple, if you're willing to do that, why not go a bit further and do this, because we're asking you to, or else (and now we know you're obviously even more morally flexible than you used to present yourself as).

Before that, Apple put up a front that they would fight for user privacy at every turn. They pitched that over and over again as a corporate ethos, a selling point. That was the facade, at least, even if one is cynical and wants to pretend it was a lie. Now they're not even presenting the facade, which will open the floodgates dramatically. They went from a supposedly resisting agent to, at a minimum, a morally gray and willing one. Apple dumped an enormous vat of blood into shark-infested waters.


I think I disagree. The current move was an improvement for user privacy compared to what came before. The abuse is only speculation, not something that has actually been done.


Before is speculation. After is too late. When are people allowed to object?


Speculation as in "what's technically possible".


Well.....

I think it's more than that. Images sent with iMessage are stored in iCloud, even if the device is not necessarily uploading them.

How else could they show the warnings they claim in their announcement? [1]

And we have seen these systems have their scope/use case changed in the past. [2]

To the point in the other discussion [3]: OP stated that Apple's plans to scan and then upload suspected images are illegal. But I would think that they are only scanning images, client side, that users themselves are attempting to upload (either through attachments or automatic iCloud backups, etc.), which would put Apple in the clear. In this case that would be iCloud images, or those that piggyback on iCloud services like iMessage.

[1] https://www.apple.com/child-safety/

[2] https://www.eff.org/deeplinks/2020/08/one-database-rule-them...

[3] https://news.ycombinator.com/item?id=28110159


Stop repeating this lie. iMessage photos are not part of this. This is written in the technical document; this only covers photos from iCloud Photos. It's been debunked; just read this article: https://daringfireball.net/2021/08/apple_child_safety_initia...

And of course the scope could change tomorrow. Just like the scope of Android could change tomorrow. They could even have changed the scope without making an announcement!


There's really no need to be this aggressive.

In my comment history it clearly shows that there's an effort to parse through the information and seek clarity.

And it's worth noting that iMessage data is and can be backed up to iCloud, and not just via device backups. For many people with multiple devices this is specifically useful.

https://support.apple.com/en-us/HT208532

Further, as to this

>And of course the scope could change tomorrow. Just like the scope of Android could change tomorrow. They could even have changed the scope without doing an announcement!

I am pointing out that there is a specific history of this already on record and documented. And their technical documents specifically state their intentions.

Page 3 : https://www.apple.com/child-safety/pdf/Expanded_Protections_...

"This program is ambitious, and protecting children is an important responsibility. Our efforts will evolve and expand over time"

I don't understand why you find such an observation so offensive. It's pretty clear Apple sees this as a first step into what will eventually be a much larger program.


>The problem I have with this approach is that it introduces on-device scan for images.

Windows already does this via Windows Defender. This is basic AV functionality, and much more privacy-preserving.


But Windows Defender doesn't report you to law enforcement when it believes it found a virus.


How do you know that? It's the black-box paradox: all we have is what they say. They might report CSAM hashes to law enforcement. Any file can be a threat, hence images are included in scans. Defender also uploads whole files unencrypted if you don't opt out.


Neither does this.

https://www.howtogeek.com/719825/how-to-stop-windows-10s-ant...

If Microsoft receives an illegal file through this channel, they are legally obligated to report it in the US.


...if a human actually gets the file, figures out what type it is, and examines it for themselves, they'd be obligated to report it. With the number of Win10 devices in the world, how big would their security team have to be to hand-review every automatically submitted "suspicious" sample? (For that matter, why would a vanilla JPG get flagged as "suspicious" in the first place?)


> All what is needed to adopt it to scan for different kind of images is to connect it to different database, say, Winnie the Pooh memes featuring CCP chairman, and boom, jailed dissenters.

The CCP have already thoroughly demonstrated that they don't need manufacturers' consent to build these systems.

Look at the Uyghur population in China. They already have their phones scanned on-device for dissident material, not by coercing manufacturers, but by forcing the population to install a surveillance app, then making it illegal to use a phone without it.

Being caught at a checkpoint without the app installed and working is grounds for immediate arrest and re-education.


> The CCP have already thoroughly demonstrated that they don't need manufacturers' consent to build these systems.

It was obviously merely an example for illustration purposes by the parent. To get a point across it's often very helpful to use a stark, clear example.

Few governments will ever have the extraordinary capabilities and resources of the CCP in China.

For the other ~190 governments that will never reach that level of capability, what they might have now is a globe-spanning billion-device corporation like Apple more willing to assist them.



