Actually, this makes sense. Ever since recent iOS versions started showing a per-app alert asking to grant access to local network resources, I've noticed it appearing in several apps that have no business mapping and fingerprinting my LAN, including the Facebook app.
It's weird how a company can be so internally disconnected. This entitlement is a good privacy-preserving hurdle that prevents scummy apps from interfering with your local network and fingerprinting you. On the other hand, the on-device scanning for nudes in iMessage and CSAM in iCloud Photos is total snitchware, swatting-as-a-service software that only serves to incriminate and harass the owner of the device. Such a shame.
The only problem I have with it is that the entitlement system controls both what you can submit to the App Store and what you can locally compile and install.
For example, you can't locally develop a VPN app until you ask Apple for permission to develop a VPN app. Almost certainly this is to appease China, which I find particularly egregious.
Fonts also require an entitlement, but I'm not sure whether it's just something you need a paid dev account for or whether you need to specifically grovel to Apple about why you need it. I doubt "I want to submit a pull request to iSH" would be considered a valid reason (but correct me if I'm wrong).
Even things like camera access in multitasking views are entitlements your dev account needs to be preapproved for. In fact, it wasn't something Apple even publicly mentioned for a while: only Zoom had it, until someone reverse-engineered their app bundle and found out about it.
Didn't that get hardened in response to the Onavo revelations? In particular, Facebook was using their enterprise certificate to sign spyware installed via MDM. I don't remember it being a requirement from the beginning, even though China has tightly regulated VPNs for a long time.
Honestly, I see no issue with Facebook being allowed to do this. If someone pays me to install spyware on my desktop, nobody complains that we should remove the ability to run as root.
> For example, you can't locally develop a VPN app until you ask Apple for permission to develop a VPN app. Almost certainly this is to appease China, which I find particularly egregious.
Can't you manually add a VPN without an app? I haven't needed to configure this myself, but presumably, instead of bundling your own VPN app, you can do the manual configuration in Settings, which is significantly easier.
That only works if your VPN uses one of the handful of standard protocols iOS supports natively (IKEv2, IPsec, L2TP).
If your VPN uses a different protocol, you must (with an entitlement) develop an app that gets to execute its own code for every packet sent and received: for example, if you want to create a "VPN" that sends and receives packets over audio like an old-school modem, or implement the WireGuard protocol, or implement a DNS tunnel, etc.
The good thing is that a random shady app can't just start hijacking and intercepting every network packet for the entire device. The bad thing is that it makes it difficult for an outsider to contribute to open-source VPN apps, since you don't have access to the entitlement (which ultimately requires access to the private code signing keys and provisioning profiles of the developer that DID receive the grant, and the debug device must also be registered there).
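To make that concrete: the hook here is the NetworkExtension framework. A custom-protocol VPN ships a packet tunnel extension, and iOS only loads it if the app is signed with the Network Extension entitlement Apple grants per developer account. A minimal sketch of the provider side (the class and callbacks are the real API; the addresses are placeholder values):

    import NetworkExtension

    // Loads only if the app is signed with the
    // com.apple.developer.networking.networkextension entitlement
    // (packet-tunnel-provider), granted per developer account.
    class MyTunnelProvider: NEPacketTunnelProvider {
        override func startTunnel(options: [String: NSObject]?,
                                  completionHandler: @escaping (Error?) -> Void) {
            // Placeholder addresses, for illustration only.
            let settings = NEPacketTunnelNetworkSettings(tunnelRemoteAddress: "203.0.113.1")
            let ipv4 = NEIPv4Settings(addresses: ["10.0.0.2"], subnetMasks: ["255.255.255.0"])
            ipv4.includedRoutes = [NEIPv4Route.default()] // route all traffic through the tunnel
            settings.ipv4Settings = ipv4

            setTunnelNetworkSettings(settings) { error in
                if let error = error { return completionHandler(error) }
                // From here the extension sees every packet on the device:
                // read from packetFlow, encapsulate however the protocol
                // demands (WireGuard, a DNS tunnel, even audio), and send.
                self.packetFlow.readPackets { packets, protocols in
                    // ... encapsulate and transmit ...
                }
                completionHandler(nil)
            }
        }
    }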
Your comment nicely addresses the local network restriction, but the article already expresses understanding of this. Why the extra-stringent multicast thing?
They're trying to stop people from listening for all multicast traffic, presumably because that prevents someone from fingerprinting you based on the devices on your network.
The restriction doesn't stop you from using the NetService API (or friends) to listen for or advertise specific named mDNS services. You can continue to do that today without any extra entitlements. What it stops you from doing is listening for wildcards and sucking up everything; sadly, restricting multicast traffic on regular IP sockets is basically an extension of that.
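To make the distinction concrete, a sketch with the newer Network framework (NWBrowser is the modern counterpart to NetService; "_airplay._tcp" is the standard AirPlay service type):

    import Network

    // Browsing for one specific, named service type: still allowed
    // without any extra entitlement.
    let browser = NWBrowser(for: .bonjour(type: "_airplay._tcp", domain: nil),
                            using: .tcp)
    browser.browseResultsChangedHandler = { results, _ in
        for result in results {
            print("found:", result.endpoint)
        }
    }
    browser.start(queue: .main)

    // What the entitlement now gates: the wildcard meta-query
    // ("_services._dns-sd._udp", i.e. "list every service type on
    // this network") and raw multicast/broadcast sockets.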
Instead of trying to define a guideline that says "may not use multicast for this, that, blah", apps instead just have to request it and explain why. It's probably not a topic the usual review team can evaluate without far more training than their job otherwise requires; it's a specialty networking skill with lots of privacy traps. It also lets Apple see who uses multicast without requesting the permission: they might be using some framework that is unknowingly tracking people, and Apple can then identify that framework and start rejecting it storewide.
“How can we ensure every request for multicast escalates to someone qualified to make a decision that we can stand behind?” An essay question is a great answer, even if people hate the uncertainty. Better that than a new guideline!
Multicast can wreak havoc on a WiFi network. Clients farther from your AP require data to be sent at a slower rate, and APs will send multicast traffic slowly enough that all clients can receive it. In practice this means a 1 Mbps multicast stream takes far more than 1% of a 100 Mbps network, and in some cases I have seen it shut down a network entirely. At my day job we disabled receiving multicast (IPTV) feeds over WiFi years ago because we never saw it work well enough to be worth it.
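Rough airtime math, assuming the AP falls back to a 6 Mbps legacy basic rate for multicast so the farthest client can still decode it (a common default):

    airtime used ≈ stream rate / multicast tx rate
                 = 1 Mbps / 6 Mbps ≈ 17%

    vs. the naive estimate of 1 Mbps / 100 Mbps = 1%

With 802.11b compatibility rates enabled, the basic rate can drop to 1-2 Mbps, at which point that one stream eats most of the channel.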
If Facebook has no business using this permission, why did App Store review approve it? Surely they must know what is going on in every app; that's why the App Store is so safe.
Because there are legitimate reasons why Facebook might need access to your network for streaming videos to a TV or similar use cases.
Apple doesn't know what every app does; they basically see which frameworks get used and which syscalls get made. You can wrap entitlements around those things without knowing exactly what an application is actually up to.
Streaming videos to a TV with AirPlay can and does happen without the app itself needing multicast/broadcast permissions, as long as you use the provided frameworks and APIs.
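For example, the system AirPlay route picker handles discovery and streaming entirely in the OS, outside the app's sandbox; a minimal sketch (assumes it runs inside a UIViewController, so "view" is the controller's view):

    import AVKit
    import UIKit

    // System-provided AirPlay picker: device discovery and streaming
    // happen in the OS, so the app itself never touches multicast and
    // needs no local network permission for this path.
    let picker = AVRoutePickerView(frame: CGRect(x: 0, y: 0, width: 44, height: 44))
    view.addSubview(picker) // "view" assumed to be a view controller's view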
It's not for AirPlay; it's to control Facebook applications running on other devices, like the Facebook Portal. I control our Portal with my phone regularly.
It is an OUTSTANDING product. We have several elderly grandparents who loved visiting their grandkids and during the pandemic were cut off. The portal brought them back into the feeling of being part of the action and getting to see what their grandkids were doing. They were all able to set up the Portal without any assistance, which was very surprising. Also, it’s a fantastic Bluetooth speaker when we’re not using it for calls.
Did I feel good about inviting Facebook into my living room? No. I don’t trust that company. But the tradeoff was worth it for me.
Edit: I didn’t just buy one Portal, I bought 4. One would be kind of pointless.
> Because there are legitimate reasons why Facebook might need access to your network for streaming videos to a TV or similar use cases.
If you want Facebook on your iPhone to call someone then Apple provides a Phone API.
If you want Facebook on your iPhone to stream to your TV then Apple should provide an Apple TV API for Facebook to use.
There is no reason whatsoever for Facebook to have any direct access to anything on your phone, whether it's your sensors, your microphone, your video, or your network.
> If you want Facebook on your iPhone to stream to your TV then Apple should provide an Apple TV API for Facebook to use.
There is one - AirPlay doesn't require local network access, since it's an API apps can use. Un/fortunately, there are multiple competing standards like Chromecast (and other manufacturer-specific casting standards) that require the app to do its own local network discovery to find available devices.
Could you explain your position on the CSAM scanning thing? I'm not an Apple user, but the way I see it, it's not some sort of nudity / age detection mechanism where a very incompetent system could land you in jail for an old digitized photo of yourself after a shower from 30 years ago, when parents actually took those kinds of pics.
The way I see it, it just hashes them with whatever mechanism they came up with, and there are additional mechanisms to verify it if, by some fringe coincidence, your cat picture's hash matches some CSAM hash, which would be annoying but not the end of the world.
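For what it's worth, the hash in question is a perceptual hash (Apple's is called NeuralHash), not a cryptographic one: visually similar images are supposed to produce similar bit strings. A generic perceptual-hash match looks like this toy sketch; the 64-bit width and threshold are made up for illustration, and Apple's actual pipeline does blinded lookups via private set intersection rather than comparisons in the clear:

    // Toy perceptual-hash matching: a "match" is a small Hamming
    // distance, not exact equality. Sizes and threshold illustrative.
    func hammingDistance(_ a: UInt64, _ b: UInt64) -> Int {
        (a ^ b).nonzeroBitCount
    }

    func matchesDatabase(_ imageHash: UInt64, database: [UInt64],
                         threshold: Int = 4) -> Bool {
        database.contains { hammingDistance(imageHash, $0) <= threshold }
    }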
Now, on the other hand, let's say the snitch detects actual CSAM on someone's phone: what's the problem? If it was sent without their consent, an investigation can lead to who sent it, and if it wasn't, well, tough shit...
I know it sounds very 1984-ish, but honestly I don't think it's any worse than the kind of surveillance power Google has with all their platforms combined (Chrome, Android, Google Search, any web thing they haven't killed already).
I guess what I'm asking for is real arguments on why and how this truly violates privacy, and to what extent it is problematic for a legit, non-CSAM-consuming person.
I'm not trying to argue with you but to understand this point of view since I read so many comments against it but nothing that seriously made sense to me.
I am not fond of my device burning battery on, and sitting one bit flip away (a jump-if-zero versus a jump-if-not-zero) from, calling an API whose only purpose is to inform the authorities that I am a suspect. If it happened server-side I would have no problem with it.
Thanks for your response, I'll answer just to continue the conversation, not to try to invalidate your points or anything like that.
I read in other posts that there's some sort of review process and a way to verify whether an image is a match or a collision (I don't know much about the details), but I've also read the latest posts about the attacks being worked on.
I mean, with the complexity these attacks have, I think it'd be easier for a ransomware gang to just infect you, plant the CSAM, find reliable contact info, verify it, lock your phone, and extort you through an untraceable side channel (even if this system didn't exist), or do something a bit more elaborate / targeted (not even at NSO level).
IDK, I think the phone burns battery for dumber reasons at a higher rate. This should only activate when a new picture is written to disk, and it's probably less resource-intensive than WhatsApp / Telegram / iMessage periodically checking for new messages, don't you think?
I think if they don't royally fuck up the process, or turn it into some idiotic fake way of getting the cops whatever paperwork they need to force you to give them access to your files, it's a good thing.
The answer to why people don't like this is simple: if a government like China says "Apple, you're going to add these image hashes to the database and report any device that has them in the next update, or you're going to leave China," what do you think Apple is going to do?
I have read their papers, I understand the system and the safeguards they put in place, but none of them are good enough to have scanning on my device. There is nothing that is good enough. On device scanning for "illicit" content is a box that cannot be closed.
They have the whole system at their disposal for that; they don't have to do this. As an example (I know I could be out of date on this one), do you know why there aren't any iMessage bridges that don't require a Mac?
IIRC it's so deeply entrenched in the system that no one has reversed their way deep enough to replicate it. Now this might sound silly, but it's just an example, a contrast maybe with the hard work the people behind Asahi are putting in, or the huge jailbreak community. The idea I'm trying to convey is that the playing field is HUGE and they just don't need this.
The one thing I would be 100% concerned about is the investigation process for matches, because that's mainly where human interaction and decision-making come into play, and we humans SUCK. We've put people behind bars for years for no reason, and with all this AI crap there have been a lot of news articles about that kind of stuff. That's something we should definitely be worried about, but I guess it's less about the tech and more about the people in charge, right?
I don't understand the idea you are trying to convey. iMessage is not impenetrably complex; it is just an ugly API that uses an Apple-provided certificate and a valid serial number as part of its authentication.
I also don't agree that the human-in-the-loop part of the process is a/the problem. Are you suggesting that it should just send the findings straight to the FBI... where a human would review them? Or maybe skip all of the messy middle part, and if the model detects enough CSAM, just send an APB to the local police to pick you up and take you straight to prison with no trial?
I was using iMessage as an example of something tedious and not overly complex, but not exactly low-hanging fruit either, that has yet to be completely reversed. The serial number and certificate parts don't even matter; if people had reversed their way through it, there would at least be an option to extract the required parameters from your hardware and plug them into a standalone server (in fact, IIRC there was a valid serial number generator some time ago that was used to deploy OS X in KVM?).
So, the idea is that even though it's not an impenetrable fortress, there are still plenty of dark places to introduce surreptitious changes.
As for the human-in-the-loop part, I don't know why you got so snarky. What I was saying is that this is the layer that should be scrutinized the most, all of those components, because even without the technology, those are the people who will put someone in jail with no verifiable evidence.
So your argument is "iOS is complex, they could have just hidden it in there, but they didn't; they told us about it." I'm still not sure why this matters. From the standpoint of dealing with a government, Apple could previously say "we cannot do that and maintain the security of the OS." Now, post-announcement, they have to say "we will not do that."
That is a huge difference.
I got snarky because the human-in-the-loop decision making is the least concerning part of the process, and the alternatives are as ridiculous as I laid out. There will always be a human in the loop in this process; I'd rather it start with Apple's human, then law enforcement, then a prosecutor, then a judge, etc.
You really should go read Apple's papers, FAQ, etc. on the feature. Not saying that has happened here, but there are a lot of knee-jerk, uninformed opinions and misinformation floating around. Also take a look at PhotoDNA, an older hash system already in use by other providers.
In your example about planting CSAM, why would on- vs. off-device matter, since the new feature only checks items going to iCloud anyway? The planting-CSAM attack vector is available right now for any device connected to FB, OneDrive, or Gmail, and I don't think planting material has been an issue.
Well, in the planting scenario I didn't mention the attacker uploading it to iCloud directly because that's exponentially harder nowadays.
If you're hit by an NSO client and they have an agent running on your phone checking in with their C2, which do you think would be easier:
1 - Run a reverse proxy on your phone, steal your credentials (or session data), and use that connection to upload the material, or
2 - Write it to disk and wait for the media scanner service to pick it up and act on it?
I mean, in the end it's not about the technology but the people operating it. If Apple is really incompetent and law enforcement is as shitty as usual, then yeah, people might end up behind bars for no reason, which sucks, but in that case I think the focus shouldn't be the technology itself but how shitty and unfair the system is.
There is also the other thing with iMessage scanning. It seems ripe for abuse: for example, a husband forcing it on every family member, including the spouse (and forcing a fake DOB).
Using Apple devices used to be all about how they serve the user and bring joy. Knowing they now spend even a single CPU instruction on trying to frame the user turns the device from something I loved into something I fear.
All the talk about human review and multiple failsafes does not smooth things over. App Store review is a prime example of how seriously flawed their review processes can be: scam apps and subscriptions sometimes even get FEATURED in editorials on the App Store.
It does not matter that you will be found innocent in the end. Just being put under investigation for CSAM can make anyone's life a living hell. Getting your Apple ID blocked, even temporarily, can cause severe problems.
When a company advertises that "what happens on your phone stays on your phone", and then proceeds to build snitchware into the phone that reports on received iMessages to the family "head of household" and reports, and UPLOADS for human review, photo roll items that were never intended to be shared, well... that company does not appear to be honest anymore.
> There is also the other thing with iMessage scanning. It seems ripe for abuse: for example, a husband forcing it on every family member, including the spouse (and forcing a fake DOB).
In addition, this only works for under-18 accounts. If the abusive figure goes as far as making other family members recreate Apple IDs and lie about their age every 5 years to keep getting access to iMessage (and other parental controls like Screen Time), then there's not much Apple can do.
In that iMessage scenario, the family member has to explicitly approve sharing anything with the adult on the iCloud account. Nothing happens automatically.
No I didn't. Even if someone underage gets nudity messaged to them with this feature enabled, they have to explicitly opt into sending it to their parent. Otherwise, nobody sees it.
But it ultimately does happen server side. The client does the hash creation, but the server runs it through an elliptic curve check to see if there's a real match, and if so, it performs an additional hash on the server side.
One question I have on the CSAM scanning: does it only detect known pics that law enforcement already has? If so, is that the major issue with child porn, i.e. the same pics getting passed around? It just seems like this won't prevent abuse from occurring, with abusers' own new pics and videos.
FB, Google, MS, etc. already use a similar hash-based system called PhotoDNA on photos in their clouds. They reported 20M+ instances last year, so yeah, it seems like the same pics do get passed around.
I'm not defending the FBI or the Apple feature here, but in a lot of cases where abusers get caught, it's because they were sharing pictures of their victims in groups where they also exchange other pictures with other people who are into that. Sometimes those other pictures are in these databases, so the database matching generates leads.
The last case I heard about was a cleaner at a body expression workshop for kids with learning disabilities who was sharing new pictures he took of the girls in a Telegram group. The group was infiltrated by an FBI agent who allegedly got the link from a Facebook group, which was found because Facebook scans for known hashes and reports matches.
I can't remember the actual title of the article, but I clearly remember an instance (I think it was a few years ago) where a CP ring operated (at least partly) through a WhatsApp group and got busted when they accidentally added someone with the wrong phone number. So yeah, I think there's a lot of "low hanging fruit" that could lead to putting some of these assholes behind bars.
I don't think it would prevent "new" abuse, but just like you might have some material (books, music, whatever), these people have their stuff, and it's not like every single one of them is a producer. They might be part of communities, and catching some of them might lead law enforcement to bigger fish, hopefully producers. Or at least that's how I think the people behind this are thinking.
> I know it sounds very 1984-ish, but honestly I don't think it's any worse than the kind of surveillance power Google has with all their platforms combined (Chrome, Android, Google Search, any web thing they haven't killed already).
This should give you pause.
Why do you think something that sounds 1984ish should be acceptable to anyone? Why is it acceptable to you?
The fact that other companies also have advanced surveillance power should be reason to push back on that as well, not to cede more ground to surveillance.
I guess your logic doesn't make any kind of sense to me.