> Can you point to a single instance where it was used for ulterior purposes?
That's the sick genius of parallel construction: without leaks, we'd never be able to know. (And yes, there are plenty of instances of abuse, brought to the public despite the risk whistleblowing entails.)
Ask yourself this: even if the "right" people are in control of this system now, could the "wrong" people ever gain control of it? If the answer is yes, maybe the system should not exist.
Then to be consistent, you cannot use closed-source, proprietary systems. That's the trade-off.
You _NEVER_ know what Apple is doing under the covers. This principle is well understood and has been espoused by RMS for over a decade.
But saying "well Apple could use this system for evil if they want..." is silly. Apple could ALWAYS use their system for evil if they wanted. This addition does nothing to improve that position.
> This system already exists on iCloud. It's been in use for 2 years.
Source? That is not true as far as I'm aware.
> Can you point to a single instance where it was used for ulterior purposes?
What do you expect them to do when they receive a national security letter? Shut down? Go to prison for decades? Of course not. They will comply, as they have always done when the technical means to do so were available. And why do you think we would even be told about it in situations where it can be kept secret?
>What do you expect them to do when they receive a national security letter?
They can't do _anything_ with an NSL on your data because they don't have the encryption keys. Without this feature, they control the key. With this feature, you control the key, and they only gain access when >N tokens cryptographically indicate hits on the CSAM database. Those tokens have to come from _your_ device, and they only grant access to the specific data in question.
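To make the ">N tokens" mechanism concrete, here is a minimal Shamir-style threshold secret-sharing sketch in Python. This is illustrative only: Apple's actual design (threshold secret sharing inside "safety vouchers") differs in detail, and the names and parameters below are my own, not Apple's API.

```python
# Shamir threshold secret sharing over a prime field (illustrative sketch).
# With fewer than `threshold` shares, the secret is information-theoretically
# hidden; with `threshold` or more, it is recoverable by interpolation.
import random

PRIME = 2**127 - 1  # field modulus (a Mersenne prime)

def make_shares(secret: int, threshold: int, total: int):
    """Split `secret` into `total` shares; any `threshold` of them recover it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, total + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = 123456789
shares = make_shares(key, threshold=5, total=10)
assert recover(shares[:5]) == key   # N shares: key recovered
assert recover(shares[5:]) == key   # any N shares work
```

Each "hit" on the on-device database would release one share; only once the server holds N of them can it reconstruct the decryption key for the matched content, and nothing else.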
Everyone understands that Apple can do anything with unencrypted content, but now Apple can also identify encrypted content, because they make the phone hash it before it's encrypted. Apple is essentially lying when they say this can only be used to identify child abuse material: it can clearly be used to identify whatever Apple tells it to identify. The system doesn't care whether it's used to target whistleblowers or child abusers.

Apple doesn't even need to know what content they are made to target. The government could just give them a hash and demand to know who has a file corresponding to it, with zero risk that an Apple employee will leak anything the government wants to suppress.
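The point about content-agnostic matching can be sketched in a few lines. Here sha256 stands in for a perceptual hash (Apple's system uses NeuralHash, which matches visually similar images, not exact bytes), and all names are illustrative, not Apple's API:

```python
# Sketch: the client hashes plaintext BEFORE encrypting it and compares the
# hash against an opaque, server-supplied blocklist. The client cannot tell
# what the blocklist targets -- CSAM, a leaked document, anything.
import hashlib

def content_hash(data: bytes) -> str:
    # Stand-in for a perceptual hash like NeuralHash.
    return hashlib.sha256(data).hexdigest()

# Opaque blocklist shipped by the server; just hashes, no labels.
blocklist = {content_hash(b"leaked-document.pdf")}

def scan_before_upload(plaintext: bytes) -> bool:
    """The match happens on the plaintext, before encryption protects it."""
    return content_hash(plaintext) in blocklist

assert scan_before_upload(b"leaked-document.pdf") is True
assert scan_before_upload(b"vacation-photo.jpg") is False
```

Nothing in the matching step encodes "this is child abuse material"; the system flags whatever hashes it is handed.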
There is no reason Apple can't give you E2E encryption without any backdoors or spying features. They should act on abuse when it's reported to them by someone who can give them the key, but otherwise not concern themselves with the content that's stored.