Something the whole Apple kerfuffle has revealed to me is how many services were already scanning for CSAM in the cloud and reporting it to authorities, e.g. Google, Facebook, and Microsoft. I consider myself tech-literate and had not known about this.
Are there other types of material or kinds of activity that cloud services might already be scanning for, but that aren't widely known about?
Another example is that Google Photos and Facebook both classify objects and text within photos; e.g. a Kohl's ad in my Facebook timeline has the image alt text "May be an image of 1 person, standing, footwear and outdoors". I'm sure their detection also looks for TOS-breaking content or other pictures showing illegal activity.
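To give a rough idea of how that kind of automated labeling works, here's a minimal sketch using torchvision's off-the-shelf ImageNet classifier. The real systems at Google and Facebook are proprietary and use far richer models than ImageNet categories; the file path below is just a placeholder.

```python
# Sketch of automated image labeling with an off-the-shelf classifier.
# Illustrative only: production systems use proprietary, much richer models.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights)
model.eval()
preprocess = weights.transforms()  # resize/crop/normalize as the model expects

img = Image.open("uploaded_photo.jpg").convert("RGB")  # placeholder path
batch = preprocess(img).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

# Report the top few labels, similar in spirit to the auto-generated alt text.
top = probs.topk(5)
for score, idx in zip(top.values, top.indices):
    print(f"{weights.meta['categories'][idx]}: {score.item():.2f}")
```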
I think everyone who allows users to upload images and is medium-sized or larger has to; you really don't want to be hosting CSAM, for legal and moral reasons.
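For the CSAM case specifically, the standard approach isn't a general classifier but perceptual hashing of uploads against a database of hashes of previously identified images (Microsoft's PhotoDNA is the best-known system, and it's proprietary). Here's a minimal sketch of the general technique using the open imagehash library, with a made-up blocklist and placeholder path:

```python
# Sketch of matching uploads against a set of known-bad perceptual hashes.
# PhotoDNA itself is proprietary; imagehash's pHash is used here purely to
# illustrate the technique. The blocklist entries below are made up.
import imagehash
from PIL import Image

# In reality this would be a large database of hashes from e.g. NCMEC.
KNOWN_BAD_HASHES = {
    imagehash.hex_to_hash("ffd8e0c0b0988c84"),  # placeholder entry
}

MATCH_THRESHOLD = 8  # max Hamming distance to count as a match (tunable)

def check_upload(path: str) -> bool:
    """Return True if the uploaded image matches a known-bad hash."""
    h = imagehash.phash(Image.open(path))
    return any(h - bad <= MATCH_THRESHOLD for bad in KNOWN_BAD_HASHES)

if check_upload("uploaded_photo.jpg"):  # placeholder path
    print("Flag for human review / reporting")
```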