
Something the whole Apple kerfuffle has revealed to me is how many services were already scanning for CSAM in the cloud and reporting to authorities, e.g. Google, Facebook, and Microsoft. I consider myself tech-literate and had not known about this.

Are there other types of material or kinds of activity that cloud services might already be scanning for, but might not have much public awareness?
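
For what it's worth, my rough understanding of how the scanning itself works: uploads are hashed and matched against a database of hashes of already-known material (Microsoft's PhotoDNA is the usual example), rather than anything trying to "understand" the image. A toy sketch of that shape in Python, with the big caveat that real systems use perceptual hashes that survive resizing and re-encoding, not the exact SHA-256 matching shown here, and that the hash list and names below are made up for illustration:

    # Toy sketch only: exact-hash matching against a known-hash list.
    # Real deployments use perceptual hashes (e.g. PhotoDNA) so that
    # re-encoded or resized copies still match.
    import hashlib
    from pathlib import Path

    # Hypothetical stand-in for a hash list supplied by a clearinghouse.
    KNOWN_HASHES = {
        "0000000000000000000000000000000000000000000000000000000000000000",
    }

    def sha256_of(path: Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    def upload_matches_known_material(path: Path) -> bool:
        """True if the file's hash is in the known list (would trigger review/reporting)."""
        return sha256_of(path) in KNOWN_HASHES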



I guess you could consider PayPal/Venmo, which scan for words: https://slate.com/technology/2020/02/paypal-venmo-iran-syria...

Another is that Google Photos and Facebook both do classification of objects and text within photos - e.g. a Kohl's ad in my timeline on Facebook has image alt text of "May be an image of 1 person, standing, footwear and outdoors". I'm sure their detections also look for TOS-breaking content or other pictures showing illegal activities.
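
To give a concrete sense of what that classification step looks like, here is a minimal sketch using an off-the-shelf pretrained classifier (assuming torchvision is installed; this is obviously not Facebook's actual pipeline, just the general shape of turning an image into labels like the alt text above, and the file name is made up):

    # Minimal sketch: label an image with a pretrained ImageNet classifier.
    import torch
    from PIL import Image
    from torchvision import models
    from torchvision.models import ResNet50_Weights

    weights = ResNet50_Weights.DEFAULT
    model = models.resnet50(weights=weights).eval()
    preprocess = weights.transforms()

    def label_image(path: str, top_k: int = 5) -> list[str]:
        img = Image.open(path).convert("RGB")
        batch = preprocess(img).unsqueeze(0)
        with torch.no_grad():
            probs = torch.softmax(model(batch)[0], dim=0)
        top = torch.topk(probs, top_k).indices
        return [weights.meta["categories"][i] for i in top]

    # Hypothetical input; prints the top-5 predicted labels.
    print(label_image("kohls_ad.jpg"))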


What the heck is "little women bootleg dvd" doing on there?

CP?


A film adaptation of Little Women by Louisa May Alcott was released a few years ago.


It's a porno about Pygmy women:

https://www.youtube.com/watch?v=6DMrQPc7wfo


05052070978 ateşli kızlar arayabilir (Turkish: "05052070978 hot girls may call")


Not sure why the author thought that would be flagged - it's in the "not flagged" list of phrases. I can't find any other reference to it either.


I think everyone who allows users to upload images and is medium-sized or larger has to: you really don't want to be hosting CSAM, for legal and moral reasons.



