
I doubt it's really the high-profile targets they're after, though. It sounds more like they want to make this the bread & butter of policing: an AI looking over our shoulders to see if we're up to anything bad, fed not just the content of our conversations but GPS locations and so on. Basically what Apple was proposing, except here they're already targeting a much wider range than just CSAM.

Because if it were only about the high-profile cases, the intelligence services already have broad powers for hacking, infiltration, etc.



I agree with you about the general direction, but you're hinting at preventive policing, which I don't believe is the case. I mean, just look at the US and the amount of surveillance since 9/11. I think the goal here is to monitor dissidents, or to help after the fact in investigations.


Not really preventative.

It's just that if you start collecting conversational data from everyone, you need AI to go through it. It's simply not possible to do it manually.
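To put rough numbers on the "not possible manually" point, here's a back-of-envelope sketch (every figure below is an illustrative assumption, not a sourced statistic):

  # Rough back-of-envelope sketch: all numbers are assumed for illustration.
  population = 80_000_000            # e.g. a country roughly the size of Germany
  messages_per_person_per_day = 100  # assumed average across chat apps, SMS, email

  messages_per_day = population * messages_per_person_per_day  # 8 billion

  reviewer_throughput = 5_000        # assumed messages one human analyst could skim per day
  reviewers_needed = messages_per_day / reviewer_throughput

  print(f"{messages_per_day:,} messages/day -> ~{reviewers_needed:,.0f} full-time reviewers")
  # ~1,600,000 reviewers just to skim everything once -- hence the push toward automated triage.

Even if those assumptions are off by an order of magnitude, the conclusion is the same: blanket collection only becomes useful once machines do the first pass.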

And once you get AI involved, scope creep is guaranteed because of the huge advancements being made in that area.

There's no way we can stop AI from being built, but we can still stop some of our data from going into it.




