


CP is a pretext to grab power, the same way terrorism was 20 years ago. If a government actually cared, it would start by dismantling the Catholic Church. At the risk of a slippery-slope fallacy, I see no way governments won't expand the scope of this intrusion. Before you know it, being critical of a certain foreign government [0] or teachers criticising the Department for Education [1] will be restricted.

0 - you know what conflict I mean. Will we have to resort to coded messages wherever we go?

1 - https://www.theguardian.com/politics/2023/oct/21/uk-governme...


The Australian government enacted the same type of "protect the children" laws, and then immediately used them to surveil journalists critical of its policies.


Do you have more details on this? A link maybe?


Didn't the UK's web filter contain political websites too?


Why should everyone have to suffer so that the state's job of catching criminals is made dead easy? Such criminals are a microscopic minority of the population. Governments, especially in the West, have disinvested in traditional investigation and moved to mass surveillance as their default operating strategy. And citizens are being made to pay the price.


We keep having "anti child exploitation" measures, like all the bloody time. It's been decades by now.

And the problem keeps getting worse.

So either it's just not working whatsoever, in which case it's useless and should stop. Or it was never really about child exploitation.

Just like "temporary measures" against terrorism which have been temporary for 30+ years now (no, it didn't start on 9/11).

Almost like it was never about terrorism.


> And the problem keeps getting worse.

You're swallowing the propaganda. The problem hasn't changed to speak of.


The bad stuff will just move somewhere else as it always has done.

Compromising everyone's privacy will eventually mostly affect innocent people. Or even cause the platforms to cease existing altogether, which looks like a real possibility with Signal. Pedos will just move on to whatever service isn't compromised yet. You can outlaw or hamper secure encryption in some jurisdictions, but due to the generic nature of computers you can't in principle stop people from using secure encryption.
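To make the "generic nature of computers" point concrete: strong encryption is just arithmetic that anyone can reimplement. A minimal sketch in Python (the one-time pad; provably unbreakable provided the key is truly random, as long as the message, and never reused):

    import secrets

    def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
        # XOR each plaintext byte with a fresh random key byte.
        key = secrets.token_bytes(len(plaintext))
        return key, bytes(p ^ k for p, k in zip(plaintext, key))

    def otp_decrypt(key: bytes, ciphertext: bytes) -> bytes:
        # XOR is its own inverse, so decryption is the same operation.
        return bytes(k ^ c for k, c in zip(key, ciphertext))

    key, ct = otp_encrypt(b"meet at noon")
    assert otp_decrypt(key, ct) == b"meet at noon"

No law can stop a dozen lines like these from being retyped from memory.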


> but due to the generic nature of computers

This might only be a temporary state of affairs, though. Smartphones are, in that sense, not generic. PCs might follow.


Smartphones are general-purpose computers with a bunch of little digital locks that, while strong, are not impervious. Such locks, when used to protect a device owner, are good. The same locks, when used to deny a device owner the full right to use their device as they see fit (absent harm done to others), are evil.


mtl is saying computers are big boxes of math and you can't ban math.

Though given how incredibly clueless politicians have become, I wouldn't be shocked if they tried.


They could, however, require a license for a compiler/interpreter, and then require binaries to be signed by said compiler. As you said, they seem clueless, so I bet they will try.


Neither WhatsApp nor Signal can do anything about it, since they don't know the content of the messages. That is the whole point of their protocol. That is the whole point of privacy.

Nobody falls for that crap. We all know CP is being presented as a scapegoat here, because "how can you be against something that MIGHT help against CP!?", while in the end it'll be used to spy on everything.


Nonsense. WhatsApp owns both endpoints. They could know perfectly well what you write, when you write it, and to whom, plus anything else their heart desires by way of their analytics. The messages themselves contain no business value to them. They could send them by carrier pigeon for all they care, as long as the client is their product.


No. A user owns each endpoint. WhatsApp provides a service to the owners of the endpoints.

Yes, WhatsApp is in a position to act unethically and steal information, but that does not make WhatsApp the owner of anything.


WhatsApp is not something you can compile or inspect easily. They own the endpoint, in that specific sense. They may not have root access on the device, but inside the client nothing is out of scope.

It is their client. Any data you enter into the client is data they 0wn.


You can definitely decompile WhatsApp on Android to inspect it. I'm sure security researchers do this regularly, including those looking for a bug bounty that could be life-changing.


You can always inspect what you are running right now. Again, they literally own the software. They could augment it at runtime, or do whatever else they desire. You have to trust that they don't copy your data. How the transport protocol works is completely beside the point.

Security researchers analyze software to find third-party attack vectors. They do not analyze first- or second-party attack vectors, because that would be silly. There are simply too many of them.


They don't have to steal information in order to block inappropriate content. The app itself can detect and block it without external intervention.


Presumably they would use edge-based hash scans or AI models to detect unsavory content. But if the content is so extreme as to be unsavory, they will likely be legally required to report it to LEO (law enforcement).

The next steps are LEO seizing your device(s), or LEO having WhatsApp start sending all your messages to them for review.

What happens when LEO adds the hash of a state-loathed meme?
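The simplest form of such an edge scan is just membership in a hash list. A minimal sketch (illustrative only; deployed systems such as PhotoDNA or Apple's NeuralHash use perceptual hashes that also match resized or re-encoded copies, not exact cryptographic hashes):

    import hashlib

    # Whoever distributes this list decides what gets flagged.
    # This entry is the SHA-256 of the bytes b"test", for demonstration.
    WATCHLIST = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def flag_on_device(blob: bytes) -> bool:
        # Exact-match scan: hash the content and test membership.
        return hashlib.sha256(blob).hexdigest() in WATCHLIST

    assert flag_on_device(b"test")          # known item: flagged
    assert not flag_on_device(b"holiday")   # anything else: passes

Note that nothing in the mechanism knows or cares what the hashes denote; memes and abuse imagery look identical to it.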


They "could" maybe. But since you seem to not have more information, we have to remain on the assumption that they're still using Signal protocol and can't see what the messange contents are.


They could use the Signal protocol AND see the contents.


Signal could still see the contents of your messages. Anything you enter into their app could be scanned or sent back in plaintext to some server, all prior to actual transmission via their protocol.

The only way to ensure that can't happen is to inspect the code and compile it yourself, or at least validate the hash of the binary you're installing. But we've also recently learned with the xz fiasco that you'll need to be sure to add checks all the way down.

Of course, you could always encrypt before entering the text into Signal, but at that point, why use Signal?
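As an illustration of that last option, here is a minimal pre-encryption sketch using the third-party cryptography package (key distribution, the hard part, is waved away here; the key would have to be shared out of band):

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # must be shared with your contact out of band
    box = Fernet(key)

    token = box.encrypt(b"meet at noon")  # paste this ciphertext into the chat app
    assert box.decrypt(token) == b"meet at noon"

The messenger then only ever carries opaque ciphertext, at which point, as noted above, there is little reason to use Signal at all.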


Why do you take their motives at face value?

Obviously, the moment these platforms lose privacy, the criminals cease communication on them immediately. So they're the last group this is aimed at.

The solution to crime is the same investigation and detective work and anonymous tip offs and so on that it's always been. People going undercover and infiltrating these groups and then bringing them down.

By chasing the criminals off these platforms, all that happens is that the detective work gets harder. Now they've got to find where to start their infiltration all over again.

This outcome is so obvious that the only conclusions available are either that the lawmakers are IQ-60 morons, or that they have malicious intent.


I read somewhere that most crime is committed by people with low IQ, so it actually makes the work easier for the police.

Well, billions of us moved to these chat apps. The state just follows.

As sceptical as I am, I think I do want the state to solve online crimes.


> Obviously, the moment these platforms lose privacy, the criminals cease communication on them immediately

You overestimate how smart they are, and you forget that a lot of crime is opportunistic: if an abuser has to convince a child to install a shady app, they will have a much lower success rate and set off many alarms along the way.


> what else are governments going to do

One approach that would shut down a lot of exploitation while supporting the privacy of adults would be video and online surveillance of all children whenever they are not alone, using a parent-controlled computer to detect bad things happening. This could start in kindergarten and school and gradually expand to all spaces that are not home. Children have some right to privacy, but not as strong as adults'.

At sixteen, if the child so wishes, surveillance gets turned off and he or she is granted more privacy. As with age limits on driving or working, at some point the state says: you are old enough to take responsibility; we won't protect you from harsh life anymore.

This is a targeted, reasonable solution with little collateral damage that upholds the right to privacy for adults. It's what parents would want, instead of the bureaucrats. And who really, actually cares about the safety of children, parents or bureaucrats?


I hope you're not a parent.


> support privacy of adults

> video and online surveillance of all children when not alone

So you want children to have no privacy just to get a tiny bit more privacy for adults? Are adults really this horrible towards children? Do you really think you would have liked this as a child?

> Children have some right to privacy, but not as strong as adults.

Why the hell not? Do you really think it is OK for your daughter to be under constant video surveillance all through puberty? Do you really think that is a lesser evil than your text messages being scanned for some keywords? Would you have been happy with a camera constantly watching you as you jerked off as a kid?


> Are adults really this horrible towards children, do you really think you would like this as a child?

Only parents would have access to the surveillance records. Children often do not like the things their parents make them do, or the power parents hold over them; this would be one more such thing, with great benefits.

> Why the hell not?

Because they are children: they do not bear full responsibility for their actions, they are more vulnerable to abuse, and protecting their safety is more important than protecting their privacy. I want to keep the status quo, where children are protected and adults have rights. The way things are going, we're all becoming more like children, with one parent called Big Brother.

> Would you be happy if there was a camera constantly watching you as you jerked off as a kid?

That is not what I'm suggesting. I'm talking about public spaces (including online ones) where adults are present. If a kid wants to jerk off, or two or more kids want to make love, they can go home or use some private space like a bathroom.


Being under constant surveillance during the most formative years of your life can leave you extremely mentally unwell.


> The way stuff is going, we're all getting more like children with one parent called Big Brother.

An easier solution: remove and replace the politicians.


> Only parents would have access to surveillance records

You do realize parents create a lot of the CP out there? Especially if you include non-biological parents, who are legally parents all the same.

> That is not what I'm suggesting. I'm talking about public spaces (including online) where adults are present.

So you wouldn't stop basically anything? What kind of child porn do you expect to catch with this? It isn't as if anyone creates child porn in public; they do it in the privacy of homes.

I assumed you would suggest something that could stop child porn, not just cameras in public places.


Your argument makes as much sense as banning knives because they are sometimes misused to attack people. What about alcohol? Some people drink and drive, so we should ban alcohol too!


Banning the sale of alcohol in gas stations would not be unreasonable.


It would


Lots of them will probably be selling alcohol years after they are no longer selling gasoline.


If knives could be used remotely to monetize child abuse at scale, then yes, you bet they would be controlled. Your analogy completely misses the point. (And in many countries I've been to, the sale of tobacco, and perhaps alcohol too, is banned near schools.)


Just put the entire population in jail, I'm sure you'll be able to prevent a lot of crime that way.


This argument is more thought-provoking than people may think.

What we see is a shift of power. Electronic communications started out as private enterprises, were then mostly taken over by states because of the need for centralization, and have now been almost completely taken over by private enterprises one layer above. Governments are still trying to make sense of what happened and to find their role in this new world.

Platforms are centralization at work, and it's not that far-fetched to think that states could do a better job than Twitter or Facebook. Platforms have immense power. After all, we mostly agree that Facebook quite literally facilitating genocide was not good for society. What we disagree on is how much they knew and how much was circumstantial.

There is also the idea that jurisdictions matter for platforms. The Chinese connections of TikTok's owners are problematic, since we know for a fact that they have the power to influence elections. The American ownership of Facebook is not similarly problematic, largely because the CIA's and other institutions' interests mostly align with ours.

It would not surprise me if the Saudi money financing Twitter/X turned out to be just as consequential as the financing of 9/11.

In light of that, it should not be surprising that EU states want to play the game too, even if it will have very little practical effect.


States can already play the game. But this isn't playing the game via competition. This is just stealing data via lawfare.


You're right. But apparently every time this topic comes up, we cry that this is an abuse of our freedom.

Tech is neither good nor bad, but it does have unintended consequences and opens new avenues for abuse. This is tough to swallow for those of us who work in tech, but it is obvious to everyone else. If we stay in denial and do not volunteer to help use tech smartly to compensate for the bad side effects, they will vote in some dumb law and some bad guys will exploit it for surveillance later.


There is in fact no "smart" way to compensate for this particular "bad side effect".

Either your communications are spied on to weed out unapproved material, or they're not. And there is no way to make the system architecture care about which material is allowed to be "unapproved".

The right answer here is just to accept that, beyond a certain point, further reducing the amount of circulating child porn requires unacceptable tradeoffs. Then stop whining and wishing for impossible technical solutions.


If you put "bad" in quotes, does that mean you don't think child abuse is actually bad? Cool.

> The right answer here is just to accept that, beyond a certain point, further reducing the amount of circulating child porn

Not porn, don't delude yourself. Actual abuse.

> requires unacceptable tradeoffs

And whether the tradeoffs are acceptable depends on what they are. If you refuse to help find solutions that don't involve spying, don't get all flustered when a law is adopted that does involve spying!

> Either your communications are spied on to weed out unapproved material, or they're not.

Exactly. Apple's solution did not involve any spying. People spoke out against it; now we have solutions that do involve spying. Let's see where this goes, yeah?


> Apple's solution did not involve any spying.

Spying by your own device is still spying.


Try to understand how the algorithm works. If you do, please share the mental gymnastics routine you use to make it look like spying. It is not spying in any reasonable sense, and definitely not in the sense of the proposal by EU lawmakers.


It would attempt to identify child porn by scanning images (using both hashes and ML, if I remember right, but it doesn't matter). This takes place on the local device, but under the control of Apple, not the user.

Upon detecting a suspect image, it takes action. The action isn't really part of the "algorithm". Depending on the score, Apple's whim of the day, and/or outside compulsion (which would probably be applied to Apple), it refuses to allow the image to be sent in messages or to "the cloud", deletes the image, and/or reports the image to Apple (which would presumably report it onward). They can change what it does at any time.

In other words, it examines data on your device, on behalf of others, in a way that you don't control, and uses the resulting information to your potential detriment, again on behalf of others and again in a way that you don't control.

Apple examining your data in a way you don't control, to implement policy that you don't control and may oppose, is spying. It doesn't matter whether they do the spying using your CPU or their CPU.
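For reference, a heavily simplified sketch of the threshold gating in Apple's published design (the real system used private set intersection and threshold secret sharing, so that match vouchers only become decryptable past the threshold; the plain counter below is an illustration, and 30 is Apple's stated initial figure):

    THRESHOLD = 30  # matches required before anything is revealed to Apple

    class VoucherCounter:
        """Illustrative stand-in for threshold secret sharing."""

        def __init__(self) -> None:
            self.matches = 0

        def record_match(self) -> bool:
            # In the real design each match uploads an encrypted voucher;
            # only past the threshold can the server reconstruct the key
            # and review the flagged images.
            self.matches += 1
            return self.matches >= THRESHOLD

Which is the concern raised above: the threshold, the action taken, and the underlying hash list can all be changed later, out of the user's control.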


If "to your potential detriment" is your measure then tell me if you think police checking your documents is not "to your potential detriment", or speed/traffic camera is not "to your potential detriment", or waiting in line to board the bus, etc.

The only scenario where you don't need to accept things that are to your potential (and very real) detriment is if you don't live in society.

And no, what you described is not spying. Look up what it means: https://en.wikipedia.org/wiki/Espionage.

You are doing mental gymnastics. It's like calling a speeding fine "robbery"; at best it's noise, not a useful contribution to the discussion.


> If "to your potential detriment" is your measure then tell me if you think police checking your documents is not "to your potential detriment",

Where I live, police can't just randomly "check your documents" unless they already have some independent evidence supporting the idea that you might be involved in a crime. Which is how it should be.

It wouldn't be spying, though, since they'd be doing it openly and presumably not doing it on a continuing basis. It'd still be obscene authoritarian overreach.

> or speed/traffic camera is not "to your potential detriment",

These should of course be banned.

> or waiting in line to board the bus, etc.

This has nothing at all to do with anything and is just you trying to muddy the waters.


> Where I live, police can't just randomly "check your documents" unless they already have some independent evidence supporting the idea that you might be involved in a crime.

Yeah, and what if you happen to match the description of a criminal? But it was an example, and if you don't get the point, it's a waste of time.

> These should of course be banned.

Are you protesting them?

What if your kid gets run over by one of those speeders while walking out of the school gates? Will you still think speed cameras are unnecessary?

> muddy the waters

These are all examples of having to suffer detriment due to living in a civilized society. You can pretend you don't, but you do.

Why does this issue get techbros all up in arms? I don't see them out protesting airport security checks, etc. Somehow, when their own lives are on the line, privacy-invasive checks are acceptable.


There is, in fact. You could build AI directly into the app to detect inappropriate content and block it.

The app has thus not violated end-to-end encryption, nor has that content been exposed to external parties.

Platforms could definitely do more.
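A minimal sketch of what such purely local blocking could look like (all names are hypothetical; score_image is a stub standing in for an on-device classifier):

    BLOCK_THRESHOLD = 0.99

    def score_image(blob: bytes) -> float:
        # Stub: a real on-device model would return P(prohibited).
        return 0.0

    def send_image(blob: bytes, transmit) -> str:
        if score_image(blob) >= BLOCK_THRESHOLD:
            return "blocked"  # never leaves the device; nothing is reported
        transmit(blob)        # hand off to the E2E-encrypted transport
        return "sent"

    assert send_image(b"holiday photo", lambda _: None) == "sent"

The design choice that matters is the absence of any reporting path: detection and blocking happen entirely on the device.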


And how do you deal with all the false positives? Will that just be deemed collateral damage?


To add to your excellent point: who gets to validate the model's efficacy? How do we know the state hasn't trained it to report users talking about MAGA, or Israel, or those whose partners are Chinese nationals?


How do you deal with all those UUIDs colliding?

You can design a system where false positives are so rare that they are insignificant and can be properly handled. The only reason we don't is that we don't think it's that much of a problem.


No, you can't. Machine learning doesn't work that way.

You can keep the false positive level reasonably low for copies of already known and flagged images. You can't do it for new images.


Maybe you underestimate machine learning; if you check HN on any given day, it can do anything, probably even become the next president.

But flagging known images was the point of Apple's algo, for example. Still, everyone just went "forget abuse, my privacy is more important to me". Really, at this point techbros deserve any dumb law that lets the government read their chats.


> Maybe you underestimate machine learning, if you check HN on any given day [...]

Maybe I actually know something about it.

> But flagged known images was the point of Apple's algo, for example.

You whined at me a little while ago about how all this was about "abuse, not just porn". Yet you're using that to justify a system that, as you describe it, could only find old, known images that have been circulating around the Internet and made it into a database. Meaning images of past abuse that cannot now be prevented, made by third parties who would not be caught by this.

Pick a threat model, because the measures you defend don't address the threats you claim justify them.

... and if you start talking about "grooming" or "normalization" or other silly bullshit that hypothetically might have a third-order effect, but probably doesn't have any meaningful effect at all in real life, I'm not going to bother to answer it.

> Still everyone just went "forget abuse, my privacy is more important for me".

Everybody's privacy is important to me. Including the privacy of the children whom you want to have grow up in an ever-expanding panopticon. Because this isn't just about stupid bullshit like your embarrassing disease. It's about people ending up in prison. It's about building infrastructure that can trivially and secretly be repurposed to hurt people, including children, in serious, life-changing, and potentially life-ending ways.

The Stasi were not a child-friendly institution.


> Maybe I actually know something about it.

If you knew, then you'd agree that with the right setup, ML can do this with very high precision. We're talking about a highly customized system trained for exactly this one purpose, not some chatbot.

> You whined at me a little while ago

You're the one whining here, buddy. Remember, this is about a law about to be forced on you that you find inconvenient ;) I find it suboptimal, but in some sense it might be better than nothing.

> this was about "abuse, not just porn".

These are related. If you have this material, you obtained it from somewhere, even if you didn't make it yourself. With some police work, that may lead to a dark-web marketplace and the actual producers.

That said, yes, there's a difference. The EU law being discussed is probably better suited to countering real-time abuse than Apple's algo was, for example.

> Because this isn't just about stupid bullshit like your embarrassing disease. It's about people ending up in prison.

I actually agree with these two sentences, but not in the way you probably intended.

> The Stasi were not a child-friendly institution.

I was waiting for Hitler to be invoked in a discussion about using tech to combat and prevent child abuse facilitated by tech; I was not disappointed.


> If you know then you'd agree that with the right setup ML can do this with a very high precision.

No, it cannot.

Not with a model that you can run on a phone, no matter how specialized it is. Serious ML takes actual compute power (which translates to actual electricity).

Not with a model that you can train on the number of positive examples that are actually available. Current ML is massively hungry for training data.

Not with any model that's out there on people's phones and therefore subject to white-box attack. Adversarial examples are not a solved problem, especially not in the white-box environment.

Probably not with any model. You would need maybe a 0.0000001 false positive rate. That rate falls asymptotically with both model size and training.
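The base-rate arithmetic behind that last point, with volumes assumed purely for illustration:

    daily_items = 100e9  # assume ~1e11 images/messages scanned per day
    fpr = 1e-6           # an already optimistic per-item false positive rate
    print(int(daily_items * fpr))  # 100000 innocent items flagged, every day

Even at the 1e-7 rate mentioned above, that is still 10,000 false flags a day needing review.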

> that you find inconvenient ;)

The last refuge of the fanatic is to call anybody who raises inconvenient objections a pedophile.

> I was waiting until Hitler gets invoked in a discussion about using tech to combat and prevent child abuse facilitated by tech, I was not disappointed.

The Stasi did not have anything to do with Hitler, and did not exist at all until after Hitler was dead. They were not part of the Nazi apparatus. Your ignorance of history helps to explain your willingness to give dangerous powers to untrustworthy institutions, though.


You would not need a perfect model, since there would have to be a human and due process in the loop.

> The last refuge of the fanatic

Between the two of us, only one has maximalist and absolutist views.

> call anybody who raises inconvenient objections

The actual objections have been dealt with. Properly implemented (like Apple's algo), it's not spying; inconvenience and detriment to the individual are a fact of life in any society; and so on. Now we're just trading personal attacks.

> a pedophile

Putting words in my mouth.

> The Stasi did not have anything to do with Hitler, and did not exist at all until after Hitler was dead

Thanks for correcting me. So basically Stalin then. Wow, such difference.


> Putting words in my mouth.

You know exactly what you intended to convey with that little smiley, so do I, and so does everybody else.


I don't know where to start. It's not just that you assume I'm implying you're a pedophile; you also seem to think that would be a derogatory label or something? Pedophilia is not a crime (just as being gay isn't); sexual abuse is.

You're reading too much into this. The smiley is there because you said I was whining, and I did the same to you.


Government recording all of our conversations also has "unintended" consequences.

The track record of such societies is rather terrifying.


You think I disagree? My point is almost exactly the same: we should stop that (by promoting a saner solution).


So, what's an example saner solution? Or is this one of those "just" sentences?


Is it as "just" as "just ignore the abuse and keep making tools used for that purpose"?


Of all the things potentially worthy of "just", the status quo sounds most appropriate.

But I think your inability to state what a saner solution would be is enough for this conversation.


Knowing that you value a minuscule amount of privacy over another person's life and suffering was already plenty.




