
Open source is a red herring. You're downloading WhatsApp from an app store (or, at least, the overwhelming majority of users are). If you can't verify what the binary is doing, the source code doesn't make a difference.

Beyond that, despite the repeated claims of open source advocates, there's nothing preventing people from taking the app store versions of things like WhatsApp and reverse engineering them.



If builds were reproducible (i.e., the binary comes out bit-for-bit identical when recompiled with the same toolchain on a different machine), then all it would take is N independent builders verifying that the app store binary matches their locally built binary to greatly decrease the likelihood of tampering.
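A minimal sketch of that verification step, assuming builds really are reproducible so every honest builder produces byte-identical output. The `verify` function and the sample byte strings are hypothetical, purely for illustration:

```python
import hashlib


def sha256_hex(data: bytes) -> str:
    """Digest of a binary blob; reproducible builds make this comparable."""
    return hashlib.sha256(data).hexdigest()


def verify(store_binary: bytes, builder_binaries: list[bytes]) -> bool:
    """True only if every independent builder's binary hashes to the
    same value as the binary served by the app store."""
    store_hash = sha256_hex(store_binary)
    return all(sha256_hex(b) == store_hash for b in builder_binaries)
```

In practice the builders would publish their hashes and you'd compare signed attestations rather than raw binaries, but the core check is this hash equality.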

So, while being open source is not the complete answer, it certainly doesn't hurt.


How do you guarantee the App Store doesn't serve limited edition binaries to selected recipients?


An app store run by a central authority, with complete control over what can even be available and the ability to modify delivery at its own whim, is certainly a big issue for trusting the integrity of the apps running on a device.


If you suspect yourself to be a selected recipient (e.g. you're Edward Snowden) I reckon you should compile your own binaries. Or read 'Reflections on Trusting Trust'.


Now you are the selected recipient of modified source code.


Get it from multiple sources, and do a diff.
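The "diff multiple sources" idea can be reduced to a hash comparison across independently downloaded copies. A hypothetical sketch (the mirror names and payloads are made up):

```python
import hashlib


def digests_agree(copies: dict[str, bytes]) -> bool:
    """Return True if every downloaded copy hashes to the same value."""
    hashes = {hashlib.sha256(blob).hexdigest() for blob in copies.values()}
    return len(hashes) == 1


# Pretend these tarballs came from independent mirrors.
copies = {
    "mirror_a": b"source tarball bytes",
    "mirror_b": b"source tarball bytes",
}
```

If one mirror is serving you a "special" copy, its digest stands out; the attack then requires compromising every source you check.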


Fun fact: The App Store already serves limited edition binaries to everyone because it encrypts them per-account :)


Forget the app store. How do we know Google, Apple, Microsoft, Ubuntu, etc. don't give us a malicious kernel update?

I don't think we have good solutions for the problem of malicious updates in general.

The only one I can think of is a trusted hypervisor that hashes memory in the guest and reports on it. And even then, how do we trust that?
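The hypervisor idea amounts to a measurement scheme: hash the guest's memory page by page and compare against a known-good baseline. A toy sketch under that assumption; the page size and the `measure` function are illustrative, not any real hypervisor's API:

```python
import hashlib

PAGE_SIZE = 4096  # typical x86 page size; purely illustrative here


def measure(memory: bytes) -> str:
    """Hash each page, then hash the concatenated page digests,
    yielding one measurement for the whole region."""
    outer = hashlib.sha256()
    for off in range(0, len(memory), PAGE_SIZE):
        outer.update(hashlib.sha256(memory[off:off + PAGE_SIZE]).digest())
    return outer.hexdigest()
```

Any single flipped byte in the guest changes the measurement, but as the comment notes, this only shifts the trust problem onto the hypervisor doing the measuring.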


Forget the software: the firmware running on the baseband processor can read system memory and send it over the network without you knowing. But that takes a lot of effort to target at a specific person.

So what do you do? It comes back to making sure that 'they' can only hack some of the people all the time, and all of the people some of the time. It's preventing them hacking all the people all the time I worry about.


I don't think it hurts! All else being equal, I'd rather have source than not have it. What I don't accept is our supposed helplessness in detecting backdoors in secure messaging software.


Or we could have build systems that weren't even more convoluted and fragile than they were 50 years ago and just release software in the form it's supposed to be in.


Obviously the app maintainer could push an update that simply leaks your keys and stored messages. Doesn't matter if it's open or closed source.

However, open source does give users some real recourse in the event that the project moves in an undesirable direction. If I don't like what's happening, I can fork it without anyone's permission and still have access to the same development environment and build tools the original project had. I think that's important and valuable.


A major problem with your approach is that it assumes analyzing a binary for correctness or security is equivalent to analyzing well-documented, high-level source. It's not. It takes much more work to discover vulnerabilities, or even plain correctness failures, in assembly, because the binary lacks the context for how the software is supposed to operate.

I can read a commented Python or BASIC program with almost no effort unless it's very sloppy. I can tell you a MISRA C or SPARK program with associated static checks is immune to entire classes of errors without analyzing the source myself. I can tell you what information flows can or can't happen in a language like SIF implementing information-flow security. I can do all of this while expending almost no effort. So, I'm much more likely to do it than if I had to decompile and reverse-engineer a binary from scratch with analyses using the tiny information in a binary.

So, every time you say that, what you're really saying is: "Anyone could do this if they spent enormous time and energy. Sort of like they could hand-compile their C code on each iteration. They probably won't, but I'm countering your desire for source because in theory they could do all this with assembly with enough effort."

It's definitely not true for correctness, as assembly lacks what you need to know the program is correct. It's probably not true for security, as correctness is a prerequisite for it. In any case, economics matters: the effort required to achieve a thing determines whether anyone will likely spend that effort. Binary analysis apparently takes a lot more effort than source analysis, so it's a lot less likely to happen.



