I mean, it shouldn't. Humanity is perfectly capable of building secure web services without having to keep the way they work a secret. You don't publish your encryption keys with your source code, and the keys, not the code, are what your security should actually depend on.
And what's more, Reddit themselves did not even use that excuse in their official statement for it, even though to me their excuse felt even less logical.
Basically, they don't want to leak the crazy features that they're developing and have such piss-poor source code management that they cannot provide tarballs of clean states of their source code.
I mean, how do they deploy new versions, if they cannot cleanly separate feature development from stable code?
When I worked there, it mostly came down to two things that leaked into every product: anti-abuse (spam, mostly), and ads/tracking.
Anti-abuse is useful to keep secret because it includes tools that make spammers think they're successful. I think it's a little more nuanced than the "open source code is more secure" argument, which I totally agree with: anti-abuse includes active mitigation measures that constantly evolve, and in this case, obscurity about how it all works is genuinely valuable.
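To make the "make spammers think they're successful" idea concrete, here's a minimal sketch of a shadow-ban style measure. All names here are my own invention for illustration, not anything from Reddit's actual codebase: flagged accounts get the same success response as everyone else, but their posts are only visible to themselves.

```python
# Hypothetical shadow-ban sketch: accept the spammer's post normally,
# but hide it from everyone except the author.

def submit_post(author, text, store):
    # Always accept and return an identical response, so a shadow-banned
    # spammer sees no difference from a normal user.
    store.append((author, text))
    return {"status": "ok"}

def visible_posts(viewer, store, shadow_banned):
    # A viewer always sees their own posts; posts by shadow-banned
    # authors are hidden from everyone else.
    return [text for (author, text) in store
            if author == viewer or author not in shadow_banned]

store = []
shadow_banned = {"spammer"}

submit_post("spammer", "buy cheap pills", store)
submit_post("alice", "nice cat photo", store)

# The spammer sees their own post as if nothing happened;
# other users never see it.
assert visible_posts("spammer", store, shadow_banned) == ["buy cheap pills", "nice cat photo"]
assert visible_posts("bob", store, shadow_banned) == ["nice cat photo"]
```

The whole point of the technique is that the spammer's feedback loop is broken silently, which only works while the mechanism stays secret.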
A reddit-specific ad or metrics implementation isn't useful to anyone else, and it was a tough sell that the codebase should be made more complex only to accommodate configuration for a small handful of users. I know, because I made the argument that we should when I created the open-source reddit-mobile repo, which was originally broken up into plugins like the reddit python codebase. Eventually it just wasn't feasible to maintain as a 2-5 engineer team rebuilding a 10-year-old website in a couple of months. That's a story for another time.
Personally - I find it sad, and I think it got rid of one more thing that made Reddit special. Unfortunately, the only metric you can attach to this is "how much longer does it take us to ship shit", and thus, it died.
The origin of that statement (Kerckhoffs's principle) refers to cryptography, not to application security.
If you take a quantitative, cost-based approach to modeling security through adversarial capability, obscurity becomes a perfectly valid security measure if it's not used in isolation. We don't use it for cryptography because the tradeoffs aren't worth it. It's better to design cryptography with provable security based on mathematically rigorous computational hardness assumptions than it is to make secret algorithms.
In the context of application security, if the decision to obscure some or all of your system imposes a non-trivial cost on an adversary, it makes sense. We can't rigorously and mathematically prove the security of applications the way we can prove, e.g., that the best known attack on a problem takes super-polynomial rather than polynomial time.
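The cost-based framing above can be sketched as a toy model. This is purely my own illustration (the function and layer names are made up): each defensive layer adds effort an attacker must spend, and obscurity is just one more layer on top of the real controls, never a replacement for them.

```python
# Toy attacker-cost model: total effort is the sum of the effort needed
# to defeat each independent defensive layer.

def attacker_cost(layers):
    # layers: mapping of layer name -> estimated effort units to defeat it
    return sum(layers.values())

# Baseline: the design is public, so the attacker only pays for
# exploit development against known internals.
baseline = {"exploit_dev": 100}

# Closed design: the attacker must first reverse-engineer the system
# (recon), *in addition to* developing the exploit.
with_obscurity = {"recon": 40, "exploit_dev": 100}

# Obscurity raises the total cost without weakening the other layers.
assert attacker_cost(with_obscurity) > attacker_cost(baseline)
```

The numbers are arbitrary; the point is the structure: obscurity is additive, so it's only a liability if you let it substitute for the underlying controls.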
You often see "security through obscurity" mentioned in the same way that people cite "appeal to authority" or "ad hominem" fallacies in internet debates. The reality is more complex than that. Fundamentally, anything that increases the effort required by an adversary to successfully compromise your system is worth considering. You just shouldn't depend on it in its entirety. Closed-source software is a good example of robust security through obscurity, as basically any security engineer will tell you (I'd rather look at source code line by line to find vulnerabilities than try to find them through trial and error in a penetration test).
No, having an open source kernel means a lot more developers looking at the code and working on a fix when a bug is found, raising the probability of finding bugs and shortening the time required to fix them. How would keeping the source closed decrease the number of bugs?
In theory it shouldn't, but in practice people don't write perfect, vulnerability-free code. A lot of people would argue that having many reviewers reduces vulnerabilities. So my stance is: it depends on the situation.
https://www.reddit.com/r/changelog/comments/6xfyfg/an_update...