
> If you build a building that can't stand up to the rain or the wind, you're not an innocent victim of the weather, you failed to design a building for the conditions you knew would be there.

This is why I liken it to defending against an army. Wanting to protect a building from rain is fine - rain is a constant that isn't adapting and "fighting back".

Find me a building that is able to keep its occupants safe from an invading army, and then we'll talk. It's impossible. That's what we built armies for.

> But there's a huge spectrum between writing a SQL injection vulnerability and a complicated kernel use-after-free that becomes a zero-click RCE with an NSO-style exploit chain, and I'm much more sympathetic to the latter kind of mistake than the former.

To be clear, I agree that there's a spectrum, and I wouldn't want to make it so that companies can get away with everything. But I'm not sure we have a good solution for "my company has 10k engineers, one of them set up a server five years ago and everyone forgot it exists, now it's exploitable". At least not in the general case, when you have that many employees.

> The fact is that most exploits aren't very sophisticated -- someone used string interpolation to build an SQL query, or didn't do any bounds checking at all in their C program, or didn't update 3rd-party software on an internal server for 5 years. And for as long as these kinds of mistakes don't have consequences, there's no incentive for a company to adopt the kind of structural and procedural changes that minimize these risks.
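To make the string-interpolation mistake from the quote concrete, here's a minimal sketch in Python using the standard sqlite3 module (the table and input are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

name = "x' OR '1'='1"  # attacker-controlled input

# Vulnerable: interpolating the input lets it rewrite the query itself.
vuln = conn.execute(
    f"SELECT secret FROM users WHERE name = '{name}'"
).fetchall()
print(len(vuln))  # 1 -- the injected OR '1'='1' matched every row

# Safe: a parameterized query treats the input as data, not SQL.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (name,)
).fetchall()
print(len(safe))  # 0 -- no user is literally named that
```

The fix is a one-line change, which is exactly why shipping the vulnerable version reads as negligence rather than bad luck.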

I'm not a security researcher, but I'd guess that most attacks are even simpler: they don't necessarily rely on software exploits at all, but on phishing, social engineering, and the like.

I've seen plenty of demos of people being able to "hack" many companies by just knowing the lingo and calling a few employees while pretending to be from IT.

This doesn't even include "exploits" like planting spies inside a company, or flat-out blackmailing employees. Do you think the systems you've worked on are secure against a criminal organization applying physical intimidation to IT personnel? (I won't go into details, but I'm sure you can imagine the worst-case scenarios yourself.)

> But when a company cheaps out on software and develops code in a rush, without attention to security, then they shouldn't get to socialize the costs of the inevitable breach.

I agree, but there's a huge range between "builds software cheaply" and "builds software that is never exploitable" (the second being basically impossible - find me a company that has never been breached if you think it's doable).

We want to make companies pay the cost when doing so incentivizes good behavior. That's often true, which is why I agree with you in many cases.

But security is a game of weakest links, and given thousands of adversaries of varying strength, from script kiddies to state actors, every company is vulnerable at some level. Which is why, in addition to holding companies liable for real negligence, we have to recognize that no company is safe, even with enormous effort, and the only way to truly protect them is through some form of state action.

The reason your bank isn't broken into isn't just that they are amazing at security - it's that if someone breaks into your bank, the state will investigate, hunt them down, arrest them and imprison them.



Show me a company that claims it's never been breached in some way, and I'll show you a company that has no clue about security, including their prior breaches.



