
>The internet is a bunch of disparate networks that no one person controls, exactly. That means that when someone tells you there's a fault in the kit that you have installed in your network, then that is 100% your problem and it is 100% on you to get it fixed.

So you've never worked in a corporate environment. Here's how that conversation would go:

Hey guys, some researchers found out that if you use a test flag meant for the lab on the public internet, it breaks all our BGP sessions.

OK, so drop their feed and block them?

Great, done.
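
(For the non-network folks: "drop their feed and block them" really is about that quick. A minimal sketch in Cisco-style syntax, with a made-up local ASN and peer address standing in for the real ones:

    router bgp 64500
     ! administratively shut down the session with the offending peer
     neighbor 192.0.2.1 shutdown

One config line and the researchers' feed is gone.)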

>Once a vulnerability is known then someone out there is going to start exploiting it almost immediately.

And yet the "vulnerability" in question was known, and, as anyone who read the mailing list or was participating in NANOG at the time could see, it was not being immediately exploited. So your statement is provably false.

>Backbone operators can have all the hubris they want, but it won't change the reality that the only effective action they can take when a vulnerability is found is to get it fixed ASAP.

And yet we're having this conversation in 2023, they have operated the same way for 40+ years, and somehow the internet is still working. Bad actors get blackholed, it worked in the past, it'll continue working in the future. The reality is that backbone routing is expensive, and expecting everyone to update their kit on YOUR timeline isn't reasonable.



I am aware that network operators have behaved this way for very many years. The point I am making is that all of the IT industry used to attempt to "work around" security vulnerabilities in the same way, until log4shell and all the others gradually beat that propensity out of them.

I am prophesying that a similar reckoning is likely to come upon the network backbone. You're arguing that the cost of entry to the game (BGP peering) is such that the old ways will continue to work. Let's hope you're right.


The thing is, software ecosystems like libraries and frameworks have completely different propagation and remediation mechanics than federated systems like core Internet backbone routers and switches. Try as we might to conceptualize it otherwise, the modern Internet, from a packet's point of view, is more a loose confederation of privatized or state-run fiefdoms than a cellular-automata digraph explosion. So actors who try to act maliciously against the network will simply be shut out, because ruling with an iron fist is the default.


Conceptually, there is no special difference between a remotely exploitable vulnerability in an HTTP server and a remotely exploitable vulnerability in a BGP router. It would have been theoretically possible to deal with log4shell by blocking every IP address that was sending malicious payloads, for instance.
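
(In router terms that would have meant a blackhole route per malicious source, something like this Cisco-style sketch with an example address:

    ip route 203.0.113.50 255.255.255.255 Null0

one line per attacking IP, which hints at why it was only ever a theoretical option for log4shell.)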

Practically, I accept that there is a real difference. ASNs are much harder to acquire than IP addresses, and there are far fewer of them. That difference might mean that blocking continues to be an effective mechanism. However, in a world where malicious actors are increasingly just nation states wearing a flimsy mask, I'm not sure that the difficulty of getting access to sufficient numbers of ASNs is going to be a forever mitigation.
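
(Blocking at the ASN level is mechanically similar. A sketch, again in Cisco-style syntax, with 64496 standing in for the offending ASN and example local ASN and peer address:

    ip as-path access-list 10 deny _64496_
    ip as-path access-list 10 permit .*
    router bgp 64500
     neighbor 192.0.2.1 filter-list 10 in

Any route whose AS path contains 64496 gets rejected on that session.)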

The main thing I have taken away from this thread is that when it comes to vulnerability management, network operators seem to be pretty far behind the curve in mindset terms. The arguments being made in here ("you can't break other people's stuff", "we have a lot of boxes and SLAs mean we can't patch quickly") aren't even slightly new. The well-proven reality is that threat actors don't give one single shit about any of that.


Operators could have raised this with the vendors and had it fixed.


> OK, so drop their feed and block them?

> Great, done.

The "great" is the problem, no?


Backbone operator here who was affected by this. We had a large number of routers in production with this bug; we were aware and upgrading as fast as we could, but with 99.999% uptime SLAs we only have so many minutes per router that we can afford for downtime/outages. We had schedules in place (approximately 3 months of out-of-hours upgrades). One week of warning was a bullshit move. Dropping the BGP sessions on thousands of routers globally was stupidity.
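
For anyone who hasn't run the numbers, five nines leaves very little slack:

    (1 - 0.99999) * 365.25 days * 24 h * 60 min ≈ 5.26 minutes of allowed downtime per router, per year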

Bear in mind we also couldn't just "apt-get upgrade" in place; most boxes required hard reboots to apply the patches.

The answer to your question is no. As a few others have said: don't violate Rule #1. We see bad actors very often; our job is to keep the bits flowing and the internet online.

Keeping the internet online is painful enough as it is without "researchers" dropping thousands of routers to "prove a point."


This is verbatim the same thing people said in the early '00s about people testing XSS et al. against poorly coded PHP/Perl sites.



