> The current behavior is akin to an HTTP server shutting down whenever it receives an invalid HTTP request.

It is nothing like that.

HTTP requests do not propagate across an organization's entire network, or possibly the entire Internet.

> Or, for an even better analogy, it is like an HTTP server that, upon receiving a malformed HTTP request, drops all current and future traffic from that IP.

So just like (e.g.) fail2ban blocking IPs that continuously send bad login requests? That's awesome! I would want my web server to block bad clients.

Heck, it would be nice if web browsers would do the same thing with content they downloaded. In the early days of the web there were all sorts of garbage files out there, and instead of forcing people to correct their HTML, browsers tried to be clever and guess the author's intent:

* https://en.wikipedia.org/wiki/Tag_soup

For a time there was an entire 'movement' to get people to clean up their act:

* https://en.wikipedia.org/wiki/W3C_Markup_Validation_Service

* https://en.wikipedia.org/wiki/HTML_Tidy

It's fine to make stringency a policy that can be toggled with a flag, but I think Postel's law ("be liberal in what you accept") can cause issues over the long term:

* https://en.wikipedia.org/wiki/Robustness_principle



> So just like (e.g.) fail2ban blocking IPs that continuously send bad login requests? That's awesome! I would want my web server to block bad clients.

Well, somewhat, except that it's not awesome. It's like fail2ban, but implemented on a server that sits behind a load balancer. When it receives a bad login request and bans the offending IP, it has actually banned the IP of the load balancer, so now it won't receive any requests at all. A malicious client can send a single bad request and take down the whole server for everyone else.
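To make the failure mode concrete, here is a minimal sketch (hypothetical names, a toy stand-in for fail2ban's real logic) of naive per-IP banning when every client arrives through one load balancer:

```python
# Why IP banning breaks behind a load balancer: every client shares the
# proxy's source IP, so banning "the offender" bans everyone.
banned = set()

def handle_request(source_ip, is_malformed):
    """Ban the peer IP on a malformed request, as a naive fail2ban would."""
    if source_ip in banned:
        return "dropped"
    if is_malformed:
        banned.add(source_ip)
        return "banned"
    return "ok"

LB_IP = "10.0.0.1"  # all clients appear to come from the load balancer

assert handle_request(LB_IP, is_malformed=False) == "ok"       # good client
assert handle_request(LB_IP, is_malformed=True) == "banned"    # one bad client
assert handle_request(LB_IP, is_malformed=False) == "dropped"  # innocent client now blocked
```

The ban is keyed on the wrong identity: one misbehaving client widens the failure domain to every client sharing that path.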

Basically, this whole discussion is not so much about Postel's law, it's about limiting failure domains. A bad route advertisement should make that 1 route inaccessible. It is not good design for a bad route advertisement to take down all the other good routes. It is particularly bad design for a protocol where routes are automatically propagated by devices that don't think there's anything wrong with them.
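The two designs can be contrasted in a toy sketch (names hypothetical; attribute parsing reduced to a flag). "Reset" discards the whole session on one bad update, while "treat-as-withdraw", the approach standardized for BGP in RFC 7606, confines the failure to the single bad route:

```python
# Contrast two error-handling policies for a batch of route updates.
def apply_updates(updates, policy):
    """Apply (prefix, attrs) updates; attrs=None stands in for a parse failure."""
    table = {}
    for prefix, attrs in updates:
        if attrs is None:
            if policy == "reset-session":
                return {}  # session torn down: every route is lost
            continue       # treat-as-withdraw: drop only this route
        table[prefix] = attrs
    return table

updates = [
    ("10.0.0.0/8", "pathA"),
    ("192.0.2.0/24", None),       # the one malformed advertisement
    ("198.51.100.0/24", "pathB"),
]

assert apply_updates(updates, "reset-session") == {}
assert apply_updates(updates, "treat-as-withdraw") == {
    "10.0.0.0/8": "pathA",
    "198.51.100.0/24": "pathB",
}
```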

If you want to compare it to the HTML situation, the bug can also be seen as a browser that, when it receives invalid HTML from www.google.com/search, would not only throw an error but also refuse any other connection to *.google.com/*. The proposed fix is not to interpret anything as valid HTML; it is to show an error only when accessing www.google.com/search.


It’s like your origin web server unceremoniously dropping the HTTP connection when its front-end reverse proxy (Fastly, CloudFront, Akamai, etc.) forwards an invalid request, and then, when your web server restarts, the proxy retrying the same bad request.



