
I stand corrected. I think yours is the correct approach.

How shall origin be defined? I can envision the likes of Microsoft which have many, many second-level domains making calls between them.

We can’t allow the site itself to grant access. How would this be managed, other than “please stop and think what a domain name is supposed to be before spraying your product across twelve of them?”



To be clear, evil.com can define sub.evil.com to resolve to 127.0.0.1. You basically can’t look at domains to mean anything much. You have to look at IP addresses.

(Which is in turn made harder by IPv6 public addressing: you can't just block the private IP ranges, because you might not be behind a NAT in the first place, only behind a firewall. Your address A::B can route to your intranet peer A::C, which is numbered from a public prefix but isn't actually reachable from the internet. Nothing except the firewall marks it as internal. It's a hard problem!)
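To illustrate the point above, here is a minimal sketch using Python's `ipaddress` module: a range-based check catches loopback, RFC 1918, and ULA addresses, but an intranet host numbered from a site's public IPv6 prefix (the example addresses are made up) sails right through, because only the firewall makes it internal.

```python
import ipaddress

# Naive "is this destination safe to reach?" check based purely on ranges.
def looks_public(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return ip.is_global  # False for 127.0.0.1, 10/8, fc00::/7, fe80::/10, ...

print(looks_public("127.0.0.1"))     # False: loopback is caught
print(looks_public("10.0.0.5"))      # False: RFC 1918 private range
print(looks_public("fd12:3456::1"))  # False: IPv6 ULA (fc00::/7)
# A firewall-protected intranet peer on a public IPv6 prefix passes the
# check even though it is not internet-reachable:
print(looks_public("2600:1234::c"))  # True: indistinguishable by range alone
```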


To add on to your point, even if you allow evil.com to only access evil.com and not any subdomains, your browser is still vulnerable because of short TTLs on DNS resolution.

evil.com can set a short DNS TTL, and after you access it, rebind its address to 127.0.0.1. Subsequent requests to evil.com then go to localhost (e.g. a fetch("evil.com", ...) issued from evil.com's own page will hit 127.0.0.1 if the rebinding succeeds).
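The timeline of that attack can be sketched with a toy resolver that honors TTLs; the IPs and the 1-second TTL are illustrative, not taken from any real attack:

```python
import time

# Toy resolver: the attacker's nameserver answers with a very short TTL,
# so the second lookup for the same name lands on loopback.
class RebindingResolver:
    def __init__(self):
        self.answers = iter(["203.0.113.7", "127.0.0.1"])  # attacker-controlled
        self.cache = {}  # hostname -> (ip, expiry)

    def resolve(self, host, ttl=1.0):
        ip, expiry = self.cache.get(host, (None, 0.0))
        if time.monotonic() >= expiry:  # TTL expired: re-query the attacker
            ip = next(self.answers)
            self.cache[host] = (ip, time.monotonic() + ttl)
        return ip

r = RebindingResolver()
print(r.resolve("evil.com"))  # 203.0.113.7 — real server, page loads
time.sleep(1.1)               # TTL expires
print(r.resolve("evil.com"))  # 127.0.0.1 — later fetches hit localhost
```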

Caching a website's IP on first use doesn't help, either, because it breaks long sessions on sites that legitimately serve short TTLs and changing DNS answers (load balancing, failover).

The only real way to fix this is for the local webserver to check the Host header on the HTTP request... or look at IP addresses. But building a global registry of IP addresses is hard, so we're stuck with trusting application developers (and malware writers) who run servers on localhost to use good security practices.
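The Host-header defense mentioned above can be sketched as follows; the allowed-host set and helper are illustrative. A DNS-rebound request does reach 127.0.0.1 at the IP layer, but its Host header still names evil.com, so a localhost service can detect and refuse it.

```python
# Hostnames a localhost-only service should be willing to answer for.
ALLOWED_HOSTS = {"localhost", "127.0.0.1", "[::1]"}

def host_allowed(header_value: str) -> bool:
    # Strip an optional :port, handling the bracketed IPv6 literal form.
    if header_value.startswith("["):
        host = header_value.split("]")[0] + "]"
    else:
        host = header_value.rsplit(":", 1)[0]
    return host in ALLOWED_HOSTS

print(host_allowed("localhost:8080"))  # True: legitimate local client
print(host_allowed("evil.com:8080"))   # False: rebound request, reject it
```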


This would prevent most users from visiting that site in the first place, since most of the time it would resolve to 127.0.0.1.


Evil.com could resolve normally but redirect you to <plausible random word>.evil.com, which resolves normally once and then performs the attack, leaving evil.com free to keep serving new visitors.


We already have a notion of origin that is used for most of the browser security policies (exact match of domain, protocol, port). Websockets allow servers to enforce this policy by checking the Origin header the browser sends, but unfortunately observing the error messages/timing still allows you to determine whether the port is open at the transport layer even if you can't establish a connection. Since websockets routinely need to connect to different origins (they can't be routed exactly like normal requests, though many CDNs/reverse proxies can handle both), browsers would need to remove the information leak themselves by normalizing error messages and timing across failures.
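The server-side half of that enforcement is a simple exact-match check on the Origin header; the trusted-origin set here is an invented example:

```python
# Exact scheme+host+port match, like the browser's own origin comparison.
TRUSTED_ORIGINS = {"https://app.example.com"}

def origin_ok(headers: dict) -> bool:
    # Reject upgrade requests from unknown origins. An absent Origin
    # header means a non-browser client, which could forge it anyway,
    # so this check only protects against hostile *web pages*.
    return headers.get("Origin") in TRUSTED_ORIGINS

print(origin_ok({"Origin": "https://app.example.com"}))  # True
print(origin_ok({"Origin": "https://evil.com"}))         # False
print(origin_ok({}))                                     # False
```

Note this does nothing about the transport-layer leak the comment describes: a refused handshake still fails differently (and at a different speed) than a closed port, which is why the fix has to land in the browser.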


Granting access downward seems ok to me. To make it really generic, you would need a way to query the parent domain to ask whether accessing a sibling is ok.

The process is different enough from cookies to warrant another large discussion about how to do it, with plenty of trial and error. But the stakes are much lower, as in the worst case the user gets a dialog instead of a broken site.
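The "ask the parent domain" idea might look something like the following purely hypothetical sketch. No such mechanism exists; the policy shape and the idea of the parent publishing it are invented here for illustration.

```python
# Hypothetical: the shared parent (example.com) publishes which of its
# subdomains may talk to which siblings, and the browser consults that
# policy before allowing the cross-subdomain request.
def sibling_allowed(policy: dict, requester: str, target: str) -> bool:
    # policy maps a requesting subdomain to the siblings it may access
    return target in policy.get(requester, [])

policy = {"a.example.com": ["b.example.com"]}
print(sibling_allowed(policy, "a.example.com", "b.example.com"))  # True
print(sibling_allowed(policy, "c.example.com", "b.example.com"))  # False
```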



