They're also pretty slow, but that isn't much of a concern here - browsers do suffer from it though (except for Edge, which uses an undocumented IPC API).
Do you have any links or things to google for?
Would have assumed they'd do better given how polished their consumer products are
The service should check that the `Host` header on every request specifies `localhost`, `127.0.0.1`, or whatever other name is normally used to access it. If the `Host` header specifies a possibly-attacker-controlled name, the service should reject the request.
Explanation:
The problem is that the service is relying on same-origin policy to prevent a web site running in a local browser from making requests to it -- but it does not actually check what origin requests are addressed to. A random web site running in a browser normally cannot make requests to a hostname other than its own. Or, more precisely, it can make requests, but it can't read the responses. The Blizzard updater therefore forces the client to make an initial request whose response contains a secret token, and then subsequent requests contain that token, proving that the client was able to read the response and is therefore not a web site running in a browser.
However, it's perfectly possible to define an evil hostname whose DNS records assign it the IP address 127.0.0.1. Thus requests to this evil hostname would end up reaching the local Blizzard updater service. Unfortunately, the service will happily respond to these requests because it does not pay attention to what hostname the client was requesting.
(To actually exploit this, it's necessary to load an evil web site from the evil hostname, which then performs the attack. This means the evil hostname can't map only to 127.0.0.1; it must also map to the attack server. But a hostname can indeed have multiple DNS records, and the browser will arbitrarily switch between them, allowing the attack to proceed.)
DNS rebinding vulnerabilities are an authentication/trust problem. Authors assume that 127.0.0.1 (or other private IP blocks) is safe and therefore requires no additional authentication, or they rely on the same-origin policy to do certain things for them: e.g. to reject requests with an unexpected Origin header, or, as in this case, to prevent a cross-origin page from setting the `Authorization` header without a preflight OPTIONS request.
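The Host-header check suggested at the top can be sketched in a few lines (a minimal sketch using Python's standard `http.server`; the allowed names and port are placeholders, not Blizzard's actual service):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Names the local service is legitimately reached by (illustrative list).
ALLOWED_HOSTS = {"localhost", "127.0.0.1"}

def host_allowed(host_header):
    """Strip an optional :port suffix and compare against expected local names."""
    name = (host_header or "").split(":")[0].lower()
    return name in ALLOWED_HOSTS

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A DNS-rebinding request arrives with Host: evil.example.com,
        # because the browser fills in the name it thinks it is talking to.
        if not host_allowed(self.headers.get("Host")):
            self.send_error(403, "unexpected Host header")
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

# To serve: HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```

The key point is that the browser fills in the Host header with the attacker's name, not the IP it resolved to, so the rebound request identifies itself.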
If you want to watch an interesting DefCon talk about the same idea, but attacking routers, this [1] is one of the most entertaining/interesting talks I've seen overall.
> Their solution appears to be to query the client command line, get the 32-bit FNV-1a string hash of the exename and then check if it's in a blacklist. I proposed they whitelist Hostnames, but apparently that solution was too elegant and simple.
It looks like they just block specific browsers from doing the exploit, except they have to actively maintain the blacklist, which is sub-optimal.
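For reference, 32-bit FNV-1a is only a few lines, which gives a sense of how thin this defense is (a sketch; the executable names below are hypothetical, not the actual blacklist Blizzard ships):

```python
# Standard FNV-1a 32-bit constants.
FNV32_OFFSET = 0x811C9DC5
FNV32_PRIME = 0x01000193

def fnv1a_32(s: str) -> int:
    """32-bit FNV-1a string hash: xor each byte, then multiply by the prime."""
    h = FNV32_OFFSET
    for byte in s.encode("utf-8"):
        h ^= byte
        h = (h * FNV32_PRIME) & 0xFFFFFFFF
    return h

# Hypothetical blacklist of hashed browser executable names.
BLOCKED = {fnv1a_32(name) for name in ("chrome.exe", "firefox.exe", "iexplore.exe")}

def is_blocked(exename: str) -> bool:
    return fnv1a_32(exename.lower()) in BLOCKED
```

Any browser (or renamed binary) not in the set sails through, which is exactly the blacklist-maintenance problem.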
The easy way to fix it is to introduce some non-HTTP-based way of transmitting this token. For instance, put the token in the user's Blizzard settings directory, so that you have to have read access to the user's files to access the server. Or use some non-network-based interprocess communication mechanism, like named pipes, to do the initial authentication. Or (what we did a couple jobs ago), check what the other end of the socket is, and make sure it's a known-good app (instead of making sure it's not a known-bad app, which is what they're doing).
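The token-in-a-file idea can be sketched like this (illustrative paths and names; a real implementation would also handle rotation and races):

```python
import os
import secrets

def write_token(settings_dir: str) -> str:
    """Server side: drop a random token into the user's settings directory."""
    token = secrets.token_hex(32)
    path = os.path.join(settings_dir, "agent.token")
    # Owner-only permissions, so other local users cannot read the token.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(token)
    return token

def client_read_token(settings_dir: str) -> str:
    """Client side: only a process with read access to the user's files
    (i.e. not a web page in a browser) can present this token."""
    with open(os.path.join(settings_dir, "agent.token")) as f:
        return f.read()
```

A page loaded from an evil hostname can reach the HTTP port via rebinding, but it has no way to read this file, so it can never present the token.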
If you're intentionally embedding a web browser in your app and want it to access this interface, you can have your code pick up the token from a file / from IPC using native code, and then send it into the embedded web browser as a cookie or JavaScript variable or something. Most ways of embedding a web browser in a native app support this.
The reason they fixed it this way is probably because all good ways of fixing it require code changes to every client to use the new mechanism, and there are lots of clients using this interface, and this only requires a code change to the one server. (Even though "client" and "server" are both running on the same machine, patching one binary on everyone's computer is still easier than rebuilding and patching lots of binaries.)
https://github.com/transmission/transmission/pull/468
Seems to be a quite prevalent issue...
In particular, note that the request is not made to localhost, it's made to a DNS name that simply happens to resolve to 127.0.0.1. Should Chrome and also all other web browsers add a special case for DNS names that resolve to 127.0.0.1?
I use a fast DNS resolution setup that only queries authoritative servers and stores the returned IPs in constant, perfect-hash databases, then in kdb+. No caches. I see the IP addresses returned in DNS packets not as ephemeral and inconsequential, but as entries in a database that need to be validated before insert.
If I see some nonsense like 127.0.0.1 in an A record, let alone a public IP address that I'm using, it is rejected. I have seen NS records with 127.0.0.1 as well.
Are there DNS rebinding attacks that do not use iframes, JavaScript or some other way to trigger automatic lookups without user interaction? In theory, perhaps. But every attack I have seen relies on triggering lookups automatically.
It's too bad the popular browsers make automatic requests for resources, automatically follow redirects, and do not allow users to disable this default behavior.
However users can make use of less complex HTTP clients that do not make such automatic requests and where redirects can be disabled. These can be used in tandem with the popular browsers to give users more transparency and control.
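For example, Python's low-level `http.client` makes exactly one request and never follows redirects or fetches sub-resources, so a redirect comes back to the caller as data rather than being acted on (the hostname below is a placeholder):

```python
import http.client

def fetch_once(host, path="/", port=80):
    """Issue a single GET and return (status, Location header, body).

    Unlike a browser, this follows nothing: a 3xx response is simply
    returned to the caller, along with where it wanted to send us.
    """
    conn = http.client.HTTPConnection(host, port, timeout=10)
    conn.request("GET", path)
    resp = conn.getresponse()
    body = resp.read()
    conn.close()
    return resp.status, resp.getheader("Location"), body

# Usage: status, location, body = fetch_once("example.com")
```

This gives the transparency the comment asks for: you see the redirect target instead of silently visiting it.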
Also, isn't it possible to use SSL/TLS for localhost JSON-RPC? Theoretically, couldn't users make use of client certificates?
Just a thought.
It was discovered when going to that name from outside revealed what seemed to be a copy of our own internal webserver!
What had happened was that our webserver was on the same host as our Squid proxy, so we were in fact seeing our own site under the external name.
Protection against this sort of thing is in the default Squid config these days (from memory).
In an explicitly enumerated category, blacklists and whitelists are logically equivalent and can be used interchangeably. In almost every other case blacklists are insufficient because new items can generally be created, either maliciously or just accidentally as the size of the category grows, which are not on the blacklist but which share whatever bad trait you were hoping to protect against.
I'm sure there are a few exceptions, but generally speaking any problem that can be solved with either a blacklist or a whitelist should use the whitelist, just to be safe. A problem that can't use a whitelist is probably not actually solvable by a blacklist either, and trying to use one is likely to fail in the long run.
It's indeed a very strange patch from Blizzard. As if they hastily assigned an intern to it and then called it a day.
So if I understand this correctly, websites can now bypass all firewalls and send traffic to any _local_ port at will? It also seems that this same trick would apply to local/intranet IPs (e.g. have domains that resolve to 192.168.0.x), allowing interaction with things like printers. While Blizzard has a bug, it seems to be the browser that has the real vulnerability here.
Edit: The replies have good explanations with more detail why this would be difficult to fix -- the host doesn't have enough context to differentiate between "intended" and "unintended" IPs without a bunch of pernicious edge cases.
Generally, I think browsers handle this as well as they can. DNS rebinding preys on a feature that's useful for being able to fall back on redundant servers if a primary fails, which is important.
IIRC from the talk, browsers have implemented policies that prevent rebinding to non-public IP ranges. The talk below touches on how that's not quite sufficient for routers, because they also happen to have a valid public IP, but often don't properly filter or NAT packets from the LAN NIC, leaving them vulnerable because the packets still come from a private IP, so the source-IP-based security lets them through.
They can't send arbitrary traffic, though; they can only send valid HTTP requests, and they don't get access to your cookies (because the hostname doesn't match), so the "only" thing they can do is get access to things that an unauthenticated HTTP client running as you could get access to.
This has been true since almost the first web browsers - XHR wasn't a thing, but you could send GET requests with <img src="http://192.168.0.1/reboot-everything"> or even POST requests with forms (a little easier once JS let you create and submit forms from JS, but certainly doable in pure HTML).
And the problem is there's no way to tell what IP addresses to block. Special-casing 127.0.0.1 is at least a clear enough solution to articulate (though it breaks all sorts of use cases where HTTP to localhost on a custom domain name is intended), but should you also block all the RFC 1918 space? Doesn't that break the vast majority of companies that have internal websites named wiki.example.com or wiki.corp.example.com? And some companies don't even use RFC 1918 space, they use public IPv4 ranges they own for internal routing.
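To see what such a blocklist would have to enumerate, Python's `ipaddress` module can flag loopback, link-local, and RFC 1918 space, but, as noted above, it can't flag public ranges that a company happens to route internally (a sketch, not a complete policy):

```python
import ipaddress

def looks_internal(ip: str) -> bool:
    """True for addresses in special-use ranges (loopback, RFC 1918,
    link-local, etc.). Cannot detect a company's internally routed
    public ranges, which is exactly the gap described above."""
    addr = ipaddress.ip_address(ip)
    return addr.is_private or addr.is_loopback or addr.is_link_local
```

A browser enforcing this would break the wiki.example.com-style intranet sites mentioned above, which is why there's no clean fix on the browser side.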
It gets worse - the other problem here is that IP-based access to resources on the public internet is also vulnerable. If you're at, say, a university which has IP-based access to some journals, any website can send HTTP requests to those journals from the university.
The real right solution here is to avoid IP-based access controls, either on the public internet or on your private network - preferably by not having a private network or at least not trusting it, BeyondCorp style. Every HTTP request that does stuff on your behalf needs to be explicitly authenticated, even if it comes from the private network.