Wrap the exploit up in a blog post about Rust -- or an article about gut bacteria -- and submit it to Hackernews. Boom, a virtual feast of secrets.
I used to have some co-workers who would dump JSON docs containing sensitive information into these sites all the time, despite my showing them how to format the stuff in VS Code.
Could that be the highest voted link in HN history?
how convenient to consider that "pretty slim" now
In seriousness, this is all because websockets aren't bound by CORS, for good reason. https://blog.securityevaluators.com/websockets-not-bound-by-...
There's a simple fix though - hot-reload websocket listeners like Webpack's should only consider the connection valid if they first receive a shared secret that's loaded into the initial dev bundle, which itself would never be transmitted over a websocket and could be restricted via CORS so it isn't accessible to non-whitelisted origins. It's a dead-simple protocol with no ongoing performance impact. But it's understandable that it hasn't been implemented yet.
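A minimal sketch of that handshake, assuming the dev server generates a fresh secret per run and injects it into the served bundle; the server-side check could look something like this (`expectedSecret` and `isAuthorizedClient` are illustrative names, not Webpack's actual API):

```typescript
// Sketch: the dev server generates a per-run secret and embeds it in the
// initial bundle; a websocket client must echo it back as its first message
// before any hot-reload data is sent. All names here are illustrative.
import { randomBytes, timingSafeEqual } from "crypto";

// Generated once per dev-server run and injected into the served bundle.
const expectedSecret: string = randomBytes(32).toString("hex");

// Returns true only if the client's first message matches the secret.
function isAuthorizedClient(firstMessage: string): boolean {
  const expected = Buffer.from(expectedSecret);
  const received = Buffer.from(firstMessage);
  // Constant-time compare; timingSafeEqual requires equal lengths.
  return (
    received.length === expected.length && timingSafeEqual(expected, received)
  );
}
```

The server would simply drop any connection whose first message fails this check, so a page on another origin - which never saw the bundle - can't get past the first frame.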
As far as I can tell, that article only explains that WebSockets aren't bound by CORS. It doesn't provide a reason (good or otherwise) why WebSockets were designed that way. Personally, I consider that feature to be a design flaw. If WebSockets handshakes respected the Same-Origin-Policy and CORS headers the same way every other HTTP request on the web does, none of these vulnerabilities with poorly implemented WebSockets servers would exist today, as they would be secure by default rather than "insecure unless the server properly validates the origin header on every handshake".
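"Secure by default" here would just mean doing in the server what the browser never enforces: checking the handshake's Origin header against an allowlist. A hedged sketch (the allowlist contents are invented for illustration):

```typescript
// Sketch: validate the Origin header of an incoming websocket handshake
// against an explicit allowlist, rejecting everything else - including a
// missing header. The allowlist values are illustrative.
const allowedOrigins = new Set<string>([
  "http://localhost:3000",
  "https://app.example.com",
]);

function isAllowedOrigin(originHeader: string | undefined): boolean {
  // Deny by default: no Origin header, or an unrecognized one, fails.
  return originHeader !== undefined && allowedOrigins.has(originHeader);
}
```

This is exactly the check that "poorly implemented WebSockets servers" skip, since nothing in the protocol forces it.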
Probably too late to do anything about that anymore though. Changing WebSockets to respect the Same Origin Policy now would break a ton of websites.
The same origin policy was originally introduced with AJAX at a time when the vast majority of traffic was to the same origin. It wasn't a common pattern to make complex requests to a different domain (barring GETs and FORM posts which were allowed).
Web development changed and it started to become more popular to make complex cross-domain requests, but the problem is they couldn't just throw out the SOP without introducing a massive security vulnerability to all existing sites. So instead CORS was introduced as an option to relax the SOP.
With websockets being completely different, it was an opportunity to "start over". They opted to embrace cross-domain communication and require that developers implement security on top of it.
The `postMessage` API has done the same thing. Any window can `postMessage` to any other window -- it's fully up to the windows to validate the security of messages coming their way.
Some argue that it's a bad idea to make "allow by default" the new paradigm. Personally, it seems pretty clear to me that developers just don't understand CORS at all [0], and letting the developers handle this in their own logic, while being exceptionally clear about this in the documentation, is a far more developer friendly and simple (therefore more secure) approach.
Example, and note: https://news.ycombinator.com/item?id=23261309
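The validation described above is on the receiving side: a `message` handler has to check `event.origin` itself before trusting anything. A minimal sketch (the trusted origin is illustrative):

```typescript
// Sketch: postMessage is allow-by-default, so the receiving window must
// validate event.origin itself. The trusted origin below is illustrative.
const TRUSTED_ORIGIN = "https://app.example.com";

interface IncomingMessage {
  origin: string;
  data: unknown;
}

// Returns the payload only when the sender's origin is trusted,
// silently dropping messages from unknown windows.
function handleMessage(event: IncomingMessage): unknown {
  if (event.origin !== TRUSTED_ORIGIN) {
    return null;
  }
  return event.data;
}

// In a browser this would be wired up as:
//   window.addEventListener("message", (e) => handleMessage(e));
```

Forgetting that one `origin` check is precisely the "allow by default" footgun the parent comments are debating.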
||localhost^$important,third-party
||127.*^$important,third-party
||10.*^$important,third-party
||192.168.*^$important,third-party
||172.16.*^$important,third-party
||172.17.*^$important,third-party
||172.18.*^$important,third-party
||172.19.*^$important,third-party
||172.20.*^$important,third-party
||172.21.*^$important,third-party
||172.22.*^$important,third-party
||172.23.*^$important,third-party
||172.24.*^$important,third-party
||172.25.*^$important,third-party
||172.26.*^$important,third-party
||172.27.*^$important,third-party
||172.28.*^$important,third-party
||172.29.*^$important,third-party
||172.30.*^$important,third-party
||172.31.*^$important,third-party

It’s also possible someone might bind to an IPv6 address.
Better to rely on fixes mentioned elsewhere for web socket servers running on the local machine, including inserting a secret key into web socket path or query param, ensuring the web socket validates the path or query, and ensuring there are no web socket endpoints that could be used to get the secret from the websocket when not passed in. (Like an index of paths.) The Node debugger is mentioned elsewhere here as an example and cautionary tale.
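One way to implement the secret-in-path idea: the server generates a random token at startup, serves it only in the initial page load, and rejects websocket upgrades whose URL path doesn't carry it. A sketch (the `/ws/<token>` path shape is an assumption, not any particular server's convention):

```typescript
// Sketch: reject websocket upgrade requests that don't carry a per-run
// secret token in the path. The token is embedded in the page served to
// the browser and exposed nowhere else (no index of paths!).
import { randomBytes } from "crypto";

// Generated once at server start.
const sessionToken = randomBytes(16).toString("hex");

// Accept only upgrade paths of the form /ws/<token>; the "ws" prefix
// is illustrative.
function isValidUpgradePath(path: string): boolean {
  const parts = path.split("/").filter((p) => p.length > 0);
  return parts.length === 2 && parts[0] === "ws" && parts[1] === sessionToken;
}
```

An attacker's page can connect to 127.0.0.1, but without the token it never hits a valid endpoint.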
Paranoid folks could maybe trick their everyday browser into never connecting to localhost via various means, and there’s an argument that websockets deserve localhost third-party restrictions or prompts, but if I were an attacker, publishing a malicious package via the web is significantly easier and higher value. Also, websockets require JS so disabling JS is another workaround. But then the site could encourage you to enable it for other reasons...
What I really want is a way to block (by default) all connections to my local network from websites outside of my network, like a firewall.
It amazes me that browsers just allow this, this should require a permission prompt.
When I do not want the browser to access somedomain.com, I redirect somedomain.com to 127.0.0.1 in my hosts file
I'm a pretty big Javascript advocate, but I do recommend advanced users run uMatrix and consider disabling at least 3rd-party JS by default. uMatrix is a fantastic tool and it really doesn't take long to get used to. And honestly, a relatively large portion of the web works with only 1st party Javascript, and a surprising chunk of the web still works just fine with no Javascript at all.
This is also why I advise advanced users to run Firefox. uMatrix isn't available for Safari, and it's looking extremely likely that it'll be at least underpowered in Chrome once Manifest v3 comes out. Or I guess run Brave or Vivaldi or whatever. Dang kids running around with their hipster browsers, I can't keep track of them all.
The point is, even though I'm extremely bullish on the web as a secure application platform, part of the reason I'm bullish is because the web makes it relatively easy to take simple security measures like disabling scripts by default. You should absolutely take advantage of that, you should absolutely be disabling at least some Javascript features when you browse.
You can even globally turn off fingerprinting vectors like WebGL[2]/Canvas[3] in Firefox, and just swap to a different profile whenever you want to visit the rare game/app that requires them. Although with more and more people trying to embed their own DOM models in Canvas, maybe that'll be harder in the future.
[0]: https://github.com/gorhill/uMatrix
[1]: https://github.com/gorhill/uMatrix/wiki/The-popup-panel#the-...
[2]: about:config -> `webgl.disabled` -> true
I'd be happier if Firefox itself asked for permission before allowing access to local web servers and websockets, but even this wouldn't be terribly helpful, as any authorized website (like agar.io) could then scan you.
The average user will never learn to configure and use software like uMatrix.
Compared to alternative platforms, security on the web is easy.
Also keep in mind the audience. If I was posting this on Facebook or Twitter, I might not make the same recommendations, but uMatrix is not too complicated for the average HN reader to use. It might be annoying and you might decide you don't want to have to turn it off or fiddle with it for some websites, but the learning curve is really not that steep if you have even a rudimentary knowledge about how websites work.
I genuinely have a use-case for this. We have an internal, company-wide business app that works in any browser. The usual create-read-update-delete stuff, reports, factory forms etc.
With websockets we solve communication with local devices on the shopfloor - some computers have serial-port attached thermal printers, others have USB-attached notification lights. We have small Python scripts that listen for websocket commands on 127.0.0.1 and control the printers and lights.
That way we can control each user's local devices from the web app - without configuring internal firewalls or installing special browser add-ons (an in-house browser add-on is a bigger security risk than a websocket on 127.0.0.1)
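For a bridge like that, what matters most is limiting what the local listener will actually do. A sketch of the command-dispatch side (in TypeScript rather than the Python the parent uses; the command names and handlers are invented for illustration):

```typescript
// Sketch of the command-dispatch side of a 127.0.0.1 device bridge: only a
// fixed set of commands is accepted and anything else is refused. Command
// names and handlers are invented for illustration.
type Handler = (arg: string) => string;

const commands = new Map<string, Handler>([
  ["print_label", (text) => `printed: ${text}`],   // would drive the thermal printer
  ["set_light", (color) => `light set: ${color}`], // would drive the USB light
]);

// Parse "command arg..." and dispatch, refusing unknown commands so the
// listener can never be repurposed into a general-purpose local shell.
function dispatch(raw: string): string {
  const [cmd, ...rest] = raw.trim().split(/\s+/);
  const handler = commands.get(cmd);
  if (handler === undefined) {
    return "error: unknown command";
  }
  return handler(rest.join(" "));
}
```

Combined with an Origin check on the handshake, this keeps a malicious page from doing anything beyond blinking a light.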
Unless I misunderstood your use case.
Also, obligatory xkcd 1172
I /have/ put other secrets into frontend code before, strictly for small temporary projects where the cost of implementing secret management outweighs the size of the project. And obviously not in code that was anywhere close to being deployed outside my own box.
Unfortunately the method outlined in the article allows access to environments that would otherwise be considered trusted and not accessible over the internet, hence the problem.
evil-server
(looks at data from client)
(recognizes well known server app)
(launches exploit!)
The first one that comes to mind is built in "package updaters" where the front end server has a well defined way of updating its packages. Have your evil server send it "get a new version of fetch_user_passwords from here..."- you would probably only need a handful of ports
- it really only takes one person pasting that AWS key into their file to get pwned and I’m sure someone has those keys committed to GitHub right now.
- how many tabs do you have open of random tech blogs right now? Excluding HN, my guess is the average dev has at least one.
Not a super plausible attack, but over a long period of time with decent SEO, could probably deliver some interesting results.
The answer to why browsers allow connections to 127.0.0.1 from external sites is probably something like "legacy reasons".
Custom hostnames are such a better solution, but for some reason developers don't use them.
Localhost is just another site. If you want to make it secure, make it secure.
You realize that anybody on your coffeeshop wifi can also connect to your localhost server, don't you? Just because a server is running on your laptop doesn't mean it's not a server, running on the internet.
Also, at least by convention, localhost is only accessible via the loopback interface. This allows it to be reachable even if there is no physical network to connect to, but it also means it is only accessible from the same physical/virtual computer it is running on.
To let other people in the coffee shop access your software you would need to connect to a public or private interface.
I actually saw people leaving this enabled so much in shipping products, I wrote a little utility to test for it.
Edit: can't wait for the usual replies with "what is your solution?"
The obvious flaw in modern web security is that the domain isolation model does not make any sense today. It's an outdated hack. Software communication should be done through something resembling the actor model, where code running locally is thought of as a completely separate entity from the web server. It shouldn't have anything to do with domains. Communication from any actor to any other actor should be subject to the same security model, regardless of where their code was loaded from. Escalating privileges between actors should be a universal and well-established process with known guarantees, not a bloody mess of ad-hoc conventions, headers and "best practices" that change with every browser, app and year.
How would that change anything? The flaw is that the websocket was open to anyone at the application layer (all the security in the example came from which IP addresses the websocket was bound to at the network layer). If you replace the same-origin policy with some other security policy, it wouldn't really make a difference, unless your websocket decided to use it, and in that case you might as well use the existing same-origin policy.
If your real argument is that CORS/same-origin policy/websocket security policy is inconsistently specified and has made some questionable specification decisions - sure, I agree with you. But that has nothing to do with using origin as the security domain for websites.
The fundamental flaw here is using the IP address, and the assumed sufficiency of binding only to 127.0.0.1, as a security measure without application-level mitigation - not how browsers do network security.
Edit: reflecting on this, I think I've changed my mind a bit. The fundamental problem isn't that the web security model is full of hacks, but that the websocket spec decided to ignore it and instead focus on the socket (TCP connection) model of security. If you open a socket, all the server has to authenticate with is the IP address. Anyone can open a socket to anywhere; any other authentication has to come from a higher-level protocol. With websockets it's mostly the same: anyone can open one to anywhere, and the web server just has the IP address and Origin to authenticate with. Anything else should be done in a higher-level protocol. The problem is people see websocket and assume WEBsocket, not webSOCKET.
EDIT: Nope, exploit worked for me against webpack-dev 3.10.3 used by react-scripts 3.4.1
TypeScript code:

    const sock:WebSocketLocal = (function local_socket():WebSocketLocal {
        // A minor security circumvention.
        const socket:WebSocketLocal = <WebSocketLocal>WebSocket;
        WebSocket = null;
        return socket;
    }());

TypeScript definitions (index.d.ts):

    interface WebSocketLocal extends WebSocket {
        new (address:string): WebSocket;
    }

If the 'sock' variable is not globally scoped it cannot be globally accessed. This means third-party scripts must know the name of the variable and be able to access the scope where it is declared, because the global name "WebSocket" is reassigned to null and any attempt to access it will break those third-party scripts. However, there are still so many different ways to defeat this (e.g. creating a web worker, creating a new window that handles the WebSocket and posting messages to it, etc.) that it's basically pointless to try.
"In all seriousness, this attack vector is pretty slim. You’ve got to tempt unwitting users to visit your site, and to stay on it while they’re developing JS code."
Simply call Date.now() when adding the iframe and again when that iframe's onerror event fires, then diff the two. I think you can do this with img tags, frames, and anything backed by a network call that lets you observe load failures.
CORS doesn't save you because you aren't trying to reach into that iframe and run Javascript or access the DOM. A CSP doesn't save you because the site you're visiting is opting to do this and can put whatever they want in their CSP.
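A sketch of the timing trick described above. The browser wiring is guarded so the classification helper stands alone; the 50 ms threshold and the probe URL are rough assumptions, and in practice the cutoff would need tuning per network:

```typescript
// Sketch of the iframe timing trick: record Date.now() when the probe is
// inserted and again when onerror fires. A near-instant failure suggests a
// refused (closed) port; a slower one suggests something answered first.
// The threshold is a rough, illustrative guess.
function classifyByTiming(
  elapsedMs: number,
  thresholdMs: number = 50
): "maybe-open" | "closed" {
  return elapsedMs < thresholdMs ? "closed" : "maybe-open";
}

// Browser-only wiring (skipped when run outside a browser):
if (typeof document !== "undefined") {
  const started = Date.now();
  const frame = document.createElement("iframe");
  frame.style.display = "none";
  frame.onerror = () => {
    console.log(classifyByTiming(Date.now() - started));
  };
  frame.src = "http://127.0.0.1:8080/"; // probe target, illustrative
  document.body.appendChild(frame);
}
```

As the parent notes, no cross-origin read ever happens - the only signal is when the error event fires, which is why neither CORS nor CSP gets in the way.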
So, something like evil counterparts to HN, reddit, StackOverflow, or latestcatvideos.com.
[1]: https://youtu.be/pRlh8LX4kQI?t=954
[2]: https://chromium.googlesource.com/chromiumos/platform2/+/HEA...
WebSocket isn't bound by CORS, AFAIK.
This would prevent having to worry about people who use other hostnames for the host, even in local dev.
{"type":"error","data":"Invalid Host/Origin header"}
I don't think I changed any significant settings in CRA, this is pretty close to the default. Not sure what exactly determines whether this works or not.
It's not clear (without a lot more digging) what impact the sockjs changes have on this issue.
I have at least 3 create-react-app and one next app running. I even ran a quick websocket server on port 3000 just to see but nada.
It was only tested on Firefox, as a basic proof of concept. AIUI, Chrome et al. offer similar functionality but maybe the API is different.
It may also take a few minutes to find and connect to the websocket; I think the CRA dev server may only accept one client at a time, so it might pick up the connection after a webpack-dev-server reload or two.
With WebSockets something like this is effectively not possible, because WebSockets were designed with this in mind:
- A browser will only start transmitting data over the ws once the handshake is done. So just making a request has very limited ways for an attacker to transmit user defined data (basically the Host header/Origin header and cookies... which will not really work as an attack vector for newline-delimited or binary protocols)
- The handshake itself works by the client sending a nonce to the server which the server then has to hash together with a special uuid. Only actual websocket servers know how to do this step correctly, and thus the browser will refuse to even open connections to servers which aren't actual websocket servers. So the attacker will not be able to send truly arbitrary data or read any responses.
- Even after the handshake, browser-to-server data is masked by XORing the data with browser-picked keys. The attacker therefore cannot control what the data will end up looking like when it is sent to the server. And unaware servers will certainly not try to reverse the XORing.
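Both of those handshake details come straight from RFC 6455: the accept value is the SHA-1 of the client's nonce concatenated with a fixed GUID, base64-encoded, and client frames are XOR-masked with a browser-picked 4-byte key. A sketch of both:

```typescript
import { createHash } from "crypto";

// RFC 6455: the server proves it actually speaks WebSocket by hashing the
// client's Sec-WebSocket-Key together with this fixed GUID.
const WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11";

function secWebSocketAccept(clientKey: string): string {
  return createHash("sha1").update(clientKey + WS_GUID).digest("base64");
}

// RFC 6455 masking: every client-to-server payload is XORed with a 4-byte
// key chosen by the browser, so an attacker's page cannot control the exact
// bytes on the wire. XORing again with the same key unmasks it.
function mask(payload: Buffer, key: Buffer): Buffer {
  const out = Buffer.alloc(payload.length);
  for (let i = 0; i < payload.length; i++) {
    out[i] = payload[i] ^ key[i % 4];
  }
  return out;
}

// Known test vector from RFC 6455 section 1.3:
console.log(secWebSocketAccept("dGhlIHNhbXBsZSBub25jZQ=="));
// -> s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

A non-websocket server on localhost will never produce that accept value, which is why the browser refuses to open the connection at all.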
What you're left with regarding websockets are timing attacks to do some port and network scanning, and attacking actual websocket servers which do not check the Origin or use some kind of token to verify incoming connections, analogous to attackable regular HTTP endpoints that do not handle auth and CSRF tokens properly.
I'll readily admit, though, that a lot of developers forget about verifying incoming websocket connections. I have fucked this up myself in the past, and I have found such issues in other websites, including one problem that let me take over user accounts via an unsecured websocket if I was able to get such users to open an attack website (or ad).
Excuse me, but what in the world? XHR has all kinds of cross-site request protections that even make developing apps locally a pain. How come websockets don't come with such protections?
Are there apps that take over this responsibility?
https://dev.solita.fi/2018/11/07/securing-websocket-endpoint...
"you can", or is it blocked by default?
Ahh, feels so good
Is there a whole group of people that are just learning about Websockets for the first time?