For the people wondering what the motivation is, https://www.w3.org/TR/fetch-metadata/#intro has a good summary:
> Interesting web applications generally end up with a large number of web-exposed endpoints that might reveal sensitive data about a user, or take action on a user’s behalf. Since users' browsers can be easily convinced to make requests to those endpoints, and to include the users' ambient credentials (cookies, privileged position on an intranet, etc), applications need to be very careful about the way those endpoints work in order to avoid abuse.
> Being careful turns out to be hard in some cases ("simple" CSRF), and practically impossible in others (cross-site search, timing attacks, etc). The latter category includes timing attacks based on the server-side processing necessary to generate certain responses, and length measurements (both via web-facing timing attacks and passive network attackers).
> It would be helpful if servers could make more intelligent decisions about whether or not to respond to a given request based on the way that it’s made in order to mitigate the latter category. For example, it seems pretty unlikely that a "Transfer all my money" endpoint on a bank’s server would expect to be referenced from an img tag, and likewise unlikely that evil.com is going to be making any legitimate requests whatsoever. Ideally, the server could reject these requests a priori rather than delivering them to the application backend.
> Here, we describe a mechanism by which user agents can enable this kind of decision-making by adding additional context to outgoing requests. By delivering metadata to a server in a set of fetch metadata headers, we enable applications to quickly reject requests based on testing a set of preconditions. That work can even be lifted up above the application layer (to reverse proxies, CDNs, etc) if desired.
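Concretely, a supporting browser attaches a set of `Sec-Fetch-*` request headers describing how a request came about. For example (the exact set varies by request), a cross-site `<img>` load versus a user-initiated top-level navigation would carry roughly:

```http
# Cross-site <img> load
Sec-Fetch-Site: cross-site
Sec-Fetch-Mode: no-cors
Sec-Fetch-Dest: image

# User-initiated top-level navigation
Sec-Fetch-Site: cross-site
Sec-Fetch-Mode: navigate
Sec-Fetch-User: ?1
Sec-Fetch-Dest: document
```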
This sounds like Referer, but worse.
I remove unwanted headers from requests generated by user agents I cannot adequately control, e.g., graphical web browsers, using a loopback-bound forward proxy. Perhaps this will be another one to remove.
The only speed bumps I have encountered are timestamps, i.e., links that "expire".
This spec seems really powerful, provided all browsers support it :)
[1]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers#fe...
Internet Explorer is dead (OK, it's a zombie, but it was superseded by Edge for most users).
Safari sadly doesn't support it yet.
The nice thing is that you can employ security enhancements based on this technique even if it's not supported by all your clients.
I.e., you can automatically reject requests when the headers are present and carry a bad value, which adds protection against certain attacks for all users except the ones stuck on IE or Safari.
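A minimal sketch of that fail-open check, assuming an Express-style Node server (the framework and route details are my choice, not from the thread; the policy itself follows the resource isolation recipe in the web.dev article linked further down):

```typescript
import express from "express";

const app = express();

// Resource isolation policy: act only when fetch metadata is present.
app.use((req, res, next) => {
  const site = req.headers["sec-fetch-site"];

  // Header absent: a browser without fetch metadata support (e.g. IE,
  // or Safari at the time of this thread). Fail open; other CSRF
  // defenses still apply.
  if (site === undefined) return next();

  // Same-origin, same-site, and direct requests (address bar,
  // bookmarks) are fine.
  if (site === "same-origin" || site === "same-site" || site === "none") {
    return next();
  }

  // Cross-site: allow only simple top-level navigations so ordinary
  // links into the site keep working; reject everything else.
  const mode = req.headers["sec-fetch-mode"];
  const dest = req.headers["sec-fetch-dest"];
  if (req.method === "GET" && mode === "navigate" && dest !== "object" && dest !== "embed") {
    return next();
  }

  res.status(403).send("Cross-site request rejected by fetch metadata policy");
});
```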
> There are some exceptions to the above rules; for example if a cross-origin GET or HEAD request is made in no-cors mode the Origin header will not be added.
It seems though that a browser would not allow 'non-simple' headers in no-cors mode[0].
An Authorization header, for example, would not be allowed (if I'm reading correctly). So any API using that header would not be affected by this issue, right?
[0] https://developer.mozilla.org/en-US/docs/Web/API/Request/mod...
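A quick way to see that restriction from a browser console (hypothetical URL; in no-cors mode the Headers object is guarded, and non-safelisted headers are silently dropped rather than sent):

```typescript
const req = new Request("https://api.example.com/data", {
  mode: "no-cors",
  headers: { Authorization: "Bearer secret" },
});
// The Authorization header was never attached:
console.log(req.headers.get("Authorization")); // null
```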
The user action part is very nice if it can't be overridden with just JavaScript. For the other parts, I'm not sure what the browser is helping with that couldn't just be done with standard headers.
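For what it's worth, the `Sec-` prefix marks these as forbidden header names, so script cannot set or spoof them. Gating a sensitive endpoint on user activation is then a one-line check; a sketch in the same Express style as above (the route name is made up, and note that browsers sending no fetch metadata at all would also be rejected, so in practice you'd combine this with a presence check):

```typescript
import express from "express";

const app = express();

// Sec-Fetch-User is `?1` only on navigations triggered by user
// activation (a click or keypress); script-initiated requests never
// carry it.
app.post("/transfer-money", (req, res) => {
  if (req.headers["sec-fetch-user"] !== "?1") {
    return res.status(403).send("User activation required");
  }
  res.send("OK");
});
```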
tl;dr yes. It's not always sent.
According to [0] we can force CORS behaviour by using a non-simple request in our webapp, for example by setting the MIME type to JSON.
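Something along these lines (hypothetical endpoint; the non-simple `Content-Type` makes the browser send an OPTIONS preflight before the actual request):

```typescript
// Simple requests are limited to application/x-www-form-urlencoded,
// multipart/form-data, and text/plain. JSON falls outside that list,
// so this POST is preflighted.
await fetch("https://api.example.com/transfer", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ amount: 100 }),
});
```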
> Hence the banking server or generally web application servers will most likely simply execute any action received and allow the attack to launch.
While these are useful headers, there are protections today, such as XSRF tokens, that all major sites implement to prevent these attacks, so it isn’t likely your bank is vulnerable.
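For readers unfamiliar with that mitigation, here is a bare-bones synchronizer-token sketch in the same Express style (route names, the toy in-memory store, and the cookie handling are illustrative assumptions, not how any particular site does it):

```typescript
import crypto from "node:crypto";
import express from "express";

const app = express();
app.use(express.urlencoded({ extended: false })); // parse form bodies

// Toy in-memory token store keyed by a session id cookie; a real app
// would keep the token in its session middleware instead.
const tokens = new Map<string, string>();

app.get("/transfer-form", (_req, res) => {
  const sid = crypto.randomUUID();
  const token = crypto.randomBytes(32).toString("hex");
  tokens.set(sid, token);
  res.setHeader("Set-Cookie", `sid=${sid}; HttpOnly; SameSite=Lax`);
  // The token is embedded in the page; a cross-site attacker cannot
  // read it, so a forged POST cannot include the right value.
  res.send(`<form method="POST" action="/transfer">
    <input type="hidden" name="csrf" value="${token}">
    <input name="amount"><button>Transfer</button>
  </form>`);
});

app.post("/transfer", (req, res) => {
  const sid = req.headers.cookie?.match(/sid=([\w-]+)/)?.[1];
  if (!sid || !req.body.csrf || req.body.csrf !== tokens.get(sid)) {
    return res.status(403).send("Bad CSRF token");
  }
  res.send("Transfer executed");
});
```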
The problem is the header isn’t really usable until uptake is substantial. Dropping requests now creates a workflow deviation based on user agent, meaning that while some users gain security, the header cannot be relied on entirely.
I went with the long-term support releases and have had a better experience. Of course, still no sound lol, but I use Chrome when I want sound. I still like Firefox, just can't use recent releases.
https://web.dev/fetch-metadata/#step-5:-reject-all-other-req...
https://web.dev/samesite-cookies-explained/
SameSite cookies are supported in Safari and IE11, so they're potentially a better candidate, but there are still some caveats (see here for some of them: https://security.stackexchange.com/questions/234386/do-i-sti...).
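For example, a session cookie marked `SameSite` is simply never attached to cross-site requests, so a forged request arrives unauthenticated (a sketch; the cookie name and value are placeholders):

```typescript
import express from "express";

const app = express();

app.post("/login", (_req, res) => {
  // `Lax` still sends the cookie on top-level navigations (following
  // a link); `Strict` withholds it even then.
  res.setHeader(
    "Set-Cookie",
    "session=abc123; SameSite=Lax; Secure; HttpOnly; Path=/"
  );
  res.send("Logged in");
});
```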
I've been following their work pretty closely, but I'm at a loss trying to think of anything...
- moving to WebExtensions-based extensions has allowed Firefox to become much snappier than it used to be
- Firefox sync for history syncing is fantastic
- Firefox on mobile (Android) is a delight and the only browser I use on mobile now
- Rust (from Mozilla) is cool and is being used to build cool and important things