E.5. Replay Attacks on 0-RTT
Replayable 0-RTT data presents a number of security threats to TLS-
using applications, unless those applications are specifically
engineered to be safe under replay (minimally, this means idempotent,
but in many cases may also require other stronger conditions, such as
constant-time response). Potential attacks include:
- Duplication of actions that cause side effects (e.g., purchasing
an item or transferring money), thus harming the site or the user.
- Attackers can store and replay 0-RTT messages in order to re-order
them with respect to other messages (e.g., moving a delete to
after a create).
- Exploiting cache timing behavior to discover the content of 0-RTT
messages by replaying a 0-RTT message to a different cache node
and then using a separate connection to measure request latency,
to see if the two requests address the same resource.
Ultimately, servers have the responsibility to protect themselves
against attacks employing 0-RTT data duplication. The mechanisms
described in Section 8 are intended to prevent replay at the TLS
layer but do not provide complete protection against receiving
multiple copies of client data.
It seems practically guaranteed that a lot of devs will enable it without understanding the ramifications. I hope servers like Nginx add a configuration interface along the lines of "enable_0rtt YES_I_UNDERSTAND_THIS_MIGHT_BE_INSANE;" or similar. Meanwhile, I wonder if concentrators like Cloudflare will ever be able to support it without knowing a lot more about the apps they are fronting.
I guess e.g. Nginx could also insert an artificial header to mark requests received as 0-RTT, and frameworks like Django could use that header to require that views be explicitly marked with a decorator to indicate support, or something like that.
There is an Internet Draft for that [1]. It is co-authored by Willy Tarreau of haproxy and implemented within haproxy 1.8 [2].
[1] https://tools.ietf.org/id/draft-thomson-http-replay-01.html
[2] https://www.mail-archive.com/haproxy@formilux.org/msg28004.h... (Ctrl+F 'Early-Data') https://www.mail-archive.com/haproxy@formilux.org/msg27653.h... (Ctrl+F '0-RTT')
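For concreteness, the scheme in that draft (later standardized as RFC 8470) can be sketched in Python. Only the "Early-Data: 1" request header and the 425 (Too Early) status code come from the draft; the decorator and view names below are hypothetical, a minimal sketch of the Django-decorator idea from the parent comment:

```python
# Sketch of the Early-Data scheme from draft-thomson-http-replay
# (later RFC 8470). The "Early-Data: 1" header and 425 status are
# from the draft; reject_early_data and the views are invented here.

def reject_early_data(view):
    """Refuse to run a non-replay-safe view on 0-RTT data."""
    def wrapper(headers, *args, **kwargs):
        if headers.get("Early-Data") == "1":
            # 425 tells the client to retry once the handshake has
            # completed, so the side effect happens at most once.
            return 425, "Too Early"
        return view(headers, *args, **kwargs)
    return wrapper

@reject_early_data
def transfer_money(headers, amount):
    # Side-effecting view: must never run on replayable early data.
    return 200, "transferred %d" % amount

def homepage(headers):
    # Idempotent view: safe to serve even from 0-RTT data.
    return 200, "welcome"
```

The point of routing this through the proxy-set header is that only the TLS terminator knows whether a request arrived as early data; the framework just has to trust (and propagate) that one bit.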
Having to write your app to understand that TLS 1.3 was used, and 0RTT was used, seems like a really really bad idea. The longer I'm in engineering, the more I realize that the number of people who understand the ramifications here is much smaller than the number who can throw together a TLS 1.3 listening HTTP server by following some dude's tutorial.
Framework support is not going to be enough. This seems like a bad, bad move.
I believe the argument for doing it was that otherwise some companies would just implement their own protocols instead. Eh, I think the chances of that happening were pretty slim. Now most of the problems we'll see with TLS 1.3 will likely be related to 0-RTT.
Also, wasn't that basically the same argument for implementing MITM in TLS 1.3? That if they don't do it the banks and middlebox guys will just stick to TLS 1.2 or whatever?
And who cares about a little bit of extra HTTPS delay, when just adding Google Analytics and the Facebook Pixel to your site can increase the delay by over 400 ms? Some poor performance-tracking scripts add 800 ms on their own.
That's how you might feel if you live with fast broadband internet. A lot of the world is stuck with high latency, low-bandwidth connections. Applications that are incredibly lean will still be slow if the server is in San Jose and the client is in, say, Uganda.
Intercom and other live chat solutions are typically the biggest offenders on modern pages. They serve 10-20x the script as GA and the Facebook pixel.
I would love if instead the pre-shared secret enabling 0RTT could be something obtained through DNS instead, if that's possible. But that would require a secure DNS, which we don't have.
Normally there's a handshake involved: your browser and the server send packets to each other to set up an encrypted channel, then the server uses its certificate to prove that it's in control of its end of the private channel, then you can send a request. So if you and the server are, say, 50 ms apart, there's usually an extra 200 ms for this handshake, which 0-RTT can save you.
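The arithmetic in that example, spelled out (the 50 ms one-way delay and the two handshake round trips are the commenter's figures, roughly a classic TLS 1.2 full handshake):

```python
# Back-of-the-envelope latency for the commenter's example:
# 50 ms one-way delay means a 100 ms round trip, and the handshake
# costs two round trips before the first request byte can be sent.
one_way_ms = 50
rtt_ms = 2 * one_way_ms           # 100 ms round trip
handshake_rtts = 2                # exchanges before the request
saved_ms = handshake_rtts * rtt_ms
print(saved_ms)                   # the "extra 200 ms" 0-RTT saves
```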
The danger is that because your browser isn't setting up an encrypted channel but just sending a request and hoping for the best, someone who can capture the packet can re-send it to trigger the request twice. Duplicating the request is fine for, say, the HN home page, but annoying for a comment reply and a real problem for an online purchase.
"Deploying TLS 1.3: the great, the good and the bad": https://www.youtube.com/watch?v=0opakLwtPWk
This looks trivial when you are running one Apache httpd on a Raspberry Pi on your desk. Why not just build any of these approaches right into the standard? And then you try to figure out how to make it work for Netflix or Google, who have thousands of clusters of servers - and your brain explodes.
So that's why the standard doesn't specify one solution and require everybody to use it but it does say if you need 0-RTT then you need to figure out what you're going to do about this, including specifying some of the nastier surprises that shuffle message orders and change which servers get which messages.
Example: let's say you think you're clever, you have two servers A and B, load balanced so they usually take distinct clients but can fail into a state where either takes all clients. You might figure you can just track the PSKs in each server, offer 0-RTT and if a client tries to 0-RTT with a PSK from the other server (somehow) it'll just fail to 1-RTT, no big deal.
Er, nope. Bad guys arrange for a client to get the "wrong" server, they capture the 0-RTT from that client, the "wrong" server says "Nope, do 1-RTT", the client tries again and in parallel the bad guys play the 0-RTT packets to the "right" server, which does the exact same thing as the "wrong" one - the replay has succeeded.
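That scenario can be made concrete with a toy model. The Server class and all names here are invented for illustration; the shared strike set corresponds in spirit to the single-use mechanisms the spec discusses in Section 8:

```python
# Toy model of the two-server replay in the parent comment:
# per-server ticket tracking lets a 0-RTT request run twice.

class Server:
    def __init__(self, issued, used):
        self.issued = issued      # PSK tickets this server recognizes
        self.used = used          # single-use strike set (may be shared)

    def accept_0rtt(self, ticket):
        """Accept 0-RTT only for a known, never-before-seen ticket."""
        if ticket not in self.issued or ticket in self.used:
            return False          # client must fall back to 1-RTT
        self.used.add(ticket)
        return True

# Per-server tracking, as in the comment: A issued the ticket, B didn't.
a = Server({"t1"}, set())
b = Server(set(), set())

wrong = b.accept_0rtt("t1")       # False: client retries over 1-RTT at B
replay = a.accept_0rtt("t1")      # True: attacker's replayed 0-RTT is
                                  # accepted, so the request runs twice

# A strike set shared across the cluster at least guarantees the
# 0-RTT data itself is accepted at most once anywhere...
shared = set()
a2, b2 = Server({"t1"}, shared), Server({"t1"}, shared)
first = a2.accept_0rtt("t1")      # True
second = b2.accept_0rtt("t1")     # False: ticket already consumed
# ...but it cannot stop the 1-RTT-retry-plus-replay combination above,
# which is why TLS-layer mechanisms alone can't fully protect the app.
```

This is the spec's point quoted at the top of the thread: the TLS layer can limit replay but "does not provide complete protection against receiving multiple copies of client data".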
The question is just which draft Chrome and Firefox were using back then. The middlebox-compatibility changes landed in draft-22 according to the changelog, and IIRC consisted basically of adding back a few unnecessary fields and allowing a useless handshake message (which the receiver ignores). The main trick was IIRC to make all TLS 1.3 connections (resumed or not) appear identical on the wire to a TLS 1.2 resumed connection.
A more detailed history of all changes to the spec can be found at its git repository: https://github.com/tlswg/tls13-spec/
See below if you haven't come across this https://www.thesslstore.com/blog/tls-1-3-banking-industry-wo... https://tools.ietf.org/html/draft-rhrd-tls-tls13-visibility-...
- Static RSA and Diffie-Hellman cipher suites have been removed; all public-key based key exchange mechanisms now provide forward secrecy.
MITM proxies, which are trusted by the client and which terminate and recreate the TLS connection, will continue to function. (Assuming they implemented TLS 1.2 correctly, which some didn't.)
Other types of MITM still work, such as an active MITM through a malicious root certificate.
In fact, Appendix C5[1] reads:
Previous versions of TLS offered explicitly unauthenticated cipher
suites based on anonymous Diffie-Hellman. These modes have been
deprecated in TLS 1.3. However, it is still possible to negotiate
parameters that do not provide verifiable server authentication by
several methods, including:
- Raw public keys [RFC7250].
- Using a public key contained in a certificate but without
validation of the certificate chain or any of its contents.
[1] https://tools.ietf.org/html/draft-ietf-tls-tls13-28#appendix...
(Edit: actually, I recognize this username from previous nonsensical discussions about crypto and backdoors: https://news.ycombinator.com/item?id=13364173)
The rest of your comment is less sensible than the first part. Everyone will implement it, and it's up to the user to decide whether they need the feature and know that their application is unaffected by replay attacks.