ML-KEM is intended to replace both the finite-field and the elliptic-curve variants of the Diffie-Hellman algorithm for creating a shared secret value.
When FIPS 203, i.e. ML-KEM, is not used, adversaries may record data transferred over the Internet today and become able to decrypt it some years later.
On the other hand, there is much less urgency to replace the certificates and digital signature methods used today, because in most cases it would not matter if someone became able to forge them in the future: they cannot go back in time and use that forgery for authentication.
The only exception would be digital documents that completely replace traditional paper documents with legal significance, like documents proving ownership of something, which would be digitally signed; forging those in the future could be useful to somebody, so a future-proof signing method would make sense there.
OpenSSH, OpenSSL and many other cryptographic libraries and applications already support FIPS 203 (ML-KEM), so it could be deployed easily, at least for private servers and clients, without also replacing the existing methods used for authentication, e.g. certificates, where post-quantum signing methods would add a lot of overhead due to much bigger certificates.
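For anyone who wants to turn this on for their own machines today: recent OpenSSH versions (9.9 and later, if I remember right) expose the hybrid ML-KEM key exchange as an ordinary KexAlgorithms entry, so it is a one-line config change. A sketch, with the algorithm name worth double-checking against `man ssh_config` for your version:

    # ~/.ssh/config (client) or /etc/ssh/sshd_config (server)
    # The leading '^' prepends this algorithm to the default list
    # instead of replacing the whole list.
    KexAlgorithms ^mlkem768x25519-sha256

`ssh -Q kex` lists what your build actually supports, which is the quickest way to check whether the hybrid name above is available before forcing it.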
What changed is that the new timeline might be so tight that (accounting for specification, rollout, and rotation time) the time to switch authentication has also come.
ML-KEM deployment is tangentially touched on in the article because it's both uncontroversial and underway, but:
> This is not the article I wanted to write. I’ve had a pending draft for months now explaining we should ship PQ key exchange now, but take the time we still have to adapt protocols to larger signatures, because they were all designed with the assumption that signatures are cheap. That other article is now wrong, alas: we don’t have the time if we need to be finished by 2029 instead of 2035.
> For key exchange, the migration to ML-KEM is going well enough but: 1. Any non-PQ key exchange should now be considered a potential active compromise, worthy of warning the user like OpenSSH does, because it’s very hard to make sure all secrets transmitted over the connection or encrypted in the file have a shorter shelf life than three years. [...]
Your comment is essentially the premise of the other article.
However that does not mean that the switch should really be done as soon as it is possible, because it would add unnecessary overhead.
This could be done by distributing a set of post-quantum certificates, while continuing to allow the use of the existing certificates. When necessary, the classic certificates could be revoked immediately.
Personally, my reading between the lines on this subject as a non-expert is that we in the public might not know when post-quantum cryptography is necessary until quite a while after it is necessary.
Prior to the public-key cryptography revolution, the state of the art in cryptography was locked inside state agencies. Since then, public cryptographic research has been ahead of, or at least even with, state work. One obvious tell was all the attempts to force privately-operated cryptographic schemes to open doors to the government via e.g. the Clipper chip and other appeals to magical key escrow.
A whole generation of cryptographers grew up in this world. Quantum computing might change things back. We know what the papers from Google and other companies say. Who knows what is happening inside the NSA or military facilities?
It seems that with quantum computing we are back to physics, and the government does secret physics projects really well. This paragraph really stood out to me:
> Scott Aaronson tells us that the “clearest warning that [he] can offer in public right now about the urgency of migrating to post-quantum cryptosystems” is a vague parallel with how nuclear fission research stopped happening in public between 1939 and 1940.
Things need to be rolled out in advance of need, so that you can get a do-over in case there proves to be a need.
Perhaps it's already necessary, or it will be in the coming months. We are only hearing about the public developments, not whatever classified work the US is doing.
I think the analogy with the Manhattan Project is apt. The US has an enormous interest in decrypting communication streams at scale (see Snowden and the Utah NSA datacenter), and it's known for storing encrypted comms for decrypting later. Well, maybe later is now.
If you worry about a >=1% risk of quantum attacks being available soon, you should also worry about a >=1% risk of the relatively new ML-KEM being broken soon. The risk profile is pretty comparable. For both cases there are credible expert opinions that say the risk is incredibly overrated and credible expert opinions that say the risk is incredibly underrated.
Filippo has linked opinions that quantum attacks are right around the corner. People like Dan Bernstein (djb) are throwing all their weight to stress that anything but hybrids are irresponsible. I don't think there is anybody that says "hybrids are a bad idea", just people that want to make it easy to choose non-hybrid ML-KEM.
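For readers not following the hybrid debate closely: a hybrid key exchange simply runs both primitives and feeds both shared secrets (plus data binding them to this particular exchange) through a KDF, so an attacker has to break both. A minimal Python sketch of the idea; the label and the exact input ordering here are illustrative assumptions, not the real ones (X-Wing, mentioned elsewhere in the thread, pins those down):

    import hashlib

    def hybrid_shared_secret(ss_mlkem: bytes, ss_x25519: bytes,
                             ct_x25519: bytes, pk_x25519: bytes) -> bytes:
        # Secure as long as EITHER component's shared secret stays secret.
        # The X25519 ephemeral/public values are mixed in to bind the result
        # to this specific exchange.
        label = b"example-hybrid-v1"  # hypothetical label, not the real X-Wing one
        return hashlib.sha3_256(
            label + ss_mlkem + ss_x25519 + ct_x25519 + pk_x25519
        ).digest()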
Says who?
There's a big difference between “we can't be sure that ECDH stays secure for five more years” and “ECDH is nearly guaranteed to be broken”. There have been two major papers at the beginning of the year that advanced the state of the art enough to question the prior assumption about the slowness of QC progress. Now we know that rapid advances are possible and we must take that into account in risk assessment. But that doesn't mean that rapid advances are guaranteed. Things could stay stagnant for 15 more years at this point before the next breakthrough. And if that's the case, then ECDH could very well remain relevant for the rest of the century.
We just cannot know whether it will happen, so we can't take the risk. But that doesn't mean that we are certain the risk will materialize.
Exactly in the way the succeeding sentence defines: "For both cases there are credible expert opinions that say the risk is incredibly overrated and credible expert opinions that say the risk is incredibly underrated."
> when ECDH is nearly guaranteed to be broken in five years
Most of your argument (and that of many others pushing the contra-hybrid point) hinges on this. I don't think this position is justified. I believe there is significant risk for quantum attacks in the near term (and thus fully support the speedy adoption of hybrids), yes, but quite far away from certainty. Personally, I'd even say better than coin-flip is pushing it. I mean, look at what Scott Aaronson is writing on that matter:
"I also continue to profess ignorance of exactly how many years it will take to realize those principles in the lab, and of which hardware approach will get there first. […] This year [=2025] updated me in favor of taking more seriously the aggressive pronouncements—the “roadmaps”—of Google, Quantinuum, QuEra, PsiQuantum, and other companies about where they could be in 2028 or 2029." -- https://scottaaronson.blog/?p=9425
This is nothing like "nearly guaranteed" in five years.
> and Kyber is two decades old
But the implementations aren't, and it hasn't been under heavy scrutiny for that long. One can very much make the point that we weren't that critical when elliptic curve cryptography entered the scene, but we now have the luxury of having these heavily battle-tested primitives and implementations at our disposal, so why throw them out of the window so eagerly? Also, an interesting comparison to elliptic curve cryptography is that it took until 2005 to get good key exchange primitives and until 2011 to get good signature primitives (Curve25519, now known as X25519, and Ed25519 respectively), and mainstream availability of those took waaaay longer.
Coming back to this again, for a second remark:
> when ECDH is nearly guaranteed to be broken in five years
Another important point is that any quantum attack on ECDH will require inherently expensive equipment for the foreseeable future, see adgjlsfhk1's comment https://news.ycombinator.com/item?id=47665561 , whereas a stupid Kyber implementation error in a mainstream library can very likely end up being attackable by a Metasploit plugin. Our threat model should most definitely include nation-state attackers prominently, but they are not at all the only attackers that we should focus on. There is still significant value in keeping out attackers that did not spend >$100k on equipment.
> Yes, djb keeps making the same crankish complaint without any evidence or reason, that doesn't mean you have to repeat it uncritically.
I did not repeat it uncritically, I just happen to share his conclusion, even after months of following the pro and contra discussion. Also, how can you say he complains without reason? He has explained them at length, see https://cr.yp.to/2025/20250812-non-hybrid.pdf for example. Whether his methods of complaining are commendable or effective is another topic, though.
This very much exists. In particular, the cryptographic timestamps that are supposed to protect against future tampering are themselves currently using RSA or EC.
Of course, the modern version of this is putting the timestamp and a hash of the signature on the blockchain.
The weird thing we have right now is that quantum computers are absolutely hopeless at doing anything with RSA and, as far as I know, nobody has even tried EC. And that state of the art has not moved much in the last decade.
And then suddenly, in a few years there will be a quantum computer that can break all of the classical public key crypto that we have.
This kind of stuff might happen in a completely new field. But people have been working on quantum computers for quite a while now.
If this is easy enough that in a few years there will be a quantum computer that can break everything, then people should be able to build something in a lab that breaks RSA-256. I'd like to see that before jumping to conclusions about how well this works.
> Sure, papers about an abacus and a dog are funny and can make you look smart and contrarian on forums. But that’s not the job, and those arguments betray a lack of expertise. As Scott Aaronson said:
> Once you understand quantum fault-tolerance, asking “so when are you going to factor 35 with Shor’s algorithm?” becomes sort of like asking the Manhattan Project physicists in 1943, “so when are you going to produce at least a small nuclear explosion?”
To summarize, the hard part of scalable quantum computation is error correction. Without it, you can't factorize essentially anything. Once you get any practical error correction, the distance between 32-bit RSA and 2048-bit RSA is small. Similarly to how the hard part is to cause a self-sustaining fissile chain reaction, and once you do making the bomb bigger is not the hard part.
This is what the experts know, and why they tell us of the timelines they do. We'd do better not to dismiss them by being smug about our layperson's understanding of their progress curve.
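To put rough numbers on the error-correction point (my own back-of-the-envelope using the usual surface-code rules of thumb, not figures from the article): below threshold, the logical error rate falls exponentially with code distance while the physical-qubit cost per logical qubit grows only quadratically, which is why going from "good enough for a small demo" to "good enough to run Shor on RSA-2048" is comparatively cheap once error correction works at all.

    # Rule-of-thumb surface-code scaling; the constants are assumptions for
    # illustration: logical error/round ~ 0.1 * (p/p_th)^((d+1)/2), and roughly
    # 2*d^2 physical qubits per logical qubit.
    p, p_th = 1e-3, 1e-2          # assumed physical error rate and threshold

    def distance_for(target: float) -> int:
        d = 3
        while 0.1 * (p / p_th) ** ((d + 1) / 2) > target:
            d += 2                # surface-code distances are odd
        return d

    for target in (1e-6, 1e-12):  # modest demo vs. Shor-on-RSA-2048 territory
        d = distance_for(target)
        print(f"logical error {target:g}: distance {d}, ~{2*d*d} physical qubits per logical qubit")

On these assumed numbers, the million-fold tighter error target costs only about 5x more physical qubits per logical qubit; the brutal part is clearing the threshold and the first few code distances at all.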
The actual challenge is that we still don't know whether we can build QC circuits that factorize faster than classical computers, both because the number of qubits required has gone from ridiculously impossible to probably still impossible, AND because we still don't know how to build circuits with enough qubits to break classical algorithms at sizes beyond, or faster than, what classical computers can do. If you're paying attention to the breathless reporting, you'd have a very skewed perception of where we're at.
It's also easy to deride your critics as just being contrarian on forums, but that complaint happens to distract from the actual lack of real forward progress towards building a QC. We've made progress on all kinds of different things except for actually building a QC that can scale to solve non-trivial problems. It's the same critique as with fusion energy, with the sole difference being that we actually understand how to build a fusion reactor, just not one that's commercially viable yet, and fusion energy would be far more beneficial than a QC, at least today.
There's also the added challenge that quantum computers currently have only one real application, which is as a weapon to break crypto. Other use cases are generally hand-waved as "possible", but it's unclear whether they actually are (i.e. you can't just take any NP problem and make it faster even if you had a quantum computer; even traveling salesman is not known to be faster, and even if it were, it's likely still not economical on a QC).
Speaking of experts, Bas is a cryptography expert with a specialty in QC algorithms, not an expert in building quantum computers. Scott Aaronson is also well respected, but he isn't building QC machines either; he's a computer scientist who understands the computational theory, but that doesn't make him a better prognosticator if the entire field is off on a fool's errand. It just means he's better able to parse and explain the actual news coming from the field in context.
If you look back at my writing from 2025 and earlier, I'm on the conservative end of Q-day estimates: 2035 or later. My primary concern then was that migrations take a lot of time: even 2035 is tight.
I'm certainly not an expert on building quantum computers, but what I hear from those that are worries me. Certainly there are open challenges for each approach, but that list is much shorter now than it was a few years ago. We're one breakthrough away from a CRQC.
There is no such equivalent for qubits or error correction. You can't say "we produce this much extra error correction per day, so we will hit the target at such-and-such a date".
There is also something weird in the graph in https://bas.westerbaan.name/notes/2026/04/02/factoring.html. That graph suggests that even with the best error correction in the graph, it is impossible to factor RSA-4 with fewer than 10^4 qubits, which seems very odd. At the same time, Scott Aaronson wrote: "you actually can now factor 6- or 7-digit numbers with a QC". In the graph, that would suggest the error rate must already be very low, or that quantum computers with an insane number of qubits exist.
Something doesn't add up here.
At the theory level, there were only theories, then a few breakthroughs, then some linear production time, then a big boom.
> Something doesn't add up here.
Please consider it might be your (and my) lack of expertise in the specific sub-field. (I do realize I am saying this on Hacker News.)
It's because the plot is assuming the use of error correction even for the smallest cases. Error correction has minimum quantity and quality bars that you must clear in order for it to work at all, and most of the cost of breaking RSA4 is just clearing those bars. (You happen to be able to do RSA4 without error correction, as was done in 2001 [0], but it's kind of irrelevant because you need error correction to scale so results without it are on the wrong trendline. That's even more true for the annealing stuff Scott mentioned, which has absolutely no chance of scaling.)
You say you don't see the uranium piling up. Okay. Consider the historically reported lifetimes of classical bits stored using repetition codes on the UCSB->Google machines [1]. In 2014 the stored bit lived less than a second. In 2015 it lived less than a second. 2016? Less than a second. 2017? 2018? 2019? 2020? 2021? 2022? Yeah, less than a second. And this may not surprise you but yes, in 2023, it also lived less than a second. Then, in 2024... kaboom! It's living for hours [4].
You don't see the decreasing gate error rates [2]? The increasing capabilities [3]? The ever larger error correcting code demonstrations [4]? The front-loaded costs and exponential returns inherent to fault tolerance? TFA is absolutely correct: the time to start transitioning to PQC is now.
[0]: https://www.nature.com/articles/414883a
[1]: https://algassert.com/assets/2025-12-24-qec-foom/plot-half-l... (from https://algassert.com/post/2503 )
[2]: https://arxiv.org/abs/2510.17286
...
365 days later, you have 365 grams after spending ungodly amounts of energy to separate isotopes. AND STILL NO BOMB! Not even a small one. These scientists are just some bullshit artists.
52kg later: BOOM!
I don't like this analogy very much, because in practice making a nuclear reaction is much, much easier than making a nuclear bomb. You don't need any kind of enrichment or anything, just a big enough pile of natural uranium and graphite [1].
Making a bomb, on the other hand, required an insane amount of engineering: from doing isotope separation to enrich U-235 to an absurd level (and/or extracting plutonium from the waste of a nuclear reactor) to designing a way to concentrate a beyond-critical mass of fissile material.
The Manhattan project isn't famous without reason, it was an unprecedented concerted effort that wouldn't have happened remotely as quickly in peacetime.
The Manhattan Project scientists actually did this before anybody broke ground at Los Alamos. It was called the Chicago Pile. And if the control rods were removed and the SCRAM disabled, it absolutely would have created a "small nuclear explosion" in the middle of a major university campus.
Given the level of hype and how long it's been going on, I think it's totally reasonable for the wider world to ask the quantum crypto-breaking people to build a Chicago Pile first.
> On 2 December 1942
https://en.wikipedia.org/wiki/Chicago_Pile-1
> on July 16, 1945
https://en.wikipedia.org/wiki/Trinity_(nuclear_test)
Two and a half years. This is still a good metaphor for "once you can make a small one, the large one is not far at all."
(Not impossible; more strictly "beyond reach" economically and processing-wise, operating on overestimates of the effort and approach.)
They ignored letters from Albert Einstein on the topic, they ignored or otherwise disregarded several letters from the Canadian/British MAUD Committee / Tube Alloys group, and it took a personal visit from an Australian for them to sit up and take note that such a thing was actually within reach, although it'd take some manpower and a few challenges along the way.
* https://en.wikipedia.org/wiki/MAUD_Committee is one place to start on all that.
This is far from true. On the experimental side, gate fidelities and physical qubit numbers have increased significantly (a couple of orders of magnitude). On the theory side, error correction techniques have improved astronomically -- the overhead of error correction has dropped by many orders of magnitude. On the error correction side, progress has been feverish over the last 4 years in particular.
I'm still dubious about the accelerated timeline, given that quite a bit of what is presented as progress in the field is fraud or borderline fraud when inspected closely (e.g. some of the recent Majorana claims by Microsoft are at best overhyped, at worst fraud).
>[...] the availability of HPKE hybrid recipients, which blocked on the CFRG, which took almost two years to select a stable label string for X-Wing (January 2024) with ML-KEM (August 2024), despite making precisely no changes to the designs. The IETF should have an internal post-mortem on this, but I doubt we’ll see one
My kingdom for a standards body that discusses and resolves process issues.
Your reasoning relies on this being true:
> [CRQCs] will be slow, expensive, and power hungry for at least a decade
How could you know that? What if it was 5 years? 1 year? 6 months?
I predict there will be an insane global pivot once Q-day arrives. No nation wants to invest billions in science fiction. Every nation wants to invest billions in a practical reality of being able to read everyone's secrets.
> Once you understand quantum fault-tolerance, asking “so when are you going to factor 35 with Shor’s algorithm?” becomes sort of like asking the Manhattan Project physicists in 1943, “so when are you going to produce at least a small nuclear explosion?”
That quote, alone, removed a lot of assumptions I had been carrying around.
There is a critical flux/density/mass threshold for nuclear bombs. You can create small nuclear explosions with particle accelerators, which is how it all started. You just cannot scale those accelerators to anything macroscopic. But the microscopic explosions were done very, very early; otherwise nobody would have had the necessary data to later extrapolate this to larger scales.
The interesting question after that first discovery of fission was only about how large the critical density or mass would be for a self-sustaining reaction. But as soon as you knew the critical mass, and had enough fissile material to go over that threshold, things became feasible, and easier with even more material.
Quantum computing doesn't have such a threshold; quite the opposite. As far as we know, larger problem sizes and larger numbers of qubits make things harder. Quantum error correction only changes the exponent in that relation.
Fault tolerance is the hard part for QC, once it's achieved, the difference between factoring 35 and RSA-2048 is an engineering challenge, not an impossibility.
Prior to 1940 it was known that clumping enough fissile material together could produce an explosion. There were engineering questions around how to purify uranium and how to actually construct the weapon etc. But the phenomenon was known.
I say this because there’s a meme that governments are cooking up exotic technologies behind closed doors which I personally tend to doubt.
This is an almost perfect analogy to the MP though. We know exactly what could happen if we clumped enough qubits together. There are hard engineering challenges in actually doing so, and governments are pretty good at clumping dollars together when they want to.
FWIW, constructing a weapon with highly enriched uranium is relatively simple. At the time, the choice was made to use a gun-type weapon that shot a projectile of highly enriched uranium into a "target" of highly enriched uranium. The scientists were so sure it would work that the design didn't necessitate a live test. This was "Little Boy", which was eventually dropped on Hiroshima.
Fat Man utilized plutonium which required an implosion to compress the fissile material that would set off the chain reaction. This is a much more complex undertaking, but it's much more efficient. Namely, you need much less fissile material, and more of that fissile material is able to participate in the chain reaction. This design is what allows for nuclear tipped missiles. The same principles can be applied to a U-235 based weapon as well.
The implosion-based design is super interesting to read about. One memorable aspect is that the designers realized that applying a tamper of uranium (U-238) around the fissile material allows for a significant improvement in yield. The chain reaction is exponential, so the few extra nanoseconds that the uranium keeps the fissile material together lead to a significant increase in yield.
Like when the government made XKeyscore[1]?
It was also about far more than the science. It was about industrializing the entire production process and creating industrial capability that simply did not exist before.
And the Manhattan Project cost $30B in today’s money. Compared with some of the numbers Congress has allocated recently, I’d call that a bargain.
You don't use zero days immediately. You stockpile them for when the time is right. A quantum computer is the ultimate zero day.
When I say they’re not hiding “exotic technologies” I’m referring to things that would at a minimum win a Nobel prize. Alien technologies like antigravity or faster than light travel that people sometimes talk about. I am not even talking about things like Stuxnet which was impressive but not revolutionary.
>In symmetric encryption, we don’t need to do anything, thankfully
This is valuable because it offers a non-scalable but very important extra layer that a lot of us can implement in a few important places today, or could have for a while now. A lot of people and organizations here may have some critical systems where they can make a meatspace-manpower vs. security trade by using pre-shared keys and symmetric encryption instead of the more convenient and scalable normal PKI. For me personally the big one is WireGuard, where as of a few years ago I've been able to switch the vast majority of key site-to-site VPNs to using PSKs. This of course requires out-of-band distribution, i.e., huffing it on over to every single site and manually sharing every single profile via direct link in person, vs. conveniently deployable profiles. But for certain administrative capability, where the magic circle in our case isn't very large, this has been doable, and it gives some leeway there, as any traffic being collected now or in the future will be worthless without actual direct hardware compromise.
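For anyone wanting to do the same: WireGuard's optional pre-shared key is just one extra per-peer line, generated with `wg genpsk` and mixed into the (otherwise Curve25519-based) handshake, so recorded traffic can't be decrypted later by breaking the asymmetric part alone. The key still has to reach both ends out of band, exactly as described above. Roughly:

    # wg genpsk > site-a-to-b.psk      (carry this file over out of band)
    #
    # Then, in the tunnel config on BOTH peers:
    [Peer]
    PublicKey = <the other side's Curve25519 public key>
    PresharedKey = <contents of site-a-to-b.psk>
    AllowedIPs = 10.0.0.2/32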
That doesn't diminish the importance of PQE and industry action in the slightest, and it can't scale to everything, but you may have software you're using that is capable of adding a symmetric layer today without any other updates. It might be worth considering as part of the low-hanging immediate fruit for critical stuff. And in general, depending on organization and threat posture, it might be worth imagining a worst-case-scenario world where symmetric and OTP is all we have that's reliable over long time periods, and how we'd deal with that. In principle, sneakernetting around gigabytes or even terabytes of entropy securely, with a hardware and software stack that automatically takes care of the rough edges, should be doable, but I don't know of any projects that have even started around that idea.
PQE is obviously the best outcome; we ""just"" switch, albeit with a lot of pain from increased compute and changed assumptions in protocols, but we're necessarily going to be leaning on a lot of new math and systems that won't have had the tires kicked nearly as long as all the conventional ones have. I guess it's all feeling real now.
Given TFA accepts that error correction is the bottleneck for progress, that the gap between any and lots of error correction is small, and that we presently have close to zero error correction, nothing has practically changed with the reduced qubit requirements.
Of course, it’s totally fine to have and announce a change of view on the topic, though I don’t see how the Google paper materially requires it.
The most concrete issue for me, as highlighted by djb, is that when the NSA insists against hybrids, vendors like telecommunications companies will handwrite poor implementations of ML-KEM to save memory/CPU time etc. for their constrained hardware that will have stacks of timing side channels for the NSA to break. Meanwhile X25519 has standard implementations that don't have such issues already deployed, which the NSA presumably cannot break (without spending $millions per key with a hypothetical quantum attack, a lot more expensive than side channels).
The fact that only the NSA does that, and that they really have no convincing arguments, seems like the biggest reason why the wider internet should only roll out hybrids. Then possibly wait decades for everything to mature and then reconsider plain modes of operation.
Truly, truly can't understand why anyone finds this line of reasoning plausible. (Before anyone yells Dual_EC_DRBG, that was a NOBUS backdoor, which is an argument against the NSA promoting mathematically broken cryptography, if anything.)
Timing side channels don't matter to ephemeral ML-KEM key exchanges, by the way. It's really hard to implement ML-KEM wrong. It's way easier to implement ECDH wrong, and remember that in this hypothetical you need to compare to P-256, not X25519, because US regulation compliance is the premise.
(I also think these days P-256 is fine, but that is a different argument.)
That is insanely irresponsible and genuinely concerning. I don't care if they have a magical ring that defies all laws of physics and assuredly prevents any adversary stealing the backdoor. If an organization is implementing _ANY_ backdoor, they are an adversary from a security perspective and their guidance should be treated as such.
1) We can broadly trust the US government
2) We should adopt new encryption partly designed and funded by the US government, and get rid of the battle tested encryption that they seem not to be able to break
Forgive me for being somewhat suspicious of your motives here
The NSA still has the secret Suite A algorithms for their most sensitive information. If they think those are better than the current public algorithms, and their goal is to get telecommunications vendors to have better encryption, then why don't they publish them so telcos could use them?
> Truly, truly can't understand why anyone finds this line of reasoning plausible. (Before anyone yells Dual_EC_DRBG, that was a NOBUS backdoor, which is an argument against the NSA promoting mathematically broken cryptography, if anything.)
The NSA weakened DES against brute-force attack by reducing the key size (while making it stronger against differential cryptanalysis, though).
https://en.wikipedia.org/wiki/Data_Encryption_Standard#NSA's...
Also, the NSA put a broken cipher in the Clipper Chip (besides all the other vulnerabilities).
These are algorithms that the NSA will use in real systems to protect information up to the TOP SECRET codeword level, through programs such as CNSA 2.0 [1] and CSfC [2].
[1] https://media.defense.gov/2025/May/30/2003728741/-1/-1/0/CSA...
[2] https://www.nsa.gov/Resources/Commercial-Solutions-for-Class...
I guess the NSA thinks they're the only one that can target such a side channel, unlike, say, a foreign government, which doesn't have access to the US Internet backbone, doesn't have as good mathematicians or programmers (in NSA opinion), etc.
> Timing side channels don't matter to ephemeral ML-KEM key exchanges, by the way. It's really hard to implement ML-KEM wrong. It's way easier to implement ECDH wrong, and remember that in this hypothetical you need to compare to P-256, not X25519, because US regulation compliance is the premise.
Except for KyberSlash (I was surprised when I looked at the bug's code, it's written very optimistically wrt what the compiler would produce...)
So do you think vendors will write good code within the deadlines between now and... 2029? I wouldn't bet my state secrets on that...
That's a timing side-channel, irrelevant to ephemeral key exchanges, and tbh if that's the worst that went wrong in a year and a half, I am very hopeful indeed.
Age should be using 256-bit file keys, and default to PQ keys in asymmetric mode.
It simply is not. NIST and BSI specifically recommend all of AES-128, AES-192, and AES-256 in their post-quantum guidance. All of my industry peers I have discussed this with agree that AES-128 is fine for post-quantum security. It's a LinkedIn meme at best, and a harmful one at that.
My opinion changed on the timeline of CRQCs. There is no timeline in which CRQCs are theorized to become a threat to symmetric encryption.
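To put rough numbers behind that (my own back-of-the-envelope, not figures from the cited guidance): Grover needs on the order of sqrt(N) sequential iterations, and spreading the search over M machines only cuts that to sqrt(N/M), a sqrt(M) speedup instead of the factor of M you get classically.

    from math import sqrt

    N = 2.0 ** 128         # AES-128 keyspace
    machines = 10 ** 6     # a wildly generous number of perfect quantum computers

    serial = sqrt(N)               # ~1.8e19 (~2^64) sequential Grover iterations
    parallel = sqrt(N / machines)  # ~1.8e16 (~2^54) per machine -- still absurd
                                   # when each fault-tolerant iteration is slow
    print(f"{serial:.2e} serial iterations, {parallel:.2e} per machine across {machines:,} machines")

That arithmetic is what sits behind guidance like the NIST/BSI position mentioned above, with AES-256 there for anyone who wants the full pre-quantum margin back.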
Then again, something something md5. 'Just replace those bytes with sha256()' is apparently also hard. But it's a lot easier than digging into different scenarios under which md5 might still be fine and accepting that use-case, even if only for new deployments.
You can't just throw out "Grover's algorithm is difficult to parallelize" etc. It's not the same as implementation, especially when it comes to quantum computers. It's very specialized.
Slightly off-topic but: Does anyone know what the Signal developers plan on doing there to replace SGX? I mean it's not like outside observers haven't been looking very critically at SGX usage in Signal for years (which the Signal devs have ignored), but this does seem to put additional pressure on them.
Security researchers like Matthew Green seem to care[0], the Signal people surely do, I myself do, too. Isn't that enough to raise that question?
> if you're paranoid enough to worry about it
You make it seem like that's an outlandish thought, when in reality there have been tons of reported vulnerabilities for SGX. And now QC represents another risk.
> it's not like Signal would be less secure than any other third party if SGX were broken
That's a weird benchmark. Shouldn't Signal rather be measured by whether it lives up to the security promises it makes? Signal's whole value proposition is that it's more secure than "third parties".
[0]: https://blog.cryptographyengineering.com/category/signal/
While I agree with Filippo, the way you worded this makes me think that you may not be aware that Gutmann is also an expert in the field. So, if you are giving Filippo weight because he is an expert, it is worth giving some amount to Gutmann as well.
Also, I went over Filippo's post again and still can't see where it references the Gutmann / Neuhaus paper. Are we talking about the same post?
> This paper presents implementations that match and, where possible, exceed current quantum factorisation records using a VIC-20 8-bit home computer from 1981, an abacus, and a dog.
From the link:
> Sure, papers about an abacus and a dog are funny and can make you look smart and contrarian on forums. But that’s not the job, and those arguments betray a lack of expertise[1]. As Scott Aaronson said[2]:
> > Once you understand quantum fault-tolerance, asking “so when are you going to factor 35 with Shor’s algorithm?” becomes sort of like asking the Manhattan Project physicists in 1943, “so when are you going to produce at least a small nuclear explosion?”
[1]: https://bas.westerbaan.name/notes/2026/04/02/factoring.html
E.g. can I use my Yubikey with FIDO2 for SSH together with PQ encryption, such that I am safe from "store now, decrypt later", but can still use my Yubikey (or Android Keystore, for that matter)?
They/we need to migrate those protocols to PQ now, so that you all can start migrating to PQ keys in time, including the long tail of users that will not rotate their keys and hardware the moment the new algorithms are supported.
For example, it might be too late to get anything into Debian for it to be in oldstable when the CRQCs come!
Sure, I'm just trying to understand the consequences of that. Felt great to finally have secure elements on smartphones and laptops (or Yubikeys), protecting against the OS being compromised (i.e. "you access my OS, but at least you can't steal my keys").
I was wondering if PQ meant that when it becomes reality, we just get back to a world where if our OS is compromised, then our keys get compromised, too. Or if there is a middle ground in the threat model, e.g. "it's okay to keep using your Yubikey, because an attacker would need to have physical access to your key, specialised hardware AND access to a quantum computer in order to break it". Versus "you can stop bothering about security keys because with "store now, decrypt later", everything you do today with your security keys will anyway get broken with quantum computers eventually".
If you are doing encryption, then you do have reason to worry, and there aren't great options right now. For example, if you are using age you should switch to hybrid software ML-KEM-768 + hardware P-256 keys as soon as they are available (https://github.com/str4d/age-plugin-yubikey/pull/215). This might be a scenario in which hybrids provide some protection, so that an attacker will need to compromise both your OS and have a CRQC. In the meantime, depending on your threat model and the longevity of your secrets (and how easily they can be rotated in 1-2 years), it might make sense to switch to software PQ keys.
If you are doing a post-quantum key exchange and only authenticating with the Yubikey, then you are safe from after-the-fact attacks. Well, as long as the PQ key exchange holds up, and I am personally not as optimistic about that as I’d like to be.
Let me rephrase it to see if I understand correctly: so it is fine to keep using my security keys today for authentication (e.g. FIDO2?), but everything else should use PQ algorithms, because the actual data transfers can be stored now and decrypted later.
Meaning that today (and for a few years), my Yubikey still protects me from my key being stolen when my OS is compromised.
Correct?
Another challenge of the transition is how much silicon we have yet to even implement. Smart cards? Mobile acceleration/offloading? We're at the mercy of vendors.
> [1] The whole paper is a bit goofy: it has a zero-knowledge proof for a quantum circuit that will certainly be rederived and improved upon before the actual hardware to run it on will exist. They seem to believe this is about responsible disclosure, so I assume this is just physicists not being experts in our field in the same way we are not experts in theirs.
The zero-knowledge proof may come across as something of a gimmick, but two of the authors (Justin Drake and Dan Boneh) have strong ties to cryptocurrency communities, where this sort of thing is not unusual.
I also don’t think it’s particularly strange to focus on cryptocurrencies. This is one of the few domains where having access to a quantum computer ahead of others could translate directly into financial gain, so the incentive to target cryptocurrencies is quite big.
Changing the cryptographic infrastructure we rely on daily is difficult, but still easier than, for example, in Bitcoin, where users would need to migrate their coins to a quantum-resistant scheme (whenever such a scheme is implemented). Given the limited transaction throughput, migrating all vulnerable coins would take years, and even then, there would remain all those coins whose keys have been lost.
Satoshi is likely dead, incapacitated, or has lost or destroyed his keys, and thus will not be able to move his coins to safety. Even if he still has access, the movement of an estimated one million BTC, which are currently priced in by the market as permanently lost, would itself be a disruptive price event, regardless of whether it is done with good or bad intentions.
If you know which way the price will go (obviously way down in this case), you can always profit from such a price move, even if Satoshi's coins were blacklisted and couldn't be sold directly.
How? I just googled: about 55 million addresses with bitcoin in them, about 144 blocks per day, about 3000 to 5000 tx per block.
In something like 100 days all the coins would be moved to other addresses.
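Sanity-checking that estimate with the numbers as given (and, unrealistically, assuming every single transaction in every block is a migration):

    addresses      = 55_000_000        # funded addresses, per the rough figure above
    blocks_per_day = 144

    for tx_per_block in (3_000, 5_000):
        days = addresses / (blocks_per_day * tx_per_block)
        print(f"{tx_per_block} tx/block -> ~{days:.0f} days")
    # ~127 days at 3,000 tx/block, ~76 days at 5,000 -- so "something like 100
    # days" only holds if migrations crowd out all other traffic the whole time.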
I gotta say it'd be hilarious if to speed up that migration-to-quantum-resistant-addresses process, the Bitcoin community were to finally allow bigger blocks.
EDIT: I take it that if the network had to have full blocks for 100 days, then "shit would happen". Maybe they should force an orderly move: e.g. only addresses ending with "3a" are eligible to be moved in a block whose hash ends with "3a", etc., to prevent congestion?
This paper claims 60-70% throughput loss with 59 times(!) larger storage space requirements.
Doubt it; the moment people get vocal about their funds being stolen, that will be it for crypto: the bank run will crash it. The only way it could work is if you steal too little to be noticed, which will also be too little to finance your venture...
The snarky reply would be that having their funds stolen is not something that seems to discourage people from having cryptocurrencies as it happens all the time:
Is that so? I always thought that the design choice that only hashes of the public keys were public was a pretty clever way to make the whole scheme quantum-proof. What did I miss?
* https://arxiv.org/pdf/2603.28627
The new thing here seems to be the use of the neutral atom technique. Supposedly we are up to 96 entangled qubits for a second or two based on neutral atoms.
Shouldn't that be enough capability to factor 15 using Shor's?
It's maybe good to remember that SIDH was broken in polynomial time by a classical computer 3 years ago... I'm really concerned by the current rush for PQ solutions and what the real intentions behind it are. On a side note, there might even be a world where a powerful enough quantum computer to break 2048-bit RSA will never exist ('t Hooft, Palmer... recent quantum gravity theories).
https://www.nature.com/articles/s41598-024-53708-7
> Overall, 8,219,999 = 32,749 × 251 was the highest prime product we were able to factorize within the limits of our QPU resources. To the best of our knowledge, this is the largest number which was ever factorized by means of a quantum annealer; also, this is the largest number which was ever factorized by means of any quantum device without relying on external search or preprocessing procedures run on classical computers.
"Factored" is doing a lot of lifting here and is borderline deceptive. Plenty of researchers have long ago pointed out that this won't scale, see M Mosca for reference.
It is not clear at all that quantum annealing provides any speedup compared to a classical computer.
Annealing is in fact proven to be able to do certain things faster than any classical CPU; whether you can make use of that particular feature is a different question. If you're into spin glasses, maybe.
You had written. As long as we're in agreement that rushing PQ appears to be the appropriate choice, the only question is the precise form it should take, with the author arguing that hybrid would be unacceptably slow to roll out for various social and bureaucratic reasons.
He's also pointing out that the only scenario in which hybrid is of benefit is one in which crypto-related QC remains either relatively ineffective or extremely expensive in the medium term. Since that assumption is looking increasingly suspect, it calls into question the point of hybrid to begin with. In the face of cheap QC, hybrid adds zero value.
[0] https://csrc.nist.gov/csrc/media/Events/2023/third-workshop-...
Having a CRQC and your adversaries not knowing is far more valuable than the few hundred billion you could get from cracking (and tanking) BTC.
- There is a dark outlook on Bitcoin as the community and devs can't seem to coordinate. Especially on what to do with the "Satoshi coins"
- Ethereum has a hard but clear path (pretty much full rewrite) with a roadmap [0]
- The highly optimized "fast chains" (Solana & co) are in a lot of trouble too.
It would be funny if Bitcoin the asset ended up migrating to Ethereum as another ERC-20 token.
- [0] https://pq.ethereum.org/
Existing PQ standards have signatures with the wrong efficiency tradeoffs for usage in Bitcoin -- large signatures that are durable against a lot of use and support fast signing, while for Bitcoin signature+key size is critical, keys should be close to single-use, and signing time is irrelevant.
To the extent that I've seen any opposition related to this, it has only been to schemes that were too inefficient, or to proposals to confiscate the assets of people not adopting the proponent's scheme (which immediately raises concerns about backdoors and consent).
There is active development for PQ signature standards tailored to Bitcoin's needs, e.g. https://delvingbitcoin.org/t/shrimps-2-5-kb-post-quantum-sig... and I think progress looks pretty reasonable.
Claims that there is no development are, as far as I can tell, just backscatter from a massive fraud scheme that is ongoing (actually, at least two distinct cons with an almost identical script). There are criminal fraudsters out seeking investments in a scheme to raise money to build a quantum computer and steal Bitcoins. One of them has reportedly raised funds approaching a substantial fraction of a billion dollars from victims. For every sucker they convince to give them money, they probably create 99 other people panicked about it (since believing it'll work is a prerequisite to handing over your money).
They're going to lose those assets regardless, either to the first hacker with a QC or via a protocol-level burn. The latter is arguably better for the network's long-term health, as it reduces circulating supply rather than subsidizing an attacker.
I can understand disagreeing about timelines but is there a flaw in the logic that once the underlying crypto is broken, "consent" is a moot point?
> A scam creates the credulous, not the skeptical. To portray skeptics as byproducts of a scam is an insult to logic — and a classic straw man fallacy.
No. When the scam is successful against a target the target is in on it and all for it and hands over their money. When the scam fails there are a number of different outcomes and one of them is thinking "this is real, going to happen, very scary, and also absolutely illegal, immoral, and/or self defeating, so I want no part of it".
Inherently, scams tend to convert only a small percentage of their prospects -- ones that don't aren't ambitious enough (e.g. aren't asking for enough money) and risk running their course too quickly by signing on too many people and getting too much exposure too fast.
This is far from my understanding. Changing out this signature scheme is hard work, but doesn't require a rewrite of the VM.
The fact that the signature size is multiplied by ~10 will greatly affect things like blockspace (which I guess is even more of a problem with Bitcoin!).
Also, they are the only blockchain, I believe, that puts an emphasis on allowing a large number of validators to run on very modest hardware (in the ballpark of an RPi, N100 or phone).
My understanding is they will need to pack it with a larger upgrade to solve all those problems, the so-called zkVM/leanVM roadmap.
And then there are the L2 that are an integral part of the ecosystem.
So this is the greatest upgrade ever made to Ethereum, pretty much a full rewrite, larger than the transition to proof of stake. I remember before the proof-of-stake migration they were planning to redo the EVM too (with something WASM-based at the time), but they had to abandon that plan. Now it seems there is no choice but to do it.
Aren't they relying on asymmetric signing as well?
Also companies like PQShield.
The hardware (IP) exists to solve this in time, and is being integrated into products gradually.
No idea how widespread it will become or over what timescale.
Also...
> Trusted Execution Environments (TEEs) like Intel SGX and AMD SEV-SNP and in general hardware attestation are just f**d. All their keys and roots are not PQ and I heard of no progress in rolling out PQ ones, which at hardware speeds means we are forced to accept they might not make it, and can’t be relied upon.
This part is embarrassing. We’ve had hash-based signatures that are plenty good for this for years and inspire more confidence for long-term security than the lattice schemes. Sure, the private keys are bigger. So what?
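For readers wondering why hash-based signatures inspire that confidence: the simplest member of the family, a Lamport one-time signature, rests on nothing but the one-wayness of a hash function. A toy sketch (strictly one message per key; real schemes like LMS/XMSS/SLH-DSA build trees of these to sign many messages and shrink the keys):

    import hashlib, os

    H = lambda data: hashlib.sha256(data).digest()

    def keygen():
        # 256 pairs of random preimages; the public key is their hashes.
        # Keys are ~16 KiB each -- "bigger", as conceded above.
        sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
        pk = [(H(a), H(b)) for a, b in sk]
        return sk, pk

    def sign(sk, msg: bytes):
        digest = int.from_bytes(H(msg), "big")
        # Reveal one preimage per bit of the message digest.
        return [sk[i][(digest >> i) & 1] for i in range(256)]

    def verify(pk, msg: bytes, sig) -> bool:
        digest = int.from_bytes(H(msg), "big")
        return all(H(sig[i]) == pk[i][(digest >> i) & 1] for i in range(256))

    sk, pk = keygen()
    sig = sign(sk, b"attest this firmware image")
    print(verify(pk, b"attest this firmware image", sig))   # True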
We will also need some clean way to upgrade WebAuthn keys, and WebAuthn key management currently massively sucks.
Compared to SGX, a more critically impacted component is the TPM chip: secure/measured boot depends on the TPM, and then there's the cost of replacing all servers and OSes...
Of course, many critical components on a motherboard and CPU verify their firmware using non-post-quantum keys, which is another issue.
If you want something that includes details on how they were deployed, I'm afraid that's all very recent and I don't have good references.
I feel like the NSA pushing a (definitely misguided and obviously later exploited by adversaries) NOBUS backdoor has poorly percolated into the collective consciousness, missing the NOBUS part entirely.
See https://keymaterial.net/2025/11/27/ml-kem-mythbusting/ for whether the current standards can hide NOBUS backdoors. It talks about ML-KEM, but all recent standards I read look like this.
Also, that was the time of export ciphers and Suite A vs Suite B, which were very explicit about there being different algorithms for US NatSec vs. everything else. This time there's only CNSA 2.0, which is pure ML-KEM and ML-DSA.
So no, there is no history of the NSA pushing non-NOBUS backdoors into NatSec algorithms.
In fairness, that was from 1975. I don't particularly trust the NSA, but I don't think things they did half a century ago are a great way to extrapolate their current interests.
Google with SoftBank invested about $230M into QC last year. Microsoft, IBM and Google have spent $15B on QC combined, over all the time they have researched it. $15B spent in 20 years, less than $1B per year, by three companies.
Google spent upwards of $150B last year on datacenters.
This may tell us something about how close we are to a working quantum computer.
The analogy to a small atomic bomb is on point.
Problem is, the experts don't tell the truth; they say whatever game-theory version of the world they came up with that will make people do what they think people should do. If experts just said the literal truth it'd be different, and then when they walked it back it would be understandable.
But when it later becomes clear the experts told outright lies because they thought it'd induce the right behavior, that goes out the window.
Does this mean Bitcoin is going to $0? Absolutely not, it’s just going to take the community organizing and putting in the gigantic effort to make the changes. Frankly I’m not personally clear if that means all existing cold wallets need to be flashed/replaced? All existing Bitcoin miner software needs to be updated? All existing Bitcoin node software needs to be updated?
The market cap of a cryptocurrency — or any commodity, really — is not its market value. If you have all bitcoins in existence and try to sell them you will crash the price to zero. The slope of that price graph — from the current market price to zero — determines how much you make in total. Most cryptocurrency exchanges have public order books, so you can see how much (and at what price per coin) you can actually sell into the market before you eat up all the bids. Last time I checked it was closer to $10bn than $1trn.
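To make the "walk the order book" point concrete with invented numbers (the bid levels below are made up for illustration, not real market data): you realize each bid's price only for that bid's size, so the proceeds land far below quantity times the last trade.

    # Invented bid levels: (price in USD, size in BTC), best bid first.
    bids = [(60_000, 2_000), (55_000, 5_000), (40_000, 20_000),
            (20_000, 80_000), (5_000, 300_000)]

    to_sell = 1_000_000                     # BTC dumped into the book
    mark_to_market = to_sell * bids[0][0]   # naive "market cap" style valuation

    proceeds, remaining = 0, to_sell
    for price, size in bids:
        filled = min(remaining, size)
        proceeds += filled * price
        remaining -= filled
        if remaining == 0:
            break

    print(f"mark-to-market ${mark_to_market:,}, realized ${proceeds:,}, {remaining:,} BTC unsold")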
The real problem is building a system that can survive noise, errors, and decoherence. Once you solve that, scaling it up is non-trivial but follows a clear, exponential path.
At the very least, you want to start using hybrid legacy / pqc algorithms so engineers at Cisco will know not to limit key sizes in PDUs to 128 bytes.
The incident you're thinking of doesn't sound familiar to me. None of the extensions in 1.1 really were that big, though of course certs can get that big if you work hard enough. Are you perhaps thinking instead of the 256-511 byte ClientHello issue addressed in [1]?
[0] https://blog.cloudflare.com/pq-2025/ [1] https://datatracker.ietf.org/doc/html/rfc7685
Given the author's "safety first" stance on pqc, it seems a bit incongruent to continue to fly to conferences...
The idea that a startup would be competitive in the VC “the only thing that matters are the feels” environment seems crazy to me.