Given an ECDSA signature and control over the curve domain parameters, it's straightforward to create a second private key that matches the original public key, without knowledge of the original signing private key. Here's how:
To start with, you need to understand a little bit about how curve cryptography works. A curve point is simply a solution to an equation like
y^2 = x^3 + ax + b mod p
The "curve" itself consists of the parameters a, b, and p; for instance, in P-256, a is -3, b is (5ac6 35d8 aa3a 93e7 b3eb bd55 7698 86bc 651d 06b0 cc53 b0f6 3bce 3c3e 27d2 604b), and p is 2^256 - 2^224 + 2^192 + 2^96 - 1. To use that curve for cryptography, we standardize a base point G, which generates all the points we'll use. A private key in ECC is simply a scalar k modulo n, the order of the group generated by G; the public key corresponding to that private key is kG (the curve scalar multiplication of the base point G by our secret k). Everybody using P-256 uses the same base point; it's part of the standard.
Assume that we have a signature validator in CryptoAPI that allows us to specify our own nonstandard base point. We're ready to specify the attack; it's just algebra:
Let's call Q the public key corresponding to the signature; for instance, Q could be the ECC public key corresponding to an intermediate CA.
Q is a point on a named curve (like P-256). Q = xG for some private key x; we don't, and won't ever, know x. G is the standard generator point for (say) P-256.
What we'll do is define a "new curve", which is exactly P-256, but with a new generator point. We'll generate our own random private key --- call it x' --- and then from that random private key compute a malicious generator G' = (1/x')*Q.
On our "new curve", Q remains a valid point (in fact, our evil curve is the same curve as P-256, just with a different generator), but now Q = x'G', and we know x'.
Now we sign a fake EE certificate with our evil private key x'. Presumably, Windows is just looking at the public key value and, reading between the lines of the DoD advisory, the curve equation, but not the base point. By swapping base points, we've tricked Windows into believing the private key corresponding to Q is x', a key we know, and not x, the key we don't know.
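The algebra above is small enough to check end to end on a toy curve. The sketch below uses a tiny curve over GF(17) with hand-rolled point arithmetic; all the concrete values (the curve, G, the keys) are illustrative, not from any real system:

```python
# Toy demo of the generator-substitution trick on the small curve
# y^2 = x^3 + 3x + 5 over GF(17), which has prime order 23.
# Illustrative only -- real attacks target P-256, as in the Sage script below.
P, A = 17, 3            # field prime and curve coefficient a (b = 5 is unused in the formulas)
N = 23                  # group order; prime, so every point is a generator
INF = None              # point at infinity

def ec_add(p1, p2):
    """Affine point addition on y^2 = x^3 + A*x + b mod P."""
    if p1 is INF: return p2
    if p2 is INF: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return INF                                       # p2 == -p1
    if p1 == p2:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P   # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P) % P          # chord slope
    x3 = (s * s - x1 - x2) % P
    return (x3, (s * (x1 - x3) - y1) % P)

def ec_mul(k, pt):
    """Double-and-add scalar multiplication k*pt."""
    acc = INF
    while k:
        if k & 1:
            acc = ec_add(acc, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return acc

G = (1, 3)                                # "standard" generator (public)
x = 13                                    # victim's private key (attacker never learns it)
Q = ec_mul(x, G)                          # victim's public key (public)

x_evil = 5                                # attacker's freely chosen private key
G_evil = ec_mul(pow(x_evil, -1, N), Q)    # malicious generator G' = (1/x')*Q

# With base point G', the attacker's x_evil is a valid private key for Q:
assert ec_mul(x_evil, G_evil) == Q
```

Note that no discrete logarithm is ever solved: the attacker derives G' from their own key, rather than picking G' and solving for the key.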
I'm paraphrasing a shorter writeup Pornin provided, and the basic curve explanation is mine and not his, so if I've worded any of this poorly, blame me and not Thomas Pornin. The actual exploit-development details of the attack will involve figuring out in what circumstances attackers can swap in their own base point; you'd hope that the actual details of the attack are subtle and clever, and not as simple as "anyone could have specified their own base point straightforwardly at any time".
See also this related exercise in Sean Devlin's Cryptopals Set 8:
https://toadstyle.org/cryptopals/61.txt
This attack --- related but not identical to what we suspect today's announcement is --- broke an earlier version of ACME (the LetsEncrypt protocol).
I'm normally not much of a pessimist but things like this really make me wish we could just burn all the things and start over.
Wrap all of this with an "IN MY OPINION"...
That would make things worse, because we'd make the same mistakes again. I've been on many start-over projects (Xeon Phi, for example, threw out the P4 pipeline and went back to the Pentium U/V). It doesn't work. You know what the most robust project I've worked on is? The instruction decoder on Intel CPUs. The validation tests go back to the late 1980s!
You make progress by building on top and fixing your mistakes because there literally IS NO OTHER WAY.
Go read about hemoglobin: it's one of the oldest genes in the genome, used by everything that uses blood to transport oxygen, and it is a GIGANTIC gene, full of redundancies. Essentially a billion years of evolution accreted one of the most robust and useful genes in our body, and there's nothing elegant about it.
I think that's where we're headed: large systems that bulge with heft but contain so much redundant checking code that they become statistically more robust.
How about... no. Review the code and determine that our undefined behavior is understood well enough that we can accept the bad inputs. Those asserts & tests were created for a reason and need to be maintained, not just removed because no one can spot the subtle failure modes.
As long as you are talking about knowledge, not artefacts. There is indeed no choice but to accrete, organise, and correct knowledge over time, because anything you forget is something you might get wrong all over again.
Artefacts are different. It often makes sense to rebuild some of them from scratch, using more recent knowledge. We rarely do that, because short-term considerations usually win out (case in point: Qwerty).
> I think that's where we're headed: large systems that bulge with heft but contain so much redundant checking code that they become statistically more robust.
Only if we give up any hope of improving performance, energy consumption, or die area. Right now the biggest gains can be found by removing cruft. See Vulkan for instance.
Modern cryptography has gotten better at taking human (developer) error into account for designing secure API’s, but the fact of the matter is that the math is subtle, and cryptography in general is subtle due to the places where it collides with reality, so burning everything down and starting over is likely to just cause us to rediscover issues we already know about.
Say we stop using X509 certificates. We will continue to use signing to cryptographically bind claims, and we will still use PKI-like structures to attest trustworthiness, so we are still vulnerable to the same approaches of attacks even if we don’t use certificates.
We have actually learned a few things, such as that cryptographic agility seems to cause more issues than the problems it solves, so the field is improving in important ways, but if it’s not a key matching weakness, it’s going to be some Unicode encoding BS or some other critical but not validated data somewhere else, or... (because this is a real in the wild problem) using phone numbers to generate cryptographic keys for coin wallets.
At least with the ACME vulnerability, there was a novel service model involved. Here, we're talking about certificates that allow you to embed what is in effect your own cryptographic algorithm to interpret the certificate under!
This is a rare instance where I'm happy to concede that closed source allowed a terrible bug to survive far longer than it would have if nobody needed to break out a copy of Ghidra to read the code that validated elliptic curve signatures.
Is that so? The ability to shift to new hash functions and ciphers within the bounds of a single protocol seems to have accelerated the adoption of better primitives.
Signature verification is one of the hardest things to get right. One reason is, they're harder to test: when you encrypt or hash something, you have a whole bunch of bits you can check against test vectors. With signature verifications, you only have one bit: it's a match, or it's a fail.
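To illustrate the asymmetry: a single known-answer test pins down every output bit of a hash, while a verifier's entire observable behavior is one boolean, so you also need negative vectors that are supposed to fail. A small Python sketch (the `run_verifier_tests` harness is hypothetical, not from any real library):

```python
import hashlib

# A known-answer test for SHA-256("abc") checks all 256 output bits at
# once, so almost any internal bug is caught by a single vector.
KAT = "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"
assert hashlib.sha256(b"abc").hexdigest() == KAT

# A signature verifier exposes only one bit per case: accept or reject.
# A buggy-but-too-permissive verifier still accepts every valid triple,
# so positive vectors alone can't catch it; you also need negative
# vectors (wrong key, wrong curve parameters, malformed encodings) and
# must assert that those FAIL.  Hypothetical harness:
def run_verifier_tests(verify, positive, negative):
    ok = all(verify(sig, msg, key) for sig, msg, key in positive)
    return ok and not any(verify(sig, msg, key) for sig, msg, key in negative)
```

A stub verifier that accepts everything passes every positive vector and is only caught by the negative ones, which is exactly the failure mode of an overly permissive verifier.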
Moreover, it's very easy to forget to check something (here, that we are using the same base point). Other constructions, like EdDSA, are simpler, and verification requires a single equality check. Harder to botch.
And even then, implementers can be tempted to get clever, which requires extra care. I've personally been bitten by not verifying mathematical jargon, and mistakenly thought "birational equivalence" was complicated speak for "bijection". Almost, but not quite. This single oversight led to a critical, easy-to-find, easy-to-exploit vulnerability.
We found out 15 months later, 1 full year after public release, by tinkering with the tests. A cryptographer would have found it in 5 minutes, but as far as I can tell, they're all very busy.
A signature verification returning an actual AD would be interesting as well.
Personally, I'd hope most of the formal verification folks are working in firmware for industrial/medical embedded systems, and/or the microcontroller designs that go into those same systems. A lack of encryption (outside of military contexts) doesn't usually directly cause people to die.
But it's only for parsing the data. What gets done with it after parsing can still be buggy.
Hang on, if I could come up with a way of securely sharing the key I used with my recipient, I wouldn't need to actually mail the OTP to them, since they could generate it themselves. Should probably include a nonce too.
Now, if only there were a secure method to share the key...
You can scroll the code block horizontally to see what's hidden, or here is the equation without code formatting:
y^2 = x^3 + ax + b mod p
ECParameters ::= CHOICE {
  namedCurve       OBJECT IDENTIFIER,
  implicitCurve    NULL,
  specifiedCurve   SpecifiedECDomain
}
PKIX, which is what TLS and most other standards use for ASN.1 message grammars, already mandates that "implicitCurve and specifiedCurve MUST NOT be used". See https://tools.ietf.org/html/rfc5480#section-2.1.1 (There are many other related RFCs. It gets very confusing, especially once you take into account obsolete and draft RFCs.)
Newer curves, like the EdDSA curves, are each specified with a unique OID and have a simpler public key grammar; see https://tools.ietf.org/html/rfc8410. Older curves share an OID and a more generic ("flexible") syntax, thus the ECParameters field. (RSA public keys also have a parameters field, but it's unused. Annoyingly, some implementations omit the field altogether while others set a NULL value.)
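A minimal sketch of the kind of check RFC 5480 implies: reject any EC parameters field that is not a bare namedCurve OID. This hand-rolls a one-byte DER tag inspection purely for illustration; real code should use a proper ASN.1 decoder:

```python
# Simplified sketch of enforcing RFC 5480's "MUST NOT" on ECParameters:
# accept only the namedCurve CHOICE arm, i.e. the parameters field must
# be a bare OBJECT IDENTIFIER.  This inspects a single DER tag byte;
# production code should fully decode the structure instead.
TAG_OID      = 0x06  # namedCurve     -> the only arm PKIX allows
TAG_NULL     = 0x05  # implicitCurve  -> "MUST NOT be used"
TAG_SEQUENCE = 0x30  # specifiedCurve -> "MUST NOT be used"

def ec_parameters_allowed(params_der: bytes) -> bool:
    """Accept only namedCurve (a bare OID) as EC domain parameters."""
    return len(params_der) > 0 and params_der[0] == TAG_OID

# DER encoding of the P-256 OID 1.2.840.10045.3.1.7 (the namedCurve form)
p256_named = bytes.fromhex("06082a8648ce3d030107")
assert ec_parameters_allowed(p256_named)
assert not ec_parameters_allowed(b"\x05\x00")              # implicitCurve NULL
assert not ec_parameters_allowed(b"\x30\x03\x02\x01\x01")  # a specifiedCurve SEQUENCE
```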
There probably aren't any such certificates out in the Web PKI, because Mozilla's rules prohibit them (and, almost as important, Firefox won't validate them). But Microsoft trusts a whole bunch of dubious kinda-sorta-wannabe Certificate Authorities; maybe one of those issued crap with specifiedCurve for a standard curve? Or maybe a corporate internal CA?
That's a test somebody can do: If you use a Microsoft internal CA can you easily mint these stupid specifiedCurve certificates with it? Or does it make you pick a named curve and emit the OID properly? How about other popular private CA solutions? EJBCA maybe?
This stems from a time when people thought maximum flexibility in cryptography was a good idea. It's not.
From what I understand we can put custom parameters into a certificate, but the parameters come with the key, not the signature. So we have
CA cert + key A + parameters A
signs
My cert + key B + parameters B + forged signature from A
Now we can only mount this attack if we can somehow control a part of the parameters (the basepoint) that is used to verify the forged signature. Is the bug that windows is using parameters B to verify signature from A? Or am I missing something and there is another way to supply parameters with a signature?
And yes, the bug here would be that Windows accepted parameters B without confirming they match parameters A; it only checked that the public key was the same.
So you have an official and trusted root / intermediate cert C1, which contains pubkey 1 and parameters A and is used with privkey 1 (secret, obviously). When signing, a leaf doesn't usually specify parameters (it just gets them from the trusted cert); the leaf certificate just contains a reference to the trusted cert and its public key, and of course the actual signature.
In the attack, you reuse pubkey 1 but create parameters B and an associated privkey 2, and using that you create a leaf cert that contains the same references an official certificate would contain - except you also specify parameters B, and supply a signature that validates only under parameters B. Windows then accepts both the parameters and the signature.
I tried creating a cert with custom curve parameters here: http://dpaste.com/1Q2MYWF
It seems the parameter block is all part of "Subject Public Key Info". The signature is just a binary blob at the bottom, and openssl doesn't really break it down; does the signature have its own internal encoding that allows supplying additional parameters?
And if that's the case: How does that make any sense? It sounds like just asking for trouble. (I mean... there never can be a situation where the parameters of the signature do not match the parameters of the key.)
edit: seems to be this bug letting attacker specify params https://twitter.com/thracky/status/1217175743316348929
#!/usr/bin/env sage
nistp256r1_order = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551
nistp256r1_modulus = 2**224 * (2**32 - 1) + 2**192 + 2**96 - 1
nistp256r1_a = 0xFFFFFFFF00000001000000000000000000000000FFFFFFFFFFFFFFFFFFFFFFFC
nistp256r1_b = 0x5AC635D8AA3A93E7B3EBBD55769886BC651D06B0CC53B0F63BCE3C3E27D2604B
nistp256r1_field = GF(nistp256r1_modulus)
nistp256r1 = EllipticCurve(nistp256r1_field, [0,0,0,nistp256r1_a,nistp256r1_b])
nistp256r1_base_x = 0x6B17D1F2E12C4247F8BCE6E563A440F277037D812DEB33A0F4A13945D898C296
nistp256r1_base_y = 0x4FE342E2FE1A7F9B8EE7EB4A7C0F9E162BCE33576B315ECECBB6406837BF51F5
nistp256r1_gen = nistp256r1(nistp256r1_base_x, nistp256r1_base_y, 1)
curve = nistp256r1
curve_order = nistp256r1_order
curve_gen = nistp256r1_gen
CG = Zmod(curve_order)
### these are "inputs" to the system. Only pubkey is known to an attacker
privkey = CG.random_element()
Q = curve(ZZ(privkey) * curve_gen)
### The attacker generates the necessary malicious generator
kprime = CG.random_element()
kprimeinv = kprime.inverse_of_unit()
Gprime = ZZ(kprimeinv) * Q
### We can now verify that the attacker knows a private key corresponding
### to the public key under their generator
Qprime = curve(ZZ(kprime) * Gprime)
print("Q==Q'", Qprime == Q)
print(Qprime.xy())
print(Q.xy())
### So if a verifier accepts Gprime as the generator,
### this is bad news: the attacker now knows a working private key for Q.
### You can implement something relying on it, with Gprime as your generator, here.
This is using the real secp256r1, i.e. NIST P-256, parameters. You can try this in sagemath's online tools and see that it runs very quickly. inverse_of_unit is a nice function that computes the inverse of kprime in the multiplicative group modulo the prime order of the curve; you can do exactly the same thing with the XGCD algorithm from sage.arith.misc. ZZ is required simply for type conversion: sage doesn't understand how to multiply a point by a Zmod element, so we help it out by converting to an integer. .xy() is not required; it simply prints affine, instead of projective, coordinates. I wrote this with sage 9, which is based on python 3. The 0,0,0 are not necessary for the elliptic curve constructor (two parameters = short Weierstrass) but I'm in the habit...
As a corollary to Lagrange's theorem, any prime-order group is cyclic, which means any randomly chosen point is a generator. Since the group of curve points under elliptic curve point addition has prime order, any randomly chosen point on the curve necessarily generates the whole group. However, we have to derive the new generator from a randomly chosen secret key, rather than picking a generator first, because recovering the scalar that relates a given G' to Q is a discrete logarithm problem and we can't solve those.
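The `inverse_of_unit`/XGCD remark is ordinary modular arithmetic; outside sage you can do the same with extended Euclid, or (in Python 3.8+) with `pow(k, -1, n)`. A quick sketch:

```python
# Modular inverse two ways, mod the prime order of P-256 (the value of
# nistp256r1_order in the Sage script above).
N = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

def xgcd(a, b):
    """Extended Euclid: returns (g, s, t) with g = gcd(a, b) = s*a + t*b."""
    s0, s1, t0, t1 = 1, 0, 0, 1
    while b:
        q, r = divmod(a, b)
        a, b = b, r
        s0, s1 = s1, s0 - q * s1
        t0, t1 = t1, t0 - q * t1
    return a, s0, t0

def inv_mod(k, n=N):
    """Inverse of k modulo n via extended GCD."""
    g, s, _ = xgcd(k % n, n)
    if g != 1:
        raise ValueError("not invertible")
    return s % n

k = 0xDEADBEEF
assert inv_mod(k) == pow(k, -1, N)   # Python 3.8+ built-in gives the same answer
assert (k * inv_mod(k)) % N == 1
```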
P256 is obviously quite difficult for humans to deal with in their minds, so if you want to play with a more human-sized curve, try:
E=EllipticCurve(GF(17),[3,5])
which is of prime order with 23 points (and so cofactor 1, like P-256).
I found this: https://media.defense.gov/2020/Jan/14/2002234275/-1/-1/0/CSA...
So based on my limited understanding:
1. The certificates have a place for defining curve parameters.
2. The attacker specifies their own parameters that match a standard curve except for the base point, which they choose themselves. With the right ECC math they are able to generate a valid signature for the certificate even though they don't know the private key corresponding to the original curve's generator.
3. The old crypto API -didn't- check that certificates were signed under a fixed set of valid parameters. It would just check for signature validity, allowing the cert to be spoofed.
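Point 3 suggests what the fix has to look like: compare every domain parameter against the named curve, not just the public key bytes. A hedged sketch (the names and structure here are made up for illustration, not the actual crypt32 internals):

```python
from typing import NamedTuple

class CurveParams(NamedTuple):
    p: int   # field prime
    a: int   # curve coefficient a
    b: int   # curve coefficient b
    gx: int  # base point x
    gy: int  # base point y
    n: int   # group order

# P-256 as registered (same constants as the Sage script above).
P256 = CurveParams(
    p=2**256 - 2**224 + 2**192 + 2**96 - 1,
    a=0xFFFFFFFF00000001000000000000000000000000FFFFFFFFFFFFFFFFFFFFFFFC,
    b=0x5AC635D8AA3A93E7B3EBBD55769886BC651D06B0CC53B0F63BCE3C3E27D2604B,
    gx=0x6B17D1F2E12C4247F8BCE6E563A440F277037D812DEB33A0F4A13945D898C296,
    gy=0x4FE342E2FE1A7F9B8EE7EB4A7C0F9E162BCE33576B315ECECBB6406837BF51F5,
    n=0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551,
)

def matches_named_curve(explicit: CurveParams, named: CurveParams = P256) -> bool:
    """The buggy check compared only key bytes; a correct check compares
    every field, crucially including the base point (gx, gy)."""
    return explicit == named

# A malicious parameter set: identical to P-256 except for the generator.
evil = P256._replace(gx=0x1234, gy=0x5678)   # illustrative values, not a real attack point
assert matches_named_curve(P256) and not matches_named_curve(evil)
```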
Interesting stuff. So you might be able to cryptographically prove whether there were ever any attacks in the wild from this at a given time (if we assume dates are checked, at least)?
I wonder what happens at the Microsoft Security Response Center when a big vuln hits like this? Does it tie up all their resources just working on the one vuln?
Generally no, if only because of Microsoft’s sheer size.
For something like this issue, while its potential impact is big, I would guess that it only tied up the team(s) that work on CryptoAPI.
edit: more detail
Another attack, implemented on ECDSA and similar in spirit (though not the same attack), is in Sean Devlin's Set 8 of Cryptopals:
Maybe they just mean because it can sign Authenticode signatures
That said, this same patch set also has a separate pre-auth RCE on Microsoft's Remote Desktop Gateway, which has been documented as CVE-2020-0609 (not ...-0601). See https://www.kb.cert.org/vuls/id/491944/
Doesn't npm use node.js for this, which uses openssl?
> https://nodejs.org/api/tls.html#tls_tls_ssl
Third-party tools connecting to the npm server that use Windows' TLS library would absolutely be affected, though.
- Fake windows updates
- The notorious SMB protocol -- "The Server Message Block (SMB) protocol provides the basis for file and print sharing and many other networking operations, such as remote Windows administration. To prevent man-in-the-middle attacks that modify SMB packets in transit, the SMB protocol supports the digital signing of SMB packets." Could prob impersonate a Windows server or computer in a home group, IDK.
- Likely fractal attacks on active directory that would allow injecting admin accounts on any work station in a network and enabling remote desktop.
- Fake SSL certs -- also: hey user, here's a [trojan] to fix the latest Windows vuln [fake Microsoft.com]. It's a race to update with the official update, really. If attackers were to DDoS the update service, it would be very, very bad.
- Fake signed trusted programs that security software may "ignore" and that windows itself would allow to run with fewer warnings. Trusted MS programs could be a very good way to write persistent root kits.
I'm sure windows experts can think of more stuff. But for me it's a good lesson for how much we depend on the certificate system for security.
Take a job as a pentester (or don’t) and you’ll look at your list, nod, and say “Yes. This is normal.”
It’s normal to be broken. That’s why we do pentests on every piece of security infrastructure.
The hypothesis that systems like this ought to be secure is empirically false. I am trying to shake the shock out of you, because your surprise = my surprise before being a pentester. But the job forces you to come to terms with the fact that everyone, everywhere, is broken, always, and this is neither surprising nor (and you’ll hate this part) a big deal.
Bug is fixed. Life goes on. Yes, of course the infrastructure could have been attacked from any time between “forever ago” and that fix. Ask yourself: why is this surprising to me? And carefully examine the assumptions with which you want to say “because it’s their job to make it secure...”
To be clear, I wish the world were different. But I wish we’d take a hard look at reality and the history of vulnerabilities. Stop thinking things are secure just because the label says “secure”. People devote their entire existence to seeking out and exploiting the tiniest imperfections, sometimes for no reason other than because it’s fun to do so. There is zero chance software would end up impenetrable under those conditions.
Hell, even Tarsnap screwed up once, and Colin is pretty much cryptographically-signed Jesus. So if someone as smart and dedicated as him can make these mistakes, what hope have we? Especially when “we” consists of a large number of programmers working together, and all the complexities that entails?
I would expect all connections to the Windows Update servers to be protected with TLS, and as a second layer the updates themselves to be signed, but if this vulnerability allows bypassing both signatures, this could be really bad.
Later
Dmitri Alperovitch at Crowdstrike says this doesn't impact Windows Update.
So: maybe.
This sounds exactly how pdf signatures were attacked and successfully defeated https://media.ccc.de/v/36c3-10832-how_to_break_pdfs https://www.youtube.com/watch?v=k8FIDGmmYvs
The NSA's Neuberger said this wasn't the first vulnerability the agency has reported to Microsoft, but it was the first one for which they accepted credit/attribution when MS asked.
Sources say this disclosure from NSA is planned to be the first of many as part of a new initiative at NSA dubbed "Turn a New Leaf," aimed at making more of the agency's vulnerability research available to major software vendors and ultimately to the public.
Snowden.
That'd be more of a communal-good, de-escalation approach. There's certainly something to be said for the fact that it displays the talent and expertise available too though (i.e. helping for recruitment).
Could be the knew about it for a while and had milked it hard until they caught someone else using it. Or like the parent said, previously discovered flaws meant that someone might catch this one, too.
More like "do the actual job they are paid to do"
Mission Statement: The National Security Agency/Central Security Service (NSA/CSS) leads the U.S. Government in cryptology that encompasses both signals intelligence (SIGINT) and information assurance (now referred to as cybersecurity) products and services, and enables computer network operations (CNO) in order to gain a decision advantage for the Nation and our allies under all circumstances.
What NSA needs are NOBUS ("nobody but us") backdoors. Dual_EC is a NOBUS backdoor because it relies on public key encryption, using a key that presumably only NSA possesses. Any of NSA's adversaries, in Russia or Israel or China or France, would have to fundamentally break ECDLP crypto to exploit the Dual_EC backdoor themselves.
Weak curves are not NOBUS backdoors. The "secret" is a scientific discovery, and every industrialized country has the resources needed to fund new cryptographic discoveries (and, of course, the more widely used a piece of weak cryptography is, the more likely it is that people will discover its weaknesses). This is why Menezes and Koblitz ruled out secret weaknesses in the NIST P-curves, despite the fact that their generation relies on a random number that we have to trust NSA about being truly random: if there was a vulnerability in specific curves NSA could roll the dice to generate, it would be prevalent enough to have been discovered by now.
Clearly, no implementation flaw in Windows could qualify as a NOBUS backdoor; many thousands of people can read the underlying code in Ghidra or IDA and find the bug, once they're motivated to look for it.
Sounds like "we find so many critical bugs... we don't need all of them to achieve our goals, so let's blow some of them for PR"
> Within the federal space, we've been making unprecedented plans for patching systems as soon as this patch is released today. In my agency we're going to be aggressively quarantining and blocking unpatched systems beginning tomorrow. This patch has been the subject of many classified briefings within government agencies and military.
https://old.reddit.com/r/sysadmin/comments/eoll74/all_hands_...
Some speculation on CVE-2020-0601.
Earlier versions of the Windows cryptography API only supported a handful of elliptic curves from NIST Suite B. They could not handle, say, an arbitrary prime curve in Weierstrass form with user-defined parameters.
…
While it could not grok arbitrary curves, the Windows API made an attempt to recognize when a curve with explicit user-defined parameters was in fact identical to a "built-in" curve that is supported.
It appears that mapping was "lazy": it failed to check that all curve parameters were identical to those of the known curve.
In particular, switching the generator point yields a "different curve" on which an attacker can forge signatures that match a victim's public key.
> https://twitter.com/esizkur/status/1217176214047219713
It looks like this may be a caching issue: There's a CCertObjectCache class in crypt32.dll. In the latest release its member function FindKnownStoreFlags (called from its constructor) started checking the public key and parameters
> https://twitter.com/thracky/status/1217175743316348929
ChainComparePublicKeyParametersAndBytes used to just be a memcmp before the patch. Same with any calls to IsRootEntryMatch. Both are new functions.
https://portal.msrc.microsoft.com/en-US/security-guidance/ad...
> A successful exploit could also allow the attacker to conduct man-in-the-middle attacks and decrypt confidential information on user connections to the affected software.
Firefox uses its own NSS libraries not cryptoAPI to verify certs and is completely unaffected. I assume every major browser uses NSS or their own APIs as well. And of course RSA and AES certificates remain unaffected.
Does Firefox still use NSS when using the Windows Certificate Store for the source of trusted root certs? What about Chrome?
You're right that RSA certificates are unaffected. There's no such thing as AES certificates, though.
Yes. When this feature is enabled, Firefox effectively copies certificates from one of the Windows trust stores but continues to use its own (NSS) logic for trust decisions. Note also that Firefox's config switch only picks up your local changes - a corporate CA, a MITM proxy on a dev's workstation, something like that. Firefox continues to rely on Mozilla's judgement, not Microsoft's, for global trust policy.
> What about Chrome?
Chrome is probably affected. Chrome uses the platform (in this case crypt32.dll) trust decisions and then layers on additional rules from Google, such as the requirement for proof of CT logging. So unless an additional rule is blocking the weird curves they'll pass on Chrome on Windows.
And the discussion on HN: https://news.ycombinator.com/item?id=22039481
X.509 is an over-engineered legacy-cruft-encrusted nightmare. I've implemented stuff that uses it and I never, even after the most careful auditing by myself and peers, leave with the sense that I have handled everything correctly or that my code is totally air-tight.
https://twitter.com/AmitaiTechie/status/1217156973268893696
Of course, that relies on not having Defender disabled by an alternate product.
https://docs.microsoft.com/en-us/windows/security/threat-pro...
> In Windows Server 2016, Windows Defender AV will not disable itself if you are running another antivirus product.
https://support.symantec.com/us/en/article.tech237177.html
Or Mcafee:
https://kc.mcafee.com/corporate/index?page=content&id=KB8245... (search for DisableRealtimeMonitoring)
For a deeper dive: on a security assessment I ran into Defender blocking procdump from dumping lsass. The workaround was to find a machine with McAfee installed, where that behavior was allowed.
(the tweet where I found it at https://mobile.twitter.com/NSAGov/status/1217152211056238593 has an image version of that PDF, in case you don't trust that domain)
The only way the attacker can talk to the MS Crypto API is via the TLS protocol, and you can only supply parameters where they're relevant. The only option for that is to use ECDH, which allows the server to supply EC parameters for the Diffie-Hellman exchange.
My bet is that the problem is that MS Crypto API took those parameters as correct without checking them against what's in the certificate. I.e.,
ServerKeyExchange: here's the EC spec, we just need the public key.
Certificate: ah, here's the public key, and we have the ECParams - let's run the math.
:)
https://portal.msrc.microsoft.com/en-US/security-guidance/ad...
> Saleem Rashid worked out a POC for this on Slack in something like 45 minutes today
So I would be amazed if there were not some malicious certificates out there already.
Or rather, does it treat such faked certificates as authentic itself?
No, only the Windows native one. For instance, Firefox (which uses NSS) would be safe.
Also, according to https://support.microsoft.com/help/4534310, it looks like Windows 7 got security patches for this month.