It's hard to overstate just how much LE changed things. They made TLS the default, so much so that you no longer had to keep unencrypted HTTP around. Kudos.
Snowden's revelations were a convincing argument, but I would place more weight on Google in its "we are become Evil" phase (realistically, ever since it attained escape velocity to megacorphood and search-monopoly status), which strove to amass all that juicy user data and keep the ISPs or anyone else from having a peek, retaining exclusivity. A competition-thwarting move with nice side benefits, in other words. That's not to say that ISPs would have known how to use that data effectively, but somebody might, and why not eliminate a potential threat systemically if possible?
Snowden may have been a coincidence, too. We knew encryption was better, it was just too much of a hassle for most sites.
(I do work at Mozilla now, but this predates me. Still think it's one of its most significant (and sadly often overlooked) contributions though.)
LE allowed more sites to get certificates. This has obvious benefits for e-commerce, for example.
But so-called "tech" companies, i.e., "Big Tech", have, both before and since LE started, continued to carry out the largest intentional mass-scale erosion of privacy in human history.
The exfiltrated data is encrypted in transit using TLS. This may prevent ISPs or other passive network observers from competing with the so-called "tech" companies in the data collection, surveillance and ad services business.
Arguably the use of TLS certificates increases privacy from ISPs or other passive network observers, but it does not increase privacy from so-called "tech" companies, who are perhaps the greatest threat to privacy that computer users face. Their "business model" depends on violating privacy norms.
And, in fact, commercial CA certificates pre-installed in browsers and required on the www ("WebPKI") effectively obstruct computer users from monitoring their own egress traffic in real time. Hence corporations and other computer users must work around the "WebPKI" to perform "TLS inspection".
Sorry to everyone else who was listening in on the wire. Come back with a warrant, I guess?!
For example, the cookie-consent dialog of the NYT lists:
- Store and/or access information on a device (178 vendors)
- Use limited data to select advertising (111 vendors)
- Create profiles for personalised advertising (135 vendors)
- Use profiles to select personalised advertising
- Understand audiences through statistics or combinations of data from different sources (92 vendors)
There is no way to escape any of this unless you spend several hours per week clicking through these dialogs and adjusting adblockers. And even if you block all cookies, evercookies and fingerprinting, there are still Cloudflare, Amazon, GCP and Azure, who know your cross-site visits.
The NSA is no longer listening because there is TLS everywhere? Sure, and the earth is flat.
I read NYT with no cookies, no Javascript and no images. Only the Host, User-Agent (googlebot) and Connection headers are sent. A TLS forward proxy sends the requests over the internet, not the browser. No SNI. No meaningful "fingerprint" for advertising.
This only requires accessing a single IP address used by NYT. No "vendors".
TLS is monitored on the network I own. By me
I inspect all TLS traffic. Otherwise connection fails
I’d be very surprised if they haven’t had several of the root trust entities compromised from day one. I wouldn’t rely on TLS with any of the typical widely-deployed trust chains for any secrecy at all if your opponent is US intelligence.
Otherwise I find it a scourge, particularly when I want to run https over a private network, but browsers have a shitfit because I didn't publicly announce my internal hosts.
There's plenty of traffic that has no need to be encrypted, and where not much privacy is added since the DNS queries are already leaked (as well as what the site operator and their many "partners" can gather).
I'm glad you can get free certs from Let's Encrypt, but I hate that https has become mandatory.
I don't recall the exact details, but it was basically buggered: short key length. Long enough to challenge an 80386 Beowulf cluster but no match for whatever was humming away in a very well-funded machine room.
You could still play with all the other exciting dials and knobs, SANs and so on but in the end it was pretty worthless.
Don't forget when flying to the USA, ticking the box to say you won't try to overthrow the government.
I'm sure that clause has stopped many an invading army in their tracks.
(Seriously: I strongly suspect that Let's Encrypt's ISRG are the good guys. But a security mindset should make you question everything, and recognize when you're taking something on faith, or taking a risk, so that it's a conscious decision, and you can re-evaluate it when priorities change.)
Modern TLS doesn't even rely on the privacy of the private key as much as it used to: nowadays, with (perfect) forward secrecy, it's mainly used to establish trust, after which the two parties generate transient session keys.
* https://en.wikipedia.org/wiki/Forward_secrecy
So even if the private key is compromised sometime in the future, past conversations cannot be decrypted.
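As a rough sketch of the idea (filenames are arbitrary, and this assumes an OpenSSL recent enough to support X25519): each side generates a throwaway key pair for the session and derives a shared secret that is never transmitted.

```shell
# Each side generates a fresh (ephemeral) key pair for this session only.
openssl genpkey -algorithm X25519 -out alice.key
openssl pkey -in alice.key -pubout -out alice.pub
openssl genpkey -algorithm X25519 -out bob.key
openssl pkey -in bob.key -pubout -out bob.pub

# Each side combines its own private key with the peer's public key and
# arrives at the same shared secret, which is never sent over the wire.
openssl pkeyutl -derive -inkey alice.key -peerkey bob.pub -out alice.secret
openssl pkeyutl -derive -inkey bob.key -peerkey alice.pub -out bob.secret
cmp alice.secret bob.secret && echo "shared secrets match"
```

Because the session key pairs are discarded afterwards, recording traffic now and stealing the server's certificate key later doesn't help: the session secret was never derivable from that key.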
Adding multiple signatures to a certificate would be difficult because the extensions must be a part of the certificate which will be signed. (However, there are ways to do such thing as web of trust, and I had thought of ways to do this with X.509, although it does not normally do that. Another way would be an extension which is filled with null bytes when calculating the extra signatures and then being filled in with the extra signatures when calculating the normal signature.)
(Other X.509 extensions would also be helpful for various reasons, although the CAs might not allow that, due to various requirements (some of which are unnecessary).)
Another thing that helps is using X.509 client certificates for authentication in addition to server certificates. If you do this, then any MITM will not be able to authenticate (unless at least one side allows them to do so). X.509 client authentication has many other advantages as well.
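A sketch of what the server-side check amounts to (all filenames and names like "alice" are placeholders for illustration): the client presents a certificate, and the server verifies that it chains to a CA it trusts.

```shell
# A CA that the server trusts for client certificates (placeholder names).
openssl req -x509 -newkey rsa:2048 -nodes -keyout clients-ca.key \
  -out clients-ca.crt -subj "/CN=Example Client CA" -days 365

# A client key pair plus a certificate signed by that CA.
openssl req -newkey rsa:2048 -nodes -keyout client.key -out client.csr \
  -subj "/CN=alice"
openssl x509 -req -in client.csr -CA clients-ca.crt -CAkey clients-ca.key \
  -CAcreateserial -out client.crt -days 365

# The chain check the server performs on the presented certificate.
# (In the real handshake the client also signs the handshake transcript
# with client.key, which is the part a MITM cannot forge.)
openssl verify -CAfile clients-ca.crt client.crt
```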
In addition, it might be helpful to allow you to use those certificates to issue additional certificates (e.g. to subdomains); but, whoever verifies the certificate (usually the client, but it can also be the server in case of a client certificate) would then need to check the entire certificate chain to check the permissions allowed by the certificate.
There is also the possibility that certificate authorities will refuse to issue certificates to you for whatever reasons.
It might be interesting for ACME to be updated to support signing the same key with multiple CAs. Three sounds like a good number. You ought to be able to trust CAs enough to believe that there won't be three of them conspiring against you, but you never really know.
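ACME itself doesn't coordinate this today, but nothing stops you from submitting the same key to several CAs already: the CSRs differ, the key doesn't. A sketch (the CN and the "ca1"/"ca2"/"ca3" names are placeholders):

```shell
# One private key stays fixed; each CA gets its own CSR over that same key.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out site.key
for ca in ca1 ca2 ca3; do
  openssl req -new -key site.key -subj "/CN=example.com" -out "$ca.csr"
done

# All three CSRs carry the same public key (identical fingerprints):
for ca in ca1 ca2 ca3; do
  openssl req -in "$ca.csr" -noout -pubkey | openssl sha256
done
```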
An active adversary engaging in a man-in-the-middle attack on HTTPS can do it with the private key, as you suggest, but they can also do it with a completely separate private key that is signed by any CA the browser trusts. There are firewall vendors that openly do this to every single HTTPS connection through the firewall.
HPKP was a defense against this (https://en.wikipedia.org/wiki/HTTP_Public_Key_Pinning), but HPKP caused other, worse problems, and was deprecated in 2017 and later removed. CT logging is another, possibly weaker defense. (It only works for CAs that participate in CT, and it only detects attacks after the fact; it doesn't make them impossible.)
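For reference, an HPKP pin was just the base64-encoded SHA-256 of the certificate's SubjectPublicKeyInfo. A sketch that computes one, using a throwaway self-signed certificate as a stand-in for a real one:

```shell
# Throwaway self-signed certificate to demonstrate on (stand-in for a real one).
openssl req -x509 -newkey rsa:2048 -nodes -keyout pin.key -out pin.crt \
  -subj "/CN=pin-demo" -days 1

# HPKP-style pin: base64( SHA-256( SubjectPublicKeyInfo DER ) )
pin=$(openssl x509 -in pin.crt -noout -pubkey \
  | openssl pkey -pubin -outform der \
  | openssl dgst -sha256 -binary \
  | openssl base64)
echo "pin-sha256=\"$pin\""
```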
It is called a private key for a reason. Don't tell anybody. It's not a secret that you're supposed to share with somebody; it's private: tell nobody. Which in this case means: don't let your "reseller" choose the key. That's now their key; your key should be private, which means you don't tell anybody what it is.
If you're thinking "But wait, if I don't tell anybody, how can that work?" then congratulations - this is tricky mathematics they didn't cover in school, it is called "Public key cryptography" and it was only invented in the 20th century. You don't need to understand how it works, but if you want to know, the easiest kind still used today is called the RSA Digital Signature so you can watch videos or read a tutorial about that.
If you're just wondering about Let's Encrypt: well, Let's Encrypt don't know, or want to know, anybody else's private keys either. The ACME software you use will, in entirely automated cases, pick random keys, not tell anybody, but store them for use by the server software, and obtain a suitable certificate for those keys despite not telling anybody what the key is.
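Concretely, with the openssl CLI the key pair is generated on your own machine, and only the public half (or a CSR built from it) ever needs to leave it; a sketch (filenames are arbitrary):

```shell
# Generate the private key locally; this file should never leave the machine.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out my.key
chmod 600 my.key

# Only the public half is ever exported and shared (e.g. inside a CSR).
openssl pkey -in my.key -pubout -out my.pub
```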
The actual communication is secured by a public/private keypair which is separate from the CA certificate.
Browsers have a set of certificates "pre-accepted"; these are the default root certificates. There have been issues with some of them over time (e.g. DigiNotar) and they have changed over time. If you hear someone speaking about the "CA cartel", this is what they mean.
So a compromised CA can make you think you are talking to someone, like your bank, when you are not. But it doesn't enable snooping on traffic on the wire.
A CA can protect from this compromise by keeping the root private key entirely offline and signing a couple of intermediate CA certificates. Then, if one of those intermediate CAs gets compromised, it can be revoked and a new intermediate CA created. You as a user of the CA can't do much, though: either you choose to trust it (or delegate that trust) or you don't.
You can create your own self-signed root CA certificates and client/server certificates signed by that CA fairly easily. But you then have to add the root CA as a trusted certificate into every device you want to use it, including those of your friends, employees, etc. This isn't quite as bad as it sounds, the last time I checked even phones would install a new root CA certificate if you opened a .crt file, and deploying CA certificates via Microsoft Group Policy is a thing.
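A minimal sketch of that setup with the openssl CLI (the CN values and filenames are placeholders):

```shell
# 1. Create a self-signed root CA (keep ca.key somewhere very safe).
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=My Private Root CA" -days 3650

# 2. Create a server key and a CSR for an internal host.
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
  -subj "/CN=intranet.example.internal"

# 3. Sign the CSR with the root CA.
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out server.crt -days 365

# 4. Anyone who trusts ca.crt can now verify server.crt.
openssl verify -CAfile ca.crt server.crt
```

The server then serves server.crt with server.key, and every device that has installed ca.crt as a trusted root will accept it.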
* https://en.wikipedia.org/wiki/Simple_Certificate_Enrollment_...
See, SCEP assumes that Bob trusts Alice to make certificates. Alice uses the SCEP server provided by Bob, but she can make any certificate that Bob allows. If she wants to make a certificate claiming she's the US Department of Education, or Hacker News, or Tesco supermarkets, she can do that. For your private Intranet that's probably fine: Alice is head of Cyber Security, she issues certificates according to local rules, OK.
But for the public web we have rules about who we should issue certificates to, and these ultimately boil down to: we want to issue certificates only to the people who actually control the name they're getting a certificate for. Historically this was extremely hardcore (in the mid-1990s when SSL was new), but a race to the bottom ensued and it became basically "Do you have working email for that domain?", and sometimes not even that.
So in parallel with Let's Encrypt, work happened to drag all the trusted certificate issuers to new rules called the "Ten Blessed Methods" which listed (initially ten) ways you could be sure that this subscriber is allowed a certificate for news.ycombinator.com and so if you want to do so you're allowed to issue that certificate.
Several ACME kinds of Proof of Control are actually directly reflected in the Ten Blessed Methods, and gradually the manual options have been deprecated and more stuff moves to ACME.
e.g. "3.2.2.4.19 Agreed‑Upon Change to Website ‑ ACME" is a specific method which is how your cheesiest "Let's Encrypt in a box" type software tends to work, where we prove we control www.some.example by literally just changing a page on www.some.example in a specific way when requested and that's part of the ACME specification so it can be done automatically without a human in the loop.
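Stripped of the protocol machinery, the challenge is just a file at a well-known URL. A hand-rolled sketch (the token and thumbprint strings below are made-up placeholders; a real ACME client receives the token from the CA and derives the key authorization from its account key per RFC 8555):

```shell
# Placeholder values: a real ACME client gets the token from the CA and
# computes the thumbprint from its own account key.
token="TOKEN-FROM-THE-CA"
key_authorization="$token.ACCOUNT-KEY-THUMBPRINT"

# The CA then fetches:
#   http://www.some.example/.well-known/acme-challenge/<token>
# and checks the response body matches the key authorization.
mkdir -p webroot/.well-known/acme-challenge
printf '%s' "$key_authorization" > "webroot/.well-known/acme-challenge/$token"
```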
If a cert has to be renewed once every 3 years, plenty of companies will build an extremely complicated bureaucratic dance around the process.
In the past this has resulted in CAs saying "something went wrong, and we should revoke, but Bank X is in a Holiday Freeze and won't be able to rotate any time in the next two months, and they are Critical Infrastructure!". Similarly, companies have ended up trying to sue their CA to block an inconvenient revocation.
Most of those have luckily been due to small administrative errors, but it has painfully shown that the industry is institutionally incapable of setting up proper renewal processes.
The solution is automated renewal, since an automated process can't be wrapped in that much bureaucracy, and by shortening the cert validity they are trying to make manual renewal too painful to keep around. After all, you can't run a two-month-long process if you need to renew every 30 days!
Tl;dr is to limit damage from leaked certs and to encourage automation.
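The automated check itself is trivial; openssl even has a flag for it. A sketch in which a 90-day demo certificate stands in for the real one, renewing inside the last 30 days of validity:

```shell
# Make a short demo certificate; in practice this is your live certificate.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.crt \
  -subj "/CN=renewal-demo" -days 90

# -checkend N exits non-zero if the cert expires within N seconds.
# 30 days = 2592000 seconds: renew once inside the last 30 days.
if openssl x509 -checkend 2592000 -noout -in demo.crt; then
  echo "certificate is fine for now"
else
  echo "inside renewal window: time to renew"
fi
```

Run daily from cron or a systemd timer, this is the whole decision a renewal job needs to make before invoking the ACME client.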
Decreasing Certificate Lifetimes to 45 Days
There's a dedicated TLSA resource record type for publishing certificate data in DNS, rather than a TXT encoding.
As far as I know no major browser supports it, and its adoption is hindered by the slow uptake of DNSSEC.
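For what it's worth, generating the record data is straightforward. A sketch of the common "3 1 1" form (DANE-EE, SPKI selector, SHA-256 matching), with a throwaway certificate and a placeholder domain:

```shell
# Demo certificate standing in for the real server certificate.
openssl req -x509 -newkey rsa:2048 -nodes -keyout dane.key -out dane.crt \
  -subj "/CN=dane-demo" -days 1

# TLSA usage 3 (DANE-EE), selector 1 (SPKI), matching type 1 (SHA-256).
digest=$(openssl x509 -in dane.crt -noout -pubkey \
  | openssl pkey -pubin -outform der \
  | openssl dgst -sha256 \
  | awk '{print $NF}')
echo "_443._tcp.example.com. IN TLSA 3 1 1 $digest"
```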
https://letsencrypt.org/repository/#isrg-legal-transparency-...
(There's dozens of us!)
It’s bizarre. There is a photo at the top, no name, no site title. No about page. Extremely untrustworthy.
Scroll down to the footer, then click on "Homepage".
Then you will get to his homepage: https://www.brocas.org/
Ironic that someone specializing in security doesn't understand how to make their information trustworthy. But I suppose it's easier and more fun to try and understand machines than other human beings.