You asked them to host your site. They secured it.
GitHub is the designated endpoint for your site's traffic, so they cannot be 'rogue'. You explicitly granted them control over that endpoint, and their securing that traffic does not diminish your security in any way.
It's free hosting, version controlled, and now with a free TLS security upgrade. That's actually pretty awesome.
From a Web PKI perspective I feel it's fine. DV is DV after all.
I do always create CAA records for my own domains though, even if it's just:
issue ";"
issuewild ";"
For CAA I would love to, but my registrar still doesn't allow me to create these kinds of records :/
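For reference, that deny-all pair in BIND-style zone-file syntax would look something like this (domain name hypothetical):

```
; Forbid every CA from issuing certs for this name and its subdomains
example.com.  IN  CAA  0 issue ";"
example.com.  IN  CAA  0 issuewild ";"
; Or, to allow only Let's Encrypt instead:
; example.com.  IN  CAA  0 issue "letsencrypt.org"
```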
They mean something much more like "you are communicating with a machine authorized to respond on this domain by the owner of that domain". There is no obligation that the machine in question belongs to the domain owner. So when you delegated the (sub-)domain to GitHub, you also delegated the ability to obtain at least low-assurance (DV) SSL certs, which verify exactly that: the delegation is correct and authorized, and serving HTTPS there is legitimate.
What's important is that nobody can get that authorization without your delegation. And even with Let's Encrypt, you should find you can't just stroll out and get a certificate for any domain you choose. At some point you have to have control of the domain itself to get a cert.
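That "control of the domain" step is quite literal: with Let's Encrypt's http-01 challenge, the CA only issues once a token it chose shows up under the domain's own web root. A rough sketch, with made-up placeholder values rather than a real exchange:

```shell
# Sketch of an ACME http-01 challenge. A real ACME client receives the
# token from the CA; the thumbprint is derived from the account key.
# Both values below are illustrative placeholders.
token="evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA"
thumbprint="9jg46WB3rR_AHD-EBXdN7cBkH1WOu0tA3M9fm21mqTI"

# The CA fetches http://<domain>/.well-known/acme-challenge/<token>
# and expects "<token>.<thumbprint>" back. Only whoever actually serves
# the domain's traffic can put that file in place.
mkdir -p webroot/.well-known/acme-challenge
printf '%s.%s' "$token" "$thumbprint" > "webroot/.well-known/acme-challenge/$token"
cat "webroot/.well-known/acme-challenge/$token"
```

Which is why GitHub, as the delegated host, can pass this check for your custom domain, and a random stranger cannot.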
(This is also why I have no problem with anything Cloudflare does with a certificate. There is no reason that they can't be shared with an authorized delegate by the domain owner. What matters is that CloudFlare can't do it without authorization, and if the owner wishes to revoke that delegation they have a clear path and CloudFlare can't do anything about it.[1] Cert delegation happens all the time, though; everyone running their HTTPS website off a hosted VM image is delegating the actual HTTPS-ing to the VM host, for instance.)
The tricky bit here is that you did not fully understand what you were authorizing when you delegated the domain to GitHub. No criticism intended, this is complicated business. It somehow needs to be fixed but heck if I know how.
[1]: In this case, note that CloudFlare may have a valid cert for your domain for a while after you leave, but when people check DNS to find where your domain is, they'll connect to you rather than CloudFlare. This is not a CloudFlare-specific issue, it would apply to GitHub here or any other delegate. The fundamental gap here is that domain delegation has no temporal component and SSL certificates do; an impedance mismatch is inevitable. In theory you ought to be able to revoke their certificate but that's a shipping container loaded with cans of worms.
I was surprised when they issued certificates for my domains (as well as injecting a tonne of broad CAA records into my zone). You have to disable Universal SSL from the bottom of the Crypto tab. So, on second thought, I sympathize with you.
If you have an IP based delegation (A or AAAA record) you're probably okay, but if you have CNAME delegation you're beholden to the named entity. I've commented on this before on HN when CloudFlare did the same thing to me: https://news.ycombinator.com/item?id=16579486
That's understandable. Fastmail do the same, i.e. acquiring certs for their customers' domains without asking or informing, with a view to moving them to HTTPS.
The general opinion here seems to be in favour of this practice. So we can look forward to a future where your domain may publish on the web only with the permission of a CA.
You can validate whether they've been configured correctly here: https://dnsspy.io/labs/caa-validator
The article is pretty strongly worded for something that isn't all that bad. Yes, they issued a certificate, but you've sort-of given them permission to do so by hosting your content with them. If they own/control the server, they can get their certs validated.
It's a pretty good example of why you'd want something like Certificate Transparency even on HTTP-only domains, so you know _when_ someone issues a certificate without your knowledge. I use the Oh Dear! app for that feature: https://ohdearapp.com/
Well, this is interesting: I already have a CAA DNS record on my root domain, but of course it's also set to 'letsencrypt.org', since that is what I use on my root domain. Although I guess it doesn't matter, since it's on the root and not the subdomain.
Edit: Actually, it looks like a CAA record on the root domain will also limit subdomains. So, although I already had a CAA record set up, it looks like this new GitHub feature will work as expected when it rolls out to my account, without any changes, since I was already using letsencrypt.
Bare -> LE delegation
WWW -> explicit LE delegation
* -> no delegations, and will override "bare" since resolution walks up the domain tree.
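The walk-up is just label stripping: a CA checks the exact name for CAA records first, then each parent in turn, stopping at the first record set it finds (per RFC 8659). A sketch with a hypothetical name:

```shell
# Order in which a CA would look for CAA records for www.example.com:
# the exact name first, then each parent zone up the tree.
name="www.example.com"
while [ -n "$name" ]; do
  echo "check CAA at: $name"
  case "$name" in
    *.*) name="${name#*.}" ;;  # strip the leftmost label
    *)   name="" ;;            # reached the top, stop
  esac
done
```

So a CAA record on the bare domain covers any subdomain that doesn't set its own.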
Oh Dear! looks really interesting (though I won't pay for monitoring my personal blog).
I am not sure I fully understand your HTTP-only remark, since how the communication is made (HTTP-only, HTTPS, IMAP, etc.) is not related to how the certificate is issued (which is what ends up in CT).
(I'm not affiliated)
It's using Let's Encrypt under the hood, and only generating a cert for the custom CNAME pointing at the GitHub page.
If you don't like that, don't set a CNAME record.
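The delegation in question is just a zone entry along these lines (names hypothetical):

```
; Point the site at GitHub Pages; GitHub then serves the traffic for
; www.example.com and can pass Let's Encrypt's validation for it
www.example.com.  IN  CNAME  username.github.io.
```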
"GitHub Pages sites have been issued SSL certificates from Let's Encrypt, enabling HTTPS for your custom domain. This isn't officially supported yet and it's not possible for you to enable and enforce it on your sites at this time."
EDIT: found the "official statement" here https://gist.github.com/coolaj86/e07d42f5961c68fc1fc8#gistco...
Fortunately LE are moving towards shorter and shorter validity periods for certs, which at least limits your risk somewhat.
GitHub is gradually (and silently) deploying HTTPS for custom-domain websites hosted on GitHub Pages, using DV certificates from Let's Encrypt.
The author has a right to be annoyed that this was done without notification (though I would say that, despite how it was done, it was no harm, no foul). I also eagerly await this change for my own GitHub Pages-hosted custom-domain sites.
They are not redirecting port 80 traffic though, at least not yet.
In my book that's not rogue. If you don't trust them to serve HTTPS, why are you trusting them to serve HTTP? Feels a bit like outrage for the sake of it.
Imagine you are the host of a domain and you receive an HTTPS request.
What are your options?
A) Drop the request? Then clients fall back to HTTP and the user gets MITMed.
B) Serve a self-signed certificate.
C) Serve a certificate trusted by a well-known authority.
D) MITM yourself with CloudFlare? Put CloudFlare in front, and CloudFlare will proxy the traffic to GitHub in plain HTTP.
Now talking about risks:
$ openssl s_client -servername blog.securem.eu -connect blog.securem.eu:443 | openssl x509 -noout -dates
notBefore=Apr 15 15:48:38 2018 GMT
notAfter=Jul 14 15:48:38 2018 GMT
https://letsencrypt.org/2015/11/09/why-90-days.html
The certificates are valid for only 90 days.
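If you want to turn a notAfter line like the one above into a days-remaining figure, something like this works (assuming GNU date; "now" is pinned to a fixed date here purely so the arithmetic is reproducible):

```shell
# Days remaining on a cert, given openssl's notAfter output.
not_after="Jul 14 15:48:38 2018 GMT"              # copied from the output above
exp=$(date -u -d "$not_after" +%s)
now=$(date -u -d "Apr 18 2018 00:00:00 GMT" +%s)  # use $(date -u +%s) for real
days=$(( (exp - now) / 86400 ))
echo "$days days remaining"
```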
This looks like inventing a problem. If you decided to give control of part of your domain to GitHub, then yes, they will be able to serve content on your behalf. That's normal, and logical.
I also can't remember whether there's an API for legitimate owners to revoke a cert issued to someone else that's no longer OK. Let's Encrypt does have to be able to do that, but if there's no API it might be very manual.
Of course, this is useless if the certificates were issued under a different CA, so your point is still valid. Prevention is better :) !
I didn't know about LE revocation mechanism at the time.
https://www.sslshopper.com/ssl-checker.html#hostname=blog.se...
> The certificate will expire in 87 days.
Good reminder for everyone to check their cert expirations!
The vast majority of websites still use traditional, yearly certs without automation. It may not be perfect, but it's not the worst thing in the world.
Wish it was though, as we have an open Issue on GitHub from a user about not using HTTPS. This would mean 1 less open issue. :D
Maybe this is way easier than handling things on my own, but it seems like an achilles heel of fully automated SSL.
If it's cheap or free, well, hard to complain.