- Create a font in which every character renders as a * and use it for a text input.
- make the input field use white text on a white background with a fixed-width font, monitor its length, and display the correct number of *'s above it using a div.
- implement the text box from scratch using divs and JS, like Google Docs does.
- implement an HTTPS password field in an iframe and communicate with it over postMessage.
I know that you're just exploring solutions because it's interesting, but all those things take longer than Let's Encrypt.
My experience with Let's Encrypt so far:
- the name of their tool was changed from "letsencrypt" to "certbot", breaking my cronjob
- daemons that access the cert/key as non-privileged users need additional fiddling with permissions, and those permissions may even be overwritten on cert renewal if set up incorrectly
- when the certs are renewed, daemons need to reload them. This means that ideally, you need to detect when a renewal actually happens (as opposed to a mere attempt), keep an up-to-date list of all daemons that use the certs, and possibly restart some of them completely, dropping all existing connections (some daemons just don't support a live reload)
I'm not saying these problems are unsolvable, but they may take way more than 5 minutes, and I, for one, opted to renew my StartCom certificate for another 3 years instead.
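For what it's worth, the "detect when a renewal actually happens" part is handled by certbot itself these days: scripts placed in /etc/letsencrypt/renewal-hooks/deploy/ (or passed via --deploy-hook) run only after a certificate was actually renewed, not on every attempt. A minimal sketch of such a hook; the ssl-cert group, the destination path, and the daemon names are assumptions, not prescriptions:

```shell
#!/bin/sh
# Example deploy hook: /etc/letsencrypt/renewal-hooks/deploy/reload-daemons.sh
# certbot executes scripts in this directory only when a certificate was
# actually renewed, not on every renewal *attempt*.
set -eu

# Re-apply permissions so non-privileged daemons can read the key;
# doing it here means a renewal can't silently clobber them.
# (group "ssl-cert" and the destination path are assumptions)
install -o root -g ssl-cert -m 640 \
    /etc/letsencrypt/live/example.com/privkey.pem \
    /etc/pki/daemon/privkey.pem

# Daemons with live reload keep their existing connections...
systemctl reload nginx
# ...those without have to be restarted, dropping connections.
systemctl restart dovecot
```

This doesn't make the "which daemons use which certs" bookkeeping go away, but it does confine it to one script.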
The cronjob correctly renewed the certificate at least once on multiple servers, with no issues whatsoever. I was also warned that the certificates were expiring via email, which I thought was awesome.
Your mileage might vary.
Wait, really? When did you do this? I thought all certs signed by their root after some time in October or November were considered invalid due to their shenanigans with WoSign. Not so?
In the real world, for many commercial operations, and especially for legacy code, there can be significant hurdles. For example, it took the NY Times 2 years to move to HTTPS, significantly more than 5 minutes, and they haven't even migrated 100% yet. https://www.thesslstore.com/blog/new-york-times-moves-websit...
Troy Hunt, a security expert, wrote about this topic a year and a half ago, and explained why, at the time he wrote it, his site wasn't HTTPS. http://www.troyhunt.com/were-struggling-to-get-traction-with... (this was before Let's Encrypt)
>unless you’re reading this in the future, you’re reading it on my blog over an insecure connection. That wasn’t such a big deal in 2009 when I started it, but it is now and there’s nothing I can do about it without investing some serious effort and dollars. I run on Google’s Blogger service which ironically given their earlier mentioned push to SSL, does not support it. Whilst Google doesn’t give me a means of directly serving content over SSL, I could always follow my own advice and wrap CloudFlare’s free service around it, but that won’t work unless I update the source of every single image I’ve ever added to almost 400 existing blog posts… and there’s no find and replace feature. This is the joy of SaaS – it’s super easy to manage and good at what it’s specifically designed to do, but try and step outside of that and, well, you’re screwed.
Another real-world example that I've encountered: your software is an intranet site running on your customer's server, and you have to play by their rules and policies about what goes on their servers and when. And it's on exactly nobody's priority list.
https://www.troyhunt.com/https-adoption-has-reached-the-tipp...
TL;DR: HTTPS is now workable and affordable.
This is entirely dependent on how your site is hosted. For some shared hosting platforms, it may not be possible at all.
It's only easy if you're using a Linux webserver. In IIS land it's a pain.
And if not, it's no big deal, users will just be informed that the page is not secure, which is true.
Don't like the message? Work to secure your damn website.
When companies like Google and Mozilla decided how to handle HTTPS, they decided based on their needs and their perception of everyone else's needs, like banks and major corporations. This led, IMHO, to a complete failure to recognize a lot of other uses for the Internet, and so their solutions fail to adequately account for them.
This has been coming for quite a long time. The time for excuses is over. If you think the safety of your users is "simply not worth it", well, I'd like to know what your websites are so I can block them at my firewall. I'm not saying this to be a dick, I'm saying this because this is an attitude of callous disregard on display, and it's downright odious given the modern security climate.
LE is not that hard to use, and I seriously question whether you can't make an API call once every 90 days per subdomain. The requirements have never been lower.
This. This this this. It automates everything so easily. I have personally helped someone deploy HTTPS for over a dozen sites, and they all auto-renew without a hitch every 90 days.
No excuses. The EFF did us a solid.
The top-level page also has to be served over HTTPS for the warning to not appear. (source https://developers.google.com/web/updates/2016/10/avoid-not-...)
There seem to be many people with similar problems of false positives for nonexistent passwords, so I guess it's a bug.