It's the one new browser feature I never really considered wanting or needing before that's really stood out to me as incredibly valuable since I started seeing the warnings pop up.
"Note that is warning in the url bar is only in Firefox Nightly and Firefox Developer Edition. This has not been released to Firefox Beta and Firefox Release."[0]
So this is not a security feature that most end users can rely on, yet.
[0] - https://developer.mozilla.org/en-US/docs/Web/Security/Insecu...
"Password fields present on an insecure (http://) page. This is a security risk that allows user login credentials to be stolen."
I use DE, so I have been seeing these for a little while, but I think even in the stable channel you can toggle the security.insecure_password.ui.enabled param to true in about:config.
1: https://developer.mozilla.org/en-US/docs/Web/Security/Insecu...
Also I love FF.
Or like "did you know your house COULD have been ransacked today, but it didn't happen!!"
Now all my users are going to hear that my site is insecure, when nothing at all changed.
How long ago did they announce that? I think just a couple months? They should have announced this much sooner.
It's going to hit me hard as my site is pretty niche and driving even more people away is the last thing I hoped for :( My shared hosting doesn't offer Let's Encrypt, and makes me pay to "install" a free certificate anyway. So I have to move everything to a different web host.
You're being pretty irresponsible if you aren't using SSL for passwords. Your users should be told that your site is insecure, because it is. You should care more about the security of your users.
If your hosting does not allow SSL, you have an obligation to change hosts for the safety of your users. If you aren't willing to do that, you're negligent and you should stop doing business with the public.
This is a huge red flag. If you really don't think SSL is important, it raises disturbing questions about your approach to security in general. Which other standard security practices have you ignored? Are you using strong hashing for passwords? Are you properly handling input to prevent SQL injection?
That's definitely a significant security risk, even if it wasn't being explicitly labeled before now. It sucks for independent website operators like you, but I'd probably blame your hosting for making it difficult to secure your site rather than browser vendors for protecting their users.
Correct, nothing has changed, it has always been insecure.
Maybe your website holds some important information and the user is simply unaware of the danger of entering his credentials there. Good for him: now Firefox is warning him. He can choose to continue or not. Fortunately, if your website doesn't really hold any sensitive information, the user will go forward anyway.
In fact you could be proactive and announce to your shared host that for this reason you will be relocating. Let them know there will be a trend of other webmasters relocating for the same reason.
As browsers require HTTPS for more and more website features (passwords, geolocation), we will naturally approach the point where HTTPS is free and ubiquitous, at which point everybody wins.
Also, you've had a one year notice that this was going to happen: https://blog.mozilla.org/tanvi/2016/01/28/no-more-passwords-...
I'm not sure you should be allowed to drive a webserver.
Nothing's changed: Your site really is insecure, it's just that now users know.
A year ago.[1]
[1]: https://blog.mozilla.org/tanvi/2016/01/28/no-more-passwords-...
Your negligence poses a security risk to your users; they should be warned.
I dare you to go to a hacker or security conference once, just for kicks. Connect to any wifi there, log in. See what happens.
Developer - "chrome labels password fields as insecure over http"
Pm - "what if it wasn't a password field"
I suppose you could implement your own (e.g. type="text" with an onKeyDown listener that cached each keystroke and inserted a * into the field), but that sounds like a terrible solution in so many ways.
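A toy sketch of that hack, with all names hypothetical: the real value is cached in script, and the visible text input would only ever be set to the masked string. The DOM wiring via onKeyDown is omitted; this is just the state logic.

```javascript
// Toy stand-in for a hand-rolled "password" field: the real secret lives in
// JS state, and the visible <input type="text"> would only ever display the
// masked string returned by these methods.
class FakePasswordField {
  constructor() {
    this.real = ''; // the actual secret, cached outside the DOM
  }
  // One printable keystroke: append it and return the masked display string.
  type(ch) {
    this.real += ch;
    return '*'.repeat(this.real.length);
  }
  // Backspace is already a special case we have to reimplement by hand.
  backspace() {
    this.real = this.real.slice(0, -1);
    return '*'.repeat(this.real.length);
  }
  // What the form would actually submit.
  value() {
    return this.real;
  }
}
```

Even this toy version makes the problem obvious: paste, select-and-type-over, IME input, and cursor movement are all unhandled, and each one would need its own special case.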
I would think the laziest possible way to work around this would be to use a CDN like Cloudflare to proxy all traffic to your site. Looks like they have a service called Flexible SSL that terminates HTTPS at the CDN and sends unencrypted traffic to your backend.
- Create a font such that every character shows up as a * and use it for a text input.
- make the input field use white text on a white background with a fixed-width font, monitor its length, and display the correct number of *'s above it using a div.
- implement the text box from the ground up using divs and JS, like Google Docs does.
- implement a HTTPS password field in an iframe and communicate with it over post messages.
Right-click paste from my password manager, and it doesn't work. Thanks.
---
This idea is terrible in general, but if you do, against all that is holy, implement it, please, please use onInput.
Why? They added a "Show Password" radio and I guess they figured this hack made more sense than simply using JS to flip the input from type=password to type=text.
For anyone who is wondering what these ways are, here are a couple:
1) Backspace is a crufty special case
2) What happens when someone highlights text in the input box and types over it?
Interesting. Where are all these warnings when a CDN man-in-the-middles your connection? Or when Google gets to access all the email communication of Gmail users? Or when ad networks track you all around the web?
For those wondering, you can mask a normal text field in CSS with input { -webkit-text-security: disc; } (non-standard, WebKit/Blink only).
Although our reason was actually to do with password managers. At $DAYJOB we have a CRM/ERP system with lots of password fields for other entities (not the current cookie user). It's increasingly difficult to opt out of browser autofill, and LastPass in particular was corrupting password data in the system whenever forms were submitted.
[1] https://xenforo.com/community/threads/let-tls-wait-paid-dele...
Pm - "why is this page insecure"
Developer - "chrome labels password fields as insecure over http"
Pm - "we'll need to setup encryption. It will need to be FIPS-140 certified or it's not secure"
Developer - "But you didn't care when there was no encryption"
Pm - "We don't need to certify plaintext, that should be obvious. You need to learn more about security".
Sigh. Our industry in a nutshell.
https://www.bancomer.com/index.jsp
Click "Acceso a clientes" and write numbers.
Developer - "chrome labels password fields as insecure over http"
Pm - "what if it wasn't a password field"
Pm - "If its important enough to hide, its important enough to stop from being intercepted. Think social security numbers, PINs, tokens, drivers license numbers, etc. Why aren't we encrypting things that matter?"
Developer - "chrome labels password fields as insecure over http"
Pm - "what if it wasn't http?"
Developer - "..."
Pm - "put one of them modal over it"
Developer - "but then how will anyone..."
Pm - "we're switching to <completely different stack they heard about from someone in their uber last week>"
/me opens a sake bottle.
The brittleness of SSL libraries manifests not just in the form of security exploits, but also in the form of delaying the next generation of HTTP technology. Node doesn't natively support HTTP/2 due to HTTP/2 fitting issues [https://github.com/nodejs/NG/issues/8]. Jetty was delayed by Java SSL changes. Same with Go.
If Google wants to make the whole web secure, that's great. But we also need to work on making it simple to secure. So much research goes into novel ciphers, optimal ways to defeat timing attacks, etc., but the spike in complexity means we're reaching a point where almost no individual or group can approach a correct implementation.
It worries me that we're approaching a point where we're utterly dependent on a security standard no one can understand.
Software is no exception. SSL libraries will get better if they get used more. The developers will make them better. Or if they can't, we'll find a solution that works.
The question is whether the benefit of the disruption outweighs the cost. Browser-makers decided that their users' needs were best served by this change. Mozilla and Google have been telegraphing their actions in this direction for years. They have attempted to make a responsible and gradual transition, and to a large extent have succeeded.
Every once in a while, though, a break needs to be made and some folks will get left behind until they adapt, or don't.
I keep hearing this, but I've been failing to see it since OpenSSL's inception.
> To ensure that the Not Secure warning is not displayed for your pages, you must ensure that all forms containing <input type=password> elements and any inputs detected as credit card fields are present only on secure origins.
[1]: https://developers.google.com/web/updates/2016/10/avoid-not-...
Edit: It appears PCI DSS V3.2 does ask that the form itself be on a secure page (section 4.1.g):
"for browser-based implementations: 'HTTPS' appears as the browser Universal Record Locator (URL) protocol, and Cardholder data is only requested if “HTTPS” appears as part of the URL."
That's false for open wifi networks. Remember Firesheep? Just fire it up at your local coffeeshop and off you go.
Even for more secure public wifi like WPA2, the vast majority of coffeeshops still don't change the default router admin passwords, so you can take over the router easily and listen in on all the traffic.
Further, it's not hard for some rando to set up a safe-looking access point and get people to connect to it. Camp out near an office with a router and I'm sure you'd get plenty of hits.
There's no shortage of attack vectors with no warrants required.
When you use HTTP everything is sent in plain text. This means...
- Anyone on the same network as you can see all of your traffic. This includes company networks, coffee shop wifi, your house, the library; any place that has a WiFi network. Caveat: it's possible to use client isolation to hide your traffic, but this is crazy rare to see and is typically done to isolate networks, not individual traffic.
- Your ISP can see and log everything sent over HTTP.
- Anyone at any router your traffic passes through can see it. Your traffic makes a lot of hops over various routers on the internet before reaching its final destination.
Overall it's a terrible idea for anything that needs to be sent securely.
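To make the "plain text" point concrete, here is a minimal sketch (the hostname and field names are made up): anyone who can capture the raw bytes of an HTTP login POST at any of the hops above can read the password straight out of the body, no cryptanalysis required.

```javascript
// An HTTP login request on the wire is just readable text. This is exactly
// what a passive eavesdropper on the network path would see, verbatim.
const captured =
  'POST /login HTTP/1.1\r\n' +
  'Host: example.com\r\n' +
  'Content-Type: application/x-www-form-urlencoded\r\n' +
  '\r\n' +
  'username=alice&password=hunter2';

// "Attacking" it is string-splitting: headers end at the blank line, and
// the body is an ordinary URL-encoded form.
const body = captured.split('\r\n\r\n')[1];
const creds = Object.fromEntries(new URLSearchParams(body));
console.log(creds.password); // prints "hunter2"
```

With HTTPS, the same capture yields only TLS ciphertext; the eavesdropper gets the destination, not the credentials.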
Making sure the server you are talking to is the correct one, and making sure that nobody along the way injects ads, trackers, malware, or anything else.
> Beginning in January 2017 (Chrome 56), we’ll mark HTTP pages that collect passwords or credit cards as non-secure, as part of a long-term plan to mark all HTTP sites as non-secure.
Like, let's say something rare was available for purchase, or music festival tickets that will sell out in two more minutes.
Would your first thought be "woah, thanks Chrome, you really saved me this time!"?
https://medium.com/punk-rock-dev/https-new-year-avoid-the-no...
Get that HTTPS motor running. This really does make it easy.
https://www.chromium.org/Home/chromium-security/prefer-secur...
Thanks -- this has solved an ongoing problem I was having with a couple of clients.
Modems are anyway way more insecure due to the default password being admin or 123456, etc. And many are accessible from the public internet.
I think my modem got hacked due to that (I saw some login attempts a little before the DNS got changed causing Youtube to stop working).
Or would every router get its own certificate? But then netgear would have to become its own CA.
And in either case DNS hijacking is a massive issue.
It would function in the same way - if Chrome detects CC/password forms, it labels the site as Not Secure.
And for all those "small businesses" that are going to get affected by this? It's hard to muster up much sympathy at this point. It's 2017, and you're still horsing around with vanilla http?
I am not an affiliated developer, but I am a user and have recommended this to others as well; it's a solid product.
I like it.
This was always my first install on a new Chrome.
https://chrome.google.com/webstore/detail/unsecure-login-not...
[1]: https://www.wordfence.com/blog/2017/01/gmail-phishing-data-u...
Currently DNSFilter and others man-in-the-middle traffic destined for sites our customers have decided to block. This works great for HTTP, but not HTTPS, as certificate warnings are presented.
The standard workaround is arguably less secure: adding a third-party CA to all endpoints. This can still present problems with HSTS and certificate pinning.
I'd like to work with Google to create a standard where vendors can either be on a whitelist or have new recognized SSL cert fields, not to MITM traffic, but just to present users with a friendlier message explaining what's happening, and providing a separate https:// URL to visit for information from the vendor about the block.
Implementing such a standard in browsers would further increase user security, and provide a viable method for filtering on guest networks where there is no end-point access.
When actively interrupting an HTTPS connection as a network element, there is no way to provide information to the user about the reason for the interruption or steps the user could take to prevent the interruption.
This can be done with HTTP, where a filtering proxy could show a page 'our software thinks this page violates company policies, but click here to override or contact IT to fix', or see also captive portals.
Maybe the right answer is simply there's no reasonable way to handle this use case in a secure manner, but taking away an established use is a real issue.
For instance, I have a website hosted at Hurricane Electric on a virtual server plan. I've had hosting there for well over a decade. I like their service, the virtual host works well for most of my needs. There are two areas where it doesn't work, though (AFAIK):
1. I can't run a pure NodeJS website.
2. I can't set up HTTPS.
Number one isn't relevant to this discussion; but as far as I know, the second one is a big deal. There isn't any way (AFAIK) to host multiple virtual servers each with their own certificate.
So right now (well, with the release of v56 of Chrome) - if you have a Wordpress site or something on a virtual host that has a login - it's going to show something that says "unsecure" for the login/password form. Honestly, I am fine with that. My own site isn't a Wordpress site, but I do have a login/password box on the site, and having it show that it is insecure is not a big deal to me. While there isn't much or anything I can do about it, I do understand and support the reasoning.
But...
...in the future, they want to mark -all- non-HTTPS sites as "insecure" - regardless of what the site does, presumably. It could just be a collection of static html pages (no javascript, no forms, nothing special), and it will still be marked as "insecure"? Does this sound reasonable? Suddenly, all of these pages will be deemed pariahs and non-trusted because they choose to use non-encrypted means of presentation?
Is there any solution to this, as it stands? Or are all of us with virtual hosting solutions going to have to migrate to some cloud-based server solution with its own IP, then obtain our own certificate (easier today, I know - and cheap to free, too) - just to get around this? Is this the end of virtual private server hosting (or is it going to be relegated to third-tier)?
I don't currently know what if anything Hurricane Electric plans to do regarding these changes. I don't want to move to another hosting provider if I can avoid it (while HE isn't the cheapest for what you get, they are nice in that they assume you know wtf you are doing - your hosting is basically access to the server via ssh and sftp - so you better know how to admin and set things up via a shell, because they aren't going to hold your hand).
I'm thinking I should send an email to them to ask them what they're planning to do - if anything.
So right now it would be counterproductive to mark all http pages as "not secure". But it's the long-term goal.
It would freak out a significant proportion of the community who fail to distinguish between what is a web page and what is their computer. They would honestly think that their computer is not secure and their local files/photos are at risk. After the initial panic and speaking to their "friend who is good with computers", they will just ignore it forever.
The far better approach is to aggressively warn users at the point where they are trying to do something secure over an insecure connection. Personally I would add UI elements next to HTML forms saying "Do not enter your password here".
From a naive perspective, all requests passing private information are protected in such a scenario. But, of course, since the www. subdomain is uncovered, it can be MITMed and replaced with a phishing site. (A.K.A. a "spear-phishing" attack.)
This change somewhat fixes that scenario: the developer can no longer keep the /login route on the insecure www. subdomain; they'll have to serve that page securely (either by making www. secure, or, more likely—because it's lazier—just moving /login to the app. subdomain.)
Even though www. can still be MITMed to replace it with a phishing subdomain, that subdomain can no longer serve a login form itself. It would have to link to a "real" phishing FQDN (one the attacker controls enough to get a TLS cert for), at which point that domain can just be found and blacklisted by the browser vendors.
In a sense, it forces the attacker into the open, where the attacker themselves can be caught/blocked, rather than simply their attack being caught/blocked. (In other words, it forces MITM attackers into a situation more akin to current botnet malware-writers, where they must put up C&C infrastructure which can be traced back to them.)
Of course, this isn't nearly as good as just securing the www. subdomain; and it will force much more work (likely, doing said securing) on those who want to embed a login form directly on their www. subdomain's landing page. But, in the interim while we work on universalizing TLS, it will drastically decrease the value of spear-phishing attacks, just as spam filters drastically decrease the value of unsolicited bulk email ad campaigns.
This is extremely different for most users, who don't use password managers and/or unique passwords per site. If your password is leaked, all your other accounts (with the same password) may be leaked too. The same can't be said of cookies.
sorry to say, but https is not an altruistic move by google.
Better security = Greater trust of the web = Higher adoption = More ads.
No different to them working on Google Fiber.
Google should've started way way earlier
The work actually started quite a while back, but the overall ads industry and internet as a whole moves really really slow. Add the mobile ecosystem to the equation, and there is a bunch of issues.
The whole work is a combination of a bunch of things (in no chronological order):
1. Google pushed search ranking changes.
2. Google moved all of its ads to HTTPS, and this took some time to make happen.
3. Apple created ATS to make people think about it.
4. Apple wanted to enforce ATS for non-web content, had to back out.
5. Let's Encrypt made access to certs free.
6. Big vendors joined.
Unfortunately, the world is slow when it comes to changes like these, but I am quite happy with the outcome so far.
edit: added context.
Well, OBVIOUSLY, when the traffic is increasingly going to the same top ten sites like Facebook, Twitter and co.
Google seriously has to stop trying to police the god damn web.
There are many companies I have personally witnessed that use a direct IP to access web-based solutions for their in-house software. How are these people supposed to get an SSL cert?
We need to educate people, not shame them into doing the things big Google wants from them.
Is there an option to turn this off, for those of us who feel we need it like a hole in the head?