Could you explain why browsers should flag sites like this? It's possible that I'm too naive to realize the issue, and I would appreciate some education on it.
EDIT: changed "blocked" to "flagged"
The common refrain is to think about repressive governments and what they can (and do) do with this information, but even here in the States think about your ISPs selling your browsing history to advertisers. Or think about ISPs being required to report to the US Government whenever you visit some informative but http-only page about terrorism / chemistry that happens to also be used in explosives / infosec topics / etc. Consider being put on a watchlist simply for having viewed StackOverflow questions relating to XSS or SQLi vulnerabilities.
If you take the word "insecure" to mean that security or privacy expectations held by the average user are being violated, then all HTTP-only pages are insecure -- not because you may be viewing modified information or because you may be submitting sensitive information, but because the fact that you visited that page at all is something the average user likely assumes is secret and/or private, but isn't. To put it bluntly: would you browse an HTTP-only porn site? I wouldn't.
HTTPS is indeed more secure for users, but it does have some cost, and I think OP has a sensible argument.
If you really think HTTPS is the best thing ever and is absolutely better than HTTP in every sense, you're just looking at it superficially.
When you start looking into how the entire Internet works and what role each party plays in the ecosystem, and how much "real" power each party has, you'll find that HTTPS is THE biggest centralization force of the web. If you think centralization and oligopoly by big tech companies is awesome, fine.
But there are people who don't like that direction for a good reason.
Saying HTTPS is centralized is like saying PGP is centralized. If it is, it's only because the underlying technologies (HTTP itself and DNS) are centralized, not because of the encryption and document-signing protocol layered on top of them.
Even if you can argue that HTTPS is a centralization force, it's almost certainly hyperbole to argue that it's the biggest centralization force on the web today. Surely network effects (Facebook, Amazon), huge amounts of capital, and control over a huge amount of information (Google) are far bigger factors?
However, the whole DNS system is centralized. Any alternative DNS root is pretty much ignored as well. It's kind of sad.
Also, there are only a handful of browsers, and even fewer browser engines. This is also sad, and partly to blame is how complicated the standards are these days.
This, very much this. Plaintext doesn't require what is essentially authorisation from a central authority in order to communicate.
- Your address needs to be given to you by your ISP or ARIN.
- Major ISPs need route to your address and/or accept your BGP announcements.
- You probably need a name which is bought from a few large DNS management companies or their resellers.
- You're required to have an email address to field abuse complaints which means you most likely will be paying an email provider.
- If you're not running your own hardware you will have to pay a hosting company.
- If your site is large you'll probably need a CDN to handle the traffic of which there are only a few major players.
- Although it's a blacklist you effectively need Google's blessing to not appear on the SafeBrowsing list.
Is the CA system really that much more of a hurdle? No question it's a little scummy at times but it's cheap and relatively low maintenance.
CAs don't provide permission, they vouch for an identity.
Saying that CAs give you permission to communicate is like saying notaries give you permission to sign a contract. You can assert your identity without verification as long as the other party in the relationship is fine with the increased risk of fraud.
Similarly, you can use HTTPS without a signed certificate (precisely as you can use HTTP without HTTPS) as long as you and the other party are happy with the risk that Verizon could be "sanitizing" your speech or injecting your real-world identity into all your HTTP requests without your knowledge.
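For instance, a site operator who accepts that risk can generate a self-signed certificate with a couple of openssl one-liners (a sketch; the filenames and the `example.test` CN are placeholders):

```shell
# Generate a throwaway key pair and a self-signed certificate,
# valid for one year, with no passphrase on the key (-nodes).
openssl req -x509 -newkey rsa:2048 \
  -keyout key.pem -out cert.pem \
  -days 365 -nodes -subj "/CN=example.test"

# Inspect the result: subject and issuer are the same entity,
# which is exactly why browsers warn about self-signed certs.
openssl x509 -in cert.pem -noout -subject -issuer
```

The connection is still encrypted against passive snooping; what you give up is any third-party attestation that the key belongs to the site you meant to reach.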
But both the site operator and visitor stand to lose from someone tampering with your traffic. And increasingly, it's the users who are getting burned in this relationship.
The owner is a domain parker and is upset that he has to update hundreds of sites in order to avoid having them marked as insecure, from what I can gather.
In the last few weeks he's also written:
- http://scripting.com/2018/02/21.html
He then makes the argument that HTTPS will make it such that only "super nerds" will be able to create websites. But right now I can host my own blog on services like Netlify and get an HTTPS certificate for multiple domains with one click. If I want my own server it literally takes two seconds for me to set up certbot and get certificates for free. Then he makes an odd and somewhat rambling comparison to the Grand Canyon.
He then argues that Google labeling HTTP as not secure is the first step down a path which leads to "blocking the pages outright", which is a prime example of the slippery slope fallacy.
In another blog post he argues that by applying Occam's razor, it's clear that Google just wants to protect their ad revenue (because HTTPS would allow ISPs to replace Google's ads with their own). Which honestly sounds insane.
I'm really surprised that this person is a software developer. I'm even more surprised that he still believes this stuff after working for 24 years.
I'm really surprised that you think everyone has/should have(?) the same beliefs about such things.
> He then argues that Google labeling HTTP as not secure is the first step down a path which leads to "blocking the pages outright", which is a prime example of the slippery slope fallacy.
20 years ago people thought buying computers on which you can't install software some central authority didn't approve of was preposterous, and yet here we are today with walled gardens and the like. This is a rise of authoritarianism, all in the name of "security". The frog boils slowly.
If you think it's weird and very bad, explain why you think so. I for one can sympathize with his point of view.
> It's like a massive book burning, at a much bigger scale than ever done before.
How on earth did the author reach these conclusions?
Just because these owners don't care about their sites doesn't mean the sites aren't valuable.
This also applies to AMP. It's bad enough that they have so much control over the web based on how they rank pages in search results, but there is not much we can do about that.
Not really. I dream all the time of a land where decentralized exchanges exist for these services, and where a clunky web browser is not required for accessing information online.
The truth is the new decentralised web does not suit the old HTTPS signing-authority model. It's time for a decentralised system with no authority other than key-holding. The same goes for DNS.
For those who do not know, the author Dave Winer is pretty famous, especially from the early days of the web.
https://en.wikipedia.org/wiki/Dave_Winer
But he does love a rant.
The author is Dave Winer. Known for many things, among them RSS. HN users seem to like RSS and dislike what happened to Google Reader.
There is nothing unreasonable about supporting both HTTP and HTTPS.
There are decisions that should be left to users. Denying them meaningful options is something that should raise a red flag and spur some commentary.
For example, if users want to use RSS, then we should be wary of any company that effectively tries to dissuade them from using RSS.
Similarly, if users want to use HTTP for some content (and perhaps HTTPS for other content), then we should be wary of any company that effectively tries to dissuade them from ever using HTTP for any content.
Not all content needs to be encrypted. Moreover HTTPS via SSL/TLS is not the only way to distribute encrypted content. We should not pretend there is only one way to do it, let alone coerce people to do it only one way.
As a user, I would be just as satisfied with a page of HTML that is PGP-signed, encrypted and sent over HTTP as I would with HTML sent over a so-called "secure channel" via SSL/TLS, what with the third party reliances the commercial domain name and commercial x509 certificate schemes routinely entail.
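As a rough sketch of what that could look like, here is signing and verification of an HTML page using openssl's RSA signing as a stand-in for PGP (all filenames are made up, and key distribution is assumed to happen out of band):

```shell
# One-time: the publisher generates a signing key pair and
# distributes publisher.pub out of band (not over HTTP!).
openssl genrsa -out publisher.key 2048
openssl rsa -in publisher.key -pubout -out publisher.pub

# The publisher signs the page before serving it over plain HTTP.
echo "<html><body>hello</body></html>" > page.html
openssl dgst -sha256 -sign publisher.key -out page.sig page.html

# A reader who already holds publisher.pub can verify the page
# regardless of the channel it arrived over; prints "Verified OK".
openssl dgst -sha256 -verify publisher.pub -signature page.sig page.html
```

Note this gives integrity and authenticity but not confidentiality or freshness: anyone on the path can still read the page or replay an old signed version, which is part of what TLS adds on top.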
Besides the issues of requisite third party involvement in encryption, TLS as implemented so far has some serious weaknesses and shortcomings, and is not the only solution to "secure content". If a company is going to issue warnings to users, then that should be among them. Promoting a false sense of "security" should be avoided.
When a company running the largest search engine on the www penalizes websites for not implementing some feature, whether it is AMP or HTTPS or something else, this should raise red flags. Expect some commentary.
He's probably trolling.
But yeah, what does he know...
You could impersonate google.com on wifi, but you couldn't get a valid cert for google.com because you don't own the nameservers or any of the servers that google.com points to.
You're right, if that were the case, it would be terrible!
Luckily, it's not: to get a certificate for your domain, I need to either control your domain, or control the computer that the domain points at. In either case, you have much bigger problems than certificate issuance.
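Roughly how that control is proven in practice: with ACME's HTTP-01 challenge, the CA hands the client a token and only issues the certificate if a derived value appears at a well-known path on the domain. A simplified self-contained sketch (the token and thumbprint values are invented, and a real CA fetches over port 80 from the outside):

```python
import http.server
import threading
import urllib.request

# Hypothetical values: in real ACME the CA supplies the token and the
# client derives the key authorization from its account key thumbprint.
TOKEN = "evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ"
KEY_AUTH = TOKEN + ".account-key-thumbprint"

class ChallengeHandler(http.server.BaseHTTPRequestHandler):
    """Serves the key authorization at the well-known ACME path."""
    def do_GET(self):
        if self.path == "/.well-known/acme-challenge/" + TOKEN:
            body = KEY_AUTH.encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), ChallengeHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The CA plays the role of this client: it fetches the challenge path
# and compares it against the expected key authorization.
port = server.server_address[1]
url = "http://127.0.0.1:%d/.well-known/acme-challenge/%s" % (port, TOKEN)
fetched = urllib.request.urlopen(url).read().decode()
print(fetched == KEY_AUTH)  # the CA issues the cert only if this holds
server.shutdown()
```

So to pass the challenge an attacker would need to control the web server the domain resolves to (or the DNS answers the CA sees), which is the "bigger problems" scenario above.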
I would agree that the biggest issue with TLS is the certificate authorities. All the trust lies in them. If they issue a bad certificate for Google, Microsoft, US Bank, etc., it can cause problems. This is part of the reason HTTP public key pinning exists. Further, CAs have issued bad certs. It's happened and will happen again.
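For reference, HPKP worked by having the server send a header listing hashes of keys the browser should require on future connections (the pin values below are placeholders; the mechanism has since been deprecated in browsers in favor of Certificate Transparency):

```
Public-Key-Pins: pin-sha256="d6qzRu9zOECb90Uez27xWltNsj0e1Md7GkYYkVoZWmM=";
                 pin-sha256="E9CZ9INDbd+2eRQozYqqbQ2yXLVKB9+xcprMF+44U1g=";
                 max-age=5184000; includeSubDomains
```

Part of why it was deprecated: a mistaken or malicious pin could lock legitimate visitors out of a site for the whole max-age window.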
However, there are a lot of websites that in theory do not need TLS, yet the browsers are preventing HTTP-only sites from using newer features. For instance, HTTP/2 does not require TLS, but none of the browsers support plaintext HTTP/2. (So the browsers are ignoring the standard.)
So why do the browsers push so hard for TLS? As this article mentions: MITM. Considering how complicated the web standards for CSS, JS, and HTML are these days, there is a real attack surface, so in theory an attacker could insert content into the page you're viewing that uses a zero-day exploit. However, I think people are forgetting it's probably easier to get people to just visit a website with the exploit via some clickbaity title.
The next reason is that it prevents people from snooping on other people. A website that displays the time, for instance, is not really that big of an issue, but there is content people view that they would not like someone else knowing about (though DNS can give that away anyway if the website is topical in nature). I still would not call this "insecure" the way the browsers do. Really, it should be noted that the content you're viewing may be observed; that is not a security issue unless the site is serving private information, and at that point the server operator is the one screwing up.
Really, if the user can't verify the certificate (does not know how) and does not verify it, the connection is not secure -- just more likely to be secure. And even if the user does verify the certificate, if the server or the user's computer is compromised, the encryption is as good as non-existent. If you're worried about the user loading scripts with possible exploits, JavaScript should probably be removed from the web standards. As I mentioned, it's not hard to get users to visit a site with some payload taking advantage of an exploit.
Part of the problem is that we have everyday people using the web more and more these days. They don't necessarily know when they should be worried about security. What annoys me more, though, is that with HTTP/2 I can't even go into Firefox's about:config and enable plaintext HTTP/2. I can agree with sane defaults, but I really should be able to change them.
HTTP/2 requires TLS for practical reasons: without it, poorly-written transparent proxies mangle this protocol that they don't understand. Requiring TLS was the solution to this otherwise intractable problem during the initial SPDY work, and became a hard requirement for SPDY. But due to pressure from certain groups during the IETF standardization process (who didn't want the web to "go dark"), this requirement from SPDY was dropped in the official HTTP/2 spec.
But dropping the requirement from the spec doesn't solve the problem that put it there in the first place. You still can't reliably use any protocol newer than HTTP/1.1 unencrypted with many ISPs. It's been demonstrated to fail in ways that are difficult to debug and which would otherwise make HTTP/2 seem unreliable. So no consumer-facing implementation will let you try.
That said, any CA can sign anything and your browser will trust it in most* cases.
* - Not under certificate pinning or CA pinning though.
A week or a month from now, the new owner of the domain sets up an HTTPS website. With the old certificate I have, I can now launch an MITM attack on the new owner for about 2-3 months!