A MITM (e.g. a router along a multi-hop route between the victim client and StackExchange) could silently drop the unsafe HTTP request and maliciously repackage it as an HTTPS request, thereby circumventing the revocation.
Also: even if an insecure HTTP request isn't dropped and eventually makes it through to StackExchange's endpoint (thereby triggering the API key revocation), a MITM with a shorter trip time to SE's servers could race it and wreak havoc until the revocation happens.
Nevertheless, SE's revocation tactic contributes positively to a defense in depth strategy.
This approach is a practical choice based on the reality that the bulk of unencrypted traffic is not being actively mitmed and is at most being passively collected. Outside of actually developing cryptosystems, security tends to be a practical affair where we are happy building systems that improve security posture even if they don't fix everything.
there was a time in the 1990s when cryptography geeks were blind to this reality and thought we'd build a very different internet. it sure didn't happen, but it would have been better.
we had (and still have today) all the technology required to build genuinely secure systems. we all knew passwords were a shitty alternative. but the people with power were the corrupt, useless "series of tubes!" political class and the VCs, who obviously are always going to favor onboarding and growth over technical validity. it's basically an entire online economy founded on security theater at this point.
As a developer I like to make many options available for debugging in various situations, including disabling TLS. This isn't controversial: every Go and Rust library I've ever seen defaults to no TLS, preferring to make it easy rather than required, so reflecting those defaults in the service's configuration is natural and intuitive.
I make sure my example configurations are as close to what will be needed in production as possible, including not just TLS but at least one "mutual TLS" validation. I even sync these back from production if it turns out something had to be changed, so the examples in the repository and built artifact are in line with production.
Yet I routinely find at least some of these disabled in at least some production deployments, presumably because the operator saw a shortcut and took it.
Let's rework Murphy's original law: if there are multiple ways to deploy something and one of those will be a security disaster, someone will do it that way.
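One way to counter that rewritten Murphy's law is to invert the library defaults in the service's own configuration, so disabling TLS requires an explicit, loudly named opt-out. A minimal sketch (field names are invented, not from any particular framework):

```python
from dataclasses import dataclass

@dataclass
class ServerConfig:
    """TLS is on unless an operator explicitly opts out."""
    tls_enabled: bool = True           # secure by default, unlike most library defaults
    require_client_cert: bool = False  # the "mutual TLS" validation mentioned above
    insecure_allow_plaintext: bool = False  # the scary name is deliberate

    def validate(self) -> None:
        # Refuse to start a plaintext listener unless the operator wrote the
        # word "insecure" into the config with their own hands.
        if not self.tls_enabled and not self.insecure_allow_plaintext:
            raise ValueError(
                "TLS disabled without insecure_allow_plaintext=True; "
                "refusing to start a plaintext listener")
```

The point is that the shortcut still exists for debugging, but taking it leaves an unmistakable trace in the deployed configuration.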
ref: https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/CO...
> that's more secure, but still not bulletproof
I've never heard of bulletproof ever actually being achieved in IT security. Not even air gaps. It's not different levels of good or bad... everything is wrong, just to different degrees.
But is that actually true? With TCP Fast Open, a client can send initial TCP data before actually learning whether the port is open. It needs a cookie previously received from the server to do so, but the cookie is not port-specific, so – assuming the server supports Fast Open – the client could have obtained the cookie from a prior connection over HTTPS or any other valid port. That’s the impression I get from reading the RFC, anyway. The RFC does mention that clients should distinguish between different server ports when caching refusals by the server to support Fast Open, but by that point it’s too late; the data may have already been leaked.
I think Stack Exchange's solution is probably the right one in that case -- and hopefully anyone who hits it will do so with dev keys rather than in production.
Probably best to listen on 80 and trash the token right then: the majority of the time there won't be a MITM, and breaking the application will force the developer to change to https.
Returning an error (and/or blocking the port entirely) lets the developer see they are using the wrong protocol and fix it.
In this scenario, the end user never actually performs an http request, because the protocol was fixed by the service developer.
Still, I agree that this is a very good way to teach your users to not start with http:! And that this is what one should do.
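The revoke-on-sight behavior discussed above could look something like this server-side. This is a sketch, not Stack Exchange's actual implementation; `revoked` stands in for whatever persistent store a real service would use:

```python
def handle_request(scheme: str, api_key: str, revoked: set) -> tuple[int, str]:
    """Revoke any key seen over plaintext HTTP and return a descriptive error."""
    if scheme == "http":
        # The key is already on the wire in the clear; treat it as burned.
        revoked.add(api_key)
        return (403, "API key revoked: it was sent over unencrypted HTTP. "
                     "Issue a new key and use https:// only.")
    if api_key in revoked:
        return (401, "This API key has been revoked.")
    return (200, "ok")
```

The descriptive 403 body is what teaches the developer; the revocation is what limits the damage window.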
It could make sense for first-party SDKs for an API to block http access to the first-party API domain, but that should be unnecessary – typically users would use the default base URL hardcoded in the client library, and only replace it if they're going through some other proxy.
When they _do_ go through some other proxy, it's commonly in an internal network of some kind, where http is appropriate and should not be blocked.
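That policy could be expressed in an SDK by refusing plaintext only for the first-party API host while leaving overridden base URLs (internal proxies) alone. A sketch with invented names; `api.example.com` is a placeholder domain:

```python
from urllib.parse import urlsplit

FIRST_PARTY_HOST = "api.example.com"  # hypothetical first-party API domain
DEFAULT_BASE_URL = f"https://{FIRST_PARTY_HOST}"

def make_client(base_url: str = DEFAULT_BASE_URL) -> dict:
    """Build a client config, blocking http only toward the first-party host."""
    parts = urlsplit(base_url)
    if parts.hostname == FIRST_PARTY_HOST and parts.scheme != "https":
        # Plaintext to the real API is always a mistake; an internal proxy isn't.
        raise ValueError("plaintext http to the first-party API host is not allowed")
    return {"base_url": base_url}
```

This keeps the common case safe without blocking the legitimate internal-proxy case described above.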
In other words I don't have your key, or any key, but I have "all of them".
The correct response to this though is that "there are lots of keys, and valid keys are sparse."
In other words the number of valid keys that could be invalidated in this way is massively smaller than the number of invalid keys. Think trillions of trillions to 1.
Cracking hashes requires massively parallel processing, something you can't do if you're API rate-limited.
So yes, it would open the door to revoking random API keys, but that's not a bad thing; when using an API key, you should be ready to rotate it at any point for any reason.
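The sparseness argument is easy to quantify. Assuming 128-bit random API keys and a (generous) rate limit of 10,000 requests per second, both numbers invented for illustration, brute-forcing keys through the API is hopeless:

```python
keyspace = 2 ** 128            # random 128-bit API keys
rate = 10_000                  # requests per second an attacker can sneak past limits
seconds_per_year = 60 * 60 * 24 * 365

# Enumerating the full keyspace at that rate takes on the order of 1e27 years.
years_to_enumerate = keyspace / rate / seconds_per_year
assert years_to_enumerate > 1e26
```

So an attacker spraying random keys at a revoke-on-HTTP endpoint will essentially never hit a valid one.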
$ curl http://api.openai.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer 123" \
-d '{}'
{
  "error": {
    "type": "invalid_request_error",
    "code": "http_unsupported",
    "message": "The OpenAI API is only accessible over HTTPS. Ensure the URL starts with 'https://' and not 'http://'.",
    "param": null
  }
}

http://api.openai.com/v1/chat/completions/../bar responds with error messages about http://api.openai.com/v1/chat/bar which might suggest some path traversal vulnerability that could be exploited.
Generally an API client is not going to need .. to be resolved in a path. It should return 400 - Bad Request (deceptive routing).
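The `..` behavior described above is ordinary dot-segment resolution, which you can reproduce locally. A sketch using Python's standard library (`posixpath.normpath` is only an approximation of RFC 3986 dot-segment removal, since it also collapses duplicate slashes, but it shows what a server that resolves `..` ends up routing):

```python
import posixpath
from urllib.parse import urlsplit

url = "http://api.openai.com/v1/chat/completions/../bar"
path = urlsplit(url).path              # "/v1/chat/completions/../bar"
normalized = posixpath.normpath(path)  # dot segments resolved

assert normalized == "/v1/chat/bar"

# A strict API server could instead refuse such paths outright:
def reject_dot_segments(path: str) -> int:
    """Return an HTTP status: 400 for paths containing dot segments, else 200."""
    return 400 if ".." in path.split("/") else 200
```

Whether this is an exploitable traversal depends on where the resolution happens (edge proxy vs. application), but rejecting with 400 sidesteps the question.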
https://security.stackexchange.com/questions/122441/should-h...
Doesn’t HSTS require only responding to a user via HTTPS (even for error codes)?
leading www perhaps, leading api no.
Have you rolled this out to prod yet? Did you check how many users this might affect? I can imagine some (probably amateur) apps are going to break when this hits, so some notice might be nice.
I'm not asking those questions critically, mainly wanting to facilitate a full discussion around the pros and cons (I think the pros are much stronger personally).
I personally think listening, accepting that user mistakes can expose API keys to MITMs, and returning the user-facing error is better than a "connection refused" error, but it is a tradeoff.
> HTTP 403 provides a distinct error case from HTTP 401; while HTTP 401 is returned when the client has not authenticated, and implies that a successful response may be returned following valid authentication, HTTP 403 is returned when the client is not permitted access to the resource despite providing authentication such as insufficient permissions of the authenticated account.[a]
> Error 403: "The server understood the request, but is refusing to authorize it." (RFC 7231)
You do not throw 403 if the client is authorized to access whatever resource it's trying to reach.
The 426 on the sibling comment is great, though. But if you don't find an error code for your case, you don't go and redefine a well-defined one. You use 400 (or 500).
IMO, 400 is more accurate, but really either could be acceptable, so long as the client is notified of the error. But, I wouldn’t automatically redirect the client. That’s what we are trying to avoid.
It also made me realize that cURL's default to not redirect automatically is probably intentional and is a good default. Praise be to Daniel Stenberg for this choice when implementing cURL.
400 - HTTP is unsupported, use HTTPS.

> Provider B: Reported on 2024-05-21 through their HackerOne program. Got a prompt triage response, stating that attacks requiring MITM (or physical access to a user's device) are outside the scope of the program. Sent back a response explaining that MITM or physical access was not required for sniffing. Awaiting response.
I think Provider B morally should require HTTPS, but it really surprises me that the author would say "MITM or physical access is not required for sniffing."
Is that true? Isn't HTTP sniffing an example of a MITM attack, by definition? Am I using the words "MITM" or "sniffing" differently from the author?
I'm familiar with the following attacks, all of which I'd call "MITM":
1. Public unencrypted (or weakly WEP encrypted) wifi, with clients connecting to HTTP websites. Other clients on the same wifi network can read the unencrypted HTTP packets over the air.
2. Public encrypted wifi, where the attacker controls the wifi network (or runs a proxy wifi with the same/similar SSID), tricking the client into connecting to the attacker over non-TLS HTTP.
3. ISP-level attacks where the ISP reads the packets between you and the HTTP website.
Aren't all of these MITM attacks, or at the very least "physical access" attacks? How could anyone possibly perform HTTP sniffing without MITM or physical access??
“Passive eavesdropper” is often used to describe what you talk about. Someone on an unencrypted WiFi network sniffing your traffic isn’t really “in the middle” at all, after all.
"Active MITM" would be how you describe someone who does modify traffic.
And an attacker in each of the scenarios GP mentioned can modify traffic. (For ISP/attacker-controlled networks it's trivial; for other networks you just need to ARP spoof)
The "eavesdropping" attack happens when you capture unecrypted packets. From there, you could either try to hijack the session by inserting yourself into the local conversation (effectively launching a "MITM" attack) or completely independently of the local conversation attempt to impersonate the login session (effectively launching an "impersonation" attack).
Also, while 1 is arguably a case of "physical access", I don't think 3 is. If you have a tap in an ISP, you don't have "physical access" to any of the endpoints of the HTTP connection. Otherwise, you could say you have "physical access" to literally every machine on the internet, since there is some physical path between you and that machine.
For another example, Kubernetes started as IPv4-only, and there are still plenty of plugins that have IPv4-only features.
That's sniffing. The other two are MITM. The sniffer isn't in the middle of anything; you never speak to him.
So, mere eavesdropping is MITM. So, how is eavesdropping on unencrypted traffic or wifi traffic meaningfully different from eavesdropping by decrypt & re-encrypt?
I don't think the term mitm includes a requirement that you aren't talking to who you thought you were talking to, because generally you do still talk to who you thought you were, just not directly.
The traffic may be modified along the way rather than merely copied, or one endpoint may be wholly faked rather than merely relayed. But neither has to happen: the attacker may simply relay everything verbatim in both directions, i.e. pure eavesdropping, and the attack would still be called MITM.
Then again, I guess a keylogger is eavesdropping and not called MITM.
OK but we're talking about eavesdropping on HTTP, which is unencrypted.
...that's the requirement. The indirection of someone interposing themselves between you and the party you're trying to speak to is what is referred to by the phrase "man in the middle".
There is no phrase "man to the side". If you don't represent yourself as being the party they want to talk to, you aren't performing a man-in-the-middle attack.
MITM means that you need to be in the middle at the time of the attack and so is more limited than an attack that works on logs.
They may have chosen this wording ("it's not MITM") to get the team into action rather than dismissing the risk
Edit: another legitimate-sounding question downvoted in this thread without further comment (since I'm the only comment still after it got downvoted). Can people maybe just explain what's wrong with a post when it's not a personal attack, not off topic, not answered in the article, or any other obvious downvote reason? Everyone would appreciate the author learning from the problem if there is one
Do applications with pinned certificates break? A bunch of mobile apps do that to get in the way of Wireshark.
I'm guessing there's some (?) advantage of SVCB and HTTPS records that can't be achieved with SRV[1] and TXT records, but I haven't read the RFC yet to know what that advantage is.
[1]: `_https._tcp` or `_http3._udp` or whatever.
The number of times I have had to deal with password or secret key resets because of this is way too high. I remember working with a guy that thought sharing a demo of an internal application on YouTube was okay. Of course it had snippets of company secrets and development API keys clearly visible in the demo.
Revoking a key like this is a problem whose fix is at worst a dozen clicks away. The alternative, leaking keys, can be much worse.
So go and reset those creds for the guy who made a mistake happily and be grateful. He had an opportunity to learn.
Hardcoded keys are a big one. But even just hunting down all config where the key needs to change can be a major hassle. Is there some ci yaml that expects a runner to have the key in a .env file that only runs on major release? Good chance you won't realize that key exists.
Still a great idea to revoke those keys. But it will damage some customers in the short term. And they will be angry, as people often are when you demonstrate their mistake.
That second kind of person is way too common in companies that hire from the bottom of the barrel.
Popular providers have so many API users today that even a rare mistake could expose quite a lot of users in absolute numbers. I'd rather have providers check for this than see the poor practice abused by the next DNS-hijacking malware affecting home routers.
If you make a breaking API change like this, some portion of clients are just never going to update. If you’re a usage-based billing SaaS provider, that means lost revenue.
Likely the only way this issue is fixed widely is if it ends up on a security audit checklist.
Maybe I'll do some more quirky anachronisms, like only serve the site via HTTP 1.0 or something. Who knows. Since my site has very little functionality, it doesn't really matter, it's just for fun.
It’s completely trivial to set up, there’s really no downside at this point.
For most cases these are non issues, but there are many scenarios where those things can outweigh the potential of your ISP modifying/reading. If that's still a concern, you can tunnel through your ISP to a more trusted exit point.
Well, firstly, I'd say, make it https for fun. It's pretty simple to do, renews itself automatically, and costs no money. Just exploring this route can be very illuminating.
Secondly it prevents your site from being altered by the outside. There's a lot of equipment between your site and the client. Being HTTP allows it to be altered along the way.
For example, and the least serious: your ISP can inject adverts onto your page. Most ISPs don't do this, but quite a few do.
Second, a malicious user could inject additional text onto your pages, proclaiming your love for a political party, some racial or misogynistic slur, or whatever.
Third, you're sending a signal to all that you either don't understand the security risks, or don't care about them. This can have consequences (reputationally) if you are in, or going into, a computing career.
Making it HTTPS can be fun too!
HTTPS is about protecting the client.
1. Install and run certbot (+the DNS plugin for your DNS provider).
2. Find an nginx config file on the internet that only includes the necessary ciphers to work on modern devices. Then include that config file + lines to your cert paths that certbot spits out into your nginx config.
3. Set up a generic redirect server block to send everything on port 80 to port 443.
4. Reload nginx (`nginx -s reload`).
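The steps above might land in an nginx config roughly like this. The domain and paths are placeholders; the cert paths are where certbot typically writes them, and the cipher settings are a rough sketch to be replaced with a current hardening guide:

```nginx
# Step 3: generic redirect server block, port 80 -> 443
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com;

    # Step 2: cert paths as certbot emits them (placeholder domain)
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Modern-ish protocol settings; consult an up-to-date config generator
    ssl_protocols TLSv1.2 TLSv1.3;

    root /var/www/example;
}
```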
It's at least better than fiddling with openssl directly, but this isn't fun, it's busywork.
Anyway, all the steps together take what - all of 5 minutes? - that's if you do DNS Challenges. If you do HTTP challenges it doesn't even take that.
So you're saying this 5 minutes isn't worth the effort? That it's somehow too hard?
Secure site not available
Most likely, the web site simply does not support HTTPS.
However, it’s also possible that an attacker is involved. If you continue to the web site, you should not enter any sensitive info. If you continue, HTTPS-Only mode will be turned off temporarily for the site.
[Continue to HTTP site]
(Copied from Firefox on Android)

Personally I think that's reason enough on its own: for the price of a free Let's Encrypt cert you can deliver much better UX. You presumably want people to visit your site.
Neither desktop firefox nor chrome seem to do this by default, at least on my Mac (actually I think I'm wrong about Firefox on desktop as well, thanks to a guestbook signer!). Maybe it's a Firefox mobile thing, rather than a modernity thing?
>for the price of a free Let's Encrypt cert you can deliver much better UX
I'm going to get around to it, I promise haha.
Btw if anyone does visit the site, please do sign my guestbook: http://darigo.su/33chan (but be warned, a MITM might intercept and alter your comment!!)
they do, you probably just checked the "i'm sure this is safe" button
posted a pic on the imageboard hehe
For a personal website that people might commonly want to visit, though, consider the second point made in this other comment: https://news.ycombinator.com/item?id=40505294 (someone else mentioned this in the thread slightly sooner than me but I don't see it anymore)
The API security thing, yes, that makes sense. Personally, I run a number of servers for small groups where the sensitive stuff is SSL only - you won't even get an error going to port 80, other than the eventual timeout. But for reasons above, I cannot just turn port 80 off, and it's perfectly safe redirecting to 443.
https://httpwg.org/specs/rfc9110.html#status.426:
> The server MUST send an Upgrade header field in a 426 response to indicate the required protocol(s) (Section 7.8).
https://httpwg.org/specs/rfc9110.html#field.upgrade:
> The Upgrade header field only applies to switching protocols on top of the existing connection; it cannot be used to switch the underlying connection (transport) protocol, nor to switch the existing communication to a different connection. For those purposes, it is more appropriate to use a 3xx (Redirection) response (Section 15.4).
If you’re going to talk cleartext HTTP and issue a client error rather than redirecting, 403 Forbidden or 410 Gone are the two most clearly correct codes to use.
Ignoring the mandated semantics and requirements of status codes is sadly not as rare as it should be. A few I’ve encountered more than once or twice: 401 Unauthorized without using WWW-Authenticate and Authorization; 405 Method Not Allowed without providing Allow; 412 Precondition Failed for business logic preconditions rather than HTTP preconditions; 417 Expectation Failed for something other than an Expect header. I think it only ever really happens with 4xx client errors.
HTTP/1.1 426 Upgrade Required
Upgrade: TLS/1.0, HTTP/1.1
Connection: Upgrade
So you can use 426 Upgrade Required here, and I'd argue it's the most correct code to use in such a case. npm doesn't send the Upgrade header though, so that's a mistake.

https://www.iana.org/assignments/http-upgrade-tokens/http-up...
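A handler that follows the letter of RFC 9110 here (unlike npm) would send both headers. A minimal WSGI-style sketch in Python, with the header values taken from the response example above:

```python
def upgrade_required_app(environ, start_response):
    """Answer any plaintext request with 426 plus the mandated Upgrade header."""
    if environ.get("wsgi.url_scheme") == "http":
        start_response("426 Upgrade Required", [
            ("Upgrade", "TLS/1.0, HTTP/1.1"),  # required by RFC 9110 for 426
            ("Connection", "Upgrade"),
            ("Content-Type", "text/plain"),
        ])
        return [b"This service requires TLS. Retry over https://.\n"]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok\n"]
```

In practice the scheme check would live in a reverse proxy rather than the app, but the header requirement is the same wherever the 426 is produced.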
Unfortunately what i see happen all the time is quick fixes are pushed to the infra. For example they deploy and typo the URL. Now we have a prod outage and infra is pulled in to fix this asap. No time to wait for that 10 minute deploy pipeline that requires all the tests to run and a deploy to dev.
This happens once and then infra is asked why we don’t already redirect all URLs. Management doesn’t care about security and they just lost money. Guess what you are doing now. This is the world we live in.
Depending on server-side HTTP -> HTTPS redirects for security reinforces/rewards bad practices (linking to HTTP, users directly entering HTTP etc.), in a way that makes users vulnerable to one of the few remaining attack vectors of "scary public Wi-Fis".
I think this is pretty convincingly argued in TFA, honestly: modern browsers understand and respect HSTS headers, maintain enough local state that such headers are meaningful, and HSTS preloading is easy enough to set up that it should be achievable by most website operators.
Furthermore, it is actually quite hard to concoct a scenario where a user clicking an HTTP link and getting immediately redirected constitutes a danger to their security: unlike with API endpoints, people clicking links (and in particular, links which were typed out by hand, which is how you get the HTTP protocol in the first place) are generally not making requests that contain sensitive information (with the exception of cookies, but I would argue that getting someone to have a sane configuration for their cookies and HSTS headers is a far easier ask than telling them to stop responding to all port 80 traffic).
See the small steps over the years where it was first an add-on to force https-only mode (HttpsEverywhere, 2011), then browsers started showing insecure symbols for http connections (e.g. in 2019: https://blog.mozilla.org/security/2019/10/15/improved-securi...), and more recently I think browsers are starting to try https before http when you don't specify the protocol. I've also seen a mention of strict https mode or something, not sure if that's a private navigation feature or something yet to come, but warning screens equivalent to insecure certificate pages are getting there
If a site is down entirely, when chrome can't connect to port 443 it confidently declares that "the connection is not secure because this site does not support https" and gives a "continue" button. Then when you click "continue" nothing happens for a while before it finally admits there's nothing responding at all.
So it gives a misleading error and takes longer to figure out if a site is actually down.
Is there something to read about this, like a dev ticket?
It’s also not just about avoiding transmitting HTML etc. in plaintext; somebody being able to inject arbitrary scripts into sites you otherwise trust is bad as well.
But as I've said above, I think the HTTP -> HTTPS redirect should have never happened at the HTTP level. If we'd done it in DNS or at least as a new HTTP header ("optional TLS available" or whatnot), we could have avoided locking out legacy clients.
Specifying availability of TLS (and, perhaps, which ciphers are usable, in order to avoid the use of insecure ciphers) by DNS would do, if you can (at the client's option) acquire DNS records securely and can know that they have not been tampered with (including by removing parts of them). (This is independent of whether it is HTTP or other protocols that can optionally use TLS.)
(Actually, I think that using DNS in this way, would also solve "Gopher with TLS"; the format of gopher menus makes it difficult to use TLS, but knowing if TLS is available by looking at DNS records would make it work. Gopher servers can still accept non-TLS requests without a problem, if none of the valid selector strings on that server begin with character code 0x16 (which is always the first byte of any TLS connection, and is very unlikely to be a part of any Gopher selector string).)
It would also help to make cookies unable to cross between secure and insecure connections in either direction (always, rather than needing a "secure cookies" flag).
What was fine in plaintext?
Encryption is for more than secrecy, folks.
Why should there be more scary warnings when more websites use TLS? Sure, you get more scary warnings if you set your browser to "warn if it's http", but then you're asking for it.
Defaults. They matter.
I believe it's still possible for a client to leak its credentials if it makes a bare HTTP/1 call to an actual HTTPS endpoint. The server only gets a chance to reject its invalid TLS handshake after it's already sent headers that may contain sensitive information. After all, the TCP negotiation did go through, and the first layer 7 packet is the bare HTTP headers.
Of course the port numbers should help here, but that's not guaranteed in an environment where port numbers are assigned dynamically and rely on service discovery.
Serving exclusively HTTP/3 should close this gap but requires all clients to be ready for that. I know many internal deployments do this, but it's not a universal solution yet.
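The leak is easy to demonstrate locally: a throwaway TCP listener standing in for an HTTPS-only port receives the client's full plaintext headers, secret included, before it has any chance to reject the connection. A self-contained sketch (the bearer token is obviously fake):

```python
import socket
import threading

def capture_one_request(server_sock, out):
    """Accept one connection and record whatever the client sends first."""
    conn, _ = server_sock.accept()
    data = b""
    while b"\r\n\r\n" not in data:
        chunk = conn.recv(4096)
        if not chunk:
            break
        data += chunk
    out.append(data)
    conn.close()

# Listener standing in for an HTTPS-only port: it never completes any TLS
# handshake, yet it still receives the client's plaintext bytes.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

captured = []
t = threading.Thread(target=capture_one_request, args=(server, captured))
t.start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(
    b"GET /v1/secret HTTP/1.1\r\n"
    b"Host: 127.0.0.1\r\n"
    b"Authorization: Bearer sk-test-123\r\n\r\n"  # already on the wire in the clear
)
t.join()
client.close()
server.close()

assert b"Bearer sk-test-123" in captured[0]
```

Nothing the server does afterward (TLS alert, RST, error body) can unsend those bytes, which is exactly the point made above.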
This would prevent my fat fingers from ever even making the mistake.
BLOCK outgoing port 80
How bad would that be? Would I be shooting myself in the foot somehow?
Perhaps I would do it on the egress rule for where my requesting service is running like in ECS.
> Node.js's built-in fetch happily and quietly followed those redirects to the HTTPS endpoint.
Okay.. does nodejs fetch respect HSTS?
Many languages/libraries use libcurl in some shape or form, but whether they set up an on-disk HSTS store or not - I don't know either.
Config file writes are to be expected for user-facing software, but for developers it should error out so they can explicitly choose what it should do.
E.g., "Error: HSTS response received after a redirect from HTTP, but no storage location specified for security states. Use the https:// protocol, specify a file location, or set insecure_ignore_hsts=true."
Edit: but it's a very legitimate question?! Just saw you got downvoted, I have no idea why
HSTS really only makes sense from a browser perspective (or, rather, a "permanently installed, stateful client" perspective). For an API like fetch it doesn't even make sense as a question IMO.
Whether that's still common enough to warrant the extra complexity in the fetch function is not something I'm qualified to judge
Why not?
I've noticed a gradual increase in this behavior during the past ... year maybe? I think for a lot of new people, downvoting equates to disagreeing, which is not ideal.
Although, I also have no idea why someone would disagree with a neutral question like GPs, lol.
Hopefully, it's not the beginning of the end for HN as it is a great website.
Why not just implement your own fetch wrapper that throws if it's not an https connection?
Or use a library to do it. The core fetch functionality shouldn't have to deal with HSTS. There may be legitimate reasons to fetch over HTTP even after you received an HSTS header - for testing purposes, for example.
> Why not just implement your own fetch wrapper that throws if it's not an https connection?
That's the developer dealing with HSTS.
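The wrapper idea translates directly to any language; a Python sketch, where `fetch_fn` is a stand-in for whatever HTTP client you actually use, and the escape hatch is named loudly on purpose:

```python
from urllib.parse import urlsplit

def https_only_fetch(url, fetch_fn, *, allow_insecure=False):
    """Refuse to even open a connection unless the URL is https."""
    scheme = urlsplit(url).scheme.lower()
    if scheme != "https" and not allow_insecure:
        raise ValueError(f"refusing non-HTTPS URL: {url!r}")
    return fetch_fn(url)
```

Unlike HSTS, this needs no stored state: the check runs before any bytes leave the machine, which is the only point at which it helps.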
Sure, there is dns-01 and tls-alpn-01 but I presume that majority uses http-01 and migrating it is not a trivial matter.
Breaking an API because you read a blog post is a bad idea
Then you can work with those customers to help them upgrade their clients to HTTPS.
- Remove the ALB's HTTP listener
- Remove port 80 from the ALB's security group
Example: when teaching low-level socket programming, plain http is the easiest place to start.
Example: when providing simple, nonsensitive, read-only info via API, the overhead of https and certificate management seems unnecessary.
It seems too easy to make an error and end up in a situation like the one the post explains.
For testing/prototyping, it is invaluable to turn off all the security to rule out security misconfiguration instead of application error.
If your API is non-sensitive/relies on out of band security (like large files with checksums), you may still not want https, so there should be some configuration to turn it off. And for "integrations" like jsdelivr, perhaps https libraries should follow this rule, while http ones can have the flag off...
Then, if you mix the two (http and https), perhaps they can provide a noticeable alert to the user rather than failing silently...
Excuse me, short question:
If I am not offering a non-TLS endpoint in the first place, and the client, for some reason, decides to take something that is literally called "SECRET", and decides to shout it across the open internet unencrypted...
...how is that my problem again?
Why should my setup be more complex than it needs to be, to make up for an obvious mistake made by the client?
If you do not want to do that, then don't accept connections on port 80, at least when version 6 internet is being used. (For version 4 internet, it is possible that you might use the same IP address for domain names that do want to accept connections on port 80, so you cannot easily block them in such a case.)
And, if you want to avoid compromising authentication data, then TLS is not good enough anyways. The client will need to know that the server certificates have not been compromised. HMAC will avoid that problem, even without TLS.
There is also the payload data. TLS will encrypt that, but the URL will be unencrypted if TLS is not used, whether or not the API key is revoked; the URL and headers may contain stuff other than API keys.
Some APIs also might not need keys; e.g. read-only functions for public data often should not need any kind of API keys (and should not have mandatory TLS either, although allowing optional TLS is helpful, since it does provide some security, even though it doesn't solve everything).
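For context, the HMAC scheme alluded to here is request signing with a pre-shared key, so the key itself never travels with the request. A minimal sketch; the canonicalization is invented for illustration, and real schemes (AWS SigV4, for instance) add timestamps and nonces against replay:

```python
import hashlib
import hmac

def sign_request(secret: bytes, method: str, path: str, body: bytes) -> str:
    """Only the signature, never the secret, goes on the wire."""
    msg = method.encode() + b"\n" + path.encode() + b"\n" + body
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def verify_request(secret: bytes, method: str, path: str, body: bytes, sig: str) -> bool:
    expected = sign_request(secret, method, path, body)
    return hmac.compare_digest(expected, sig)  # constant-time comparison
```

A passive sniffer sees only the signature, and tampering with the method, path, or body invalidates it; but as the comment notes, nothing here hides the payload itself, so this complements TLS rather than replacing it.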
HSTS is even worse.
TLS prevents spies from reading and tampering with your messages, but does not prevent the server from doing so (although in this case it might be unimportant, depending on the specific file being accessed). It also is complicated and wastes energy, and there are sometimes security vulnerabilities in some implementations so does not necessarily improve security.
Of course, these ideas will not, by itself, improve security, and neither will TLS; their combination also won't do. You will have to be more careful to actually improve security properly. Some people think that, if you add TLS and HTTPS, and insist on using it, then it is secure. Well, it is very wrong!!! TLS will improve security in some ways as described in the previous paragraph, but does not solve everything.
It is also problematic if a client has no option to disable TLS (or to use unencrypted proxies), since if you deliberately want to MITM your own computer, you then have to decrypt and re-encrypt the data twice. If the client uses TLS by default that would work, although if the protocol comes from a configurable URL it might be governed by that URL instead, and URLs do not always come from the configuration file. Even when they do, checking for "https" won't necessarily help, since typographical errors remain possible elsewhere (e.g. in the domain name); specifying which certificates to expect might sometimes help, or the program might display a warning message. Another problem is if the client and server require different versions of TLS and it is difficult to change the software (there are reasons you might want to change only some parts of it, and that can be difficult); using a local unencrypted proxy which connects to the server over TLS can avoid problems like this, too.
> The client will need to know that the server certificates have not been compromised. HMAC will avoid that problem, even without TLS.
HMAC doesn't solve the problem: the client still doesn't know that the shared key isn't compromised. What does it even mean for either a client or server to know something is compromised? If Alice and Bob have a shared key, what can they do to ensure Mallory doesn't also have the shared key?
Of course, that is true, but TLS doesn't help with that, either. However, if a reverse proxy (especially one run by a third party) or something similar is compromised, HMAC prevents the attacker from learning the shared key, as long as the reverse proxy never sees it (unless they can somehow trick the server into revealing it, but that is a separate issue).
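The point above is that with HMAC the key itself never travels on the wire, only a MAC over the request, so an intermediary that sees the traffic cannot recover the key. A minimal sketch of that scheme (the endpoint path and key are illustrative, not from any real API):

```python
import hmac
import hashlib

def sign_request(shared_key: bytes, method: str, path: str, body: bytes) -> str:
    """Compute a MAC over the request; only this MAC is sent, never the key."""
    message = method.encode() + b"\n" + path.encode() + b"\n" + body
    return hmac.new(shared_key, message, hashlib.sha256).hexdigest()

def verify_request(shared_key: bytes, method: str, path: str, body: bytes,
                   signature: str) -> bool:
    expected = sign_request(shared_key, method, path, body)
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, signature)
```

A reverse proxy forwarding such a request sees only the signature; it can replay that exact request, but it cannot derive the shared key or sign a different one.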
An additional issue arises if the channel is compromised even before the client receives the key for the first time, and the key is sent over that same channel; in that case neither TLS nor HMAC will help (certificate revocation may help here, but some other method will then be needed to correctly trust the new certificate).
However, different services may require different levels of security (sometimes, hiding the key isn't enough; and, sometimes, encrypting the data isn't enough). How you will handle that depends on what security you require.
A strawman fallacy.
> A great solution for failing fast would be to disable the API server's HTTP interface altogether and not even answer to connections attempts to port 80.
> We didn't have the guts to disable the HTTP interface for that domain altogether, so we picked next best option: all unencrypted HTTP requests made under /api now return a descriptive error message along with the HTTP status code 403.
So close and yet … their misconfigured clients will still be sending keys over unencrypted streams. Doh
And how does disabling the HTTP interface altogether prevent that? In that case, any sensitive credentials are still already sent by the client before the server can do anything.
People can make up their own minds if that's a good argument or not.
I can't think of large differences. What comes to mind are two human factors that both speak against it:
- Having an HTTP error page informs the developer they did something wrong and they can immediately know what to fix, instead of blindly wondering what the problem is or if your API is down/unreliable
- That page, or a config comment, will also inform the sysadmin that gets hired after you retire comfortably that HTTP being disabled is intentional. Turning something off that is commonly on might be a time bomb waiting for someone who doesn't know this to turn it back on
Edit:
Just saw this reason, that's fair (by u/piperswe) https://news.ycombinator.com/item?id=40505545
> If the client can't open a TCP connection to port 80, there's no unencrypted path to send the API keys down
If that immediately errors out on the first try, though, what is the risk of the key being intercepted? The dev would never put that in production, so I'm not sure how the pros and cons stack up. I also loved this suggestion from u/zepton which resolves that concern: https://news.ycombinator.com/item?id=40505525 invalidate API keys submitted insecurely
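The invalidate-on-insecure-submission idea can be sketched as server-side logic (hypothetical names; a minimal model of the policy, not StackExchange's actual implementation):

```python
# Minimal sketch: any key that arrives over plain HTTP is revoked on the spot,
# so a captured key is useless going forward (modulo the racing-MITM caveat
# discussed elsewhere in the thread).

revoked_keys: set[str] = set()

def handle_api_request(scheme: str, api_key: str) -> tuple[int, str]:
    if api_key in revoked_keys:
        return 401, "API key has been revoked"
    if scheme != "https":
        revoked_keys.add(api_key)  # key was exposed in cleartext: burn it
        return 403, "Key sent over unencrypted HTTP; it has been revoked"
    return 200, "OK"
```

This fails fast for the developer (403 with a descriptive message on the very first insecure request) while also making the leaked credential worthless.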
Why does everything have to be so complicated?
In fact, if clients do honor HSTS headers, a simple `Strict-Transport-Security: ...; preload` header would have fixed the issues mentioned in the article.
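For reference, a sketch of how that header would be constructed and attached to every HTTPS response (values are illustrative; actually preloading additionally requires submitting the domain to the browser preload list, and `max-age` must be at least one year for eligibility):

```python
# Build an HSTS header value; max_age is in seconds (one year shown).
def hsts_header(max_age: int = 31536000, preload: bool = True) -> tuple[str, str]:
    value = f"max-age={max_age}; includeSubDomains"
    if preload:
        value += "; preload"
    return ("Strict-Transport-Security", value)
```

Note that HSTS only helps clients (mostly browsers) that implement it; a plain HTTP library ignoring the header gets no protection from this.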
As usual, any comment stating that people should “just” do x, is wrong.
I don't think non-enterprise, non-machine use cases should be automatically handled, though. Attempting a client upgrade is better than not, but we should be clearer about whether our devices are acting safely, i.e. calling out the change, and in the case of local HTTP usage, reminding users to use visible, out-of-band verification methods.
Of course this only works if the default is secure, but I am glad that browsers still let me go unencrypted when I really need to; I prefer the giant warning banners...
• It allows very low power embedded devices to connect without extra overhead.
• It's not a real security concern if you're on a private network.
I'm not convinced that private networks should be assumed secure by default.
If you are still concerned, you can make API keys that were registered over TLS require TLS, while those that weren't do not.
(However, the note about private networks only applies if you run the service yourself. Sometimes this will be the case, though. Even then, the administrators can configure it to use TLS if desired.)
1) HTTP can be modified by a man in the middle
2) It's better to default to requests and responses being private, even if you're only using a non-commercial/institutional service.
You could say "The person chose to send requests to HTTP instead of HTTPS" and assume that the consumer of the API didn't care about privacy but, as the article points out, it's easy to typo http instead of https.
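A client-side guard against exactly that typo is cheap; a sketch (the URL is illustrative):

```python
from urllib.parse import urlparse

def require_https(url: str) -> str:
    """Refuse to send credentials anywhere but an https:// URL."""
    if urlparse(url).scheme != "https":
        raise ValueError(f"refusing to send API key over non-HTTPS URL: {url}")
    return url
```

Calling `require_https("http://api.example.com/v1/data")` raises immediately, so the typo is caught before any key leaves the machine.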
HTTPS-only should be the default. Plain-text information delivery protocols that can easily be MITMd are unsuitable for almost all uses.
This just feels like contrarianism.
HTTP is better because: it lasts forever without maintenance, it's easier to learn to set up with no third parties required, all software can access an HTTP website, it has low resource requirements, and HTTP+HTTPS is perfectly fine.
Whereas CA TLS only lasts a year or two without maintenance, is so complex to set up and keep running that a literal ecosystem of programs (acme, acme2, etc.) exists to hide that complexity, only software from the last few years can access CA TLS websites because of TLS version sunsetting, root cert expirations, and the like, and everyone centralizing on the same CAs makes it extremely easy to apply political/social pressure for censorship in a CA-TLS-only browser world. Additionally, requiring third-party participation to set up a website makes it harder for people to learn how to run their own, and it requires more compute resources.
CA TLS only feels like it should be "required" when the person can't imagine a web browser that doesn't automatically run all untrusted code sent to it; when the only types of websites the person can imagine are commercial or institutional rather than personal, and the person believes all the web should cargo-cult the requirements of commerce. Personal websites involving no monetary or private-information transactions don't actually need to worry about targeted MITM, and there's no such thing as open wireless access points anymore.
People need to stop pretending that HTTP injection is complicated enough to happen only in targeted scenarios. It's not; it's completely trivial.
Let's take a weather service. Seems like weather information is a read-only immutable fact and should not be something that needs protection from MITM attacks. You want to reach the largest audience possible and your authoritative weather information is used throughout the world.
One day, an intermediary system is hijacked which carries your traffic, and your weather information can be rewritten in transit. Your credibility for providing outstanding data is compromised when you start serving up weather information that predicts sunny skies when a tornado watch is in effect.
Additionally, you have now leaked information related to the traffic of your users. Even if the request is just vanilla HTTP-only, an adversary can see that your users from one region are interested in the weather and can start building a map of that traffic. They also inject a javascript payload into your traffic that starts computing bitcoin hashes and you are blamed for spreading malware.
In general, HTTPS protects both your interests and those of your users, even for benign data that doesn't necessarily need to sit behind "an account" or a "web login".
One thing to note is that nothing about HTTPS protects against this type of attack. Assuming your API doesn't have much else going on (most services, probably), an adversary can easily see that you visited mycoolweatherapi.example regardless of whether HTTPS is being used, via the DNS lookup and the TLS SNI field.
What TLS protects is higher on the network layer cake
I think this is the most convincing argument, but some data doesn't need to be confidential. The weather is perhaps the more pointed case, but for large protected binaries (whether executables or inscrutable blobs, e.g. encrypted or signature-protected archives), it's a bit moot and possibly just worse-performing.
However, also remember that https does not protect all data, just the application portion - adversaries can still see, map, and measure traffic to bobthebaker.com and sallyswidgets.biz. To truly protect that information, https is the wrong protocol, you need something like Tor or similar bit mixing.
Why would they want to do that? Is your weatherman always right?
> Additionally, you have now leaked information related to the traffic of your users. Even if the request is just vanilla HTTP-only, an adversary can see that your users from one region are interested in the weather and can start building a map of that traffic.
Ah, yes, people are interested in the weather. Wow!
Of course, they could get the same info from observing that users are connecting to the IP address of a weather API provider.
> They also inject a javascript payload into your traffic that starts computing bitcoin hashes and you are blamed for spreading malware.
Got there eventually. Crappy ISPs.
Then you have the post-Jia-Tan world: if there is even the slightest remote possibility, you just don't want to be exposed.
Just like washing hands after peeing, just do HTTPS and don't argue.
There are trade-offs, but HTTP has its place. HTTP is far easier to set up, more robust, far more decentralized, supports far more other software, and has a longer lifetime. For humans who aren't dealing with nation-state threat models, those attributes make it very attractive and useful.
We should not cargo cult the requirements for a multi-national business that does monetary transactions and sends private information with a website run by a human person for recreation and other humans. There is more to the web than commerce and we should design for human persons as well as corporate persons.
HTTP MITM is way too cheap and accessible to pretend you're not vulnerable to it constantly or that you need to be a target, you don't.
Remember how the NSA was just putting gear in ISPs and at internet gateway nodes to pass all traffic through their automated tooling?
That was not a monetary or business matter; it is not a "threat model" thing. Having all traffic encrypted is a must for basic human freedom.
Using HTTPS everywhere is one such rule of thumb.
It’s just not worth expending the mental energy considering every single edge case (while likely forgetting about some) in order to try and work out whether you can cut the corner to use HTTP rather than HTTPS, when using HTTPS is so easy.
ISPs injecting ads into HTTP is documented and well known: they can inject anything into HTTP in transit, automatically and at basically zero cost. On one side the ISP, on the other all kinds of free WiFi.
It is not only “NSA will get me” or only financial transactions. There are new known exploits in browsers and systems found on daily basis.
So the reasoning that someone has to be of interest isn't true, because interception is cheap and automated; TLS raises the cost of simply scooping stuff up.
It's just clear text.
1. hides any private information anywhere in the request, URL or otherwise, API key or otherwise. Maybe you're fine if someone knows you used Bing (revealed through DNS lookups), but not what query you entered (encrypted to be decryptable only by Bing servers). An API key is obviously secret but something as oft-innocuous as search queries can also be private.
2. disallows someone on the network path from injecting extra content into the page. This can be an ISP inserting ads or tracking (mobile carriers have been playing with extra HTTP headers containing an identifier for you for advertising reasons iirc) or a local Machine-in-the-Middle attack where someone is trying to attack another website you've visited that used https.