What's discouraging terrorism is the US's overreaction outside the US. It's become very clear to terrorist organizations that if they attack the US, the US is going to hit back, even if it's insanely expensive and causes collateral damage. The people in charge, and many people around them, end up dead.
Remember ISIS, the Islamic State? ISIS is down to 1.5 square miles, surrounded, and everybody but the most fanatical fighters is surrendering. The holdouts have days to live.
We don't need more Big Brother.
The problem with this argument is that it can't justify continued spending, because the argument is unfalsifiable. We need to spend $450B/year on bear-repelling rocks because we currently pay for the rocks and there are no bears. And if any bears do appear, then we obviously didn't have enough bear-repelling rocks and we need to start spending $900B/year.
If there is a real question as to whether the ~0 bears are a result of the rocks, it's time to cut the bear-repelling-rock budget in half and see how many bears there are next year. If it's still ~0, then the budget didn't need to be as high as it was, and it may still be too high.
Also, cutting the government's budget would not impact the cost of compliance to corporations.
Many who throw jabs at J2EE (written that way on purpose) never had the joys of trying out xBaseEE, CEE, C++EE (CORBA, DCOM/MTS), ...
Compared to some of their other hardware, these servers were better suited to organizations with demanding needs: minimizing downtime, lots of compute power, configuration flexibility.
But of course people quickly realized that a key characteristic of actual enterprise computing is large budgets, so it almost immediately turned into a game of labeling things with the word "enterprise" in hopes of vacuuming up as much of that money as possible.
There are these large corporations with a significant investment in their existing infrastructure and systems - and now they all need to make them interoperate. The mindset is “how do we make our CORBA ERP communicate with their Java CRM without making any changes to either of them?”. Hence SOAP: it packages existing method-call semantics into an HTTP message that will cross a firewall - not even the IT dept needs to get involved to change firewall rules. And they hammered out a working spec within a couple of years. That’s impressive considering the slow-moving nature of large, risk-averse enterprises. We now know that REST-is-Best, but it took the industry around 10 years to figure that out, and another 5 years for the tooling and ecosystem to catch up. SOAP was a quick fix that was needed immediately.
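To make the "method call in an HTTP message" point concrete, here's a minimal sketch of what such a SOAP 1.1 envelope looks like and how generically it can be unpacked. The service, method, and namespace names (`GetCustomer`, `urn:example:crm`) are invented for illustration; the point is that plain XML tooling is all the receiving side needs.

```python
import xml.etree.ElementTree as ET

# A minimal SOAP 1.1 envelope wrapping a single RPC-style method call,
# as it would travel in the body of an HTTP POST to the remote system.
# The method and namespace names here are hypothetical.
envelope = """<soap:Envelope
    xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetCustomer xmlns="urn:example:crm">
      <CustomerId>42</CustomerId>
    </GetCustomer>
  </soap:Body>
</soap:Envelope>"""

# The receiver needs only generic XML parsing to recover the call,
# which is what made SOAP workable as a lowest-common-denominator bridge
# between, say, a CORBA ERP and a Java CRM.
root = ET.fromstring(envelope)
body = root.find("{http://schemas.xmlsoap.org/soap/envelope/}Body")
call = list(body)[0]
method = call.tag.split("}")[1]                       # "GetCustomer"
arg = call.find("{urn:example:crm}CustomerId").text   # "42"
print(method, arg)
```

Since the whole thing is just an XML document in an HTTP POST over port 80 or 443, it passes through firewalls that would have blocked CORBA's IIOP or DCOM's RPC ports.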
So I’d recharacterise “Enterprise software” as “fits into your existing system and does what you need it to, right now” - and its M.C. Escher-inspired architecture is a consequence of needing to support and fit into whatever systems were prevalent when the project was started.
It’s not Enterprise software that I have problems with for being rigid and inflexible - it’s cutting-edge software. I was working with Neo4j in 2016 and having issues with security, because it didn’t have any built-in security support until last year. I had to change what I was doing to accommodate them, instead of vice versa.
It turned me grey, bald and cynical aka experienced in every possible way to fuck something up. That turned out to be quite valuable!
> Tue, 27 September 2016 18:21 UTC
> The various suggestions for creating fixed/static Diffie Hellman keys raise interesting possibilities. We would like to understand these ideas better at a technical level and are initiating research into this potential solution.
The core argument made by BITS is that they need a way to log TLS traffic such that it can be decrypted later, in order to provide data retention in line with regulations. While this could be done by logging all ephemeral keys generated by the servers, BITS argues that this isn’t practical due to their use of dedicated packet logging hardware that is key-ignorant. Instead they want to use non-forward-secret TLS so they can decrypt past messages easily. Their beef with TLS 1.3 is that it removes all non-FS key exchange methods, and further that explicitly obsoleting TLS 1.2 as a standard pushes them to adopt 1.3 in an enterprise environment (or risk current/future regulatory scrutiny over their use of an obsoleted standard). Hence they want to develop a competing, active standard with non-FS key exchange.
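The mechanics of why a static (non-forward-secret) server key enables exactly this kind of after-the-fact decryption can be shown with a toy finite-field Diffie-Hellman exchange. The group below is deliberately tiny and insecure; real TLS uses large groups or elliptic curves. The point is structural: if the server's key is static, a purely passive logger that holds that one private key can recompute every session's secret from captured handshakes.

```python
import secrets

# Toy finite-field Diffie-Hellman, solely to illustrate the non-FS property.
# p is the Mersenne prime 2**31 - 1; far too small for real use.
p = 2147483647
g = 7

# The server's key is STATIC: generated once, reused for every connection.
server_static_priv = secrets.randbelow(p - 2) + 1
server_static_pub = pow(g, server_static_priv, p)

# One recorded handshake: only the client's public value crosses the wire.
client_priv = secrets.randbelow(p - 2) + 1
client_pub = pow(g, client_priv, p)
session_secret = pow(server_static_pub, client_priv, p)

# A passive packet logger that was handed the static private key (e.g. via
# escrow) recomputes the same secret from the captured client_pub alone --
# no interception, no per-session key logging required.
recovered = pow(client_pub, server_static_priv, p)
assert recovered == session_secret
```

With ephemeral keys, by contrast, there is no single long-lived private key to escrow: each session's secret dies with the handshake, which is precisely the property BITS wants to avoid.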
> It is vital to financial institutions and to their customers and regulators
> that these institutions be able to maintain both security and regulatory compliance
> during and after the transition from TLS 1.2 to TLS 1.3.
One example of that is NIST's recommendation on password policies. Most of the time the regulatory mandates are outdated and it's hard to bring them up to speed; in the meantime, as a financial institution you simply cannot have your IT system be noncompliant, even if that means a less secure practice.

This just isn’t true, or rather “compliance” tends to be quite fuzzy.
Regulators generally expect you to follow recommendations from places like NIST. But it’s not a hard requirement; you just need to explain why deviating is better.
Unfortunately most financial institutions trip up at the “explain why it’s better” bit. Either because they aren’t competent enough, or (more likely) because they can’t be bothered.
It's not "useful" if your goal is to intercept and decrypt messages that are supposed to be secure, which is what both regulated entities and baddies want to do.
If you don't require forward secrecy you introduce a weakness. The protocol won't distinguish between whether that weakness is being exploited by regulated entities or baddies.
You don't need to weaken TLS in order to do what the regulated entities want to do - you just need to do the retention on the endpoints. The issue isn't that they can't do that, it's that they don't want to do that, probably for cost or convenience reasons. Those aren't reasons to weaken TLS for everyone who actually wants secure comms.
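One concrete, standard way to do that endpoint-side retention already exists: TLS key logging in the NSS key-log format, the same mechanism browsers expose via the `SSLKEYLOGFILE` environment variable. As a sketch, Python's `ssl` module (3.8+) supports it directly; the log path below is illustrative.

```python
import os
import ssl
import tempfile

# Endpoint-side retention sketch: every handshake made through this context
# appends its session secrets to a key-log file in NSS key-log format.
# A compliance pipeline can collect these files from its own endpoints and
# later decrypt its recorded packet captures (e.g. in Wireshark), while the
# traffic on the wire keeps full TLS 1.3 forward secrecy.
keylog = os.path.join(tempfile.mkdtemp(), "tls-keys.log")  # illustrative path

ctx = ssl.create_default_context()
ctx.keylog_filename = keylog

# Lines written during handshakes look like:
#   SERVER_HANDSHAKE_TRAFFIC_SECRET <client_random_hex> <secret_hex>
# which decryption tooling consumes directly.
```

This shifts the cost to deploying key collection on the endpoints rather than relying on a key-ignorant middlebox, which is exactly the trade-off the regulated entities are objecting to.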
If they're actually recording the entirety of every TCP stream that comes into the datacenter, how many sets of credentials do you think are stored in that system? And right now, they're all encrypted with a single or small number of keys, that must be available to the system that is storing and parsing this data.
Also, given the breaches that have happened, I keep waiting for there to be a set of regulations from the other side requiring adequate protection and deletion of data. He seems entirely unconcerned with that aspect.
PS. I can't even get ETSI's website to load! https://www.etsi.org/
If a TLS 1.3 client will happily connect to an ETS server that isn't playing by the rules, doesn't that indicate a flaw in 1.3?
In this case, the server is using a predictable number instead of a random one for part of the protocol. Possibly a client could detect this by doing multiple transactions and seeing if a number gets reused, but that seems outside the scope of TLS.
There is a way to detect this. Record the last ephemeral public key that server used with you. If it uses the same one again, refuse to connect.
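As a conceptual sketch of that check: pin the last key share each server presented and refuse if one repeats. Real TLS stacks don't expose the peer's ephemeral key share this cleanly to application code, so treat this as the idea rather than an implementation; the host name and key bytes below are made up.

```python
# Remember the last "ephemeral" public key each server presented to us.
last_seen = {}

def check_ephemeral_key(host, server_key_share):
    """Raise if `host` reused the key share from our previous handshake."""
    if last_seen.get(host) == server_key_share:
        raise ConnectionError(host + " reused an 'ephemeral' key: refusing")
    last_seen[host] = server_key_share

check_ephemeral_key("bank.example", b"\x01" * 32)      # first sight: fine
check_ephemeral_key("bank.example", b"\x02" * 32)      # fresh key: fine
try:
    check_ephemeral_key("bank.example", b"\x02" * 32)  # reuse: rejected
except ConnectionError as e:
    print(e)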
> No opaque data leaves my network
Then this is the only way you can have outbound HTTPS connections. And for e.g. a bank, certain legal firms, or any company that has a lot of sensitive data they either don't want to be leaked, or at least want the option of detecting when it is leaked, that is a somewhat reasonable stance. In the case of banks, this is needed for regulatory compliance regarding insider trading. For legal companies, I imagine this is about ensuring certain confidentiality. I could see the same thing for companies dealing with trade-secrets.
The statement 'Just log it on the end-points' presumes complete access to those end-points and all software running on them.
This method is considered better than terminating TLS early at a proxy and setting up a separate tunnel to the clients because breaking PFS is passive, rather than active. Thus it is a lot less resource intensive, a lot less vulnerable (no internet facing box that, if broken, has all communication in plaintext), and introduces no extra latency.
It is essentially a 'better' way to do an authorized MitM on everything on your network, and some companies want this authorized MitM. Like any authorized MitM, it introduces a third party who can compromise security, which is not generally desirable, but some companies don't mind being that third party to their own employees.
> The statement 'Just log it on the end-points' presumes complete access to those end-points and all software running on them.
There still has to be some control over the endpoints. Otherwise, what prevents them from negotiating an algorithm in TLS 1.2 that has PFS?
And I am not sure if you're attempting to address this, but instead of terminating at a more edge-ish node, why not just decrypt and re-encrypt there? (So it is still encrypted internally, but the node can inspect the data in an authorized manner.) You seem to address it, but I'm not sure what you mean: yes, having a centralized box decrypting your traffic means that an attacker who gets access to it can see a lot. But what were you doing in TLS 1.2 with a non-PFS ciphersuite that didn't involve a machine with the ability to decrypt everything?
If your stance is ‘no opaque data leaves my network’ your only option is an air gap.
This expands to any IoT device with proprietary software on it, which by 2023 will be quite a lot of things.
And if you are in a corporate environment using a company computer you forfeit your privacy anyway. You can always go somewhere else or do your banking and Facebook on a different machine / not on company time.
Disabling PFS and thus enabling the decryption of all TLS sessions should be a conscious decision rather than something that was there 'by default' (and could easily be abused).
You can address some of the same concerns by implementing an agent on every connected computing device, but this brings much greater complexity: you are monitoring potentially hundreds to thousands more places and still have to worry about whether you have complete coverage.
Consider an analogy of going through international customs. Do you employ customs officials at the border who are allowed to sample and inspect private belongings to verify laws are being followed? Or do you employ an official to help pack the belongings of each individual who you think may eventually cross the border? The second example is a bit stretched but hopefully illustrates the scale problem.
Without telling the person whose things were packed that they were packed by the official.
Also, banks seeing their own corporate traffic is ethical and moral. Whether they simply need to find another way to read all data leaving their network is another part of the story.
EFF: "Everything sent over the network should be a secret! Nobody has a good reason to inspect traffic, it puts users' privacy at risk!"
Bank: "We keep trillions of your dollars. Inspecting our own traffic is how we make sure nobody is stealing it. We're a pretty big organization, so this stuff costs a lot of money, and is complex and takes a long time to get right. Can you give us a way to do that in this new TLS standard?"
EFF: "No!! Privacy!!!"
Bank: "Ok... I guess we'll have to make our own standard, then...?"
EFF: "Don't ANYONE use that standard, it will cause REAL HARM!!!!"
Bank: "..... Nobody else was going to... except us....."
Not necessarily trivial but not exactly impossible for someone who controls one of the endpoints.
Active interception with a middlebox still works exactly the same as it always has.
I doubt this pretty strongly.
And if they can put in enough effort to implement a new protocol, they can put in enough effort to log some keys.
They could do it in a much safer manner, too. They could have a TLS extension that appends the session key to the start of every connection, encrypted so that only the inspection device can use it. Then it would be transparent, connections not using it could be easily blocked, and you would still have forward secrecy in case the private key leaked.
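A toy sketch of that "escrowed session key" idea: wrap each connection's session key so only the inspection device can unwrap it, and ship the blob alongside the handshake. A real design would encrypt to the device's public key (e.g. with HPKE); to keep this stdlib-only, a shared symmetric wrapping key stands in, and the XOR-pad construction is illustrative, not a vetted cipher.

```python
import hashlib
import hmac
import secrets

# Known only to the inspection device. In a real design this would be the
# device's public key, so endpoints could wrap but never unwrap.
INSPECTION_KEY = secrets.token_bytes(32)

def wrap_session_key(session_key):
    """Wrap a 32-byte session key for the inspection device (toy scheme)."""
    nonce = secrets.token_bytes(16)
    pad = hashlib.sha256(INSPECTION_KEY + nonce).digest()
    ct = bytes(a ^ b for a, b in zip(session_key, pad))
    tag = hmac.new(INSPECTION_KEY, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag  # rides in the clear alongside the handshake

def unwrap_session_key(blob):
    """Recover the session key; reject blobs that were tampered with."""
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    want = hmac.new(INSPECTION_KEY, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, want):
        raise ValueError("tampered escrow blob")
    pad = hashlib.sha256(INSPECTION_KEY + nonce).digest()
    return bytes(a ^ b for a, b in zip(ct, pad))

session_key = secrets.token_bytes(32)
assert unwrap_session_key(wrap_session_key(session_key)) == session_key
```

The properties the comment describes fall out: the extension is visible on the wire so non-participating connections can be blocked, and the TLS exchange itself stays forward-secret; only compromise of the escrow key (not the server's TLS key) exposes past sessions.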