If your stance is ‘no opaque data leaves my network’, then your only option is an air gap.
If nothing else, it would make the remaining traffic stand out more: you wouldn't be spending time auditing normal apps, which decode cleanly, and anyone trying to circumvent such a system would likely be forced to do things that stand out far more than routine usage.
As a simple example, an organization that does this kind of monitoring is unlikely to allow users to install arbitrary applications or visit any site on the web. With a standard setup, someone trying to exfiltrate data could just hit a popular site like GitHub, Gmail, or Dropbox, but if they need custom encryption or steganography code they're either forced to host it somewhere far less common (i.e. more likely to stand out) or to install something locally, where client monitoring can flag an unusual browser extension or application.
The reality is that the world kept turning without these proxies, and it will keep turning once they are made obsolete.
There are people who need to check off boxes in order to comply with certain rules; their actual security posture is not all that important to them.
ETS just means they don't have to spend money on replacing their man-in-the-middle monitoring gear with client-local solutions on every workstation and server.
It all seems pretty silly though. Regulators aren't idiots; they know that HTTPS is everywhere now, and that TLS 1.3 means third parties listening in on connections are going to be a thing of the past, so the regulations will change to reflect that.
Doesn't it still require altering the TLS implementation to use the static DH keys instead of following the TLS 1.3 standard of using random keys?
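Right, that's the core of it. A toy sketch of the idea, using plain finite-field Diffie-Hellman in stdlib Python (the prime and generator here are illustrative only, not a real TLS group, and real TLS 1.3 uses X25519/P-256 key shares plus an HKDF key schedule): once the server's DH share is static and its private half is handed to the monitoring box, a purely passive observer can recompute every session secret from the client key shares it sees on the wire.

```python
# Toy illustration of static vs. ephemeral DH in a TLS-1.3-like exchange.
# Parameters are deliberately small/simple -- do NOT use for anything real.
import secrets

P = 2**127 - 1  # a Mersenne prime, fine for toy modular arithmetic
G = 3

def keypair():
    """Generate a DH private exponent and the corresponding public share."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

# Standard TLS 1.3: this keypair would be freshly generated per handshake.
# In an ETS-style deployment it is static, and the private half is escrowed
# with the monitoring middlebox.
static_priv, static_pub = keypair()

# One client handshake: the client sends its public share in its key_share
# extension; both ends derive the same shared secret.
client_priv, client_pub = keypair()
session_secret = pow(static_pub, client_priv, P)

# A passive monitor sees only client_pub on the wire but holds static_priv,
# so it recovers the same secret without ever touching the connection.
monitor_secret = pow(client_pub, static_priv, P)
assert monitor_secret == session_secret
```

With genuinely ephemeral server keys, `static_priv` wouldn't exist anywhere after the handshake, which is exactly why passive middlebox decryption stops working under stock TLS 1.3 and why ETS has to alter the implementation.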