Back in 2011, I was using it to learn API development by intercepting mobile app requests when I discovered that Airbnb’s API was susceptible to Rails mass assignment (https://github.com/rails/rails/issues/5228). I then used it to modify some benign attributes, reached out to the company, and it landed me an interview. The rest is history.
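For anyone unfamiliar: mass assignment is when an endpoint blindly copies client-supplied parameters onto a model, letting a client set attributes the developer never meant to expose. A minimal sketch of the pattern in Python; the `User` model and `admin` flag are made up for illustration, not Airbnb's actual schema:

```python
class User:
    def __init__(self, name: str):
        self.name = name
        self.admin = False  # should only ever be set server-side

def update_user_unsafe(user: User, params: dict) -> None:
    # Vulnerable pattern: every request parameter becomes an attribute,
    # analogous to pre-Rails-4 attribute assignment without attr_accessible.
    for key, value in params.items():
        setattr(user, key, value)

def update_user_safe(user: User, params: dict) -> None:
    # Fix: whitelist the attributes a client may set
    # (what Rails later formalized as "strong parameters").
    allowed = {"name"}
    for key in allowed & params.keys():
        setattr(user, key, params[key])

u = User("alice")
update_user_unsafe(u, {"name": "mallory", "admin": True})
print(u.admin)  # True -- the client just promoted itself

v = User("bob")
update_user_safe(v, {"name": "bob2", "admin": True})
print(v.admin)  # False -- the extra parameter was ignored
```

A proxy like mitmproxy makes this easy to probe: you just add an extra field to an intercepted request body and see whether the server persists it.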
Sometimes it's easier to use mitmproxy with an existing implementation than to read the documentation!
;)
> Chrome does not trust user-added Certificate Authorities for QUIC.
Interesting. In the linked issue, the Chrome team says:
> We explicitly disallow non-publicly-trusted certificates in QUIC to prevent the deployment of QUIC interception software/hardware, as that would harm the evolvability of the QUIC protocol long-term. Use-cases that rely on non-publicly-trusted certificates can use TLS+TCP instead of QUIC.
I don't follow the evolution of those protocols, but I'm not sure how disallowing custom certificates has anything to do with the "evolvability" of the protocol...
Does anyone know what those _reasons_ are?
They can easily release a slightly tweaked QUIC version in Chrome and support it on e.g. YouTube, and then use metrics from that to inform proposed changes to the "real" standard (or just continue to run the special version for their own stuff).
If they were to allow custom certificates, enterprises using something like ZScaler's ZIA to MITM employee network traffic would risk breaking when Google tweaks the protocol. If the data stream is completely encrypted and opaque to middleboxes, Google can more or less do whatever they want.
Kinda related: https://en.wikipedia.org/wiki/Protocol_ossification
One of the reasons for developing HTTP/2 and HTTP/3 was that it had become so difficult to make changes to HTTP/1.1: middleboxes relied heavily on implementation details, so it was hard to tweak things without inadvertently breaking people. They're trying to avoid a similar situation with newer versions.
I think it was because of the Kazakhstan (KZ) root-certificate case that browsers, and Chrome especially, moved to using their own certificate store instead of the operating system's.
Those proxies tend to be strict in what they accept, and slow to learn new protocol extensions. If Google wants to use Chrome browsers to try out a new version of QUIC with its servers, proxies make that harder.
Maybe more accurately: “Chrome is designed to work for you insofar as that also works for Google.” I share the long-standing dismay that so many people willingly surrendered their data and attention stream to an ad company.
And even when it is HTTP, as other commenters said, the reverse proxy is able to handshake connections to the backend much more quickly than an actual remote client would, so it's still advantageous to use http/2 streams for the slower part of the connection.
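The usual setup is to terminate HTTP/2 (or HTTP/3) at the proxy for the high-latency client leg and speak plain HTTP/1.1 to the nearby backend. A minimal nginx sketch of that split; the hostname, certificate paths, and backend address are placeholders:

```nginx
server {
    listen 443 ssl http2;            # multiplexed streams on the slow client leg
    server_name example.com;         # placeholder hostname

    ssl_certificate     /etc/nginx/tls/fullchain.pem;
    ssl_certificate_key /etc/nginx/tls/privkey.pem;

    location / {
        # Plain HTTP/1.1 on the fast, low-latency proxy-to-backend leg,
        # where new connections are cheap to establish.
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://127.0.0.1:8080;
    }
}
```

(On nginx 1.25.1+, the `http2 on;` directive replaces the `http2` flag on `listen`.)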
That's just called a web server and not a reverse proxy then. Both are just evolutions of CGI.
At a startup I worked at a few years ago, I set up mitmproxy in dev, and if memory serves, I eventually also enabled it in prod sometimes to debug things.
That said, we did not have a lot of users. In fact, we had very, very few users at the time.
The servers which run the frameworks have to support HTTP/3. In most cases the advantages should be transparent to the developers.
Evidently Firefox is not a typical browser anymore.
That said, for many people who care about this stuff, this could be an option. There's nothing preventing you from doing this, technically speaking.
There's a small risk of triggering subresource integrity checks when rewriting JavaScript files, but you can probably rewrite the hashes to fix that problem if it comes up in practice.
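For reference, an SRI `integrity` value is just a base64-encoded digest of the exact bytes served, so a proxy that rewrites a script can recompute the value and patch the `<script>` tag to match. A minimal sketch (the script contents are made up):

```python
import base64
import hashlib

def sri_hash(content: bytes, algo: str = "sha256") -> str:
    """Compute a Subresource Integrity value, e.g. 'sha256-<base64 digest>'."""
    digest = hashlib.new(algo, content).digest()
    return f"{algo}-{base64.b64encode(digest).decode('ascii')}"

# After rewriting a script body, recompute the hash so the HTML's
# integrity attribute can be updated to match the new bytes.
original = b"console.log('original');"
rewritten = b"console.log('rewritten');"

print(sri_hash(original))
print(sri_hash(rewritten))
assert sri_hash(original) != sri_hash(rewritten)
```

The same approach works for `sha384` and `sha512`, which the SRI spec also allows; the proxy just has to rewrite the HTML and the script body in the same pass.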