Another thing you can do with a proxy like this is limit the types of requests which can be made to the 3rd party service. This is very useful if the 3rd party service doesn't support fine-grained permissions for its API keys and you don't want your application to have full access to the 3rd party service.
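A minimal sketch of the kind of request filtering such a proxy can do before forwarding anything upstream (the hosts, methods, and rules here are made up for illustration, not taken from any real proxy):

```python
# Per-host method allowlist a credential-injecting proxy might enforce.
# Everything not explicitly allowed is denied.
ALLOWED = {
    "api.stripe.com": {"GET"},          # read-only access for this key
    "api.github.com": {"GET", "POST"},  # read/write for this one
}

def is_request_allowed(host: str, method: str, path: str) -> bool:
    """Reject any request outside the per-host method allowlist."""
    methods = ALLOWED.get(host)
    if methods is None:
        return False                    # unknown host: deny by default
    if method.upper() not in methods:
        return False                    # e.g. DELETE against a read-only key
    if ".." in path or not path.startswith("/"):
        return False                    # crude path sanity check
    return True
```

The proxy calls this before injecting the credential, so a compromised app can only make the narrow set of calls you anticipated.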
(I implemented something very similar for my company, which I described here in case anyone is curious: https://sslmate.com/resources/protecting_customer_credential...)
I created a simple proxy application at my job to handle authenticating to various Atlassian services, like Jira, using OAuth 1.0a. I used the proxy not only to perform the OAuth operations on behalf of a client, but also to limit operations to read-only. It works great, and was pretty easy to make.
As the article says, creating these proxies allows for a much smaller attack surface area. The proxy itself is relatively simple.
As an aside, the whole reason I needed the proxy I made was because Atlassian uses OAuth 1.0a instead of OAuth 2, or at least they did in the past with their server products. The cloud products may do something different now.
The problem with OAuth 1.0a is that it requires clients to perform cryptographic signing operations, which tend to be complicated and tricky to implement correctly. OAuth 2 fixed that by eliminating the need for client-side cryptographic operations, pushing that work to the server (and to TLS) instead.
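For context, here's roughly the signing an OAuth 1.0a client has to do for every single request: build a signature base string and HMAC-SHA1 it. This is simplified from RFC 5849; real implementations also juggle nonces, timestamps, and the Authorization header format:

```python
import hmac, hashlib, base64
from urllib.parse import quote

def sign_oauth1(method, url, params, consumer_secret, token_secret):
    """Simplified OAuth 1.0a HMAC-SHA1 signature (RFC 5849, section 3.4)."""
    enc = lambda s: quote(str(s), safe="~")  # RFC 3986 percent-encoding
    # Parameters must be percent-encoded, sorted, and joined into one string.
    norm = "&".join(f"{enc(k)}={enc(v)}" for k, v in sorted(params.items()))
    base = "&".join([method.upper(), enc(url), enc(norm)])
    key = f"{enc(consumer_secret)}&{enc(token_secret)}"
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

Get any of the encoding, sorting, or concatenation subtly wrong and the server just rejects the signature with little to debug from, which is exactly why centralizing this in one proxy is attractive.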
You can also take this a step further and do mathematical operations on encrypted data using homomorphic encryption without ever having to decrypt the data.
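For the curious, additive homomorphism can be demonstrated with a toy Paillier cryptosystem. The primes here are tiny and this is purely illustrative, nothing remotely production-grade:

```python
import math, random

# Toy Paillier keypair (illustration only; real keys use ~2048-bit primes).
p, q = 1117, 1237
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
g = n + 1                   # standard simplification for the generator
mu = pow(lam, -1, n)        # valid because L(g^lam mod n^2) = lam mod n

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    # L(x) = (x - 1) // n, then multiply by mu to recover the plaintext.
    return ((pow(c, lam, n2) - 1) // n * mu) % n

a, b = encrypt(20), encrypt(22)
# Multiplying ciphertexts adds the underlying plaintexts:
assert decrypt((a * b) % n2) == 42
```

The proxy (or any party holding only ciphertexts) can compute the sum without ever seeing the plaintexts.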
Just one small nitpick (mainly because I worked in this space for a few years): tokens and encrypted values are different. Tokens aren't encrypted; they're randomly generated and mapped to their values through a key-value lookup table, so an attacker can never reverse-engineer them. Encrypted values, on the other hand, use a key (whether symmetric or asymmetric) and could theoretically (although pretty much never practically, if you're using something like AES-256) be cracked if someone got the key.
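The distinction is easy to show in a few lines (toy vault, illustration only; a real tokenization service persists and access-controls the table):

```python
import secrets

# Tokenization: the token is pure randomness and maps to the secret only
# via a lookup table (the "vault"); there is no key that can reverse it.
_vault: dict = {}

def tokenize(value: str) -> str:
    token = "tok_" + secrets.token_hex(16)   # random, not derived from value
    _vault[token] = value
    return token

def detokenize(token: str) -> str:
    return _vault[token]                      # only the vault can reverse it

card = "4242424242424242"
t = tokenize(card)
assert t != card and detokenize(t) == card
```

An attacker who steals a token but not the vault has nothing to attack: there is no ciphertext and no key, just a random string.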
If I compromise your Rails app, and (hypothetically) Stripe allows me to specify the message as it appears on a user's credit card statement, could I not just ask it to insert the API key in that field as well and then check my bank statement? This gets easier if there is something where a value gets reflected back to the user, say an SSO error message.
My apologies if there is already a protection for this, but I didn't see any obvious use restrictions in the GitHub README example.
I.e., if you only get control of the Rails app, you would need to find an api.stripe.com endpoint that reflects back the authentication header.
---- EDIT: No, I misunderstood it completely, you are right. But hmm. One way I can think of to solve what you mentioned is if the token itself contains the processor parameters. That way it wouldn't be possible to change how the templating works after the secret has been tokenised (e.g. by an attacker).
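A sketch of that idea: authenticate the processor parameters together with the secret under a key only the proxy holds, so neither can be changed after sealing. The key name and token format here are made up, and an HMAC stands in for the public-key sealing a real proxy would use:

```python
import hmac, hashlib, json

# Hypothetical proxy-side key; in a real design this never leaves the proxy.
PROXY_KEY = b"proxy-side key, never leaves the proxy"

def seal(secret: str, params: dict) -> dict:
    # Secret and processor parameters are bound together under one MAC.
    payload = json.dumps({"secret": secret, "params": params}, sort_keys=True)
    tag = hmac.new(PROXY_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def unseal(token: dict) -> dict:
    expect = hmac.new(PROXY_KEY, token["payload"].encode(),
                      hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expect, token["tag"]):
        raise ValueError("token or its processor parameters were tampered with")
    return json.loads(token["payload"])

tok = seal("sk_live_...", {"inject": "Authorization: Bearer {secret}"})
tok["payload"] = tok["payload"].replace("Authorization", "X-Evil")
# unseal(tok) now raises: the templating can't be swapped out after sealing.
```

An attacker who later compromises the app can replay the sealed token as-is, but can't repurpose the secret into a different header or template.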
Did the team consider developing a custom secret engine [1] for Vault? Or is it that the specific dances between Rails, tokenizer, and ssokenizer cannot be accommodated by a secret engine?
[1]: https://developer.hashicorp.com/vault/tutorials/custom-secre...
I have to imagine somebody is going to build a Secret Engine that does this.
1. "Tokenizer is an HTTP proxy that injects third party authentication credentials into requests. Clients encrypt third party secrets using the proxy's public key. When the client wants to send a request to the third party service, it does so via the proxy, sending along the encrypted secret in the Proxy-Tokenizer header. The proxy decrypts the secret and injects it into the client's request. To ensure that encrypted secrets can only be used by authorized clients, the encrypted data also includes instructions on authenticating the client."
https://github.com/superfly/tokenizer
2. "Ssokenizer provides a layer of abstraction for applications wanting to authenticate users and access 3rd party APIs via OAuth, but not wanting to directly handle users' API tokens. Ssokenizer is responsible for performing the OAuth dance, obtaining the user's OAuth access token. The token is then encrypted for use with the tokenizer HTTP proxy. By delegating OAuth authentication to ssokenizer and access token usage to tokenizer, applications limit the risk of tokens being lost, stolen, or misused."
https://github.com/superfly/ssokenizer/
If these sound interesting to you, click the submitted link for the "big long essay about how the thingies came to be."
I suspect mTLS adoption has been slow because it’s easier to reason about authentication when the mechanics are “closer” to your application code. The mental model of bearer tokens in HTTP headers is pretty easy. Using mTLS requires understanding a lot more moving parts, and TLS still feels like a magical black box in many ways.
Are there any libraries you would recommend that provide a good developer experience around using mTLS?
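Not a library recommendation, but for reference, the client half of mTLS in Python's stdlib is only a few lines once you actually have certificates (the file paths below are hypothetical; this is a config sketch, not a runnable demo):

```python
import ssl

def mtls_client_context(ca: str, cert: str, key: str) -> ssl.SSLContext:
    """Build a client-side SSLContext that presents a client certificate."""
    # Trust only our CA for verifying the server...
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=ca)
    # ...and present our own certificate so the server can verify us.
    ctx.load_cert_chain(certfile=cert, keyfile=key)
    return ctx

# Usage (paths are hypothetical):
#   ctx = mtls_client_context("ca.pem", "client.pem", "client.key")
#   http.client.HTTPSConnection("internal.example", context=ctx)
```

The hard part isn't this code; it's issuing, distributing, and rotating the certificates, which is where most of the "moving parts" complexity lives.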
Also, you’ve gotten the secret off the client machine, but the attacker can still do anything the secret can do by using the proxy? Perhaps I’m missing something.
The attacker can currently do anything with the secret by interacting with the sites allowlisted for that secret, but they can't exfiltrate the secret, which is the goal of this security control. You can do better, if you like, by further locking down which endpoints they can call, but the wins past "log carefully and no exfiltration" get smaller and smaller, and at some point you're burning time that can be spent more productively on unrelated controls.
If you get what it's doing, you get it. :)