It's perfectly fine for you to say non-Kubernetes isn't your focus or on your 90-day roadmap :)
https://discuss.linuxcontainers.org/t/how-to-best-ask-questi...
Packet boundaries are not an issue because detection happens at the SSL write, where we have the full secret in the buffer along with its position. At rewrite time we can tell that the secret crosses 2 packets and rewrite it in 2 separate operations. We also have to update the TLS session hash at the end so we don't corrupt the TLS frame.
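A rough illustration of the splitting idea (not kloak's actual code; the buffer contents, offsets, and packet size are made-up examples): given the plaintext buffer at SSL-write time and the secret's position, a rewrite that crosses a packet boundary is clamped at the boundary and issued as separate operations.

```python
# Illustration only: splitting a secret rewrite across a packet boundary.
# Buffer, offsets, and PACKET_SIZE are invented for this sketch.

PACKET_SIZE = 16

def rewrite_secret(buf: bytearray, start: int, placeholder: bytes, real: bytes):
    """Replace `placeholder` (same length as `real`) starting at `start`,
    issuing one rewrite per packet the secret spans."""
    assert buf[start:start + len(placeholder)] == placeholder
    end = start + len(real)
    ops = []
    pos = start
    while pos < end:
        # Clamp each rewrite to the current packet's boundary.
        packet_end = (pos // PACKET_SIZE + 1) * PACKET_SIZE
        chunk_end = min(end, packet_end)
        buf[pos:chunk_end] = real[pos - start:chunk_end - start]
        ops.append((pos, chunk_end))
        pos = chunk_end
    return ops  # two (offset, end) pairs when the secret spans 2 packets

buf = bytearray(b"GET /x key=KLOAK_PLACEHOLDER! HTTP/1.1")
ops = rewrite_secret(buf, 11, b"KLOAK_PLACEHOLDER", b"real-secret-value")
```

Here the secret at offset 11 crosses the 16-byte boundary, so it is rewritten in two operations; the real implementation would additionally fix up the TLS session hash afterwards.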
The added benefit is that you can also manage things like API rate limits, and implement all sorts of cool monitoring and API-specific threat detection centrally. I don't know of a way to do this outside of cloud provider services, though.
Architecturally speaking, you have an environment that operates at a single level of trust with respect to the data it processes: anything inside it is unsecured, but every interaction with the outside passes through a gateway proxy that manages all of what I mentioned earlier, including secret management.
- send traffic to the proxy (either non-transparently, or using routes or even eBPF to redirect traffic to the proxy transparently)
- trust the proxy's certs, or use plain HTTP/TCP to the proxy
With kloak, the app doesn't need any modification and you avoid a single point of failure (the egress proxy). Each app has an independent eBPF program attached to it that can survive the control plane going down, and it doesn't need to trust any special certs or change the endpoint it sends traffic to.
But not everyone wants to, or can afford to, run a proxy for credential management. I started looking into this mostly to regulate API usage, especially burning through tokens when calling LLM APIs; the credential benefit only occurred to me afterwards. Great work with it. I have no idea how the eBPF magic makes it work, I'll have to find out.
Assuming I hijack a production pod, can't I just make an HTTP call to myself with the `kloak:...` secret and get back the real secret? Is there a way to validate the destination?
It's not perfect though; see Host Filtering: https://getkloak.io/docs/guides/host-filtering.html
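Conceptually, host filtering means the placeholder-to-secret rewrite only happens when the destination is on an allowlist, so a hijacked pod can't exfiltrate the secret to itself. A generic sketch of that check (the hostnames and function are invented for illustration, not kloak's actual configuration):

```python
# Toy destination check: only rewrite placeholders for allowlisted hosts.
# Hostnames here are examples; kloak's real host-filtering config differs.
from fnmatch import fnmatch

ALLOWED_HOSTS = ["api.openai.com", "*.googleapis.com"]

def should_rewrite(dest_host: str) -> bool:
    """Return True only if the destination matches an allowlisted pattern."""
    return any(fnmatch(dest_host, pattern) for pattern in ALLOWED_HOSTS)
```

With a check like this, a call to an attacker-controlled host would carry the placeholder through unmodified instead of the real secret.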
The few who agreed were rigorously testing our product and asked for code SBOMs before even a pilot.
Infisical's agent vault might be the best middle ground for this kind of setup, I feel sometimes.
Such a sick idea, and incredibly useful. Would be nice if it integrated directly with secrets managers, re: ESO (External Secrets Operator).
We are planning to integrate with external secrets operators, like AWS Secrets Manager or OpenBao/Vault, so users can benefit from end-to-end secrets protection: secret encryption/sealing at rest (through the secrets manager) and protection against in-memory exfiltration attacks with kloak.
The idea is to let the ESO handle the secret at rest and deliver it to Kloak, which then continues to do the kloaked-secret rewrite, so the secret is only ever available in unencrypted form inside Kloak. We can even push the concept further and do KMS decryption just in time, to reduce the window during which the secret is available.
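The "just in time" idea is simply to keep only the sealed form around and decrypt for the duration of the rewrite, wiping the plaintext immediately after. A minimal sketch under those assumptions (`kms_decrypt` is a stand-in for a real KMS call, and the XOR "cipher" is purely illustrative):

```python
# Sketch: decrypt just in time, zero the plaintext right after use.
# kms_decrypt is hypothetical; XOR stands in for a real KMS decryption.
from contextlib import contextmanager

def kms_decrypt(ciphertext: bytes) -> bytearray:
    return bytearray(b ^ 0x42 for b in ciphertext)

@contextmanager
def unsealed(ciphertext: bytes):
    """Yield the plaintext briefly, then overwrite it with zeros."""
    plain = kms_decrypt(ciphertext)
    try:
        yield plain
    finally:
        plain[:] = b"\x00" * len(plain)  # shrink the exposure window

sealed = bytes(b ^ 0x42 for b in b"s3cr3t")
with unsealed(sealed) as secret:
    assert bytes(secret) == b"s3cr3t"  # usable only inside this window
```

Outside the `with` block, only the sealed bytes remain in memory, which is the window-reduction being described.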
Also, does the replace op happen only for specific fields in HTTP, or for every matching string in the request? I can imagine the latter if you want to support non-standard authentication methods, though there's always the edge case where the secret-string placeholder is not being used as a secret and should not be replaced.
TEEs aim to protect a given workload from the host, preventing another workload on the same host from stealing secrets. Kloak's aim is to protect the secret from the workload itself, not from the host.
The main thing I wonder is how well supported it is in cloud environments? AKS/EKS/etc?
The main hurdle is that we can't rewrite secrets in any of the user's buffers, as this would defeat our threat model, and signing is usually done in user space.
Generally speaking, if you're running Kubernetes in GCP (likely via GKE), and you control how your applications retrieve their secrets, you're likely better off with a combination of Workload Identity Federation, tight IAM on Secret Manager, and a smart secrets-retrieval strategy, which likely involves lazy-loading secrets and retrying the load on a permission-denied error so it can handle secret rotation.
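The retrieval strategy described above can be as simple as caching the secret lazily and re-fetching once when the cached value is rejected. A minimal sketch (the `LazySecret` class and `fetch`/`api` callables are hypothetical stand-ins for a Secret Manager client and a downstream API):

```python
# Sketch: lazy-loaded secret with one retry on permission denied,
# which is enough to ride out a secret rotation.
class LazySecret:
    def __init__(self, fetch):
        self._fetch = fetch  # stand-in for a Secret Manager access call
        self._value = None

    def get(self) -> str:
        if self._value is None:  # lazy: fetched only on first use
            self._value = self._fetch()
        return self._value

    def call(self, api):
        try:
            return api(self.get())
        except PermissionError:
            # The secret was likely rotated: refetch and retry once.
            self._value = self._fetch()
            return api(self.get())
```

On rotation, the first call fails with a permission error, the fresh value is fetched, and the retry succeeds, with no restart or push mechanism required.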
For applications where that's not an option, the state-of-the-art has been ensuring etcd is actually encrypted (as opposed to the default Base64), and relying on Kubernetes Secrets, usually either mounted in the filesystem or passed to environment variables.
Both these approaches have weaknesses, since the secrets are immediately available to all processes in the container.
OP seems to solve that by never exposing the secrets to the application, by sitting between the application and the service and replacing the secret on the wire, outside of the application's reach.
Please have a look at the demo if you can; there is a webhook that abstracts changing the secret resource name for you. You just annotate the secret resource, and the kloak admission controller will rewrite the secrets of your deployment resource after that. This means the app never actually sees the secret (accidentally or not).