I hope someone can give me an ELI5.
> (i.e. if someone were to gain access to a running Kubernetes container)
right? Since those would still be secrets available in the env.
I get that if someone has access to read your env vars, it's a foregone conclusion already (about how compromised you are).
However, IIUC, part of the point of reading secrets into memory (like with a Secrets Manager) is to eliminate having to keep secrets around as env vars/secret files in the runtime?
Neither approach is bulletproof, but they both beat the alternative (writing secrets and environment config directly into the source code), and they're both part of defense in depth.
Defense in depth is hard to explain to a five year old, so think of candy. You’re five and you’re obsessed with candy. If I don’t want you to eat pounds of it every single day, I’ll likely do different things to stop you.
1.) I’ll give you a reasonable amount of candy.
2.) I’ll explain that if you eat too much candy, you’ll face health consequences like tooth decay or childhood obesity.
3.) I’ll put the candy somewhere both out of sight and out of reach.
Software security is like that too. Instead of relying on just one method, you do a number of different things. On their own, few of them are very effective. But when you combine them, you can end up with a reasonably secure system.
In the case of vaults, it's just a slightly safer tradeoff with its own problems. We've already established that it's bad to write secrets and environment configuration into source code. One way around that is to put secrets in a .env file, but then distributing that file becomes the weak link. Maybe you Slack them around, or email them, or maybe you write all the secrets on a whiteboard in your office? A vault has plenty of flaws, but it's better than writing them on a whiteboard. In some threat models, it's better than Slack or email.
It adds depth but it’s far from perfect.
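To make the tradeoff from the question concrete, here's a minimal sketch contrasting the two approaches. The `fetch_secret_from_vault` function is a hypothetical stand-in for a real vault client call (which would authenticate and fetch over TLS); the point is just where the secret ends up living:

```python
import os

def fetch_secret_from_vault(name):
    # Hypothetical stand-in for a real secrets-manager call.
    # A real client would authenticate with a short-lived token
    # and fetch the value over TLS.
    return {"db_password": "s3cret"}[name]

# Env-var approach: the secret sits in the process environment for the
# whole lifetime of the process, readable by anything that can inspect
# os.environ (or /proc/<pid>/environ on Linux).
os.environ["DB_PASSWORD"] = "s3cret"

# In-memory approach: the secret is fetched at startup and held only in
# a local variable; nothing is written to the environment or to disk.
db_password = fetch_secret_from_vault("db_password")

# The fetched secret never appears in the environment.
assert "db_password" not in os.environ
```

Of course, as the comments above note, anyone who can read your process memory has you either way; the in-memory version just shrinks the number of places the secret is lying around.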