This article spends a good deal of time conflating two things: putting stuff in .env and using environment variables.
The application-parsed .env file is one of the most poorly thought-through ideas that has taken hold in modern application development. It takes something you can do in literally a couple lines of shell (as a container entrypoint) and adds a bunch of complexity for something that is actually just worse.
In local dev scenarios, app-parsed .env files suck because you often end up with some kind of dev-specific secret that you don't want committed to the app repo. In my experience this means developers figure out how to pass a .env file around.
If you use an actual shell instead, the local-dev .env can shell out to something like the AWS CLI to get secrets from parameter store. Or you could grab them from Hashicorp Vault if you run that.
And because a shell fetches it at run time, secret updates are seamless and properly access-controlled in one spot.
In proper deployment scenarios, .env sucks because your deployment system (container orchestrator, Lambda, etc) will need to set those values appropriately for their current environment anyway. And by having a .env file the app loads, now you have two places for configuration.
Applications simply should not have any involvement in setting values for their own environment variables. They are typically used for core infrastructure-level configuration. The source of truth for this is probably going to be available via something like Terraform. So the application should ultimately inherit its configuration through Terraform.
Additionally, this article is simply wrong on environment variables being readable by any user on a Linux system. On Linux, a process's environment can be read by the superuser and the user who owns the process. That's it.
export DB_PASSWORD="op://app-prod/db/password"
Calling the script with `op run scriptname` replaces the secret path with the actual secret after authentication during runtime.
This way you can commit the file but people still can use their own passwords locally without saving them in plaintext.
Inspiration here: https://gist.github.com/bmhatfield/f613c10e360b4f27033761bbe...
Then you can use it like this:
export OPENAI_API_KEY=$(keychain-environment-variable OPENAI_API_KEY)
$ export DB_PASSWORD=foo
$ sh
sh-5.1$ cat /proc/self/environ
SHELL=/bin/mksh DB_PASSWORD=foo

It talks about, say, restarting servers (implying downtime) and going to each one of them and updating them one by one.
If in 2024 you are still treating servers as pets, you are still subject to outages if a single machine dies, and you are still manually configuring them, you are doing everything wrong.
And that's before we consider that most of the advice would not apply or have to be done very differently if the app was running in, say, Kubernetes.
> and here's how to do it better
And then basically just says go use a vendor solution. Cool.
I know vendors' secret stores are better.
But what do I do that isn’t vendor-locked and is remotely as simple as environment variables?
While I'm not the biggest fan of where we are right now, it's better than the complexity of integrating with a secrets engine.
What I really think is ideal is to dump individual secrets into files and only load the decryption key into memory. That way, at runtime, secrets can be read and decrypted without polluting ENV and without them lingering around.
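The pattern above can be sketched in a few lines. This is a hypothetical illustration only: the XOR-with-key "cipher" is a placeholder to show the data flow, and real code would use an authenticated cipher such as AES-GCM (e.g. via the `cryptography` package); the class and file names are made up.

```python
import base64
from pathlib import Path

# Sketch: secrets sit encrypted in individual files, only the decryption key
# lives in process memory, and values are decrypted on demand instead of
# being exported into ENV.
# NOTE: the XOR "cipher" below is a placeholder purely to show the flow;
# real code would use an authenticated cipher such as AES-GCM.

class FileSecretStore:
    def __init__(self, directory: str, key: bytes):
        self._dir = Path(directory)
        self._key = key  # the only secret material held in memory

    def _xor(self, data: bytes) -> bytes:
        keystream = (self._key * (len(data) // len(self._key) + 1))[: len(data)]
        return bytes(a ^ b for a, b in zip(data, keystream))

    def encrypt(self, plaintext: str) -> bytes:
        # what a provisioning step would write to disk
        return base64.b64encode(self._xor(plaintext.encode()))

    def get(self, name: str) -> str:
        # read and decrypt at call time, so plaintext never lingers in ENV
        return self._xor(base64.b64decode((self._dir / name).read_bytes())).decode()
```

The point of the shape is that `os.environ` never holds the plaintext; a leaked environment dump only reveals where the files are.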
Another integration that would be great, if it existed, is for more web frameworks (the perspective I'm seeing this problem from) to integrate with the Linux memfd_secret API. That would work great with implementations such as php-fcgi, but not so much with green-threaded systems.
We were just moving into AWS secrets when I left. The API was simple enough, other than the fact that now everyone had to use aws login instead of just the few of us working in Terraform, and the terribly chosen session timeout, though I don't know whether our people chose that or AWS did.
But the management of the namespaces/roles for secret visibility looked Stone Age to me. I did not envy the OPs team the problems they were volunteering to deal with.
It feels like the same brand of tedious bookkeeping that led to the Arc security hole.
For many companies this is a myth.
Once you reach a critical mass of complexity and scale, you figure out that simple database seed files you can bundle into an app repo are not sufficient to test your application.
So what solution do people look to? Taking data from prod and feeding it into lower environments. Depending on what regulation you are subject to, some amount of data scrubbing may be required. But even if it isn't, leaking users' data is a bad look.
Is data scrubbing easy to do perfectly? The answer is 100% unequivocally no.
You mentioned intranet too, and the thing about that is making dev services available externally is a common enough problem that Ngrok is financially viable.
TL;DR: the assertion that dev secrets are low value is often not true.
I now understand why some developers tell me they think using environment variables for secrets is a bad idea.
I don’t think this is true. You’d rotate all secrets in the store, as they could be accessed/compromised.
Things like Vault, which was suggested, still require you to pass the Vault token in to your app somehow. And even then, if your application does not have direct Vault support, you will still be using Vault to supply secrets via environment variables; it's even the recommended way with Nomad and their template system.
I really dislike this sort of article, because it has a catchy phrase, "Do not use secrets in environment variables", and that is all that will be remembered. And the next thing you know, you will be at a company submitting a PR and some guy will say "Do not use secrets in environment variables", and then advise you to pass them as arguments on the command line (this happened to me).
Environment variables are, today, the safest way to pass secrets to a program.
.env files ARE NOT environment variables; they are files. A better title and write-up would be "Do not store secrets in files". Once you do that, all the weird problems described, with the exception of printing them in logs, go away. Then you need a new article: "Do not print secrets in your program".
But that is all moot, because you should already be filtering out secrets by configuring your logging system to do so. I myself write wrappers and log systems that handle filtering secrets out of logs within the same context as the application I run. It's super simple, it's fast (if you know what a trie is), and you can print secrets worry-free.
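The redaction idea can be sketched with Python's standard `logging` module. A real implementation over many secrets would use a trie (Aho-Corasick) for a single-pass scan, as the comment suggests; a compiled alternation regex approximates that here. The class name and `[REDACTED]` marker are just illustrative choices.

```python
import logging
import re

# Sketch of "filter secrets in your own logging layer": a logging.Filter
# that rewrites every record before any handler sees it. Assumes a
# non-empty list of secret strings.

class SecretRedactor(logging.Filter):
    def __init__(self, secrets):
        super().__init__()
        # longest-first so overlapping secrets redact cleanly
        pattern = "|".join(re.escape(s) for s in sorted(secrets, key=len, reverse=True))
        self._rx = re.compile(pattern)

    def filter(self, record: logging.LogRecord) -> bool:
        # format the message first, then scrub it, then drop the raw args
        record.msg = self._rx.sub("[REDACTED]", record.getMessage())
        record.args = None
        return True
```

Attached via `logger.addFilter(SecretRedactor([...]))`, it runs before handlers, so even a careless `logger.info("pw is %s", password)` never reaches disk in plaintext.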
Edit: This article in fact does more damage to security as a whole, mostly because of the conflation and the 99/1 problem-to-solution ratio. This entire domain will now be blocked on all networks I have control over.
Indeed. But ideally the "somehow" that you pass that in is not an environment variable. Maybe something like a secure in-memory file that only your container/systemd service/whatever has access to that's injected by something like kubernetes/docker secrets, or systemd-creds, etc.
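A small sketch of reading such an injected file from the application side. The `/run/secrets/<name>` path is the Docker secrets convention, and `$CREDENTIALS_DIRECTORY` is what systemd's `LoadCredential=` exposes; whether either applies depends on your deployment, so treat the paths as assumptions.

```python
import os
from pathlib import Path

# Read a secret from a file injected by the platform (docker secrets,
# systemd-creds, a Kubernetes secret volume, ...) instead of from ENV.

def read_secret(name: str) -> str:
    # systemd sets $CREDENTIALS_DIRECTORY for units using LoadCredential=;
    # /run/secrets is the Docker secrets mount point.
    cred_dir = os.environ.get("CREDENTIALS_DIRECTORY", "/run/secrets")
    return Path(cred_dir, name).read_text().strip()
```

The environment then only tells the process where to look, never the secret itself.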
Consul template can render a config in memory. That is safer. But my gripe is the phrase. The phrase leads to worse situations.
Not much use to an attacker if the token / approle is restricted to a specific IP / EC2 instance id.
Auditable too
$ curl http://your-website.com/public/../../../../proc/12345/environ
If your server is serving up your whole filesystem, you likely have a lot of big problems.

> Similar examples that involve passing paths as e.g. query strings?

When you've written an application that takes a real file path as a parameter and haven't done any of several things that can prevent that from pointing to files you don't want it to.
http://example.com/viewPost.php?post=../../../whatever
(where the server-side code has the bug, not the web server configuration itself)

Famously, IIS had some bugs like this. Not surprised that PHP has problems. What a clown car.
You'd be scared how often this works (Though not in the exact method you described). It might not even be a path traversal vulnerability [0] that lets you read files, it could be a shell injection [1] that allows you to `cat` the file.
[0] https://en.wikipedia.org/wiki/Directory_traversal_attack
[1] https://en.wikipedia.org/wiki/Code_injection#Shell_injection
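The standard defense against the traversal variant is short enough to sketch: resolve the requested path and refuse it if it lands outside the allowed base directory. A minimal version, with the function name being an arbitrary choice:

```python
from pathlib import Path

# Reject user-supplied paths that escape the served directory
# (e.g. ?post=../../../etc/passwd).

def safe_resolve(base_dir: str, user_path: str) -> Path:
    base = Path(base_dir).resolve()
    target = (base / user_path).resolve()  # collapses any ../ segments
    if target != base and base not in target.parents:
        raise ValueError("path escapes base directory")
    return target
```

Resolving before comparing also neutralizes symlink tricks inside the base directory, which a plain string-prefix check would miss.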
def canIStoreMySecretsHere(location):
    return False
Basically, for any location you might store a secret, a hacker might get access to it. Therefore, it is not safe there.

You might think I'm being sarcastic, but... perhaps less than you'd think. It has often seemed to me that secret management is a game of temporal arbitrage, where you stick them in some new sort of place and just pretend that that new place must be secure, until you realize some time later it is not, and then you stick it in a new place, a new "secrets manager" that is safe, until that gets popped, then you stick it somewhere else....
(Note this is about symmetric secrets, and things like passwords. Asymmetric things admit more interesting possibilities of bundling some computation with the storage with things like secure enclaves. One can debate the physical security of a secure enclave, but assuming its software is correctly implemented, a secret store where there simply is no API in theory or in practice to extract the secret back out is an actual improvement in secret storage that I am not sarcastic about.)
They keep people from walking off with private keys, but they don't stop people from borrowing them. HSMs are great for SSL keys but not so great for code or cert signing.
The arguments here are mostly the same logical fallacy that makes most DRM a joke: In order for the application to function (the end user to view the movie) they must be able to use the secret (the decryption key(s) of the movie). You can play as many games of indirection as you want, but it's all basically security by obscurity.
If your application is required to fetch its secrets from AWS or Google, then someone with the type of access needed to dump the environment vars (i.e. root access on the box) can likely also modify your application code, for instance, to dump those fetched secrets to a file.
The answer _feels_ like it should be no: zero-trust-by-default, etc. but you're fooling yourself - a compromised dependency isn't _just_ going to look at process.env. It's going to be installing a backdoor and having an agent login and poke around. It's going to be netcatting for 3306 and finding out where the credentials are eventually. Security thru obscurity is no security at all.
Last thought, Kubernetes' `envFrom` is such a salve of simplicity in this day and age.
The dev community needs to find a better default than .env files for secrets. While there are plenty of alternatives, they generally all require knowledge of some third-party system, which most people, for many reasons, do not have the time or interest to learn, plus some third-party secret to unlock the rest.
We need better default abstractions around secrets management. The authentication step to fetch secrets should be pushed to something ephemeral, probably biometrics. Ideally, devs should almost never interact with secrets in any way. They should use secure and convenient MFA methods to authN/Z their access to services, and secrets management happens out of sight. And this should all happen automatically with default tooling.
It is fairly easy to authenticate between services without secrets in the context of a single platform like AWS using IAM policies and roles, but I think we need to solve the more general case for secrets management abstraction across platforms and services. OSs, browsers, and dev tooling are becoming more mature with respect to auth methods. Secrets management should be mostly the domain of a select group of people, like any number of other complex computer systems details.
This is simply bad advice, let's get this off the frontpage.
EDIT: Or it's just small time developers who don't care about security and ship whatever works.
I will continue to use .envs. The alternatives involve much more friction.
The thing is, in a properly run engineering organization, zero developers should have any production secrets to put into .env files in the first place. Development secrets should always be either dummy values created for this purpose (e.g. one's local development database password should be nothing or 1234 or something because you need no 'security' on a local DB that listens on 127.0.0.1 only), or the keys to entirely separate testing environments, developer sandboxes, and free-tier accounts on your 3rd party dependencies (e.g. an identity provider, Salesforce, etc.)
I'd be uncomfortable with any .env files anywhere having creds in them that I would be upset about someone accidentally posting even in our public repositories. Sure, I'd rotate them just because I don't want someone poking around our local developer Auth0 account or local developer OpenAI account, but there should be zero, or close to zero, value that's possible to exploit using those secrets.
At my company we've settled on dotenvx to manage these .env files, which has the neat feature of asymmetric encryption so that 99.9% of the time devs don't even need the private key for the production one because adding a new value for a new thing can be done with the public key. Values that are actually secret are encrypted in the env file, and the correct private key for the environment is passed to the app as an env var, which it uses to decrypt any encrypted values. As others have pointed out, many claims in the article about how anyone can access any application's environment are inaccurate.
https://www.freedesktop.org/software/systemd/man/latest/syst...
> Note that environment variables are not suitable for passing secrets (such as passwords, key material, …) to service processes. Environment variables set for a unit are exposed to unprivileged clients via D-Bus IPC, and generally not understood as being data that requires protection.
> Moreover, environment variables are propagated down the process tree, including across security boundaries (such as setuid/setgid executables), and hence might leak to processes that should not have access to the secret data.
common secrets used on server side - `JWT_SECRET`, `DATABASE_PASSWORD`, `PGPASSWORD`, `AWS_SECRET_TOKEN` etc.,
Being a long-time developer, this breaks the standard for backend apps, which mostly follow the 12 Factor App[1]. This approach introduces a new dependency for fetching secrets. I see all new open-source projects using "paid" or "hosted" solutions. It is no longer easy/simple to host a full open-source app without external dependencies. (I understand -- things are getting complicated, with S3 for storage, etc.)
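For contrast, the 12-factor approach being defended here stays this simple: a plain env read, optionally with the `<NAME>_FILE` fallback convention that many official Docker images use so the same code also works with file-injected secrets. The helper name is illustrative.

```python
import os
from pathlib import Path

# 12-factor-style config: read from the environment, with an optional
# <NAME>_FILE indirection (a common Docker image convention) so no hosted
# secrets service is needed.

def config(name: str, default=None):
    if name in os.environ:
        return os.environ[name]
    file_var = os.environ.get(f"{name}_FILE")
    if file_var:
        return Path(file_var).read_text().strip()
    return default
```

No SDK, no network call, no extra dependency to self-host.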
Instead, expose all your secrets via a public API with IP filtering, then give the credentials to this service to your app - as an environment variable - and voila!
This just seems like increasing complexity in a part of your system that should be as simple and non-dynamic as humanly possible, for the upside of.. much larger attack surface?
We inject secrets, as env vars, in each Deployment. No runtime access to any secrets store, gives the ability to generate new secrets for each deployment, and minimal complexity for such a critical aspect of the system.
But we never had a good system for sandboxes, which meant another way the local and deployment differed.
They moved to a secret store instead. I don’t know if that’s better.
Of all the examples that could have been 'called out', this is the least practical one. Jumping from the problem statement straight to hosting with a third party provider completely ignores the huge risk that comes with it. Using environment variables is risky so just give your secrets to some third party... which then provides environment variables anyway. This entire section ought to be dropped, it almost reads like a sponsored bit and there are much better and more widely used solutions used such as sops and vault.
It has nothing to do with environment variables… It is just an ephemeral way to inject variables into the process, if you do it right…
If your secret store is a set of conventions which keeps access confined to the application environment, no. Things like Ansible Vault, AWS/Azure/GCP/etc. secrets using role-based, etc. have the nice property that they are isolated from unrelated apps so an attacker can’t breach one thing and move laterally across all of your applications. You have to protect that core infrastructure anyway so there’s an argument for not doing so more times than necessary.
It loads env files and calls Hashicorp Vault if the value is a secret.
I find it pretty neat to have an env file that describes all environment variables.
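A hypothetical sketch of such a loader: the env file lists every variable, and values carrying a `vault:` prefix are resolved through an injected callable (in real use, a Vault client; here it is a parameter so the flow is visible without Vault). The prefix convention and function name are assumptions, not the commenter's actual tool.

```python
# Parse a .env-style text where plain values pass through and values
# marked "vault:<path>" are fetched via the supplied resolver.

def load_env(text: str, resolve_secret):
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, _, value = line.partition("=")
        if value.startswith("vault:"):
            value = resolve_secret(value[len("vault:"):])
        env[key.strip()] = value
    return env
```

The nice property is the one the comment points at: the file documents every variable the app needs, while the actual secret material still lives in the secrets engine.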
If you allow these mistakes to be possible, they are inevitable. If you take basic precautions, you'll probably be fine.
I'd rather take a well-curated and trimmed down .env over a poorly-configured secrets manager that gives away the entire farm when the single secret leaks. Security isn't a single thing nor bolstered by switching a single method of how you store your secrets.
The problem is failing to take precautions that prevent leaks from happening, not how you are managing your secrets. If your threat model begins at or is imminently "the attacker is logged in as root", just post your stuff on a public bucket and get it over with.
as opposed to Nike, the Ancient Greek goddess of Victory, who contributes essentially nothing back to the OSS community
Didn't expect that from Nike.