I’ve never liked making secrets available on the filesystem. Lots of security vulnerabilities have turned up over the years that let an attacker read an arbitrary file. If retrieving secrets is a completely different API from normal file IO (e.g. inject a Unix domain socket into each container, and the software running on that container sends a request to that socket to get secrets), that is much less likely to happen.
Secrets have to be somewhere. Environment variables are not a good place for them, but if you can't trust your filesystem to be secure, you're already screwed. There's nowhere else to go. The only remaining place is memory, and it's the same story.
If you can't trust memory isolation, you're screwed.
As a counterintuitive example from a former insider: virtually no one is storing secrets for financial software on an HSM. Almost no one does it, period.
There’s a whole class of security vulnerabilities that let you read arbitrary files on the filesystem. So if you end up with one of those vulnerabilities, and your secret is in a file, then the vulnerability lets the attacker read the secret. And on Linux, if you have such a vulnerability, you can use it to read /proc/PID/environ and get the environment variables, hence getting secrets in environment variables too.
However, the same isn’t necessarily true for memory. /proc/PID/mem isn’t an ordinary file, and naive approaches to reading it fail. You normally read a file starting at position 0; reading /proc/PID/mem requires first seeking to a mapped address (which you can get from /proc/PID/maps); if you just open the file and start reading it from the start, you’ll be trying to read the unmapped zero page, and you’ll get an IO error. Many (I suspect the majority) of arbitrary-file read vulnerabilities only let you read from the start of the file and won’t let you seek past the initial unreadable portion, so they won’t let you read /proc/PID/mem.
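That failure mode is easy to demonstrate on Linux. The sketch below targets our own process via /proc/self, since reading another process's mem additionally requires ptrace permission; the logic is the same for another PID:

```python
# Sketch: why naive reads of /proc/PID/mem fail while seek-then-read works.

def read_mapped_bytes(pid="self", count=16):
    # Find the start address of the first readable mapping.
    with open(f"/proc/{pid}/maps") as maps:
        for line in maps:
            addr_range, perms = line.split()[:2]
            if perms.startswith("r"):
                start = int(addr_range.split("-")[0], 16)
                break
    with open(f"/proc/{pid}/mem", "rb", buffering=0) as mem:
        mem.seek(start)        # must first seek to a mapped address
        return mem.read(count)

assert len(read_mapped_bytes()) == 16

# Reading from position 0 hits the unmapped zero page and raises an I/O error.
try:
    with open("/proc/self/mem", "rb", buffering=0) as mem:
        mem.read(16)
    naive_read_failed = False
except OSError:
    naive_read_failed = True
assert naive_read_failed
```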
Additionally, there are hardening features to lock down access to /proc/PID/mem, such as kernel.yama.ptrace_scope or prctl(PR_SET_DUMPABLE). That kind of hardening can interfere with debugging, but one option is to leave it on most of the time and only temporarily disable it when you have an issue to diagnose.
Also, memfd_secret supports allocating extra-special memory for secret storage, which the kernel can’t read, so it shouldn’t be accessible via /proc/PID/mem.
It strikes me that those environments might be particularly prone to corporate inertia, e.g. "the current way passed the security audit, don't change it or we need to requalify".
It's possibly also harder to rely on an HSM when your software is in a container? (I'm guessing here though)
Unless I'm missing something, there are three scenarios where this comes up:
1. You are using a .env file to store secrets that will then be passed to the program through env vars. There's literally no difference in this case, you end up storing secrets in the FS anyway.
2. You are manually setting an env var with the secret when launching a program, e.g. SECRET=foo ./bar. The secret can still be easily obtained by inspecting /proc/PID/environ. It can't be read by other users, but neither can the files in your user's directory (.env/secrets.json/whatever).
3. A program obtains the secret via some other means (network, user input, etc). You can still access /proc/PID/mem and extract the secret from process memory.
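Point 2 above is quick to demonstrate on Linux: the environment captured at exec time is readable from /proc/PID/environ as NUL-separated entries. A sketch using a throwaway child process:

```python
import os
import subprocess
import sys

# Launch a child the way "SECRET=foo ./bar" would, then read its
# environment back out of /proc/<pid>/environ.
child = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(30)"],
    env={**os.environ, "SECRET": "foo"},
)
try:
    with open(f"/proc/{child.pid}/environ", "rb") as f:
        entries = f.read().split(b"\0")   # entries are NUL-separated
finally:
    child.kill()

assert b"SECRET=foo" in entries
```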
So I'm assuming that what people really want is passing the secret to a program and having that secret not be readable by anything other than that program. The proper way to do this is using some OS-provided mechanism, like memfd_secret in Linux. The program can ask for the secret on startup via stdin, then store that secret in the special memory region designed for storing secrets.
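memfd_secret(2) has no Python stdlib wrapper, so here is a hedged approximation of that startup flow using the older mitigations instead: mlock to keep the page out of swap and MADV_DONTDUMP to keep it out of core dumps. The buffer handling is illustrative, not a vetted secure-memory implementation:

```python
import ctypes
import mmap

libc = ctypes.CDLL("libc.so.6", use_errno=True)
MADV_DONTDUMP = 16  # value from <sys/mman.h> on Linux

def secret_buffer(size=mmap.PAGESIZE):
    # Anonymous private mapping; mmap guarantees page alignment.
    buf = mmap.mmap(-1, size)
    addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
    # Best effort: mlock can fail under a tight RLIMIT_MEMLOCK.
    libc.mlock(ctypes.c_void_p(addr), ctypes.c_size_t(size))
    libc.madvise(ctypes.c_void_p(addr), ctypes.c_size_t(size), MADV_DONTDUMP)
    return buf

buf = secret_buffer()
secret = b"hunter2"            # in real use: sys.stdin.buffer.readline()
buf[:len(secret)] = secret     # the secret now lives only in the locked page
assert buf[:len(secret)] == b"hunter2"
```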
If we are running under something like K8S or Docker, then I think there should be some component that runs on the host, provides access to secrets over a Unix domain socket, and then we mount that socket into each container. (The reason I say a Unix domain socket is that the component can then use SCM_CREDENTIALS/SO_PEERCRED/etc to authenticate the containers.) I’d also suggest not using HTTP, to reduce the potential impact of any SSRF vulnerabilities (although maybe that’s less of a risk given many HTTP clients don’t work with Unix domain sockets, or at least not without special config). (Can we pass memfd_secret using SCM_RIGHTS?)
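The authentication step is the part worth sketching: on Linux the broker can read the client's pid/uid/gid straight from the kernel with SO_PEERCRED, so a workload can't spoof its identity. Here socketpair stands in for the mounted socket:

```python
import os
import socket
import struct

# A connected AF_UNIX pair; in the real setup the broker would accept()
# connections on a socket bind-mounted into each container.
broker_side, client_side = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# struct ucred is three 32-bit ints: pid, uid, gid.
ucred = broker_side.getsockopt(
    socket.SOL_SOCKET, socket.SO_PEERCRED, struct.calcsize("3i"))
pid, uid, gid = struct.unpack("3i", ucred)

# The broker would now map (pid, uid, gid) to an allowed set of secrets.
assert (pid, uid, gid) == (os.getpid(), os.getuid(), os.getgid())
```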
For desktop and native mobile, I think the best practice is to use the platform secret store (Keychain on macOS/iOS, Freedesktop Secret Service for desktop Linux, Android Keystore, Windows Credential Manager API, etc). But for server-side apps, those APIs generally aren’t available (Windows excepted). Server-side Linux often lacks desktop Linux components such as Freedesktop APIs (and even when they’re present, they aren’t the best fit for server-side use cases)
You have a .env file that is in the same directory as your code and you just copy it to env vars at some point. This does not even meet the security principles that dotenv is supposed to implement!
I think people are blindly following the advice "put secrets in env vars" without understanding that its point is to keep secrets out of files your app can read: a vulnerability or misconfiguration that lets people read those files leaks the secrets.
What you can do is have environment vars set outside your code, preferably by another user. You do it in your init system or process supervisor. Someone mentioned passing them in from outside a docker container in another comment.
On the other hand, using .env files can leak in different ways, like a developer mistakenly committing secrets to git or making this file available to the world wide web.
Building a lot of assumptions into your containers about where and how they are being deployed kind of defeats the point of using containers. You should inject configuration, including secrets, from the outside. The right time to access secret stores is just before you start the container as part of the deploy process or vm startup in cloud environments. And then you use environment variables to pass the information onto the container.
Of course that does make some assumptions about the environment where you run your containers not being compromised. But then if that assumption breaks you are in big trouble anyway.
Of course this tool is designed for developer machines and for that it seems useful. But I hope to never find this in a production environment.
So how do you rotate secrets without bouncing app servers..?!
I like secret stores but only when the value of something regularly changes in a way that redeploying becomes unacceptable.
Then I killed myself and was reborn. Now I just use an env file.
Environment vars propagate from process to process _by design_ and generally last the entire process(es) lifetime. They are observable from many os tools unless you've hardened your config and they will appear in core files etc. Secrets imply scope and lifetime - so env variables feel very at odds. Alternatively Env variables are nearly perfect for config for the same reasons that they are concerning for secrets.
Tl/Dr; in low stakes environments the fact that secrets are a special type of config means you will see it being used with env vars which are great for most configs but are poor for secrets. And frankly if you can stomach the risks, it is not that bad.
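The by-design propagation described above is one observable step (a portable sketch):

```python
import os
import subprocess
import sys

# Anything in the parent's environment is handed to every child it spawns.
env = {**os.environ, "DEMO_SECRET": "hunter2"}
out = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['DEMO_SECRET'])"],
    env=env, capture_output=True, text=True,
)
assert out.stdout.strip() == "hunter2"
```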
Storing secrets on the filesystem - you immediately need to answer where on the filesystem and how to restrict access (and whether your rules are being followed). Is your media encrypted at rest? Do you have SELinux configured? Are you sure the secrets are cleaned up after you no longer need them? Retrieving secrets or elevated perms via sockets / local IPC has very similar problems (but perhaps at this point you're compartmentalizing all the secrets into a centralized, local point).
A step beyond this are secrets that are locked behind cloud key management APIs or systems like spiffe/spire. At this point you still have to tackle workload identity, which is also a very nuanced problem.
With secrets, every solution has challenges, and the only clear answer is to have a threat model and design an architecture and appropriate mitigations that let you feel comfortable while acknowledging the cost, user, developer, and operator experience balancing act.
It handles task running (wipe local test db, run linting scripts, etc), environment variables and 'virtual environments', as well as replacing stuff like asdf, nvm, pyenv and rbenv.
Still somewhat early days, tasks are experimental. But it looks very promising and the stuff I've tried so far (tasks) works really well.
I’ve recently switched from asdf for managing language & tool versions, and the ergonomics are much nicer (e.g. one command vs. having to manually install plugins, more logical commands). It’s also noticeably faster.
Regarding the env vars features, a couple of relevant Mise issues around people trying to integrate env var secrets using SOPS, 1Password, etc.
I haven't felt the need to use it as a task runner yet, but that's probably because I'm used to having a bunch of shell and Python scripts in a `scripts` folder.
If now someone has to read docs to figure out how to configure the app, I’d rather have them read docs for some other safer and more powerful configuration scheme.
As an example, once I changed a .env file and unit tests started failing. Digging deeper into it, lots of code was checking for .env to load its configuration, and would break without it. I'd prefer this not to happen, as our tests were executing based on outside, non-version-controlled configuration.
After removing dotenv as a library and using it only as a command, we were able to separate configuration and logic, and not have .env files affecting our unit tests - we simply ran the application with dotenv command, and the unit tests without.
Sops also integrates easily with AWS and other existing key management solutions, so that you can use your existing IAM controls on keys.
I mentioned in another comment, but I've been using it over five years at two jobs and have found it to be great.
I'd prefer an integration between dotenvx and sops where dotenvx handles the UX of public env and injection, while leveraging sops for secret management and retrieval. Additionally, being able to have multiple keys for different actors is important.
Having a single `.env.keys` file feels risky and error prone. dotenvx encourages adding your various env files, such as `.env.production`, to vcs, and you're one simple mistake away from committing your keyfile and having a bad day.
If sops is not to be integrated, dotenvx could take some inspiration where the main key is encrypted in the secrets file itself, and you can define multiple age key recipients, each of which can then decrypt the main key.
One reason I can think of is that normally with secrets I actually don't keep any copies of them. I just set them in whatever secret manager my cloud environment uses and never touch them again unless I need to rotate them. Meaning there is no way to accidentally expose them other than by the secret vault being hacked or my environment being hacked.
With this approach if someone gets access to the encryption key all secrets are exposed.
1. Create secret v1
2. Code v1
3. Deploy
4. Secret v2 (rotation)
5. Code v2
6. Deploy
7. Oops, need to roll back to v1 (from step 2)
8. Outage, because the secrets in step 2 are not the secrets from step 4.

Typically, developers can’t change production secrets in vaults and need to follow some other protocols.
Encrypted secrets mean you deploy everything alongside the secrets.
The developer experience is great, but the biggest issues I have faced while using Kubeseal were
1. Developers HAVE the secret in order to encrypt it. That's not ideal, as they can then use these secrets in production or leak them.
2. Changing the secret encryption key means re-encrypting everything.
3. People don’t understand the concept.
It’s a learning curve, but I think it’s best to just bite the bullet and use a vault rather than trusting developers to know and manage secrets properly.
git-crypt is easy, the master key doesn't rotate, so don't leak it. (Secret encryption key rotation is kind of useless; it's nice that if you leak an old key, newer secrets aren't leaked, but whether that saves you any work depends on your underlying secret rotation policy. I have tended to do them in bulk in the past.)
On my last project we did disaster recovery exercises every 6 months, sometimes with the master key intentionally lost, and it wasn't that big of a deal. Restoring your infra-as-code involves creating the secret encryption service manually, though, which is kind of a pain, but not like days of downtime pain or anything. Of course, if the secrets encrypted your database or something like that, then losing the master key equals losing all your data. Hopefully your database backup has some solution for that problem.
If the vault is password protected, aren't you just adding one more indirection and nothing more? How is that helpful, since now I have to write the vault password in clear-text somewhere such that my application can read the env file from the vault?
At no point does the application have access to the vault itself, and access to read the vault is guarded by IAM role permissions.
I hope someone can do me a ELI5.
Instead, they both beat the alternative (which is writing secrets and environmental config directly in the source code). And they’re both part of defending in depth.
Defense in depth is hard to explain to a five year old, so think of candy. You’re five and you’re obsessed with candy. If I don’t want you to eat pounds of it every single day, I’ll likely do different things to stop you.
1.) I’ll give you a reasonable amount of candy.
2.) I’ll explain that if you eat too much candy, you’ll face health consequences like tooth decay or childhood obesity.
3.) I’ll put the candy somewhere both out of sight and out of reach.
Software security is like that too. Instead of relying upon just one method, you’ll do a number of different things. On their own, few of them are really very useful. But when you combine them all together, you can end up with a reasonably secure system.
In the case of vaults, it’s just a slightly safer tradeoff with its own problems. We’ve already established that it’s bad to write secrets and environmental configuration info in source code. One way around that is to put secrets in a .env file but then distributing that file becomes the weak link. Maybe you Slack them around, or email them or maybe you write all the secrets on a whiteboard in your office? A vault has a lot of flaws, but it’s better than writing them on a whiteboard. In some threat models, it’s better than Slack or email.
It adds depth but it’s far from perfect.
This is mildly complicated, but the alternative is storing config in a configuration server somewhere, which comes with its own can of worms.
[local]
API_KEY=local-key
API_SECRET=local-secret
DB=postgresql://username:password@localhost:5432/database_name
[production]
API_KEY=prod-key
API_SECRET=prod-secret
DB=postgresql://username:password@prod-db:5432/database_name
[staging]
API_KEY=stg-key
API_SECRET=stg-secret
DB=$(production.DB)
It makes it easier to update all env at once, compare, and share.
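A hedged sketch of how the $(section.KEY) references above could be resolved; the syntax mirrors the example, and this resolver is illustrative, not an existing tool:

```python
import configparser
import re

RAW = """
[production]
DB = postgresql://user:pw@prod-db:5432/db

[staging]
API_KEY = stg-key
DB = $(production.DB)
"""

# interpolation=None so configparser leaves $(...) references untouched.
cp = configparser.ConfigParser(interpolation=None)
cp.read_string(RAW)

def resolve(section, key):
    value = cp.get(section, key)
    # Recursively expand $(section.KEY) references.
    return re.sub(r"\$\((\w+)\.(\w+)\)",
                  lambda m: resolve(m.group(1), m.group(2)), value)

assert resolve("staging", "DB") == resolve("production", "DB")
assert resolve("staging", "API_KEY") == "stg-key"
```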
It's not much help, but it helps me avoid a few annoyances.

On an unrelated note, I always find it a real headache to keep the naming convention of the environments consistent throughout the project. It always ends up like a mixed bag:
* Juggling production/prod, staging/stg, and develop/dev,
* Inconsistent placement of env, e.g. prod-myproject or myproject-stg,
* Skipping the env name sometimes, e.g. myproject-bucket for the dev S3 bucket but prod-myproject-bucket for prod (though it's okay to omit the env name for user-facing places like URLs),
* Inconsistent resource sharing between envs, e.g. same S3 bucket for local and dev but different DB, or same Kubernetes cluster with different labels for dev/stg but different cluster without a label for prod.
These inconsistencies often result from quick decisions without much thought or out of necessities,
and everyone is too scared to fix them anyway.
But it bothers me a lot and sometimes causes serious bugs in production.
Also any system as described needs security audit and analysis to truly understand it strengths and weaknesses (or glaring compromises).
Alternatively - secrets via environment vars weaknesses and mitigations are well understood.
I suppose if you don’t want it to stay after execution you can run it in a subshell:

  (source .env; my_command)

I’m sure there is a fairly straightforward way to encrypt and decrypt a local file.

  #!/bin/bash
  set -o allexport
  . .env
  set +o allexport
  cmd

But I do agree that at some point you want a tool to orchestrate these things and guide your usage so you don’t have to reinvent the same lines of code all the time.
You don't need to encrypt your keys: with what keys are you going to do so? Will you encrypt those too?
if someone is in your server you are pwned anyways.
It's ok if you identify yourself as a cybersecurity dude and hold a cybersecurity role and you need to justify your livelihood.
But do it in a way where you don't bother people. It's ok if you bother devs, but then you go on and bother users with 4FA, 5 rule passwords, systems that can't answer subpoenas because you have encrypted your sense of self.
When you are improving security at the expense of every other variable, that's annoying, but when you keep "improving security" at the expense even of security, is the point where people will start ignoring and hiding shit from you
if the "secure" methods aren't being used because of 4FA and 5 rule passwords and 30 day expiries (don't get me started on this), then it is by default insecure because the devs will find more convenient ways, and thereby, less secure ways.
it's like storing passwords, i can't tell u how many times i've seen people use the same passwords everywhere because the rules are too restrictive. or just write it down somewhere public because it's too much work to get into the password manager and properly add it
i'd be willing to put big money down that a LARGE chunk of passwords for apps that require at least 1 number or symbol end in `!` or `1`.
luckily i do think passkey is a step in the right direction with good convenience and overall ux
This is false and also a symptom of an all-or-nothing approach to cybersecurity, which isn't feasible in the real world.
I'm assuming the parent intended to say "if someone gained access to your user you are pwned anyways", which is true, unless you actually go to the effort of storing the secrets securely using OS-provided mechanisms. Env vars are not that.
> which isn't feasible in the real world
Well of course it isn't, how would you justify those sweet cybersecurity experts' paychecks otherwise? Not saying cybersecurity isn't important, but there's way too much snake oil in the industry nowadays (always has been?).
So if a single dev machine is compromised, all of your prod secrets are exposed?
I wish this were closer to sops with support for gpg and or ssh keys. Because sops is a great idea locked in a questionable codebase.
Note that you don't have to leave the key "lying around" as you can secure it the same way you would an asymmetric key. And it certainly beats leaving the plaintext secrets themselves lying around in a .env file or similar.
EDIT:
I see you were saying "dev machine" exposes "prod secrets" but that's not the case. The protocol is designed so you would have secrets.json and secrets.prod.json, encrypted with different keys and (necessarily) managed separately but with the same tools and api. Dev machines being compromised compromises dev keys, not prod keys.
Read the last section in the README on GitHub for more on the dev/prod split.
It also means I can do things like seal them to a key that is stored in KeyVault and then allow the transparent retrieval of that key at runtime on Instances that have been given an identity with access.
This means that production secrets are sealed in place and only openable by effectively authenticated workloads.
And if you use sops-nix, this becomes a "setup once and never think about it ever again, ever" kind of operation.
I've used it at two jobs now over about 5 years and have had zero issues.
SecureStore was launched in 2017 (initial version was .NET only): https://neosmart.net/blog/securestore-a-net-secrets-manager/
And where do I keep the key? In a secret store?
Whether it's a symmetric key or an asymmetric key, you have the same problem. Someone overriding your secrets is definitely not high on the list of concerns, and if they're committed to git then they can never be truly overwritten.
Doubly the case now that .env files are natively supported by node.
I don't know if it's a problem for Rust (or other platforms like Python, .NET, or Java afaik).
As someone who primarily writes TypeScript to run in browsers and on node.js, this kind of threat requires an extra level of vigilance, and often nudges me toward writing my own things rather than importing them.
In Rails, the entire file is encrypted, unlike here where only the secrets are.
So we can have
FOOPW=pw1
when testing locally, but
FOOPW="{vault1:secret1}"
in production. Env vars are processed simply by running a regex with callback that fetches secrets from vaults. This is quite flexible and has the advantage of being able to inject the secrets in the same place as other configuration, without actually having the secrets in environment variables or git etc (even encrypted)
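That substitution is a few lines; here is a sketch with a stand-in vault client (fetch_secret and FAKE_VAULT are placeholders for the real lookup):

```python
import re

FAKE_VAULT = {("vault1", "secret1"): "s3cr3t"}

def fetch_secret(vault, name):
    # Placeholder for the real vault client call.
    return FAKE_VAULT[(vault, name)]

def resolve(value):
    # Replace each {vault:name} reference via a regex callback.
    return re.sub(r"\{(\w+):(\w+)\}",
                  lambda m: fetch_secret(m.group(1), m.group(2)), value)

assert resolve("{vault1:secret1}") == "s3cr3t"
assert resolve("pw1") == "pw1"   # local plaintext values pass through
```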
SOME_CONFIG_OPTION = @AWS::some_config_option
And I've written a config library that knows that when a config value starts with `@AWS::` it needs to resolve the config option to an actual value by reaching out to AWS's Secrets Manager service and looking it up there, in which case it receives the value and caches it locally so that subsequent references to this configuration option don't require an additional call out to the cloud.
It works surprisingly well.
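A minimal sketch of that resolve-and-cache behavior; lookup_in_secrets_manager is a placeholder for the real boto3 call:

```python
from functools import lru_cache

CLOUD = {"some_config_option": "resolved-value"}  # stands in for Secrets Manager
calls = []

@lru_cache(maxsize=None)
def lookup_in_secrets_manager(name):
    calls.append(name)   # count round-trips for the demo
    return CLOUD[name]

def get_config(raw):
    if raw.startswith("@AWS::"):
        return lookup_in_secrets_manager(raw[len("@AWS::"):])
    return raw

assert get_config("@AWS::some_config_option") == "resolved-value"
assert get_config("@AWS::some_config_option") == "resolved-value"
assert calls == ["some_config_option"]   # second read came from the cache
```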
The URL in `dotenvx` points to https://gitub.com/dotenvx/dotenvx (gitub without the h)
It would only break in cases where people's values specifically started with "encrypted:"
I've never used it (knowingly) but if I did and wanted to use this new version/project even the CLI name change to append 'x' would be annoying (I'd probably alias /symlink it).
The previous (IMHO superior) version was generating a .env.vault and a .env.keys from a .env file. Leaving the .env plain text and .env.vault encrypted.
Is there a good primer on using vaults? I know how to query and insert into Azure Key Vaults, but architecting around it is unclear to me.
Things that come up for me:
- As (azure) key vaults don't support per secret access rights, where do I store secrets between deployments?
- Should I store connection strings to cloud resources, or just ask the resource for the connection string at deployment time (for Azure, a cloud function pretty much needs a connection string for most basic things. They say they are moving away from this but ...)
- A security warning is sent if a key is accessed more than x times per hour. Does that mean I should pull in the key from the vault at deployment? Cache it after the first call during runtime?
- Most of our 3rd party vendors gives us 1 and only 1 key. How do I manage that key between development, production and several developers? Right now we mostly forward the e-mail from the vendor with the key ...
For example AWS gives you multiple ways of injecting secrets as env vars into your containers when they boot up (ECS + secrets manager, EKS, etc)
Typically the application instance sessions are automatically rotated very frequently, AWS’s sessions are limited to 6 hours for example.
I am far too clumsy to trust myself to push secrets in encrypted form, personally
`start: dotenvx run -f .env.local -f .env -- node index.js`
Instead of the -f flag, which now cannot be overridden, one could invoke it with
`DOTENV=.env.staging npm run start`
For example
DOTENV_PRIVATE_KEY_PRODUCTION
Would provide it with the information it needs to read .env.production
The correct fix for “it’s too easy to accidentally commit .env files with secrets” is to not function (panic/throw) if there isn’t a suitable .gitignore/.dockerignore, not a specialized cryptosystem for .env files. This just creates a different problem.
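The fail-fast check is small; here is a sketch that only scans top-level .gitignore patterns (a real implementation would shell out to `git check-ignore`):

```python
import os
import tempfile

def refuse_unless_env_ignored(root):
    path = os.path.join(root, ".gitignore")
    patterns = set()
    if os.path.exists(path):
        with open(path) as f:
            patterns = {line.strip() for line in f}
    if not patterns & {".env", "/.env", "*.env"}:
        raise RuntimeError(".env is not gitignored; refusing to load it")

# Demo: passes with a proper .gitignore, refuses without one.
with tempfile.TemporaryDirectory() as ok_root:
    with open(os.path.join(ok_root, ".gitignore"), "w") as f:
        f.write(".env\n")
    refuse_unless_env_ignored(ok_root)

with tempfile.TemporaryDirectory() as bad_root:
    try:
        refuse_unless_env_ignored(bad_root)
        raised = False
    except RuntimeError:
        raised = True
assert raised
```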
I simply use an envdir outside of the project and update all my run scripts to use “envdir $CONFIG_PATH <whatever>”. Simpler and safer.
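envdir's contract is tiny: roughly, each file in the directory becomes one environment variable, named after the file and valued with its first line. A re-creation sketch (the empty-file rule is from the daemontools behavior):

```python
import os
import tempfile

def load_envdir(path):
    env = {}
    for name in os.listdir(path):
        with open(os.path.join(path, name)) as f:
            value = f.readline().rstrip("\n")
        if value:              # envdir treats empty files as "unset"
            env[name] = value
    return env

with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "API_KEY"), "w") as f:
        f.write("local-key\n")
    env = load_envdir(d)

assert env == {"API_KEY": "local-key"}
```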
I do like to keep a .env.example that you can rename to .env and adjust as desired. I tend to have defaults for running a compose stack locally that are as close to "just works" as possible.
I doubt I'd ever want to use this in practice.
This is a great tradeoff: easy way to share configuration, easy way to edit non-encrypted config values, reasonable security for the private values.
Doesn't solve key rotation of course, but for small teams this is a great solution.
Can’t I somehow do this in the script itself so “ruby index.rb” is enough? I know I’m only saving a couple of characters in the command line but I’m asking out of curiosity.
Does dotenvx support secrets managers?
The ability to use an arbitrary filename for .env is quite nice though!
And the attackers will be after this file not the .env anymore.
It looks great nonetheless, especially the cross-language feature.
A=$B
B=$A
Anyway, I hope they don't do command interpolation on top of that (like Ruby dotenv does), because then you can inject code via environment variables (like in the Ruby version).

I recently looked into various dotenv implementations just for fun. They're all different. No unified syntax at all. A lot don't do proper parsing either, but just use some regular expressions (like this one), which means they just skip over whatever doesn't match. I started to document all the quirks I could find and wrote my own dotenv dialect just for fun. Nobody use it! Anyway, here it is: https://github.com/panzi/punktum
Direct link to the quirks of the JavaScript dotenv implementation: https://github.com/panzi/punktum?tab=readme-ov-file#javascri...
I've also tried to write a parser compatible to JavaScript dotenv (no x) in C++: https://github.com/panzi/cpp-dotenv
Environment variables are great for configuration because:
- you can inherit them from a previous application or application(s)
- you can override them in each environment you run your app in
- you can pass them on to other applications
- they are globals that can be loaded by libraries
- they're not hardcoded in the code, so easier to change things without rebuilding, easier to reuse in different ways/environments/configurations
- the OS has primitives for them
- they're simple
Environment variables are bad for configuration because:
- (by default) when set in an application, they are passed on to all future applications/forks/execs
- they are often dumped as part of troubleshooting and aren't considered confidential
- they can often be viewed by external processes/users
- there are restrictions on key names and values and size depending on the platform
- typical "dotenv" solution doesn't necessarily handle things like multi-line strings, has no formal specification
- no types, schemas
What we actually need, that environment variables are being used for:
- configuration information passed at execution time that can change per environment
- loading or passing secret values
- development environments
- production environments
So what would be a good alternative?
- an application library ("libconfig") that can load configuration of various types from various sources in various ways
- support for configuration types: key-value, file/blob, integer/float
- support for confidentiality (require specific function to unseal secret values; in programming languages the intent would be you can't just print a stringified version of the variable without an unseal function)
- support for schema (application defines schema, throws exception if value does not match)
- support allowing a configuration to be overloaded by different sources/hierarchies
- support passing a configuration on to other applications
- support tracing, verbose logging
- truly cross-platform and cross-language with one specification, behavior for all
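The unseal idea in particular is cheap to prototype: wrap secret values in a type whose string forms are redacted, so accidental printing or logging leaks nothing (names here are illustrative, not from any existing library):

```python
class Secret:
    """Holds a value that only unseal() will reveal."""
    def __init__(self, value):
        self._value = value
    def unseal(self):
        return self._value
    def __repr__(self):
        return "Secret(<redacted>)"
    __str__ = __repr__

s = Secret("hunter2")
assert str(s) == "Secret(<redacted>)"
assert repr(s) == "Secret(<redacted>)"
assert s.unseal() == "hunter2"
```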
How would it work?
- devs can create a .env file if they want
- devs load 'libconfig' into app, use it to load their configuration values during development. library can have default sources, and even set env vars or an object internally, so no code needs to be written to use it
- in production, same code causes libconfig to look at cloud-native and other sources for configuration
- when debugging, secret confidentiality is maintained; tracing communicates sources of configuration, what was loaded, from where, etc.

Granted, our solution is more JavaScript/TypeScript-focused, and the config schema will be defined using TypeScript and piggyback on npm for sharing plugins/extensions. But the config will be usable in any language (and generate types for multiple languages) with deeper integrations coming soon.
The pluggable nature of our architecture also means you can encrypt secrets in your repo if you want to, or sync with other backends. Shouldn't be too hard to keep everything away from env vars either if that's what some folks want.
Would love your input, and to hear what you think!
- Supporting a particular repo type is an engineering smell (over-complicated/over-opinionated/tightly-integrated); this should be repo-design-agnostic
- The "dmno service" is also biting off too much, this should not be a concept inherent to the configuration library
- Schemas should be optional
- Data types are fine, but get complicated, especially when mapping between different data formats/containers/transports; it's probably better to start with only a couple types and grow them over time if needed
- Inter-service dependency management is also too complex for this solution
- Plugins are a good idea
- Three different package types? Complexity...
- The security features are great
I get that you're trying to sell a product, and so having a big kitchen sink makes it more attractive to buy, but it makes for more complicated solutions which then annoy users.

This is the only goal, and this tool achieves it. In the simplest way. While keeping you as secure as you were before, manually setting envs on heroku, railway, aws, jenkins, etc.
GitOps FTW
I wonder if dotenvx ensures that .env is in .gitignore and yells loudly if it is not.
I encrypt my dotenvs with gpg, but that's hella esoteric and everyone shouldn't be forced to do that.
For this kind of encryption to work, you need to supply the decryption key from some outside system (e.g. via env vars, AWS SSM, etc.). And if it can supply the key, then why not just use it for other important secrets directly?
While developers can move their .env file across systems without worrying that they left plaintext secrets somewhere.
also it allows adding new secrets without knowing decryption key - I think it is important for collaboration
also most importantly: plaintext decrypted secrets are never stored on disk, and only kept in memory. I think it is also an improvement over the regular dotenv
"Devops people know" means that the key must be some secret property. Or you need to use the key during the deployment artifact building pipeline, and then deploy the artifacts with clear-text secrets.
> vs storing hundreds of secrets.
Then serialize them to JSON or whatever.
> also it allows adding new secrets without knowing decryption key - I think it is important for collaboration
So basically, you want developers (who don't have access to prod) to add random properties that your peers can't see during the code review? Ok...
Sorry, there's just no way the encrypted secrets in git are a good idea for general-purpose software.
like what is the alternative you propose? storing plaintext secrets on disk and hope that your runtime is secure and hardened enough and free from vulnerabilities??
as if directory traversal, path injection vulnerabilities, shell command injection, etc vulns that allow reading file from disk, don't exist?
This is basically a simplified version of Hashicorp's Vault, GCP key vault etc. with some less granularity on user authentication.
It solves the issues around .env.example and is perfect for gitops. You have all your secrets for all your envs ready, while you only need to set a single env var (the private encryption key) on your specific hosting environment.
You could even use separate keys per env, eg.: to give access to a developer to staging only.
Mozilla's https://github.com/getsops/sops is another contender but with a more complicated (and perhaps more flexible) key management.
(author of rot)
If your env data is compromised you have to set new values in all services and restart your app / container.
But the encrypted env file could be shared in your team or published to a server without any problem. In the past this was a problem, when accidentally publishing plain-text passwords.
Env vars are prone to leaking and best practice moves the goal post further. Devs love to dump envs to log files, child processes inherit them, admins can very easily sniff them.
there are costs associated with adding additional layer in regards to maintenance of such layer.
The easiest way to bring down your entire distributed infrastructure and cause a large-scale outage is for your vault to go down...
I think you have a fair point that dotenvx doesn't get the implementation right, but it does at least seem to recognize where the problem lies and is trying to fix it from that angle. You have to start somewhere. Almost never do we get solutions right the first time. It takes iteration and experimentation and perhaps this (and others like it) can pave the way towards a better solution.
In the best case your app sends a request (something like gRPC) to the OS key system, which adds the decrypted keys and executes the function. So your app never has direct access to the decrypted keys. Like the fingerprint system in smartphones.
Using dotenv-like constructions is, in my eyes, an antipattern.
How is that different from a dotenv, other than where the k/v pairs persist?
It wouldn't surprise me if many VPS use .env files.
https://nodejs.org/en/blog/release/v20.6.0#built-in-env-file...