Terraform makes this redundancy especially obvious because you already include the type in any reference to a resource, e.g.:
google_container_cluster.payments_cluster.something
That was so frustrating: we all work here, we know what it's called, it doesn't add anything, and there's no sister company we share resources with. All it does is make it harder to distinguish between two similarly named, but entirely too long, internal applications.
Personally, I think applications should be blind to their environment and have the relevant configs passed to them, and said environments should be as well-separated as possible: ideally in different accounts, with different URLs, that are mutually inaccessible.
That's not a fault with the article, though.
Coupling the notion of "environment" with your workload (be it in their name or their configuration files) is an anti-pattern that I wish people stopped following.
If you have 3 environments (dev, staging, and prod), you want resources in these environments to be named _exactly the same_ in each environment.
Whatever your workloads are, wherever they run, they themselves should never _be aware of the environment they run in_. From a workload's point of view the "environment" and its label (dev, staging or prod) do not matter. What makes a workload a "dev" or a "production" workload are their respective configuration and the way they differ, _not_ the name of the "environment" they run in.
What makes a workload a "dev" workload is dictated by its configuration (which database host it talks to for example).
When the environment is coupled into the configuration of your workloads, inevitably a developer will end up writing code like:
if env == 'dev' then use_database("dev.example.com")
This won't work at all at scale: as people start adding new environments (imagine "qa", "test", "dev_1", "alphonso's_test", etc.), developers will keep adding more and more conditions:
if env == 'dev' then use_database("dev.example.com")
if env == 'qa' then use_database("qa.example.com")
if env == 'dev_1' or env == 'alphonso' then use_database("dev_00.example.com")
// ... add more and more and more conditions
Instead, if your "dev" environment must talk to a "dev.example.com" database, create a variable called "DATABASE_HOST". And for each environment, set "DATABASE_HOST" to the value of the database this specific environment needs to talk to.
For example, in your "dev" environment DATABASE_HOST = "dev.example.com", and in your prod environment DATABASE_HOST = "prod.example.com". Here we clearly have a "dev" and a "prod", yet "dev" and "prod" are merely labels for us humans to differentiate them; the _configuration_ of these environments is really what defines them.
The code above then simply becomes:
use_database(DATABASE_HOST)
and _this_ ^ will scale to an infinite number of environments. _Configuration_ defines the "environment", _not_ the name of the environment.
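A minimal sketch of the pattern in Python (the variable name DATABASE_HOST comes from the example above; use_database is a placeholder for whatever connection logic your workload actually has):

```python
import os

def get_database_host() -> str:
    # The workload reads its configuration from the environment;
    # it never inspects an environment *name* like "dev" or "prod".
    return os.environ["DATABASE_HOST"]

def use_database(host: str) -> str:
    # Placeholder for real connection logic.
    return f"connected to {host}"

if __name__ == "__main__":
    print(use_database(get_database_host()))
```

The same binary runs unchanged everywhere; only the value of DATABASE_HOST differs per environment.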
edit: I realize the article is talking about your cloud provider resources and people might be running multiple "environment" resources in a single account. The above applies to "workloads" talking to these "cloud provider resources", not to the resources themselves, since, of course, you can't have 2 DBs named the same under one single account (obviously the names would collide).
So your production database server has the same resource name as your dev database server?
Good luck running that on Azure.
Re your edit: When people have strong views ("absolutely not") and rant about it, yet don't seem to grasp what the article is about, I think their opinion should be ignored. Consider that.
Just preface your block of text with "tangentially, when it comes to 'workloads' ... [the rest of the block of text]", and now you have a generic comment: not about the article, but about something related.
There is a story of a bank that sent out cancellations of all their trades in production because of a mistake due to something like that. That was a costly mistake.
Production and test environments should not even be on the same network. And, ideally, in my opinion, whoever has access to a production server should not have access to a test server, and the other way around.
The naming conventions are for the humans to reason about the system, and help the new hire not trigger an outage.
Getting the "proper" amount of information in there is the acme of skill.
With Kubernetes / Helm you can have all of your resources named the same in each environment, each with its own set of identically named env vars following what you've described, but whenever you do anything that interacts with your cluster you add a `-n prod` namespace flag so that each environment runs in its own isolated namespace.
Also for good measure it's not a bad idea IMO to add _dev, _test, _prod to the name of your database just as a double identifier. It still meshes well with the strategy of using a DATABASE_URL. I like using the full URL instead of just using the DATABASE_HOST since the password will be different across environments and I'd rather only have to set 1 env var instead of 2+.
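For illustration, a full DATABASE_URL can be split back into its parts with the standard library (the URL and credentials here are made up):

```python
from urllib.parse import urlsplit

# One env var carries host, port, and credentials together,
# so each environment only needs DATABASE_URL set once.
url = urlsplit("postgres://app:s3cret@dev.example.com:5432/payments")
print(url.hostname)  # dev.example.com
print(url.password)  # s3cret
print(url.port)      # 5432
```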
In a sibling comment someone mentioned this pattern doesn't work for monitoring across environments, but it does: you can set an APP_ENV env var and then filter based on that. The same thing applies to logging; you can tag / filter your logs on APP_ENV too.
> As usual, there’s no silver bullet and the actual naming convention should always be tailored to your environment. The main point is having one! And I hope this post gives you a head start.
When looking at log files you need to know which node is having a problem. It's also helpful to know what environment was responsible for a security alert at a glance. It's also helpful to know if the instance that is running is the same one that had the incident, or whether it's a brand new node and the old one is gone.
Naming servers the same name loses a ton of valuable information and provides almost no benefit. It is just inviting people to make mistakes, and creating a nightmare for your noc/soc and siem response teams.