I would say a better reason is that while both are Linux distributions, they are distinct dialects and ecosystems. It isn't impossible to switch, but for institutions that have complex infrastructure built around the RHEL world, it is a lot of work to convert.
Consider an appliance that will be shipped to a literal cave for some mining operation. Do you want to build that on something that you would have to keep refreshing every year, so that every appliance you ship ends up running on a different foundation?
This.
A decade ago I was technical co-founder of a company [0] that made interactive photo booths and I chose CentOS for the OS.
Some are still out in the wild, powered on 24/7, and not a peep from any of them.
We only ever did a few manual updates early on - after determining that the spotty, expensive cellular wasn't worth wasting on non-security updates - so most of them are running whatever version was out ten years ago.
Rock solid.
[0] https://sooh.com
You either need to upgrade or unplug (from the internet).
There are still places out there running Windows NT or even DOS, because they have applications that simply won't run anywhere else, or need to talk to ancient hardware over a parallel port or some weird crap like that. These machines will literally run forever, but you wouldn't connect them to the internet. Your hypothetical cave device would be the same.
Upgrading your OS always carries risk. Whether it's a single yum command or copying your entire app to a new OS.
Besides, if you're on CentOS 8 then wouldn't you also be looking at Docker or something? Isn't this a solved problem?
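For what it's worth, the usual argument is that a container pins the app to its own userland, so the host OS underneath can be swapped without touching the app. A minimal sketch (the app path and entrypoint here are made up for illustration):

```dockerfile
# Hypothetical example: the app ships with a pinned CentOS 8 userland,
# so the host underneath can be upgraded or replaced independently.
FROM centos:8
COPY ./app /opt/app
CMD ["/opt/app/run.sh"]
```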
What does Docker have to do with this discussion?
How would I run our SAP ERP apps/databases without "running servers that long"?
And while it may be cumbersome, or cause some downtime and headaches when things aren't in order, I find that having to do it once every 1-3 years forces your hand to get your shit together, rather than making it a once-per-decade affair of praying you migrated all your scripts manually and that everything will work, while your OS admins threaten your life because audit is threatening theirs.
The main reason people choose Linux is its stability.
It is totally ok to run servers in 2020.
Also, your statement sounds like the default cloud-vendor lingo used to push people toward proprietary technology with high vendor lock-in.
Not everything can be highly ephemeral or a managed service, so running servers yourself is totally okay like you said.
But I agree, I also get the tone of "servers should be cattle and not pets, just kill them and build a new one." That can also be done on bare metal if you're using VMs/containers. It seems like most people forget that these cloud servers need to run on bare metal.
We have about 40. The oldest is around 17 years old. Our newest server is 9 years old. Our average server age is probably around 13 years old.
The most common failure that completely takes them out of commission is a popped capacitor on the motherboard. Never had it happen before the 10 year mark.
Never had memory failure. Have had disk failures, but those are easy to replace. Had one power supply failure, but it was a faulty batch and happened within 2 years of the server's life.
That's mental.
Ideally you'd never upgrade your software in the usual way. You'd simply deploy the new version with automated tooling and tear down the old one.
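A minimal sketch of that deploy-then-tear-down pattern, using a local symlink swap to stand in for real provisioning tooling (the paths and version names here are invented for illustration):

```shell
set -eu
base=$(mktemp -d)

# "Provision" the new release alongside the old one.
mkdir -p "$base/releases/v1" "$base/releases/v2"
echo "app v2" > "$base/releases/v2/app"

# Cut over by repointing the 'current' symlink, then tear down the old release.
ln -sfn "$base/releases/v2" "$base/current"
rm -rf "$base/releases/v1"

cat "$base/current/app"   # prints "app v2"
```

Nothing in the running version is ever mutated in place; you only ever add a new copy and retire the old one.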
Also, "running a server for ten years" does not have to mean ten years of uptime, and I don't think that's what was meant.
Do you have any idea how much effort it is to change everything over to "treating your servers as disposable"?! It's going to eat up a third (to half) of my "fun time" budget for the foreseeable future!
the 'usual way' is automated tooling