It's really only Linux where you have to ship a complete copy of the OS (sans kernel) to even reliably boot up a web server. A lot of that is due to coordination problems. Linux is UNIX with extra bits, and UNIX wasn't really designed with software distribution in mind, so it's never moved beyond that legacy. A Docker-style container is a natural approach in such an environment.
Yes obviously if you control the whole stack then you don't really need containers. If you're distributing software that is intended to run on Linux and not RHEL/Ubuntu/whatever then you can't rely on the userspace or packaging formats, so that's when people go to containers.
And of course if part of your infrastructure is on containers, then there's value in consistency, so people go all the way. It introduces a lot of other problems but you can see why it happens.
Back in around 2005 I wasted a few years of my youth trying to get the Linux community on board with multi-distro thinking and unified software installation formats. It was called autopackage, and developers liked it. It wasn't the same as Docker: it focused on reusing dependencies from the base system, because static linking was badly supported and the kernel didn't have the features needed to do containers properly back then. Distro makers hated it though, and the Linux community was far more ideological then than it is today. Most desktops ran Windows, macOS was a weird upstart thing with a nice GUI that nobody used and nobody was going to use, and most servers still ran big iron UNIX. The community was mostly made up of true believers who had convinced themselves (wrongly) that the way the Linux distro landscape had evolved was a competitive advantage and would lead to inevitable victory for GNU-style freedom. I tried to convince them that nobody wanted to target Debian or Red Hat, they wanted to target Linux, but people just told me static linking was evil, Linux was just a kernel, and I was an idiot.
Yeah, well, funny how that worked out. Now most software ships upstream, targets Linux-the-kernel, and just ships a whole "statically linked" app-specific distro with itself. And nobody really cares anymore. The community became dominated by people who don't care about Linux as such: it's just a substrate, they just want their stuff to work, so they standardized on Docker. The fight went out of the true believers who had pushed back against such trends.
This is a common pattern when people complain about egregious waste in computing. Look closely and you'll find the waste often has a sort of ideological basis to it. Some powerful group of people became subsidized so they could remain committed to a set of technical ideas regardless of the needs of the user base. Eventually people find a way to hack around them, but in an uncoordinated, undesigned and mostly unfunded fashion. The result is a very MVP set of technologies.
The dumpster fire at the bottom of that is libc and the C ABI. Practically everything is built around the assumption that software will be distributed as source code and configured and recompiled on the target machine, because ABI compatibility, and laying out the filesystem so that .so files could even be found in the right spot, were too hard.
The "C ABI" and libc are a rather stable part of Linux. Changing the behaviour of system calls? Linus himself will be after you. And libc interfaces, for the most part, "are" UNIX: they're what IEEE 1003.1 (POSIX) defines. While Linux's glibc extends that, it doesn't break it. That's not least what symbol versions are for, and glibc is a huge user of those. So that... things don't break.
Now "all else on top"... how ELF works (to some definition of "works"), the fact that stuff like GNOME/GTK loves to make each rev incompatible with the previous one, that "higher" Linux standards (LSB) don't care that much about backwards compat: true.
That, though, isn't the fault of either the "C ABI" or libc.
Good platforms allow you to build on newer versions whilst targeting older versions. Developers often run newer platform releases than their users: they want to develop software that optionally uses newer features, they're power users who like to upgrade, they need toolchain fixes or security patches, or many other reasons. So devs need a "--release 12" type flag that lets them say: compile my software so it can run on platform release 12, and verify that it will.
On any platform designed by people who know what they're doing (literally all of the others) this is possible and easy. On Linux it is nearly impossible because the entire user land just does not care about supporting this feature. You can, technically, force the GNU ld to pick a symbol version that isn't the latest, but:
• How to do this is documented only in the middle of a dusty ld manual nobody has ever read.
• It has to be done on a per-symbol basis. You can't just say "target glibc 2.25".
• What versions exist for each symbol isn't documented. You have to discover that using nm.
• What changed between versions of each symbol isn't documented, not even in the glibc source code. The headers, for example, may in theory no longer match older versions of the symbols (although in practice they usually do).
• What versions of glibc are used by each version of each distribution isn't documented.
• Weak linking barely works on Linux: it can only be done at the level of whole libraries, whereas what you need is symbol-level weak linking. Note that Darwin gets this right.
And then it used to be that the problems would repeat at higher levels of the stack, e.g. compiling against the headers for newer versions of GTK2 would helpfully give your binary silent dependencies on new versions of the library, even if you thought you didn't use any features from it. Of course everyone gave up on desktop Linux long ago so that hardly matters now. The only parts of the Linux userland that still matter are the C library and a few other low-level libs like OpenSSL (sometimes, depending on your language). Even those are going away. A lot of apps are now statically linked against musl. Go apps make syscalls directly. Increasingly the only API that matters is the Linux syscall API: it's stable in practice and not only in theory, and it's designed to let you fail gracefully if you try to use new features on an old kernel.
The result is this kind of disconnect: people say "the user land is unstable, I can't make it work" and then people who have presumably never tried to distribute software to Linux users themselves step in to say, well technically it does work. No, it has never worked, not well enough for people to trust it.
[1] Here's a guide to writing shared libraries for Linux that I wrote in 2004: https://plan99.net/~mike/writing-shared-libraries.html which apparently some people still use!
[2] Here's a script that used to help people compile binaries that worked on older GNU userspaces: https://github.com/DeaDBeeF-Player/apbuild
Yup, and I vendor a good number of dependencies and distribute source for this reason. That, and because distributing libs via package managers kinda stinks too: it's a lot of work. I'd rather my users just download a tarball from my website and build everything locally.