The power and flexibility of Linux come at the cost of a much lower floor for stability than proprietary operating systems. You can make a rock-solid system using Linux, but that requires nontrivial effort and knowledge. I have yet to see an issue on my personal Windows and macOS machines that approached the complexity and frustration of, say, package-management hell on Linux.
For examples of what open source projects look like when they have the backing and resources of large companies, look at LLVM versus GCC, BoringSSL versus OpenSSL and CentOS versus Ubuntu. "Better" would be too broad a description, but if you qualify it to the narrow scope of stability, then yeah I think proprietary resources generally come out ahead. I also don't think that is a controversial observation.
> In terms of out of the box stability and support?
How many devices have you used that came with Linux "out of the box?"
> package management hell on Linux
This is actually amusing to me. Windows has had this problem for a long time; they call it DLL Hell. Their solution was to duplicate everything and to ship multiple libcs that are incompatible with each other. Developers seem to forget this exists until they actually try to ship something on Windows. Nowadays, companies just bundle all of the dependencies they need.
Understandably, developers are reluctant to do this on open-source platforms, but that is what produces package friction. Semantic versioning would also fix this, but it's unrealistic to think we could get the tens of thousands of packages in a distribution to follow it. I personally don't think Snap or Flatpak is the answer here either, but Nix is.
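To make the semver point concrete, here is a minimal sketch (the helper names are hypothetical, not any real package manager's API) of how a tool could mechanically decide whether an upgrade is safe, which is exactly what distro maintainers can't rely on today because packages don't follow the convention:

```python
# Hypothetical semver compatibility check. Under semantic versioning,
# only a MAJOR-version bump may break consumers, so an upgrade within
# the same major version is considered safe.

def parse(version):
    """Split 'MAJOR.MINOR.PATCH' into a tuple of ints for comparison."""
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch

def upgrade_is_safe(installed, candidate):
    """Safe iff the major version is unchanged and the candidate
    is not older than what is currently installed."""
    return (parse(candidate)[0] == parse(installed)[0]
            and parse(candidate) >= parse(installed))

print(upgrade_is_safe("1.4.2", "1.9.0"))  # True: minor bump, API stable
print(upgrade_is_safe("1.4.2", "2.0.0"))  # False: major bump may break
```

The check itself is trivial; the hard part is the social one the comment above describes — getting tens of thousands of upstream projects to actually honor the contract, which is why Nix sidesteps it by pinning exact dependency closures instead.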
Sidenote: I'm curious how you got into "package hell." I generally think anything outside the distro-provided packages should be installed into /opt — would you install things into System32 on Windows? My guess is that you either added an additional repo, or you installed a package built from some open-source code outside of any repo.
I was merely poking holes in the parent's theory that more paid engineers and a bigger market cap somehow imply better technical decisions. That's a dubious thing to claim, and if you extend the logic to the OS level, it should be apparent why.
I use Windows, macOS, Linux (and Android, iOS) all the time, and honestly they all get the job done, but they all suck in particular and annoyingly unique ways. So I don't really agree with you either.
Support, maybe, if you equate support to "You bought a corporate license so you're entitled to talk to a human being with something more than a script and a mandate to act as a meat-shield."
Stability? That's a term with two different definitions: doesn't crash, and stays the same over time, with no breaking changes even to the UI. Windows... kinda got better at the first over time, and actively isn't interested in the second, given how much its UIs have changed. Remember how Windows 95 worked and acted? Want to replicate that on Windows 10? Remember COMMAND.COM? Is PowerShell a drop-in replacement? I'm not saying COMMAND.COM is better, I'm just saying your old batch files won't run if you only have PowerShell, innit? Ditto COM versus .NET versus... whatever they'll have in five years.
Meanwhile, I've been running WindowMaker on Linux with zsh for over a decade now.
I meant that the flaw in the logic should be apparent even to a free-software advocate: more resources and paid programmers do not necessarily mean better solutions.
So the argument that Red Hat has more resources, and therefore a better process than Debian, should not really mean anything to someone I suspect is a free-software advocate (the GP).