OTOH, things that update too often seem to be more than slightly broken on an ongoing basis, due to ill-advised design changes, new bugs and regressions, etc.
Apple routinely holds back changes for a .0 release for marketing reasons. This means they routinely ship big releases that break everything at once, where a given bug could come from any of 4 or 5 different sets of changes. If they spread those changes out instead, bug sources would be far easier to identify.
And if bug-fix velocity went up, people could stop treading water on individual bugs and actually get to making changes that avoid entire classes of bugs!
Instead, people think the way to avoid bugs is to avoid updates, or to batch them all at once. This leads to iOS .0 releases being garbage, to users of non-rolling-release Linux distros running software with bugs that were fixed upstream years ago, and ultimately makes it harder to actually fix bugs.
I do regularly install updates on my (Linux) desktop/laptop because guess what? It consistently works exactly the same afterward. Occasionally new formats like jxl images just start working everywhere or something. But otherwise it has just continued to work, unchanged and with no fanfare, for the last decade or so. It's amazing to me how much higher quality volunteer software is in that respect compared to commercial software.
If you want things not to break, you must slow down.
It isn’t reasonable to ask for these two things at once:
* lots of change
* stability
The current milieu seems dramatically skewed toward churning out low-value changes without sufficiently considering the impact on stability, causing frequent breakage and resulting in net negative value.