This is patchworked around in more easygoing languages with dependency-management systems, Docker containers, and so on, but if you can enforce living at head from the start, it makes everyone's life easier.
https://abseil.io/about/philosophy#we-recommend-that-you-cho...
I can no longer count the number of times we had an issue with a "supposedly" minor release that ended up breaking major things in our stack. Most of them were things that could have been detected using unit tests or some kind of basic regression testing.
If you have 1,000 dependency packages, and at any point in time 0.1% of them are broken, then the odds are you will always have something broken.
Having 1000 dependencies with versions pinned means you are living alone and will run into fewer issues, but when they do come, they will be absolute nightmares that no one else is dealing with and no one can help with. And one day you'll have to do the game of begging someone else to upgrade their version of a downstream thing to fix the issue, and they won't, so you'll try to get the other group to backport the fix in their thing to the version you can't upgrade off. And they won't. etc. etc.
Full versioning is the worst of all approaches, IMO, for large complex interconnected codebases (especially ones that are many-to-many from libraries to output binaries) but it absolutely is sometimes the only viable one (for example, the entire open-source-ecosystem is a giant(er) version of this problem, and in that space, versioning is the only thing that I can imagine working).
But you do say “okay, C++20 is released, let’s fix all the build errors and deploy it company-wide.”
Also, living at the HEAD of your language standard is quite a bit different from living at the HEAD of other dependencies.
A lot of companies/products do not work that way. Some have physical products out there that have to be updated, some have on-premises deployments, some sell user software of which there are multiple versions under support. Each of these live versions has to have its own source branches and dependency trees. A single `:latest` can render future bugfixes unbuildable.
Definitely not the end of the world. Said SDKs & build scripts are available here: https://github.com/ossia/sdk for anyone interested (I'll be honest though: the scripts are a mess!)
Assuming you have these lockfiles, the typical option would be to make a lockfile for each released entity, record it for later reproduction, and update it at least once per release.
The "live at head" approach would be to instead have one main shared lockfile, used by every project in the company, with its history recorded as a sequence of revisions. All projects pick a revision of that lockfile to release from. Practically speaking, all projects probably just take the latest revision (the head) of that lockfile, and everyone works hard to make sure that lockfile always works for everyone.
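Concretely, that shared artifact might look something like this (a hypothetical format; names and versions are made up for illustration):

```toml
# shared-deps.lock, revision 1842 -- one file for the whole company.
# Projects release against a recorded revision of this file; in
# practice nearly everyone builds from the latest one ("the head").
boost   = "1.83.0"
fmt     = "10.1.1"
openssl = "3.1.2"
```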
The main advantage here is pretty straightforward combinatorial math. Maintaining and validating unique combinations of dependencies for every release in a codebase is NP-silly, whereas sharing one set of dependencies across as many applications as possible isn't easy but it has a much nicer cost curve. In theory at least, but a lot of large organizations claim practice backs the theory up as well.
[1] Versioning doesn't have to work this way. Putting all code into one big source repository (vendoring) has the same effect.
In contrast, those third-party projects like boost would probably consider the latest commit on their HEADs of their git repos to be "head". So "live at head" is a statement about how each organization should version its dependencies. It doesn't really make sense in the context of boost maintainers deciding their support surfaces since they have all the "heads" to worry about, inherently -- all the organizations using boost libraries.
People point out RedHat provide newer compilers (and newer patched C++ standard libraries depending on the default one!) and he says "It's not us, it's our customers. And no, they won't use any compiler which isn't the system default".
Either you use only the system defaults, and that includes the old Boost included with RHEL, or you don't. I guess he includes a private Boost copy as part of his project; well, then include a private GCC too. A dependency is a dependency.
Those customers have silly requirements for no reason. The whole world shouldn't make efforts to accommodate them.
I am going to bet those customers are going to argue about "stability". All this comes from people reading "stable" as "rock solid, never crashes, always works as expected", when in the case of RHEL it actually means "it doesn't change, no new bugs introduced, and you can rely on the old bugs staying there". Those customers just need educating.
If not, others will.
Library A uses Boost and requires an older version of GCC. Library B uses Boost and requires the newer version of Boost. You want to use libraries A and B in the same project, what now?
But the same question applies to Library B: if your regulations state that you can't update your compiler version past the default distro one, why can you bring in some random recent libraries that are definitely not part of the distro, since they depend on a Boost version that is more recent than the one your distro provides?
Of course, this is all very stupid when you can install GCC 11 and compile in C++20 mode on RHEL 7 with the official devtoolsets...
But the core problem, as always, is tying compilers to Linux distros, as if a C++ compiler version were in any way relevant to your operating system's stability...
Or pay RedHat to backport the library B to the old Boost for you :)
The vast majority of the time it is just not worth it.
Build each one as a separate shared library and wrap each one with a C interface?
This greatly extends what can be considered supported by RHEL 7 (unless the requirement is "don't install any packages").
And then IBM’s OS/400 ILEC++ support isn’t even up to C++11.
It is hard to live in Enterprise software land, where you support everything for many years but end up stuck on older third-party libraries when they move to new standards, just to keep parity across your product line's feature set.
There was a Linux migration project to move the company to Linux. I haven’t heard anything from people inside, but I’m willing to bet that there are still some stragglers that haven’t moved yet.
With .NET 8 and Java 21 around the corner...
It feels like reliving the Python 2/3 transition.
The wonderful enterprise land.
Backward compatibility is good to have, but C++ needs alternatives that let it drop support for old things: the language needs to evolve, and backward compatibility is preventing that. It also leads to very, very long compile times.
And even if backward compatibility were broken in C++, the break would be in the language, not in the ABI, so old C++ and new C++ could still easily coexist, much as C and C++ have coexisted for a long time now.
I like C++, but nobody can deny that it carries a lot of baggage that earns it the dislike.
[0]: https://github.com/seanbaxter/circle/blob/master/new-circle/...