I think library authors should be more relentless and break compatibility every few years. We just need some conventions so it doesn't happen too often: say, a new major version every year, deprecate APIs in the next major version, remove the deprecated APIs in the one after that. So you have a year to rewrite your app if necessary.
And supporting old versions for those enterprises who would rather pay than upgrade might be a good source of income.
Multiply this by the thousands of dependencies modern apps have and the only thing you will ever do is rewrites.
Hard disagree. API churn is (one of) the real costs of using libraries / external dependencies, so people would rather just reimplement them themselves or copy the library code directly into their project.
I did exactly this years ago---I'm the original author of Chrono [1]---and it wasn't well received [2] [3] [4]. To be fair, I knew it was a clear violation of semantic versioning, but I didn't see any point in strictly obeying it before we'd reached 1.0, so I went ahead. People complained a lot and I had to yank the release in question. By then I realized that enough people religiously expect semantic versioning (for good reasons, though) that it's wiser to avoid the needless conflict.
[1] https://github.com/chronotope/chrono
[2] https://github.com/chronotope/chrono/issues/146#issuecomment...
[3] https://github.com/chronotope/chrono/issues/156
[4] https://github.com/chronotope/chrono/blob/main/CHANGELOG.md#...
There are libraries out there (such as FFmpeg, iirc) that do a yearly major version with breaking changes. This is a good approach imo: FFmpeg consumers know what to expect and when to expect it.
It slows the industry when you're spending all your time rewriting code that already works.
The question is, who is slowed down: API creators or API users? If you make regular breaking changes to APIs, it's API users who get slowed down; if you don't, it's API creators who get slowed down.
Given the entire point of things that have APIs (libraries, frameworks, centralized services, etc.) is that there are many users and few creators, it's pretty clear which slows down more people.
Additionally, with good API design, you can often maintain namespaced APIs in tandem with very little additional cost. I've got a /v1/blah API and a /v2/blah API on one of my clients' websites--the v1 directory hasn't been touched in 7 years, because all the bugs anyone cares about have been fixed. It still has users (at least officially, I haven't looked at the reporting to see how often they're actually hitting those APIs). The users simply don't care about the new features in the new API, and it's not our place to force them to care.
You can do similar things with libraries (think sqlite vs. sqlite3) but this is obviously harder with frameworks (which is one of the reasons to not like frameworks). It doesn't work everywhere but it works often.
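As a sketch of this side-by-side pattern (all route names, handlers, and behaviors here are made up for illustration, not taken from the site described above): a versioned path prefix simply maps to a frozen handler, so v1 behavior never has to change when v2 ships.

```python
# Hypothetical sketch: /v1 and /v2 of the same endpoint served side by side.

def blah_v1(params):
    # Frozen behavior: v1 returns a bare list, as its original users expect.
    return sorted(params.get("items", []))

def blah_v2(params):
    # New behavior lives in its own namespace; v1 callers are untouched.
    items = sorted(params.get("items", []))
    return {"items": items, "count": len(items)}

ROUTES = {
    "/v1/blah": blah_v1,
    "/v2/blah": blah_v2,
}

def handle(path, params):
    # Dispatch on the versioned path; each version is its own namespace.
    return ROUTES[path](params)
```

The v1 function can then sit untouched for years, exactly as described above, at near-zero maintenance cost.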
IMO, if you think distros are a thing of the past, or that 3 years of support for the biggest base distro is slowing things down, you're living in a bubble.
I know you prepended this with a statement saying you love LTS too, but to many, LTS is a decade or more.
And really, I have no interest running 'new shiny'. That is the absolute opposite of stable. That is where horrible, life altering mistakes live. If you want to increase your workload 100x, run bleeding edge.
And bleeding edge is anything that has any code change, outside of bug fixes and security fixes.
I know my position is not popular, but that doesn't make it wrong.
With an attitude like that, why should I even bother reading your documentation? Give it 2 years and it will all be obsolete. Waste of time.
Fuck innovation. I want tools that exist long enough that I can master them.
That's not true. You can simply put any breaking changes into separate namespaces. Now you have limitless backwards compatibility and yet users can selectively upgrade whenever they want the new features.
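A minimal sketch of that namespacing idea at the library level (the `v1`/`v2` namespaces and the `parse` function are hypothetical, mirroring the sqlite/sqlite3 pattern mentioned downthread): a breaking change to the return type lives entirely in the new namespace, so old callers keep working.

```python
# Hypothetical library exposing two API generations as separate namespaces.
# In a real package these would be submodules (mylib.v1, mylib.v2); classes
# stand in for them here so the sketch is one self-contained file.

class v1:
    @staticmethod
    def parse(text):
        # Old contract, frozen forever: returns a list of tokens.
        return text.split()

class v2:
    @staticmethod
    def parse(text):
        # Breaking change, confined to v2: returns (tokens, count).
        tokens = text.split()
        return tokens, len(tokens)
```

Callers opt into `v2` when they want the new shape; nothing forces them.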
Maintain a non-trivial node project for a while and you will see why some people like stability.
A bigger problem is daemons with APIs, where it is usually not possible to run multiple versions side by side (they would compete for the same resources or data), so one codebase has to offer multiple API versions internally.
...isn't it just how most LTSs work? LTS is long-term support, not life-time support.
When you break compatibility, you force out abandoned crap. I agree you don't want to do it too often; but not doing it at all is (IMHO) worse.
I always thought you just need two numbers, a.b
You increment b when you change something in a backwards compatible way.
You increment a when you make a breaking change.
If you are used to semver, it is like ditching the minor version and calling it a patch.
a.b is of course isomorphic to the 0.a.b system mentioned here.
The disadvantage is that what would be a patch-only downgrade in semver may now be a breaking change in twover, but that is a rare edge case IMO.
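The a.b rule above can be stated as a one-line compatibility check (a sketch; the function names are mine, not part of any scheme's spec):

```python
# Sketch of the two-number ("a.b") scheme: same `a` means compatible,
# higher `b` means newer but non-breaking.

def parse(version):
    a, b = version.split(".")
    return int(a), int(b)

def safe_upgrade(current, candidate):
    # An upgrade is safe iff the breaking number `a` is unchanged
    # and the candidate is not older than what we run now.
    ca, cb = parse(current)
    na, nb = parse(candidate)
    return na == ca and nb >= cb
```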
Problem is that 'backwards compatible' is not a black-and-white criterion. Most non-trivial development could lead to changes in behavior (or at least in performance) that, while not part of API contract, could still be relevant for users.
For that reason, it makes sense to have a.b.c scheme, where 'b' is for regular backwards-compatible development, while 'c' is for targeted bugfix releases, which are hopefully devoid of such behavioral changes.
Some fixes only come with larger changes. Like the recently posted rust regex 1.9 release. Only a rewrite of the library fixed some long-standing issues.
Currently on version -2.-0.-61 of my social media network for dogs. It's getting there!
In TeX the version number approaches pi; every new version adds a digit. Elegant, and it will hold forever!
TeX 3.141592653 is 45 years old. Its companion Metafont has version number 2.71828182, you can see where this is going.
1. version numbers sort numerically and lexicographically in a sensible way, including across projects and packages which use the same format
2. users get educated that these preciously-held ideas they have about software version numbers are complete superstition. Like "something with a zero major number means not production ready", "something with a zero minor number means I should wait until there's a patch", "something with a major number increase means backwards-incompatible", "something with a minor number increase means backwards-compatible"
3. You know when a particular version (of everything) came out. "We started seeing a weird bug on X date" is no longer impossible to figure out.
(These are self-contained projects. I suppose semver does make some sense for libraries that you link with.)
Professionally, it's been 99% Perforce for about 15 years, so it's routine to use the submitted changelist number, submitted changelists being numbered in the order they were subsequently committed. Sadly not fixed-width, but at least Explorer sorts them sensibly.
Two difficulties I have had doing this with git:
- there doesn't seem to be a way to get git to enforce UTC, so the dates are in my local time zone (for my projects this is not really an issue, and my timezone is almost UTC anyway)
- the CI system runs separate builds for different targets, and using the git commit timestamp ensures all builds get the same time stamp. But it's then possible to end up with timestamps significantly different from the actual release time, or (worse) out of order. I could probably do something better about this than my current "solution" of doing nothing, but this has only happened a couple of times
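One workaround for the UTC issue, assuming a date-derived version string like YYYYMMDD.HHMMSS (my assumed format, not necessarily the parent's): `git log -1 --format=%ct` prints the commit time as a Unix epoch, which carries no timezone at all, so the build script can render it in UTC regardless of the committer's locale.

```python
# Derive a UTC date-based version string from a commit's Unix epoch.
# In a build script the epoch would come from:
#   subprocess.check_output(["git", "log", "-1", "--format=%ct"])
from datetime import datetime, timezone

def version_from_epoch(epoch_seconds):
    # %ct is seconds since the epoch, timezone-free by construction;
    # formatting it with an explicit UTC tzinfo sidesteps git's stored offsets.
    t = datetime.fromtimestamp(int(epoch_seconds), tz=timezone.utc)
    return t.strftime("%Y%m%d.%H%M%S")
```

Because the epoch is attached to the commit itself, all CI builds of the same commit also get an identical stamp.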
1) major: public API change. If you don't have a public API, this should never be anything but 1 and can be hidden from the user. End users don't want these; they scare them.
2) minor: any planned release that doesn't break API. End users love these and plan around them.
3) revision: unplanned emergency hotfixes. Naming this way means the "next minor" we were talking about with all stakeholders is still the next minor. It also means our version numbers look like our git dag, since this one would be a branch from the last tag instead of main.
4) release: sometimes something goes wrong during the release itself. The first 3 numbers are public, this is internal-only and only appears in git tags and internal deployment notes. This way every push to prod has a unique version number, but all our change management documents are still accurate even if we had to push 2 or 3 times for a single release.
Why? First digit is about compatibility. All the other digits are about planning.
"We're working towards 1.3"
"we think this feature will be in 1.4".
"We had to release 1.2.1 because of an emergency somebody put Arabic text in their profile picture filename and that brought down the site."
"Turns out that trick with the release pipeline didn't work in prod so we had to make 1.2.1.1 while deploying".
I think there needs to be better project definitions around what constitutes a major change.
Projects need to be able to define things like dropping support for old versions of the underlying language in minor versions. So the last version some people can install might be "3.2", and "3.3" may not install at all for them. Technically they are then in a state where they need to do work to upgrade and are "broken" in a sense, but the actual public API of the software has not changed between "3.2" and "3.3". Supported OS distro versions should also be able to be dropped in minor releases. Toolchain updates can also happen in minor releases. Pulling in new major versions of dependencies---technically breaking for anyone who hits a diamond-dependency issue, but introducing no breaking API changes of its own---should also be possible in a minor version.
That means that the contract isn't "I can pull in minor versions and you can never force me to do work" but more strictly that the public API the software exposes won't update.
There's also the problem with semver pinning where projects put hard floor and ceiling pins on all their dependencies, even though their software may be fine with a 5-year-old version of the dep (they've just never tested it) and may work fine with the next major release of the dep without any changes at all. Ideally, the compatibility matrix fed into the dependency solver should be a bit more malleable, so that when an engineer discovers that the next version of a dependency breaks everything, they can retcon the compatibility of their software to pin to the last working version of that dependency. This breaks the perfect immutability of everything about a software release, but allows for not being able to predict the future.
Version numbers just denote a change happened and you want them to roughly resemble some sort of chronological ordering. Everything else is gasoline for flame wars and company policies.
I mean, just look at the project showcases. Included are the usual colossal cluster^Wframeworks that power our decaying software infrastructure.
Personally, I don't trust anything that either stays perpetually under v1.0 or exceeds v10-15.
https://tvtropes.org/pmwiki/pmwiki.php/Main/SatireParodyPast...
The most obvious and nefarious example is that the most severe and painful kinds of backward incompatibilities are superficially permissible under SemVer: behavior changes. To confuse the issue even more, these behavior changes might be there to fix a bug and restore the original or intended behavior of a feature! There's no single best way to communicate that to users through a version number: if libfoo v1.4.3 broke a behavior present in 1.4.{0,1,2}, should the fix go in v1.4.4 or v1.5, given that technically you're creating a backward incompatibility? Does the answer change if the buggy behavior has been around for multiple patch releases, or multiple minor releases? Does the scale of the behavior difference impact the versioning scheme chosen? Does the approximate number of users impacted impact the versioning scheme chosen? Probably!
ZeroVer is, in my opinion, a hacky but fine solution to this: no guarantees! The developers just want to develop and it's up to the consumers of the project to figure out what release they want to use. ZeroVer is when a project chooses not to try to communicate very much through version numbers. I think that's often better than some strict adherence to SemVer that falls apart under any sort of reasonable scrutiny.
I like how browsers have gone: basically give up on the traditional Major Version Number. A Chrome 2 or Firefox 2 that is a radical redesign would probably be an entirely new product with new branding and versions. So just bump the first number a lot to communicate feature releases to users, and bump the other numbers for basically internal build reasons. The minor, patch, and build numbers are then free to be used and abused for the many internal purposes that incredibly complex and popular projects like browsers (and operating systems) have.
I think a lot of projects, Nomad included, would probably be best represented by BrowserVer. Nomad is deeply committed to incremental improvements and backward compatibility, so any "Nomad 2.0" efforts are more likely to happen under a new project. Frankly Nomad 1.0 was more about marketing than any sort of meaningful feature or compatibility promise: we wanted to communicate Nomad was stable and reliable. Going from 0.x -> 1.x is an easy way to communicate that even if nothing more significant happened from 0.12 -> 1.0 than had happened from any 0.X -> 0.Y.
So if a project does a major update of one of its deps, without changing its own API, or deprecates support for an old language version or distro, it should be able to ship those in a minor version.
That means that consumers who aren't keeping up with the times may be cut off in a minor update and have to do work to consume the next update. There needs to be less of an expectation that "it isn't a major update, so I won't have to lift a fucking finger and its your fault if I need to" which is what SemVer has socially turned into.
(1) data analysis oriented code (2) code to run experiments (e.g. psych paradigms)
I often find that in such code, forking is rather more common. That is, the code bases become wider rather than deeper. For example, we might run several experiments that have a strong resemblance to each other, but have any number of (experimentally relevant) tweaks. Within each fork, I rename, and restart the semantic versioning.
- Uvicorn is 0.22
- httpx is 0.24
- starlette is 0.28
And so on and on. More generally, the quality of Python's tooling and ecosystem is astonishingly low compared to the investment that every day pours into it.
Edit to add: If I was an opensource package maintainer again, my first action would be to massively bump the major version number of my packages. HUGE increase in quality right there.
They are widely used in production. Maybe some of them are de-facto production ready. But the developers don't want to make commitments to API stability.
And Python being Python it is very hard to statically enforce that you're upholding SemVer promises.