Lots of things not using semver that I always just assumed did.
As an example, I always knew urllib3 as one of the foundational packages that Requests uses. And I was curious, what versions of urllib3 does Requests pull in?
Well, according to https://github.com/psf/requests/blob/main/setup.cfg, it's this:
urllib3>=1.21.1,<3
That is exactly the kind of dependency specification I would expect to see for a package that is using semver: the current version of urllib3 is 2.x, so with semver, you set up your dependencies to avoid the next major-version number (in this case, 3). So, it seems to me that even the Requests folks assumed urllib3 was using semver.
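That range can be checked mechanically. A minimal sketch, assuming the third-party `packaging` library (the machinery pip itself builds on) is installed:

```python
# Evaluate Requests' declared urllib3 range mechanically.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

spec = SpecifierSet(">=1.21.1,<3")

print(Version("2.2.1") in spec)  # True: the current 2.x line is accepted
print(Version("3.0.0") in spec)  # False: the next major version is excluded
print(Version("1.20") in spec)   # False: predates the floor
```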
Glory to 0ver: https://0ver.org/
The industry has a solution for the exact problem urllib3 is having (semver). urllib3 just actively refuses to use it.
Making you distrust updates is absolutely the correct versioning method. Pin your versions in software you care about and establish a maintenance schedule. Trusting that people don't break things unintentionally all the time is extremely naive.
It was dumb and user-hostile to remove an interface for no good reason that just makes it more work for people to update, but everyone not pinning versions needs to acknowledge that they're choosing to live dangerously.
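Pinning in practice can be as simple as a fully pinned requirements file that gets bumped on a maintenance schedule (versions below are illustrative, not a recommendation):

```
# requirements.txt: exact pins, reviewed and bumped on a schedule.
requests==2.32.3
urllib3==2.2.1
```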
> You have released version 1.0.0 of something. Then you add a feature and fix a bug unrelated to that feature. Are you at version 1.1.0 or 1.1.1? Well, it depends on the order you added your changes, doesn't it? If you fixed the bug first you'll go from 1.0.0 to 1.0.1 to 1.1.0, and if you add the feature first you'll go from 1.0.0 to 1.1.0 to 1.1.1. And if that difference doesn't matter, then the last digit doesn't matter.
It depends on the order you released your changes, yes. If you have the option, the most useful order is to release 1.0.1 with the bugfix and 1.1.0 with both changes, but you can also choose to release 1.1.0 without the bugfix (why intentionally release a buggy version?) and then 1.1.1 (with or without 1.0.1), or just 1.1.0 with both changes. You’re correct that the starting point of the patch version within a particular minor version doesn’t matter – you could pick 0, 1, or 31415. You can also increment it by whatever you want in practice. All this flexibility is a total non-problem (let alone a problem with the versioning scheme, considering it’s flexibility that comes from which releases you even choose to cut – semver just makes the relationship between them clear), and doesn’t indicate that the patch field is meaningless in general. (Obviously, you should start at 0 and increment by 1, since that’s boring and normal.)
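The two release orders from the quote can be checked with the third-party `packaging` library (assumed installed); both paths sort cleanly and converge on the same minor line:

```python
from packaging.version import Version

bugfix_first = [Version(v) for v in ("1.0.0", "1.0.1", "1.1.0")]
feature_first = [Version(v) for v in ("1.0.0", "1.1.0", "1.1.1")]

# Both paths are strictly increasing...
print(bugfix_first == sorted(bugfix_first))    # True
print(feature_first == sorted(feature_first))  # True

# ...and both end on the same major.minor line, which is all a
# semver-range consumer cares about.
print(bugfix_first[-1].release[:2] == feature_first[-1].release[:2])  # True
```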
Sure, it’s impossible to classify breaking changes and new features with perfect precision, and maintainers can make mistakes, but semver is pretty clearly a net positive. (It takes almost no effort and has no superior competitors, so it would be hard for it not to be.)
IMO saying "we deprecate now and let's see when we remove it" is counterproductive.
A better way: deprecate now and say "in 12 (or 24?) months this WILL be removed".
After 12/24 months, cut a new semver-major release. People notice the semver-major through their dependency-management tools at some point, and maybe they have a look at the changelog.
If they don't, at some point they may want to use a new feature, and finally be incentivised to update.
If there's no incentive other than "do the right thing", it never gets done.
Having said that, I think LLMs are really going to help with chores like this, if e.g. deprecations and migration steps are well documented.
Alternative option: create a codemod CLI that fixes deprecations for the users, doing the right thing automatically. If migration is painless and quick, it's more likely people will do it.
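A toy version of such a codemod, using only a regex for illustration. The `getheader` to `headers.get` rename is borrowed from the urllib3 example in this thread; a production tool would parse the source (e.g. with libcst) instead:

```python
import re

def migrate(source: str) -> str:
    # resp.getheader("X") becomes resp.headers.get("X")
    return re.sub(r"\.getheader\(", ".headers.get(", source)

print(migrate('length = resp.getheader("Content-Length")'))
# length = resp.headers.get("Content-Length")
```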
... and the right answer to that is to make it entirely their problem.
> Part of the problem is probably lack of pressure to do so it if the timeline is unclear. What if this is actually never removed?
In this case, the warnings said exactly what release would remove the API. Didn't help.
> Why going through the pain?
Because you're not a feckless irresponsible idiot? I don't think it's an accident that the projects they said didn't react were an overcomplicated and ill-designed management layer for an overcomplicated and ill-designed container system, a move-fast-and-break-things techbro company, and what looks to be a consolation project for the not-too-bright.
You probably get an extra measure of that if you're operating in the Python ecosystem, which is culturally all about half-assed, 80-percent-right-we-hope approaches.
The right answer is to remove it when you say you're going to remove it, and let them pick up the pieces.
It also helps if you design your API right to begin with, of course. But this is Python we're talking about again.
The urllib3 package doesn't use SemVer.
The best I have seen is a heavy-handed in-editor strikethrough with warnings (assuming the code is actively being worked on), and even then it’s at best a 50/50 thing.
50% of the developers will feel that using an API with a strikethrough in the editor is wrong. And the other 50% will just say “I dunno, I copied it from there. What’s wrong with it??”
If you are a good developer, you'll have extensive unit test coverage and CI. You never see the unit test output (unless they fail) - so warnings go unnoticed.
If you are a bad developer, you have no idea what you are doing and you ignore all warnings unless program crashes.
https://docs.phpunit.de/en/12.5/configuration.html#the-failo...
So, I do often see deprecation warnings in CI output, and fix them. Am I a bad developer?
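pytest has an equivalent of the PHPUnit option linked above: the `filterwarnings` ini option can promote every warning (deprecations included) to a test failure, so nothing scrolls past unnoticed.

```ini
# pytest.ini: fail the suite on any warning.
[pytest]
filterwarnings =
    error
```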
I think the mistake here is making some warnings default-hidden. The developer who cares about the user running the app in a terminal can add a line of code to suppress them for users, and be more aware of this whole topic as a result (and have it more evident near the entrypoint of the program, for later devs to see also).
I think that making warnings error or hidden removes warnings as a useful tool.
But this is an old argument: Who should see Python warnings? (2017) https://lwn.net/Articles/740804/
In hindsight it actually helped us, because in frustrations we ended up setting up our own Python package repo and started to pay more attention to our dependencies.
In my opinion test suites should treat any output other than the reporter saying that a test passed as a test failure. In JavaScript I usually have part of my test harness record calls to the various console methods. At the end of each test it checks to see if any calls to those methods were made, and if they were it fails the tests and logs the output. Within tests if I expect or want some code to produce a message, I wrap the invocation of that code in a helper which requires two arguments: a function to call and an expected output. If the code doesn't output a matching message, doesn't output anything, or outputs something else then the helper throws and explains what went wrong. Otherwise it just returns the result of the called function:
let result = silenceWarning(() => user.getV1ProfileId(), /getV1ProfileId has been deprecated/);
expect(result).toBe('foo');
This is dead simple code in most testing frameworks. It makes maintaining and working with the test suite much easier: when something starts behaving differently it's immediately obvious rather than being hidden in a sea of noise. It makes working with dependencies easier because it forces you to acknowledge things like deprecation warnings when they get introduced and either solve them there or create an upgrade plan.

A developer setting up CI decides to start an ubuntu 24.04 container and run 'apt-get install npm'.
This produces 3,600 lines of logging (5.4 log lines per package, 668 packages) and 22 warnings (all warnings about man page creation being skipped)
Then they decide "Nobody's going to read all that, and the large volume might bury important information. I think I'll hide console output for processes that don't fail."
Now your CI doesn't show warnings.
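The capture-and-assert helper described a few comments up translates almost directly to Python's stdlib. A minimal sketch; the helper name and the deprecated function are hypothetical:

```python
import re
import warnings

def silence_warning(fn, pattern):
    """Run fn, requiring exactly one warning whose message matches pattern."""
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        result = fn()
    if len(caught) != 1 or not re.search(pattern, str(caught[0].message)):
        raise AssertionError(
            f"expected exactly one warning matching {pattern!r}, got {caught!r}"
        )
    return result

def get_v1_profile_id():
    # Stand-in for the deprecated API from the JavaScript example.
    warnings.warn("getV1ProfileId has been deprecated", DeprecationWarning)
    return "foo"

print(silence_warning(get_v1_profile_id, r"has been deprecated"))  # foo
```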
The Business doesn't care about warnings, they want working software NOW.
So it was entirely possible to keep the software working with these. Why change/remove them in the first place? Is the benefit of the new abstraction greater than the downside of requiring everyone using the software to re-write theirs?
Of course, the reality is hardly ever as bad as that, but I'd say having to deal with trivial API changes is a reasonable basis for a user to dislike a given project or try to avoid using it. It's up to the maintainers how friendly they want to be toward existing user code, and whether they pursue mitigating options like bundling migrations into less-common bigger updates.
Has this ever been considered?
The problem with warnings is that they're not really observable: few people actually read these logs, most of the time. Making the deprecation observable means annoying the library users. The question is then: what's the smallest annoyance we can come up with, so that they still have a look?
Yes.
"give me all updates to my core version that's still compatible"
Semver simply puts a 'protocol' to this - define your major version and off you go.
In practice you could go and check each and every library you use to see when/how they make breaking changes, but semver lets you match a single number across the board – it's more consistent and error-proof.
And if after reading that you think to yourself, "But BugsJustFindMe, I have unit tests that will catch semver mistakes!" I think you need to ask yourself what your unit tests tell you about semver code that they don't also tell you about non-semver code.
In small teams where you're focused on building features, being able to rely on semver for non-breaking versions has been very helpful/successful. I personally haven't encountered a breaking feature added to something that shouldn't be.
This also applies to our docker containers, using python 3.13-slim-trixie for example, and letting the patch version sort itself out.
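For the container case that looks like the following (the tag is the one mentioned above, and may change as images are updated):

```dockerfile
# Pin the base image and minor Python version; let the patch level float.
FROM python:3.13-slim-trixie
```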
If you deprecate something in a popular library, you're forcing millions of people to do work, wasting time that could be used for something better, possibly at a time of your choosing, not theirs. It was emitting warnings for 3 years... so you think everyone should have to rewrite their software every 3 years?
Especially for something like this. Only document it in a footnote, mark it as deprecated, etc - but don't remove the alias.
Don't break stuff, unless, to quote a famous work, you think your users are scum. Do you think your users are scum? Why do you hate your users?
The devil is in the details. It seems `getHeaders` v. `headers` is a non-security, non-performance issue. Why should people spend time fixing these?
Otherwise, I'd say that's a very brave comment you are making.
Instead of a warning that says, "The get_widget method in libfoo is deprecated and will be removed by November 30", the warning could say:
"This app uses the deprecated get_widget method. Please report this bug to the app developer. Ask them to fix this issue so you can continue using this app after November 30."
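A sketch of a warning worded for the end user rather than the developer; `libfoo`, `get_widget`, and the date are the hypothetical names from the example above:

```python
import warnings

def get_widget():
    warnings.warn(
        "This app uses the deprecated get_widget method from libfoo. "
        "Please report this bug to the app developer and ask them to fix "
        "it so you can continue using this app after November 30.",
        DeprecationWarning,
        stacklevel=2,  # attribute the warning to the calling app code
    )
    return "widget"
```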
# Recommended APIs
resp.headers.get("Content-Length")
Why inflict this change on tens of millions of users? It's such a nonsense tiny busywork change. Let sleeping dogs lie.
from warnings import deprecated  # available in Python 3.13+ (PEP 702)

@deprecated("Use response.headers.get(name) instead")
def getheader(self, name):
    return self.headers.get(name)
Like sure — deprecate it, which might have _some_ downstream cost, rather than having two non-deprecated ways to do the same thing, just to make it clear which one people should be using; but removing it has a much more significant cost on every downstream user, and the cost of maintenance of the old API seems like it should be almost nothing.

(I also don't hate the thought of having a `DeprecationWarning` subclass like `IndefiniteDeprecationWarning` to make it clear that there's no plan to remove the deprecated function, which can thus be ignored/be non-fatal in CI etc.)
Users don't notice a deprecation warning. But they might notice adding a "time.sleep(10)" immediately at the top of the function. And that gives them one last grace period to change out their software before it breaks-breaks.
A breaking change causes a full-stop to a service.
An intentional slowdown lets the service continue to operate at degraded performance.
I concur that it's less clear for debugging purposes (although any reasonable debugging infrastructure should allow you to break and see what function you're in when the program hangs; definitely not as clear as the program crashing because the called function is gone, however).
In addition: if CI is the only place the issue shows up, and never in a user interaction... Why does that software exist in the first place? In that context, the slowdown may be serving as a useful signal to the project to drop the entire dependency.
ETA: To be clear, I don't do this as a substitute for a regular deprecation cycle (clear documentation, clear language-supported warnings / annotations, clear timeline to deprecate); I do it in addition before the final yank that actually breaks end-users.
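A minimal sketch of that slowdown-before-removal pattern. All names are hypothetical, and the delay is a parameter only so the sketch is testable:

```python
import time
import warnings

GRACE_DELAY_SECONDS = 10

def new_api():
    return "result"

def old_api(delay=GRACE_DELAY_SECONDS):
    """Deprecated; still works, but noticeably slower than new_api."""
    warnings.warn(
        "old_api is deprecated and will be removed; use new_api",
        DeprecationWarning,
        stacklevel=2,
    )
    time.sleep(delay)  # annoying enough to notice, not yet a breakage
    return new_api()
```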
> I could ask for more Python developers to run with warnings enabled, but solutions in the form of “if only we could all just” are a folly.
I get where he's coming from. But the facts are: the language provides a tool to warn users, the library is using that tool, and users are choosing to turn that tool off. That's fine, but then they don't get to complain when stuff breaks without warning. It is the user's responsibility at that point, not the library maintainers'.