It's much much more useful to the users to say, 2.0 introduced generics, it's distinct. If it's like other languages, generics changes the code people generate a lot, libraries start looking significantly different. It's very distinct, and if that is simply in version 1.18.0 or whatever, that is super bad usability from a language perspective.
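To make "generics changes the code people generate a lot" concrete, here is a minimal sketch of the kind of code that only became possible once Go gained generics (the function name and constraint are illustrative, not from any particular library):

```go
package main

import "fmt"

// Min works over any type listed in its constraint. Before generics,
// a library would need one copy per type, or interface{} plus casts.
func Min[T int | int64 | float64](a, b T) T {
	if a < b {
		return a
	}
	return b
}

func main() {
	fmt.Println(Min(3, 5))     // int version
	fmt.Println(Min(1.5, 2.5)) // float64 version
}
```

Libraries written in this style do look noticeably different from pre-generics Go, which is the point being argued.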
A language or API (things you program against) are pretty much the things for which SemVer makes sense.
> The major version should represent major language changes, not whether it's a breaking change or not
I don’t care if changes are “major”, I care if the code I wrote for version X is expected to need modification to work correctly in version Y. SemVer gives me that, Subjective Importance Versioning does not.
https://docs.microsoft.com/en-us/dotnet/csharp/language-refe...
I like this because it emphasizes the community's commitment to backwards compatibility, which I greatly value. I've spent a good deal of time writing JavaScript, where library developers seem to have very little respect for their users and constantly break backwards compatibility. In ecosystems like that, upgrading fills me with dread. When I see a library on version 4, I have learned to keep looking - if they weren't thoughtful enough about their API design for the first 3 major releases, I shouldn't expect it to be much better going forwards.
For an application, I'm pretty open to version numbers signifying big features - Firefox and Chrome do this, and it's helpful with marketing. But for a programming language? A programming language is a tool, and when upgrading you need to carefully read the changelog anyways. A programming language is no different from a library (in Clojure it literally is a library), and backwards compatibility is /literally/ the main thing I care about. Is my tool going to intrude on /my/ schedule, and force me to make changes /it/ wants instead of being able to spend my time making changes /I/ care about? I want to know that.
[0] This is apparently an awful example, as I've just learned that Java is actually doing the major-version-only thing. It still sort of works, because the only reason they can do that is that they Will Not Break Compatibility.
17 -> 1.17, 11 -> 1.8, this is bothering me way too much for no good reason.
* Java 8 is Java 1.8.0
* Java 11 is Java 11.0.11 (at the moment)
* Java 17 is Java 17.0.1 (at the moment)
SunOS/Solaris is what I use when I want to get nerd-rage mad about minutiae: https://en.wikipedia.org/wiki/Oracle_Solaris#Version_history
https://docs.oracle.com/en/java/javase/17/language/java-lang...
https://docs.oracle.com/en/java/javase/17/migrate/getting-st...
The x.y versioning, with y being a synonym for the major version, was abandoned in Java 9.
I don't agree. I usually don't care so much when a particular feature was introduced into a language (and if I do, it's usually a Wikipedia search away). I mostly care whether or not code written assuming version X can be compiled with version Y of the compiler. Semantic versioning can tell me the latter. Making versioning arbitrarily depend on what someone considers a "big" feature doesn't help me.
I care very much when a feature was introduced into a language, because maintaining compatibility with earlier versions of the language determines what features may be used. If I'm working on a library that needs to be compatible with C++03, then that means avoiding smart pointers and rvalues. If I'm working on a library that needs to be compatible with C++11, then I need to write my own make_unique(). If I'm working on a library that needs to be compatible with C++14, then I need to avoid using structured bindings.
If a project allows breaking backwards compatibility, then SemVer is a great way to put that information front and center. If a project considers backwards compatibility to be a given, then there's no point in having a constant value hanging out in front of the version number.
> I mostly care whether or not code written assuming version X can be compiled with version Y of the compiler.
Semantic versioning can only tell you that for the case where X < Y (old code on new compiler). In order to determine it for X > Y (new code on old compiler), you need to know when features were introduced.
I think this is a deliberate reduction of dimensionality. Go says that you don't need to worry (for long) about this case, because the toolchain must be updated regularly - and promises that it will be as pain free as possible. This simplifies for the Go team, for library authors, and library users in most cases, at the expense of maintaining a recent toolchain.
Not saying this tradeoff is for everyone, and I've never used C++ professionally so I'm probably ignorant. But are you saying it's common with production projects that use a compiler from 2003 or earlier? What's the use case?
Modern C++ compilers are not necessarily available on all platforms. For example, Solaris, AIX or old RedHat versions. Go doesn't have this problem yet, but it will.
Let's start with the fact that newer doesn't mean better. With an already-deployed compiler, you have tested it and know that it works well enough (the code it generates, the bugs you have workarounds for, etc.). Whereas with a new compiler you are back at step one: you must do that work again.
Or vendors just support particular version they have patched.
Or you are scared of GPL3.
The first difference is that there isn't just a single compiler, but rather a standard that gets implemented by different compiler vendors. It's gotten better since then, but typically it would be a while between the updated standard being released and the standard being supported by most compilers. (And even then, some compilers might not support everything in the same way. For example, two-phase lookup was added in C++03, but MSVC didn't correctly handle it until 2017 [0].)
The second difference is that the C++ compiler may be tightly coupled to the operating system, and to the glibc version used by the operating system. Go avoids this by statically compiling everything, but that comes with its own mess of security problems. (e.g. When Heartbleed came out, the only update needed was for libssl.so. If a similar issue occurred in statically compiled code, every single executable that used the library would need to be updated.) So in many cases, in order to support an OS, you need to support the OS-provided compiler version [1].
As an example, physics labs, because that's where I have some experience. Labs tend to be pretty conservative about OS upgrades, because nobody wants to hear that the expensive equipment can't be run because somebody changed the OS. So "Scientific Linux", based on RHEL, is frequently used, and used up until the tail end of the life-cycle. RHEL6 was in production use until Dec. of 2020, and is still in extended support. It provides gcc 4.4, which was released in 2009. Now, gcc 4.4 did support some parts of early drafts of C++11 (optimistically known at the time as C++0x), but didn't have full support due to lack of a time machine.
So when I was writing a library for use in data analysis, I needed to know the language and stdlib feature support in a compiler released a decade earlier, and typically stay within the features of the standard from almost two decades earlier.
[0] https://devblogs.microsoft.com/cppblog/two-phase-name-lookup...
[1] You can have non-OS compilers, but then you may need to recompile all of your dependencies rather than using the package manager's version, keep track of separate glibc versions using RPATH or LD_LIBRARY_PATH, and make sure to distribute those alongside your library. It's not hard for a single program, but it's a big step to ask users of a library to make.
The PR version doesn't even have to be numeric. You can give them proper names.
It was confusing.
A language update comes with the most fundamental set of libraries and APIs: the standard library (doubly so in Golang, which has a lot of batteries included).
It also potentially affects the behavior (if there are breaking changes) of all other third party libs.
The "silliness" part is a non sequitur from what preceded it (and the following arguments don't justify it either).
>Your version is not really telling you the main things you care about.
The main thing (nay, only thing) I care about (for my existing code) from a language update is whether there were breaking changes.
I couldn't care less whether the version number reflects that a big non-breaking feature was introduced.
I can read about it and adopt it (or not) whether there's an accompanying big version number change or not.
>It's much much more useful to the users to say, 2.0 introduced generics, it's distinct.
That's quite irrelevant, isn't it?
It's not useful to users that follow the language (page, forums, blogs, etc.) and would already know which release introduced generics.
And it's also not useful to new users that get started with generics from day one of their Go use either.
So who would it be useful to?
Such a use would make the version number the equivalent of a "we got big new feature for you" blog post.
Why?
Old code still works, and unless you are purposefully maintaining an old system, you are expected to use the latest version anyway. What does it actually change that generics were introduced in version 1.18 rather than 2.0? From now on, Go has generics. As there is no breaking change, it's not like you had to keep using the previous version to opt out.
If semantic versioning is used correctly, like here, that's actually a reasonable-ish attitude.
Since backwards compatibility is already a given for languages, you can then have the major version number indicate feature additions, rather than always being a constant value as semantic versioning would require.
Languages are software; they are dependencies of other software (the only unavoidable dependency!) and as such should absolutely be versioned.
Versioning isn't for marketing or providing easy ways for users to remember when features were released. It's a tool for change management. Exciting features often come with breaking changes, but not vice versa.
Semantic versioning is an approach to versioning. It's an approach which, as GP stated, was designed specifically to help with dependency updating.
GP isn't proposing that languages shouldn't be versioned, they're saying that semantic versioning is the wrong approach to versioning for a language.
This is actually very important. Whether something is a major change or not is pretty subjective.
I'm afraid that expectation isn't entirely warranted. Especially around standard library issues.
Why?
Additive changes can quite easily become breaking changes: the additions get adopted within a minor version range, automated tooling needs to distinguish their presence, and documentation fragments.
My next biggest gripe with semver—that 0.y.z has entirely different semantics from any other major version—may actually be semantically better if adopted wholesale. If your interface changes, major version bump. Else you’re fixing bugs or otherwise striving to meet extant expectations.
Major language changes almost imply breaking changes. Python 2 to 3, for example, was a major change that broke everything from how modules were named and where they lived to some syntax and fundamental semantics as well.
1. Min version in go.mod
2. Add a build tag for what to do on new/old versions of Go (these go1.N tags are automatic; you just need to reference them in the files)
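A minimal sketch of both steps (module path and file names are hypothetical):

```go
// go.mod — step 1, declare the minimum Go version for the module:
//
//     module example.com/mylib
//     go 1.17
//
// mylib_go118.go — step 2, this file is compiled only on Go 1.18+.
// The go1.18 release tag is set automatically by the toolchain;
// you only write the constraint line referencing it:

//go:build go1.18

package mylib

// Code in this file may freely use Go 1.18 features; a mirror file
// with `//go:build !go1.18` would hold the fallback for older toolchains.
```

The same pattern works for any release: `//go:build go1.N` is satisfied by toolchain version 1.N and newer.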
When a language adds any features, if your dependencies (whether real library dependencies or just things you're copying from Stack Overflow) start using the new features, you must upgrade to the new language version. That is an inherent usability constraint, and every time a language designer chooses to add a feature, they're making a tradeoff. But if upgrading to the new language version is trivial, then it's generally a worthwhile tradeoff.
For instance, suppose I find some code that uses Python's removeprefix() method on strings. I need to use Python 3.9 or newer to use that code. It doesn't matter that this is a very small feature.
However, I can generally expect to upgrade my Python 3.8 code to Python 3.9 without trouble. It's different from, say, code that uses Unicode strings. For that code, I need to upgrade from Python 2 to Python 3, which I can expect to cause me trouble. The version numbers communicate that. It's true that Python 3 was a "big" change - but "big" isn't really the point. The point is that I can't use Python 2 code directly with Python 3 code, but I can use Python 3.8 code directly with Python 3.9 code. There are plenty of "big" changes happening within the Python 3 series, such as async support, that were made available in a backwards-compatible manner.
As it happens, Python does not use semantic versioning. But they have a deprecation policy which requires issuing warnings for two minor releases: https://www.python.org/dev/peps/pep-0387/ It's technically possible, I think, that a change like Unicode strings could happen within the Python 3.x series, but that's okay, provided they follow the documented versioning policy. This policy addresses the same question that semantic versioning does, but it provides a different answer: you can always upgrade to one or two minor versions newer, but at that point you must stop and address deprecation warnings before upgrading further.
You are, of course, free to also have a marketing version of your project to communicate how big and exciting the changes are. Windows is a great example here: Windows 95 was 4.0 (communicating both backwards incompatibility with 3.1 and major changes) and Windows 7 was 6.1 (communicating backwards compatibility with Vista but still major changes).
Or alternatively, complain loudly.
that's why semver works: what counts as a major change is defined, and that's when you update the major version number.
The Go people can just make up reasonable version numbers without having an all encompassing theory with definitions, and they only have to convince themselves, not everyone on earth.
but "breaking change" IS the criterion for reasonable version numbers that they have chosen.
"breaking change" is easily tested and well defined.
"big change" is as far from well defined as you can get, because "big" is unquantifiable and subject to judgement and interpretation; i.e. a poor candidate for drawing boundaries.
I'm just saying that they could have done something different, if they had wanted to, without working out a complete theory for that different thing.
Also they may have sneaked it in because they're implicitly acknowledging fault in their previous design decision to exclude it.