My impression is that the dep folks understand that too. The problem is that there is no consensus on what was learned from it.
The dep folks seem to have come away convinced that a SAT-solver approach is the better approach. rsc is clearly convinced of the opposite.
Everyone knows it is ultimately rsc's call, so I don't think talking about the power dynamics is very interesting. What I am more interested in is whether or not it's the right call. A good faith interpretation is that the dep folks aren't sad that their solution lost, it's that what they believe is a better solution lost.
Satisfiability problems of this kind appear in a ridiculous number of fields and applications (and not just by reduction).
The vast majority of them, in practice, are approximated rather than exactly solved.
Most of the ones that are exactly solved are in software verification, model checking, etc.: areas where having an exact answer is critical.
Outside of that, much like you see in MVS, they approximate or use heuristics. And it's fine. You don't notice or care.
The idea that "package management" is one of those areas that absolutely must be exactly solved to generate acceptable results seems to me to be ... probably wrong.
There are much more critical and harder things we've been approximating for years and everyone is just fine with it.
(i.e., not clamoring for faster exact solvers).
Thus I have trouble saying a SAT solver is a better solution. It certainly would be a more "standard" one in this particular instance, but that's mostly irrelevant. It's also a very complex one that often fails in interesting ways in both this, and other, domains.
The logical conclusion of your statement is that minimal version selection is the wrong approach! Minimal version selection is an "exact" solution, in contrast to the traditional one. You would only arrive at MVS if you considered the problem of selecting precise dependencies to be so important that it's worth making it the user's problem instead of having the tool solve it. The philosophy of the traditional package management solution is that it's best to have the tool do the right thing--select most recent versions that satisfy constraints--so that the user is free to worry about more important things.
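To make the contrast concrete, here is a rough sketch of how MVS computes a build list, per rsc's description: every requirement is a *minimum* version, and the answer is simply the maximum of the minimums reachable from the root. This is not vgo's actual code; the `reqs` encoding, module names, and version tuples are all invented for illustration.

```python
# Sketch of minimal version selection (MVS). A requirement graph maps
# each (module, version) node to the (module, min_version) pairs it
# requires. The build list is the transitive closure of the root's
# requirements, taking the maximum version seen per module.

def mvs_build_list(root, reqs):
    """root is a (module, version) node; reqs maps such nodes to lists
    of required (module, min_version) nodes. Versions are tuples."""
    seen = set()
    work = [root]
    while work:
        node = work.pop()
        if node in seen:
            continue
        seen.add(node)
        work.extend(reqs.get(node, []))  # follow that version's requirements
    build = {}
    for mod, ver in seen:
        build[mod] = max(build.get(mod, ver), ver)
    return build
```

Notably there is no search and no backtracking: the result is unique and deterministic, which is exactly the "exact solution to a simpler problem" being discussed.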
> There are much more critical and harder things we've been approximating for years and everyone is just fine with it.
Yes! That's why MVS, and by extension vgo, is oriented around solving a non-problem!
> It's also a very complex one that often fails in interesting ways in both this, and other, domains.
I cannot name one example of a single time SAT solving has failed in Cargo.
"Minimal version selection is an "exact" solution, in contrast to the traditional one."
It's not an exact solution to SAT; it's an exact solution to a simpler problem than SAT (2-SAT), a problem that even admits linear-time solutions.
That is, in fact, what a lot of approximations actually are: reduction of the problem to a simpler problem, plus exact solving of the simpler problem.
Some are heuristic non-optimal solvers of course, but some are not.
Certainly you realize the complexity and other differences between "an exact solver for SAT" and "an approximation of a SAT problem as a 2-SAT problem + an exact solver for 2-SAT"
I can write a linear-time 2-SAT solver in about 100 lines of code and prove its correctness. It's even a nice, standard, strongly-connected-component-based solver.
Here's a random one: https://github.com/kartikkukreja/blog-codes/blob/master/src/...
So if I have an SCC finder implemented somewhere, it's like 20 lines of code.
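For what it's worth, here is roughly what such a solver looks like (a sketch from the standard construction, not the code linked above): build the implication graph where each clause (a ∨ b) contributes edges ¬a → b and ¬b → a, find strongly connected components with Kosaraju's algorithm, and the formula is unsatisfiable iff some variable shares a component with its negation.

```python
def solve_2sat(n, clauses):
    """Variables are 1..n; a literal +k means variable k, -k its negation.
    Returns a dict var -> bool, or None if unsatisfiable."""
    N = 2 * n
    def node(lit):  # positive literal k -> 2*(k-1), negated -> 2*(k-1)+1
        return 2 * (abs(lit) - 1) + (0 if lit > 0 else 1)
    graph = [[] for _ in range(N)]
    rgraph = [[] for _ in range(N)]
    for a, b in clauses:
        # (a or b) is the pair of implications (not a -> b), (not b -> a)
        for u, v in ((node(-a), node(b)), (node(-b), node(a))):
            graph[u].append(v)
            rgraph[v].append(u)
    # Kosaraju pass 1: iterative DFS recording post-order finish times.
    order, seen = [], [False] * N
    for s in range(N):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, 0)]
        while stack:
            v, i = stack.pop()
            if i < len(graph[v]):
                stack.append((v, i + 1))
                w = graph[v][i]
                if not seen[w]:
                    seen[w] = True
                    stack.append((w, 0))
            else:
                order.append(v)
    # Pass 2: DFS on the reversed graph in reverse finish order; component
    # ids come out in topological order of the condensation.
    comp, c = [-1] * N, 0
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s] = c
        stack = [s]
        while stack:
            v = stack.pop()
            for w in rgraph[v]:
                if comp[w] == -1:
                    comp[w] = c
                    stack.append(w)
        c += 1
    result = {}
    for x in range(1, n + 1):
        p, q = comp[node(x)], comp[node(-x)]
        if p == q:
            return None  # x and not-x in one SCC: unsatisfiable
        result[x] = p > q  # the literal in the later (sink-ward) component is true
    return result
```

Everything outside the two DFS passes really is about 20 lines, which is the point: this is an entirely different beast from a general CDCL SAT solver.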
Past this, your argument about "taking the user's time" is so general you could apply it to literally any problem in any domain. You can just plug in whatever domain you like and whatever solution you happen to like into this argument.
Here it's backed by no data: you have surfaced zero evidence for your premise that it is taking user time. This entire thread, in fact, contains no evidence that it's taking any appreciable amount of user time, so it fails as an argument.
(in fact, the only evidence presented in this thread is that the algorithm simply works on existing packages)
If you actually have such evidence, great; I'm 100% sure that the Go folks would love to see it!
Because right now the main time spent, in fact, seems to be people arguing in threads like these.
The package managers for many successful languages and distributions use lockfiles and constraint solvers. Not only is that empirical evidence that it works technically, it is evidence that it works socially — users are able to understand and work with it, and the package ecosystems for those languages have evolved with those rules in place.
Empirical data from Go's own package ecosystem is useful too, but you can only learn so much about package management from a corpus that does not have sophisticated package management. The ecosystem has already learned to work within the restrictions so you'll mostly see packages that confirm the system's own biases.
It's like counting passers-by on a bike trail and concluding that the only vehicles users need are bikes.
I'm not saying vgo isn't better. But it's an unproven approach where lockfiles and constraint solving are proven, multiple times over. The burden of proof lies on vgo.
The empirical data from Go's package ecosystem is drawn from a corpus with sophisticated package management: dep. The argument is that dep is unnecessarily powerful and that a simpler approach will suffice. The evidence supports that argument. Note that this is not an argument about Cargo, or Bundler, or any thing else. Right now, the ecosystem is using dep, and there is evidence that it can be done simpler.
To stick with your analogy, I think it's fair to conclude that the only vehicles users need on bike trails are bikes.
Additionally, I have done an analysis of two Rust projects that have been brought up in my discussions on this issue. Specifically, LALRPOP and exa. In both cases, throughout the entire history of the project (hundreds of changes over 4-5 years), Cargo only had to select the largest semver compatible version [1]. Again, I would love to find examples of projects where this strategy was not sufficient.
[1] There is one complication: in Cargo, a ^ constraint (the default kind) on a v0 dependency is only good up to the minor version. In other words, ^0.1.0 means >=0.1.0 and <0.2.0, whereas ^1.0.0 means >=1.0.0 and <2.0.0. Selecting the largest semver compatible version is meant in this way because of the community norms around breakage in v0. In an MVS world, any breaking change is a major version bump and would have the same properties, but with different version strings.
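A sketch of that caret rule as I read it (my own illustration, not Cargo's code): a ^ constraint is compatible up to the leftmost nonzero version component, so v0 crates get only minor-level compatibility.

```python
# Illustrative caret ("^") matching over (major, minor, patch) tuples,
# following the rule that compatibility extends to the leftmost nonzero
# component of the constraint. Prerelease tags are ignored for simplicity.

def caret_matches(constraint, candidate):
    if candidate < constraint:
        return False
    if constraint[0] != 0:
        return candidate[0] == constraint[0]    # ^1.2.3 -> >=1.2.3, <2.0.0
    if constraint[1] != 0:
        return candidate[:2] == constraint[:2]  # ^0.1.0 -> >=0.1.0, <0.2.0
    return candidate == constraint              # ^0.0.3 -> exactly 0.0.3
```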
https://en.wikipedia.org/wiki/ZYpp
I think using a SAT solver for package installation arose out of dealing with much more complex requirements than are likely to arise in a Go project. The Smart package installer used heuristics to find a solution depending upon the operation:
https://bazaar.launchpad.net/~smartpm/smart/trunk/view/head:...
Poetry, Cocoapods, and Dart are all apparently using this SAT solver:
https://github.com/dart-lang/pub/blob/master/doc/solver.md
FWIW, I wrote a package installer that worked with RPMs, Solaris packages and AIX packages about 15 years ago and ended up with a minimal version selection similar to vgo. I wasn't a genius or anything... I just wasn't aware of SAT solvers at the time and it was the simplest thing that worked.