Using "go tool" forces you to have a bunch of dependencies in your go.mod that can conflict with your software's real dependency requirements, when there's zero reason those matter. You shouldn't have to care if one of your developer tools depends on a different version of a library than you.
It also means the tools themselves are run with dependency versions they weren't tested with.
If, for example, you used "shell.nix" or a Dockerfile with the tool built from source, the tool's dependencies would match its own go.mod.
Now they have to merge with your go.mod...
And then, of course, you _still_ need something like shell.nix or a flox environment (https://flox.dev/), since you need to control the version of Go and the versions of non-Go tools like "protoc". So most non-trivial repos already have a better solution for downloading and executing a known version of a program.
No, it doesn't. You can use "go tool -modfile=tools.mod mytool" if you want them separate.
However, having multiple tools share a single mod file still proves problematic occasionally, due to incompatible dependencies.
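For reference, the separate-modfile setup looks roughly like this (module path and tool name are illustrative):

```go
// tools.mod -- a hypothetical module file dedicated to tools,
// kept apart from the project's main go.mod
module example.com/myproject/tools

go 1.24

tool golang.org/x/tools/cmd/stringer
```

Then `go tool -modfile=tools.mod stringer` resolves the tool's dependencies against tools.mod instead of the main go.mod, and `go get -tool -modfile=tools.mod <package>` should add new tools to it.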
Until that time when they realize everyone else was right, and they add an oversimplified, bad solution to their "great" language.
Personally, I think the way PHP handles dependencies is vastly preferable to every other ecosystem I've developed in, for most types of development - but I know its somewhat inflexible approach would be a headache for some small percentage of developers too.
Heh, were the people who made 'go tool' the same people who made Maven? Would make sense :P
The `go tool` stuff has always seemed like a junior engineering hack — literally the wrong tool for the job, yeah sure it gets it working, but other than that it's gross.
I also never quite understood the euphoria.
I've done the blank import thing before, it was kinda awkward but not _that_ bad.
> user defined tools are compiled each time they are used
Why compile them each time they are used? Assuming you're compiling them from source, shouldn't they be compiled once, and then have the 'go tool' command reuse the same binaries? I don't see why it compiles them at the time you run the tool, rather than when you're installing dependencies. The benchmarks show a significant latency increase. The author also provided a different approach which doesn't seem to have any obvious downsides, besides not sharing dependency versions (which may or may not be a good thing - that's a separate discussion IMO).
Yeah I want none of that. I'll stick with my makefiles and a dedicated "internal/tools" module. Tools routinely force upgrades that break other things, and allowing that is a feature.
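A minimal sketch of that setup, assuming a dedicated `internal/tools` module with its own go.mod (tool names are illustrative):

```make
# Build pinned tools once into ./bin; the tool module's own go.mod in
# internal/tools controls its dependency versions, not the app's.
TOOLS_BIN := $(CURDIR)/bin

$(TOOLS_BIN)/stringer: internal/tools/go.mod internal/tools/go.sum
	cd internal/tools && go build -o $(TOOLS_BIN)/stringer golang.org/x/tools/cmd/stringer

generate: $(TOOLS_BIN)/stringer
	PATH=$(TOOLS_BIN):$$PATH go generate ./...
```

The binaries are rebuilt only when the tool module's go.mod or go.sum changes, so there's no per-run compile cost.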
You've got to keep the artifact isolated from the tools used to work with it.
It is bonkers to version the tools used to build a project mixed in with the artifact's own dependencies. Should we include the version of VSCode used to type the code? How about VSCode's transitive dependencies? How about the OS itself used to edit the files? How about the version of the LLM that generated some of this code? Where does this stop?
From my experience at Google I _know_ this is possible in a Megamonorepo. I have briefly fiddled with Bazel and it seems there's quite a barrier to entry, I dunno if that's just lack of experience but it didn't quite seem ready for small projects.
Maybe Nix is the solution but that has barrier to entry more at the human level - it just seems like a Way of Life that you have to dive all the way into.
Nonetheless, maybe I should try diving into one or both of those tools at some point.
The main argument for not overly genericizing things is that you can deliver a better user experience through domain-specific code.
For Bazel and buck2 specifically, they require a total commitment to it, which implies ongoing maintenance work. I also think the fact that they don't have open governance is a hindrance. Google's and Meta's internal monorepos make certain tradeoffs that don't quite work in a more distributed model.
Bazel is also in Java I believe, which is a bit unfortunate due to process startup times. On my machine, `time bazelisk --help` takes over 0.75 seconds to run, compared to `time go --help` which is 0.003 seconds and `time cargo --help` which is 0.02 seconds. (This doesn't apply to buck2, which is in Rust.)
Running bazel outside of a bazel workspace is not a major use-case that needs to be fixed.
Because being cross-language makes them inherit all of the complexity of the worst languages they support.
The infinite flexibility required to accommodate everyone keeps costing you at every step.
You need to learn a tool that is more powerful than your language requires, and pay the cost of more abstraction layers than you need.
Then you have to work with snowflake projects that are all different in arbitrary ways, because the everything-agnostic tool didn't impose any conventions or constraints.
The vague do-it-all build systems make everything more complicated than necessary. Their "simple" components are either mere execution primitives that make handling different platforms/versions/configurations your problem, or macros/magic/plugins that are a fractal of a build system written inside a build system, with more custom complexity underneath.
OTOH a language-specific build system knows exactly what that language needs, and doesn't need to support more. It can include specific solutions and workarounds for its target environments, out of the box, because it knows what it's building and what platforms it supports. It can use conventions and defaults of its language to do most things without configuration. General build tools need build scripts written, debugged, and tweaked endlessly.
A single-language build tool can support just one standard project structure and have all projects and dependencies follow it. That makes it easier to work on other projects, and easier to write tooling that works with all of them. All because a focused build system doesn't accommodate all the custom legacy projects of all languages.
You don't realize how much of a skill-and-effort black hole build scripts are until you use a language where a build command just builds it.
For vendored open source projects that build with random other tools (CMake, Nix, custom Makefile) it's a pain but the fact that it's generally possible to get them building with Blaze at all says something...
Yes, the monorepo makes all of this dramatically easier. I can consider "one-build-tool-to-rule-them-all isn't really practical outside of a monorepo" as a valid argument, although it remains to be proven. But "you fundamentally need a build tool per language" doesn't hold any water for me.
> That makes it easier to work on other projects, and easier to write tooling that works with all of them.
But... this is my whole point. Only if those projects are in the same language as yours! I can see how maybe that's valid in some domains where there's probably a lot of people who can just do almost everything on JS/TS, maybe Java has a similar domain. But for most of us switching between Go/Cargo/CMake etc is a huge pain.
Oh btw, there's also Meson. That's very cross-language while also seeming extremely simple to use. But it doesn't seem to deliver a very full-featured experience.
If your "one build system to rule them all" was built in, say, Ruby, the Python ecosystem won't want to use it. No Python evangelist wants to tell users that step 1 of getting up and running with Python is "Install Ruby".
So you tend to get a lot of wheel reinvention across ecosystems.
I don't necessarily think it's a bad thing. Yes, it's a lot of redundant work. But it's also an opportunity to shed historical baggage and learn from previous mistakes. Compare, for example, how beloved Rust's cargo ecosystem is compared to the ongoing mess that is package management in Python.
A fresh start can be valuable, and not having a monoculture can be helpful for rapid evolution.
True, but the Python community does seem to be coalescing around tools like UV and Ruff, written in Rust. Presumably that’s more acceptable because it’s a compiled language, so they tell users to “install UV” not “install Rust”.
It's been such a frustration for me that I started writing my own as a side project a year or two ago, based on using a standardized filesystem structure for packages instead of a manifest or configuration language. By leaning into the filesystem heavily, you can avoid a lot of language lock-in and complexity that comes with other tools. And with fingerprint-based addressing for packages and files, it's quite fast. Incremental rebuild checks for my projects with hundreds of packages take only 200-300ms on my low-end laptop with an Intel N200 and mid-tier SSD.
It's an early stage project and the documentation needs some work, but if you're interested: https://github.com/somesocks/dryad https://somesocks.github.io/dryad/
One other alternative I know of that's multi-language is Pants(https://www.pantsbuild.org/), which has support for packages in several languages, and an "ad-hoc" mode which lets you build packages with a custom tool if it isn't officially supported. They've added support for quite a few new tools/languages lately, and seem to be very much an active project.
I think you can fix these issues by using a package manager around Bazel. Conda is my preferred choice because it is in the top tier for adoption, cross platform support, and supported more locked down use cases like going through mirrors, not having root, not controlling file paths, etc. What Bazel gets from this is a generic solution for package management with better version solving for build rules, source dependencies and binary dependencies. By sourcing binary deps from conda forge, you get a midpoint between deep investment into Bazel and binaries with unknown provenance which allows you to incrementally move to source as appropriate.
Additional notes: some requirements limit utility and approach being partial support of a platform. If you require root on Linux, wsl on Windows, have frequent compilation breakage on darwin, or neglect Windows file paths, your cross platform support is partial in my book.
Use of Java for Bazel and Python for conda might be regrettable, but not bad enough to warrant moving down the list of adoption and in my experience there is vastly more Bazel out there than Buck or other competitors. Similarly, you want to see some adoption from Haskell, Rust, Julia, Golang, Python, C++, etc.
JavaScript is thorny. You really don't want to have to deal with multiple versions of the same library with compiled languages, but you have to with JavaScript. I haven't seen too much demand for JavaScript bindings to C++ wrappers around a Rust core that uses C core libraries, but I do see that for Python bindings.
Rust handles this fine by unifying up to semver compatibility -- diamond dependency hell is an artifact of the lack of namespacing in many older languages.
I've been working on an alternate build tool, Mill (https://www.mill-build.org), that tries to provide the 90% of Bazel that people need at 10% of the complexity cost. From a greenfield perspective it's a lot of work to catch up with Bazel's cross-language support and community. I think we can eventually get there, but it will be a long slog.
https://gist.github.com/terabyte/15a2d3d407285b8b5a0a7964dd6...
So I think a general solution would work better, and not be limited to Go. There are plenty of tools in this space to choose from: mise, devenv, Nix, Hermit, etc.
but are there really that many tools you need in a go project not written in go?
In Python’s uv, the pyproject.toml has separate sections for dev and prod dependencies. Then uv generates a single lock file where you can specify whether to install dev or prod deps.
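For comparison, a sketch of that split in a uv pyproject.toml (package names and versions are illustrative; recent uv records dev dependencies as PEP 735 dependency groups):

```toml
[project]
name = "myapp"
version = "0.1.0"
dependencies = ["requests>=2.31"]   # prod dependencies

[dependency-groups]
dev = ["ruff>=0.6", "pytest>=8"]    # dev-only dependencies
```

`uv sync` installs the dev group by default, while `uv sync --no-dev` installs only the prod dependencies, both resolved from the same uv.lock.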
But what happens if I run ‘go run’ or ‘go build’? Will the tools get into the final artifact?
I know Python still doesn’t solve the issue where a tool can depend on a different version of a library than the main project. But this approach in Go doesn’t seem to fix it either: a single go.mod means one selected version of each library for everyone. If your tool was only tested against an older release of a shared dependency, it now gets built against whatever newer version your project requires, and vice versa.
No. The binary size is related to the number of dependencies you use in each main package (and the dependencies they use, etc). It does not matter how many dependencies you have in your go.mod.
Plus, `go tool <tool-name>` is slower than `./bin/<tool-name>`. Not to mention, it doesn’t resolve the issue where tools might use a different version of a dependency than the app.
If you want, you can have multiple ".mod" files and pass "-modfile=dev-env.mod" every time you run the `go` binary with the "run" or "build" command. For example, you can take what @mseepgood mentioned:
> go tool -modfile=tools.mod mytool
Plus, recent versions of Go have workspaces [0][1]. That's yet another way to easily switch between environments or keep isolated modules in a monorepo.
1. It is a single tree.
2. BUT tools will not propagate downstream through the dependency tree, due to Go module pruning.
check this comment: https://github.com/golang/go/issues/48429#issuecomment-26184...
official docs: https://tip.golang.org/doc/modules/managing-dependencies#too...
> Due to module pruning, when you depend on a module that itself has a tool dependency, requirements that exist just to satisfy that tool dependency do not usually become requirements of your module.
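The workspace setup mentioned above can be sketched like this, assuming the app lives at the repo root and the tools get their own module under an illustrative `./internal/tools` path:

```go
// go.work -- workspace tying the app and an isolated tools module together,
// so each keeps its own go.mod and dependency versions
go 1.24

use (
	.
	./internal/tools
)
```

Commands run from the repo root see both modules, but neither module's go.mod picks up the other's requirements.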
I really dislike that now I'm going to have two problems, managing other tools installed through a makefile, e.g. lint, and managing tools "installed" through go.mod, e.g. mocks generators, stringify, etc.
I feel like this is a net negative on the ecosystem, again. Each release, the Go team adds something to manage and makes it harder to interact with other codebases. In this case, each company will have to decide whether to use "go tool" and when. Each time I clone an open source repo, I'll have to check how they manage their tools.
The old design gave people the option to use the `tools.go` approach, or some other approach, or nothing at all. Now they are enforcing this standard. Go looks to be moving into very restrictive territory.
What about surveying opposing views? What about the people who did not use `tools.go`?
What is going on with the Go team at Google?
Basically, `go tool` relies heavily on Go module pruning, so transitive dependencies from tools are not propagated downstream.
Also, the official docs say this: https://tip.golang.org/doc/modules/managing-dependencies#too...
> Due to module pruning, when you depend on a module that itself has a tool dependency, requirements that exist just to satisfy that tool dependency do not usually become requirements of your module.
after digging deeper, it is alright
Another great blog post [2] covers performance issues with go tool
```
//go:build tools

package tools

import (
	_ "github.com/xxx/yyy"
	// ...
)
```
Being able to run tools directly with `go generate` and `go run` [1] already works well enough, and frankly I don't see any benefits in this new approach compared to that.
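The `go generate` approach referred to above looks roughly like this (tool, version, and type names are illustrative):

```go
// Placing the directive in a source file lets `go generate ./...`
// invoke the tool at a pinned module version, without touching go.mod.
//go:generate go run golang.org/x/tools/cmd/stringer@v0.29.0 -type=Pill
package painkiller

type Pill int
```

Because `go run pkg@version` runs in its own module context, the tool's dependencies stay entirely out of the project's dependency graph.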
you can still leave comments in discussion issue: https://github.com/golang/go/issues/48429
When I use `go tool`, it uses whatever I have in go.mod; and, conversely, it will update my go.mod for no real reason.
But some people do this right now with tools.go, so... whatever, it's a better version of the tools.go pattern. And I can still do it my preferred way with `go install <tool>@version` in a makefile. So, eh.
Likewise, the standard library NodeJS http stack will not be as performant as a performance optimized alternative.
That said, if raw performance is your primary concern, neither Go nor NodeJS will be good enough. There's many more factors to consider.
> There's many more factors to consider.
Erlang / Elixir?
Go being garbage collected and still beating v8 is one hell of an achievement.
If you need faster Go Http you're using the wrong tool.
I'm not sure how GC is relevant here at all given that JS is also a garbage-collected language.
We went back and forth on this a lot, but it boiled down to wanting only one dependency graph per module instead of two. This simplifies things like security scanners and other workflows that analyze your dependencies.
A `// tool` comment would be a nice addition, it's probably not impossible to add, but the code is quite fiddly.
Luckily for library authors, although it does impact version selection for projects that use your module, those projects do not get `// indirect` lines in their go.mod, because those packages are not required when building their module.
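Concretely, Go 1.24 records tools directly in go.mod with a `tool` directive; a minimal sketch (module path and version are illustrative):

```go
module example.com/myapp

go 1.24

tool golang.org/x/tools/cmd/stringer

require golang.org/x/tools v0.29.0
```

`go get -tool <package>` adds the directive, and the tool's modules land in the same require graph as everything else, which is the single-graph behavior described above.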
I'm not a library author and I try to be careful about what dependencies I introduce to my projects (including indirect dependencies). On one project, switching to `go tool` makes my go.mod go from 93 lines to 247 (excluding the tools themselves) - this makes it infeasible to manually review.
If I'm only using a single feature of a multi-purpose tool for example, does it matter to me that some unrelated dependency of theirs has a security issue?
Did you consider having `tool` be an alias for `indirect`? That would have kept a single dependency graph per module, while still letting one read one's go.mod by hand, rather than running `go mod` commands, to know where each dependency came from and why.
I know, a random drive-by forum post is not the same as a technical design …
I don't understand your comment; you should know that copying proven features is a good thing.
* not using posix args
* obscure incantations to run tests
* go.mod tooling is completely non-deterministic and hard to use; they should have just left the old-style vendor/ alone (which worked perfectly) and wrapped a git-submodules front-end on top for everyone who was afraid of submodules. Instead they reinvented this arcane new ecosystem.
If you want to rewrite the golang tooling, I'll consult on this for free.
Everyone downvoting obviously doesn't understand the problem.
That said, if it helps people do “their thing” in what they believe is an easier (more straightforward) way, then I welcome the new changes.
It did, but if you recall it came with a lot of "We have no idea why you need this" from Pike and friends. Which, of course, makes sense when you remember that they don't use the go toolchain inside Google. They use Google's toolchain, which already supports things like code generation and build dependency management in a far more elegant way. Had Go not transitioned to a community project, I expect we would have seen the same "We have no idea why you need this" from the Go project as that is another thing already handled by Google's tooling.
The parent's experience comes from similar sized companies as Google who have similar kinds of tooling as Google. His question comes not from a "why would you need this kind of feature?" in concept, but more of a "why would you not use the tooling you already have?" angle. And, to be fair, none of this is needed where you have better tooling, but the better tooling we know tends to require entire teams to maintain it, which is unrealistic for individuals to small organizations. So, this is a pretty good half-measure to allow the rest of us to play the same game in a smaller way.
The digraph problem of build tooling is hardly new, though the ability to checksum all of your build tools, executables, and intermediate outputs to assure consistency has only recently become feasible. Bazel is a heavy instrument, and making it work as well as it does was a hard problem even for Google. I don't know anyone making the same investment, and doubt it makes the slightest hint of sense for anyone outside the Fortune 500.