I'd love to have cacheable builds, but I wonder how much effort it takes to maintain such a setup - especially in a small team very far away from Silicon Valley or Google, where nobody has seen Bazel before. It would have been a perfect fit for my former company, but even there I couldn't convince my colleagues - they built a monster using Maven which would have been more elegant in Bazel, but Bazel's Maven integration worked differently than Maven's own resolution algorithm, at least 2 years ago. So my PoC turned into reading the source code of the dependency resolution algorithm in the Scala tool it used, coursier. In the end I had to give up after patching Maven issues in several projects we built upon - something nobody else is willing to do, I guess. To be fair, I was trying to incorporate Alfresco ECM, which is quite an enterprise Java monster on its own.
Also, I couldn't find a good approach to using multiple git repositories - almost all tutorials use a monorepo.
Tools like Bazel or Nix or <whatever you enjoy> are incredible for many reasons but they are, to a first approximation, very, very high-effort and complex tools that require care. It's like a racing car. You need an engineer on hand to keep the car running, and someone to drive it too. Maybe you're both of those people, but only a driver isn't enough.
They also suffer from another problem which is that most people don't care as much as you or I do. :) So to make it a slam dunk, you have to make a pretty clear case that it's not "just" 2x better, but actually 10x better. And it's actually even harder than that: it can't just be 10x better in practice, it has to appear 10x better at a glance. Like, at the window shopping phase, it needs to catch their eye. That's difficult to nail and basically impossible to do perfectly.
Personally? I'm all-in on "wholesale" build tools like this that hit many nails with one hammer. They solve a tremendous number of issues. But they aren't always a good fit, and I can really see why they fail to gain traction with many teams: they simply aren't necessary for them, even if they're really a lot better in many ways. I get it.
Exactly that - for one, it was a plugin for Alfresco ECM that disobeyed every Alfresco SDK rule :), then a custom Angular frontend that needed to be built according to the plugin. Furthermore, the business was such that every customer had a specialized build with customizations - Bazel would have solved that in a very neat way, including overriding a few source files for a specific customer.
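For the record, the per-customer source override mentioned above can be sketched with Bazel's `select()`. Everything here is hypothetical - the `customer` define, the `config_setting` name, and the directory layout are made up for illustration:

```python
# Hypothetical per-customer build flavor, chosen via --define=customer=acme.
config_setting(
    name = "customer_acme",
    values = {"define": "customer=acme"},
)

java_library(
    name = "plugin",
    # Common sources plus a per-customer override directory.
    srcs = glob(["src/common/**/*.java"]) + select({
        ":customer_acme": glob(["src/acme/**/*.java"]),
        "//conditions:default": glob(["src/default/**/*.java"]),
    }),
)
```

A customer-specific build would then be something like `bazel build --define=customer=acme //:plugin`, with the default sources used otherwise.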
We then tried to build Docker images with Maven, which was just a shitshow considering our problems. I'm not there anymore, but I'd like to have the exact setup you describe - the big hammer so I don't have to deal with all the stupid things - especially as my background is sysadmin, and every problem I don't have to solve anymore is a win for me :)
I find it interesting that Bazel and Nix don't seem to complement each other as much as I'd think given that they're both 'high effort, high reward tools which deal with hermetic building'.
They certainly both suffer from issues related to upstream packages doing their own thing.
> They also suffer from another problem which is that most people don't care as much as you or I do. :) ...
I use Nix. I haven't used Bazel.
I think Nix is wonderful for enthusiasts. It takes some time to learn. However, Nix is excellent at dealing with packages. And developers often do things which involve packages. -- e.g. Nix is good for declaring a shell with programs that can be made available.
If you have a small team and repo it's probably not worth the effort.
When everything works it's amazing. When it doesn't, or additional work is required, you hope your local expert is online and willing to sort it out.
Otherwise: you can pull in other repos as external repositories defined in Bazel files. I forget the details, but I believe it's something like https://bazel.build/extending/repo . The troubleshooting-while-developing experience is not great though, imo - I'd try submodules first.
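In the WORKSPACE-era setup, this is the `git_repository` rule. A minimal sketch - the repo name, URL, and commit below are placeholders, not real values:

```python
# WORKSPACE
load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")

# Hypothetical external repo; remote and commit are made up.
git_repository(
    name = "my_other_repo",
    remote = "https://example.com/org/my-other-repo.git",
    commit = "abc123...",  # pin an exact commit for reproducibility
)
```

Targets in that repo are then addressable as `@my_other_repo//some/pkg:target` from your BUILD files.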
I find it hard to buy into one of these "mega build system" ecosystems because there are so many competing tools that descend from Blaze: Bazel, Pants, Buck, Buck2, etc. The migration path to these systems takes substantial effort, so I wouldn't want to bet on the wrong one and have to do it all again a few years later. I'm content to wait patiently for Bazel or Buck2 to clearly win before I invest.
The Bazel docs are admittedly a bit challenging, but the easiest way to learn things when you're first getting started is to just look at other projects using Bazel and see how they do things.
Using multiple git repositories is easy, you just use the git_repository() rule which has been in Bazel since pretty much forever. Nowadays you can also use bzlmod (introduced in bazel 5.x) which is a slightly different approach to this problem. There are a bunch of rules in https://github.com/bazelbuild/bazel-central-registry which you can look at to see how to write your own bzlmod registries.
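With bzlmod, the equivalent lives in a MODULE.bazel file. A hedged sketch - the module name and version below are illustrative, so check the registry for real entries:

```python
# MODULE.bazel
module(name = "my_project", version = "0.1.0")

# Pull a dependency from the Bazel Central Registry
# (the version number here is illustrative).
bazel_dep(name = "rules_cc", version = "0.0.9")
```

Dependencies resolved this way come from a registry (by default the Bazel Central Registry linked above) rather than from ad-hoc WORKSPACE declarations.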
Basically, I wanted to conditionally use sanitizers for a specific target and propagate the sanitizer compiler/linker flags to the whole dependency tree. For code you own, from what I could find, you need to duplicate each target and each dependency (and their own dependencies) for each possible sanitizer and then select the right one. And for external dependencies, you need to use "aspects", if I understood correctly, to modify the build dependency graph and inject your flags. There's a question on Stack Overflow about it, but the answer is very high level (presumably from a Bazel dev), and unfortunately I didn't have time to dig into it further and just gave up.
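For the simpler global case - applying sanitizers to the whole build rather than true per-target propagation - the common workaround is a named `--config` in .bazelrc. A sketch, assuming a C/C++ toolchain that accepts the usual GCC/Clang sanitizer flags:

```
# .bazelrc
build:asan --copt=-fsanitize=address
build:asan --linkopt=-fsanitize=address
build:asan --copt=-g

build:ubsan --copt=-fsanitize=undefined
build:ubsan --linkopt=-fsanitize=undefined
```

Then `bazel build --config=asan //...` rebuilds everything with the flags applied. This sidesteps the per-target duplication described above, but at the cost of sanitizing the entire tree, which is exactly the limitation being complained about.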
I also had a lot of issues on an ARM Mac, including the linker issue that is apparently fixed (hooray) and recurrent crashes.
Given that we don't use any of Bazel's best features (remote/distributed compilation, multiple languages, and caching - because we basically spawn a new Docker container for each build), I think CMake would have made more sense.
This is a huge change as external dependencies used to be one of the big pain points with the Bazel pipeline.
1. Does it do linking to a mix of static and dynamic libraries yet?
2. Anyone try buck2, sort of bzl-like?
- Link everything static.
- Link everything dynamic.
- Link user libs as static and system libs as dynamic.
There is no easy way to link a single user lib static/dynamic without resorting to hacks/workarounds like re-importing the shared library or defining weird intermediate targets. It's completely broken.
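For reference, the "re-importing the shared library" workaround mentioned above looks roughly like this - the library names and file paths are made up, and the prebuilt `.so` has to come from somewhere (e.g. a separate `cc_binary` with `linkshared = True`):

```python
# The library as normally built (would default to static linking).
cc_library(
    name = "foo",
    srcs = ["foo.cc"],
    hdrs = ["foo.h"],
)

# Workaround: re-import a prebuilt shared object so that this one
# library is linked dynamically while everything else stays static.
cc_import(
    name = "foo_shared",
    hdrs = ["foo.h"],
    shared_library = "libfoo.so",
)
```

Consumers then depend on `:foo_shared` instead of `:foo`, which is exactly the kind of intermediate-target hack the parent is complaining about.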
I think what I struggle with the most is the opinions of the rules / plugins. Either I use it as a core task runner (easy, but lots of config the team has to maintain, setting up script entries in packages, etc.), or I have to write custom plugins for anything not on the "happy path".
I think that's my general frustration with these tools: they aren't terribly well built for SDK-type consumption. They don't expose a core set of APIs directly, instead relying on a plugin system that can sometimes be very opaque.
> One of Bazel's most powerful features is its ability to use remote caching and remote execution.
What's the value proposition of Bazel when build systems like cmake support this out of the box with third-party tools like distcc and ccache?
I think this is unfortunately a branding problem on Bazel's part. Remote caching and remote execution are not the best features Bazel has to offer.
First to answer your question: distcc and ccache are great but they offer a limited amount of flexibility in what can be distributed. Bazel's approach is generalized and operates on this basic set of steps:
1. Configure a "target" (define inputs and outputs).
2. Load the deps for the target into a sandbox.
3. Run a command in that sandbox.
4. Copy the outputs out of the sandbox.
With this approach you can execute anything, as it's just running a shell program with some args. For example: let's say you are building a game and you have a process where you take raw models from your designers and compress them into an efficient storage format. This can be a `genrule` in your BUILD file that instantly gets rerun whenever anyone submits a new change to the models. Once it's been run in CI, none of your devs will need to rebuild it on their machines. That's the goal. Basically: it is more generic than caching a specific language's data. You can cache everything.
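A hedged sketch of what that model-compression genrule might look like - `//tools:compress_models`, the file patterns, and the output name are all made up:

```python
genrule(
    name = "compressed_models",
    srcs = glob(["models/*.obj"]),
    outs = ["models.pak"],
    # $(SRCS) expands to the input files, $@ to the single output file,
    # $(location ...) to the path of the tool binary.
    cmd = "$(location //tools:compress_models) $(SRCS) -o $@",
    tools = ["//tools:compress_models"],
)
```

Any target that depends on `:compressed_models` then picks up the cached `models.pak` instead of rerunning the compression step.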
My favorite part of Bazel is the layer of abstraction. Rather than thinking of build steps, what args to gcc are getting passed, etc you are thinking about libraries, binaries, tests, and other artifacts.
Also, this abstraction of inputs and outputs makes it so it's easy to eventually build higher level abstractions. For example, this should be possible at some point:
cc_library(
    name = "foobar",
    srcs = [...],
    hdrs = ["foobar.h"],
)
# Automatically generates jni binding
# for a C++ library just by reading the
# header file and binding all types.
java_cc_binding(
name = "foobar_java",
deps = [":foobar"],
namespace = "::foobar",
generate_class = "foobar.FoobarBinding",
)
This doesn't yet exist, but it could, and it would be pretty amazing. Or, another example: imagine AWS publishing a `lambda_binary` rule to which you could pass any `*_binary`, and which would automatically generate a packaged, ready-to-go artifact you could upload to AWS. I think the framework that lets others build on this abstraction and make things take less effort overall is the huge value add.
I agree that the linked issue is legitimate, but I'd argue that this isn't a problem Bazel itself needs to solve--you should fix your build to be fully hermetic.