cargo > Bazel > autotools > "the IDE" > handwritten Makefiles >>>>>> build.sh >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> CMake
It might be Stockholm syndrome, but I'd rank it worse than Meson and better than Makefiles. Autotools can go die in a fire.
Cargo is wonderful, but having something that user-friendly and well designed in C would feel incongruous with the rest of the ecosystem.
You really did say this.
Cargo does get some flak from the C and C++ build-system crowd. It is definitely not perfect, but which build system is?
The coolest thing about cargo is that it can be and is significantly improved over time. The whole dependency resolution was recently improved to resolve some long-standing issues, in a backward-compatible way, and it's astounding that they managed to ship this.
Do I like cargo? No. Do I like it better than CMake, Meson, Bazel, autotools, and Makefiles? Hell yeah, by far. I'd rather get shot than go back to CMake or autotools.
I'd use Bazel, but afaik it mostly uses a rebuild-the-world approach. I'm already getting flak from Linux distro maintainers because I use some vendored header-only libraries and not system ones, so I don't see how that is even supposed to work.
> I'm already getting flak from Linux distro maintainers because I use some vendored header-only libraries and not system ones, so I don't see how that is even supposed to work.
That's where you use autotools. It has been around long enough that every package manager knows how to deal with it. For most C/C++ programs, autotools is all you need. You should learn it because it's not going away.
"the IDE" (like VS project files) is definitely not cross-platform (although it can be), but not everything needs to be cross platform. For game devs building on DirectX, for example, it's completely pointless to support other platforms when the runtime depends on a single platform.
I don't think that I have ever been able to successfully compile a project that uses CMake. Its code is horrifying too; for example, cmake-3.21.0-rc3/Modules/CheckFunctionExists.c contains:
#ifdef CHECK_FUNCTION_EXISTS
# ifdef __cplusplus
extern "C"
# endif
char
CHECK_FUNCTION_EXISTS(void);
# ifdef __CLASSIC_C__
int main()
{
int ac;
char* av[];
# else
int main(int ac, char* av[])
{
# endif
CHECK_FUNCTION_EXISTS();
if (ac > 1000) {
return *av[0];
}
return 0;
}
#else /* CHECK_FUNCTION_EXISTS */
# error "CHECK_FUNCTION_EXISTS has to specify the function"
#endif /* CHECK_FUNCTION_EXISTS */

> I don't think that I have ever been able to successfully compile a project that uses CMake.
That's quite the statement. In practice, I've found `cmake -H. -Bbuild && cmake --build build`
to work about 90% of the time. Far more luck than I've had with autotools.
> Its code is horrifying too, for example:
1) I'm sure I could find some horrific code in Meson too if I went digging. 2) The alternative to this is you having to write something equivalent yourself, meaning that in my code I don't need to do stuff like [0] to detect features; my build system handles it for me. 3) CMake supports more platforms and targets than I've ever seen in my life, and likely supports more compilers than are necessary. That's a blessing and a curse, but it means that if I write a simple program to run on some crufty microcontroller with a bastardised gcc toolchain from the 90s, it's fairly likely that cmake supports it out of the box. Code like that is the price to pay for that level of support.
[0] https://github.com/boostorg/beast/blob/b7344b0d501f23f763a76...
Afaik Cargo does it out of the box, based on Cargo.lock.
UPDATE: This doc page seems to confirm that: <https://doc.rust-lang.org/cargo/guide/cargo-toml-vs-cargo-lo...>
Two reasons to avoid this:
- Performance, an item in the store for each object file seems very appealing but would be horrifically slow for anything large.
- Ironically, it would make it tricky to build a Nix package for the result, because recursive Nix is not a thing.
Once you stray out of that niche, everyone is better served by just automating all the boilerplate generation and compiler checks.
And that's what CMake does.
It's also dumbfounding how Bazel is ranked so high when it doesn't even support integrating system libraries as part of its happy path.
The main reason why CMake, with all its flaws, is the undisputed build system for C and C++ projects is the uncomfortable fact that all other alternatives are awful in comparison, even and especially in very basic happy-path scenarios: putting together a lib that any user can pick whether it's static or shared, and installing it in the system folder or anywhere else, without even bothering with which compiler toolchain you're using. In CMake, anyone can get a complete project up and running with less than a dozen lines of code, and that project will assuredly work on multiple OSes as-is.
You'd be hard pressed to find another C++ build system that comes close to doing the same.
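As a sketch of that happy path (project and file names are hypothetical), a complete installable library fits in a handful of lines, with the built-in BUILD_SHARED_LIBS switch deciding static vs. shared:

```cmake
cmake_minimum_required(VERSION 3.15)
project(mylib VERSION 1.0 LANGUAGES CXX)

# The user picks static vs. shared at configure time:
#   cmake -DBUILD_SHARED_LIBS=ON ...
add_library(mylib src/mylib.cpp)
target_include_directories(mylib PUBLIC include)

# Since CMake 3.14, install(TARGETS) uses sensible default destinations;
# the prefix is chosen with -DCMAKE_INSTALL_PREFIX=...
install(TARGETS mylib)
```

Configure with `cmake -S . -B build`, build with `cmake --build build`, and the same file works unchanged on Linux, macOS and Windows.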
Pieter Hintjens, http://hintjens.com/blog:79
I'll point out that the article was written "2430 days ago" and uses a CMakeLists.txt example that looks like it.
The article then goes on to announce Yet Another Build System that doesn't seem to have gotten any traction.
But you're right, it got almost zero traction outside of ZeroMQ.
It completely blows my mind that this demoware has made inroads anywhere.
One thing I *hate* with build systems is having to enumerate all of my files. I have a file system. It knows about the files. If I organize my code appropriately (say a lib, inc, and prog directory) the organization says how to build the source code.
I often make my build system find all the files in the directories, enumerate them, and use them as the build targets.
Sometimes that scheme bites me when projects get very large, since it can take a while to `find` all the files, but those projects suck to enumerate all the files in, too.
I'm not seeing a big downside, but maybe I'm missing something obvious.
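In CMake terms, the directory-scanning approach is a two-liner (library name is made up); the downside people usually cite is that, without CONFIGURE_DEPENDS, a newly added file isn't noticed until the next manual reconfigure:

```cmake
# Collect every .cpp under src/ at configure time.
# CONFIGURE_DEPENDS (CMake >= 3.12) makes the generated build re-check
# the glob on every build, at some cost in build-startup time.
file(GLOB_RECURSE LIB_SOURCES CONFIGURE_DEPENDS src/*.cpp)
add_library(mylib ${LIB_SOURCES})
```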
It's pretty powerful. It'd be pretty nice if it didn't have such a god awful DSL.
a) they have to do a very niche/specialized/extremely custom extension to CMake, or
b) they have no idea what they are doing.
More often than not, b) is the case.
How do you differentiate which headers should go where in your script?
(1) build automatically scans and adds all files in a directory
(2) I write a quick script foo.py in the directory to check something
(3) Boom, binary contains foo.py
I try to manually enumerate all files whenever I touch a build file.
It's the single biggest difference I've noticed. C++ is certainly more popular in Europe, but you see plenty of C projects in Europe and plenty of C++ projects in the US.
Or GUIs on medical devices using Qt?
Cause there are some, so I would expect US companies to also have a go at it.
It demos beautifully, but quickly becomes an outrageous collection of side quests to find the secret key.
What collection of hidden methods, global constants, environmental variables and insane incantations must I assemble to cross compile this software?
None. The answer is, None.
The best I was able to find was: get the whole artifice running on your actual workstation, then get it (and an entirely different toolchain, including IDEs?!) up and running on each target platform, dust off your sneakers to go sit in front of another computer, fire up an IDE and find its build button.
I know it's not, but CMake somehow manages to feel like a solution created by hand wringing, cat petting, volcano living, mustachioed, cigar-smoking proprietary OS and IDE vendors.
OTOH, zig cc leaves me with a single tear of joy and wonder sliding down my cheek like a framed Velvet Elvis.
Update: Also, premake isn't terrible.
Has anyone else found this to be the case? All the resources I've seen in the last five years or so have been pretty consistent about encouraging modern CMake style, which in my mind encompasses:
- declare targets and set properties
- generator expressions
- support the default workflow
- use find_package to import targets
I do see some misinformation from time to time about using commands like `include_directories` when `target_include_directories` is clearly the better style now, but I guess I don't consider "good style" and "modern CMake" to be the same thing anymore.
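A sketch of the target-based style (target and file names are hypothetical): properties declared on a target propagate to its consumers via PUBLIC/INTERFACE, which is exactly what makes `target_include_directories` preferable to the directory-scoped `include_directories`:

```cmake
add_library(geometry src/geometry.cpp)
# PUBLIC: used to build geometry AND exported to anything linking it
target_include_directories(geometry PUBLIC include)
target_compile_features(geometry PUBLIC cxx_std_17)

add_executable(app src/main.cpp)
# app inherits geometry's include dirs and C++17 requirement here
target_link_libraries(app PRIVATE geometry)
```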
I think most people who recommend alternatives completely underestimate the degree and extent of specific compiler, tool, library, IDE, and general native-build-environment knowledge that CMake has swallowed and incorporated over its years of existence, and continues to, including all the quirks and particularities of the endless number of components, software artifacts and tools it handles. This is the whole reason for its success, and the one thing no swift, elegant, new-paradigm player can surpass short-, mid- or, in some cases, even long-term.
Most of the time you can tell the real experience of someone judging CMake simply by the kind of troubles they have with it. Admittedly, the syntax is ugly and often inconsistent. Consider a list 'alist': depending on context you can or must reference it as either alist, ${alist} or "${alist}" - terrible, true. The most complex data type is the aforementioned list, often in a barely bearable nested variant. Math is cumbersome, there is no unicode support for string manipulations like positions or length calculations, and the list goes on seemingly ad infinitum.
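A small sketch of those three reference forms (contents are made up):

```cmake
set(alist a b c)

# Bare name: only valid where a variable NAME is expected, e.g. if(DEFINED ...)
if(DEFINED alist)
  # ${alist} expands to three separate arguments: a, b, c
  message(STATUS ${alist})     # message concatenates them: "abc"
  # "${alist}" expands to ONE argument with semicolons preserved
  message(STATUS "${alist}")   # prints "a;b;c"
endif()
```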
But you can learn this rather quickly, and when you write CMake code for some weeks you will become accustomed to it. In the meantime, other levels of annoyance begin to appear. For example, the allowed context of generator expressions is inconsistent, incomplete and sometimes - from a CMake writer's point of view - almost artificially limited. Take add_custom_command: it allows GEs in its DEPENDS and COMMAND sections - but not as an argument for OUTPUT. But wait, starting with CMake 3.20 it does, but:
" Arguments to OUTPUT may use a restricted set of generator expressions. Target-dependent expressions are not permitted. "
Unfortunately, those are often exactly what the developer is looking for. The reason here, as in many cases, is the deeply ingrained two-pass configure-then-generate nature of CMake.
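A sketch of the restriction quoted above (target name `app` is hypothetical):

```cmake
# Allowed since CMake 3.20: non-target generator expressions in OUTPUT
add_custom_command(
  OUTPUT "stamp-$<CONFIG>.txt"
  COMMAND ${CMAKE_COMMAND} -E touch "stamp-$<CONFIG>.txt")

# NOT allowed: target-dependent generator expressions in OUTPUT
# add_custom_command(
#   OUTPUT "$<TARGET_FILE_DIR:app>/stamp.txt"  # error: target-dependent
#   COMMAND ...)
```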
For any real project, state becomes quickly important. Diving through many levels of sub directories and maintaining/conveying information between the associated CMake projects becomes far from trivial in no time. And no, cache variables are not the solution.
Another issue is the interaction with higher languages in CMake code. Most people start quickly with execute_process(COMMAND ${MY_PYTHON} ...) in order to handle more challenging topics. The problem is the lack of a smooth way to communicate the results back, without workarounds like temp files or whatever else.
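For configure-time calls, the only first-class return channel is a captured string, which is why anything structured tends to fall back to temp files. A sketch (MY_PYTHON and the script name are hypothetical):

```cmake
# Run a helper script at configure time and capture its stdout.
execute_process(
  COMMAND ${MY_PYTHON} gen_config.py
  OUTPUT_VARIABLE GEN_OUTPUT
  RESULT_VARIABLE GEN_RESULT
  OUTPUT_STRIP_TRAILING_WHITESPACE)
if(NOT GEN_RESULT EQUAL 0)
  message(FATAL_ERROR "gen_config.py failed: ${GEN_OUTPUT}")
endif()
```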
Also, any dependency requirement not covered by the standard cases, whether it affects rebuilds, reconfigures or regenerations, requires deeper knowledge of CMake's actual dependency-resolution mechanisms. Often only inspecting its time-stamping bookkeeping or exploiting its trace / graphviz / file-api output will help here (neglecting source code inspection, which is rather rarely required).
In principle, the task requires a fully-fledged programming language - but one containing all the accumulated knowledge CMake has about its subjects.
Not an advertisement, but for any beginner I can only recommend Craig's book
https://crascit.com/professional-cmake/
It continuously integrates changes from new versions. He is also always helpful in CMake's own Discourse forum and GitLab issue tracker.
CMake has been around for a few decades, and perhaps a dozen alternative makefile generators and higher-level build systems already popped up, but still each and every single one of them failed miserably in gaining any traction.
Why is that, given that CMake is indeed far from perfect?
The truth of the matter is that CMake is, by far, the best build system/makefile generator for C and C++ projects that there is right now, and this has been the case for a couple of decades. Not only does it work reliably, but it is also extremely easy to set up and use in the happy path. With CMake, anyone can easily create a project that includes multiple libraries and executables that consume system libraries in a matter of minutes, right on their first try, as a "hello world" onboarding project, and that project will work on multiple platforms and on any CI/CD pipeline.
I would very much prefer to see a fraction of the energy wasted in hating CMake being channeled into making a cmake alternative. But for some reason, all we see is hate.
Why is that?
Say a group of smart engineers starts designing the perfect build system.
Usually there is some sort of design constraint added for correctness that works for, say, 99% of projects, which seems like a good tradeoff. Until it turns out that openssl is in that other 1%.
Or maybe the build system assumes everything can build from source, which maybe works for 90% of cases, forgetting that proprietary vendors often ship prebuilt binaries.
Or maybe the build system is written in <actual scripting language>, which means <actual scripting language>, written in C, now needs a different build system. Also, <legacy OS> doesn't have a compatible version of <actual scripting language> available.
Or maybe the build system requires a lot of boilerplate to support the typical project structure of <large organization>. Instead of having verbose build recipes in hundreds, thousands, or tens of thousands of projects, <large organization> just sticks with its legacy custom build scripts.
The fact of the matter is, CMake is pretty good. It lets you do basically anything you need to. It has basically no dependencies to build and use, so it works anywhere. And it has extensibility that's actually fairly rare in build systems.
Anyway, my theory is folks see the downsides of some tradeoffs CMake made (awkward basic DSL) without appreciating the upsides (available everywhere), especially because they don't realize "works on my project and box" doesn't cut it for maintaining projects in the C and C++ ecosystem. I'll be bold enough to predict that the build systems of Go and Rust will be just as complicated if they ever need to start supporting things like juggling BLAS versions.
But if you think cmake can easily consume libraries that work on multiple platforms, please help me use libpng in MozJPEG's cmake, because it's a fucking stupid nightmare:
-- Could NOT find ZLIB (missing: ZLIB_LIBRARY) (found version "1.2.8")

That likely means that the headers were found but not the .so. For instance, maybe you have a stale zlib.h in /usr/local, but not zlib-dev installed (thus no libz.so).
You can use cmake's --debug-find first to get more info, and if that's not enough, --trace / --trace-expand; for instance, the person who wrote the FindZLIB.cmake used in that case could have done something like:
include(CheckLibraryExists)
find_library(ZLIB_LIBRARY z)
if(ZLIB_LIBRARY)
  # e.g. verify the library actually exports a known symbol such as gzopen
  check_library_exists("${ZLIB_LIBRARY}" gzopen "" ZLIB_HAS_GZOPEN)
  if(NOT ZLIB_HAS_GZOPEN)
    unset(ZLIB_LIBRARY)
  endif()
endif()
which would result in the above error message. The main problem being that the person who wrote that script did not add a small log output to indicate why a given .so was not considered valid.