But Musl is only available on Linux, isn't it? Cosmopolitan (https://github.com/jart/cosmopolitan) goes further and is also available on Mac and Windows, and it uses e.g. SIMD and other performance-related improvements. Unfortunately, one has to cut through the marketing "magic" to find the main engineering value; stripping away the "polyglot" shell-script hacks and the "Actually Portable Executable" container (which are undoubtedly innovative), the core value proposition of Cosmopolitan is indeed a platform-agnostic, statically-linked C standard (plus some POSIX) library that performs runtime system call translation, so to speak "the Musl we have been waiting for".
Really, what would the world look like if this problem had been properly solved? Would the centralization and monetization of the Internet have followed the same path? Would Windows be so dominant? Would social media have evolved to the current status? Would we have had a chance to fight against the technofeudalism we're headed for?
The trick? It's not statically linked, but dynamically linked. And it doesn't link against anything other than glibc, X11 ... and bdb.
At this point I think people just do not know how binary compatibility works at all. Or they refer to a different problem that I am not familiar with.
tl;dw Google recognizes the need for a statically-linked modular latency sensitive portable POSIX runtime, and they are building it.
I don't want Lua. Using Lua is crazy clever, but it's not what I want.
I should just vibe code the dang thing.
I have a devcontainer running the Cosmopolitan toolchain and stuck the cosmocc README.md in a file referenced from my AGENTS.md.
Claude does a decent job. You have to stay on top of it when it's writing C; it's easy for the code to turn to spaghetti.
Also the fat binary concept trips up agents - just have it read the actual cosmocc file itself to figure any issues out.
The things I know of and can think of off the top of my head are:
1. appimage https://appimage.org/
2. nix-bundle https://github.com/nix-community/nix-bundle
3. guix via guix pack
4. A small collection of random small projects hardly anyone uses for docker to do this (i.e. https://github.com/NilsIrl/dockerc )
5. A docker image (a package that runs everywhere, assuming a docker runtime is available)
6. https://en.wikipedia.org/wiki/Snap_(software)
AppImage is the closest to what you want I think.
A "works in most cases" build should also be available for those who would benefit from it. And if you can, why not provide specialized packages for the edge cases?
Of course, don't take my advice as-is, you should always thoroughly benchmark your software on real systems and choose the tradeoffs you're willing to make.
I wonder though, if I package say a .so file from NVIDIA, is that allowed by the license?
You can change the rpath though, which is sort of like an LD_LIBRARY_PATH baked into the object, which makes it relatively easy to bundle everything but libc with your binary.
edit: Mild correction, there is this: https://sourceforge.net/projects/statifier/ But the way this works is that it has the dynamic linker load everything (without ASLR / in a compact layout, presumably) and then dumps an image of the process. Everything else is just increasingly fancy ways of copying shared objects around and making ld.so prefer the bundled libraries.
It works surprisingly well but their pricing is hidden and last time I contacted them as a student it was upwards of $350/year
https://appimage.github.io/appimagetool/
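A minimal sketch of building one (`myapp` is a placeholder; appimagetool also expects a `.desktop` file and an icon inside the AppDir):

```shell
# Stage the application in an AppDir; AppRun is the entry point that
# the AppImage runtime executes after mounting the image.
mkdir -p MyApp.AppDir/usr/bin
cp myapp MyApp.AppDir/usr/bin/

cat > MyApp.AppDir/AppRun <<'EOF'
#!/bin/sh
exec "$(dirname "$0")/usr/bin/myapp" "$@"
EOF
chmod +x MyApp.AppDir/AppRun

# Pack the AppDir into a single-file AppImage.
appimagetool MyApp.AppDir MyApp.AppImage
```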
Myself, I've committed to using Lua for all my cross-platform development needs, and in that regard I find luastatic very, very useful ..
But you can't take .so files and make one "static" binary out of them.
Yes you can!
This is more-or-less what unexec does
- https://news.ycombinator.com/item?id=21394916
For some reason nobody seems to like this sorcery, probably because it combines the worst of all worlds.
But there's almost[1] nothing special about what the dynamic linker is doing to get those .so files into memory that it can't arrange them in one big file ahead of time!
[1]: ASLR would be one of those things...
mkdir chroot
cd chroot
for lib in $(ldd ${executable} | grep -oE '/\S+'); do
  tgt="$(dirname ${lib})"
  mkdir -p .${tgt}
  cp ${lib} .${tgt}
done
mkdir -p .$(dirname ${executable})
cp ${executable} .${executable}
tar czf ../chroot-run-anywhere.tgz .
E.g. your app might just depend on libqt5gui.so, but that libqt5gui.so might depend on some libxml etc...
Not to mention all the files from /usr/share etc. that your application might indirectly depend on.
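To actually use a tarball produced by the script above, something like this (needs root; `/usr/bin/myapp` stands in for the bundled executable's original path):

```shell
mkdir /tmp/jail
tar xzf chroot-run-anywhere.tgz -C /tmp/jail   # use xf if it wasn't gzipped
# Inside the jail, ld.so and every library ldd reported sit at the same
# absolute paths they had on the build machine.
sudo chroot /tmp/jail /usr/bin/myapp
```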
https://github.com/sigurd-dev/mkblob https://github.com/tweag/clodl
Even worse is containers, which have the disadvantages of both.
In practice, a statically linked system is often smaller than a meticulously dynamically linked one - while there are many copies of common routines, programs only contain tightly packed, specifically optimized and sometimes inlined versions of the symbols they use. The space and performance gain per program is quite significant.
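A quick way to see the per-binary numbers yourself (assumes gcc and a static libc are installed; sizes vary a lot by distro and toolchain):

```shell
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { puts("hello"); return 0; }
EOF
gcc -O2 hello.c -o hello-dyn
gcc -O2 -static hello.c -o hello-static  # fails if no static libc is installed
ls -l hello-dyn hello-static
ldd hello-dyn       # lists libc and friends
ldd hello-static    # glibc's ldd reports "not a dynamic executable"
```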
Modern apps and containers are another issue entirely - linking doesn't help if your issue is gigabytes of graphical assets or using a container base image that includes the entire world.
When dynamically linking against shared OS libraries, updates are far quicker and easier.
And as for the size advantage, just look at a typical Golang or Haskell program. Statically linked, two-digit megabytes, larger than my libc...
No idea why glibc can't provide API+ABI stability, but on Linux it always comes down to glibc-related "DLL hell" problems (e.g. not being able to run an executable that was created on a more recent Linux system on an older Linux system, even when the program doesn't access any new glibc entry points - the usually advised solution is to link against an older glibc version, but that's also not trivial, unless you use the Zig toolchain).
TL;DR: It's not static vs dynamic linking, just glibc being an exceptionally shitty solution as an operating system interface.
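To see which glibc a binary actually requires, and one way to target an older one, a sketch (`myapp`/`main.c` are placeholders; the second command assumes the Zig toolchain is installed):

```shell
# Every versioned symbol the binary imports carries a GLIBC_x.y tag;
# the newest tag is the minimum glibc the target system must have.
objdump -T myapp | grep -oE 'GLIBC_[0-9]+\.[0-9.]+' | sort -Vu | tail -1

# zig cc can target a specific (older) glibc directly via the triple:
zig cc -target x86_64-linux-gnu.2.17 -O2 -o myapp main.c
```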
LTO is really a different thing, where you recompile when you link. You could technically do that as part of the dynamic linker too, but I don't think anyone is doing it.
There is a surprisingly high number of software development houses that don't (or can't) use LTO, either because of secrecy, scalability issues or simply not having good enough build processes to ensure they don't breach the ODR.
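For reference, enabling LTO is just a matter of passing -flto at both compile and link time (gcc shown; clang is analogous):

```shell
gcc -O2 -flto -c a.c
gcc -O2 -flto -c b.c
# Codegen happens at this step, with visibility across all translation
# units; this is also where mismatched duplicate definitions surface.
gcc -O2 -flto a.o b.o -o app
```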
In the era of containers, I do not understand why this is "Not trivial". I could do it with even a chroot.
I lose control of the execution state. I have to follow the calling conventions which let my flags get clobbered.
To forego all of the above including link time optimization for the benefit of what exactly?
Imagine developing a C program where every object file produced during compilation was dynamically linked. It's obvious why that is a stupid idea - why does it become less stupid when dealing with a separate library?
made hooking into game code much easier than before
if you configure binfmt_misc
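From the Cosmopolitan README (reproduced from memory, so double-check the magic string there), the binfmt_misc registration looks roughly like:

```shell
# Teach the kernel to hand APE binaries to the ape loader.
# One-shot; lost on reboot unless made persistent.
sudo sh -c "echo ':APE:M::MZqFpD::/usr/bin/ape:' >/proc/sys/fs/binfmt_misc/register"
```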
>Windows
if you disable Windows Defender
>OpenBSD
only older versions
Gave up on them afterwards. If I need to tweak dependencies I might as well deal with the package manager of my distro.
Here is an idea, lets go back to pure UNIX distros using static binaries with OS IPC for any kind of application dynamism, I bet it will work out great, after all it did for several years.
Got to put that RAM to use.
Without dlopen (with regular dynamic linking), it's much harder to compile for older distros, and I doubt you can easily implement glibc/musl cross-compatibility at all in general.
Take a look what Valve does in a Steam Runtime:
- https://gitlab.steamos.cloud/steamrt/steam-runtime-tools/-/blob/main/docs/pressure-vessel.md
- https://gitlab.steamos.cloud/steamrt/steam-runtime-tools/-/blob/main/subprojects/libcapsule/doc/Capsules.txt
How do I do that? Is there a documented configuration of musl's allocator?
I eventually decided to keep the tiny musl app and make a companion app in a secondary process as needed (since the entire point of me compiling musl was cross platform linux compatibility/stability)
Some might appreciate a concrete instance of this advice inline here. For `foo.nim`, you can just add a `foo.nim.cfg`:
@if gcc:
  gcc.exe = "musl-gcc"
  gcc.linkerexe = "musl-gcc"
  passL = "-static -s"
@end
There is also a "NimScript" syntax you could use in a `foo.nims`:
when defined(gcc):  # nim.cfg runs faster than NimScript
  switch "gcc.exe", "musl-gcc"
  switch "gcc.linkerexe", "musl-gcc"
  switch "passL", "-static -s"
The documentation for making a static binary with glibc is sparse for a reason: they don't like static binaries.
Honestly, it was the kind of bug that is not fun to fix, because it's really about dependencies, and not some fun code issue. There is no point in making our lives harder with this just so proprietary software can run on our platform.
Binary compatibility solutions mostly target cases where rebuilding isn't possible, typically closed source software. Freezing and bundling software dependencies ultimately creates dependency hell rather than avoiding it.
Adobe stuff is of the kind that you'd prefer not exist at all rather than have it fixed (and today you can largely pretend it never existed), and the situation for games has been pretty much fixed by Steam runtimes.
It's fine that some people care about it and some solutions are really clever, but it just doesn't seem to be an actual issue you stumble on in practice much.
Basically the way for the year of the Linux desktop is to become Windows.