I have been trying to get my docker to be more distributable. Right now it's just a simple python script in a python env inside a docker container inside a QEMU container to automate a click and then netcat something.
Pretty sweet. It's only like 20GB, so pretty lightweight by modern standards.
Awesome project though.
I suppose dockerc could be convenient when you already have a Docker image on hand, especially if it was a PITA to create or its creation is a lost art.
Besides fat executables, `nix bundle` also lets you create Docker images, AppImages, and images/executables in a few other formats.
--
1: https://github.com/matthewbauer/nix-bundle
2: https://nixos.org/manual/nix/unstable/command-ref/new-cli/ni...
(^I'm a collaborator on awesomecv, a LaTeX CV/resume template, and we get a lot of PRs from people trying to edit their own information and experience into the example. I keep meaning to set up a template repo showing how to use it properly as a LaTeX class, basically the way I use it myself, maybe with rendering in Actions, and to point people towards that more strongly in the readme, maybe even remove the example. But I've been saying that for years, since before 'template repo' existed, in fact.)
The executables bundle crun (a container runtime)[0], and a fuse implementation of squashfs and overlayfs. Appended to that is a squashfs of the image.
At runtime the squashfs and overlayfs are mounted and the container is started.
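The layout described above (a runtime binary with a squashfs image appended to it) can be sketched like this. This is a hypothetical illustration, not dockerc's actual code; a real tool would more likely record the image offset explicitly than scan for it:

```python
# Illustrative sketch: locate where a squashfs image appended to an
# executable begins by scanning for the squashfs superblock magic.
# (Hypothetical; dockerc itself is written in Zig and likely stores
# the offset rather than searching for magic bytes.)
def find_squashfs_offset(path: str) -> int:
    """Return the byte offset of the first squashfs superblock, or -1."""
    magic = b"hsqs"  # little-endian squashfs magic (0x73717368)
    with open(path, "rb") as f:
        data = f.read()
    return data.find(magic)
```

A runtime could then hand that offset to a FUSE squashfs implementation to mount the image without ever copying it out of the executable.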
dockerc works by using: Zig + crun + squashfs/overlayfs. Nils (the author) posted a bit more in this thread for anyone interested: https://news.ycombinator.com/item?id=39621573
Could also be a skill issue.
https://github.com/ericbeland/ruby-packer
https://github.com/YOU54F/traveling-ruby
ruby-packer is what I use to distribute a paid CLI, although on macOS only because my product is specifically for customers on macOS.
The advantage of ruby-packer is that it is much simpler, but you need to have access to each OS where you want to distribute your executable. OTOH, with traveling-ruby, you can build executables for all OSes from the same machine.
At this point you might as well compile statically and include a virtual filesystem. Pretty much what Sun created in the 90s..
Next rant pic- When I RUN a F*ING EXE it should open a Window with the application in it you smelly nerds!!!!
We can't be doing this "GIVE ME THE .EXE" anymore. Those days are gone.
Even building from source is questionable, but at least there it requires that the installer be part of (or at least adjacent to) a community of developers who can be expected to have the expertise to notice and recognize bad actors pushing code.
We can and should still do that for people who want it (i.e. most people). The security conscious can decline to use them, but that doesn't mean the rest of us should have a worse experience.
Perhaps some barriers to use are good sometimes.
As a solution to the problem of app distribution, I do have some concerns, though:
How do you deal with resource sharing? This starts with just filesystem mounts, but also concerns ports, possibly devices, and probably many other things I'm forgetting. Is this somehow configurable?
How does this compare to AppImage? IIRC that also puts everything into a squashfs.
If a user without CAP_SYS_USER_NS executes one of the binaries built by dockerc, do you handle that gracefully in any way?
I'm not too sure what resources you're talking about in general. Mounts are in a temporary location so they shouldn't conflict; each container uses two mounts while it is running. In terms of ports, you won't be able to have multiple applications using the same port (whether they are built with dockerc or not). As for devices, I don't think there are any issues there.
> How does this compare to AppImage? IIRC that also puts everything into a squashfs.
It's very similar to AppImage in spirit. I haven't looked at the AppImage implementation but I suspect a lot of things are similar.
The difference with AppImage is that this makes it trivial to convert existing docker images into something that can run as an executable. It also offers stronger hermeticity guarantees as the application runs inside of a container.
> If a user without CAP_SYS_USER_NS executes one of the binaries built by dockerc, do you handle that gracefully in any way?
It's not something I've paid much attention to. This falls back to the container runtime, which currently outputs "clone: Operation not permitted" when run after `sudo sysctl -w kernel.unprivileged_userns_clone=0`.
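A hedged sketch of how a runtime could detect this situation up front and fail with a friendlier message than the raw clone error. Note the sysctl file checked here is Debian/Ubuntu-specific and won't exist on every distro, which is why the function distinguishes "disabled" from "unknown":

```python
# Hypothetical check (not part of dockerc): detect whether unprivileged
# user namespaces are disabled via the Debian/Ubuntu-specific sysctl.
from pathlib import Path
from typing import Optional

def unprivileged_userns_allowed(
    sysctl_path: str = "/proc/sys/kernel/unprivileged_userns_clone",
) -> Optional[bool]:
    """True/False if the sysctl file exists, None if it doesn't."""
    p = Path(sysctl_path)
    if not p.exists():
        return None
    return p.read_text().strip() == "1"
```

If this returned False, the executable could print a pointer to the sysctl instead of surfacing "clone: Operation not permitted".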
Cosmopolitan libc allows compiling of a single binary that runs on multiple OS platforms without modification – maybe Dockerc could use this to create a more universally portable container binary?
WASM binaries could run in the browser, or another WASM VM, including securely sandboxed environments that spin up fast.
I understand that newer Docker builds can run WASM under the hood, so WASM in WASM would be funny. If that were the case, maybe extracting the WASM and using a thinner wrapper would be better?
As for WASM, this is already possible using container2wasm[0] and wasmer[1]'s ability to generate static binaries.
[0]: https://github.com/ktock/container2wasm
[1]: https://wasmer.io/
Edit: This was already answered here https://news.ycombinator.com/item?id=39622184
-----
$ ./dockerc --image docker://oven/bun --output bun
FATA[0000] Error loading trust policy: open /etc/containers/policy.json: no such file or directory
⨯ open CAS: validate: read oci-layout: invalid image detected
Cannot stat source directory "/tmp/dockerc-fbObho/bundle" because No such file or directory
error: FileNotFound
-----
Btw does this also solve the last line in the original user's complaints?
An alternative is to extract the image to disk but that has quite a bit of overhead.
Currently using docker as a way to easily distribute and run an open project[1], this would be great to use on top of docker
Will this run ok on a Mac? (I see the feature is pending, any tests done yet?)
> Will this run ok on a Mac?
I've managed to make it work but unfortunately not in a way that produces portable binaries. I just need to figure out how to selectively statically link some of the QEMU dependencies or write a runtime that uses Apple's VirtualizationFramework.
Nice to see Zig in action, btw. What about using Zon for deps, instead of a git submodule? Just curious if the author tried it, I honestly didn't have time to use Zon deps yet
> What about using Zon for deps, instead of a git submodule?
I couldn't find documentation quickly enough (dockerc was initially written during a hackathon, and it was my first time using Zig). I plan to fix this eventually.
About Zon, I was just curious if you had any issues with it. I also need to finally start using it in my small projects.
Good luck with Zig!
I’m glad that I know the author. He won 2 tracks of this year’s Stanford TreeHacks with it:)
However, having written it in Zig, I have a few retroactive reasons why Zig was a good choice (if not the best choice):
* The build system lets you make the runtime a dependency of the "compiler". I don't think any other language has that.
* The interoperability with C/systems calls is amazing (in comparison to anything but C/C++)
* The ability to embed files
* Makes it incredibly easy to make static binaries
[1]: https://github.com/NilsIrl/dockerc/blob/68b0e6dc40e76c77ad0c...
There's quite a few folks using zig _just_ for the toolchain, while still writing C.
$ ./dockerc --image docker://docker.io/pivotalrabbitmq/perf-test --output yourmom
$ ./yourmom
2024-03-07T03:10:54.145333Z: chown `/dev/pts/0`: Invalid argument

I love the juxtaposition with the image directly above this
dockerc allows you to re-use your existing docker images without having to spend time packaging something up.
The applications running from dockerc-generated executables run inside a container so they are guaranteed to be hermetic.
I always remember having issues using the Singularity outputs for anything that needed to interact with the filesystem.
Because the image isn't copied, startup time will be nearly instantaneous whether the image is 2GB or 25MB.
The runtime adds 6-7MB of overhead although I expect that this can be reduced to less than 3MB with some work.
[0]: https://github.com/shepherdjerred/macos-cross-compiler
[1]: https://hub.docker.com/_/python
Please don't post shallow dismissals, especially of other people's work.
# nix bundle nixpkgs#hello
# ./hello
Hello, world!
https://news.ycombinator.com/item?id=39632011

Even if you pass through `/dev/fuse` and `/usr/bin/fusermount3` (using -v), it fails to mount the fuse filesystems with the error message "Operation not permitted".
It might be possible to make it work if it falls back to extracting the container when mounting the fuse filesystems fails.
There's a few things that seem to make it unsuitable for the intended use case of dockerc:
* The container extracts itself which means there is quite a bit of overhead for large images. dockerc-built executables have the same startup time whether the image is 2.2GB or 25MB.
* The executables produced by enroot don't seem suitable for running standalone. At least the example doesn't seem to suggest so.
The use case is for distributing software to end users.
Let's face it, if you're using Linux, you're comfortable typing some stuff into the terminal to install software. Or if you aren't comfortable with it yet, you will be soon. That's just the reality of using Linux. Even ignoring that, snap and flatpak apps provide a generally awful user experience, and I fail to see how this tool would do a better job.
That leaves Windows and MacOS users as the primary audience for software packaged using a tool like this. It would make sense that a tool like this would prioritize MacOS/Windows support above all else. Even the angry redditeur shown in the README clearly mentions .EXE files.
Why would QEMU even be necessary? Docker runs fine on Windows. Maybe it's to avoid requiring the user to install Docker? Either way, asking the user to fiddle with Hyper-V settings is bad UX.
Yep, unfortunately I've not had the time to make it work well on those platforms. I got an initial demo working on MacOS but I'm currently facing the issue that I'm unable to statically compile QEMU on MacOS. I've also started writing a VirtualizationFramework[0] based backend.
> Why would QEMU even necessary? Docker runs fine on Windows.
When docker runs on Windows/MacOS it's actually running the containers in a Linux VM. Containers rely on features provided by the Linux kernel.
> Maybe it's to avoid requiring the user to install Docker?
The main reason to use dockerc is to avoid the user having to install Docker.
> Either way, asking the user to fiddle with Hyper-V settings is bad UX.
Yep I don't think that would be nice. I expect the experience to be transparent to the user.
[0]: https://developer.apple.com/documentation/virtualization
I can't comment on MacOS as I haven't used it regularly in several years, and even then I only used Docker briefly on MacOS.
I can see how this approach would result in reliably cross-platform applications, but it immediately raises a couple of concerns: binary size, and interoperability with the underlying OS.
Docker on its own can already be very large. Shoving it inside of QEMU adds even more largeness. Are binary sizes a priority at all? If so, how do you plan to keep them reasonably compact?
I'll assume that user-facing software is the main target for this tool, which means that GUI support is really important. By hosting an application inside of QEMU and Docker, you create a very long and complicated path that stuff must travel through in order for a dockerc'd GUI application to interact with other programs. It's pretty easy to plumb an X server into WSL, but there are limitations when you get into more nuanced interactions with the rest of the machine (ex: clicking and dragging files into the window). Docker adds yet another layer that could cause trouble.
I wish you luck. I tried to make a similar tool for a game engine a while back, and it was absolutely hellish.
How static are we talking here? There's no reasonable way to not link dynamically against libSystem. Then again, that's obviously present on all Macs, so shouldn't be an issue.
> When docker runs on Windows/MacOS it's actually running the containers in a Linux VM.
True on macOS, but only partially true for Windows. There are actual Windows containers, running natively on Windows and running Windows inside the containers.
But do you even want to distribute Windows binaries? Or are you looking for a way to transparently ship a Linux binary to Windows users?
> Yep I don't think that would be nice. I expect the experience to be transparent to the user.
Does this include automagically mounting filesystems?
QEMU is kind of overkill because MacOS provides the VzVirtualMachine API through the Virtualization Framework, which can initialize a VM with the host's kernel. On Windows you can use Hyper-V, which is iirc how docker on Windows gets this done.
If MacOS and Windows had pid/mount/network namespaces, overlayfs, and allowed for unrestricted chroot (MacOS requires disabling SIP for that) then you could do the same thing on all platforms. Today, you need virtualization.
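A quick way to see this from the Linux side: the kernel exposes the namespace types it supports under /proc/self/ns, and that directory simply doesn't exist on macOS or Windows, which is why those platforms need a VM. A small sketch (illustrative only):

```python
# List the Linux namespace types the running kernel exposes for this
# process (e.g. mnt, pid, net, user). Returns [] on non-Linux systems,
# where /proc doesn't exist -- the gap a VM has to fill.
import os
from typing import List

def available_namespaces() -> List[str]:
    """Namespace types listed in /proc/self/ns, or [] if unavailable."""
    try:
        return sorted(os.listdir("/proc/self/ns"))
    except FileNotFoundError:
        return []
```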