- Fast networking: 30 Gbps! vs. 150 Mbps with Docker VPNKit. Full VPN compatibility, IPv6, ping, ICMP and UDP traceroute, and half-open TCP connections.
- Bidirectional filesystem sharing: fast VirtioFS to access macOS from Linux, but also a mount to access the Linux filesystem from macOS. This setup can help with performance: for example, you could store code in Linux and edit it from macOS with VS Code (which can take the performance hit of sharing), so the container runs with native FS speed.
- Not limited to Docker or Kubernetes. You can run multiple full Linux distros as system containers (like WSL) so they share resources.
- Fast x86 emulation with Rosetta
- Much lower background CPU usage. Only ~0.05% CPU usage and 2-5 idle wakeups per second — less than most apps, while Docker wakes up ~120 times per second. Made possible with low-level kernel optimizations. Also, no Electron!
- Better solutions to other problems that can occur on macOS: clock drift is corrected monotonically, dynamic disk size, and more that I'm working on now. I'll look into memory usage too, although I can't guarantee a good fix for that.
- No root needed.
Planning to release this as a paid app in January. Not OSS, but I think the value proposition is pretty good and there will be a free trial. Not sure about pricing yet.
If anyone is interested, drop me an email (see bio) and I'll let you know when this is ready for testing :)
Also, feel free to ask questions here or let me know if there are other warts you'd like to see fixed.
Enabling Rosetta can have a minor performance hit on memory-intensive workloads in the VM (not only x86 ones) because of TSO memory ordering, so it'll be optional. Hypervisor.framework doesn't have an API for third-party VMMs to set this and doesn't seem to let the VM modify ACTLR_EL1 either, so unless I can find a private API for it, I'm stuck with Virtualization.framework's limitation of Rosetta being either on or off for the entire VM at boot time.
Memory usage is probably the biggest uncertainty right now. It should be at least slightly better, but I'm not sure if I can improve it much more due to Virtualization.framework limitations. Still looking into it.
Networking is implemented with my custom userspace proxy for VPN compatibility. Servers are forwarded to localhost automatically, but you can't connect to the VM by IP because the network doesn't exist from the host's perspective. I've run into too many issues with Apple's NAT setup, and host-only networking is a private API, so this is postponed for now. Should be able to do better with root.
Graphics won't be supported at launch, but I could look into it later if there's interest. Not sure how feasible acceleration will be if I can't find a way around having to use Virtualization.framework.
Let me know if there's anything specific that I missed!
I'm honestly not sure how pricing and licensing will work yet, but there will be some way to try it for free. Maybe something like Docker Desktop: free for personal use, license required for companies? That seems like a risky bet as an indie dev.
There's also the whole question of one-time purchases vs. subscriptions. Subscriptions seem like the optimal model for this, so I'm not sure how to accommodate people who just don't like them.
Would love to hear if you have any thoughts on how it could be done to reach as many users as possible.
Are there some new Linux drivers involved, or is this "just" a better tuned VM?
(Also, by "fast VirtioFS", I meant the same VirtioFS implementation tested in the article because it's faster than other solutions — sorry if it wasn't clear.)
> Planning to release this as a paid app in January. Not OSS, but I think the value proposition is pretty good and there will be a free trial. Not sure about pricing yet.
> If anyone is interested, drop me an email (see bio) and I'll let you know when this is ready for testing :)
It provides Docker compatibility to some extent, you don't need a license, and it's much less heavy than Docker Desktop. If you need Kubernetes, there is also minikube, which provides a lot of options.
Most of the things discussed in this article still apply for podman machine and minikube.
Some short-lived containers, like our repo's linter, take easily 4x as long to run in podman as they did with Docker. Immediately I have lost productivity.
It's incredibly unreliable: every time I start my computer, I have to run podman machine stop and then podman machine start, because there's something broken about how it gets initialized at startup. I've spent ages debugging random broken functionality.
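For anyone hitting the same thing, the stop/start dance can at least be wrapped in a small helper (a sketch; `podman_vm_restart` is a made-up name, and the commands are the standard `podman machine` ones):

```shell
# Hypothetical helper for ~/.bashrc or ~/.zshrc: restart the podman VM
# when it comes up broken after boot.
podman_vm_restart() {
  podman machine stop || return 1
  podman machine start
}
```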
It doesn't support docker-compose. There's a community project called podman-compose, but it's not great: it won't do things like build containers concurrently, and it has random weird quirks, like complaining about volumes already existing when you run `podman-compose up --build`, whereas Docker doesn't complain for the same compose file.
Overall podman has been a massive regret for me, and I wish I hadn’t given up my docker desktop just to save a minuscule amount of money.
I love it on Linux. The Mac version is not as smooth yet but, for my use case, still works a hell of a lot better than Docker Desktop. There is something deeply wrong with Docker Desktop's networking; I literally have to restart it almost every time I make a change to one of our services. Not an issue with podman.
I thought it was because I also have Docker Desktop on the same machine, but that's probably not an issue at all; it's podman's remote mode that is unreliable.
Just do whatever you're doing in Docker natively in macOS. Python, Node.js, Ruby, Rust, Postgres, etc. can all run as native macOS processes.
The big advantage (aside from performance) is that you gain access to all the OS-native debugging capabilities. You can just look at files, open multiple terminal sessions in the same folder, use a profiler, click the debug button without special configuration in your IDE, and so on. All without needing to think about VM images, Docker containers, networking, and all that rubbish.
The downside is you need to set up a second build environment (which might not match your deployment environment). Unless you're doing something truly special, setting up a macOS-native build environment is usually pretty easy. It's normally just a few "brew install" / "npm install" / "gem install" / "cargo build" etc. commands away from working.
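For a typical web stack, that native setup can be sketched as a handful of commands (package names here are examples, assuming Homebrew and a Node + Postgres project):

```shell
# Hypothetical macOS-native environment setup; wrapped in a function so
# nothing runs on paste.
setup_native_env() {
  brew install node postgresql@15    # runtimes straight from Homebrew
  brew services start postgresql@15  # run Postgres as a background service
  npm install                        # project deps land in ./node_modules
}
```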
Mac has the best balance between coding, utility tools and "other work stuff".
Windows is probably on par, if not better, for "work stuff" but falls badly behind in the coding & tooling department.
Linux is OK-ish for coding and utility but falls behind for "other work stuff" and certainly a pain to just keep it updated.
So in our company, everyone on the development & support team uses a Mac (except this one guy who insisted on Linux), and most of the sales & marketing team use Windows.
> certainly a pain to just keep it updated.
> Linux and it's a lot of pain to setup.
If you're not using Linux, how are you justifying these claims?
> certainly a pain to just keep it updated.
Excluding the boot time for both, macOS takes between 15 and 45 minutes to update. Linux takes a few seconds. I suspect that the only OS with a more ridiculous update process than macOS is Gentoo.
WSL 2 is a game changer because it makes all the Linux-centric dev tools available on Windows without setting up virtual machines or other such nonsense, these days even running graphical applications. The only major pain point I've run into (that isn't "I prefer Linux") is the lack of IPv6 support within WSL 2.
If you avoid buying Nvidia hardware, Linux generally "just works", unless you use Windows-only software (which macOS also suffers from) or choose to make your life harder by installing Arch or Gentoo. Ubuntu's snap is a pain for power users who want to hack on their Linux system but if all you want to do is develop or do work stuff, it just works out of the box.
There was one developer in our team using a Mac. Everyone else (>20 people) was using Linux. The Mac was a lot of pain to set up.
Linux is the best balance between coding, utility tools and "other work stuff".
Windows is not even considered for development, it falls badly in the coding & tooling department.
So in our company almost every dev uses Linux (except this one hipster using a Mac), and the marketing team uses Windows.
Linux is not a pain to set up.
I am a .NET developer and have run Debian Linux on my work laptop for ages. Keeping the OS and most of the software up to date is just "apt-get update; apt-get dist-upgrade". Microsoft has Debian packages for Teams, Skype, PowerShell Core, .NET (Core), and azure-cli. Google has Debian packages for Chrome. I use a lot of JetBrains' tools and keep those up to date using the JetBrains Toolbox. Where I work, we use Google Workspace and Slack, and I use those through Chrome.
Just to be clear, most of my development is currently done on Linux using Rider, but I do have a Windows VM (on KVM) for older projects that run on the full .NET Framework.
The issues I come across are related to our customers, for example when I am on location and need to connect to external hardware. One example is connecting wirelessly to a WiFi Direct display: this does not work for me, and I did not investigate whether drivers are available. Another example is DisplayLink, to use an external display through a dock: I checked, and there are drivers available, and I did have it working at some point, but it broke after a kernel upgrade and it's too much bother to fix again. Also, for some customers we can connect remotely to their systems over VPN, but not all VPN solutions are available (or work out of the box) on Linux.
In any case, for my day-to-day work, I don't have any issues at all on Linux, and I believe it's very, very capable for coding, tooling, and other work stuff. I definitely prefer it over Windows and Mac.
That maybe used to be true. But today, Windows is that OS. In one OS, I can freely develop in Windows, Docker, or WSL 2, with near-seamless integration of apps, browsers, and even GUIs. And with VS Code, I basically have any OS except macOS (but who cares?) at my fingertips in a single interface. Between Windows and macOS, Windows is by far the lower-friction dev experience.
And Windows has superior support for external hardware. macOS refuses to work well with anything that doesn't have Apple on the box.
Given the option, I'd rather spend my day in VS than Xcode, and in C#/F#/C++ than Swift/Objective-C, but to each their own, I guess.
And then there is the whole thing about where macOS Server ended up.
That's more perception than reality, often from people who simply don't know how to use Windows.
Visual Studio, Visual Studio Code, and IntelliJ IDEA blow any Linux text editor out of the water for developer productivity.
For Linux workloads there is the Windows Subsystem for Linux (WSL 2), which now even supports GUIs with GPU acceleration!
Visual Studio Code can even operate in "remote" mode, where it tunnels into a Docker container or Linux server and acts as if the remote target were the local machine.
On Windows, x86 and x64 Linux Docker containers run in process isolation at full speed, unlike on Macs, where CPU emulation is required.
This is absolutely false. Windows is a perfectly fine development environment and has perfectly fine tooling. You just need to embrace PowerShell and Windows tooling, and use cross-platform tools. Too many devs back themselves into a corner by relying on POSIX shells or POSIX-only tooling.
If you do user-space programming, your host OS should never matter. In my 10 years of programming, the only time the host OS mattered was when I was writing Linux drivers.
But overall it's actually worked out surprisingly well because there's something for everyone - developers get *Nix On The Desktop but with an actual support story, and the non-technical users get a happy bubble OS that holds their hand.
Linux code churn and distro fragmentation make it fundamentally unsupportable in the vast majority of workplaces and for the vast majority of users (outside very controlled server environments, etc.; I'm talking about desktop use here). The code churn makes the support story (polish and documentation) impossible, and the fragmentation means there are 50 different solutions to the same problem. "The Cathedral and the Bazaar" doesn't mean the bazaar is better in all situations: a random non-technical business analyst is never going to learn how to build Arch or install Gentoo, and a really good, streamlined, polished cathedral experience is much more suitable to the business environment. That's the fundamental lesson from Linux and Windows, and OS X now takes its place in that too. You can keep the good things about Unix-y environments and opt out of the terrible parts of the Linux ecosystem.
Unfortunately, like the BSDs, that's not what Docker is built around. Docker assumes a Linux kernel, and the Linux kernel ABI is not the same as the Unix kernel ABI. That's the biggest problem. Same with FreeBSD Jails or Solaris Zones: they're a decade ahead of Docker in terms of capability, security, performance, and polish, but Docker is where the mindshare is. I can't install a jail from a registry with a single command, and that's not where the support/development time is going, even for the people who have engineered those alternative docker-registry solutions for jails.
The only "fast" option for non-linux kernels besides full virtualization is to thunk the calls to your own kernel to patch around the differences. Obviously that didn't work out with the Windows kernel, it's just too different, but FreeBSD/Solaris have implemented this functionality for a long time as part of "Branded Zones". But everyone is enthusiastically rebuilding the wheel around ubuntu (specifically - not even linux generally) so that's not going to happen.
https://wiki.freebsd.org/LinuxJails
https://docs.freebsd.org/en/books/handbook/jails/
(The FreeBSD Handbook is a great example of the kind of documentation that rarely gets written for Linux distros, other than commercial ones, because of the overwhelming code churn and the inevitable bit rot that entails in the rest of the user experience. It's way more fun to write a new audio pipeline or init system than to document it fully; everyone knows it.)
https://www.oracle.com/technical-resources/articles/it-infra...
https://docs.oracle.com/cd/E19455-01/817-1592/gchhy/index.ht...
https://en.wikipedia.org/wiki/Solaris_Containers#Branded_zon...
(and note the Solaris stuff almost entirely applies to OpenSolaris/Illumos as well, you don't have to use commercial solaris to get Branded Zones.)
Anyway, apropos of nothing, but with the newfound attention on OS X from developers and power users, it'd be really nice if Apple released an M1/M2-based "toughbook". Completely against their design aesthetic, but I think a lot of people don't really like the idea of wafer-thin Apple laptops and would like something that can take some bumps without shattering. Power users are becoming a more core demographic for MacBooks, and it'd be nice to see Apple cater to them a little more.
I do miss my thinkpad and Fedora.
- backend (Docker): needs a Linux-based machine
- client app (iOS): needs Xcode on macOS
both are in one repository.
The only headache I get sometimes is because I have the GNU utils first in the path which makes compilation scripts mad sometimes.
Maybe I wasn't doing it right, but switching to Docker to sequester my projects and their dependencies has saved me so much time and hassle, especially with the amount of repos I work on throughout the year.
My biggest weakness today is that I still don't reach for Docker right away when starting work on a new project or when evaluating a new tool. Old habits…
I do often find myself wondering whether Docker saved developers or system administrators any time. Is Docker really better than building an AMI and provisioning EC2 instances on-demand?
With a docker compose file in place, all I need to do is run "docker-compose" and everything is up and running.
It's such an upgrade over what we had before.
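As a sketch of what that looks like, even a minimal compose file (service names and images below are placeholders) captures the whole environment:

```yaml
# Hypothetical docker-compose.yml: one app container plus its database
services:
  app:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
```

With that in one file, `docker-compose up` builds the app, pulls Postgres, and wires the two together on a shared network.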
I have the Remote SSH plugin set up in VSCode, a `vmlogin` alias set up in bash, and all container ports forwarded in the VM's config.
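For reference, the alias half of such a setup can be as small as this (a sketch; the host name `dev-vm` and the ports are examples, and the forwards could equally live in `~/.ssh/config` as `LocalForward` entries):

```shell
# Hypothetical login alias: SSH into the dev VM, forwarding an app port
# and the Postgres port back to the host.
alias vmlogin='ssh -L 3000:localhost:3000 -L 5432:localhost:5432 dev-vm'
```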
Mutagen also improved the experience but I prefer VirtioFS as it’s “built-in”
I also develop frontends using Vue, managed by npm. In my experience this doesn't need to be dockerized, since npm installs everything in a subdirectory per project. Is there a benefit to running this as a dockerized app?
Running Pylint on a Linux machine in Docker: 3 hours from no cache.
Running Pylint on a Mac in Docker: 9+ hours from no cache, unless VirtioFS is used, which brings it closer to 4 hours.
I use UTM to run Debian 11 ARM. The `update-binfmts` command is absolutely magical; docker images will happily run both ARM and x86 binaries.
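The setup behind that is roughly the following (a sketch assuming Debian's `qemu-user-static` and `binfmt-support` packages; wrapped in a function so nothing runs on paste):

```shell
# Register the qemu x86_64 handler so x86 binaries (and docker images)
# run transparently on an ARM guest.
enable_x86_emulation() {
  sudo apt-get install -y qemu-user-static binfmt-support
  update-binfmts --display qemu-x86_64   # confirm the handler is registered
}
# afterwards, mixed-arch containers just work:
#   docker run --platform linux/amd64 alpine uname -m
```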
Battery lasts all day and the machine stays ice-cold.
My workflow for the past 3 years with Docker has been: set up a desktop machine somewhere, configure Docker, configure SSH like normal, and set DOCKER_HOST=ssh://<tailscale_ip> on my laptop.
Docker responds as if it's local, but I get an absurd build/fetch speedup (since the wired connection is faster than WiFi), and it's not running inside a slow VM.
Recently I've been using colima on my Mac natively, but I keep reaching for the DOCKER_HOST option.
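Concretely, the client side of this setup is a single environment variable (the user and address below are placeholders for your remote machine's login and Tailscale IP; it only requires dockerd on the remote host and an authorized SSH key):

```shell
# Point the local docker CLI at a remote engine over SSH.
export DOCKER_HOST="ssh://user@100.64.0.1"
# Every subsequent docker command now talks to the remote daemon, e.g.:
#   docker info
#   docker build -t myapp .
```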
Many developers prefer to code in their host OS but run the image via Docker for Mac. They also want code changes to appear inside the running Docker image instantly, in real time. I suppose you could have some of the disk live within the VM and the code portions be memory-mapped or rsynced. I haven't thought through the downsides.
Nowadays, make sure you use the new Virtualization framework option in Docker for Mac and add `:cached` in your compose file to any mounted volumes; I found that alleviated my issues. It used to be really bad, though.
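For anyone unfamiliar with the flag, `:cached` is appended to the bind-mount entry in the compose file (paths below are placeholders):

```yaml
services:
  app:
    volumes:
      - ./src:/app/src:cached   # relax host/container consistency for speed
```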
https://github.com/lime-green/remote-docker-aws
Lots of benefits: speed, battery, fan noise
The money I save not paying for Apple laptops could pay for a crazy overpowered dev VM until the end of time.
I used Apple laptops for about 10 years, until about 5 keyboard replacements into the butterfly-switch debacle.
Just one clarification on the article: Mutagen offers Docker Compose integration, not Composer integration (Composer is a PHP package manager). However, as mentioned, DDEV is a great option if you're looking to do containerized PHP development while using Mutagen for performance improvements.
Separately, with "use virtualization" turned on, should I also enable "VirtioFS accelerated directory sharing"?
You might need to upgrade both Docker Desktop and macOS.
When I tried Rancher Desktop it didn’t work so well.
Bonus: paid half what a similarly spec'd M2 Air would cost.