1. Mirantis did not acquire Docker Inc.; they only bought Docker Enterprise. See https://techcrunch.com/2019/11/13/mirantis-acquires-docker-e... and https://www.docker.com/blog/docker-enterprise-edition/
2. k8s didn't remove dockershim for political reasons but because containerd was factored out of Docker a long time ago and k8s wanted to get rid of the extra layer. See https://kubernetes.io/blog/2022/01/07/kubernetes-is-moving-o...
3. Rate limits have nothing to do with the container runtime. Podman also has to get images from somewhere. And CloudFront bills starting at $0.02/GB (assuming you pump 5PB+) have to be paid somehow. The rate limits were mostly in place to deny corporate CI users free access to the Hub and force them to pay or deploy a mirror.
4. Red Hat offers not only packages in RHEL but also support, and it makes sense that they will offer packaging and support only for Podman (a Red Hat project) going forward. This does not concern those of us who don't pay for Red Hat support.
Having said that, Podman is a nice evolution of Docker. Though I am not sure how much I can trust the rest of the article given how the intro twisted so many facts.
> Instead of the free use of Docker Desktop as before, this software suite is now available only by subscription after a transition phase ending in January 2022, starting at $5 per user/month, provided it is used professionally.
> Here, Docker Desktop includes the Docker Engine, docker-cli, docker-compose and a credential helper, among others.
At least docker-compose (and probably also the Docker engine and CLI, since they are included in Debian) is FOSS. While they might be bundled in Docker Desktop, they are certainly available separately, so paying for the licence is in no way obligatory when using Docker.
1. Mirantis did not acquire Docker Inc., they only bought Docker Enterprise...
>> You are right. That's a mistake.
2. k8s didn't remove dockershim for political reasons...
>> That is a valid point and the official story. Imho the acquisition nevertheless played an accelerating role here, since the announcement came relatively soon after it: Mirantis acquired Docker Enterprise in November 2019, and the end of dockershim support was announced in 2020. I've heard the same from a few other people. BUT this is just rumor, so you might be right.
3. Rate limits have nothing to do with the container runtime..
>> This is 100% true. Nevertheless, Docker Hub is part of Docker, and the rate limit on the official registry is what made our customers switch to other registry providers or run their own container registry. In other words, they are abandoning a service that is part of the Docker-only ecosystem, so its usage in enterprises is decreasing.
4. Red Hat support switched to Podman as it is one of their products...
>> It only makes sense for Red Hat to support Podman since it comes from their own product forge. You are right about that. That said, there are a lot of companies using RHEL and paying for it, which automatically leads to a decrease in Docker usage versus Podman. Less use of Docker means more use of Podman. Last but not least, Red Hat would not have created Podman if there were no need for a tool independent of Docker. Podman helps in some areas where Docker lacks features, such as support for pods, rootless mode, etc.
Thanks for your criticism! It is appreciated and helps us to do better.
What I never understood is why they didn't just handle this properly with mirrors, like any package manager does. Why is this a problem for Docker, but not for yum/apt/etc.?
I have to admit that these rate limits have accelerated my migration to alternatives like quay.io
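On the mirror point: Docker does actually support pull-through mirrors at the daemon level; what's missing is the public mirror network that apt/yum enjoy. A minimal sketch, where mirror.example.com is a placeholder for a mirror you run yourself:

```shell
# Sketch: point dockerd at a pull-through registry mirror before Docker Hub.
# mirror.example.com is a placeholder, not a real public mirror.
cat > /tmp/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://mirror.example.com"]
}
EOF
# On a real host this file lives at /etc/docker/daemon.json, followed by:
#   sudo systemctl restart docker
```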
Now, why we are still producing new package formats without mandatory signatures (containers, npm, cargo, etc.) is not really clear to me. I guess everyone must think "those old crazy Unix folks signing their deb and rpm must have had their crazy reasons, but we have no reason to do the same" :) A more cynical take would be "it makes it inconvenient to mirror things and easier to build a business from the central repository" :)
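To be fair, the containers stack does have an opt-in signature mechanism via containers-policy.json (used by Podman, Skopeo and CRI-O); the problem is it's opt-in, not mandatory. A hedged sketch, where the registry name and key path are placeholders:

```shell
# Sketch of a containers policy that rejects all images by default and only
# accepts GPG-signed images from one registry. Registry and key path are
# placeholders; on a real host this is /etc/containers/policy.json.
cat > /tmp/policy.json <<'EOF'
{
  "default": [{ "type": "reject" }],
  "transports": {
    "docker": {
      "registry.example.com": [
        {
          "type": "signedBy",
          "keyType": "GPGKeys",
          "keyPath": "/etc/pki/containers/example.gpg"
        }
      ]
    }
  }
}
EOF
```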
"Docker the company is having trouble monetizing their products...so I'm unsure about their future"
And, so the follow on of:
"Can I use compatible tools that don't depend on Docker, the company, as much?"
Makes some sense.
I really wish rootless podman/docker was the default install by now. It's still kind of annoying to set up, with reading a smattering of old docs and having to think about your distro setup, cgroup settings, etc. It really should just be "run this install script and you're done".
> Permission issues with bind mounts just totally disappear when you go rootless.
Recent kernel versions have gained uid-mapping capabilities on mounts (idmapped mounts). Hopefully a future Docker will make use of them. Then we can run entire containers as different users.
Or just messing with a html file in the nginx docker bind mount, ugh!
If podman solves that I’m going all in tomorrow.
You can do this with Docker today without much fuss.
Here's a bunch of web app examples (Flask, Rails, Django, Node, Phoenix) that run your containers as a non-root user, which ensures any volume-mounted files end up owned by your Docker host's user: https://github.com/nickjj?tab=repositories&q=docker-*-exampl...
There's no hard coding of user names either. The user name created in the Docker image never directly gets mapped back to your Docker host.
This works because bind mounts happen over uid:gid 1000:1000 by default, so as long as your Docker host user's uid:gid is 1000:1000 everything works out of the box. On Windows and macOS you don't even need to think about this, because Docker Desktop will fix permissions for you, and on native Linux chances are your user's uid:gid is 1000:1000 because it's the first non-root user on your system. For non-controllable environments like CI you'd typically disable volumes, which is a good idea anyway because you're probably not using volumes in production. For single-server deploys on a self-managed VPS you control the environment. I covered this in a little more detail in my DockerCon talk at: https://nickjanetakis.com/blog/best-practices-around-product...
In the worst-case scenario, where you have no other options, you can make the Dockerfile more complicated and introduce build args for the uid:gid so you can change it to satisfy the needs of a specific host. I don't like this, since you'd need to rebuild a different image for a different environment, but it would technically work. I've never run into this scenario after having used Docker since 2014, and I've done contract work for dozens of companies in all sorts of different environments.
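The build-arg approach described above can be sketched roughly like this (the base image and user name are illustrative, not taken from the linked repos):

```shell
# Sketch: parameterize the container user's uid:gid at build time so it can
# match the host user on unusual hosts. Image and user names are illustrative.
cat > /tmp/Dockerfile.uid <<'EOF'
FROM debian:bookworm-slim
ARG UID=1000
ARG GID=1000
RUN groupadd -g "${GID}" app && useradd -m -u "${UID}" -g "${GID}" app
USER app
EOF
# For a host whose user is 1001:1001 you'd then build with:
#   docker build --build-arg UID=1001 --build-arg GID=1001 -f /tmp/Dockerfile.uid .
```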
These options may look a bit complicated to use, but as soon as you understand how rootless Podman maps UIDs and GIDs, it is pretty straightforward.
I wrote two troubleshooting tips about how to use them:
https://github.com/containers/podman/blob/main/troubleshooti...
https://github.com/containers/podman/blob/main/troubleshooti...
I have a problem with mounting a named volume foo in a container (at /foo) and bindfs-mounting the underlying directory of that volume onto ${HOME}/foo with create-for parameters, so that when the host user touches files in it they are owned by 1000:1000 on the host but by 33:33 inside the container.
Volume foo really contains only a unix socket. This unix socket is shared between the host and the container for xdebug communication.
So this doesn't work: the container process can't read/write the socket, even though it can manipulate other files in the mounted volume /foo, and they appear as owned by 1000:1000 on the host and vice versa.
But if I mount the volume directly, like ${HOME}/foo:/foo, then it works: the container can write to the socket, and the host and the container can communicate both ways.
Would rootless podman allow me to use a named volume? Why doesn't it work like I think it should? Is it because the unix socket lives in the kernel 'or something'? Maybe it's a question for SO.
There’s also the ability for podman to run as a system service, and provide an OCI compatible container API. This then integrates seamlessly with the actual docker-compose.
Since Podman 4.1 came out with full Compose 2.x compatibility, I'm running Podman on Docker's socket, but using Docker's CLI to talk to it, so that I can use the buildx and compose CLI plugins. It works great, Docker's CLI doesn't seem to have any clue that it's talking to not-Docker. I even have VSCode's Docker extension and Remote Containers working this way.
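For anyone who wants to reproduce that setup, the rough steps look like this; the socket path follows the usual rootless defaults, so treat it as an assumption for your distro:

```shell
# Sketch: expose Podman's Docker-compatible API on the user socket and point
# the Docker CLI at it. Guarded so this is a no-op on machines without podman.
if command -v podman >/dev/null 2>&1; then
  systemctl --user enable --now podman.socket || true
fi
# Rootless Podman's API socket normally lives under XDG_RUNTIME_DIR:
XDG_RUNTIME_DIR="${XDG_RUNTIME_DIR:-/run/user/$(id -u)}"
export DOCKER_HOST="unix://${XDG_RUNTIME_DIR}/podman/podman.sock"
echo "$DOCKER_HOST"
```

From there, `docker ps`, `docker compose up`, and socket-based tooling like the VSCode extensions talk to Podman without knowing the difference.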
I don't really care what I use, I just want to be able to develop locally without spending days setting stuff up. Rootless or whatever means nothing to me. Docker compose has made that easy for lots of otherwise complicated projects. Just compose up, point the editor at the image and you're ready to go.
That said, I don’t even bother with that. Podman can run K8s configs, and they are yaml too, only slightly more verbose than a compose file, if you strip everything out you don’t need. The CLI is nicer than compose too, with proper commands instead of tying up a terminal until a ctrl-c.
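As a concrete example of the above, a stripped-down Kubernetes Pod manifest that Podman can run directly (image and names are illustrative):

```shell
# Sketch: a minimal Pod manifest runnable with `podman play kube`
# (`podman kube play` in Podman 4.x). Image and names are illustrative.
cat > /tmp/pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: nginx
      image: docker.io/library/nginx:alpine
      ports:
        - containerPort: 80
          hostPort: 8080
EOF
# Run it:       podman play kube /tmp/pod.yaml
# Tear it down: podman play kube --down /tmp/pod.yaml
```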
Half the time you're jamming it into some cloud service anyway where you have no idea what GCP/fly/aws is using under the hood to actually run it.
Meaning this discussion is more relevant to the self-hosted context, in which case I'd say containerization isn't really security. So in my mind the residual risk of the daemon being root is inconsequential. (Or if not, use a VM.)
Personally, after dealing with Kubernetes yaml spaghetti, I rather deal with VM images, but unfortunately I don't get to dictate IT fashion.
I do both with deciding factor being whether it has internet exposure.
>VMs have the same cheapness for me
It's all kinda relative (ballooning etc.), but from what I've seen LXC allows for much higher density. The smallest LXC I've got running uses ~20MB; the smallest VM is at 380MB. Both are headless Debians, so vaguely comparable (though MQTT vs Wireguard).
Not much of a difference if you've got a 128GB server on hand, yet it's nearly 20x, so depending on perspective it's either a big difference or doesn't matter.
The preference comes from the tool chain, which is simpler for containers (even if you use Vagrant), and performance (true or not, lots of people still have sluggish VMs in mind).
Call it less secure than VMs if you prefer that wording.
>not running as root should be an absolutely critical first step for managing risk.
Kinda depends on what angle you come at risk management from. You could take something less secure (containers) and try to tweak it to meet whatever level of residual risk you deem acceptable. Or you could just jump straight to VMs and benefit from the inherently higher level of separation.
The fact that all the big players with their clever engineers have specifically opted for VM tech (e.g. AWS creating Firecracker for Lambda) tells me the latter is probably the way to go.
That said I mostly do stick to containers when in a trusted environment.
Podman has a fully Docker-compatible API, so you just have to enable it and then set DOCKER_HOST to point to its socket. From there, docker compose should work as if you had Docker.
The Podman team is also currently working on "podman machine", which can spin up a Linux VM to run Podman on macOS and Windows. I think it's still in beta or something, but it seems to be working already.
There are also things like Podman Desktop[0] and Podman Desktop Companion[1], which attempt to bring an experience similar to Docker Desktop to Podman.
> Podman also runs on Mac and Windows, where it provides a native podman CLI and embeds a guest Linux system to launch your containers. This guest is referred to as a Podman machine and is managed with the `podman machine` command.
> ..On Mac, each Podman machine is backed by a QEMU based virtual machine.
> ..On Windows, each Podman machine is backed by a virtualized Windows Subsystem for Linux (WSLv2) distribution.
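The podman machine workflow quoted above boils down to a few commands; the flags shown are common ones and the default VM name is an assumption that may differ by version:

```shell
# Sketch of the podman machine workflow on macOS/Windows. Guarded so the
# commands are skipped on hosts without podman installed.
MACHINE_NAME="${MACHINE_NAME:-podman-machine-default}"  # Podman's usual default name
if command -v podman >/dev/null 2>&1; then
  podman machine init "$MACHINE_NAME" --cpus 2 --memory 4096 || true  # create the guest VM
  podman machine start "$MACHINE_NAME" || true                        # boot it
  # podman machine ssh "$MACHINE_NAME"   # optional: shell into the guest
fi
echo "target machine: $MACHINE_NAME"
```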
I am super thankful for the team of developers that work on Podman. It has really come a long way since 2.0 and they are very responsive to issues in my experiences. If you are using Linux as your daily driver and you use Containers give Podman a try. Here are some examples of the things I have done with Podman.
https://github.com/forem/selfhost
https://github.com/jdoss/ppngx
https://gist.github.com/jdoss/25f9dac0a616e524f8794a89b7989e...
https://gist.github.com/jdoss/ad87375b776178e9031685b71dbe37...
I thought moving to Linux from Mac would make Docker better due to the lower resource usage, but I'd guess the fact that it's isolated in a VM on Mac is why I never had network issues there.
At the moment Podman is just more work for us, because I and the other devs don't have years of experience and intuition with Podman like we have with Docker. I'd rather just focus on business problems rather than another migration.
I see some new life in Docker Desktop, and I also have a flawless experience with it on an M1 Mac. I even ignored all the recent hype to ditch Docker, because it became a more transparent tool in my workflow; I forget I even use it.
That being said, docker blows. Docker desktop blows more. Docker desktop on Windows blows the most. I always get stuck with a bind mount misbehaving, or some other issue that requires me to wipe the docker desktop data to fix it. Just use docker compose for simple stuff, and stay away from docker for Windows if possible.
You get all the things (docker CLI, docker-compose, kubernetes via k3s) but it's FOSS and it doesn't feel like they're trying to shove a premium plan down your throat.
I see your "Docker blows on Windows" and raise you "Docker blows on M1." At least you have WSL, Apple couldn't give half a fuck (and nor could Docker). Colima has been a lifesaver, but having used WSL Docker in the past and Linux Docker recently (I now favor Podman), M1 Docker is a complete shitshow. I'm slowly infesting our codebase with Podman/Buildah/Skopeo (also because they do things Docker can't), and hopefully we'll get to the point of using podman machine.
On Linux, podman is significantly different from docker: it uses user namespaces. So it is much more secure (assuming that security bugs related to Linux user namespaces, which have indeed been discovered, are still much rarer than people running untrusted container images). However, with security comes incompatibility. If the image does tricky system interaction, instead of just running user-space code and some standard cases like opening a socket, chances are that things will not work under podman without modification.
I think Docker folks are heavily focused on providing great local development experiences, which is a niche few other products are covering.
We are very happy that this discussion took place and that it sparked a nice and lively discussion about containers, Docker, Podman and all the other stuff between you and all the other professionals. We are also very grateful for all your criticism and corrections. The article is currently under review and we want to make sure to correct the previous mistakes in a transparent way. Seems like a few valid points were missed and some wrong assumptions were made. Nonetheless, I think the discussion that came out of this helped a lot of people get some useful insights into Docker, Podman, and containerization as a whole.
We are trying to correct all the wrong information, based on your discussion, and summarize it in the reworked article!
What can we take away from this? More research is necessary next time, and we need to challenge our own articles.
Again, thanks a lot for all your feedback! We appreciate it very much! There were a few new things that also I was able to learn in this process which is always nice.
Have a good one!
I think this is an area bearing improvement before most dev workflows can switch.
This isn't a critique of your comment. It's just that I don't understand what's going on here. Maybe this whole issue is a non-Linux kind of thing?
Testing stuff out with either docker compose or minikube, and otherwise deploying to k8s, has worked great for me.
Why would it be? Unless you jump through hoops, your music and photos aren't global to the machine; rootless containers just made containers fit in the traditional unix security model.
Navigating fragmented ecosystems sure is a pain... I get that different tools serve different needs, but having to learn a bunch of different things just to figure out which one to use gets really exhausting. Especially when you have to do it for tools at every layer of your stack.
I left .NET land for this very reason. JS pales in comparison to .NET framework fatigue
https://github.com/lima-vm/lima
If memory serves me right, it works with Podman.
Why?
Docker - one of the older and most popular runtimes for OCI, with all of the tooling you might possibly want; most of the problems are known and solutions are easy to find, vs venturing "off the happy path" (not everyone has the resources to try and figure out Podman compatibility oddness)
Docker Compose - ubiquitous and perhaps the easiest way to launch a certain amount of containers on a host, be it a local development machine, or a remote server (in a single node deployment), none of the complexity of Kubernetes, no need for multiple docker run commands either
Docker Swarm - most projects out there do not need Kubernetes; personally, i'm too poor to pay for a managed control plane or to host my own for a cluster; K3s and k0s are promising alternatives, but Docker Swarm also uses the Compose specification, which is far easier to work with, and most of the time you can effortlessly set up a docker-compose.yml based stack to run on multiple nodes, as needed; also, in contrast to Nomad, it comes out of the box if you have Docker installed; and when you don't want to mess around with a Kubernetes ingress and somehow feeding certificates into it, you can instead just run your own Apache/Nginx/Caddy instance and manage it like a regular container with host ports 80/443 (setting that up might be a bit more difficult with Kubernetes, because by default you get access to ports upwards of 30000)
Kubernetes with Docker as the runtime - maybe with something like K3s, if you need relatively lightweight Kubernetes but also want to figure out what is going on with individual containers through the Docker CLI which is familiar and easy to work with, to dig down vs what something like containerd would let you do
Long story short, choose whatever is the best suited solution for your own needs and projects. Just want things to work and be pretty simple, modern technologies, hyperscalability and ecosystem be damned? Docker/Compose/Swarm. Want something with a bit more security and possibly even for running untrusted containers, with lots of scalability and projects built around the technologies? Podman/containerd/Kubernetes.
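To make the Swarm point above concrete, the same Compose-format file can serve a single host and a Swarm cluster; image and service names here are illustrative:

```shell
# Sketch: one Compose-format file used both by docker compose (single host)
# and docker stack deploy (Swarm). Image and service names are illustrative.
cat > /tmp/stack.yml <<'EOF'
version: "3.8"
services:
  web:
    image: docker.io/library/nginx:alpine
    ports:
      - "80:80"
    deploy:          # honored by Swarm; largely ignored by plain compose
      replicas: 2
EOF
# Single node:  docker compose -f /tmp/stack.yml up -d
# Swarm:        docker swarm init && docker stack deploy -c /tmp/stack.yml mystack
```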
I've heard about Docker and Swarm being dead for years, yet it seems to work just fine. They even fixed the DNS weirdness on RPM distros (RHEL/Oracle Linux) in the 20.X releases i think, though personally i'm more inclined towards using the second-latest Ubuntu LTS because there's far less SELinux or other weirdness to be had (e.g. K3s clusters failing to initialize because of changes to cgroups). When it will actually die for real, i'll just use something like https://kompose.io/ to migrate over from the Compose format to Kubernetes.
Of course, none of that excuses you from having to learn Kubernetes, because that's what the industry has decided on. My approach is more akin to basing a new project on PHP 7 because you know that you don't need anything more.
On a different note, your employers asking you to set up Kubernetes and to launch Nexus, PostgreSQL and whatever else on a single node that has 8 GB of RAM, as well as run a bunch of Java services on it, can be challenging to say the least, especially when the cloud is not in the cards, there are no pre-existing clusters in the org, and there isn't the interest to get more resources; even if there were, thoughts along the lines of "why should we give this one project that many resources?" would be expressed. I'm slightly exaggerating, but oftentimes it can be akin to choosing to run Apache Kafka when RabbitMQ would have sufficed: someone else making the choice for you, pushing you into sub-optimal conditions and making you suffer as a result.
I recently went to Europe DevDays 2022 (https://devdays.lt/) and DevOps Pro Europe 2022 (https://devopspro.lt/) and one of the arguments expressed was along the lines of: "You should never host your own clusters, if you can. Just pay one of the big three platforms out there (AWS/GCP/Azure) to do it for you." What a crazy time to be alive, where running the full stack can be problematic and enterprise solutions are getting more and more detached from what smaller deployments and homelabs would actually need.
That said, Podman is getting closer and closer to full feature parity with Docker with every passing year and Kubernetes is also easier to run thanks to clusters like K3s/k0s/RKE and tools like Lens/k9s/Portainer/Rancher.
I could not find an answer for myself as to why Docker is anywhere close to dead, or why Podman is the thing I should use instead of Docker immediately.
Q 1. Docker has a policy change and your company may need to pay for it, if you have > 250 employees or > $10 million revenue. A 1: Indie/solo devs are out of scope. Enterprises are probably fine with that anyway.
Q 2. Docker has limits for pulls from Docker Hub!!!! You get 100 (200 with login) downloads per single IP per 6-hour interval. A 2: As already mentioned, switching to Podman while still using Docker Hub doesn't magically help. Moreover, in practice I find it totally fine for an indie/solo dev. Companies whose pull volume can be higher want a local registry in place anyway to ensure business continuity, so this doesn't bother them much.
Q 3. Running no background processes and running rootless is good because of ... A 3: On a dev env (your local laptop, for example) you do not care much; your goal is ease of use. On production, running rootless raises questions for me:
* how do you expect the firewall (iptables) to be updated for port forwarding?
* how do you expect networks and bridges to be organized without root?
* how do you expect auto-restart of a container to happen on failure without something supervising it?
* some security advice and mitigation guides recommend disabling user namespaces, and they were/are disabled by default in some distros (https://news.ycombinator.com/item?id=28054823); your security & system administration team may have such limits in place on production
* those who care about an intruder getting into a container and hijacking the system further use Firecracker or a similar approach anyway [for production]
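On the auto-restart question specifically, the usual rootless answer is to let systemd supervise the container instead of a root daemon; a sketch, where `myapp` is an illustrative container name:

```shell
# Sketch: have per-user systemd (not a root daemon) restart a rootless
# container on failure. `myapp` is an illustrative container name.
mkdir -p "$HOME/.config/systemd/user"
# Podman can generate a user unit from an existing container:
#   podman generate systemd --new --name myapp > ~/.config/systemd/user/myapp.service
#   systemctl --user daemon-reload && systemctl --user enable --now myapp.service
# Low ports are the real rootless limitation: by default only ports >= 1024
# can be bound (tunable via the net.ipv4.ip_unprivileged_port_start sysctl).
```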
So what is left in the "pros" column for Podman? Have I missed anything?
The market changed and Docker changed to stay relevant. Just that.