Source: I use Podman on a workstation where I SSH in as a bunch of different non-root users, and I've never had to think about it working.
I had a few issues in the beginning, but in the end the solutions were rather trivial. I had to:
- Delete config files from previous podman versions (pre 4)
- Enable the docker socket (for my user)
- Use docker compose 2 rather than "podman compose" or an older docker compose (shipped with the distro)
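The fixes above can be sketched roughly like this (a hedged sketch assuming a systemd-based distro and podman 4+; paths and unit names are the standard ones, but your setup may differ):

```shell
# Remove stale pre-4.x podman config that can conflict with newer versions
rm -rf ~/.config/containers

# Enable podman's Docker-compatible API socket for the current (non-root) user
systemctl --user enable --now podman.socket

# Point docker compose v2 at the podman user socket instead of a Docker daemon
export DOCKER_HOST=unix:///run/user/$(id -u)/podman.sock
docker compose up -d
```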
We mostly use docker-compose files for our dev setups, so I can't say if I'd run into issues with more elaborate setups. But I must say that it works extremely well for me.
https://docs.podman.io/en/stable/markdown/podman-kube-genera...
It also lets you generate those manifests from existing containers or pods. No need to learn the compose spec, less friction dev and prod without stop-gap measures like Kompose
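As a sketch of that workflow (container name and image are made up for illustration):

```shell
# Start an ordinary container
podman run -d --name web -p 8080:80 docker.io/library/nginx

# Export it as a Kubernetes Pod manifest instead of writing one by hand
podman kube generate web > web-pod.yaml

# Recreate the pod from that manifest (works locally or on a real cluster)
podman kube play web-pod.yaml
```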
However, if your company has already invested heavily in Docker, the fact that something works OOTB in another piece of software isn't enough to convince anyone to switch.
On top of that, our DevOps team is pretty happy with Docker :)
Anyway, surprised that so many comments wonder about the usefulness of docker rootless in a shared environment. It is my main approach for separation of concerns in my homelab. I always use docker rootless to share resources with many isolated apps. I wrote a blog post about how to host Mastodon with docker rootless [1].
[1]: https://du.nkel.dev/blog/2023-12-12_mastodon-docker-rootless...
Somebody tell me what I'm missing.
A more relevant use case for industry might be a CI machine that you want to get better utilization out of. Easy, just start multiple CI runners under different users. "Just use Kubernetes" I hear someone screaming? Well, sometimes you need CI on macOS.
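A rough sketch of that on a Linux CI box (user names are made up; `dockerd-rootless-setuptool.sh` ships with Docker's rootless extras, and lingering lets each user's daemon run without a login session):

```shell
# One rootless Docker instance per CI user, isolated from the others
for u in ci-runner1 ci-runner2; do
  sudo useradd --create-home "$u"
  sudo loginctl enable-linger "$u"   # keep the user's systemd services running
  sudo -iu "$u" dockerd-rootless-setuptool.sh install
done
```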
I guess you may be using SELinux but even so, users and groups are natural parts of expressing and enforcing such constraints.
I hate the cattle not pets analogy, do you know how well taken care of most cattle are?
If you're operating at Google/FB/Apple scale then yes, you can take this approach. There are lots of businesses that DON'T need, want, or have to scale to that level. There are lots of systems where this approach breaks down.
Docker is great if you're deploying some sort of mess of software. Your average node/python/ruby setup with its run time and packages + custom code makes sense to get shoved into a container and deployed. Docker turns these sorts of deployments into App Store like experiences.
It doesn't mean it's the only, or even a good, approach... if your dev env looks like your production env, and you're deploying binaries, then pushing them out in a cgroup, or even as an old-school "user", is not only viable, it has a lot less complexity.
As a bonus you can SSH in, you have all the tools available on a machine, you can tweak a box to add in perf/monitoring/debug/testing tools on there to get to the source of those "it only happens here" bugs...
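For example, the "old-school user plus cgroup" deployment can be as small as a systemd unit (everything here is a hypothetical sketch: the user, binary path, and limits are placeholders):

```shell
# Run a plain binary under its own user, with cgroup resource limits,
# no container runtime involved
sudo useradd --system myapp
sudo tee /etc/systemd/system/myapp.service >/dev/null <<'EOF'
[Service]
User=myapp
ExecStart=/usr/local/bin/myapp
MemoryMax=512M
CPUQuota=50%

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl enable --now myapp.service
```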
I used to work at Yandex which is not Google but had hundreds of thousands of servers in runtime nevertheless. So definitely cattle.
Still the CTO of search repeatedly said things like "It's your production? Then ssh into a random instance, attach with gdb and look at the stacks. Is it busy doing what you think it should be doing?"
Dealing with cattle means spending a lot of time in the barn not in the books.
Rootless docker stores images for the user executing it and that is all, same as podman.
We've used it in a single-user (a docker user) and multi-user (user for each dev) environment.
Most, if not all, containers work fine; there are some, like mailcow, which don't work well with it.
If you have multiple IPs on the one machine, there is a longstanding bug that means you can't bind the same port on different IPs, e.g. IP1:80 and IP2:80. The workaround is a separate rootless docker user + runtime for each container that shares ports, which is nasty.
In a multi-user environment we simply setup rootless docker under each devs user, so they have their own runtime and their own containers isolated from other devs. This works really well.
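Run as each developer's own user, the setup looks roughly like this (a sketch assuming Docker's rootless extras and systemd user services; the socket path is the standard rootless default):

```shell
# Install and start a rootless daemon owned by this user
dockerd-rootless-setuptool.sh install
systemctl --user enable --now docker

# Talk to this user's daemon; images and containers here are
# invisible to every other user on the box
export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock
docker info
```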
Overall it works well for us.
Niche, but a project we have been wanting for a while. Base containers get OSS'd etc. This stuff is twisty!
[1] https://sarusso.github.io/blog/container-engines-runtimes-or...
> It indeed does not enforce (or even permit) robust isolation between the containers and the host, leaving large portions exposed. … More in detail, directories as the /home folder, /tmp, /proc, /sys, and /dev are all shared with the host, environment variables are exported as they are set on host, the PID namespace is not created from scratch, and the network and sockets are as well shared with the host. Moreover, Singularity maps the user outside the container as the same user inside it, meaning that every time a container is run the user UID (and name) can change inside it, making it very hard to handle permissions.