> I can name many things I do not like about it, and I struggle to find things I like about its tooling.
Please share.
FROM [foo]: [foo] is a reference that is generally not namespaced (ubuntu is relative to some registry, but it doesn't say which one) and it's expected to be mutable (ubuntu:latest today is not the same as ubuntu:latest tomorrow).
There are no lockfiles to pin and commit dependency versions.
Builds are non-reproducible by default. Every default represents worst practices, not best practices. Commands can and do access the network. Everything can mutate everything.
Mostly as a result of all of the above, build layer caching is basically a YOLO situation. I've had a build end up with dependencies that were literally more than a year out of date, because I built on a system that hadn't done that particular build in a while, had a layer cached (by name!), and I forgot to specify a TTL when I ran the build. But, of course, there is no correct TTL to specify.
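The closest thing to pinning that exists is referencing images by digest, and you have to do it by hand -- nothing generates or updates a lockfile for you. A sketch, assuming the standard docker CLI:

```shell
# Resolve a tag to its current content digest:
docker pull ubuntu:24.04
docker image inspect --format '{{index .RepoDigests 0}}' ubuntu:24.04
# prints something like ubuntu@sha256:<64 hex chars>

# Then, in the Dockerfile, pin by digest instead of by mutable tag:
#   FROM ubuntu@sha256:<digest from above>
```

And now you get to update that digest manually, forever, because there's no tooling equivalent of `npm update` plus a committed lockfile.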
Every lesson that anyone in the history of computing has ever learned about declarative or pure programming has been completely forgotten by the build systems.
Why on Earth does copying in data require spinning up a container?
Moving on from builds:
Containers are read-write by default, not read-only.
Things that are logically imports and exports do not have descriptive names. So your container doesn't expose a web service called 'API'; it exposes port 8000. And you need to remember it, and if the image changes the port, you lose, and there is no good way for the tooling to help. Similarly, volumes need to be bound to paths, and there is nothing resembling an interface definition to help get it right. And, since containers are read-write by default, typoing a mount path results in an apparently working container that loses data.
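To make this concrete, here is the entire "interface definition" of a typical service in a compose file (the service and volume names below are invented for illustration):

```yaml
services:
  app:
    image: example/app:latest
    ports:
      - "8000:8000"       # no name; you just have to know 8000 means "the API"
    volumes:
      - appdata:/var/lib/app/data   # typo this path and writes silently land in the COW layer
volumes:
  appdata:
```

Nothing checks that 8000 is the port the image actually serves on, or that `/var/lib/app/data` is the path the image actually writes to.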
The tooling around what constitutes a running container is, to me, rather unpleasant. I can't make a named group of services, restart them, possibly change some of the parts that make them up, and keep the same name in a pleasant manner. I can 'compose down' and 'compose up' them and hope I get a good state. Sometimes it works. And the compose files and quadlets are, of course, not really compatible with each other, nor are they compatible with Kubernetes without pulling teeth.
I'm sure I could go on.
I think you're conflating software builds with environment builds - they are not the same, and people are after different use cases with each.
> Why on Earth does copying in data require spinning up a container?
It doesn't.
> Containers are read-write by default, not read-only.
I don't think you really understand containers, since COW is the default. Containers are not "read-write" by default in the context of the underlying image. If you want to block writing to the filesystem, that is trivial.
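For what it's worth, it really is one flag (the image name below is a placeholder):

```shell
# Read-only root filesystem; only explicit tmpfs/volume mounts stay writable:
docker run --read-only --tmpfs /tmp example/app
```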
> Things that are logically imports and exports do not have descriptive names. So your container doesn't expose a web service called 'API'; it exposes port 8000. And you need to remember it, and if the image changes the port, you lose, and there is no good way for the tooling to help. Similarly, volumes need to be bound to paths, and there is nothing resembling an interface definition to help get it right. And, since containers are read-write by default, typoing a mount path results in an apparently working container that loses data.
Almost all of this is wrong.
> And the compose files and quadlets are, of course, not really compatible with each other, nor are they compatible with Kubernetes without pulling teeth.
What? This gets wilder as you go on. Why would you expect compose files to be "compatible" with k8s? They are two different ways to orchestrate containers.
Pretty much everything you've outlined is, as I see it, a misunderstanding of what containers aim to solve and how they're operationalized. If all of these things were true, container usage wouldn't have been adopted to the point where it's as commonplace as it is today.
> I think you're conflating software build with environment builds - they are not the same and have different use cases people are after.
They're not so different. An environment is just big software. People have come up with schemes for building large environments for decades, e.g. rpmbuild, nix, Gentoo, whatever Debian's build system is called, etc. And, as far as I know, all of these have each layer explicitly declare what it mutates; all of them track the input dependencies for each layer; most or all of them block network access in build steps; and some of them try to make layer builds explicitly reproducible. Software build systems (make, waf, npm, etc.) have rather similar properties. And then there's Docker, which does none of these.
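For contrast, a sketch of what "declare your inputs" looks like in one of those systems (Nix; the URL and hash are placeholders):

```nix
# A Nix derivation pins every input by hash, and the build runs in a
# sandbox with no network access by default.
stdenv.mkDerivation {
  pname = "example";
  version = "1.0";
  src = fetchurl {
    url = "https://example.org/example-1.0.tar.gz";
    sha256 = "...";  # the build fails unless the download matches this hash
  };
  buildInputs = [ zlib ];
}
```

Whether or not you like Nix, the point stands: the inputs are declared, hashed, and checked, and the build can't quietly fetch something else tomorrow.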
> > Containers are read-write by default, not read-only.
> I don't think you really understand containers since COW is the default. Containers are not "read-write" by default in the context of the underlying image. If you want to block writing to the file system that is trivial.
Right. The issue is that the default is wrong. In a container:
$ echo foo >the_wrong_path
works, by default, using COW. No error. And the result is even kind of persistent -- it lasts until the "container" goes away, which can often mean "exactly until you try to update your image". And then you lose data.

> > Things that are logically imports and exports do not have descriptive names. So your container doesn't expose a web service called 'API'; it exposes port 8000. And you need to remember it, and if the image changes the port, you lose, and there is no good way for the tooling to help. Similarly, volumes need to be bound to paths, and there is nothing resembling an interface definition to help get it right. And, since containers are read-write by default, typoing a mount path results in an apparently working container that loses data.
> Almost all of this is wrong.
I would really like to believe you. I would love for Docker to work better, and I tried to believe you, and I looked up best practices from the horse's mouth:
https://docs.docker.com/get-started/docker-concepts/running-...
and
https://docs.docker.com/get-started/docker-concepts/running-...
Look, in every programming language and environment I've ever used, even assembly, an interface has a name. If I write a function, it looks like this:
void do_thing();
If I write an HTTP API, it has a name, like GET /name_goes_here. If I write a class or interface or trait, its methods have names. ELF files expose symbols by name. Windows IIRC has a weird old system for exporting symbols by ordinal, but it’s problematic and largely unused. But Docker images expose their APIs (ports) by number. The welcome-to-docker container has an interface called '8080'. Thanks.

At least the docs try to remind people that the whole mechanism is "insecure by default".
I even tried asking a fancy LLM how to export a port by name, and the LLM (as expected) went into full obsequious mode, told me it's possible, gave me examples that don't do it, told me that Docker Compose can do it, and finally admitted the actual answer: "However, it's important to note that the OCI image specification itself (like in a Dockerfile) doesn't have a direct mechanism for naming ports."
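The closest workaround I could find is stuffing the name into a label, which nothing standard actually consumes (the label key below is made up):

```dockerfile
EXPOSE 8080
# Ad-hoc metadata only; no tooling interprets this as a port name:
LABEL org.example.ports.api="8080"
```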
> > And the compose files and quadlets are, of course, not really compatible with each other, nor are they compatible with Kubernetes without pulling teeth.
> What? This gets wilder as you go on. Why would you expect compose files to be "compatible" with k8s? They are two different ways to orchestrate containers.
I'd like to have some way for a developer to declare that their software can be run with the 'app' container and a 'mysql' container and you connect them like so. Or even that it's just one container image and it needs the following volumes bound in. And you could actually wire them up with different orchestration systems, and the systems could all read that metadata and help do the right thing. But no, no such metadata exists in an orchestration-system-agnostic way.
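Something like this is what I have in mind -- entirely hypothetical, every key below is invented, and no such format exists today:

```yaml
# Hypothetical orchestrator-agnostic interface declaration for an image.
interfaces:
  api:
    kind: http
    port: 8000
volumes:
  data:
    path: /var/lib/app/data
    required: true
depends-on:
  mysql:
    image: mysql
```

Then compose, quadlets, and Kubernetes could each read the same declaration and at least validate the wiring, instead of everyone re-deriving port numbers and mount paths from the README.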
> If all of these things were true container usage, in general, wouldn't have been adopted to the point where it's as commonplace as it is today.
Software adoption doesn't work like that. Consider git: it has near-universal adoption, but there is a very strong consensus in the community that many of the original CLI commands are really bad.