It's less about particular features (none of these is individually harmful, though I personally find the monolithic architecture a bit of an issue), and more about taking the time to actually define stable and complete APIs and boundaries. Stabilization (from a technical-reliability POV) is all good and fine, but, for instance: is there a way to easily find out containers' names and links, and the exact forwarded ports, from the API, without regexping `docker ps` output and "tcp/1234"-style strings that are meant for human eyes only? Is it still Docker's official position that the CLI is the only official interface, and that using the HTTP API is frowned upon? Is all information about a container available in `docker inspect` (or its API equivalent), or do I still need to parse `docker ps` to find out about names and links?

Is the registry/index mess ever going to go away? Did it even make sense (other than for vendor lock-in) at any point? Since the Docker and registry APIs are HTTP, why can't I use HTTP auth (even one implemented by an nginx proxy) instead of certs (which are fairly new) or the centralized index for authentication? It should be as simple as using `https://user:pass@IP/…` URLs; what's the problem with that?

While we're at it, why on Earth is the HTTP Docker endpoint specified as `tcp://IP:PORT` rather than `https://IP:PORT`, which would make it possible and easy to proxy, would allow HTTP auth in an intuitive way, and would generally reflect how it actually works rather than obscure it? Should I go on?
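And here is roughly what the HTTP-auth wish amounts to: a plain nginx reverse proxy doing basic auth in front of a private registry. Every hostname, path, and port below is hypothetical, and this is a sketch of the idea rather than any Docker-blessed setup:

```nginx
# Hypothetical nginx front-end: HTTP basic auth in front of a private
# registry listening on localhost:5000. Names and paths are illustrative.
server {
    listen 443 ssl;
    server_name registry.example.com;

    ssl_certificate     /etc/nginx/registry.crt;
    ssl_certificate_key /etc/nginx/registry.key;

    location / {
        auth_basic           "Registry";
        auth_basic_user_file /etc/nginx/registry.htpasswd;
        proxy_pass           http://127.0.0.1:5000;
        proxy_set_header     Host $host;
    }
}
```

If the endpoints were just spoken of as `https://IP:PORT`, a `https://user:pass@registry.example.com/…` URL would fall out of this for free; that's the whole argument.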
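To make the "names and ports without regexping" complaint concrete, here is a minimal sketch of what consuming structured `docker inspect` output could look like instead of scraping `docker ps`. The JSON sample below is hand-written from memory of the inspect/Remote API response shape (`Name`, `NetworkSettings.Ports`), not captured from a live daemon, so treat the exact field layout as an assumption:

```python
import json

# Hand-written sample resembling `docker inspect <container>` JSON (trimmed).
# The NetworkSettings.Ports shape is assumed, not taken from a real daemon.
inspect_output = json.loads("""
[
  {
    "Name": "/web",
    "NetworkSettings": {
      "Ports": {
        "80/tcp": [{"HostIp": "0.0.0.0", "HostPort": "8080"}],
        "443/tcp": null
      }
    }
  }
]
""")

def forwarded_ports(container):
    """Return (container_port, proto, host_ip, host_port) tuples for every
    published port -- no human-oriented string scraping involved."""
    result = []
    ports = container.get("NetworkSettings", {}).get("Ports") or {}
    for spec, bindings in ports.items():
        port, _, proto = spec.partition("/")
        for b in bindings or []:  # unpublished ports map to null
            result.append((int(port), proto, b["HostIp"], b["HostPort"]))
    return result

for c in inspect_output:
    print(c["Name"], forwarded_ports(c))
# → /web [(80, 'tcp', '0.0.0.0', '8080')]
```

The point being: if the daemon exposed (and documented) this as the stable contract, nobody would ever need to parse `tcp/1234`-style columns again.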