Nope! It’s not wrong in any way at all!
You’re thinking of how it’s built. I’m thinking of what it does (for me).
I tell it a package (image) to fetch, optionally at a version. It has a very large set of well-maintained, up-to-date packages (images). It's built in; I don't even have to configure that part, though I can have it use other sources for packages if I want to. It fetches the package. If I want it to update it, I can have it do that too. Or uninstall it. Or roll back the version. I am 100% for-sure using it as a package manager, and it does that job well.
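The package-manager half of that looks roughly like this (the image name and tags here are just stand-ins for whatever service you run):

```shell
# "Install" a package (image), pinned to a version (tag)
docker pull nginx:1.27

# "Update": pull a newer tag (or re-pull a moving tag like :latest)
docker pull nginx:1.28

# "Roll back": the older pinned tag is still there to run
docker image ls

# "Uninstall": remove the image
docker rmi nginx:1.28
```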
Then I run a service with a simple shell script (actually, I combine the fetching and running, but I'm highlighting the two separate roles it performs for me). It takes care of managing the process (container, which is really just a very-fat process for these purposes). If I like, it restarts it when it crashes. It auto-starts it when the machine reboots—all my services come back up on boot, and I've never touched systemd (which my Debian uses); Docker is my interface to that, and I didn't even have to configure it to do that part. I'm sure it's doing systemd stuff under the hood, at least to bring the Docker daemon up, but I've never touched that, and it's not my interface to managing my services. The docker command is. Do I see what's running with systemd or ps? No, with docker. Start, restart, or stop a service? docker.
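The service-manager half is a handful of commands along these lines (names are illustrative; `--restart unless-stopped` is what gets both crash-restart and come-back-up-on-boot behavior):

```shell
# Run a service, restarting on crash and on reboot
docker run -d --name web --restart unless-stopped nginx:1.27

# What's running? (instead of ps or systemctl)
docker ps

# Start / stop / restart a service (instead of systemctl)
docker stop web
docker start web
docker restart web

# Logs (instead of journalctl)
docker logs web
```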
I’ve been running hobbyist servers at home (and setting up and administering “real” ones for work) since 2000 or so, and this is the smoothest way to do it that I’ve seen, at least for the hobbyist side. Very nearly the only roles I’m using Docker to fill, in this scenario, are package manager and service manager.
I don’t care how it works—I know how, but the details don’t matter for my use case, just the outcomes. The outcome is that I have excellent, updated, official packages for way more services than are in the Debian repos, that leave my base system entirely alone and don’t meaningfully interact with it, with config that’s highly portable to any other distro, all managed with a common interface that would also be the same on any other distro. I don’t have to give any shits about my distro, no “oh if I want to run this I have to update the whole damn distro to a new major version or else manually install some newer libraries and hope that doesn’t break anything”, I just run packages (images) from Docker, update them with Docker, and run them with Docker. Docker is my UI for everything that matters except ZFS pool management.
> It's also not cross-platform, or at least 99.999% of images you might care about aren't— they're Linux-only.
I specifically wrote cross-distro for this reason.
> There actually are cross-platform package managers out there, too. Nix, Pkgsrc, Homebrew, etc.
Docker “packages” have a broader selection and better support than any of those, as far as services/daemons go; Docker is guaranteed to keep everything away from the base system and tidy, for better stability; and it provides a common interface for configuring where files & config go, for easier and more-confident backup.
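That last point, concretely: bind mounts let you put every service's state and config under one host directory of your choosing, so backup is just backing up that tree. A sketch using the official nginx image (host paths are illustrative):

```shell
# All of this service's files live under /srv/web on the host;
# the container paths are where the nginx image expects them
docker run -d --name web --restart unless-stopped \
  -v /srv/web/conf:/etc/nginx/conf.d:ro \
  -v /srv/web/html:/usr/share/nginx/html:ro \
  -p 8080:80 \
  nginx:1.27
```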
I definitely use it mainly as a package manager and service manager, and find it better than any alternative for that role.