Or a constantly-updating behemoth, running as root, installing packages from yet another unauditable repository chain?
And security updates, as you said, are needed regardless of whether you run Docker on top. I think Docker is a needless complexity and security risk.
The second option is standardized and usually the same one or two commands to run anywhere.
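To make the "same one or two commands" claim concrete, here is a minimal sketch: with a docker-compose.yaml like the one below checked into the repo (the service name, image, and ports are hypothetical placeholders), starting the app is `docker compose up -d` and stopping it is `docker compose down`, regardless of what the app actually is.

```yaml
# docker-compose.yaml -- hypothetical example app.
# Run with:  docker compose up -d
# Stop with: docker compose down
services:
  web:
    image: nginx:1.27        # any image works; the run commands stay the same
    ports:
      - "8080:80"            # host port 8080 -> container port 80
    restart: unless-stopped  # come back up after reboots/crashes
```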
Granted, you do need to learn how Docker works, and be ready to help others do likewise if you're onboarding folks with little or no prior experience of Docker to a team where Docker is used. That's certainly a tradeoff you face with Docker - just as with literally every other shared tool, platform, codebase, language, or technological application of any kind. The question that wants asking is whether, in exchange for that increased effort of pedagogy, you get something that makes the increased effort worthwhile.
I think in a lot of cases you do, and my experience has borne that out; software in containers isn't materially more difficult to maintain than software outside them if you know what you're doing, and in many cases it's much easier.
I get that not everyone is going to agree with me here, nor do I demand everyone should. But it would be nice if someone wanted to take the time to argue the other side of my claim, rather than merely insisting upon it with no more evident basis than arbitrarily selected first principles given no further consideration in the context of what I continue to hope may develop into a discussion.
Whatever setup your application needs is still a necessary step in the process. But now you've not only added more software (Docker itself, with its registry) and Docker's state on top of the application's state, you've also introduced multiple virtual filesystems, a layer of mapping between those and locations on the host, and mappings between the container's ports and the host's ports. There is no longer a single truth about the host system. The application may see one thing and you, the owner, another. If the application says "I wrote it to /foo/bar", you may look in /foo/bar and find that /foo doesn't even exist.
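The "/foo/bar" situation above corresponds to a volume mapping like the following sketch (all paths, image names, and ports here are hypothetical): inside the container the app really is writing to /foo/bar, but on the host that data lives wherever the mapping put it.

```yaml
# Hypothetical compose snippet illustrating the indirection being described.
services:
  app:
    image: some-app:latest       # placeholder image name
    volumes:
      # The container's /foo/bar is actually /srv/app-data on the host.
      # Look for /foo/bar on the host itself and it may not exist at all.
      - /srv/app-data:/foo/bar
    ports:
      # The app believes it listens on 3000; the host exposes 8080.
      - "8080:3000"
```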
All of that is indirection and new ways things can be that did not exist if you just ran your code natively. What is complexity if not additional layers of indirection and the increase of ways things can be?
To host something as a docker container I need 2 things: to know how to host docker, and a docker image. In fact, not even an image, just a Dockerfile/docker-compose.yaml in my source code. If I need to host 1000 apps as docker containers, I need 1000 Dockerfiles and still only need to know (and remember) 1 thing: how to host docker. That's 1 piece of knowledge I need to keep in my head, and 1000 I keep on a hard drive, most of the time not even caring what instructions are inside them.
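That "one file in the source code" is typically just a few lines. A hypothetical example for a Node app (the base image, port, and start command are assumptions and vary per stack, but the hosting side never needs to care which):

```dockerfile
# Hypothetical Dockerfile for one of the 1000 apps.
FROM node:22-slim             # base image provides the runtime
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev         # install only production dependencies
COPY . .
EXPOSE 3000                   # the port the app listens on
CMD ["node", "server.js"]     # how to start it
```

The app's maintainers keep this file current; the person hosting it runs the same build-and-run commands as for every other app.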
If I need to host 1000 apps without Dockerfiles, I need to keep 1000 pieces of knowledge in my head. thttpd here, nginx in front of a Java server there, a very simple and obvious postgres+redis+elastic+elixir stack for another app… Yeah, sounds fun.