This kind of article just adds to the Jonesing-for-shiny-things mentality that really doesn't do the engineering world any favours.
I worked for a market-leading e-commerce company that used no containers for the past 3 years.
CI produced environment-agnostic packages, which were deployed with Ansible to any environment and auto-scaled using golden AMIs, EBS snapshots, and AWS ASGs. Almost the same concept, but far less complex.
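As a rough sketch of that kind of deploy, an Ansible play might look like the following (the artifact name, paths, and service name here are illustrative placeholders, not details from the original setup):

```yaml
# Hypothetical Ansible play: push a CI-built, environment-agnostic package
# to app hosts and restart the service. All names are placeholders.
- hosts: app_servers
  become: true
  vars:
    artifact: "app-{{ app_version }}.tar.gz"
  tasks:
    - name: Copy the CI-built package to the host
      copy:
        src: "dist/{{ artifact }}"
        dest: "/opt/app/{{ artifact }}"

    - name: Unpack the release in place
      unarchive:
        src: "/opt/app/{{ artifact }}"
        dest: /opt/app/current
        remote_src: true

    - name: Restart the service to pick up the new release
      service:
        name: app
        state: restarted
```

The same playbook runs against any environment by swapping the inventory, which is what makes the packages "agnostic".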
We contemplated moving to ECS/EKS to stay "current" and to make test/development environments easier to create, but in the end, with a small team, it would have added an unnecessary burden and reinvented the wheel on something that was working fine for the time being.
I've done this with Debian, it worked great.
I like this; I wish more orgs would work like that.
Kubernetes feels mostly like a very opaque layer abstracting the underlying cloud vendor... yet the cloud vendors are managing to get their lock-in back into Kubernetes.
At my last company we implemented a couple of different services as just an AWS-provided AMI with a small init script that would install Docker if necessary, pull our container, and run it with the ports published properly. That worked great in ASGs and totally obviated the need to think about machine provisioning beyond 3 or 4 simple steps that everyone understood. I feel like Docker gets a bad rap _because_ of Kubernetes, when in fact Docker on its own solves a real problem in a pretty elegant way.
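A sketch of that kind of init script, here written as a generator so the user-data file can be inspected (the image name, registry, and ports are placeholders I've made up, not details from the comment):

```shell
#!/bin/sh
# Generate a hypothetical EC2 user-data script of the kind described above:
# install Docker if missing, pull the app image, run it with a published port.
cat > user-data.sh <<'EOF'
#!/bin/bash
set -eu

# Install Docker only if it isn't already baked into the AMI
if ! command -v docker >/dev/null 2>&1; then
    yum install -y docker          # Amazon Linux; use apt-get on Debian/Ubuntu
    systemctl enable --now docker
fi

# Pull the image and run it, mapping host port 80 to the app's port 8080
docker pull registry.example.com/our-app:latest
docker run -d --restart=always -p 80:8080 registry.example.com/our-app:latest
EOF

echo "wrote user-data.sh"
```

Pass the generated `user-data.sh` as the instance user data in the ASG's launch template; every instance then provisions itself identically on boot.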
No small site needs kubernetes/docker orchestration and that's fine.
Having said that, I agree 100% that for most small/medium-sized companies Kubernetes is overkill that creates more problems than it solves.
I'm pretty sure a lot of companies use docker simply to ease deployment of internal apps.
This is such a silly and unrealistic argument - no one is a docker-only expert. And docker is a desirable skill, just not on its own. It's like saying that git-only experts are unemployable.
Except not really... The original article's writer doesn't know what they're talking about. If anything, knowing Docker has increased my employability in the new K8s-centric world.
1 - https://www.amazon.com/Deployment-Docker-continuous-integrat...
I tried to illustrate with some concrete examples, like being asked about pods and ingress in interviews, which are Kubernetes-specific. Experience with Docker doesn't help much here; you've got to keep up with Kubernetes. You could de facto fail the interview if the interviewer realizes you worked with Docker alone, not Kubernetes.
Companies are super biased toward kubernetes. They're really looking for the unicorn with years of kubernetes experience in production.
> Kubernetes has succeeded where docker failed. Management buy-in.
This must be one of the silliest articles I've read in a long time. Computer science and engineering do not revolve around the latest devops flavour du jour. It will be something else in three years' time anyway.
The real innovation around Docker was taking existing building blocks which were not straightforward to use on their own (Linux cgroups, OverlayFS) and bringing them together into a cohesive package that's accessible to any developer.
The Linux features like cgroups/OverlayFS etc. that were used to deliver reproducibility at an acceptable performance cost are more of an implementation detail than the actual innovation, imo. I think one of the co-founders of Docker might agree [1].
[1]: https://twitter.com/solomonstre/status/1111004913222324225
Job postings most definitely do.
These skills are transferable from one implementation to another. It makes no sense in this age to put all our eggs in one Kubernetes basket just to be hireable. Expertise in the underlying computer science and engineering is far more important.
Is there some kind of "Kubernetes-light" out there? Something in between running services like Nginx and Postgres on bare Linux machines and a full (and, I think, complex) Kubernetes setup? It's important to say that I don't need any scaling capabilities (apart from maybe some load balancing in case of a machine failure).
K3s (run on the same hardware) has a similar API to full K8s, which we had previous experience with, and handled container lifecycles much more robustly.
Nomad+Consul is the definition of easy.
Has been posted here a few times.
Out of the box it's quite good, depending on what you're doing. Once you have cert-manager issuing you free certs, Linkerd managing a service mesh, and Stackdriver giving you an entire ops stack, it's a bit hard to go back.
Note: I’ve not yet tried it, but when we switch this was one of the things I was going to try first.
I spent hours trying to make Helm work on my Mac and in the end gave up. The original error was with comparing floats in the latest version; that error seems to pop up every now and then in their issues list. And there is no way to install an earlier version, not with brew anyway. Installing from binaries or source is nightmarish, and when you do, it might not like your minikube setup. And yadda yadda yadda...
It feels strange to me that the trend nowadays is to accept these complicated systems and be amazed by how complicated they are...
But yep, I'd agree with the general premise here - with the emergence of tools like cri-o[0], podman and buildah (which let you build and ship container images without the need to run a background daemon like docker at all, avoiding the associated operational/security/system overheads) - docker may need to evolve or it'll quickly become less favourable.
Project Atomic[1] runs a good PPA with many of these packages for anyone interested and using Ubuntu.
Project Atomic's website is down at the moment - checking their GitHub, the site hasn't been updated in a while? https://github.com/projectatomic/atomic-site
Links for future reference, for myself and others:
Podman - https://podman.io/
Buildah - https://buildah.io/
Open Container Initiative - https://www.opencontainers.org/
For starters, you couldn't expose ports as a standard user running podman last time I used it. Also, every container got its own conmon process, so there's still an overhead; it's just done differently.
I guess it's better to say that only a monitoring daemon is required with this setup (rather than all of the additional daemon services that docker provides).
Re: rootless podman, it looks like there's a good resource to track progress here: https://github.com/containers/libpod/blob/v1.6.2/rootless.md - that must be a common ask, could be interesting to track.
(I'm definitely guilty of being overoptimistic about these tools, but do hope they improve because the principles behind them seem very sound)
To force this issue, Fedora has made cgroups v2 the default, and mandatory, in the upcoming Fedora 31, causing Docker to fail to run. https://github.com/docker/for-linux/issues/665
Podman (and other docker equivalents) have supported cgroups v2 for years.
I suspect that k8s will move away from docker to recommending one of the alternatives pretty soon.
There are sections on using CRI-O on AWS EKS.
I bet in about 5 years time we will be reading a similar article about Kubernetes.
Use the right tool for the job, please. Trying to force something just because it's the buzzword of the day will only waste money and bring suffering.
The overall concept is pretty simple: you create a Deployment that spins up Pods, which are your containers. You create Ingresses and Services to direct traffic to the Pods. You configure it all with environment variables through ConfigMaps and Secrets.
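Under some assumed names (an app called `web`, a placeholder image and hostname — none of these come from the comment), that concept maps to manifests roughly like these:

```yaml
# Hypothetical minimal manifests illustrating the Deployment -> Service ->
# Ingress flow described above. Names, image, and host are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.0
          ports:
            - containerPort: 8080
          envFrom:
            - configMapRef:
                name: web-config    # plain settings as env vars
            - secretRef:
                name: web-secrets   # sensitive settings as env vars
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
    - host: web.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```

`kubectl apply -f` on a file like this gives you the whole chain: external traffic hits the Ingress, which routes to the Service, which load-balances across the Pods.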
However, there are still so many one-line tweaks you need to add to the YAML, or weird networking issues, or sets of commands you have to type in each time, or permissions that are hard to configure and manage... And creating a cluster is a pain unless you use something like kops. Great tool, but it too takes a few hours to figure out even the basics.
I think in time, Kubernetes will get worked out. I love the core of Kubernetes. It took so, so long to figure out the rest.
Also, I have never met a dedicated "Docker expert" as the article calls it. I mean, is there any company out there hiring people who only know Docker? Does that make any sense?
Docker may get replaced by alternatives as they gain traction over time, but I don't think this will happen all of a sudden - Docker is still relevant, for good or for bad.
In my experience Docker is almost as trouble-free as it should be, with straightforward tools to make mistakes and undo them; it requires good engineers who know what they want, not wizards who know how to get it.
This is a great cautionary tale for founders and an awesome example of hubris at play.
Docker's biggest problem was that they provided tremendous value with their opensource product, leaving few to have any justifiable reason to pay them money.
Docker courted Riot Games for years, until Riot finally flat-out told them they would never see a penny. There are many things that can be learned from a business perspective here...
It gets harder and harder to find a stack that doesn't rely on it.
At my company, we chose to use ECS/Fargate when possible. It integrates nicely with SSM Parameter Store for config and secrets, and has a simple service discovery feature based on DNS.
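For example, a container definition in an ECS task can reference Parameter Store values directly (the ARN, account ID, and names below are placeholders for illustration):

```json
{
  "name": "web",
  "image": "registry.example.com/web:1.0",
  "secrets": [
    {
      "name": "DB_PASSWORD",
      "valueFrom": "arn:aws:ssm:us-east-1:123456789012:parameter/prod/db_password"
    }
  ],
  "environment": [
    { "name": "APP_ENV", "value": "production" }
  ]
}
```

ECS resolves the `valueFrom` ARN at task start and injects it as an environment variable, so the secret never has to live in the task definition itself.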
A few services run on EC2 + ASG, using AMIs built with Ansible and Packer.
Are we missing something by not using Kubernetes? Is the experience so amazing, compared to ECS? I don't care about vendor lock-in.
DevOps/SRE jobs are full-on discriminating for Kubernetes experience, not Docker, and preferably on their exact stack (AWS ECS, EKS, GKE, etc.)... it can get real tough as a job seeker if you're not on it.
But the irony is that the Docker infrastructure is a critical dependency for the vast majority of K8s users. And if it falls apart, a lot of stuff is going to break. I hope someone has some contingency plans for Docker Hub going away.
AWS, Google, and Azure should already have mirrors in place for their own offerings, and should be ready to substitute for Docker Hub.
We use docker as part of our CI, because that's what Gitlab uses for our CI system. It works very well. Of course we could use podman locally (and I do on some machines), but Gitlab will still be using docker for us.
For those who don't know, Kubernetes is a container orchestrator. That means it's for when you have lots of containers, hundreds or thousands, and lots of servers to run them on. Instead of wiring them up and deploying them manually, Kubernetes makes it easy: it decides which server to run each container on and wires them together, and if a server goes down, it restarts the lost containers on other servers, provided you have the capacity.
Imagine that Docker is a computer program and Kubernetes is the operating system.
The point is really about docker, not docker swarm. Kubernetes is integrating the whole ecosystem vertically and it's being leveraged to push out docker. There are lots of actors at play incentivized and actively working against docker (not just docker swarm).
I guess it's more of a business and marketing lesson if anything.
Is that true on Ubuntu/Debian? I couldn't find a source for this.
One article here; Debian 8 (Jessie) is from 2015. https://mariadb.com/kb/en/library/moving-from-mysql-to-maria...