• Lets companies brag about how many production services they run at any given time
• Company saves money by not having to hire Linux sysadmins
• Company saves money by not having to pay for managed cloud products if they don't want to
• Declarative, version controlled, git-blameable deployments
• Treating cloud providers like cattle not pets
It's going to eat the world (already has?).
I was skeptical about Kubernetes but I now understand why it's popular. The alternatives are all based on kludgy shell/Python scripts or proprietary cloud products.
It's easy to get frustrated with it because it's ridiculously complex and introduces a whole glossary of jargon and a whole new mental model. This isn't Linux anymore. This is, for all intents and purposes, a new operating system. But the interface to this OS is a bunch of <strike>punchcards</strike> YAML files that you send off to a black box and hope it works.
You're using a text editor but it's not programming. It's only YAML because it's not cool to use GUIs for system administration anymore (e.g. Windows Server, cPanel). It feels like configuring a build system or filling out taxes--absolute drudgery that hopefully gets automated one day.
The alternative to K8s isn't your personal collection of fragile shell scripts. The real alternative is not doing the whole microservices thing and just deploying a single statically linked, optimized C++ server that can serve 10k requests per second from a toaster--but we're not ready to have that discussion.
As a spectator, not a tech worker who uses these popular solutions, I would say there seems to be a great affinity within the tech industry for anything that is (relatively) complex. Either that, or the only solutions people today can come up with are complex ones. The more features and complexity, the more something is constantly changing, the more a new solution gains "traction". If anyone reading has examples that counter this idea, please feel free to share them.
I think if a hobbyist were to "[deploy] a single statically linked, optimized [C++] server that can serve 10k requests per second from a toaster" it would be like a tree falling in the forest. For one, it is too simple; it lacks the complexity that attracts the tech worker crowd. And second, because it is not being used by a well-known tech company and not being worked on by large numbers of people, it would not be newsworthy.
I was pretty skeptical too, but then I was handed a project which was a pretty typical mixed bag: Ansible, Terraform, Docker, Python and shell scripts, etc. Then I realized that relying on Kubernetes for most projects has the huge benefit of bringing homogeneity to provisioning/orchestration, which improves things a lot both for me and for the customer or company I work for.
Let's be honest here: in many cases it does not make a difference whether Kubernetes is huge, inefficient, complicated, bloated, etc. (it certainly is). Just the added benefit of pointing at a folder and stating "this is how it is configured and how it runs" is huge.
I was also pretty skeptical of Kustomize but it turned out to be just enough.
So, like many here, I kind of hate it, but it serves me well.
Citation? In my experience companies hire more sysadmins when adopting k8s. It's trivial to point at the job reqs for it.
> Company saves money by not having to pay for managed cloud products if they don't want to
Save money?! Again citation. What are you replacing in the cloud with k8s? In my experience most companies using k8s (as you already admitted) don't have a ton of ops experience and thus use more cloud resources.
> Treating cloud providers like cattle not pets
Again. Citation? Companies go multi-cloud not because they want to but because they have different teams (sometimes from acquisition) that have pre-existing products that are hard to move. No one is using k8s to get multi-cloud as a strategy.
> It's going to eat the world (already has?).
No, it won't. It's actually on the downtrend now. Do you work for the CNCF? Can you put in a disclaimer if so?
> just deploying a single statically linked, optimized C++ server that can serve 10k requests per second from a toaster
Completely unnecessary; most of the HN audience is not creating a C++ webserver from scratch, and most of the HN audience can trivially serve way more than 10k reqs/sec from a single VM (Node, Rust, Go, etc. are all easily capable of doing this from 1 vCPU).
Your C++ example is orthogonal to the deployment aspect because it discusses the application. Kubernetes and the fragile shell scripts are about the deployment of said application.
How are you going to deploy your C++ application? Both options are available, and I would wager that in most cases, Kubernetes makes more sense, unless you have strict requirements.
In some cases, the cost of a managed cloud product may be cheaper than the cost of training your engineers to work with K8s. It just depends on what your needs are, and the level of organizational commitment you have to making K8s part of your stack. Engineers like to mess around with new tech (I'm certainly guilty of this), but their time investment is often a hidden cost.
> The alternatives are all based on kludgy shell/Python scripts or proprietary cloud products.
The fact that PaaS products are proprietary is often listed as a detriment. But, how detrimental is it really? There are plenty of companies whose PaaS costs are insignificant compared to their ARR, and they can run the business for years without ever thinking about migrating to a new provider.
The managed approach offered by PaaS can be a sensible alternative to K8s, again it just depends on what your organizational needs are.
You write this just as I was thinking yesterday about how to extend my current home k8s setup even further.
I would even manage that little C++ tool through k8s.
K8s brings plenty of other things out of the box:
• Rolling updates
• HA
• Storage provisioning (which makes backups simpler)
• Infrastructure as code (whatever your shell script is doing)
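As a sketch of what the first of those points looks like declaratively: a Deployment with a rolling-update strategy gets zero-downtime upgrades for free. All names and the image here are hypothetical placeholders.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app            # hypothetical name
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1        # at most one pod down during an update
      maxSurge: 1              # at most one extra pod created during an update
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: web
          image: example/web:1.2.3   # hypothetical image
          ports:
            - containerPort: 8080
```

Changing the image tag and re-applying the file is the whole upgrade procedure; the control plane does the rest.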
I think the overhead k8s requires right now will shrink over the years; it will become simpler to use and more and more stable.
It is already a really simple and nice control plane.
I like to use a few Docker containers with Compose. But if I already use Docker Compose for 2 projects, why not just use k8s instead?
How do you manage deployments for that C++ monolith? How is the logging? Logrotate, log gathering and analysis? Metrics, their analysis and display? What happens when you have software developed by others that you might also want to deploy? (If you can run a company with only one program ever deployed, I envy you.)
All of that is simplified by Kubernetes simply by making everything follow a single way of doing things. "Classical" approaches tend to make Perl blush with the amount of "there is more than one way to do it" that goes on.
.. but hire others to manage k8s? Or existing software engineers have to spend time doing so?
Many of the other points don't seem unique to k8s either.
I do like the alternative you've suggested though.
Most of it could be managed by text boxes and selections on a front-end, which then just generates or edits the required files at the end of a wizard.
But then again, it's actually the first public/popular attempt at a cloud OS. There might be a next one with better ergonomics than YAML.
Is that an incorrect understanding? I know C++ is supposed to be great for performance, but in truth I've never needed anything to be that fast. And if I can get the job done just as well with something I already know, I won't bother learning something like C++ which has a reputation for not being approachable.
But maybe I don't have full context?
• Company saves money by not having to hire Linux sysadmins
• Company saves money by not having to pay for managed cloud products if they don't want to
As a developer I want to write code, not manage a Kubernetes installation. If my employer wants the most value from my expertise, they will either pay for a hosted environment to minimize my time managing it or hire dedicated staff to maintain an environment.
A lot of people are just really interested in having something complex instead of understanding their actual needs.
With hypervisors and managed environments taking over distributed computing, whether the kernel is derived from Linux or something completely different is a detail that only the cloud provider cares about.
The alternative is to have an old and boring cluster of X identical Java nodes which host the entire backend in a single process... The deployment is done by a pedestrian bash script from Jenkins. It worked fine for too long, I guess, and folks couldn't resist "inventing" microservices to "disrupt" it.
k8s is popular because Docker solved a real problem and Compose didn't move fast enough to solve the orchestration problem. It's a second-order effect; the important thing is Docker's popularity.
Before Docker there were a lot of different solutions for software developers to package up their web applications to run on a server. Docker kind of solved that problem: ops teams could theoretically take anything and run it on a server if it was packaged up inside of a Docker image.
When you give a mouse a cookie, it asks for a glass of milk.
Fast forward a bit, and the people using Docker wanted a way to orchestrate several containers across a bunch of different machines. The big appeal of Docker is that everything could be described in a simple text file. k8s tried to continue that trend with YAML files, but it turns out managing dependencies, software-defined networking, and how a cluster should behave in various states isn't the greatest fit for that format.
Fast forward even more into a world where everybody thinks they need k8s and simply cargo cult it for a simple Wordpress blog and you’ve got the perfect storm for resenting the complexity of k8s.
I do miss the days of ‘cap deploy’ for Rails apps.
I introduced K8s to our company back in 2016 for this exact reason. All I cared about was managing the applications in our data engineering servers, and Docker solved a real pain point. I chose K8s after looking at Docker Compose and Mesos because it was the best option at the time for what we needed.
K8s has grown more complex since then, and unfortunately, the overhead in managing it has gone up.
K8s can still be used in a limited way to provide simple container hosting, but it's easy to get lost and shoot yourself in the foot.
There are basically two relevant package managers. And say what you will about systemd, service units are easy to write.
It's weird to me that the tooling for building .deb packages and hosting them in a private Apt repository is so crusty and esoteric. Structurally these things "should" be trivial compared to docker registries, k8s, etc. but they aren't.
> everybody thinks they need k8s and simply cargo cult it for a simple Wordpress blog
docker _also_ has this problem though. there are probably 6 people in the world that need to run one program built with gcc 4.7.1 linked against libc 2.18 and another built with clang 7 and libstdc++ at the same time on the same machine.
and yes, docker "provides benefits" other than package/binary/library isolation, but it's _really_ not doing anything other than wrapping cgroups and namespacing from the kernel, something you don't need docker for (see https://github.com/p8952/bocker).
docker solved the wrong problem, and poorly, imo: the packaging of dependencies required to run an app.
and now we live in a world where there are a trillion instances of musl libc (of varying versions) deployed :)
sorry, this doesn't have much to do with k8s, i just really dislike docker, it seems.
At my company we have had better success with micro-services on AWS Lambda. It has vastly less overhead than Kubernetes and it has made the tasks of the developers and non-developers easier. "Lock-in" is unavoidable in software. In our risk calculation, being locked into AWS is preferable than being locked into Kubernetes. YMMV.
Oh boy I do not miss them. Actually I'm still living them and I hope we can finally migrate away from Capistrano ASAP. Dynamic provisioning with autoscaling is a royal PITA with cap as it was never meant to be used on moving targets like dynamic instances.
Add operators, complicated deployment orchestration and more sophisticated infrastructure... It is hard to know if things are failing from a change I made or just because there are so many things changing all the time.
Kubernetes is very complex and took a long time to learn properly. And there have been fires along the way. I plan to write extensively on my blog about it.
But at the end of the day: having my entire application stack as YAML files, fully reproducible [1] is invaluable. Even cron jobs.
Note: I don't use micro services, service meshes, or any fancy stuff. Just a plain ol' Django monolith.
Maybe there's room for a simpler IAC solution out there. Swarm looked promising then fizzled. But right now the leader is k8s[2] and for that alone it's worth it.
[1] Combined with Terraform
[2] There are other proprietary solutions. But k8s is vendor agnostic. I can and have repointed my entire infrastructure with minimal fuss.
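To illustrate the cron-job point above: under Kubernetes a scheduled task is just another YAML object, version controlled alongside everything else. A minimal sketch (the name, image, and command are hypothetical placeholders):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-cleanup              # hypothetical name
spec:
  schedule: "0 3 * * *"              # standard cron syntax: every night at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: cleanup
              image: example/django-app:1.0          # hypothetical image
              args: ["python", "manage.py", "clearsessions"]
```

No crontab on any particular machine, so it survives node replacement and shows up in git blame like everything else.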
Effectively, "every infrastructure as code project will reimplement Kubernetes in Bash"
Kubernetes is fine, but setting it up kind of feels like I'm trying to earn a PhD thesis. Swarm is dog-simple to get working and I've really had no issues in the three years that I've been running it.
The configs aren't as elaborate or as modular as Kubernetes, and that's a blessing as well as a curse; it's easy to set up and administer, but you have less control. Still, for small-to-mid-sized systems, I would still recommend Swarm.
Once everything is "infrastructure as code", the app team becomes less dependent on other teams in the org.
People like to own their own destiny. Of course, that also removes a lot of potential scapegoats, so you now mostly own all outages, tech debt, etc.
When you get to that blog post please consider going in depth on this. Would love to see actual battletested information vs. the usual handwavy "it works everywhere".
The way I learned it in Bret Fisher's Udemy course, Swarm is very much relevant and will be supported indefinitely. It seems to be a much simpler version of Kubernetes. It has both composition in YAML files (i.e. all your containers together) and distribution over nodes. What else do you need before you hit corporation-scale requirements?
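For comparison, a minimal Swarm stack file covering both of those points (composition plus replica distribution) might look something like this; the service name and image are hypothetical:

```yaml
version: "3.8"
services:
  web:
    image: example/web:latest     # hypothetical image
    deploy:
      replicas: 3                 # spread across the swarm's nodes
      update_config:
        parallelism: 1            # roll replicas one at a time
        delay: 10s
    ports:
      - "80:8080"
```

Deployed with `docker stack deploy -c stack.yml mystack`, which is roughly the whole learning curve.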
I’ve deployed Swarm in a home lab and found it really simple to work with, and enjoyable to use. I haven’t tried k8s, but I often see viewpoints like yours stating that k8s is vastly superior.
Edit: not sure why the down votes, I was just trying to point out what seems like a big distinction that the article is trying to make.
Most companies who were late to the cloud hype cycle (which is quite a lot of F100s) got to see second-hand how using all the nice SaaS/PaaS offerings from major cloud providers puts you over a barrel, and they have no interest in being the next victim. This is coming at the same time that these very same companies are looking to eliminate expensive commercially licensed proprietary software and revamp their ancient monolithic applications into modern microservices. The culmination of these factors is a major facet of the growth of Kubernetes in the Enterprise.
It's not just hype, it has a very specific purpose which it serves in these organizations with easily demonstrated ROI, and it works. There /are/ a lot of organizations jumping on the bandwagon and cargo-culting because they don't know any better, but there are definitely use cases where Kubernetes shines.
The meaning of "app" on top of these two operating system abstractions is entirely different and the comparison probably doesn't extend beyond this. From a computing stack standpoint though, it makes sense.
Remember the first time you saw the AWS console? And the last time?
I'm sorry, but I can't stand this kind of bullshit. You cannot possibly take two random things, put 'modern' in front of one word and 'ancient' in front of another, to justify changing things.
The problem of Kubernetes is probably that people started drinking the microservices koolaid and now need complex solution to deploy their software that became more complex when they adopted a microservices architecture.
Today Kubernetes is the antithesis of the cloud: instead of consuming resources on demand, you're launching VMs that need to run 24/7 and have specific roles and names like "master-1". Might as well rent bare-metal servers; it will cost you less.
Oh darn, I still don’t understand. Maybe I should learn what Docker is first?
The buzzword mumbo-jumbo in the first paragraph alone (which isn't really even your fault or anything, just the bogus pomp inherent to k8s as a whole) is already a scarecrow to anyone that "wasn't born with the knowledge", really.
It is pretty hard to get used to it. Brushing it away won't make it approachable.
I've yet to meet anyone who can easily explain how the CNI, services, ingresses and pod network spaces all work together.
Everything is so interlinked and complicated that you need to understand vast swathes of kubernetes before you can attach any sort of complexity to the networking side.
I contrast that with its scheduling and resourcing components, which are relatively easy to explain and obvious.
Even storage is starting to move to overcomplication with CSI.
I half jokingly think K8s adoption is driven by consultants and cloud providers hoping to ensure a lock-in with the mechanics of actually deploying workloads on K8s.
A single Service with Type=LoadBalancer and one Deployment may be all you need on Kubernetes if you just want all connections from the load balancer immediately forwarded directly to the service.
But if you have multiple different services/deployments that you want as accessible under different URLs on a single IP/domain, then you'll want to use Ingresses. Ingresses let you do things like map specific URL paths to different services. Then you have an IngressController which runs a webserver in your cluster and it automatically uses your Ingresses to figure out where connections for different paths should be forwarded to. An IngressController also lets you configure the webserver to do certain pre-processing on incoming connections, like applying HTTPS, before proxying to your service. (The IngressController itself will usually use a Type=LoadBalancer service so that a load balancer connects to it, and then all of the Ingresses will point to regular Services.)
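A minimal Ingress along those lines, mapping two URL paths to two different backing Services, might look like this (the host and service names are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress             # hypothetical name
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service   # a regular ClusterIP Service
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service   # everything else goes here
                port:
                  number: 80
```

The IngressController (nginx, Traefik, etc.) watches objects like this and rewrites its own proxy configuration accordingly.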
Kubernetes should have been IPv6 only, with optional IPv4 ingress controllers.
An Ingress object generates an nginx.conf for an nginx server. That nginx server has an IP address with a round-robin IPVS rule. When it gets a request, it proxies to a service IP, which then round-robins to the 10.0.0.0/8 container IP.
Ingress -> service -> pod
It is all very confusing, but once you look behind the curtain it's straightforward if you know Linux networking and web servers. The cloud providers remove the requirement of needing Linux knowledge.
K8S is extremely complicated for the huge swarm of webdevs and Java developers that really, really don't understand how the stuff they use/code actually works.
K8S was supposed to decrease the need for real sysadmins, but in my view it actually increased the demand, because of all the obscure issues one can face in production if they don't really understand what they are doing with K8S and how it works under the hood.
Which I find hilarious.
Badly! That'll be $500, thanks for your business.
On a serious note, the whole stack is keeping ok-ish coherence considering the number of very different parties putting a ton of work into it.
In a few years' time it'll be the source of many war stories nobody cares about.
Also, after OpenStack, the bar for "consulting-driven software" is far from reached :)
That said, I think Kubernetes may be reaching the productivity stage of the tech hype cycle. Networking in Kubernetes is complicated. That complication and abstraction has a point if you are a company at Google scale. Most shops are not Google scale and do not need that level of scalability. The network abstraction has its price in complexity when doing diagnostics.
You could solve networking differently than Kubernetes does by using IPv6. There is no need for complicated IPv4 NAT schemes. You could use native IPv6 addresses that are reachable directly from the internet. Since you have so many IPv6 addresses, you do not need routers/NATs.
Anyhow, in a few years' time some might be using something simpler, like an open-source Heroku. If you could bin-pack the services and their intercommunication onto the same nodes, there would be speed gains from not having to do network hops and going straight to local memory. Or something like a standardized, open-source serverless function runner.
https://en.wikipedia.org/wiki/KISS_principle https://en.wikipedia.org/wiki/Hype_cycle
There are many arguments that IPv6 didn't solve many IPv4 pain points, but if it solved anything, it is definitely this.
1) It solves many different universal, infrastructure-level problems. 2) More people are using containers, and K8s helps you manage containers. 3) It's vendor agnostic: it's easy to relocate a k8s application to a different cluster. 4) People see that it's growing in popularity. 5) It's open source. 6) It helps famous companies run large-scale systems. 7) People think that it looks good on a resume and they want to work at a well-known company. 8) Once you've mastered K8s, it's easy to use on problems big and small. (Note, I'm not talking about installing and administrating the cluster. I'm talking about being a cluster user.) 9) It's controversial, which means that people keep talking about it. This gives K8s mind share.
I'm not saying K8s doesn't have issues or downsides.
1) It's a pain to install and manage on your own. 2) It's a lot to learn, especially if you don't think you're going to use most of its features. 3) While the documentation has improved a lot, it's still weak and directionless in places.
I think K8s is growing more popular because its pros strongly outweigh its cons.
(Note I tried to be unbiased on the subject, but I am a K8s fan--so much so that I wrote a video course on the subject: https://www.true-kubernetes.com/. So, take my opinions with a grain of salt.)
I still believe 90% of users would be better served by Nomad. And if someone says "developers want to use the most widely used tech", then I'm here to call bullshit, because the concepts between workload schedulers and orchestrators like k8s and nomad are easy enough to carry over from one side to the other. Learning either even if you end up using the other one is not a waste of time. Heck, I started out using CoreOS with fleetctl and even that taught me many valuable lessons.
Where is the momentum?
Hosted GKE costs the same per month as an hour of DevOps time, what's wrong with paid management for k8s?
As a relatively noob sysadmin, I liked it a lot. Easy to deploy and easy to maintain. We've got a lot of mixed rented hardware + cloud VPS, and having one layer to unify them all seemed great.
Unfortunately I had a hard time convincing the org to give it a serious shot. At the crux of it, it wasn't clear what 'production ready' Nomad should look like. It seemed like Nomad is useless without Consul, and you really should use Vault to do the PKI for all of it.
It's a bit frustrating how so many of the HashiCorp products are 'in for a penny, in for a pound' type deals. I know there are _technically_ ways to use Nomad without Consul, but it didn't seem like the happy path, and the community support was non-existent.
Please tell me why I'm wrong lol, I really wanted to love Nomad. We are running a mix of everything and it's a nightmare.
I think a Distributed OS is the only sane solution. Build the features we need into the kernel and stop futzing around with 15 abstractions to just run an isolated process on multiple hosts.
Well sure, but if the story just ended with "everyone use the least exciting tool", then there'd be few articles for tech journals to write.
But Kubernetes promises so much, and deep down everyone subtly thinks "what if I have to scale my project?" Why settle for good enough when you could settle for "awesome"? It's just human nature to choose the most exciting thing. And given that I do agree that there's some manufactured hype around Kubernetes, it isn't surprising to me why few are talking about Nomad.
The second reason is also about standards, but using them more assertively. Docker had way more attention and activity until 2016 when Kubernetes published the Container Runtime Interface. By limiting the Docker features they would use, they leveled the playing field between Docker and other runtimes, making Docker much less exciting. Now, new isolation features are implemented down at the runc level and new management features tend to target Kubernetes because it works just as well with any CRI-compliant runtime. Developing for Docker feels like being locked in.
Isn't the most popular k8s case to deploy Docker images still though?
Likewise, Linux is also a confusing mess of different parts and nonsensical abstractions when you first approach it. It does take some time to understand how to use it, and in particular how to do effective troubleshooting when things aren't working the way you expect.
But I 100% agree--I think it's the new Linux. In 5-10 years, it'll be the "go to", if not sooner.
Then a lot of people drink the koolaid and apply it everywhere / feel they're behind if they aren't in Kubernetes.
We are not on Kubernetes and have multiple datacenters with thousands of VMs/containers. We are doing just fine with the boring Consul/systemd/Ansible setup we have. We also have some things running in containers, but not much.
Funnily enough, at the OSS summit I had a couple of chats with people at the big companies (AWS, Netflix, etc.), and they themselves have the majority of their workloads in boring VMs. Just like us.
IMO containers are great for stateless apps that don't require many resources, where having a dedicated machine for them would be a waste.
The smart people at Google knew that by quickly packaging their own internal tech and releasing it on open source they’d help people move from the incumbent AWS.
Helping customers switch IaaS hurts them both; lock-in is better, but it hurts AWS far more. Proof? They made it free to run the necessary compute behind the K8s control plane (until recently, that is).
Are there benefits to running your biz' web app using constructs made for a "cloud"? Sure there are; that's why people are moving to K8s. There are real business benefits, given a certain number of necessary moving parts. LinkedIn had such a headache with this that they created Kafka.
I suspect most organisations’ Architects and IT peeps push for K8s as a moat for their skills and to beef up their resumé. They know full well that the value is not there for the biz’ but there’s something in it for them.
1. It's simple to get started with, but complex enough to tweak to your needs in respect to simplicity of deployment, scaling and resource definition.
2. It's appealingly cloud-agnostic just at the time where multiple cloud providers are all becoming viable and competitive.
I think it's more #2 than #1; as always, timing is everything.
The system becomes so complex that most people screw up simple things like redundancy, perimeter security and zero downtime updates.
I've seen all of the above from very bright and capable people.
What k8s brings to the table is a level of standardization. It's the difference between bringing some level of robotics to manual loading and unloading of classic cargo ships, vs. the fully automated containerized ports.
With k8s, you get structure: you can wrap an individual program's idiosyncrasies into a container that exposes a standard interface. This standard interface then allows you to easily drop it onto a server, with various topologies, resources, networking etc. handled through common interfaces.
I said that for a long time, but recently I got to understand just how much work k8s can "take away" when I foolishly said "eh, it's only one server, I will run this the classic way." Then I spent 5 days on something that could have been handled within an hour on k8s, because k8s virtualized away HTTP reverse proxies, persistent storage, and load balancing in general.
Now I'm thinking of deploying k8s at home, not to learn, but because I know it's easier for me to deploy Nextcloud, or an ebook catalog, or whatever, using k8s than by setting up a more classical configuration management system and dealing with inevitable drift over time.
Kubernetes is one way to deploy containers. Configuration systems like Ansible/Salt/Puppet/Chef/etc are another way to deploy containers.
Kubernetes also makes it possible to dynamically scale your workload. But so does Auto Scaling Groups (AWS terminology) and GCP/Azure equivalents.
The reality is that 99% of users don't actually need Kubernetes. It introduces a huge amount of complexity, overhead, and instability for no benefit in most cases. The tech industry is highly trend driven. There is a lot of cargo culting. People want to build their resumes. They like novelty. Many people incorrectly believe that Kubernetes is the way to deploy containers.
And they (and their employers) suffer for it. Most users would be far better off using boring statically deployed containers from a configuration management system. Auto-scaled when required. This can also be entirely infrastructure-as-code compliant.
Containers are the real magic. But somehow people confused Kubernetes as a replacement for Docker containers, when it was actually a replacement for Docker's orchestration framework: Docker Swarm.
In fact, Kubernetes is a very dangerous chainsaw that most people are using to whittle in their laps.
As far as I can tell: those are imperative. At least in some areas.
Kubernetes is declarative. You mention the end state and it just "figures it out". Mind you, with issues sometimes.
All abstractions leak. Note that k8s's adamance about declarative configuration can make you bend over backwards. Example: running a migration script post-deploy. Or waiting for other services to start before starting your own. Etc.
I think in many ways, those compete with Terraform which is "declarative"-ish. There's very much a state file.
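One common escape hatch for the migration-script case mentioned above is a one-shot Job; if you happen to deploy with Helm, a hook annotation can sequence it after the release. This is a sketch under those assumptions, with all names and the image hypothetical:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate                       # hypothetical name
  annotations:
    # Helm-specific: run this Job as part of install/upgrade, not as a
    # standing resource (only meaningful if you deploy via Helm)
    "helm.sh/hook": post-install,post-upgrade
spec:
  backoffLimit: 2                        # retry the migration at most twice
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: example/app:1.2.3       # same image as the app, hypothetical
          args: ["python", "manage.py", "migrate"]
```

It still feels like imperative logic smuggled into declarative clothing, which is rather the point of the complaint.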
But basic versions of these things are provided by Kubernetes natively and can be declared in a way that is divorced from configuring the underlying software. So you just learn how to configure these broader concepts as services or ingresses or network policies, etc, and don't worry about the underlying implementations. It's pretty nice actually.
Kubernetes isn't a silver bullet, of course. There will be applications where running them in containers adds unnecessary complexity, and those are best run in a VM managed by a CM tool. But I'd argue that k8s is a safe default for deploying new applications going forward.
Unlike Ansible (and I suspect the others), where it's really only more of a 'run once' type of thing... and sometimes, if you try running it a second time, it won't even succeed.
Let's use Istio's "istioctl manifest apply" to deploy a service mesh to my cluster that allows me to pull auth logic / service discovery / load balancing / tracing out of my code and let Istio handle it.
Let's configure my app's infrastructure (Kafka (Strimzi), Yugabyte/Cockroach, etc.) as YAML files. Being able to describe my Kafka config (foo topic has 3 partitions, etc.) in YAML is priceless.
Let's move my entire application and its infrastructure to another cloud provider by running a single Bazel command.
k8s is the common denominator that makes all this possible.
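As a sketch of what the "Kafka config in YAML" point looks like in practice with Strimzi's KafkaTopic custom resource (the cluster name and retention setting are assumptions for illustration):

```yaml
# Declares the "foo" topic with 3 partitions, managed by Strimzi's
# topic operator rather than by imperative kafka-topics.sh commands.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: foo
  labels:
    strimzi.io/cluster: my-cluster   # assumed Kafka cluster name
spec:
  partitions: 3
  replicas: 2                        # illustrative replication factor
  config:
    retention.ms: 604800000          # 7 days, illustrative
```

Applying this with `kubectl apply` makes the topic itself git-blameable, which is exactly the "priceless" part of the comment above.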
can't... terraform make all of that possible?
Again the cross cloud portability is a non starter, unless you're really at scale.
What k8s really scales is the developer/operator power. Yes, it is complex, but pretty much all of it is necessary complexity. At small enough scale with enough time, you can dig a hole with your fingers - but a proper tool will do wonders to how much digging you can do. And a lot of that complexity is present even when you do everything the "old" way, it's just invisible toil.
And a lot of the calculus changes when 'managed services' stop being cost effective or aren't an option at all, or you just want to be able to migrate elsewhere (that can be at low scale too, because of being price conscious).
That's not actually simple at all, and you would need to build a lot of the other stuff that Kubernetes gives you for free.
Kubernetes gives you an industry standard platform with first-class cloud vendor support. If you roll your own solution with ECS, what you are really doing is making a crappy in-house Kubernetes.
And we only have to learn one complex system instead of each cloud, one of which decided that product names with little relation to what they do were a good idea.
We moved to it from Docker Swarm because Swarm still has a lot of glitches with its overlay network. Rolling upgrades would leave stale network entries, and it's impossible to reproduce: sometimes it happens, sometimes it doesn't.
With a managed solution, Kubeadm, or RKE it’s not hard to deploy anymore. All our infrastructure is in code, is immutable, and if you’re careful can be deployed into any kubernetes cluster.
Just like Docker has been great for easily deploying open source products, kubernetes is great for doing the same thing when you need to deploy horizontally. It’s easy for OSS to provide a docker image, a docker compose file for single node deploy, and Kubernetes yaml for a horizontal deploy.
The environments advertise themselves via that same modified ingress's default backend. We stick a tiny bit of deploy YAML in our projects, and the deployment's kube tagging gives us all the details we need to provide diffs, last build time, links to git repos, web sites etc. for the particular environment. The YAML demonstrates conclusively how an app could or should be run, regardless of OS or software choice, so when we hand it to ops folks there is a basis for them to run from.
However, because enterprise ops prior to Kubernetes are both costly and brittle, Kubernetes just works for enterprises.
We had a huge PowerShell codebase and it was a nightmare to maintain. And it was nowhere near as robust as Kubernetes.
It's just as simple as that: sure, Kubernetes seems complex, but most enterprise stuff is even worse. At the same time, despite being costly, the quality is usually pretty crappy because those scripts are written under delivery pressure.
I've noticed that there are a lot of replies such as "it is overhyped" and "I can just run a VM".
Kubernetes is not for you as your use case may not match what it does and solves. Kubernetes provides a standard way of running your applications. It is complex but logical. Yaml sucks but it is simple and logical. I prefer to use terraform for kubernetes but it is the same thing, simple and logical. You cannot say the same with puppet, chef, ansible etc. All of those configuration tools are a big mess of different setups and scripts. I can go to any company and understand how their system works quite quickly. It makes searching for answers easy too because it is standard.
When you are running several services and there is an outage, it is a godsend. You can instantly view the status of things, how they are configured and when they changed. That is POWERFUL.
It takes a while to understand how all of the resources fit together but that is the same case with any type of deployment system and/or operating system.
p.s. I am not running that huge of a system, maybe about 5k containers total between dev, staging and prod. Maybe 500k requests a day. Running a couple kubernetes clusters is significantly nicer than running things in ECS.
The Kubernetes ecosystem is really amazing and full of invaluable resources. It's vast and complex, but well thought out. Getting to know all the ins and outs of the project is time consuming. So many things to learn and so little time to practice...
Why should it have?
Many people I talk with will complain about security, performance and complexity of k8s (and containers in general). Non-practicing engineers (read: directors/vps-eng) will complain about the associated cost with administering their k8s clusters both in terms of cloud cost and devops personnel cost.
Someone earlier mentioned it was the new wordpress - I don't think that's an unfair comparison, although I would challenge the complexity/cost of it.
Longer term, I think the contribution of Kubernetes will be getting us used to a resource/API-driven approach to infrastructure that abstracts away cloud providers, hardware, etc. But it will probably be superseded in the coming years by something that honors similar API "contracts." Probably written in Rust (troll).
That being said, I will also be the first one to recognize that PLENTY of workloads are not made to run on Kubernetes. Sometimes it is way more efficient to spawn an EC2/GCE instance and run a single docker container on it. It really depends on your use-case.
If I had to run a relatively simple app in prod I would never use Kubernetes to start with. Kubernetes starts to pay itself off once you have a critical mass of services on it.
There is some tech so simple that you just learn it and start using it, others that you know you can pick up when the time is right.
And software you would be happy to invest time in... as long as someone is paying you to do it, software you fear might keep you from getting a job if you don't invest in it.
There is software so simple it might be right (it isn't) and software so complicated that it must be important if people are using it/working on it.
So it's not that Kubernetes is good, it's just that it makes people neurotic enough to jump on the bandwagon. Been a few of those in my career. A few have stuck, most have not.
It also promotes immutable infrastructure and hence increases the portability. While some of the things like load balancers and ingress are controlled by cloud provider almost everything else can be seamlessly migrated to another cloud provider or on prem.
It makes dev, test, staging, and prod environments consistent and also solves a lot of the pain points of managing infrastructure at scale, with autoscaling, auto healing and more. Istio adds a lot more to Kubernetes and makes supporting microservices even easier.
It's going to be an important piece in a hybrid world, as it brings a lot of standardization and consistency to two disparate environments.
I only consider that late because I've been reading the hype around k8s for many years already.
Became a late adopter of containers just before k8s actually. Now I've migrated most of my setups both privately and professionally to containers. And setup my first k8s clusters both at work and in my homelab.
So my perspective is that containers are first and foremost an amazing way of deploying software because all that complexity I did in ansible to deploy the software has been moved to the container image.
The project itself now, be it Mastodon, Jitsi, Synapse to name a few, package most of their product for me in automatic build pipelines. All I need to do is run and configure it.
And therefore, moving on to k8s, it would stand to reason that some of those services are able to be clustered. Where better to do such clustering than k8s?
That's just an ops perspective. We also have devs where I work and with k8s they're able to deploy anything from routes down to their services using manifests in CD pipelines. What's not to like?
Only reason one might get disenchanted with k8s is if you expect it to be a one-stop solution for your aging .net application. Not saying you can't deploy that in k8s, I'm just using it as an example of something that might not be microservice ready.
It's basically running a big computer without even trying.
-------
Kubernetes - kubernetes.io
Kubernetes is an open-source container-orchestration system for automating application deployment, scaling, and management. It was originally designed by Google, and is now maintained by the Cloud Native Computing Foundation.
Original author(s): Google
After spending 18 months working on bringing kubernetes(EKS) to production, with dozens of services on it, the time was right to hand over migrating old services to the software engineers who maintain them. Due to product demands, but also some lack of advocacy, this didn't happen, with the DevOps folks ultimately doing the migration and retaining all the kubernetes knowledge.
An unpopular opinion might be that Kubernetes is popular because it gives DevOps teams new tech to play with, with long lead times for delivery given its complexity. Kubernetes usually is a gateway to tracing, service meshes and CRDs, which while you don't need at all to run Kubernetes, they will probably end up in your cluster.
"Developers love it!" Yeah, I'd love someone to drive my car for me, too. Doesn't mean it's a great idea to use technology so complex you have to hire a driver (really several drivers) to use it.
If you already have 3 people working for you that (for example) understand etcd's protocols or how to troubleshoot ingress issues or how to prevent (and later fix) crash loops, maybe they can volunteer to babysit your cluster for you, do all the custom integration into the custom APIs, keep it secure, etc. But eventually they may get tired of it and you'll have to hire SMEs.
If you're self-hosting a "small" k8s cluster and didn't budget at least $500k for it, you're making a mistake. There are far simpler solutions to just running a microservice that don't require lots of training and constant maintenance.
Complexity isn't always bad, but unnecessary complexity always is.
- Set up a VM: dependencies, toolchain. If you use a package with a native component, such as image processing, you even need to set up a compiler on the VM
- Deployment process
- Load balancer
- Systemd unit to auto-restart it, set memory limits, etc.
All of that is done in K8S. As long as you ship a Dockerfile, you're done.
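For comparison, the whole checklist above roughly collapses into one manifest like the sketch below (image, ports, and limits are placeholders). The restart behavior, replica management, and load balancing fall out of the Deployment and Service objects themselves:

```yaml
# Hypothetical minimal app manifest covering the VM checklist:
# build deps live in the image, restarts and limits in the Deployment,
# load balancing in the Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:latest   # placeholder image
          ports:
            - containerPort: 8080
          resources:
            limits:
              memory: 512Mi    # the systemd MemoryLimit= equivalent
---
apiVersion: v1
kind: Service        # plays the load-balancer role from the list
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
```

Crashed containers are restarted automatically by the kubelet, so there is no separate "systemd unit" step at all.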
If you have 16CPU EC2 for your business logic, one for your DB, and you're smartly hosting your static content elsewhere or via Cloudflare ... I mean you need to have a 'big company' before going too far beyond that.
What gives? What are all these startups doing?
This is not a story about K8s; this is entirely something else. It's about psychology, complexity, our love of it, or rather our 'belief' that complexity = productivity, that solving 'the hard infra problem' must inherently be 'good for the company' because it 'feels difficult' and therefore must be doing something powerful, or at least gaining some kind of competitive advantage.
(Aside from the 'Docker is Useful and K8's follows' point which actually makes sense a little bit ...)
http://www.smashcompany.com/technology/my-final-post-regardi...
- Most code running on k8s hasn't hit full production load yet.
- Where it has worked well, its been managed by devs that know what they are doing.
- It's something worth putting on a backend dev resume
- Apparent cost savings ('we just need 1 VM instead of 5', 'we can auto scale to infinity', 'we don't have to pay for AWS, we get it all on our own VMs').
Wait a few months and we will see a slurry of posts that read 'why we moved away from kubernetes', 'top 5 reasons to not use kubernetes', 'How using kubernetes fucked us, in the ass', 'You dont need kubernetes', 'Why I will never work on a project that uses kubernetes', 'Hidden costs of kubernetes' and so on.
C'mon, you know how this works. Just take the time and read the docs. They are well written. (They just don't mention where k8s does not work well.)
I don't want to do memory management -> GC
I don't want to do packaging -> Docker
I don't want to do autoscaling -> Kubernetes
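The last line of that progression is literally one small manifest. This is a sketch with an assumed Deployment name and illustrative thresholds:

```yaml
# "I don't want to do autoscaling": a HorizontalPodAutoscaler that
# keeps the myapp Deployment between 2 and 10 replicas based on CPU.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp              # assumed Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```

Like GC and Docker in the analogy, you trade control for not having to think about the mechanism: the controller adds and removes pods for you.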
Something like an easy to use (and operate!) multi-tenant docker-compose on steroids with user management/RBAC and a built-in Docker image repository that gets out of your way would be amazing for small teams / startups that don't want to deal with the complexity of Kubernetes.
Jokes aside, when you've lots of teams, all working on small pieces of a large product and shipping on their own, iterating fast... you need a platform and ecosystem on top to meet their requirements. As you reach planet-scale, you need to NOT let your cost grow exponentially. Hence it is popular.
What if you're not planet-scale? Well, it will still help (attract talent, design for scale, better ecosystem etc.). Hence it is popular.
If you're building a business however, focus on business and time-to-market, definitely not the infra, i.e. kubernetes.
operations details are hidden from developers and development details (the details of the workload) are hidden from the operations engineers.
I can't just blow away the instance, make a new one with their API, and run a bash script to set it up because I need to persist some sqlite databases between deploys.
Nix looks promising, but also seems to be a lot to learn. I think I'd rather focus on my app than learn a whole new language and ecosystem and way of thinking about dependencies.
I don't think my needs are insane here, I'm surprised there seems to be no infrastructure as code project for tiny infrastructures.
User data is a bash script that can be automatically run when the machine first spins up.
You could pass that script via digital oceans cli or even a tool like terraform.
Just use Ansible if you miss YAML, and you can actually deploy to real hardware.
Everyone was trying to make a system simple and adopted, but if you want it to be adopted, it's going to need a lot of features. Also Google worked some real magic in getting Kubernetes being supported by all the cloud providers.
It's a framework that will enable you to do what you want, while being the standard.
You could write your script to do that in a simpler way, but most people already know the standard and it's easier for everybody to understand Kubernetes rather than your clever solution.
But it doesn’t seem like it generates quite the same buzz as Kubernetes. Not even within the Azure/Windows/.NET part of the world.
So have anyone here worked with both and could share some experience?
yaml is such a horrible format that I would even prefer JSON...
Can k8s success be explained partly due to the need for a more polyglot stack?
Once you know K8s, it's not very difficult to use. Plus, it provides solutions to a lot of different infrastructure-level problems.
It's not unique in what it does, but even with Puppet and the like you always had this or that exception because of networking, provider images, varying SELinux defaults, etc.
Kubernetes on its own already covered most ground, but ConfigMaps and Endpoints really tie it together in a super convenient package.
It's not without pitfalls, like MS AKS stealing 2GB from each node, so you have to be aware of that and plan accordingly, but still.
This is what I hate a lot about things like k8s, Docker, etc: the memory profile. It pretty much makes them a non-starter if you want to run them on anything low cost.
What is the cheapest way to setup a production kubernetes on a cloud provider?
Kubernetes is popular because it's the new 'cool'.
1. I work for GCP 2. https://cloud.google.com/anthos/gke
To the first- yes, enormously so. If you know your history, it is the Linux to the Microsoft that is AWS- except backed by a business. (Google is maybe RedHat in that story, but the analogy is more inaccurate than accurate).
To the second, not really. GCP is mostly turning into an ML play.
And like PHP, it will be criticised with the power of hindsight but will continue to be used and power vast swaths of the internet.
But what is the universally regarded theory that k8s contradicts? I don't think there is one.
I've never really thought it was that useful for (for example) nodejs, where you can just npm install your whole environment and deps, and off you go.
- Automatic scaling of pods and cluster VMs to meet demand.
- Flexible automated process monitoring via liveness/readiness probes.
- Simple log streaming across horizontally scaled pods running the same app/serving the same function using stern.
- Easy and low cost metrics aggregation with Prometheus and Grafana.
- Injecting secrets into services.
I'd imagine other things can offer the same, but I find it convenient to have them all in the same place.
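As an illustration of the probe point in that list, this is roughly what liveness/readiness checks look like inside a pod's container spec. The image, paths, and timings are illustrative assumptions:

```yaml
# Fragment of a pod template: the kubelet restarts the container if the
# liveness probe fails, and withholds Service traffic until the
# readiness probe succeeds.
containers:
  - name: myapp
    image: registry.example.com/myapp:latest   # placeholder image
    livenessProbe:
      httpGet:
        path: /healthz          # assumed health endpoint
        port: 8080
      initialDelaySeconds: 10   # give the app time to boot
      periodSeconds: 15
    readinessProbe:
      httpGet:
        path: /ready            # assumed readiness endpoint
        port: 8080
      periodSeconds: 5
```

The nice part is that "flexible automated process monitoring" becomes a few declarative lines next to the workload, rather than a separate monitoring setup.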
Our management of the cluster is just a simple "add more CPU or memory to this nodepool", and sometimes changing a nodepool name for a certain service's deployment. All done via a simple cloud management UI. For those who call microservices fancy stuff: no, we are a startup with a fast delivery and deploy cycle. We have tons of subprojects and integrations, and our main languages are Node.js, Go and Python. Some of these are not good at multi-threading, so there's no way to run it all as a monolith; the other is used only when high performance is needed. So all together, microservices + Kubernetes + Helm + good CI + proper pub/sub gives our backend an extremely simple, fast cycle of development and delivery, and, importantly, flexibility in terms of language/framework/version.
What is also good is the installation of services. With Helm I can install a high availability Redis setup for free in 5 minutes. The same level of setup would cost you several thousand dollars of devops work, plus further maintenance and updates. With k8s it's simply `helm install stable/redis-ha`.
So yeah, I can totally understand some simple projects don't need k8s. I can understand you can build something in Scala and Java, slowly but with high quality, as a monolith. You don't need k8s for 3 services. I can understand some old-school DevOps folks don't want to learn new things and complain about a tool that reduces the need for them. Otherwise, you really need k8s.
https://static.googleusercontent.com/media/research.google.c...
YAML should not even be needed for Kubernetes. Configuration should be representable in a purely declarative way, instead of making the YAML mess, with all kinds of references and stuff. Perhaps the configuration specification needs to be re-worked. Many projects using YAML feel to me like a configuration trash can, where you just add more and more stuff, which you haven't thought about.
I once tried moving an already containerized system to Kubernetes, to test how that would work. It was a nightmare. It was a few years ago, maybe 3 years ago. Documentation was plentiful but really sucked. I could not find _any_ documentation of what can be put into that YAML configuration file, what the structure really is. I read tens of pages of documentation; none of it helped me find what I needed. Even setting everything up, getting Kubernetes running at all, took way too much time and 3 people to figure out, and was badly documented. It took multiple hours over at least 2 days. Necessary steps, I still remember, were not listed on one single page in any kind of overview; instead a required step was hidden on another documentation page that was not even mentioned in the list of steps to take.
Finally having set things up, I had a web interface in front of me, where I was supposed to be able to configure pods or something. Only, I could not configure everything I had in my already containerized system via that web interface. It seems this web interface was only meant for the most basic use cases, where one does not need to provide containers with much configuration. My only remaining option was to upload a YAML file, which was undocumented, as far as I could see back then. That's where I stopped. A horrible experience and I wish not to have it again.
There were also naming issues. There was something called "Helm". To me that sounds like an Emacs package. But OK I guess we have these naming issues everywhere in software development. Still bugs me though, as it feels like Google pushes down its naming of things into many people's minds and sooner or later, most people will associate Google things with names, which have previously meant different things.
There were 1 or 2 layers of abstraction in Kubernetes which I found completely useless for my use-case and wished were not there, but of course I had to deal with them, as the system is not flexible enough to let me have only the layers I need. I just wanted to run my containers on multiple machines, balancing the load and automatically restarting on crashes, you know, all the nice things Erlang has offered for ages.
I feel like Kubernetes is the Erlang ecosystem for the poor or uneducated, who've never heard of other ways, with features poorly copied.
If I really needed to bring a system to multiple servers and scale and load balance, I'd rather look into something like Nomad. It seems much simpler, also offers load balancing over multiple machines, and can run Docker containers and normal applications as well; plus, I was able to set it up in less than an hour or so, with two servers in the system.
What I can tell you, is that the unbelievable bloat in the complexity of our systems is going to bite us in the ass. I'll never forget when I joined a hip fintech company, and the director of eng told us in orientation that we should think of their cloud of services as a thousand points of light, out in space. I knew my days were numbered at exactly that moment. This company had 200k unique users, and they were spending a million dollars a month on CRUD. Granted, banking is its own beast, but I had just come from a company of 10 people serving 3 million daily users 10k requests a second for images drawn on the fly by GPUs. Our hosting costs never exceeded 20k per month, and the vast majority of that was cloudflare.
Deploying meant compiling a static binary and copying it to the 4-6 hardware servers we ran in a couple racks, one rack on each side of the continent. We were drunk by 11am most of the time.
Today, it's apparently much more impressive if you need to have a team of earnest, bright-eyed Stanford grads constantly tweaking and fiddling with 100 knobs in order to keep systems running. Enter kubernetes.
I am a huge Kubernetes fan, and think that it is a good and necessary tool with little accidental complexity (most concepts are there because you will likely need them and/or that they are a valid concern), but my position is that the growth of Kubernetes has not been organic -- it's been heavily promoted and marketed and pushed to where it is today.
Let's compare with a project like Ansible: first released in 2012[0], with the first AnsibleFest in 2016[0]. Ansible is a very useful abstraction/force multiplier for doing ops. If a dedicated conference is a measure of community/enthusiasm reaching a fever pitch, it took 4 years for Ansible to reach critical mass. Kubernetes had its first KubeCon in 2015[1], ONE year after its initial release in 2014[2]. Did it reach critical mass 4x quicker than Ansible? Maybe, but I think the simpler explanation is that the people who want Kubernetes to succeed know that creating buzz and the appearance of widespread adoption and community is more important than it actually being there, as it becomes a self-fulfilling prophecy. Once you have enough onlookers, people motivated to work on open source (i.e. give away labor, time and energy for free) will come improve your project with you, serve as an initial user base, and become your biggest promoters, all the while strengthening your ecosystem.
Another interesting side to this is how thoroughly Kubernetes seems to be crushing its competition -- DC/OS (Mesos), Nomad and the others are not fighting a functionality war, they're fighting a marketing war. DC/OS and Nomad are not obviously worse in function, but certainly don't compare when you consider ecosystem size (perceived, if not actual) and brand. It's a winner-take-most scenario and tech companies are particularly good at seizing this kind of opportunity. Of course, if you compare the resources of the entities backing these projects, it's clear who was going to win the marketing war.
In a world of free tiers as a good way to get people locked in, developer evangelists who build essentially propaganda projects (no matter how cool they are), and shrinking attention spans, Kubernetes is a good tool which has marketed itself to greatness. In its wake there are efforts like the CNCF, which I struggle to characterize because it's hard to differentiate their efforts to standardize from an effort to bureaucratize. I'm almost certainly blinded by my own cynicism, but most of this just doesn't feel organic. Big, useful open source software gets world-renowned after years/decades of being convenient/useful/correct/etc, but Kubernetes (and other projects given the CNCF gold star) seem to be trying to skip this process, or at least bootstrap a reputation out of the gate.
DevOps traditionally moved much slower -- I can remember what seemed like an age of "salt vs ansible vs chef", with all three technologies having had lots of times to prove themselves useful. Even the switch to containers instead of VM/user based process isolation took more time than Kubernetes has taken to dominate the zeitgeist.
[0]: https://en.wikipedia.org/wiki/Ansible_(software)
[1]: http://www.voxuspr.com/2019/03/what-is-kubecon-its-past-pres...
1. It's portable
2. It's fast
3. It's declarative
4. It's fun / productive / easy
5. It's safe / automatic
6. It's an integrated framework
The opposites are also used to detract competitors.
The idea of k8s is that it will be portable to all hosting providers and Linux distributions, as opposed to developing shell scripts for Red Hat, especially for multiple versions. I don't think it's easy or fun or fast.