Kubernetes isn't simple. Some of that is due to the problem it's trying to solve, some of it is because of the design, and some of it is because of the ecosystem that has evolved around it with regard to meshes/load balancers/control planes and overlay networks. We are, after all, replying to an article about a startup ecosystem being created around Kubernetes.
Tools like Kubernetes gain adoption partly based on technical merit/need and partly based on fashion. Developers like toys, and for some people the toys are what gets them through the day. Most of the people adopting Kubernetes probably don't need it. Developer Advocates and consulting shops are the ones making people think they need Kubernetes when they don't. This is an industry problem, and that's all well and good. Let's just not pretend there isn't some influence from startups (typically backed by VC, which exacerbates the pressure) and consulting shops.
Sharepoint and BizTalk spring to mind... MS Dynamics (or whatever), too.
SAP and other "configuration heavy" systems bring it to another level.
> Most of the people adopting Kubernetes probably don't need it
If they genuinely don't need it, then it will bite them in the ass.
On the other side: the absolute nightmare hodge-podge of half-baked non-integrated solutions that Kubernetes has murdered, and the promises it delivers on for cloud-native and cloud-first systems, are major strategic wins with clear ROI...
MS does not promote Google tech without cause. The largest virtualization supplier does not integrate competing Google tech without cause. Amazon is an easier mark here, but: the very fact that there is a common, turn-key orchestration platform across all major cloud providers is massive, and very hard to attribute to developer advocacy.
Way to trigger my PTSD...
It's a quick read, but to summarise: it's almost as complex as Google's internal Borg system, but the benefit isn't even close (partially because nobody else has Google problems). I can't knock K8s personally, as I've never used it myself. But I wonder if there's the possibility of a system that's more 80/20 of Google's Borg rather than the seeming 20/80 coming from K8s. And I wonder if Grab will release it next year.
Borg is an accumulation of a decade's work with containers at Google, and has been described by Googlers as rich but a little messy, having been designed incrementally over many years as needs have surfaced. Borg could never be open-sourced because it's so specific to Google; for example, it uses Google's own cgroups-based container tech, not Docker/OCI/etc. Omega, as I understand it, was an effort to clean up and modernize Borg, but apparently it was never put into production; instead, some of its innovations ended up being backported to Borg [1].
More importantly, Kubernetes is based roughly on the same design as Borg: A declarative, consistent object store, with controllers, schedulers and other bits and pieces orchestrating changes to the store, mediated by a node-local controller (Borglet/Kubelet). A major difference between Borg and Kubernetes is that with Borg, the object store is exposed to clients, whereas Kubernetes hides it behind an API. Another difference is the structure of containers; Borg's "allocs" are coarser-grained than pods and Borg is less strict about where things go, which googlers have described as a shortcoming compared to Kubernetes' strict pod/container structure. Another difference, also seen as a shortcoming, is that Borg lacks Kubernetes' one-IP-per-pod system; all apps on Borg apparently share the host's network interface. Kubernetes also innovates on Borg in several ways; for example, Borg doesn't have labels [2].
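To make the declarative model and labels concrete, here's a minimal pod manifest (the names and image are illustrative placeholders, not from the thread): you declare the desired state, and the scheduler, controllers, and kubelet converge the cluster toward it.

```yaml
# A declared desired state; controllers reconcile the cluster to match it.
apiVersion: v1
kind: Pod
metadata:
  name: web-example
  labels:           # arbitrary key/value pairs; Borg has no equivalent [2]
    app: web
    tier: frontend
spec:
  containers:
    - name: nginx
      image: nginx:1.25
      ports:
        - containerPort: 80   # reachable on the pod's own IP (one IP per pod)
```

Selectors then operate over those labels, e.g. `kubectl get pods -l app=web`.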
Borg, from what I gather, scales much further than Kubernetes at this point, but that's not due to the design. The design is fundamentally the same.
Yegge's criticisms are too handwavy ("overcomplicated") to counter, but I don't think Yegge knows what he's talking about here. As for "benefit": Not sure what you mean by this, but Kubernetes arguably comes with benefits — declarative ops, platform abstractions, container isolation — even if you're just running a single node. The notion that you only need Kubernetes if you have "Google-scale problems" is just nonsense.
PS. What's "Grab"?
[1] https://ai.google/research/pubs/pub44843 (I recommend reading this paper)
[2] https://kubernetes.io/blog/2015/04/borg-predecessor-to-kuber...
Grab is the company Yegge left Google for. He always complained about Google's inability to platformize, so my random hunch is that he wants to instill this desire into Grab? But that's entirely a guess. I also don't know how influential he was inside Google vs. outside.
I think turnkey Kubernetes solutions like Rancher will dominate for a lot of use cases, especially for individual devs and small teams that can't have a dedicated DevOps resource to manage Kubernetes.
Granted, we run our own bootstrap (when we started, none of the bootstraps floating around were ready); we started with a terraform/make implementation of Hightower's "Kubernetes the Hard Way" and we've just kept adding to it. We're thinking about revisiting the space again.
You don't go Google scale because it's cool. You either do it because you absolutely have to, or you don't because, thankfully, you don't have to.
Attaching the Google name to projects stopped having meaning ~5 years ago. Their hiring process optimizes for fresh graduates who know nothing about engineering. Something coming out of Google means nothing, especially when Google itself doesn't even use it!
>It was created by engineers for engineers.
So was openstack. Then the feature creep happened.