Must say I do miss those days when K8s was an idea that could fit in your head. The primitives were just enough back then. It was a powerful developer tool for teams, and we used it aggressively to accelerate our development process.
K8s has now moved beyond this and seems to me to be focussing strongly on its operational patterns. You can see these operational patterns being used together to create a fairly advanced on-prem cloud infrastructure. At times, to me, it looks like over-engineering.
Looking at the Borg papers, I don't remember seeing operational primitives this advanced. The developer interface was fairly simple, i.e. this is my app, give me these resources, go!
I know you don't have to use this new construct but it sure does make the landscape a lot more complicated.
Ironically, the push to "simplify" the platform with various add-on tools is what is making it seem more complicated. Rather than just bucking up and telling everyone to read the documentation, and understand the concepts they need to be productive, everyone keeps building random, uncoordinated things to "help", and newcomers become confused.
For example, I don't know who this operator framework is aimed at -- it's not at application developers, but at k8s component creators who write cluster-level tools. But what cluster tool writer would want to write a tool without understanding k8s at its core? Those are the table stakes -- if I understand k8s and already understand the operator pattern (which is really just a Controller + a CRD, two essential bits of k8s), why would I use this framework?
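For reference, the CRD half of that pattern really is small; a minimal sketch with a hypothetical `Database` kind under a made-up `example.com` group (apiextensions v1beta1 schema):

```yaml
# Hypothetical CRD: registers a new "Database" resource type with the API server.
# A companion controller would then watch objects of this kind and act on them.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: databases.example.com
spec:
  group: example.com
  version: v1alpha1
  scope: Namespaced
  names:
    plural: databases
    singular: database
    kind: Database
```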
I think if they really wanted to help, they'd produce some good documentation or a cookbook and maintain an annex of the state of the art in how to create/implement operators. But that's boring, and has no notoriety in it.
Well, consider you wanted a highly available solution that supports blue-green/rolling deploys without downtime. You either build it yourself or you rely on something like k8s. It's not that much over-engineering. K8s is a lot of code, yes, but the constructs are still pretty simple. I think deploying k8s is still way easier than most other solutions out there, like all these PaaSes and cloud solutions. Spinning up k8s is basically just using ignition/cloud-config, CoreOS and PXE, or better, iPXE. Yeah, sometimes it's troublesome to upgrade a k8s version or an etcd cluster. However, everything on top of k8s, or even CoreOS itself, is extremely simple to upgrade.
For context: our current system uses consul, haproxy, ansible and some custom-built stuff to actually run our services. System upgrades are still done manually or through Ansible, and my company plans to replace that with k8s. It's just way simpler to keep everything up to date and run with high availability without disruption on deployments. It's also way simpler to actually get new services/tools into production, e.g. Redis/Elasticsearch, without needing to keep them up to date and running ourselves.
Have you seen nomad + consul + traefik? Much easier to install and the end result is close to a K8s cluster.
Kubernetes has a huge day-one problem: it doesn't solve all of your problems. The hard stuff like networking and distributed storage are hook-in APIs. That's fine on Google's cloud, where all the other pieces are there and were developed with these interfaces in mind, so all the endpoints exist. But most companies don't work in GCP/AWS alone. The moment you come on-premises, you see that Kubernetes only does 25% of what it needs to do to get the job done.
So you have this tool that already lacks 75% of its original design scope, and it tries to overcome this by adding more stuff. Then you combine this with a prematurely hyped community who just adds more stuff to solve problems that are already solved, that don't need to be solved, or that aren't problems at all, just to get their own names and logos into what's out there.
These two are patterns that make it very clear that it is impossible for Kubernetes to ever become a lean, developer friendly tool. But it's a great environment to make money already, I can tell you. And I think maybe that was the main goal from the beginning.
We forever rush to the limits of current technology and then blame the technology.
I think it's worth noting that Kubernetes never tried hard to impose an opinion about what belongs to the operator (as in the person running it) and what belongs to the developer. You get the box and then you work out amongst yourselves where to draw the value line.
Cloud Foundry, which came along earlier, took inspiration from Heroku and had a lot of folks of the convention-over-configuration school involved in its early days. The value line is explicitly drawn. It's the opinionated contract of `cf push` and the services API. That dev/ops contract allowed Cloud Foundry to evolve its container orchestration system through several generations of technology without developers having to know or care about the changes. From pre-Docker to post-Istio.
Disclosure: I work for Pivotal, we do Cloud Foundry stuff. But as it happens my current day job involves a lot of thinking about Kubernetes.
Sad to see it 'lose the race' against kubernetes.
From what I see, it seems like Operators are a better tool for defining and deploying custom k8s resources, whereas Helm charts are a good way to organize applications (deployment, service, etc. templates packaged into one tar).
Helm is a package manager. Think of it like apt for Kubernetes.
Operators enable you to manage the operation of applications within Kubernetes.
They are complementary. You can deploy an operator as part of a Helm Chart. I recently wrote a blog post that explains how the different tools relate. https://codeengineered.com/blog/2018/kubernetes-helm-related...
The operator (or, to be more precise, one or more controllers) listens to those messages and tries to reconcile the desired state with the current state.
So an operator is a combination of the message types and the controllers that take care of the reconciliation process.
Taking this point of view, k8s is much more than a container orchestration framework; it can orchestrate anything in the real world, as long as there is a way to define the desired state of a thing and a controller that can affect the real world.
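To make that reconciliation idea concrete, here's a toy sketch of a controller loop -- made-up types, no actual Kubernetes API calls (a real controller would use client-go watches against the API server):

```go
package main

import "fmt"

// Desired state, as a user would declare it.
type spec struct{ replicas int }

// Observed state of the real world.
type status struct{ replicas int }

// reconcile compares desired vs. observed state and takes one
// corrective step per invocation, returning the action taken.
func reconcile(desired spec, observed *status) string {
	switch {
	case observed.replicas < desired.replicas:
		observed.replicas++ // e.g. create a pod
		return "scale up"
	case observed.replicas > desired.replicas:
		observed.replicas-- // e.g. delete a pod
		return "scale down"
	}
	return "in sync"
}

func main() {
	desired := spec{replicas: 3}
	observed := &status{}
	// Loop until the observed state converges on the desired state.
	for {
		action := reconcile(desired, observed)
		fmt.Printf("observed=%d action=%s\n", observed.replicas, action)
		if action == "in sync" {
			break
		}
	}
}
```

The "controller" here is just the loop: declare what you want, let the program keep nudging reality toward it. Everything k8s-specific (watches, informers, work queues) is plumbing around this core idea.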
Back to the original question. Helm was created to raise the abstraction of resource definition (i.e. the desired state) from plain YAML to a property file, which is much more readable and smaller. Along the way, it also became a packaging tool.
A Helm chart by comparison is a way to template out K8s objects to make them configurable for different environments.
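A minimal sketch of what that templating looks like -- a chart template plus the values file that fills it in (names and port are illustrative):

```yaml
# templates/service.yaml -- a chart template; {{ }} placeholders
# are filled in from values.yaml (or --set overrides) at install time.
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-web
spec:
  ports:
    - port: {{ .Values.servicePort }}
```

```yaml
# values.yaml -- per-environment knobs, overridable at deploy time.
servicePort: 8080
```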
Some shops will end up combining one with the other.
https://github.com/operator-framework/helm-app-operator-kit
The templating aspect of Helm, and its set of quality content, is complementary to being able to add higher-level lifecycle management.
No no no no no. The words "framework" and "application" are so meaningless now that even reading this post is draining.
CoreOS pioneered the Operator pattern, but I think building that pattern up into a framework so that people can develop against it without knowing the basics of k8s is such a mistake. The operator pattern falls out of the primitives that k8s offers (there's literally a concept called a controller) -- this makes it seem like another app platform. I think the level of abstraction isn't even right; this is like trying to enable people to write daemons without knowing anything about Linux or signals or processes.
Then again, I also dislike tools like Helm because they do the same thing. Why is everyone so in a rush to make inevitably leaky abstractions to make everything so easy a monkey could do it? All you're doing is encouraging cookie cutter programmers to write cookie cutter poorly understood code that will break on someone inevitably.
All essential complexity introduced by features in the layer below an abstraction cannot be simplified; it can only be hidden or removed. It is OK for things to be hard, as long as they are simple (in the Rich Hickey easy-vs-simple sense).
"Operators", as introduced in 2016, were just bespoke Go programs that communicated with Kubernetes internals in a pretty low-level way.
You were writing special-case plugins for Kubernetes, but they didn't want to make it sound that way, because I guess that just doesn't sound hip or devopsy. This branding exercise worked out for CoreOS -- Red Hat just bought them.
This whole space is massively infused with bullshit. It's because all of these companies want to make money selling you cloud stuff, because it's profitable to rent computers at 3-5x the TCO. Google especially is hungry to claw back the lead in the cloud space from Amazon, and it's not hard to conceive why Kubernetes doesn't seem to work without fuss anywhere except GKE, or to understand the massive marketing dollars that Google is pumping into this whole Kubernetes farce (and for the record, Google seems to consider HN an important platform for k8s PR; I've been censured after too many Googlers found my k8s-skeptical posts "tedious").
Anyway, I guess that's neither here nor there. Just annoyed at what is by now the totally conventional status quo of overhyped empty promises made by people who seem more like ignorant promoters and fanboys than serious engineers.
This "Operator Framework" seems to be the same concept of Operators, just with additional library support for the plugins -- err, "Operators". It may be a good improvement, will have to research more.
I think they chose "operator" over the more traditional "controller" because the latter can be quite simple, whereas an operator is potentially a combination of several things, including CRDs, API extensions, and controllers. For example, an operator might start different controllers depending on what cloud it's deploying to. It's a useful distinction; if someone says "I'm using this operator for X", I instantly know what they mean.
FWIW, I'm one of those who remember your name, simply because you pop up in every Kubernetes discussion with a predictably contrarian, long-winded opinion. I don't know what you're getting out of it. In this case, you're not wrong — but the curmudgeonly, somewhat tone deaf way that you go about it isn't very nice, which probably explains the downvotes.
There are several k8s conversations on HN each day. I skip most of them.
> I don't know what you're getting out of it.
I get a lot of useful feedback out of it. Most of the posts I make on HN are about trying something out and gauging the response, because I want to learn from it. Sometimes I get a complete correction, which is great, because I stop believing something that's wrong. More often I get minor shifts in my personal POV, perspective on what arguments are effective and which aren't, the pitfalls/common counterarguments to specific positions, good feedback on tone / interpretation, and lots of other valuable information. Also, once in a while I make a nice personal connection.
I'll note for the record that response on k8s/containers is mixed. There are certainly a substantial number of threads where I'm at the bottom, but there are also threads where I make essentially the same arguments and score pretty well, along with a few supporting comments from people who say they don't get it either. HN's responses generally seem to be signaled by the tone of the thread and headline, and the preponderance of existing responses. If the groundwork is laid with a positive outlook, negative comments will usually have a hard time, and vice-versa.
Also, my arguments are usually not purely repetitive, even if they have the same core message (because same core things remain relevant). I have never talked about Operators on HN before. They came up and they're a good example of how people pretend that Kubernetes is more production-ready than it is, by obfuscating things like "you have to write special programs to teach Kubernetes how to deploy and manage your applications because the YAML configuration interface they tout isn't good for complex cases", behind the much hipper "Build a Kubernetes Operator, then you'll be cloudified and Dockerized out the wazoo!!"
I admit that I find a culture focused on this type of hype to be grating and immature, and as a sign of its inability to really bring substantial improvement to the table. I don't think I feel this way about anything new in general, I just think it's a reaction to a progressively-worsening engineering deficit in the "devops" field. I hope that I can learn whether this is right or wrong as time goes on.
> In this case, you're not wrong — but the curmudgeonly, somewhat tone deaf way that you go about it isn't very nice, which probably explains the downvotes.
Yeah, so this is a great example of why I continue to post about this. Most people would consider this subject matter very dry, but Kubernetes is something that people imbue with much more personal identification than is typical for infrastructure orchestration projects. Where are the CloudFoundry Diego disciples (paging jacques_chester ;) )?
It's important to post and learn the pressure points, and if there is any argument or circumvention that is effective against that identity imprint. I'm still trying to learn, so I continue to post and draw feedback from the community. I appreciate your participation in teaching me thus far. :)
Second, I feel like it's valid to point out that Operators are not really just a method of "packaging", in a post that tries to make it sound like Operators are just a small bit of YAML or metadata. You're writing real, non-trivial Go code that tells Kubernetes explicitly how to deploy and manage the lifecycles of specific types of applications.
At least until now with the "Operator Framework", there wasn't really even anything that firmly defined an Operator as an Operator; it's just what some people called their Go code that manipulated k8s's object handling and lifecycle internals.
But, if you insist, here's one operator I've worked with: https://github.com/coreos/prometheus-operator . This is from CoreOS itself.
Here's a patch I submitted about a year ago: https://github.com/coreos/prometheus-operator/pull/289 . This required updating the way the software handled HTTP response codes in one of its "watcher" daemons (because all packaging methods need those, right?), and fixing the order of operations in the bootstrap scripts.
Some more general info about this repo:
$ du -sh prometheus-operator/.git
51M prometheus-operator/.git
The repo size is 51M.

$ git rev-list --count master
1716

There have been almost 2000 commits.

$ cloc --vcs=git --exclude-dir vendor,example,contrib .
290 text files.
278 unique files.
121 files ignored.
github.com/AlDanial/cloc v 1.74 T=0.82 s (295.4 files/s, 48117.2 lines/s)
-------------------------------------------------------------------------------
Language files blank comment code
-------------------------------------------------------------------------------
Go 50 1581 1392 20622
JSON 9 0 0 6276
YAML 132 260 885 4164
Markdown 30 792 0 2957
Bourne Shell 17 65 58 257
make 1 34 2 91
Python 1 10 5 40
TOML 1 11 20 30
Dockerfile 2 8 0 27
-------------------------------------------------------------------------------
SUM: 243 2761 2362 34464
-------------------------------------------------------------------------------
It appears there are over 20k lines of Go code after excluding vendor libraries and the example and contrib directories (arguably, contrib should've been included).

I dunno, it just feels a little disingenuous, to me, to say that something that involves this much code is just a "packaging method" for a normal application. "Sure, just write an operator to package that up" like it's comparable to a package.json manifest or something. It's not! You need custom daemons that watch files to make sure that your k8s deployment stays in sync, and then you need to exert very meticulous and specific control over Kubernetes' behavior to make things work well.
I think it's demonstrative that it takes north of 20k lines of Go code to package an application for deployment on Kubernetes. What do you think?
-------------
EDIT: And one clarification: my opinion on containers as such is probably not well-known, since you're conflating it with my opinion on Kubernetes.
I like containers conceptually (who wouldn't?) and I run several of them through LXC:
NAME STATE AUTOSTART GROUPS IPV4 IPV6 UNPRIVILEGED
axxxx-dev STOPPED 0 - - - false
gentoo-encoder STOPPED 0 - - - false
jeff-arch-base-lxc STOPPED 0 - - - false
jeff-crypto RUNNING 0 - xxx.xxx.xx.xxx - false
jeff-ffmpeg STOPPED 0 - - - false
jeff-netsec STOPPED 0 - - - false
jeff-ocr STOPPED 0 - - - false
localtorrents-lxc RUNNING 0 - xxx.xxx.xx.xxx - false
nim-dev STOPPED 0 - - - false
plex-2018 RUNNING 1 - xxx.xxx.xxx.xxx - true
unifi STOPPED 0 - - - true
I believe this is the kind of thing people actually want: highly efficient, thin "VMs" that are easy to manage and run as independent systems without requiring the resource commitment.

There is a good place for Kubernetes in probably about 1% of deployments where it's used. Most other people are just trying to run something like LXC, but they're confused because everyone who is critical of k8s drops to -4 and gets HN's mods after them. :)
Aronchick (at) google.com
Disclosure: I work at Google on Kubeflow
We're moving away from an imperative configuration/operational model to a declarative one. While these operators target applications running in k8s, I could imagine them being created to manage applications running elsewhere as well.
However, you might've been trained incorrectly, perhaps as a joke, because you've assigned properties to the opposite solutions!
Configuration management tools like Ansible or Salt ship agnostic modules that can be plumbed to work with any platform: use Ansible's "copy" module and under the hood it can be implemented as cp, rsync, passenger pigeon, whatever. You can use the same playbook against any target, including targets that run on orchestration platforms like Kubernetes.
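The declarative flavor of that looks something like this (hypothetical file paths; `copy` is a real Ansible module):

```yaml
# playbook.yaml -- declares desired end state; Ansible decides how to get there.
# The same play runs unchanged against any host in the inventory.
- hosts: all
  tasks:
    - name: Deploy app config
      copy:
        src: app.conf
        dest: /etc/app/app.conf
```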
Kubernetes Operators, on the other hand, are tightly linked to Kubernetes internals. They are not declarative; they require explicit instruction on how to manage your application's lifecycle, implemented in a statically-typed programming language. Indeed, if your system doesn't need to muck around in the Kubernetes internals, you don't create an "Operator", you just use the pre-baked object types like Service, Deployment, etc.
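For contrast, the pre-baked route needs no custom code at all -- just a plain manifest like this (name and image are illustrative):

```yaml
# A stock Deployment: the built-in controller handles the whole
# lifecycle -- no operator, no Go code, just declared desired state.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.13
```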
So I agree in principle, but Kubernetes is the opposite of what you've expressed here.