> If you're hiring k8s guys who don't know etcd and the backend of k8s [...] then you're not hiring Seniors who have worked on k8s for several years.
> Finding someone with 2+ years of k8s experience who isn't a Xoogler is fairly rare right now because 2 years ago it wasn't the market behemoth that it is right now.
Indeed, there aren't enough people who know how to run it given how broadly it has spread and how much it's hyped.
If there's a thousand people out there who have that level of experience, I'd be surprised. And in an industry running hundreds of thousands of clusters (or more!), that's just too few people.
I did much of the early research POCs for my company when the idea of containerization really took off, and my deployments seemed to have a shelf life of no more than a few days before I would have to conform to some new method they came up with. I was using Tectonic when it was first released, and the documentation would change underneath me as I tried to set up the clusters. It's a LOT to keep up with.
I can understand and explain to someone every protocol or idea underlying Kubernetes, sure, because they build upon standards that we have all used before in operations. But try to understand how it all works together within Kubernetes, then add in the complex interplay if, like us, you integrate with non-k8s systems that now need complex firewall and routing rules to allow the intercommunication... add in Calico or Flannel... Docker under the sheets with all its warts... it's a lot to manage. You need people who are engaged with the k8s project at a level that would normally be reserved for Googlers working on it.
Don't get me wrong... I like Kubernetes for the most part. I do agree that if you are planning to run in-house, you are in for some challenges, and that you will need a very high caliber of operations team to deploy and maintain it.
I won't deny that actual k8s experts are in short supply right now. It's a complex platform in its near infancy. There are people brand new to k8s embarking on the journey to learn it in Slack every day. Give them 6mo+ or another year and you'll have a few near-experts and a ton of generally experienced admins.
I don't feel as though it's any different from when I was working on my CCIE. At the time there were only 18k other people out there who had CCIEs. I very rarely met one, or even someone with that level of skill (I'm not in the Bay Area/NYC). I had to ask questions through newsgroups and IRC, and in IRC there were maybe 4-10 people at that level out of thousands. You could say there are thousands of networks out there that need a CCIE to run them, but that isn't what ever happened; a CCIE would basically lead from the top and their skill/experience would trickle down to less experienced netengs, or they'd be brought in as consultants when necessary. I've worked with very few CCIEs.

I see k8s going the same way. Every state will likely have a handful of experts on the subject while a ton of CCNA/CCNP-level k8s admins remain, and you just need to determine how complex your k8s infrastructure is and what level you'll need to hire to manage it effectively.
Brendan Burns' goal is to democratize ephemeral infrastructure so that anyone who can code can manage it. That's another topic entirely, but the community is starting to put out enough general guidance in the form of blogs, books, Slack, et al. that hopping into the ecosystem now is basically a breeze compared to what it was when I got involved in the 1.x days.
Could just be that I'm a masochist and like learning painful things.
My biggest complaint about k8s right now is the lack of real-world production knowledge being distributed. A lot of people set up a cluster, leave it, and never optimize it or make it actually production-ready. My goal is to significantly accelerate that knowledge-sharing through training, blogs, etc.
It's also not comparable. A network is set up only once by a CCIE. It's done, and it doesn't need touching for the next 10 years.
A Kubernetes cluster is set up once and then the troubles begin: constant care needed, every week, for 10 years.
Kubernetes was barely a blip on the map until 2017. There was Cisco routing and switching hardware LONG before there was even a CCIE certification.
This is a new ecosystem. Of course it takes time to develop experts.
A CCIE not touching a network for 10 years? That's ludicrous. I was a network engineer; I wish that were even remotely the case. I was constantly fighting firmware/bugs/etc. In fact, I swore off Cisco and began working on my JNCIE at some point to stick with Juniper, which of course had its own issues.
I've run k8s 1.5 in production for 2 years on various clusters; that's almost a 2-year-old release now. I've had zero k8s-specific problems. I recently migrated those clusters to 1.9, and apart from updating some API endpoints and a lot of annotations that changed across the major releases, it was very little actual work. It was mostly tedious "find & replace" work.
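For a sense of what that "find & replace" work tends to look like, here's a minimal sketch (illustrative only: the `manifests/` layout and file names are made up, and a real migration needs per-kind review, since e.g. Deployments moved from the 1.5-era `extensions/v1beta1` group toward `apps/v1` while other resources kept their old groups longer):

```shell
# Set up a sample 1.5-era manifest (stand-in for a real repo of manifests).
mkdir -p manifests
cat > manifests/app.yaml <<'EOF'
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web
EOF

# Bulk-bump the apiVersion in every manifest still using the old group.
# (GNU sed; review the diff before applying to kinds that didn't move.)
grep -rl 'apiVersion: extensions/v1beta1' manifests/ \
  | xargs sed -i 's|extensions/v1beta1|apps/v1|'

head -n1 manifests/app.yaml   # → apiVersion: apps/v1
```

In practice you'd run something like this, eyeball the diff, and then re-apply the manifests against the upgraded cluster.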
I'm not going to bullshit people. K8s has its bugs, quirks and is complex, but there seem to be a huge number of people who run away in fear on HN.
It can be understood. I didn't even know how to use Docker before I jumped into learning k8s.