As a spectator, not a tech worker who uses these popular solutions, I would say there seems to be a great affinity in the tech industry for anything that is (relatively) complex. Either that, or the only solutions people today can come up with are complex ones. The more features and complexity something has, and the more it is constantly changing, the more a new solution gains "traction". If anyone reading this has examples that counter this idea, please feel free to share them.
I think if a hobbyist were to "[deploy] a single statically linked, optimized [C++] server that can serve 10k requests per second from a toaster" it would be like a tree falling in the forest. For one, because it is too simple: it lacks the complexity that attracts the tech worker crowd. And second, because it is not being used by a well-known tech company and not being worked on by large numbers of people, it would not be newsworthy.
Developer time spent fixing these bugs is in most cases more expensive than throwing more hardware at your software written in a garbage-collected language.
What do we want to deploy? Okay. Stop monitoring/alerts, okay, flip the load balancer, install/copy/replace the image/binary, restart it, flip the LB back, do the other node(s), keep flipping monitoring/alerts, okay, do we need to do something else? Run DB schema change scripts? Oh fuck, we forgot to do the backup before that!
Also, now we haven't started that dependent service, so we have to roll back, fast. Okay, screw the alerts and the LB, just roll back everything at once.
And sure, all this can be scripted, run from a laptop. But k8s is basically that.
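The checklist above is exactly the kind of thing that benefits from being encoded as data rather than remembered under pressure. A minimal sketch (all step names are hypothetical, not any real tool's API) that turns the "oops, forgot the backup before the schema change" mistake into a program invariant:

```rust
// Hypothetical deploy checklist as ordered steps; each step would shell
// out or call an API in a real script.
#[derive(Debug, PartialEq)]
enum Step {
    DisableAlerts,
    DrainNodeFromLb,
    BackupDatabase,
    RunSchemaMigrations,
    ReplaceBinary,
    RestartService,
    ReattachNodeToLb,
    EnableAlerts,
}

fn deploy_plan() -> Vec<Step> {
    use Step::*;
    vec![
        DisableAlerts,
        DrainNodeFromLb,
        BackupDatabase, // always before the schema change
        RunSchemaMigrations,
        ReplaceBinary,
        RestartService,
        ReattachNodeToLb,
        EnableAlerts,
    ]
}

fn main() {
    let plan = deploy_plan();
    // Check the invariant before touching anything.
    let backup = plan.iter().position(|s| *s == Step::BackupDatabase).unwrap();
    let migrate = plan.iter().position(|s| *s == Step::RunSchemaMigrations).unwrap();
    assert!(backup < migrate, "backup must precede schema migration");
    for step in &plan {
        println!("running {:?}", step);
    }
}
```

Which is the point: once you've written this for N nodes with rollback, you've rebuilt a small, bespoke fraction of what k8s gives you.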
...
And we get distributedness very fast: as soon as you have 2+ components that manage state, you need to think about consistency. Even a simple cache is always problematic (we all know the cache invalidation joke).
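A toy illustration of why even one cache plus one store already counts as "2+ components that manage state" (the `Service` type and its methods are made up for this example):

```rust
use std::collections::HashMap;

// Toy "database" with a read-through cache. Two places hold state, so
// an update that forgets to invalidate leaves readers with stale data.
struct Service {
    db: HashMap<String, i64>,
    cache: HashMap<String, i64>,
}

impl Service {
    fn new() -> Self {
        Service { db: HashMap::new(), cache: HashMap::new() }
    }

    fn read(&mut self, key: &str) -> Option<i64> {
        if let Some(v) = self.cache.get(key) {
            return Some(*v); // cache hit, possibly stale
        }
        let v = self.db.get(key).copied();
        if let Some(v) = v {
            self.cache.insert(key.to_string(), v); // warm the cache
        }
        v
    }

    // Buggy update: writes the DB but forgets the cache.
    fn write_without_invalidation(&mut self, key: &str, value: i64) {
        self.db.insert(key.to_string(), value);
    }

    // Correct update: also drops the cached entry.
    fn write(&mut self, key: &str, value: i64) {
        self.db.insert(key.to_string(), value);
        self.cache.remove(key);
    }
}

fn main() {
    let mut s = Service::new();
    s.write("answer", 41);
    assert_eq!(s.read("answer"), Some(41)); // warms the cache
    s.write_without_invalidation("answer", 42);
    assert_eq!(s.read("answer"), Some(41)); // stale read!
    s.write("answer", 42);
    assert_eq!(s.read("answer"), Some(42)); // invalidation fixes it
}
```

And this is the single-process, single-threaded version; spread it across machines and you've earned the joke.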
Sure, going all in on microservices just because is a bad idea. Similarly k8s is not for everyone, and running DBs on k8s isn't either.
But the state of the art is getting there (e.g. the Crunchy Data PostgreSQL operator for k8s).
If you want to be a successful indie company, avoid the cloud and distributed systems like the plague.
If you want to advance up the big corp career ladder, use Kubernetes with as many tiny instances and microservices as you can.
"Oversaw deployment of 200 services on 1000 virtual servers" sounds way better than "started 1 monolithic high-performance server". But the resulting SaaS product might very well be the same.
All this Docker and k8s stuff just feels like reinventing application servers, just 10x more complex, as a means to sell consulting services.
However, their learning curve is pretty steep (particularly Rust) and most developers don’t enjoy having to worry about low-level issues, which makes recruitment and retention a problem. Whereas one can be reasonably proficient with Python/Ruby in a week, Java/C# is taught in school, and everyone has to know JS anyway (thanks for nothing, tweenager Eich), so it’s easy to pick up manpower for those.
Disclaimer: I neither claim to be a very good developer, nor do I think you, the reader, are only average. The very fact that you are reading Hacker News is a strong indicator of your interest in reflection and self-improvement, regardless of your favorite language.
Every time I have to deal with k8s, I deeply miss application servers.
Within our teams, we've found we can work at an (even) higher level of abstraction by running apps directly on PaaS setups. We found this sufficient for most of our use cases in data products.
And at this point the hobbyist might wonder, "why isn't my toaster software being used by well-known companies? where are the pull requests to add compatibility for newer toaster models?"
> As a spectator, not a tech worker who uses these popular solutions, I would say there seems to be a great affinity in the tech industry for anything that is (relatively) complex.
I think you have it backwards. General/abstract solutions (like running arbitrary software with a high tolerance for failure) have broad appeal because they address a broad problem. Finding general solutions to broad problems yields complexity, but also great value.
Not this one. :)
That, and a mixture of the sunk cost fallacy and a lack of ability to step back and review whether the chosen solution is really better/simpler, rather than a hell of accidental complexity. If you've spent countless months to grok k8s and sell it to your customer/boss, it just has to be good, doesn't it?
Plus, there's a great desire to go for a utopian future, cleaning up all that's wrong with current tech. This was the case with Java in the 2000s, and is the case with Rust (and to a lesser degree WASM) today, and with k8s. Starting over is easier and more fun than fixing your shit.
And another factor is the deep pockets of cloud providers who bombard us with k8s stories, plus devs with an investment in k8s and/or Stockholm syndrome. Same story with webdevs longing for nuclear weapons a la React for relatively simple sites, to make themselves attractive on the job market, until the bubble collapses.
But as with all generational phenomena, the next wave of devs will tear down daddy-o's shit and rediscover simple tools and deploys without gobs of YAML.
The domain I'm working in might be non-representative, but for me, fixing my shit systematically means switching from C++ to Rust. The problems the borrow checker addresses come up all the time, either in the form of security bugs (because humans are not good enough at manual memory management without a lot of help) or in the form of bad performance (because of reference counting or superfluous copies used to avoid manual memory management).
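As a small sketch of both points at once (the function is invented for illustration): borrowing avoids the defensive copy you'd otherwise make, and the lifetime rules make the dangling-reference version a compile error rather than a runtime bug.

```rust
// Returning a reference into the caller's data: no copy, no reference
// counting. The C++ analogue (returning a string_view into a freed
// string) compiles and misbehaves at runtime; here it cannot compile.
fn longest_word(text: &str) -> &str {
    // The returned &str is only valid as long as `text` is; the borrow
    // checker enforces this at the call site.
    text.split_whitespace().max_by_key(|w| w.len()).unwrap_or("")
}

fn main() {
    let text = String::from("fixing your shit beats starting over");
    let w = longest_word(&text);
    assert_eq!(w, "starting");

    // This version does not compile: `local` is dropped at the end of
    // the block while a reference into it would escape.
    //
    // let dangling = {
    //     let local = String::from("temporary");
    //     longest_word(&local) // error[E0597]: `local` does not live long enough
    // };
}
```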
But otherwise I agree with you that if we never put in the effort to polish our current tools, we'll only ever get the next 80%-ready solution out of the hype train.
K8s is developed by a multitude of very large companies, each with their own agenda and needs, all of which have to be addressed. Thus the complexity. If you think about it, they probably manage to keep the complexity to relatively low levels. Maybe because it is pushed out to the rest of the ecosystem (see service meshes, for example).
Being pushed by the behemoths also explains the popularity. Smaller companies and workers feel that this is a safe investment of the money and time spent familiarizing themselves with the tech stack, so they jump on. And the loop goes on.
The main business reason for all that, though, is I think the need of Google et al. to compete with AWS by creating a cloud platform that becomes a standard and really belongs to no one. In this sense it is a much better, more versatile, and more open-ended OpenStack attempt.
And yes, there are less fancy companies, like the one where I work, where we don't use Kubernetes because it's kind of overkill when all of your production workload fits onto 2 beefy bare-metal servers.
I can see a point in using Docker to unify development and production environments into one immutable image. But I have yet to see a normal-sized company that benefits from spreading out to hundreds of micro instances on a cloud and then coordinating that mess with Kubernetes. Of course, it'll be great if you're a unicorn, but most people using it are planning for way more scale than they'll realistically need and are thus overpaying on cloud costs.
Last year, my bare metal website had 99.995% uptime. Heroku only managed 99.98%.
Of course, I could further reduce risk by having a hot standby server. But I'm not sure the costs for that are warranted, given the extremely low risk of that happening.
hah, i've noticed that too - specifically around k8s/deployments/system architecture too. i've taken to calling it complexity fetishisation.
i think it stems from the belief/hope that, whilst they don't have "google sized" data today, they need to allow for it.
i'll take two toasters, please.