k8s is probably a great excuse to think about how to compose your infrastructure and software in a declarative way - I'm still fascinated by https://demo.kubevious.io/ - it just clicked when playing with that demo - it isn't just another tool, it's a different operating system and a different mindset.
You can do 80% of that with docker-compose / swarm for small projects but:
If you read HN you are in a huge bubble - gruesome patched Tomcat 7 apps on Java 8 with 20 properties/ini/xml config files are still popular - hosting things in Docker or doing CI/CD is still not mainstream. At least in the European public-sector projects I was involved in.
Sure you can mock it - but the declarative approach is powerful - if you can pull it off to have it across all your infrastructure and code with ci/cd and tests you are fast.
Correctly implemented, this alone - https://github.com/adobe/rules_gitops - solves so many problems. I can't count the useless meetings we had over any of these bullet points; Bazel alone would have solved most of the major pain points in that project, just by being explicit and declarative.
Don't believe the hype but it's a powerful weapon.
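To make the docker-compose comparison concrete: for a small project, the declarative version can be a dozen lines. A hypothetical minimal sketch (service names and image tags are made up):

```yaml
# Hypothetical compose file: one web service plus a database,
# declared rather than scripted. Names and images are assumptions.
services:
  web:
    image: myorg/myapp:1.0          # assumed application image
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://db:5432/app
    depends_on:
      - db
  db:
    image: postgres:16
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
```

`docker compose up -d` gives you the whole stack; the same file is the documentation of what runs.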
Comparing a troubleshooting guide to running a site on a couple of servers is a bit apples-to-oranges for me. Compare it to a troubleshooting guide for those two servers and let's see how they stack up. No using any "ask {specific person}" shortcuts either.
Don't get me wrong, kubernetes is overkill for most side project level things. I don't disagree, I just like to see things knocked down a peg fairly!
Also, as mentioned in this tweet, they use more than 2 servers anyway:
https://twitter.com/shadowmanos/status/1434980544740306947
They could probably save on resources and maintenance effort if they switched to containers, assuming their footprint is still the same size or larger.
> This is #1 in a very long series of posts on Stack Overflow’s architecture. Welcome.
https://stackexchange.com/performance
1.3 billion page views per month, 9 web servers, 4 sql servers
Stack Overflow is notable because they went down the C#/MVC/SQL Server route from the start, which meant much better performance per server. That's why they make an interesting counterexample to the usual approach...
And also notable because everything is under an expensive license, so big performant servers is the cheaper option.
Edit: everything = Windows servers for their .NET app ( apparently in the process of migrating to .NET Core) and SQL Server
It's worth mentioning that the diagram is explicitly incomplete. The yellow endpoints are fixes, but except for "END" the other endpoints are all either "unknown state" (i.e. "I have no idea what's broken") or problems that aren't addressed in further detail, like "The issue could be with Kube Proxy" or even "Consult StackOverflow".
I'm not sure what a complete diagram would even look like but I don't think there's any way to infer complexity by looking at them in comparison.
SO is relatively simple; it's basically customized forum software, which is a solved problem that has been around for decades. A junior dev could build an alternative, and it could be built with tried-and-true tools like MySQL + PHP, which scale horizontally with database sharding, read replicas, and maybe things like memcached to accumulate votes before updating the database, or a CDN for caching static files.
Google has different problems and different workloads, and they have hundreds of times more applications with thousands of times more load. Apples and oranges.
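The vote-batching idea mentioned above, as a toy Python sketch. A plain dict stands in for memcached here; the real thing would use a memcached client and a periodic flush job, and all names are made up:

```python
# Toy sketch: batch vote increments in a cache, then flush them to the
# database in one write per post instead of one write per vote.

cache = {}            # pending vote deltas, keyed by post id (stand-in for memcached)
db_votes = {101: 5}   # pretend database row: post 101 currently has 5 votes

def upvote(post_id):
    # Cheap in-memory increment; no database round trip per click.
    cache[post_id] = cache.get(post_id, 0) + 1

def flush():
    # Periodic job: apply the accumulated deltas in a single batched pass.
    for post_id, delta in cache.items():
        db_votes[post_id] = db_votes.get(post_id, 0) + delta
    cache.clear()

for _ in range(3):
    upvote(101)
flush()
```

Three upvotes become one database write; the trade-off is that votes in the cache are lost if the process dies before a flush.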
> Google has different problems and different workloads
Which of these do you think most organisations most closely resemble?
I don’t think anybody would disagree if you said that you should use Kubernetes for organisations that resemble Google. But most organisations don’t look anything like Google. They look a lot more like Stack Overflow. So the “You don’t need this…” statement holds true for almost everyone.
Of course a junior dev can do it, the same way a junior dev can make YouTube -
it'll work as long as there are fewer than 100 concurrent users on SO and fewer than fifty 4K 20-minute videos on YouTube.
Except 99% of forums run terrible software that doesn't perform well, isn't easily usable, and doesn't work right on phones. That tells me it's not a solved problem at all.
The features that seemed to advocate for k8s were not server provisioning, but rather:
log management, easy setup of blue/green & canary deployments, not having to restart a VM on each code deployment, etc...
How would you do those things as easily with other techs ?
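For reference, a canary setup in k8s can be as small as two Deployments behind one Service: the Service selects only on the shared label, so traffic splits roughly by replica count. A hypothetical sketch (all names and images are made up):

```yaml
# Hypothetical canary: Service matches app=web on both tracks,
# so ~1 in 10 requests lands on the new version (9 stable : 1 canary).
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web              # matches both stable and canary pods
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-stable
spec:
  replicas: 9
  selector:
    matchLabels: {app: web, track: stable}
  template:
    metadata:
      labels: {app: web, track: stable}
    spec:
      containers:
        - name: web
          image: myorg/web:1.0    # current version
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-canary
spec:
  replicas: 1                     # ~10% of traffic
  selector:
    matchLabels: {app: web, track: canary}
  template:
    metadata:
      labels: {app: web, track: canary}
    spec:
      containers:
        - name: web
          image: myorg/web:1.1    # candidate version
```

Promoting the canary is just scaling one Deployment up and the other down; rolling back is deleting the canary Deployment.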
It is a choice. I have personally moved on from the "Kubernetes is never a good choice over running things yourself" camp.
I've written about Nomad vs k8s on my blog if that might interest you:
https://atodorov.me/2021/02/27/why-you-should-take-a-look-at...
And I've also written about some common things, like Traefik for ingress, Loki for logs, etc. to supplement the pretty complete Hashicorp tutorials.
As long as your applications follow 12-factor principles it shouldn't be too hard to move between different orchestration tools and you can pick the one that best suits your needs.
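The 12-factor part is mostly about keeping orchestrator-specific details out of the code by reading everything from the environment. A minimal Python sketch (variable names are my own, not from any particular framework):

```python
import os

def load_config(env=None):
    """Build runtime config purely from environment variables (12-factor III).

    Whether k8s, Nomad, or docker-compose injects the values, the
    application code stays identical; locally you get sane defaults.
    """
    env = os.environ if env is None else env
    return {
        "db_url": env.get("DATABASE_URL", "postgres://localhost:5432/dev"),
        "port": int(env.get("PORT", "8080")),
        "log_level": env.get("LOG_LEVEL", "info"),
    }

# The orchestrator sets the variables; the app never knows which one it was.
print(load_config({"PORT": "9000", "LOG_LEVEL": "debug"}))
```

Moving between orchestrators then means rewriting a few lines of manifest, not the application.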
You'll still get layer upon layer of abstraction - for example Consul for key-value storage and service discovery, Traefik for load balancing, Terraform to build up the service discovery rules, etc. - but it feels more intentional, with less boilerplate.
all that a modern web app needs out of the box, with a day to learn instead of years... and the best part? You don't have to modify your code to run on it, i.e. environment and code stay separate.
I'm sure a troubleshooting map for a bare Linux server wouldn't be any less complicated than that.
Except your k8s runs on a Linux server, so this is purely additive. (Unless you're using a fully managed k8s cloud offering, but then you have an even bigger troubleshooting flowchart to navigate the provider's management interface: at least that's my experience with GKE; maybe Amazon and others are better.)
Wouldn't it be more likely in this case that the server is built from configs? Ansible or whatever
The troubleshooting for the Linux server side is "spin up a new one and delete the old one"
https://stackoverflow.blog/2021/07/21/why-you-should-build-o...
Does it?
Pretty impressive I think.
• 9 web servers
• 4 SQL servers
• 2 Redis servers
• 3 tag engine servers
• 3 Elasticsearch servers
• 2 HAProxy servers
That comes to 23. I know “a couple” is sometimes used to mean more than two, but… not that much more than two.
“A couple” is just flat-out wrong; I’d guess that he’s misinterpreting ancient figures, taking the figures from no later than about 2013 about how many web servers (ignoring other types, which are presently more than half) they needed to cope with the load (ignoring the lots more servers that they have for headroom, redundancy and future-readiness).
Stack Overflow didn't use that either; instead they invented their own micro-ORM/mapper, known as Dapper.
K8s can as well.
The difference is a bunch of servers running k8s or a bunch of servers running custom code to duplicate parts of k8s.
Every other project has different constraints.
Nowadays, everybody insists on putting stuff on K8s regardless of how large or small it is.
An application is an application for the purpose of running it on a server. It doesn't really matter how much functionality it has.
It is the microservices "revolution" (quotes intentional) that caused larger applications to be split into lots of small ones, and complicated the execution environment to the point that a lot of people spend a lot of time just trying to figure out how to run their applications reliably.
That is not necessary.
If you can have multiple microservices, more likely than not you can have them as separate libraries used by a single application, or as separate modules of a single application. Just put the same thought into modularizing them as you would into designing microservice APIs, and you get the same result, but much simpler and with much better performance (no serialization/deserialization, no network hops, no HTTP stack, no auth, etc.)
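A toy Python sketch of that idea: keep the "microservice" boundary as an in-process module with the same contract you would have given the remote service, and the overhead disappears. All class and method names here are invented for illustration:

```python
# Hypothetical modular monolith: the billing "service" is an in-process
# module. The interface discipline is the same as a service API; the
# serialization, network hop, and auth handshake are gone.

class BillingModule:
    """Same contract you would have given a billing microservice."""
    def charge(self, user_id: int, cents: int) -> dict:
        # A real implementation would talk to a payment provider here.
        return {"user_id": user_id, "charged": cents, "status": "ok"}

class OrdersModule:
    def __init__(self, billing: BillingModule):
        # Dependency injected, exactly like handing a service client around.
        self.billing = billing

    def checkout(self, user_id: int, cents: int) -> dict:
        # A plain function call instead of an HTTP POST with JSON bodies.
        return self.billing.charge(user_id, cents)

result = OrdersModule(BillingModule()).checkout(42, 1999)
```

If you ever do need to split a module out, a boundary designed this way maps onto a service API with far less surgery.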
But if you already have the tooling, experience and support for k8s, why wouldn't you use it?
I can fire up a k8s cluster on any major cloud provider in minutes (or bare metal in slightly longer), and deploy almost any web app using a standard interface that has extremely wide adoption.
K8s supports a ton of things, but you don't have to use them. It can be complicated, but usually it's not.
It feels a bit like saying why use Linux for your simple app when it could run just fine on DOS or QNX. How many years of my life have I wasted debugging networking issues on Linux or Windows that turned out to be caused by a feature I wasn't even (intentionally) using...
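That "standard interface" really is just a couple of declarative manifests that work the same on any conformant cluster, managed or bare metal. A minimal hypothetical example (image name made up):

```yaml
# Hypothetical minimal deploy: the same file applies unchanged on any
# provider's cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 2
  selector:
    matchLabels: {app: hello-web}
  template:
    metadata:
      labels: {app: hello-web}
    spec:
      containers:
        - name: web
          image: myorg/hello-web:latest   # assumed image
          ports:
            - containerPort: 8080
```

`kubectl apply -f deploy.yaml` is the whole workflow, whoever runs the cluster underneath.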