It's not complex to set up a load balancer in one specific environment. But it's another kind of ask to say "set up a load balancer, but also make it so that the load balancer exists in future dev environments that can be automatically set up and torn down. And also make it work on dev laptops, AWS, Azure, Google Cloud, our private integration test cluster on site, and our locally-hosted training environment, with the same configuration script." All of these things can be done in k8s, and basically are by default when you add your load balancer in k8s. They can be done other ways too, or just ignored and not done. But k8s offers a standardized way to approach these kinds of things.
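To make that concrete, here is a minimal sketch of what "adding your load balancer in k8s" looks like; the app name, labels, and ports are hypothetical. The same manifest applies unchanged whether the cluster runs on a laptop, AWS, Azure, or on-prem, and each environment's controller decides how the load balancer is actually provisioned.

```yaml
# Hypothetical Service; names and ports are illustrative.
# "type: LoadBalancer" asks whatever cluster this is applied to
# to provision a load balancer in its own native way
# (a cloud LB on AWS/Azure/GCP, something like MetalLB on bare metal).
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80          # externally exposed port
      targetPort: 8080  # port the app's pods listen on
```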
I've been having this thought very often lately.
The only way for humans to do something faster is to use a machine. Every machine is built on the assumption that something is repeatedly true: that some things can be interacted with in the same way, over and over.
Finding true invariants is very hard, but our world is increasingly malleable. Over time it is getting easier to invent new invariants and pad things out so that the invariant holds.
Ansible and Vagrant are not perfect, but I think they are far simpler than a single node k8s instance, and more representative of an actual production environment.
This is not my strength in any way, but from what I hear from those teams, Kubernetes will be a godsend.
The time I spent managing HAProxy for 5 services was greater than the time I spent managing load balancing and routing with k8s for >70 applications that together required >1000 load-balanced entrypoints.
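As a rough illustration of why that scales, in k8s each additional entrypoint is a few declarative lines rather than a hand-maintained HAProxy frontend/backend stanza. The hostnames and service names below are made up:

```yaml
# Hypothetical Ingress; hosts and service names are illustrative.
# Each additional entrypoint is one more rule here, not a new
# hand-edited frontend/backend pair in an HAProxy config file.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: team-routes
spec:
  rules:
    - host: billing.example.internal
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: billing
                port:
                  number: 80
    - host: reports.example.internal
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: reports
                port:
                  number: 80
```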
It's a lever for the sysadmin to spend less time on unnecessary work.