Gotta be super careful with this one. We did this at reddit and it bit us bad. The problem was that as soon as the load on a machine went down it got pounded with new requests and the load shot up, but it took a few seconds for the load number to react to all the new requests. So we saw a really bad see-saw effect.
We had to add extra logic to track how long a machine had been at a certain load, and also randomly send requests to slightly more loaded machines to keep things even.
The moral of the story here is make sure you pick a metric that reacts to the change in request rate as quickly as your request rate changes!
I'm curious how you reached this condition as a requirement:
> The moral of the story here is make sure you pick a metric that reacts to the change in request rate as quickly as your request rate changes!
It makes sense intuitively, but I'm having trouble proving to myself that this is necessary+sufficient.
Picking a metric that reacts to changes quickly is neither necessary nor sufficient, but it certainly helps reduce the error on your calculation. You need to know how far off your set point you are and so you need as accurate a measurement as possible.
Imagine a cruise control for a car where the speedometer had a five second delay. You’d still stay at your desired speed on average but it would vary a lot more and require more work to get back to the desired speed. It would have to accelerate harder and brake harder.
I worked on a system that used total connections as our heuristic, measured by the load balancer. The problem we experienced was that some failure scenarios could cause requests to fail quickly compared to normal traffic. In effect what would happen is that a host would go into a bad state, start failing requests with a lower latency than normal traffic causing the load balancer to route an increasing amount of traffic to the bad host. This happened because the load balancer was only capable of measuring connections and didn’t discriminate between good/bad responses.
We ended up injecting fake latency into bad responses at the application layer which worked to prevent this sort of “black hole” effect.
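A minimal sketch of that fix in Python (the handler shape and the 200 ms latency floor are hypothetical, just to show the idea of padding fast failures so a connection-counting load balancer can't mistake a failing host for an idle one):

```python
import time

# Hypothetical floor: errors should cost at least as much wall-clock
# time as a typical successful response, so "least connections" doesn't
# funnel traffic into the failing host.
MIN_ERROR_LATENCY = 0.200  # seconds

def handle(request, process):
    start = time.monotonic()
    try:
        return process(request)
    except Exception:
        elapsed = time.monotonic() - start
        # Hold the connection open so a fast failure is no "cheaper"
        # than a success from the load balancer's point of view.
        if elapsed < MIN_ERROR_LATENCY:
            time.sleep(MIN_ERROR_LATENCY - elapsed)
        raise
```

The same padding can live in the load balancer itself if it can distinguish response codes; here it sits in the application layer because, as above, the balancer only saw connections.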
> The moral of the story here is make sure you pick a metric that reacts to the change in request rate as quickly as your request rate changes!
The key things are that you don't react to changes in the metric faster than the metric can move (be appropriately damped), and that you react to the metric "smoothly" (e.g. pick a random server where the odds to get a specific server vary with the loading metric, like you mention).
As others say, it's fundamentally a controls problem... a controls problem where there are many, many actuators and the delay/phase shift is relatively unpredictable. Making things easy for the control system by making reaction smooth and the system overall react slowly (overdamped) is important.
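That "smooth" policy can be sketched in a few lines of Python (the load scale in [0, 1) and the weight formula are assumptions for illustration, not a specific production scheme):

```python
import random

def pick_server(servers, load):
    """Weighted random pick: lower load -> higher odds of selection.

    `servers` is a list of names; `load` maps name -> load in [0, 1).
    Because even the most loaded server keeps a small nonzero weight,
    traffic shifts gradually instead of stampeding onto whichever
    server momentarily reports the lowest number.
    """
    weights = [1.0 - min(load[s], 0.99) for s in servers]
    return random.choices(servers, weights=weights, k=1)[0]
```

Compared with "always pick the least-loaded server", this avoids the see-saw: a server whose metric lags reality only attracts proportionally more traffic, not all of it.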
The claim is that it results in pretty decent behavior under load while avoiding some problems caused by delayed information.
They work better when there's just one in front of a fleet of servers, so it has the total picture of what is going on, but of course that's quite the bottleneck. So you get two LBs, or more, and each only has its own notion of what the back end fleet is doing. There's no standard feedback mechanism to them at all.
Some offer approaches like measuring response time, but that doesn't work so great as soon as you consider APIs where no two requests perform the same. Was it a fast request that got answered slowly (back-end overloaded?), or a slow request that got answered quickly (back-end bored?). Who knows.
For the service I was working on a few years ago, no two requests were the same by any stretch of the imagination, even for the same API call; they varied both in request size and in the computational power required to process them.
As you'd expect, traditional load balancer behaviour actually handled things to an okay degree probably 90% of the time. That 10% was a real killer though.
The intention was that if a server was returning results "quickly" that meant it was least-loaded, and could handle the newest requests.
What it actually meant though was that the server disk filled up, and it started returning "500, Internal Server Error" errors. Very quickly.
At the point the alarms were raised almost all incoming traffic had been routed to this dead/dying host.
We also had to keep track of how many people were on the webpage for a channel when it went live so that we could preemptively replicate the video stream to enough, but not too many, servers.
A useful metric for "load" is just as hard as doing the load balancing itself.
Requests can be vastly different; unless you have only one application, they're also constantly changing, and there are often multiple load balancers involved (both horizontally and vertically, since a single request can pass through several). There are also numerous failure conditions under which responses are very fast.
In that situation it's easier to design for an even spread, then work to improve that metric as much as possible as more information becomes available.
http://www.brendangregg.com/bpf-performance-tools-book.html
Of course, you can always just use cloudflare ;)
I managed a team that built a 5x 1000 node distributed setup 10+ years ago.
We ended up going with
a) short DNS TTL + a custom DNS server that sent people to the closest cluster (with some intra-communication to avoid sending people to broken clusters)
b) in each cluster, three layers: 1) Linux keepalived load balancing, 2) our custom HTTP/TLS-level load balancers (~20 nodes per DC), 3) our application (~1000 nodes per DC)
A typical node had 24 (4x6) CPU cores when we started and 48 (4x12) towards the end.
These were not GC/AWS nodes, we were buying hardware directly from IBM/HP/Dell/AMD/Intel/SuperMicro and flying our own people out to mount them in DCs that we hired. Intel gave us some insane rebates when they were recovering from the AMD dominance.
Load-balancing policy: we just randomized targets, but kept sticky sessions. Nodes were stateless, except for shared app properties - we built a separate globally/dc-aware distributed key-value store - that was a whole new thing 12 years ago we built based on the vague concept of AWS Dynamo. App nodes reported for duty to the load balancers when they were healthy.
We had a static country-to-preferred-DC mapping. That worked fine at this scale.
This setup worked fine for a decade and 250M+ MAUs. We had excellent availability.
At some point like 10 years ago a kinda well known US-based board member really, really wanted us to move to AWS. So we did the cost calculations and realized it would cost like 8X more to host the service on AWS. That shut him up.
Different times. It's so much easier now with AWS/GC to build large-scale services. But also so much more expensive - still! I wonder how long that can last until the concept of dealing with computation, network and storage really becomes a commodity.
Basically one CPU second per web page. 150k pages/second @ peak. 5 million HTTP requests/s. 150 Gbit/s. The web for 250 million people.
Kinda insane numbers when I think about it now, still. (I left five years ago, after it peaked.)
Two years or so back, I stumbled on power-of-2 load balancing via Twitter Finagle documentation. Found it pretty interesting. Here is a relevant news.yc discussion: https://news.ycombinator.com/item?id=14640811
And of course, the exponentially weighted moving average is a good algorithm too. It is, I believe, used by Elasticsearch. Cloudflare blogged about using it, as well: https://blog.cloudflare.com/i-wanna-go-fast-load-balancing-d...
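The two ideas combine nicely; here's a rough Python sketch of power-of-two-choices with a per-backend EWMA latency estimate (the class shape and the 0.3 smoothing factor are my own assumptions, not Finagle's or Elasticsearch's actual implementation):

```python
import random

class P2CBalancer:
    """Power-of-two-choices over EWMA-smoothed latency.

    On each request, sample two backends uniformly at random and send
    the request to whichever has the lower smoothed latency estimate.
    """

    def __init__(self, backends, alpha=0.3):
        self.ewma = {b: 0.0 for b in backends}
        self.alpha = alpha  # smoothing factor; higher reacts faster

    def pick(self):
        a, b = random.sample(list(self.ewma), 2)
        return a if self.ewma[a] <= self.ewma[b] else b

    def observe(self, backend, latency):
        # Exponentially weighted moving average of observed latency.
        prev = self.ewma[backend]
        self.ewma[backend] = self.alpha * latency + (1 - self.alpha) * prev
```

The appeal is that comparing only two random candidates avoids the herd-to-the-minimum behaviour described upthread, while the EWMA damps out noisy individual measurements.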
True, your 99th-percentile slowest requests won't hit the cache, and certainly caching won't solve all your scaling difficulties.
However, keeping requests for commonly-needed data away from (say) a DB cluster decreases the load on it at a given level of throughput, and that can be good for P99, and (as the post notes) caching can specifically help with super-hot data which can cause problematic hotspots in some sharding strategies.
Obviously situations vary and there're limits, but a cache seems like a legit tool, not just a band-aid, for a decent number of situations.
Edit: Link to paper http://www.eecs.harvard.edu/~michaelm/postscripts/handbook20...
Also see, this nice little blog post: https://maisonbisson.com/post/hash-rings-sharding-request-re...
"If all the clickstream clicks from the same user or state-change events from the same workflow or whatever go to the same host, you can hold the relevant state in that host’s memory, which means you can respond faster and hit your database less hard."
It seems that in both cases the idea is to distribute (supposedly independent) requests between workers and one of the main difficulties is that requests might not be independent either within one stream (say, in the case of sessions) or between different streams (say, if they need to use one common state).
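The "same user goes to the same host" routing in that quote can be sketched with a stable hash (this is a simple modulo sketch; a real system would use consistent hashing, as in the hash-rings post linked above, so that adding or removing a host only reshuffles a fraction of the keys):

```python
import hashlib

def route(key, hosts):
    """Map a user/workflow key to a host, deterministically.

    Every event carrying the same key lands on the same host, which
    lets that host keep the relevant state in memory.
    """
    digest = hashlib.sha256(key.encode()).digest()
    return hosts[int.from_bytes(digest[:8], "big") % len(hosts)]
```

The dependence between requests is exactly what breaks the "independent requests" assumption: a purely random spread would scatter one user's events across the fleet and force every host to hit the database for state.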