[0] https://www.youtube.com/watch?v=lJ8ydIuPFeU
[1] https://bravenewgeek.com/everything-you-know-about-latency-i...
I don't agree with all of it, but definitely a few points made directly or indirectly hit home, such as:
- there is no single metric that can accurately represent "latency"
- most of our metrics are misleading in what they unconsciously include or exclude
I can remember once looking at a graph of requests/second and wishing I could see a distribution of requests per millisecond within an individual second. That level of detail is hard to come by, so in the meantime, we do what we can with the data we have.
Would you rather see "number of requests started at this ms" (you seem to suggest this), or is something else more interesting?
I think a sort of Gantt chart that plots duration of requests as well as starting time within the time span (e.g. a second or more) might be very informative. Each individual request on a different position on the Y axis, time on the X axis. Perhaps you have some bound on requests in flight, that could be the height of the Y axis, so you can easily see calm or busy periods.
At least our observability stack doesn't show this level of detail, but it would be very interesting to have it. (We do have calculated heatmaps based on maximum request time in Grafana, which is at least better than plots of average request times)
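A minimal sketch of that kind of chart, assuming you can pull a start timestamp and duration per request out of your access logs (the data below is invented):

    import matplotlib.pyplot as plt

    # Invented sample data: (start_time_s, duration_s) for requests inside one window.
    requests = [(0.012, 0.030), (0.015, 0.120), (0.340, 0.020), (0.341, 0.450), (0.900, 0.080)]

    fig, ax = plt.subplots()
    for lane, (start, duration) in enumerate(sorted(requests)):
        # One horizontal bar per request: x = time within the window, y = its own lane.
        ax.broken_barh([(start, duration)], (lane, 0.8))
    ax.set_xlabel("time within the window (s)")
    ax.set_ylabel("request")
    plt.show()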
I'd want to log when the requests start, as I'm mostly concerned with how well-distributed request arrival was at that level of granularity.
I wondered if the network layers in between my client and server were effectively "smoothing" request arrival across each second, or if instead requests were very bursty so that a per-minute spike in a typical graph was dominated by a few seconds or milliseconds within that minute.
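As a rough sketch of what I mean (timestamps invented; in practice they would come from logging each request's arrival):

    from collections import Counter

    # Invented arrival timestamps in seconds within one minute of traffic.
    arrivals = [0.0012, 0.0013, 0.0014, 0.2501, 0.2502, 12.0401, 59.9998]

    # Bucket arrivals into 1 ms bins: if a handful of bins dominate, the
    # per-minute "spike" was really a burst of a few milliseconds.
    per_ms = Counter(int(t * 1000) for t in arrivals)
    print(per_ms.most_common(5))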
Queueing is only useful for a few cases, IMO:
* The request is expensive to reject. For example, the inputs to the rejected request also came from expensive requests or operations (like a file upload). So rejecting the request because of load will multiply the load on other parts of the system. You still need backpressure or forwardpressure (autoscaling).
* Losing a request is expensive, delaying the result is not. Usually you want a suitably configured durable queueing system (e.g. Kafka) if you have this scenario.
* A very short queue is acceptable if it's necessary that downstream resources are kept 100% busy. A good example is a router: the output to a slower link might queue 1-2 packets so that there is always something to send, which maximizes throughput.
* If you have very bursty traffic, you can smooth the bursts to fit within your capacity. But this runs the risk of the queue always being full, which you have to manage with load shedding (either automated or manual); a rough sketch follows.
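A sketch of that last case, with invented numbers: a bounded buffer drained at the downstream rate, where anything arriving while the buffer is full gets shed instead of queued forever.

    from collections import deque

    CAPACITY = 100          # how much burst we are willing to absorb (invented)
    DRAIN_PER_TICK = 10     # what downstream can actually process per tick (invented)

    buffer, shed = deque(), 0

    def on_burst(batch):
        """Accept a burst: queue what fits, shed the rest instead of growing forever."""
        global shed
        for item in batch:
            if len(buffer) < CAPACITY:
                buffer.append(item)
            else:
                shed += 1

    def on_tick(process):
        """Drain at the downstream rate, smoothing the burst across ticks."""
        for _ in range(min(DRAIN_PER_TICK, len(buffer))):
            process(buffer.popleft())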
----
An underappreciated queue type is LIFO (last-in, first-out). It sounds unfair, but it keeps the median response time steady at the cost of the maximum response time, and it behaves well when full: it degrades into either responding quickly or rejecting outright, so it works well for dealing with bursty traffic.
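A toy sketch of that behavior (the bound is arbitrary): a bounded LIFO that serves the newest request first and rejects outright when full.

    class BoundedLifo:
        def __init__(self, limit=64):
            self.limit = limit
            self.items = []

        def submit(self, request):
            """Fail fast when full instead of letting wait times grow."""
            if len(self.items) >= self.limit:
                return False
            self.items.append(request)
            return True

        def take(self):
            """Serve the most recently submitted request, keeping the median latency low."""
            return self.items.pop() if self.items else None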
Why is that beneficial?
1. Slip every job, annoying all of the customers whose jobs are queued up. You get a bad reputation.
2. Move on to the next job on time, and gradually complete the stalled job in the background by sending workers back to it when you have spare capacity (which you should have, because in general you must overestimate or things will go badly wrong). That customer will suffer because their job takes a multiple of the expected time, but all of the other customers are happy, so your reputation stays good.
I had a section in the post I cut out about how optimizing queue selection started out as a technical problem, but transformed into more of a business and ethical problem the more I pondered it.
You're effectively deciding how to distribute suffering across a large group of people.
Comes up in any situation where large metric gains can be accomplished by optimizing for specific groups - recommender and personalization systems are another example.
You therefore only need enough bolts at each station that they won't run out before the restocker completes a lap, but not so many that they get in the way.
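Back-of-the-envelope, with invented numbers:

    BOLTS_PER_MINUTE = 4      # assumed consumption rate at one station
    LAP_MINUTES = 10          # assumed time for the restocker to complete a lap
    SAFETY_FACTOR = 1.25      # assumed slack for variability

    min_stock = BOLTS_PER_MINUTE * LAP_MINUTES * SAFETY_FACTOR
    print(min_stock)          # 50.0 bolts: enough not to run out, few enough not to pile up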
Kingman's formula says that as you approach 100% utilization, waiting times explode.
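For reference, Kingman's approximation for the mean wait in a G/G/1 queue is roughly (rho / (1 - rho)) * ((Ca^2 + Cs^2) / 2) * mean service time; a quick numeric sketch (service time and variability values are assumptions) shows the blow-up near saturation:

    def kingman_wait(rho, ca2=1.0, cs2=1.0, service_time=0.010):
        """Approximate mean queueing delay for a G/G/1 queue.
        ca2/cs2 are squared coefficients of variation of interarrival and service times."""
        return (rho / (1.0 - rho)) * ((ca2 + cs2) / 2.0) * service_time

    for rho in (0.50, 0.90, 0.99):
        print(rho, round(kingman_wait(rho), 4))
    # 0.5 -> 0.01 s, 0.9 -> 0.09 s, 0.99 -> 0.99 s: waiting time explodes as utilization -> 1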
The correct way to deal with this is bounded queue lengths and back pressure. I.e. don't deal with an overloaded queue; don't allow an overloaded queue.
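A minimal sketch of that, assuming the surrounding service can translate "full" into something like HTTP 429 with Retry-After:

    import queue

    work = queue.Queue(maxsize=1_000)   # hard bound, chosen for illustration

    def enqueue(job):
        try:
            work.put_nowait(job)
            return True
        except queue.Full:
            # Back pressure: refuse now so the caller slows down,
            # rather than letting waiting time grow without bound.
            return False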
Does it reject entries when service times are too high?
Your debugging effort may become more predictable when the system measures how long workers take to complete each job.
I note you say it used to work while overloaded. I would argue it probably had hidden problems. Perhaps ask those people what the acceptable service time is and lock it in by refusing new entries when it is exceeded.
If they want both infinite queue length and consistently acceptable service times, then you must add enough worker capacity to deliver that.
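A sketch of that policy (the threshold and window are assumptions to be agreed with the users):

    from collections import deque

    ACCEPTABLE_SECONDS = 30.0        # the agreed acceptable service time (assumed)
    recent = deque(maxlen=50)        # completion times of the last N jobs

    def record_completion(elapsed_seconds):
        recent.append(elapsed_seconds)

    def accept_new_entry():
        """Refuse new entries while measured service time exceeds the agreed bound."""
        return not recent or max(recent) <= ACCEPTABLE_SECONDS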
You may also want to implement reader/writer locks if your load has many more reads than writes.
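If it helps, a minimal readers-writer lock looks roughly like this (a naive version; it lets writers starve under constant read load):

    import threading

    class ReadWriteLock:
        """Many concurrent readers, one exclusive writer."""
        def __init__(self):
            self._cond = threading.Condition()
            self._readers = 0
            self._writer = False

        def acquire_read(self):
            with self._cond:
                while self._writer:
                    self._cond.wait()
                self._readers += 1

        def release_read(self):
            with self._cond:
                self._readers -= 1
                if self._readers == 0:
                    self._cond.notify_all()

        def acquire_write(self):
            with self._cond:
                while self._writer or self._readers:
                    self._cond.wait()
                self._writer = True

        def release_write(self):
            with self._cond:
                self._writer = False
                self._cond.notify_all()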
Unfortunately, nobody really teaches these things in a clear way, and plenty of engineers don't fully understand them either.
The purple account is just plain wrong. Classically, the full architecture is this (keeping in mind that all rules are sometimes broken):
* CQRS is the linchpin.
* You generally only queue commands (writes). A few hundred ms of latency on those typically won't be noticed by users.
* Reads happen from either a read replica or cache.
The problems the author faces are caused by cherry-picking bits of the full picture.
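Very roughly, and with everything invented for illustration: commands go onto a bounded queue and are applied asynchronously on the write side, which also refreshes the read model; queries never touch that queue.

    import queue

    commands = queue.Queue(maxsize=10_000)   # writes only; a few hundred ms of lag is fine
    read_model = {}                          # stand-in for a cache or read replica

    def handle_command(cmd):
        commands.put(cmd)

    def command_worker():
        while True:
            cmd = commands.get()
            read_model[cmd["key"]] = cmd["value"]   # apply the write, refresh the read side
            commands.task_done()

    def handle_query(key):
        return read_model.get(key)               # reads never wait on the command queue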
A queue is a load smoothing operator. Things are going to go bad one way or another if you exceed capacity; a queue at least guarantees progress (up to a point). It's also a great metric to use to scale your worker count.
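For the scaling point, the usual shape of it (constants invented) is to target a fixed backlog per worker:

    def desired_workers(queue_depth, target_backlog_per_worker=100,
                        min_workers=1, max_workers=64):
        """Size the worker pool so each worker carries roughly a fixed backlog."""
        want = -(-queue_depth // target_backlog_per_worker)   # ceiling division
        return min(max_workers, max(min_workers, want))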
> What will you do when your queue is full
If your queue fills up you need to start rejecting requests. If you have a public-facing API there's a good chance that there will be badly behaved clients that don't back off correctly, so you'll need a way to IP-ban them until things calm down. AWS API Gateway and Azure APIM can help with this.
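A crude sketch of that kind of protection (window and limit are invented; real deployments would push this down to the gateway layer):

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS, MAX_PER_WINDOW = 10, 200
    hits = defaultdict(deque)

    def allow(ip):
        """Sliding-window per-IP limiter for clients that refuse to back off."""
        now = time.monotonic()
        q = hits[ip]
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()
        if len(q) >= MAX_PER_WINDOW:
            return False      # effectively a temporary ban until they calm down
        q.append(now)
        return True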
If you're separating commands and queries you should _typically_ see more headroom.
But even if you shifted reads to one or more caches or read replicas, wouldn't those also have queues that will fill up when you are under-provisioned?
Note that I'm using the term "queue" pretty loosely, to include things like Redis' maxclients, the TCP backlog, or client-side queues when all connections are in use.
As long as you have capacity to keep it mostly empty, it's fine. When requests back up, at least some people will still get quick responses, instead of making everyone suffer.
For a stack, a backup means that some requests are informally forgotten, and although they still appear to be open, they will not complete until the end of time.
That's not necessarily worse. It's a better match to the behavior you want, except for the part where the old requests still appear to be open. You need to actually close them.
You might also want to consider how requesting behavior will change when requests are stacked instead of queued. As soon as people have learned that you keep requests in a stack, the correct way to make a request is to make it, wait for a very small amount of time, and then, if your request hasn't already succeeded, repeat it.
Guess what will happen then?
It would be very hard to learn this so long as the queue is a very small fraction of the total throughput. If the queue depth is 100, and you receive 10,000 qps but process 9,900 qps, the queue will fill up, and roughly 100 calls per second will go unanswered. Ideally you should have another mechanism to time these out, which most systems do. Whatever queue type you pick, you are going to reject about 1% of the inbound, but with a FIFO queue you will also delay 100% of the responses. Do that at several layers, and you can end up with the client timing out even though their request wasn't rejected at any stage.
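Spelling out the arithmetic with the same numbers:

    ARRIVAL_QPS, SERVICE_QPS, QUEUE_DEPTH = 10_000, 9_900, 100

    rejected_per_second = ARRIVAL_QPS - SERVICE_QPS       # 100 rps rejected, ~1% of inbound
    delay_added_when_full = QUEUE_DEPTH / SERVICE_QPS     # ~0.0101 s added to every response (FIFO)
    print(rejected_per_second, round(delay_added_when_full, 4))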
All metrics up! Will fit nicely in my promo packet.