So in many, if not most, contemporary Information Retrieval (IR) problems, the total document set is larger than could be explored on an interactive basis, so the data structures are laid out such that, with probability better than a coin flip, you’ll find “better” documents in the “front” half. This is hand-waving a lot of detail away, so if you’d like me to go into some detail about multi-stage ranking, compact posting lists, and so on, I’m happy to do that in a subsequent comment.
But it’s a useful fiction as a model, and the key part is that there’s still “good” stuff in the “back” half: you’d like to consider everything if you had time.
A PID (proportional-integral-derivative) controller, again oversimplifying a bit, is charged with one primary task: given some observed quantity (the temperature in a room) and some controlled quantity (how hard to run the AC), keep the observed quantity as close to a target as possible by manipulating the controlled quantity.
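To make that concrete, here’s a minimal Python sketch of the controller itself; the class name, gains, and update signature are purely illustrative, not any particular library’s API:

    class PID:
        """Toy proportional-integral-derivative controller (illustrative only)."""

        def __init__(self, kp, ki, kd, setpoint):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.setpoint = setpoint      # target for the observed quantity
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, measured, dt):
            # how far the observed quantity is from the target
            error = self.setpoint - measured
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            # adjustment to apply to the controlled quantity
            return self.kp * error + self.ki * self.integral + self.kd * derivative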
If you hook one of these things up to an IR/search system (web search, friend search, eligible ads, you name it) where the observed quantity is, e.g., the p95 or p99.9 latency of the retrieval, and the controlled quantity is how “deep” to go into the candidate set, something magical happens: you always do something close to your best even as the macro load on the system varies.
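Here’s roughly what that hook-up looks like as a control loop, reusing the PID class sketched above; observe_p99_latency_ms and set_retrieval_depth are made-up stand-ins for your metrics read and your depth knob, and the target, bounds, and gains are numbers I pulled out of the air:

    import random
    import time

    def observe_p99_latency_ms():
        # stand-in for reading tail latency from your metrics system
        return random.uniform(100.0, 250.0)

    def set_retrieval_depth(depth):
        # stand-in for pushing the new scan depth/budget to the retrieval stage
        pass

    TARGET_P99_MS = 150.0      # latency target (made-up)
    MAX_DEPTH = 1_000_000      # "consider everything" upper bound (made-up)

    pid = PID(kp=50.0, ki=5.0, kd=10.0, setpoint=TARGET_P99_MS)
    depth = MAX_DEPTH // 10    # arbitrary starting depth

    while True:
        p99 = observe_p99_latency_ms()        # observed quantity
        adjustment = pid.update(p99, dt=1.0)
        # latency above target -> negative error -> depth shrinks;
        # when load eases, depth creeps back up toward MAX_DEPTH
        depth = int(max(1, min(MAX_DEPTH, depth + adjustment)))
        set_retrieval_depth(depth)            # controlled quantity
        time.sleep(1.0)

The upshot is exactly the point above: under heavy load the loop trades depth for latency, and when load drops it drifts back toward scanning the whole candidate set.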
That’s again a pretty oversimplified (to the point of minor technical inaccuracy) TL;DR, but I think it makes the important point.
If you’d like more depth, feel free to indicate that in a comment and I’ll do my best.