My previous (very long) project was in such a state when I got there that it was only in my last year there that I could get measuring things in microseconds onto other people's radars. I wish I had started sooner, because I found a lot of microseconds in about a four-month period. That was the most intensive period of user-visible latency reduction I saw the entire time I was there, and second through fourth place took years of manpower to accomplish. Past me and future me are both still mad about that.
This isn't as accurate, but it's often adequate. I have never liked making a network request to see if I can make a network request; too slow. (I do use this architecture to rate-limit my own website, mostly because I wanted to play with writing a service to do that. But I'd hesitate to make someone else use that.)
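For what it's worth, the in-process version is tiny. Here's a sketch in Go using golang.org/x/time/rate; the per-client key scheme and the numbers are made up:

```go
// In-process token buckets: answering from memory instead of making a
// network request to ask whether we can make a network request.
package ratelimit

import (
	"sync"

	"golang.org/x/time/rate"
)

type localLimiter struct {
	mu      sync.Mutex
	clients map[string]*rate.Limiter
}

func newLocalLimiter() *localLimiter {
	return &localLimiter{clients: make(map[string]*rate.Limiter)}
}

// Allow answers from a per-client bucket kept entirely in this process.
func (l *localLimiter) Allow(clientKey string) bool {
	l.mu.Lock()
	lim, ok := l.clients[clientKey]
	if !ok {
		// 10 requests/second with bursts of 20 -- arbitrary numbers.
		lim = rate.NewLimiter(rate.Limit(10), 20)
		l.clients[clientKey] = lim
	}
	l.mu.Unlock()
	return lim.Allow()
}
```

The tradeoff is exactly the one above: each box only knows about its own traffic, so the global limit is approximate.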
In terms of preserving state, the answer for rate limiting is that it is almost always far, far less dangerous to fail open than it is to deny requests during a failure. If you really, really wanted to preserve state (something I'd suggest avoiding for a rate limiter), several KVSes have optional persistence you can turn on; Redis's AOF, for example.
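By fail open I mean something like this (a sketch; the Checker interface is a stand-in for whatever client you're actually using):

```go
// Fail-open sketch: if the rate-limit store errors or blows its latency
// budget, let the request through rather than denying it.
package ratelimit

import (
	"context"
	"log"
	"time"
)

type Checker interface {
	// Allowed reports whether this key is still under its limit.
	Allowed(ctx context.Context, key string) (bool, error)
}

func checkOrFailOpen(ctx context.Context, c Checker, key string, budget time.Duration) bool {
	ctx, cancel := context.WithTimeout(ctx, budget)
	defer cancel()

	ok, err := c.Allowed(ctx, key)
	if err != nil {
		// Store is down, slow, or confused: fail open.
		log.Printf("rate limit store unavailable, failing open: %v", err)
		return true
	}
	return ok
}
```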
The end services themselves should be designed with some sort of pushback mechanism, so they shouldn't be in any danger of overloading, regardless of what's going on with the rate limiter.
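By pushback I mean something as simple as bounding in-flight work and shedding the rest; illustrative Go, not anyone's production code:

```go
// Bounded-concurrency gate in front of a handler: anything over the limit
// gets an immediate 503 instead of queueing, so a broken or absent rate
// limiter upstream can't melt the service.
package ratelimit

import "net/http"

func withPushback(maxInFlight int, next http.Handler) http.Handler {
	slots := make(chan struct{}, maxInFlight)
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		select {
		case slots <- struct{}{}:
			defer func() { <-slots }()
			next.ServeHTTP(w, r)
		default:
			// Shed load rather than queue; callers can retry with backoff.
			w.Header().Set("Retry-After", "1")
			http.Error(w, "overloaded", http.StatusServiceUnavailable)
		}
	})
}
```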
If you wanted to throw in another layer of load balancing, there are consistent-hashing-adjacent strategies in nginx+ that would let you go from your 2 ingress routers to 3 rate-limiter shards in front of your services, each shard with its own local KV store. But I strongly suspect the latency profile there would look remarkably similar to the ingress routers doing rate limiting against a KV store cluster.
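Roughly this shape, with made-up names; the `hash ... consistent` upstream directive is the relevant piece:

```nginx
# Illustrative only. Consistent hashing on the client pins each client to
# a stable shard, so that shard's local KV store sees all of its traffic.
upstream rate_limit_shards {
    hash $binary_remote_addr consistent;
    server shard1.internal:8080;
    server shard2.internal:8080;
    server shard3.internal:8080;
}

server {
    listen 443 ssl;
    location / {
        proxy_pass http://rate_limit_shards;
    }
}
```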