That would work if you can assign requests from a given tenant to a single instance, but there are many situations in which that's either impossible or unwise. What if a single server doesn't have enough capacity to handle all the traffic for a tenant? How do you preserve the state if that instance fails?
I'm sorry, I don't think I understand your question. Are you talking about the KVS? You shard across more servers for extra capacity. Several KVSes have built-in clustering if you want to go that route. They're usually incredibly stable, but if one goes down for whatever reason (say the physical machine fails), you just spin another one up to take its place.
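To make the sharding part concrete, here's a rough sketch of the kind of thing I mean, assuming redis-py; the hostnames, timeout, and shard count are made up:

```python
# Sketch: spread rate-limit counters across several Redis instances for capacity.
# Hostnames and the shard count are placeholders, not a real deployment.
import hashlib
import redis

SHARDS = [
    redis.Redis(host="kvs-0.internal", port=6379, socket_timeout=0.01),
    redis.Redis(host="kvs-1.internal", port=6379, socket_timeout=0.01),
    redis.Redis(host="kvs-2.internal", port=6379, socket_timeout=0.01),
]

def shard_for(tenant_id: str) -> redis.Redis:
    # Same tenant always hashes to the same shard, so its counter stays in one place.
    h = int(hashlib.sha1(tenant_id.encode()).hexdigest(), 16)
    return SHARDS[h % len(SHARDS)]
```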
In terms of preserving state, the answer for rate limiting is that it is almost always far, far less dangerous to fail open than it is to deny requests during a failure. If you really, really wanted to preserve state (something I'd suggest avoiding for a rate limiter), several KVSes have optional persistence you can turn on, for example Redis' AOF.
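For what failing open looks like in practice, here's a rough fixed-window sketch, assuming redis-py; the key format, limit, and timeout are all just illustrative:

```python
# Sketch of a fail-open fixed-window limiter. Numbers and names are illustrative.
import redis

r = redis.Redis(host="kvs-0.internal", port=6379, socket_timeout=0.005)

LIMIT = 100   # requests allowed per window
WINDOW = 60   # window length in seconds

def allow(tenant_id: str) -> bool:
    key = f"rl:{tenant_id}"
    try:
        count = r.incr(key)
        if count == 1:
            # First hit in this window: start the clock.
            r.expire(key, WINDOW)
        return count <= LIMIT
    except redis.RedisError:
        # Fail open: if the KVS is slow or down, let the request through
        # rather than turning a limiter hiccup into an outage.
        return True
```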
The end services themselves should be designed with some sort of pushback mechanism, so they shouldn't be in any danger of overloading, regardless of what's going on with the rate limiter.
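By pushback I mean something as simple as the service refusing new work once it's saturated, along these lines (just a sketch, the concurrency cap is arbitrary):

```python
# Sketch of load shedding in the end service, assuming a thread-per-request server.
import threading

MAX_IN_FLIGHT = 100                          # arbitrary capacity limit, tune per service
_slots = threading.BoundedSemaphore(MAX_IN_FLIGHT)

def handle_request(do_work):
    # Refuse immediately instead of queueing when saturated, so overload
    # becomes fast 503s rather than a pile-up behind the service.
    if not _slots.acquire(blocking=False):
        return 503, "overloaded, try again later"
    try:
        return 200, do_work()
    finally:
        _slots.release()
```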
I think I misunderstood what you were saying. By "in-memory KVS" with "access times in the microseconds" I thought you were implying a KVS hosted on the server that handles the requests. Otherwise, even if the KVS can respond in 100us to a local query, network latency is going to add much more than that.
Ah ok, that makes sense. Over loopback, you can round-trip Redis in double-digit microseconds. Intra-DC, your network latency should land somewhere in the triple-digit micros. I'd say if you're not seeing that, something is probably wrong.
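Easy enough to sanity-check on your own setup with something like this (assuming redis-py and a Redis on localhost):

```python
# Sketch: time PING round trips over loopback to get a rough latency number.
import time
import redis

r = redis.Redis(host="127.0.0.1", port=6379)

N = 10_000
start = time.perf_counter()
for _ in range(N):
    r.ping()
elapsed = time.perf_counter() - start
print(f"avg round trip: {elapsed / N * 1e6:.1f} us")
```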
I don't recall all the details, but we did end up using Lua to talk to Redis or Memcached to do some traffic shaping at one point. One script was for bouncing to error pages before we switched to a CDN (long story), and another was for doing something too clever by half with TTFB. It's still really cheap, especially on a box that is just doing load balancing and nothing else.
If you wanted to throw another layer of load balancer in, there are consistent-hashing-adjacent strategies in nginx+ that would let you go from 2 ingress routers, to 3 rate-limiter shards (one KV store per box), and on to your services. But I strongly suspect that the latency profile there will look remarkably similar to ingress routers doing rate limiting against a KV store cluster.
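If it helps, the "consistent hashing-adjacent" part is basically a hash ring on the tenant key, so a given tenant lands on the same rate-limiter shard no matter which ingress router it came through. A rough sketch of the idea (shard names are placeholders):

```python
# Sketch of a consistent hash ring for routing tenants to rate-limiter shards.
# Shard names and vnode count are placeholders.
import bisect
import hashlib

def _h(s: str) -> int:
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes, vnodes=100):
        # Each shard gets many virtual points on the ring so load spreads evenly.
        self._ring = sorted((_h(f"{n}#{i}"), n) for n in nodes for i in range(vnodes))
        self._keys = [k for k, _ in self._ring]

    def node_for(self, tenant_id: str) -> str:
        # Walk clockwise to the first virtual node at or after the key's hash.
        i = bisect.bisect(self._keys, _h(tenant_id)) % len(self._ring)
        return self._ring[i][1]

ring = HashRing(["rl-shard-0", "rl-shard-1", "rl-shard-2"])
print(ring.node_for("tenant-42"))
```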