> Eg compare a cache dropping randomly selected items to LFU or LRU caches
I think it's not that randomly (no pun intended) adding randomness to an existing algorithm or solution improves it; rather, sometimes a random sampling process mimics the underlying problem.
For caching, the assumption that the least-recently-used item should be evicted is just that: an assumption. If you checked that assumption and found it was wrong, and that the access patterns are in fact random, then dropping random items _does_ mimic the problem domain, and that makes it the better solution!
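To make that concrete: random eviction needs essentially no bookkeeping, which is a big part of its appeal. A minimal sketch (my own toy code, not any particular library's API):

```python
import random

class RandomEvictionCache:
    """Fixed-capacity map that evicts a uniformly random entry when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}

    def get(self, key):
        return self.data.get(key)

    def put(self, key, value):
        # Evict only when inserting a genuinely new key into a full cache,
        # so the item currently being accessed is never the victim.
        if key not in self.data and len(self.data) >= self.capacity:
            victim = random.choice(list(self.data))
            del self.data[victim]
        self.data[key] = value
```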
It's not quite that simple. Random eviction also probabilistically approaches the ideal solution in many cases. If an item is accessed often, it'll be re-inserted quickly when it is accidentally removed, and any single eviction only picks it with probability 1/n. On average, the items you use often will be in the cache and the items you don't, won't. This has nothing to do with the access pattern actually being random: because you access the item a lot, the occasional eviction is completely amortized by all the other accesses (even more so when you take into account that you never evict the item you are currently accessing).
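Rough numbers behind that amortization argument (my arithmetic, assuming the cache holds n items, stays full, and each miss evicts a uniformly random entry):

```python
# An item survives one random eviction with probability 1 - 1/n, so it
# survives the k evictions between two of its own accesses with
# probability (1 - 1/n)**k.
n = 1024   # hypothetical cache size
k = 10     # hypothetical evictions between two accesses to a hot item
print(f"{(1 - 1 / n) ** k:.4f}")  # ~0.9903: the hot item is almost never lost
```

And the rare loss costs one re-fetch, amortized over the many accesses that follow.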
The solution does fail for specific non-random access patterns. E.g. you access an item exactly every n-1 requests: in an LRU cache it'll always be present, but under probabilistic eviction it most likely won't be.
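A quick simulation of exactly that adversarial pattern (toy code; the capacity, key names, and request count are all made up) shows the gap:

```python
import random
from collections import OrderedDict

def hot_hit_rate(capacity, policy, n_requests=200_000):
    """Hit rate of one hot key requested every (capacity - 1) requests,
    interleaved with unique filler keys that are never requested again."""
    cache = OrderedDict()  # insertion/recency order doubles as the LRU order
    hits = accesses = fillers = 0
    for t in range(n_requests):
        if t % (capacity - 1) == 0:
            key = "hot"
            accesses += 1
            hits += key in cache
        else:
            fillers += 1
            key = f"filler-{fillers}"
        if key in cache:
            cache.move_to_end(key)                     # refresh recency on a hit
            continue
        if len(cache) >= capacity:
            if policy == "lru":
                cache.popitem(last=False)              # evict the least recent
            else:
                del cache[random.choice(list(cache))]  # evict a random entry
        cache[key] = None
    return hits / accesses

for policy in ("lru", "random"):
    print(policy, round(hot_hit_rate(capacity=64, policy=policy), 3))
```

LRU comes out at ~1.0 while random eviction lands near (1 - 1/n)^(n-2) ≈ 0.38: the hot item usually isn't there when you come back for it.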