
> You can have massive amounts of RAM these days.

True, but I am finding that balancing CPU and RAM can be tricky. Slapping 128GB on a 1-core machine means you quickly have CPU limitations.



Redis is single-threaded and will have no problem saturating a 10G NIC with a single socket.


My concern is how long it would take a CPU to scan through all of that memory.


What "scanning"? That's not how memory access works in a K/V store, and Redis does very little work that demands much of the CPU.


There are workloads that will saturate a Redis instance's CPU: when using it as an LRU cache, you will eventually hit the configured memory limit, and adding new keys will require finding old keys to evict. Eventually Redis may also need to defragment memory, which can be fairly CPU-intensive.
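The eviction behavior described above is driven by a couple of redis.conf directives; a minimal illustrative fragment (the 100gb limit is just an example value):

```
# redis.conf (illustrative values): cap memory and evict least-recently-used keys
maxmemory 100gb
maxmemory-policy allkeys-lru
# how many random keys to sample per eviction decision (default 5)
maxmemory-samples 5
```

With `allkeys-lru`, writes past the limit trigger eviction work inline, which is where the extra CPU cost shows up.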


> There are workloads that will saturate a redis instance's CPU

I might imagine this scenario if you're excessively using SMEMBERS and a few other slow ops, but I have yet to see CPU issues outside of bad EVALs.

> require finding old keys to delete

LRU/LFU eviction is not particularly CPU intensive.
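One reason eviction is cheap: Redis does not scan the whole keyspace to find the least-recently-used key. It samples a handful of random keys (`maxmemory-samples`) and evicts the best candidate from that sample. A minimal sketch of that idea in Python (the function and the idle-time dict are illustrative, not Redis's actual code):

```python
import random

def evict_one(keys_idle, samples=5):
    """Pick an eviction victim the way Redis's approximate LRU does:
    sample a few random keys and return the least recently used of
    the sample (largest idle time), instead of scanning every key."""
    candidates = random.sample(list(keys_idle), min(samples, len(keys_idle)))
    return max(candidates, key=lambda k: keys_idle[k])
```

The work per eviction is O(samples), not O(keys), which is why hitting maxmemory doesn't by itself burn a core.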

> redis to do memory defragmentation which can be fairly intensive

Active defrag has relatively negligible overhead, and with jemalloc even less so.
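Active defrag is also tunable, so its CPU cost can be capped; an illustrative redis.conf fragment (threshold and cycle values here are examples, not recommendations):

```
# redis.conf (illustrative values): enable active defrag, bounded CPU effort
activedefrag yes
# don't bother below this much fragmented memory
active-defrag-ignore-bytes 100mb
# start defragging above this fragmentation percentage
active-defrag-threshold-lower 10
# min/max CPU percentage to spend on defrag
active-defrag-cycle-min 1
active-defrag-cycle-max 25
```

Note that active defrag only works when Redis is built against jemalloc.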


Nothing but lots of small (~100-byte) pipelined SETs and a small number of GETs here and there. Only 10 MB/s, but at 100k SETs/sec Redis's CPU core sits at 60-70%. Active defrag can easily send it into a death spiral.



