
That still allows for perhaps 3000-4000 simultaneous writes, depending on how many file handles are in use by other processes.
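That 3000-4000 figure lines up with the common per-process descriptor ceiling (the soft limit often defaults to 1024 or 4096 on Linux). As a rough sketch using the POSIX-only `resource` module, you can query the current soft and hard limits:

```python
import resource

# Query this process's soft and hard limits on open file descriptors.
# The soft limit is what open() calls actually run into; it can be raised
# up to the hard limit with resource.setrlimit.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")
```

A process that needs more descriptors can call `resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))` with any `new_soft` up to the hard limit; going past the hard limit requires privileges.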


On Linux, /proc/sys/fs/file-max holds the maximum number of simultaneous open file handles supported by the kernel. On my laptop this is about 1.6 million.
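For reference, reading that system-wide limit is just a one-line file read (Linux-only; the path is exactly the one named above):

```python
# Read the kernel-wide maximum number of open file handles (Linux).
with open("/proc/sys/fs/file-max") as f:
    file_max = int(f.read().strip())
print(file_max)
```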


And mine is about twice that.

But also keep in mind that every process has, at minimum, its own executable open as a file. Picking a random Python process I currently have running, lsof reports it has 41 open *.so files.
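Those *.so entries are shared libraries mapped into the process. A rough sketch of the same count without lsof, by parsing /proc/&lt;pid&gt;/maps (Linux-only; `mapped_shared_libs` is a hypothetical helper name, not a real API):

```python
def mapped_shared_libs(pid="self"):
    """Return the set of distinct *.so paths mapped into a process (Linux)."""
    libs = set()
    with open(f"/proc/{pid}/maps") as f:
        for line in f:
            parts = line.split()
            # Columns: address perms offset dev inode [pathname]
            if len(parts) >= 6 and ".so" in parts[5]:
                libs.add(parts[5])
    return libs

print(len(mapped_shared_libs()))  # count for this Python process itself
```

Note that lsof may report a slightly different number, since it also sees descriptors opened with open() rather than mmap'd.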


Yes, but if you're trying to push transactions per second into that kind of range, it's highly unlikely you'd be doing it with an individual process per transaction. You'd also likely be hitting I/O limits long before the number of file descriptors becomes the issue.



