
And that's why I shouldn't do math at night...

Anyway, 100ms is quite a lot in the life of a modern CPU.



Even 1ms is a lot. I have some experience with algorithmic trading. The application took messages off the network, processed them, and responded to the market within 5 microseconds. That's 1/200th of 1ms. This was measured on a special type of switch (https://en.wikipedia.org/wiki/Cut-through_switching ).

Lots of stuff happens during those 5us. The message is read from the network device (directly by the application; no Linux or syscalls anywhere during those 5us). Then it is parsed, deduplicated (multiple multicast channels carry redundant copies of the messages), and decompressed (the payload is compressed with zlib). The decompressed payload is parsed and interpreted (there are multiple types of messages). Business logic is executed to update the state of the market in memory and then to generate signals for listening algorithms. The algorithm is run to figure out whether it wants to execute an order. The order is verified against a decision tree (for example, to check that it does not exceed the available budget). Finally, the market order packet is created and sent over TCP.
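To make the hot-path steps concrete, here is a minimal sketch of that receive pipeline. Everything here is illustrative: the 4-byte sequence-number header, the "SYM,price" payload, and the price threshold are all made up for the example; a real feed handler would use the exchange's actual wire format and run without allocations.

```python
import zlib

def handle_packet(raw, seen_seqs, book):
    """Illustrative hot path: parse header, dedupe, inflate, apply, decide."""
    # Parse a made-up header: 4-byte big-endian sequence number.
    seq = int.from_bytes(raw[:4], "big")
    # Deduplicate: redundant multicast channels deliver the same seq more than once.
    if seq in seen_seqs:
        return None
    seen_seqs.add(seq)
    # The payload is zlib-compressed; inflate it.
    payload = zlib.decompress(raw[4:])
    # Interpret one (hypothetical) message type and update in-memory market state.
    sym, price = payload.decode().split(",")
    book[sym] = float(price)
    # Run a toy strategy: signal an order when the price crosses a threshold.
    return ("BUY", sym) if book[sym] < 100.0 else None

# Usage: the first copy of a packet produces a signal, the redundant copy is dropped.
seen, book = set(), {}
pkt = (1).to_bytes(4, "big") + zlib.compress(b"XYZ,99.5")
handle_packet(pkt, seen, book)   # ("BUY", "XYZ")
handle_packet(pkt, seen, book)   # None: duplicate sequence number
```

The point is not the Python (a production system would do this in C++ with kernel-bypass networking) but the shape: each stage is a branch or a table lookup away from the next, which is how the whole chain fits in single-digit microseconds.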

Now imagine: all that stuff happens in 1/200th of 1ms. In comparison, transferring 48kB from L2 or L3 to L1 is pretty damn insignificant.
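The arithmetic behind those claims is easy to check. The 3 GHz clock below is an assumption for illustration, not a figure from the original comment:

```python
# 5 microseconds really is 1/200th of a millisecond.
budget_s = 5e-6
assert abs(budget_s - 1e-3 / 200) < 1e-15

# At an assumed 3 GHz clock, that budget is about 15,000 CPU cycles --
# the window into which the entire parse/dedupe/inflate/decide chain fits.
clock_hz = 3e9
cycles = round(clock_hz * budget_s)
print(cycles)  # 15000
```

Seen that way, a 48kB cache transfer is a small fraction of a budget that already covers decompression and business logic.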



