Hacker News

I doubt this is as big a win as claimed. Sure, a benchmark managed to achieve 1.6GB/s CRC-64 throughput, but that requires 16KB (!) of lookup tables. If those don't reliably stay in at least L2 cache, the throughput could be very poor in practice.
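To make the table-size point concrete, here is an illustrative Python sketch (not the benchmarked implementation) of CRC-64-Jones in its reflected form. A plain byte-at-a-time table is 256 entries × 8 bytes = 2 KB; slice-by-8 variants keep eight such tables to consume 8 input bytes per step, which is where the 16KB figure comes from.

```python
# Reflected CRC-64-Jones sketch. 0x95ac9329ac4bc9b5 is the bit-reversed
# form of the Jones polynomial 0xad93d23594c935a9.
POLY = 0x95ac9329ac4bc9b5

def crc64_bitwise(data, crc=0):
    """Slow reference: one bit at a time, no tables at all."""
    for b in data:
        crc ^= b
        for _ in range(8):
            crc = (crc >> 1) ^ (POLY if crc & 1 else 0)
    return crc

# One 256-entry table of 64-bit words = 2 KB. Slice-by-8 keeps eight of
# these (16 KB) to process 8 bytes per iteration -- hence the cache worry.
TABLE = [crc64_bitwise(bytes([i])) for i in range(256)]

def crc64_table(data, crc=0):
    """Byte-at-a-time table-driven version; must match the bitwise one."""
    for b in data:
        crc = TABLE[(crc ^ b) & 0xff] ^ (crc >> 8)
    return crc
```

The speed/memory trade-off is entirely in how many of those 2 KB tables you keep resident; the output is identical either way.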

If speed is important, xxHash is probably a better choice than CRC-64. http://fastcompression.blogspot.com/2012/04/selecting-checks...

If you want both speed and security, SipHash-2-4 is a cryptographically strong keyed hash (a PRF/MAC) that runs at ~1.4 cycles/byte for long inputs (faster even than crc16speed), and is still fast for short inputs (2.4 cycles/byte for 64 bytes). It also doesn't require 16KB of lookup tables. https://131002.net/siphash/ http://bench.cr.yp.to/results-auth.html#amd64-titan0
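For a sense of how small SipHash's working set is, here is an illustrative pure-Python transcription of SipHash-2-4 from the published algorithm (far slower than the optimized C reference, which you should use in practice): the entire state is four 64-bit words plus a 128-bit key, with no lookup tables at all.

```python
# Illustrative pure-Python SipHash-2-4 (Aumasson/Bernstein design).
# State: v0..v3 (four 64-bit words), keyed by a 128-bit key.
MASK = 0xffffffffffffffff

def _rotl(x, r):
    return ((x << r) | (x >> (64 - r))) & MASK

def siphash24(key, data):
    assert len(key) == 16
    k0 = int.from_bytes(key[:8], "little")
    k1 = int.from_bytes(key[8:], "little")
    v0 = k0 ^ 0x736f6d6570736575  # "somepseu"
    v1 = k1 ^ 0x646f72616e646f6d  # "dorandom"
    v2 = k0 ^ 0x6c7967656e657261  # "lygenera"
    v3 = k1 ^ 0x7465646279746573  # "tedbytes"

    def sipround():
        nonlocal v0, v1, v2, v3
        v0 = (v0 + v1) & MASK; v1 = _rotl(v1, 13); v1 ^= v0; v0 = _rotl(v0, 32)
        v2 = (v2 + v3) & MASK; v3 = _rotl(v3, 16); v3 ^= v2
        v0 = (v0 + v3) & MASK; v3 = _rotl(v3, 21); v3 ^= v0
        v2 = (v2 + v1) & MASK; v1 = _rotl(v1, 17); v1 ^= v2; v2 = _rotl(v2, 32)

    # Compress full 8-byte little-endian words, 2 rounds each.
    tail = len(data) % 8
    for i in range(0, len(data) - tail, 8):
        m = int.from_bytes(data[i:i + 8], "little")
        v3 ^= m
        sipround(); sipround()
        v0 ^= m
    # Final word: zero-padded remainder with the message length in the top byte.
    m = int.from_bytes(data[len(data) - tail:], "little") | ((len(data) & 0xff) << 56)
    v3 ^= m
    sipround(); sipround()
    v0 ^= m

    # Finalization: 4 rounds after xoring 0xff into v2.
    v2 ^= 0xff
    for _ in range(4):
        sipround()
    return v0 ^ v1 ^ v2 ^ v3
```

Being keyed is the point: an attacker who doesn't know the key can't construct colliding inputs, which a fixed CRC polynomial can never offer.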



The constraint was: the output must be CRC-64-Jones, so switching implementations wasn't an option.

So, given that constraint, any better solutions are welcome. :)


Why is that a hard constraint?


In some cases the checksums are persisted, so future checksums must be computed the same way or previously stored values would no longer verify.


Backwards compatibility?



