There are a lot of interesting algorithms, but as the author points out, he had to move the data to a RAM drive to keep disk access from being the limiting factor. For a lot of use cases, the CPU is not what is going to limit you.

I did love the anecdote about adding gzip moving the bottleneck from the network to the CPU and actually slowing down the whole system.

Storage is rarely the bottleneck in modern systems; a lot of software is just written as if it were. It is completely ordinary these days to have 4-12+ GB/sec of sustained storage bandwidth, and that bandwidth is available to the application if you write the code correctly.
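As a rough illustration, here is a minimal sequential-read throughput check in Python. The file name is hypothetical, and on Linux the OS page cache can inflate the numbers, so drop caches (or use O_DIRECT) if you want a true device measurement; the point is just that large reads in a tight loop get you close to the drive's sustained bandwidth, while small per-record reads will not:

    import time

    PATH = "testfile.bin"    # hypothetical large local file
    BLOCK = 4 * 1024 * 1024  # 4 MiB reads keep syscall overhead negligible

    start = time.perf_counter()
    total = 0
    with open(PATH, "rb", buffering=0) as f:
        while chunk := f.read(BLOCK):
            total += len(chunk)
    elapsed = time.perf_counter() - start

    print(f"read {total / 1e9:.2f} GB in {elapsed:.2f}s "
          f"-> {total / 1e9 / elapsed:.2f} GB/s")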

Even with serious performance engineering, it is difficult to drive compression, parsers, codecs, etc. at throughput comparable to modern storage. There are several non-cryptographic hashing algorithms that can be driven that hard, but none of them are mentioned in the article.

I agree with everything you say except the claim that the post doesn't mention any non-cryptographic hashing algos that can be driven that hard. xxHash[1] (and especially XXH3) is almost always the fastest hashing choice, as it is both fast and widely supported across languages.

Sure, there are some other fast ones out there, like cityhash[2], but there are no good Java/Python bindings that I'm aware of, and I wouldn't recommend using it in production given its lack of widespread use versus xxHash, which is used internally by LZ4 and in databases all over the place.
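If you want to sanity-check the speed claim yourself, here is a minimal sketch using the python-xxhash bindings (pip install xxhash); the buffer size is arbitrary, and hashing a buffer held in RAM deliberately takes storage out of the picture:

    import time
    import xxhash

    # 256 MiB of zeros held in RAM, so we measure the hash, not the disk.
    data = bytes(256 * 1024 * 1024)

    start = time.perf_counter()
    digest = xxhash.xxh3_64_intdigest(data)  # one-shot XXH3-64
    elapsed = time.perf_counter() - start

    gb = len(data) / 1e9
    print(f"XXH3-64: {digest:016x}")
    print(f"{gb:.2f} GB in {elapsed:.3f}s -> {gb / elapsed:.1f} GB/s")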

[1] https://github.com/Cyan4973/xxHash [2] https://github.com/google/cityhash