Download TurboBench from Releases [2]
Here are some benchmarks:
- https://github.com/zlib-ng/zlib-ng/issues/1486
- https://github.com/powturbo/TurboBench/issues/43
You are comparing zstd with asynchronous I/O + multithreading against a simple single-threaded lz4 CLI.
Additionally, you're using exotic and unrepresentative hardware.
In theory, the only single-threaded decompression case where zstd can beat lz4 is when you have slow (reading) I/O.
lz4 & zstd are mostly used as libraries, and realistic benchmarks don't factor in I/O or multithreading.
This is what I'm getting with TurboBench & enwik9: lz4 decompresses 2.8x faster than zstd (a library-level sketch follows the table).
   C Size   Ratio%    C MB/s    D MB/s   Name
356828015    35.7     400.71   1557.59   zstd 1
416874817    41.7     312.31   1967.66   lzav
509199084    50.9     630.78   4395.08   lz4 1
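Since lz4 & zstd are mostly used as libraries, one way to take the CLI, I/O and threading out of the picture is to time the library calls directly on an in-memory buffer. The following is only a minimal single-threaded sketch, assuming liblz4 and libzstd are installed and the test file fits in RAM; LZ4_compress_default and ZSTD_compress at level 1 stand in, roughly, for the "lz4 1" and "zstd 1" rows above, and a single pass gives only rough numbers.

```c
// bench_decomp.c - minimal single-threaded, in-memory decompression timing for
// lz4 vs zstd (no I/O or threads in the timed path). One pass only, so the
// numbers are rough. Build with: cc -O2 bench_decomp.c -llz4 -lzstd
#include <lz4.h>
#include <zstd.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(int argc, char **argv) {
    const char *path = argc > 1 ? argv[1] : "enwik9";   /* test file to load */
    FILE *f = fopen(path, "rb");
    if (!f) { perror(path); return 1; }
    fseek(f, 0, SEEK_END);
    long n = ftell(f);
    fseek(f, 0, SEEK_SET);
    char *src = malloc(n), *out = malloc(n);
    if (fread(src, 1, (size_t)n, f) != (size_t)n) { perror("read"); return 1; }
    fclose(f);

    /* lz4 fast mode (roughly the "lz4 1" row above) */
    int lz4cap = LZ4_compressBound((int)n);
    char *lz4buf = malloc(lz4cap);
    int lz4csize = LZ4_compress_default(src, lz4buf, (int)n, lz4cap);
    double t0 = now_sec();
    LZ4_decompress_safe(lz4buf, out, lz4csize, (int)n);
    double lz4dt = now_sec() - t0;

    /* zstd level 1; the simple one-shot API is single-threaded */
    size_t zcap = ZSTD_compressBound((size_t)n);
    char *zbuf = malloc(zcap);
    size_t zcsize = ZSTD_compress(zbuf, zcap, src, (size_t)n, 1);
    t0 = now_sec();
    ZSTD_decompress(out, (size_t)n, zbuf, zcsize);
    double zdt = now_sec() - t0;

    /* D MB/s = decompressed megabytes per second */
    printf("lz4  1: %d bytes, D %.0f MB/s\n", lz4csize, n / 1e6 / lz4dt);
    printf("zstd 1: %zu bytes, D %.0f MB/s\n", zcsize, n / 1e6 / zdt);
    free(src); free(out); free(lz4buf); free(zbuf);
    return 0;
}
```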
Download TurboBench [1] (Windows binaries are available from the releases, see [2]) and run the tests on your own machine.
Lenovo IdeaPad 5 Pro - Ryzen 6600HS - DDR5 6400 - gcc-13.1
File: silesia.tar 220MB (Mixed text/binary)
   C Size   Ratio%     C MB/s     D MB/s   Name
 77885207    36.7       43.20    4479.24   lz4 9
 81343184    38.4      123.17    4299.68   lz4 3
 85664894    40.4      324.11    5171.12   lzturbo 11
 88672448    41.8      435.96    2769.77   lzav v1.3
 99553305    47.0      914.25    5425.54   lzturbo 10
100883728    47.6      813.76    4667.85   lz4 1
211948544   100.0    16032.92   16023.24   memcpy
lzturbo (closed source) included as reference
[1] - Benchmark App: https://github.com/powturbo/TurboBench
[2] - https://twitter.com/powturbo/status/1678333893626195969
[3] - https://bugs.chromium.org/p/chromium/issues/detail?id=124697...
Download TurboBench and run your own tests:
In the speedup plots you can see the best compressors for content providers:
- brotli 11 is best for static content
- brotli 5 is best up to 1 MB/s network transfer speed
- libdeflate 6 is best from 1 MB/s to 6 MB/s (followed by brotli 4)
- igzip 1,2 is best for very fast networks (> 10 MB/s)
On the decompression side, brotli brings little extra value for users.
[1] https://github.com/powturbo/TurboBench
[2] https://sites.google.com/site/powturbo/home/web-compression
[3] https://encode.su/threads/2333-TurboBench-Back-to-the-future...
This is for HTML web compression, but the results are similar for other datasets. For internet transfer, a higher compression ratio is worth more than higher decompression speed (a rough model of this trade-off is sketched after the links below).
You can run your own experiments, including the plots, with TurboBench [2].
[1] https://sites.google.com/site/powturbo/home/web-compression
[2] https://github.com/powturbo/TurboBench
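As a back-of-the-envelope illustration of that trade-off, end-to-end delivery time can be modeled as transfer time of the compressed payload plus decompression time of the original data. This is only a sketch: the ratios and decompression speeds below are made-up placeholders, not measurements, and it is not TurboBench's actual speedup metric.

```c
// transfer_model.c - toy model: total delivery time =
//   compressed bytes / network bandwidth + original bytes / decompression speed.
// The codec table uses illustrative placeholder numbers, not measured results.
#include <stdio.h>
#include <stddef.h>

struct codec {
    const char *name;
    double ratio;        /* compressed size / original size (placeholder) */
    double decomp_mbps;  /* decompression speed in MB/s (placeholder) */
};

int main(void) {
    const double orig_mb = 1.0;                   /* assume a 1 MB HTML payload */
    const double net_mbps[] = { 1.0, 6.0, 50.0 }; /* network speeds in MB/s */
    const struct codec codecs[] = {
        { "brotli 11 (placeholder)",    0.20,  400.0 },
        { "libdeflate 6 (placeholder)", 0.32,  800.0 },
        { "igzip 1 (placeholder)",      0.40, 1200.0 },
    };

    for (size_t i = 0; i < sizeof net_mbps / sizeof net_mbps[0]; i++) {
        printf("network %.0f MB/s:\n", net_mbps[i]);
        for (size_t j = 0; j < sizeof codecs / sizeof codecs[0]; j++) {
            double transfer = orig_mb * codecs[j].ratio / net_mbps[i];
            double decomp   = orig_mb / codecs[j].decomp_mbps;
            printf("  %-26s total %6.1f ms (transfer %6.1f + decompress %4.1f)\n",
                   codecs[j].name, 1000 * (transfer + decomp),
                   1000 * transfer, 1000 * decomp);
        }
    }
    return 0;
}
```

With numbers in this ballpark, the transfer term dwarfs the decompression term on a 1 MB/s link, so the higher-ratio codec wins; only on fast links does decompression speed start to matter, which is the intuition behind the ordering in the list above.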
The proprietary LzTurbo [0] and free Lizard [1] both claim to be faster at decompression while having better compression ratios.
Come on, this is not serious.
Brotli's fastest compression algorithm is still significantly slower than zstd. And more importantly, it compresses _much worse_.
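To sanity-check that claim on your own data at the library level, here is a minimal single-threaded sketch, assuming libbrotli and libzstd are installed; it times brotli's fastest one-shot quality against zstd level 1 in memory, in a single pass, so the numbers are rough.

```c
// brotli_vs_zstd.c - rough single-threaded, in-memory comparison of brotli's
// fastest quality vs zstd level 1. Build with:
//   cc -O2 brotli_vs_zstd.c -lbrotlienc -lbrotlicommon -lzstd
#include <brotli/encode.h>
#include <zstd.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(int argc, char **argv) {
    if (argc < 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror(argv[1]); return 1; }
    fseek(f, 0, SEEK_END);
    long n = ftell(f);
    fseek(f, 0, SEEK_SET);
    uint8_t *src = malloc(n);
    if (fread(src, 1, (size_t)n, f) != (size_t)n) { perror("read"); return 1; }
    fclose(f);

    /* brotli quality 0 (its fastest setting), one-shot API */
    size_t bcap = BrotliEncoderMaxCompressedSize((size_t)n);
    uint8_t *bbuf = malloc(bcap);
    size_t bsize = bcap;
    double t0 = now_sec();
    if (!BrotliEncoderCompress(BROTLI_MIN_QUALITY, BROTLI_DEFAULT_WINDOW,
                               BROTLI_MODE_GENERIC, (size_t)n, src,
                               &bsize, bbuf)) { fprintf(stderr, "brotli failed\n"); return 1; }
    double bdt = now_sec() - t0;

    /* zstd level 1, simple single-threaded API */
    size_t zcap = ZSTD_compressBound((size_t)n);
    void *zbuf = malloc(zcap);
    t0 = now_sec();
    size_t zsize = ZSTD_compress(zbuf, zcap, src, (size_t)n, 1);
    double zdt = now_sec() - t0;

    printf("brotli q0: %zu bytes (%.1f%%), C %.0f MB/s\n",
           bsize, 100.0 * bsize / n, n / 1e6 / bdt);
    printf("zstd   1 : %zu bytes (%.1f%%), C %.0f MB/s\n",
           zsize, 100.0 * zsize / n, n / 1e6 / zdt);
    free(src); free(bbuf); free(zbuf);
    return 0;
}
```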
For a 3rd-party evaluation, one can try [TurboBench](https://github.com/powturbo/TurboBench) or even [lzbench](https://github.com/inikep/lzbench), both of which are open source. Squash introduces a wrapper layer that distorts results, which makes it less reliable and more complex to use and install; quite a pity, given that its graphical presentation is very good. I'm interested in speed, and in this area all benchmarks point in the same direction: for a given speed budget, Zstandard offers a better ratio (and decompresses much faster).
Better test this yourself on your own data with TurboBench: https://github.com/powturbo/TurboBench
The enwik9bwt file must be generated; the two other files can be downloaded directly. You must also download rle64 if you want to test it.
The better option is to use TurboBench: https://github.com/powturbo/TurboBench