Image compression is complicated. Among the factors: licensing; decompression CPU time, memory use, and memory bandwidth; compression time, and whether an asymmetric mode (slower but more efficient compression) is available; multi-threading; how much of the decoding can be overlapped with the data transfer; how many levels of progression are preferred; global image artefacts (such as banding) that don't show up in objective metrics; inter-frame copying ending up copying strange content around; red colors tending not to compress well (there is no gamma compression for red in the eye, but image formats apply it anyway); how much the format is tuned for eyes versus for metrics; whether results are identical across platforms/implementations; whether alpha is supported; how good the HDR modeling is, with its rather non-linear relations to gamma and color in general; and whether the format preserves the quality of materials such as wood, marble, skin, and cloth, or replaces them with a cheap plastic imitation. Some formats work dramatically well at low BPP (<0.5) but start losing to old JPEG at higher BPP (2.5+). Some formats decode only a few megapixels per second, which can be a disaster if your images happen to be in the 40-megapixel category.
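To make the BPP and decode-speed points concrete, here is a back-of-envelope sketch; the file sizes, resolutions, and throughput figures are illustrative assumptions, not measurements of any particular codec:

```python
# Back-of-envelope arithmetic for the BPP and decode-speed points above.
# All concrete numbers below are illustrative assumptions.

def bits_per_pixel(file_size_bytes: int, width: int, height: int) -> float:
    """Compressed bits per pixel: file size in bits divided by pixel count."""
    return file_size_bytes * 8 / (width * height)

def decode_seconds(megapixels: float, throughput_mp_per_s: float) -> float:
    """Wall-clock decode time for an image at a given decoder throughput."""
    return megapixels / throughput_mp_per_s

# A hypothetical 12 MP photo (4000x3000) stored in 750 KB sits right at
# the "low BPP" boundary mentioned above:
bpp = bits_per_pixel(750_000, 4000, 3000)
print(f"{bpp:.2f} bpp")        # 0.50 bpp

# A 40 MP image through an assumed 4 MP/s decoder takes 10 seconds --
# fine for archival, disastrous for interactive browsing:
t = decode_seconds(40, 4)
print(f"{t:.1f} s per image")  # 10.0 s per image
```

The same two formulas also show why the regimes matter: at 2.5+ BPP the 12 MP photo above would already be a 3.75 MB file, so the comparison codec's advantage has to hold across a 5x range of file sizes.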

Overall, it is a very complex landscape.

> there is no gamma compression for red in the eye

What does this mean? Do you have a citation to a source explaining whatever this is trying to say using precise standard color science terminology?

Don't you know every HN thread needs its standard armchair top comment rebuttal on how TFA is actually wrong and the author is naive and doesn't really understand the problem with all its intricacies?

The top comment being rebutted is written by the author of WebP lossless, https://github.com/google/butteraugli and https://github.com/google/pik, so calling it armchairing doesn't really seem appropriate.