What does HackerNews think of mozjpeg?

Improved JPEG encoder.

Language: C

FWIW, Mozilla has been maintaining their own fork of libjpeg-turbo for quite a while now [1].

But AFAIK most Linux distros have been using libjpeg-turbo as a drop-in replacement for libjpeg ever since some drama around 2010, when libjpeg came under new management that broke ABI/API compatibility several times over and added incompatible, non-standard format extensions [2].

[1] https://github.com/mozilla/mozjpeg

[2] https://en.wikipedia.org/wiki/Libjpeg#History

No.

See https://github.com/mozilla/mozjpeg

Also, there is a fairly big problem with JPEG: the 'quality' setting is not calibrated. That is, you might look at one image and think it looks fine at a quality of 60% (which is subjective and depends on what you want to use the image for…), but then you compress a million images at that setting, delete the originals, and find that many of them look really awful. Not only that, but there are images you could have compressed more and still been happy with the output.
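
To see why the knob is uncalibrated, it helps to know what libjpeg actually does with it: the quality value is run through a fixed scaling curve and applied to one hard-coded quantization table, with no perceptual model involved. A minimal sketch using the stock libjpeg API (jpeg_quality_scaling is the real mapping function; the harness is just for illustration):

```c
#include <stdio.h>
#include <jpeglib.h>   /* libjpeg / libjpeg-turbo / mozjpeg all provide this */

int main(void)
{
    /* jpeg_quality_scaling() is the actual libjpeg mapping from the
     * user-facing "quality" (1..100) to a percentage that scales the
     * base quantization table: 5000/quality below 50, 200 - 2*quality
     * above. It is a fixed curve with no perceptual calibration, which
     * is why quality 60 looks fine on one image and awful on another. */
    int q;
    for (q = 10; q <= 100; q += 10)
        printf("quality %3d -> table scaling %3d%%\n",
               q, jpeg_quality_scaling(q));
    return 0;
}
```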

If you are publishing images for the web, consider using WebP, which is consistently better, well supported now, and has a calibrated quality knob.

Guetzli was already mentioned and roughly does what you are talking about.

MozJPEG [1] includes several quantization tables that are optimized for different contexts (see the quant-table flag and the source code for the specific tables [2]), and its default quantization table has been tuned to outperform the recommended quantization tables in the original JPEG spec (Annex K).

It's also worth noting that MozJPEG uses trellis quantization [3] to help improve quality without a per-image quantization table search. Basically, rather than determining an optimal quantization table for the whole image, it minimizes rate distortion on a per-block level by tuning the quantized coefficients.

Both the SSIM- and PSNR-tuned quantization tables (2 and 4) provided by MozJPEG use a lower value in the first position of the quantization table, just as this article suggests (9 and 12 vs. the libjpeg default of 16).
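
For the curious, selecting one of those tables and toggling trellis from C goes through mozjpeg's extension API. A rough sketch; jpeg_c_set_int_param / jpeg_c_set_bool_param and the parameter names below are mozjpeg-specific additions, so treat the exact spellings as an assumption and check jpeglib.h in the repo [1]:

```c
#include <stdio.h>
#include <jpeglib.h>  /* mozjpeg's header, which adds the jpeg_c_set_*_param API */

int main(void)
{
    struct jpeg_compress_struct cinfo;
    struct jpeg_error_mgr jerr;

    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_compress(&cinfo);
    cinfo.input_components = 3;       /* required before jpeg_set_defaults */
    cinfo.in_color_space = JCS_RGB;
    jpeg_set_defaults(&cinfo);

    /* Select base quantization table #2 (the SSIM-tuned one) instead of
     * the Annex K default -- the API equivalent of the quant-table flag.
     * Parameter names are mozjpeg extensions; verify against the header
     * of the version you build against. */
    jpeg_c_set_int_param(&cinfo, JINT_BASE_QUANT_TBL_IDX, 2);
    jpeg_set_quality(&cinfo, 75, TRUE);

    /* Trellis quantization: per-block rate-distortion tuning of the
     * quantized coefficients (on by default in mozjpeg, shown here
     * explicitly). */
    jpeg_c_set_bool_param(&cinfo, JBOOLEAN_TRELLIS_QUANT, TRUE);

    /* ...set image dimensions, jpeg_start_compress(), write scanlines... */
    jpeg_destroy_compress(&cinfo);
    return 0;
}
```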

[1] https://github.com/mozilla/mozjpeg

[2] https://github.com/mozilla/mozjpeg/blob/5c6a0f0971edf1ed3cf3...

[3] https://en.wikipedia.org/wiki/Trellis_quantization

They're still being used. A newer, optimized JPEG encoder, mozJPEG [0], seems to use progressive encoding by default. I suspect that with faster internet speeds, most JPEGs download and decode so fast that the cool 'enhance' animation is rarely seen anymore.
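
For reference, with stock libjpeg you have to opt in to progressive output yourself; the call below is the standard API for it (mozjpeg just flips this on in its defaults). A minimal sketch:

```c
#include <stdio.h>
#include <jpeglib.h>

int main(void)
{
    struct jpeg_compress_struct cinfo;
    struct jpeg_error_mgr jerr;

    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_compress(&cinfo);
    cinfo.input_components = 3;
    cinfo.in_color_space = JCS_RGB;
    jpeg_set_defaults(&cinfo);

    /* Install the standard progressive scan script: DC coefficients
     * first, then the AC coefficients in successive refinement passes.
     * This multi-scan layout is what produced the old 'enhance' effect
     * while an image loaded. */
    jpeg_simple_progression(&cinfo);

    /* ...set dimensions, jpeg_start_compress(), write scanlines... */
    jpeg_destroy_compress(&cinfo);
    return 0;
}
```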

[0] https://github.com/mozilla/mozjpeg

mozjpeg greatly improved the state of the art in JPEG encoding while maintaining full backwards compatibility: https://github.com/mozilla/mozjpeg

But there's a limit to what can be done with the primitives that JPEG offers. For example, JPEG is effectively stuck with the older Huffman coding for the entropy-coding stage, instead of the better arithmetic coding or asymmetric numeral systems (the spec does define an arithmetic-coding mode, but almost no decoder supports it).
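
(As an aside: libjpeg-based encoders can actually emit the arithmetic-coded variant; it just buys you nothing on the web because browsers won't decode it. A sketch, assuming a libjpeg/mozjpeg build with arithmetic coding compiled in:)

```c
#include <stdio.h>
#include <jpeglib.h>

int main(void)
{
    struct jpeg_compress_struct cinfo;
    struct jpeg_error_mgr jerr;

    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_compress(&cinfo);
    cinfo.input_components = 3;
    cinfo.in_color_space = JCS_RGB;
    jpeg_set_defaults(&cinfo);

    /* Request JPEG's arithmetic entropy coder instead of Huffman.
     * This is a real libjpeg field, but it needs a build with
     * C_ARITH_CODING_SUPPORTED, and almost no browser will read
     * the resulting file. */
    cinfo.arith_code = TRUE;

    /* ...set dimensions, jpeg_start_compress(), write scanlines... */
    jpeg_destroy_compress(&cinfo);
    return 0;
}
```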

IMHO these JPEG optimizers need to explain what optimization they actually do.

At the very least, they should say whether the optimization is lossless (dropping metadata, optimizing the Huffman tables / progressive scan parameters, etc.) or lossy, because the two have very different use cases (sometimes you need the image to be pixel-wise identical).

Even better, if it's lossy, say how lossy it is.

I'm aware it's using mozjpeg [1], which is pretty good (Guetzli [2] is another good one, for the interested); still, it comes with many settings and routines (both lossless and lossy) that can be configured.
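
To make the lossless case concrete, this is what a jpegtran-style pass does: re-encode the already-quantized DCT coefficients with optimized Huffman tables and leave the pixels untouched. A minimal sketch with the standard libjpeg transcoding API (error handling omitted; file names are placeholders):

```c
#include <stdio.h>
#include <jpeglib.h>

/* Lossless "optimization": copy the quantized DCT coefficients verbatim
 * and only rebuild the entropy coding. Decoded pixels are bit-identical;
 * only the container shrinks. */
int main(void)
{
    struct jpeg_decompress_struct src;
    struct jpeg_compress_struct dst;
    struct jpeg_error_mgr jsrcerr, jdsterr;
    jvirt_barray_ptr *coefs;
    FILE *in = fopen("in.jpg", "rb");
    FILE *out = fopen("out.jpg", "wb");

    src.err = jpeg_std_error(&jsrcerr);
    jpeg_create_decompress(&src);
    jpeg_stdio_src(&src, in);
    jpeg_read_header(&src, TRUE);
    coefs = jpeg_read_coefficients(&src);   /* DCT domain, no pixel decode */

    dst.err = jpeg_std_error(&jdsterr);
    jpeg_create_compress(&dst);
    jpeg_copy_critical_parameters(&src, &dst);
    dst.optimize_coding = TRUE;             /* the actual "optimization" */
    jpeg_stdio_dest(&dst, out);
    jpeg_write_coefficients(&dst, coefs);   /* same coefficients back out */

    jpeg_finish_compress(&dst);
    jpeg_destroy_compress(&dst);
    jpeg_finish_decompress(&src);
    jpeg_destroy_decompress(&src);
    fclose(in);
    fclose(out);
    return 0;
}
```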

[1] https://github.com/mozilla/mozjpeg

[2] https://github.com/google/guetzli

Yes, this is definitely an interesting question. Earlier this year I implemented a version of trellis quantization for some compression experiments I've been tinkering with in my spare time. My code considers more possibilities than just rounding to the nearest quantization step when it deems that the bit-rate savings after entropy coding may be worth the additional image loss (e.g., it might even zero a coefficient completely). That would violate this decoder's assumption that the original DCT coefficient must have been within a limited range around the quantization step.
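
To make that concrete, here's a toy version of the per-coefficient decision (my own simplification, not the parent's code: a greedy choice with a made-up bit-cost function, rather than a true trellis over the whole block):

```c
#include <math.h>
#include <stdio.h>

/* Toy stand-in for the entropy coder's bit cost of a quantized level.
 * A real encoder would use the actual Huffman (or arithmetic) costs. */
static double bit_cost(int level)
{
    return level == 0 ? 1.0 : 2.0 + log2(fabs((double)level) + 1.0);
}

/* Choose a quantized level for one DCT coefficient by minimizing
 * distortion + lambda * rate, instead of plain rounding. */
static int rd_quantize(double coef, int q, double lambda)
{
    int nearest = (int)lround(coef / q);
    /* Candidates: nearest rounding, one step toward zero, and zero. */
    int candidates[3] = { nearest,
                          nearest == 0 ? 0 : (nearest > 0 ? nearest - 1
                                                          : nearest + 1),
                          0 };
    int best = nearest;
    double best_cost = 1e300;
    for (int i = 0; i < 3; i++) {
        double err = coef - (double)candidates[i] * q;  /* reconstruction error */
        double cost = err * err + lambda * bit_cost(candidates[i]);
        if (cost < best_cost) { best_cost = cost; best = candidates[i]; }
    }
    return best;
}

int main(void)
{
    /* A coefficient of 37 with quant step 16: plain rounding gives
     * level 2, but with a big enough lambda the RD choice zeroes the
     * coefficient entirely -- exactly the kind of move that breaks a
     * decoder's "within half a step" assumption. */
    printf("plain: %d\n", rd_quantize(37.0, 16, 0.0));
    printf("rd:    %d\n", rd_quantize(37.0, 16, 600.0));
    return 0;
}
```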

I know that mozjpeg [1] features trellis quantization for JPEG encoding. I wonder how this decoder would do with that?

[1] https://github.com/mozilla/mozjpeg

It's great you listen. So I'll try.

1. Scaling down in linear colorspace is essential (see the sketch after this list). One example is [1], where [2] is sRGB and [3] is linear. There are some canary images too [4].

2. Plain bicubic filtering is not good anymore. EWA (Elliptical Weighted Averaging) filtering by Nicolas Robidoux produces much better results [5].

3. Using the default JPEG quantization tables at quality 75 is not good anymore; that's what people refer to as horrible compression. MozJPEG [6] is a much better alternative. With edge detection and quality assessment, it's even better.

4. You have to realize that 8-bit wide-gamut photographs will show noticeable banding on sRGB devices. Here's my attempt [7] to reveal the issue using sRGB as a wider gamut colorspace.
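
On point 1, the conversion itself is tiny; the sketch below uses the standard sRGB transfer function pair (IEC 61966-2-1 constants) and shows why averaging pixels in gamma space goes wrong:

```c
#include <math.h>
#include <stdio.h>

/* Standard sRGB <-> linear transfer functions, values normalized to [0, 1]. */
static double srgb_to_linear(double s)
{
    return s <= 0.04045 ? s / 12.92 : pow((s + 0.055) / 1.055, 2.4);
}

static double linear_to_srgb(double l)
{
    return l <= 0.0031308 ? l * 12.92 : 1.055 * pow(l, 1.0 / 2.4) - 0.055;
}

int main(void)
{
    /* Averaging a black and a white pixel, as a 2x downscale would. */
    double wrong = (0.0 + 1.0) / 2.0;   /* done in gamma space: 0.500 */
    double right = linear_to_srgb((srgb_to_linear(0.0)
                                 + srgb_to_linear(1.0)) / 2.0);
    /* The linear-space result (~0.735) matches the true average
     * brightness; the gamma-space 0.5 is visibly too dark. */
    printf("gamma-space avg: %.3f, linear-space avg: %.3f\n", wrong, right);
    return 0;
}
```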

[1] https://unsplash.com/photos/UyUvM0xcqMA

[2] https://cloud.githubusercontent.com/assets/107935/13997633/a...

[3] https://cloud.githubusercontent.com/assets/107935/13997660/b...

[4] https://cloud.githubusercontent.com/assets/72159/11488537/3d...

[5] http://www.imagemagick.org/Usage/filter/nicolas/

[6] https://github.com/mozilla/mozjpeg

[7] https://twitter.com/vmdanilov/status/745321798309412865

Has anybody tried mozjpeg? https://github.com/mozilla/mozjpeg

It works much better than any of the alternatives listed here; I'm surprised no one has mentioned it yet.

Here is an image compressed to the same size with each:

mozjpeg: http://m8y.org/hn/12597098.jpeg

jpeg.io: http://i.pi.gy/AP8v.jpg

It looks like this binary-searches across the Go encoder's quality levels to find an acceptable quality. That isn't a bad idea, but it won't be very fruitful here. The Go standard library's JPEG encoder is pretty basic and reuses the quantization tables from the spec. It also doesn't optimize the Huffman tables, so its output is typically 10% bigger than it needs to be.
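
The search itself is sound if you point it at a real target; here's a rough self-contained sketch of the same idea against a byte budget (encode() is a hypothetical callback — in practice it would wrap libjpeg/mozjpeg via jpeg_mem_dest and return the output size; this assumes size grows monotonically with quality):

```c
#include <stddef.h>
#include <stdio.h>

/* Binary-search for the highest quality whose output still fits a byte
 * budget. encode() is a hypothetical callback that compresses the image
 * at the given quality and returns the compressed size in bytes. */
static int quality_for_budget(size_t budget, size_t (*encode)(int quality))
{
    int lo = 1, hi = 100, best = 1;
    while (lo <= hi) {
        int mid = (lo + hi) / 2;
        if (encode(mid) <= budget) {
            best = mid;        /* fits: try a higher quality */
            lo = mid + 1;
        } else {
            hi = mid - 1;      /* too big: try a lower quality */
        }
    }
    return best;
}

/* Toy stand-in: pretend compressed size grows linearly with quality. */
static size_t fake_encode(int quality) { return (size_t)quality * 1000; }

int main(void)
{
    printf("best quality under 64 KB: %d\n",
           quality_for_budget(64000, fake_encode));
    return 0;
}
```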

This idea, taken to the extreme, is mozjpeg. It is really advanced and can take advantage of a lot of cool tricks (like trellis optimization) to get the absolute best quality for the size.

https://github.com/mozilla/mozjpeg