Thing I'd love to try if I had time: a compressor that derives the best quantization table for the image. I'm imagining a loop: start with the default table, compress, calculate the difference from the original, DCT the error to see which patterns are being missed, adjust the table, repeat. Stop at some given error/size-increase ratio. (Yes, I'm trying to get someone else nerd-sniped into doing this.)
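Something like this minimal Python sketch, assuming a grayscale image whose dimensions are multiples of 8 and using scipy's DCT; the adjustment heuristic (shrink the quantizer for whichever frequency carries the most error energy) and the name tune_quant_table are made up for illustration, and the size half of the error/size trade-off is omitted for brevity:

    import numpy as np
    from scipy.fft import dctn, idctn

    def roundtrip(img, q):
        # Quantize/dequantize every 8x8 DCT block with table q (the lossy step).
        out = np.empty(img.shape)
        for y in range(0, img.shape[0], 8):
            for x in range(0, img.shape[1], 8):
                c = dctn(img[y:y+8, x:x+8] - 128.0, norm='ortho')
                out[y:y+8, x:x+8] = idctn(np.round(c / q) * q, norm='ortho') + 128.0
        return out

    def tune_quant_table(img, q0, steps=20, target_mse=4.0):
        # Iteratively lower the quantizers for the DCT frequencies that
        # dominate the reconstruction error.
        img = img.astype(float)
        q = q0.astype(float).copy()
        for _ in range(steps):
            err = img - roundtrip(img, q)
            if (err ** 2).mean() < target_mse:
                break
            # DCT the error and accumulate energy per frequency across blocks.
            energy = np.zeros((8, 8))
            for y in range(0, img.shape[0], 8):
                for x in range(0, img.shape[1], 8):
                    energy += dctn(err[y:y+8, x:x+8], norm='ortho') ** 2
            # Spend more bits on the worst-served frequency.
            worst = np.unravel_index(np.argmax(energy), (8, 8))
            q[worst] = max(1.0, q[worst] * 0.8)
        return np.maximum(1, np.round(q)).astype(int)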

Edit: something like this https://www.imaging.org/site/PDFS/Papers/2003/PICS-0-287/849...

Of the older known approaches there are the DCTune and DCTex methods, but it seems neither is available for download anywhere.

Guetzli was already mentioned, and it does roughly what you're talking about.

MozJPEG [1] includes several quantization tables optimized for different contexts (see the -quant-table flag and the source code for the specific tables [2]), and its default quantization table has been tuned to outperform the tables recommended in Annex K of the original JPEG spec.
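If you want to play with those, MozJPEG's cjpeg exposes them directly, e.g. (cjpeg takes PPM/BMP-style input):

    cjpeg -quality 80 -quant-table 2 -outfile out.jpg in.ppm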

It's also worth noting that MozJPEG uses trellis quantization [3] to improve quality without a per-image quantization table search. Rather than determining an optimal quantization table for the whole image, it minimizes rate-distortion cost at the block level by tuning the quantized coefficients themselves.
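To give a flavor of the rate-distortion idea, here's a toy per-coefficient version in Python; MozJPEG's actual trellis runs a dynamic program over the run-length entropy-coding states, so treat the bit-cost model below as made up:

    import numpy as np

    def rd_quantize_block(coefs, q, lam=10.0):
        # For each DCT coefficient, pick the quantized level (rounded,
        # truncated toward zero, or zeroed out) that minimizes
        # distortion + lam * rate.
        levels = []
        for c, step in zip(coefs.ravel(), q.ravel()):
            candidates = {0, int(round(c / step)), int(c / step)}
            def cost(lvl):
                dist = (c - lvl * step) ** 2
                rate = 0.0 if lvl == 0 else np.log2(abs(lvl) + 1) + 1  # crude bit model
                return dist + lam * rate
            levels.append(min(candidates, key=cost))
        return np.array(levels, dtype=int).reshape(coefs.shape)

Zeroing a coefficient often wins because zeros extend runs and are nearly free to entropy-code, which is exactly the trade-off the real trellis search exploits.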

Both the SSIM- and PSNR-tuned quantization tables (2 and 4) provided by MozJPEG use a lower value in the first position of the quantization table, just as this article suggests (9 and 12, respectively, vs. the libjpeg default of 16).

[1] https://github.com/mozilla/mozjpeg

[2] https://github.com/mozilla/mozjpeg/blob/5c6a0f0971edf1ed3cf3...

[3] https://en.wikipedia.org/wiki/Trellis_quantization