Ah, this is a great idea, and I can see how this could be extended beyond simple deblocking.

In essence, it computes the uncertainty introduced by JPEG's quantization step (the source of most of JPEG's information loss) and uses that information to select a decoding that minimizes a particular error metric (here, discontinuities at block boundaries).

But in principle it's not limited to that. Given a suitable prior, it could optimize toward any decoded block within the quantization range, and I imagine this could be paired with, e.g., a DNN to reconstruct an image from a JPEG with much better fidelity.
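To make the idea concrete, here's a toy sketch (my own illustration, not the linked project's code): a stored quantized index q only tells the decoder that the true DCT coefficient lies somewhere in the interval [(q-0.5)Q, (q+0.5)Q], so the decoder is free to pick any point in that box. Below, projected gradient descent picks coefficients for two adjacent 1-D "blocks" that minimize the jump at their boundary while staying inside the quantization intervals. The step size, iteration count, and single uniform quantizer step are arbitrary choices for the demo (real JPEG uses a per-frequency quantization table and 2-D blocks).

```python
import numpy as np
from scipy.fft import dct, idct

Q = 16.0  # one uniform quantizer step for the whole demo (an assumption)
rng = np.random.default_rng(0)
signal = np.cumsum(rng.normal(size=16))  # smooth-ish test signal
blocks = signal.reshape(2, 8)            # two adjacent 8-sample blocks

coef = dct(blocks, norm='ortho', axis=1)
q = np.round(coef / Q)                    # quantized indices stored in the file
lo, hi = (q - 0.5) * Q, (q + 0.5) * Q     # interval each true coefficient lies in

# Projected gradient descent: minimize the squared jump at the block boundary,
# keeping every coefficient inside its quantization interval.
c = (q * Q).copy()                        # start at the standard decoder's choice
for _ in range(200):
    x = idct(c, norm='ortho', axis=1)
    jump = x[1, 0] - x[0, 7]              # discontinuity across the boundary
    # Gradient of 0.5*jump^2 w.r.t. the coefficients: since the orthonormal
    # DCT is its own transpose-inverse, push the pixel-domain gradient
    # through the forward DCT.
    g = np.zeros_like(x)
    g[1, 0], g[0, 7] = jump, -jump
    c -= 0.1 * dct(g, norm='ortho', axis=1)
    c = np.clip(c, lo, hi)                # project back into the feasible box

x1 = idct(c, norm='ortho', axis=1)
x0 = idct(q * Q, norm='ortho', axis=1)    # naive dequantization, for comparison
print(abs(x0[1, 0] - x0[0, 7]), '->', abs(x1[1, 0] - x1[0, 7]))
```

The same projection-onto-intervals structure is what would let a learned prior slot in: replace the boundary-jump objective with a network's score and keep the clip step.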

Additionally, I'd love to see a decoder add noise proportional to the uncertainty and the surrounding structure, since over-smoothed images look unnatural as well. See the activity-masking demo: https://people.xiph.org/~jm/daala/pvq_demo/
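A minimal sketch of what I mean (hypothetical, not how Daala or Pik actually implement it): scale dither by the quantization-error standard deviation (Q/sqrt(12) for a uniform step Q) and a crude local-activity measure, so flat regions stay clean while busy regions get some grain back. The gradient-based activity measure and its normalization constant are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
Q = 16.0                                   # assumed quantizer step
block = rng.normal(128.0, 20.0, size=(8, 8))  # stand-in for a decoded block

# Crude activity mask: gradient magnitude, squashed to [0, 1].
gx, gy = np.gradient(block)
activity = np.clip(np.hypot(gx, gy) / 32.0, 0.0, 1.0)

# Std of uniform quantization error over a step Q is Q/sqrt(12);
# mask it so flat areas receive little or no noise.
sigma = (Q / np.sqrt(12.0)) * activity
dithered = block + rng.normal(size=block.shape) * sigma
```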

Pik can already synthesize noise: https://github.com/google/pik