You can see how unbelievably better this is than JPG (at almost any file size) here:

http://xooyoozoo.github.io/yolo-octo-bugfixes/#swallowtail&j...

Or here:

https://bellard.org/bpg/lena.html

The difference between this and WebP in the first link is actually quite small (http://xooyoozoo.github.io/yolo-octo-bugfixes/#swallowtail&w...).

Additionally, you can find a JavaScript decoder here:

https://github.com/xingmarc/bpg-decoder

It's 215k of JavaScript, though, so it isn't really practical in most cases, and I'd worry about how decoding affects battery life. It'd be very interesting to do some tests and see.
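
A rough way to get numbers would be to time each stage in the browser. This is a minimal sketch, assuming a hypothetical `decodeBPG(bytes)` function that returns an `ImageData`; the real decoder's API may look different:

```typescript
// Hypothetical decoder signature -- the actual bpg-decoder API may differ.
declare function decodeBPG(bytes: Uint8Array): ImageData;

async function timeBpgDecode(url: string): Promise<void> {
  const t0 = performance.now();
  const buffer = await (await fetch(url)).arrayBuffer();
  const t1 = performance.now();

  const pixels = decodeBPG(new Uint8Array(buffer)); // decode on the main thread
  const t2 = performance.now();

  const canvas = document.createElement("canvas");
  canvas.width = pixels.width;
  canvas.height = pixels.height;
  canvas.getContext("2d")!.putImageData(pixels, 0, 0);
  const t3 = performance.now();

  console.log(
    `download ${(t1 - t0).toFixed(1)} ms, ` +
      `decode ${(t2 - t1).toFixed(1)} ms, ` +
      `draw ${(t3 - t2).toFixed(1)} ms`
  );
}
```

Battery impact is harder to measure directly, but decode time on a low-end phone is a reasonable proxy.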

> unbelievably better

I don't know that this qualifies as unbelievable. This is just good marketing and spin. The image in that link is exploiting the fact that modern codecs specify upsampling filters, so the HEVC half looks smoothly varying while JPEG can, per spec, only look pixelated when blown up like that.

There's absolutely no reason that demo couldn't have shown you a JPEG rendered with bilinear filtering, which of course is what you'll see if you put that JPEG on a texture and scale it up using the GPU.

But it didn't, because it wanted to convince you how much "unbelievably" better HEVC is. Meh.
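
For what it's worth, the "JPEG with bilinear filtering" rendering is trivial to get in a browser as well. Here is a minimal canvas sketch standing in for the GPU-texture case (the image filename is a placeholder):

```typescript
// Upscale the same decoded JPEG two ways: smoothed (roughly what GPU
// bilinear texture sampling gives you) and unsmoothed (nearest-neighbour,
// the pixelated look in the comparison pages above).
function upscale(img: HTMLImageElement, factor: number, smooth: boolean): HTMLCanvasElement {
  const canvas = document.createElement("canvas");
  canvas.width = img.naturalWidth * factor;
  canvas.height = img.naturalHeight * factor;
  const ctx = canvas.getContext("2d")!;
  ctx.imageSmoothingEnabled = smooth; // false => blocky pixels when enlarged
  ctx.drawImage(img, 0, 0, canvas.width, canvas.height);
  return canvas;
}

const img = new Image();
img.src = "swallowtail.jpg"; // placeholder name -- any JPEG will do
img.onload = () => {
  document.body.appendChild(upscale(img, 4, true));  // smooth
  document.body.appendChild(upscale(img, 4, false)); // pixelated
};
```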

I mean, to be clear: HEVC is absolutely better, to the tune of almost a factor of two in byte size for the same subjective quality. Just not like this. If you've got a site where still images are a significant fraction of your bandwidth budget (and your bandwidth budget is a significant fraction of your overall budget), then this could help you. In practice... static content bandwidth is mostly a solved problem and no one cares, which is why we aren't using BPG.
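
A back-of-envelope version of that "significant fraction" condition, with made-up numbers:

```typescript
// Illustrative only: if images are some share of your bytes and the new
// codec roughly halves them, the total bandwidth saving is the product.
const imageShareOfBytes = 0.4; // assumed: 40% of bytes served are JPEGs
const codecSaving = 0.5;       // assumed: ~2x smaller at equal quality
const totalSaving = imageShareOfBytes * codecSaving;
console.log(`total bandwidth saved: ${(totalSaving * 100).toFixed(0)}%`); // ~20%
```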

A few times I have looked at the possibility of switching from JPEG to another format for photo web sites, and every time I've come to the conclusion that you can't really win.

There are three benefits that one could get from reducing the file size:

1. Reduced storage cost

2. Reduced bandwidth cost

3. Better user experience

In my models, storage cost matters a lot. You can't come out ahead here, however, if you still have to keep JPEG copies of all the images as a fallback for clients that can't decode the new format.

Benefits in terms of 2 are real.

Benefits in terms of 3 are hard to realize. Part of it is that adding any more parts to the system will cause problems for somebody somewhere. For instance, you can decompress a new image format with a JavaScript polyfill, but is download+decompress really going to be faster for all users?
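
For concreteness, a polyfill along those lines might look like the sketch below; the markup convention and `decodeBPG` are assumptions, not the actual bpg-decoder API. Note that written this way the browser fetches both the JPEG fallback and the BPG, which is exactly why "faster for all users" is in doubt:

```typescript
// Sketch: markup ships <img src="photo.jpg" data-bpg-src="photo.bpg"> and
// script swaps in a decoded canvas. `decodeBPG` is hypothetical.
declare function decodeBPG(bytes: Uint8Array): ImageData;

async function swapInBpg(img: HTMLImageElement): Promise<void> {
  const bpgUrl = img.dataset.bpgSrc;
  if (!bpgUrl) return;
  const bytes = new Uint8Array(await (await fetch(bpgUrl)).arrayBuffer());
  const pixels = decodeBPG(bytes); // main-thread decode, paid by every visitor
  const canvas = document.createElement("canvas");
  canvas.width = pixels.width;
  canvas.height = pixels.height;
  canvas.getContext("2d")!.putImageData(pixels, 0, 0);
  img.replaceWith(canvas); // the JPEG fallback was shown until this point
}

document.querySelectorAll<HTMLImageElement>("img[data-bpg-src]").forEach((img) => {
  void swapInBpg(img);
});
```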

Another problem is that much of the source material is already overcompressed JPEG, so simply recompressing it in another format doesn't lead to a much-improved experience. When I've done my own trials, and when I've looked closely at other people's trials, I haven't seen a revolutionary improvement.

A scenario that I'm interested in now is making desktop backgrounds from (often overcompressed) photos I find on the web. In these cases, JPEG artifacts look like hell when the images are blown up, particularly when they have the sharp-cornered bokeh you get when people shoot with a kit lens. In that case I can accept a slow and expensive process to blow the image up and make a PNG, something like

https://www.mathworks.com/help/images/jpeg-image-deblocking-...

or

https://github.com/nagadomi/waifu2x

The other approach I can imagine is some kind of maximum-entropy reconstruction that minimizes the blocking artifacts.
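
Not the maximum-entropy version, but as a toy illustration of where the artifacts live, here is a crude smoother that only touches the 8x8 block boundaries JPEG uses. The weights are arbitrary, and real deblockers are far more careful about preserving true edges:

```typescript
// Toy JPEG deblocker: blend pixels that sit on either side of an 8x8 block
// boundary. Run it on the ImageData of a canvas-decoded JPEG before scaling.
function deblock(image: ImageData): ImageData {
  const { width, height, data } = image;
  const out = new ImageData(new Uint8ClampedArray(data), width, height);
  const idx = (x: number, y: number) => (y * width + x) * 4;

  // Vertical block boundaries: columns 8, 16, 24, ...
  for (let x = 8; x < width; x += 8) {
    for (let y = 0; y < height; y++) {
      for (let c = 0; c < 3; c++) {
        const left = data[idx(x - 1, y) + c];
        const right = data[idx(x, y) + c];
        const avg = (left + right) / 2;
        out.data[idx(x - 1, y) + c] = (left + avg) / 2;
        out.data[idx(x, y) + c] = (right + avg) / 2;
      }
    }
  }
  // Horizontal block boundaries: rows 8, 16, 24, ...
  for (let y = 8; y < height; y += 8) {
    for (let x = 0; x < width; x++) {
      for (let c = 0; c < 3; c++) {
        const top = out.data[idx(x, y - 1) + c];
        const bottom = out.data[idx(x, y) + c];
        const avg = (top + bottom) / 2;
        out.data[idx(x, y - 1) + c] = (top + avg) / 2;
        out.data[idx(x, y) + c] = (bottom + avg) / 2;
      }
    }
  }
  return out;
}
```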