Every time I run an image comparison, the WebP version looks worse, and yet Google insists it's the same quality. It's baffling.

Even if the above were just my individual bafflement and not an actual issue, the size savings really don't seem worth the compatibility hassle, the extra manpower and workflow complexity of supporting two formats, or the additional storage (and caching) costs caused by the duplication.

And that's assuming it's done RIGHT. 80% of people outside this forum won't understand that you're not supposed to transcode from one lossy format to another, and will just convert their existing JPEGs to WebP, stacking a second generation loss on top of the first.

WebP seems so pointless. A progressive JPEG with an optimized Huffman table can be decoded by a 27-year-old decoder without issues, achieves 90% of WebP's claimed quality/size gains, and can be produced losslessly from your source JPEGs. That's without even touching arithmetic coding (also lossless, and part of the official standard, but poorly supported due to some software patents, even though they have all expired by now), or playing with customized DCT quantization matrices to get more compressible output (which incurs a generation loss, but still produces standard files).
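For reference, the lossless progressive/Huffman-optimized transcode described above is what libjpeg's jpegtran tool does; a minimal sketch, assuming a stock libjpeg or libjpeg-turbo build (the file names are placeholders):

```shell
# Losslessly rewrite an existing JPEG as a progressive JPEG with
# optimized Huffman tables. No pixel data is re-encoded, so there is
# no generation loss; only the entropy coding and scan layout change.
# (-copy none also drops metadata; use -copy all to keep it.)
jpegtran -copy none -optimize -progressive source.jpg > smaller.jpg

# Builds compiled with arithmetic-coding support can emit an
# arithmetic-coded JPEG instead -- smaller still, but as noted above,
# few decoders will accept it:
jpegtran -copy none -arithmetic source.jpg > arith.jpg
```

Because the transform is lossless, you can run it over an existing image archive in bulk without worrying about quality degradation.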

And that is why I don't like the Open Media Alliance in general.

Not only does the best JPEG encoder perform as well as, if not better than, the best WebP encoder; with a JPEG repacker [1], JPEG file sizes can easily be made 20% smaller. If you have to support a new format with relatively little benefit, why not just support serving the repacked files instead?
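For what it's worth, the repacker in [1] ships command-line tools (cbrunsli/dbrunsli in the google/brunsli repo; exact binary names may vary by build) that do the lossless round trip:

```shell
# Losslessly repack a JPEG into the smaller Brunsli representation...
cbrunsli photo.jpg photo.brn

# ...and later reconstruct the original JPEG from it. Brunsli is
# designed so the round trip restores the original JPEG bitstream,
# so you can serve plain JPEG to every client from the compact store.
dbrunsli photo.brn restored.jpg
```

The appeal is that the small format only ever lives at rest (storage, cache, transfer between servers), while every browser still receives a perfectly ordinary JPEG.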

[1] https://github.com/google/brunsli