Another buffer overflow in image decoding; that sounds similar to the one from 2021 [1]. That one was wild: building a CPU out of primitives offered by an arcane image compression format embedded in PDF, just to do enough arithmetic to escalate further to arbitrary code execution!

[1]: https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-i...

Which actually makes me more sympathetic to Chrome not (yet) adopting JPEG-XL.

Don't get me wrong, I think JPEG-XL is a great idea, but to everyone saying "how can supporting another image format possibly do any harm", this is the answer.

Why not implement all image codecs in a safer language instead?

That would seem to tackle the problem at its root, rather than relying on an implementation's age as a proxy for its safety, which clearly isn't a good measure.
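To make the point concrete, here's a minimal Rust sketch (a toy format I made up, not a real codec): the 4-byte header claims a width and height, and a C-style decoder that trusts the header and copies width*height bytes is exactly the class of bug under discussion. With checked slicing and checked arithmetic, the same mistake becomes an error value instead of an out-of-bounds read.

```rust
// Toy "decoder" for a hypothetical raw grayscale format:
// 2 bytes width (LE), 2 bytes height (LE), then width*height pixel bytes.
fn decode_gray(data: &[u8]) -> Result<Vec<u8>, String> {
    // get() returns None instead of panicking or reading past the buffer.
    let header = data.get(..4).ok_or("truncated header")?;
    let width = u16::from_le_bytes([header[0], header[1]]) as usize;
    let height = u16::from_le_bytes([header[2], header[3]]) as usize;

    // checked_mul guards against integer overflow in the size calculation.
    let expected = width
        .checked_mul(height)
        .ok_or("dimensions overflow")?;

    // Reject payloads shorter than the header claims, rather than
    // reading whatever happens to sit past the end of the buffer.
    let pixels = data
        .get(4..4 + expected)
        .ok_or("payload shorter than header claims")?;

    Ok(pixels.to_vec())
}

fn main() {
    // Header claims 16x16 pixels but only 3 bytes of payload follow:
    // the decoder reports an error instead of overflowing.
    let malicious = [16u8, 0, 16, 0, 1, 2, 3];
    println!("{:?}", decode_gray(&malicious));
}
```

None of this removes logic bugs, but it does turn the classic "attacker-controlled length field" pattern into a recoverable error rather than memory corruption.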

There are efforts to do that, notably Wuffs (https://github.com/google/wuffs), a memory-safe language designed specifically for writing this kind of file-format parsing code.

RLBox is another interesting option; it lets you sandbox existing C/C++ libraries (Firefox uses it for several of its media and parsing libraries).

I think the main reason is that security is one of those things people don't care about until it is too late to change. They get to the point of having a fast PDF library in C++ with all the features, then realise they should have written it in a safer language, but by that point that means a complete rewrite.

It's the same reason not enough people use Bazel: by the time you realise you need it, you've already implemented a huge build system in Make or whatever.