I find it funny that fuzzers are now being written in Rust, as if that translates to finding better-quality bugs.

It looks to me like the effort would have been better spent writing a decoder in Rust. AFAIK Mozilla moved the mp4 parser (that's the container format often used around h264) in Firefox to Rust, but their h264 decoder is from FFmpeg (?). In the end the h264 will most likely be decoded on the GPU, using closed-source code from the hardware vendor.

Kudos to the team for doing smart fuzzing instead of just throwing garbage. Most fuzzing projects spend much less brain power, and usually get worse results.

The mp4 demuxer is indeed in Rust [0], and runs in the content process (= the process in which the web page is loaded).
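To illustrate why the parsing layer is where Rust pays off, here's a toy sketch (not the actual mp4parse-rust API): an MP4 file is a sequence of boxes, each starting with a 4-byte big-endian size and a 4-byte type code, and in safe Rust a truncated or lying size field turns into an error instead of an out-of-bounds read.

```rust
use std::io::{self, Read};

/// Toy sketch, not mp4parse-rust: read one MP4 box header
/// (4-byte big-endian size followed by a 4-byte type code).
fn read_box_header<R: Read>(r: &mut R) -> io::Result<(u32, [u8; 4])> {
    let mut header = [0u8; 8];
    r.read_exact(&mut header)?; // truncated input => clean error, not a wild read
    let size = u32::from_be_bytes(header[0..4].try_into().unwrap());
    let kind = [header[4], header[5], header[6], header[7]];
    Ok((size, kind))
}

fn main() -> io::Result<()> {
    // An 8-byte "ftyp" header claiming a 24-byte box.
    let data: &[u8] = &[0, 0, 0, 24, b'f', b't', b'y', b'p'];
    let (size, kind) = read_box_header(&mut &data[..])?;
    println!("box {:?}, declared size {size}", std::str::from_utf8(&kind).unwrap());
    Ok(())
}
```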

We don't have an h264 decoder in our source tree; we use the platform's decoder (because of patents). It very often runs in a separate, dedicated process, and when it doesn't, it runs in the GPU process, because hardware-accelerated decoders use more or less the same resources as the rendering code.

Those other processes run with the tightest sandbox possible (per process type, per platform, etc.), and don't have access to the web page.

On Linux, the platform decoder we're using is `libavcodec` from FFmpeg, but that's still in a separate process with a tight sandbox.
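To give a rough idea of what "tight sandbox" means, here's a minimal sketch using the libc crate. This is not Firefox's actual sandboxing code (which is much more elaborate, with policies per process type); it just shows the simplest lockdown a decoding process could apply before touching untrusted input.

```rust
/// Minimal sketch, not Firefox's real sandbox: switch the current process to
/// strict seccomp mode. Afterwards the kernel only allows read, write, _exit
/// and sigreturn; any other syscall kills the process.
fn enter_strict_seccomp() -> std::io::Result<()> {
    let rc = unsafe {
        libc::prctl(
            libc::PR_SET_SECCOMP,
            libc::SECCOMP_MODE_STRICT as libc::c_ulong,
            0 as libc::c_ulong,
            0 as libc::c_ulong,
            0 as libc::c_ulong,
        )
    };
    if rc != 0 {
        return Err(std::io::Error::last_os_error());
    }
    Ok(())
}

fn main() {
    enter_strict_seccomp().expect("prctl(PR_SET_SECCOMP) failed");
    // Decode/parse untrusted data here; only read/write on already-open fds
    // are possible, so a compromised decoder can't open files or sockets.
    unsafe { libc::_exit(0) }; // exit_group isn't allowed in strict mode
}
```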

We're also doing something interesting, which is compiling libraries to WASM and then back to native code to get memory safety [1]. This is used when performance isn't critical (unlike codecs), e.g. for a demuxer that we don't want to rewrite in Rust.
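Conceptually, what that buys you is that the library's memory stays inside its WASM-derived sandbox, and every value that crosses back into the browser has to be validated before use. Here's a hypothetical Rust-flavoured sketch of that "tainted data" idea (the real mechanism described in [1], RLBox, is C++ and operates on the code generated from WASM):

```rust
// Hypothetical sketch (the real thing is C++): data coming out of the
// sandboxed library is wrapped as "tainted" and can't be used until the
// caller supplies a verifier saying what a sane value looks like.
struct Tainted<T>(T);

impl<T> Tainted<T> {
    fn copy_and_verify<U>(self, verify: impl FnOnce(T) -> Option<U>) -> Option<U> {
        verify(self.0)
    }
}

fn main() {
    // Pretend this length came back from a sandboxed demuxer.
    let untrusted_len = Tainted(4096usize);
    let len = untrusted_len
        .copy_and_verify(|n| if n <= 1 << 20 { Some(n) } else { None })
        .expect("sandboxed library returned an implausible length");
    println!("using verified length: {len}");
}
```

Even if the library has a buffer overflow, it can only corrupt its own sandboxed heap, and implausible outputs get caught at the boundary.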

[0]: https://github.com/mozilla/mp4parse-rust/

[1]: https://hacks.mozilla.org/2021/12/webassembly-and-back-again...