I'm surprised the summary of the article only talked about over-reliance on fuzzing and then suggested 1) more thorough code reviews and 2) sandboxing as solutions?! To me, the solution lies in using memory-safe languages.

I think sandboxing is the more powerful solution. You think in terms of "What privileges can the attacker gain if this code blows up?" and limit the code's privileges to the minimum.
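To make that concrete, here's a rough sketch of the idea on Linux (assuming the `libc` crate; the mechanism and API vary a lot by platform): before touching untrusted input, the process locks itself into seccomp strict mode, so even if the parsing code is exploited, the attacker is stuck in a process that can only read/write already-open file descriptors and exit.

    // Hypothetical sketch: lock the process down before parsing untrusted data.
    fn enter_strict_sandbox() {
        unsafe {
            // Good hygiene: refuse any future privilege escalation.
            let r = libc::prctl(
                libc::PR_SET_NO_NEW_PRIVS,
                1 as libc::c_ulong, 0 as libc::c_ulong, 0 as libc::c_ulong, 0 as libc::c_ulong,
            );
            assert_eq!(r, 0, "PR_SET_NO_NEW_PRIVS failed");
            // Seccomp strict mode: only read(), write(), _exit() and sigreturn()
            // remain allowed; any other syscall kills the process with SIGKILL.
            let r = libc::prctl(libc::PR_SET_SECCOMP, libc::SECCOMP_MODE_STRICT as libc::c_ulong);
            assert_eq!(r, 0, "PR_SET_SECCOMP failed");
        }
    }

    fn main() {
        enter_strict_sandbox();
        // ... parse the untrusted data here, with (almost) no privileges left ...
    }

Real sandboxes usually use SECCOMP_MODE_FILTER with a BPF allowlist instead, because strict mode is so restrictive (it even forbids allocating more memory from the OS), which already hints at the next point.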

Problem is, sandboxing is hard to implement correctly, so it's often done suboptimally or not at all.

Second problem: sandboxes aren't perfect either. They're useful as part of a defense-in-depth approach, but far from sufficient.

Memory safety could solve the problem altogether, but then again no program is 100% memory safe: there's always some kind of primitive that uses memory-unsafe code under the hood, so it's not perfect either.
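For example, even ordinary "safe" Rust bottoms out in unsafe primitives: Vec, String, the slice methods, the allocator, etc. all contain `unsafe` internally, and the language only guarantees that the safe surface they expose can't be misused. A toy illustration of such a wrapper (hypothetical, not actual std code):

    fn first_byte(bytes: &[u8]) -> Option<u8> {
        if bytes.is_empty() {
            None
        } else {
            // SAFETY: we just checked that index 0 is in bounds, so the
            // unchecked access below cannot read outside the slice.
            Some(unsafe { *bytes.get_unchecked(0) })
        }
    }

Callers only ever see a safe function; whether the memory-unsafe part behind the bounds check is actually correct still rests on a human argument.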

The “perfect” solution would probably be:

- use memory-safe languages

- formally verify every primitive that does use memory-unsafe code under the hood

Rust is kind of aiming at this (with efforts like RustBelt [1] and the safety-dance project [2]), but it's not there yet.
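Concretely, what [1] and [2] are after is the correctness argument behind wrappers like the one above: today it lives in a SAFETY comment plus reviewer judgment, and the goal is to make it machine-checked. A hypothetical example of the kind of proof obligation involved (not taken from either project):

    pub fn split_in_half(s: &[u8]) -> (&[u8], &[u8]) {
        let mid = s.len() / 2;
        // SAFETY: `mid <= s.len()` by construction, which is exactly the
        // precondition `split_at_unchecked` requires. This one-line argument
        // is what a formal proof would discharge instead of a reviewer.
        unsafe { s.split_at_unchecked(mid) }
    }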

[1]: https://dl.acm.org/doi/pdf/10.1145/3158154

[2]: https://github.com/rust-secure-code/safety-dance