I read the entire thing, and honestly the heap grooming is very interesting, but really that's the boring part -- lots of trial and error, padding memory, etc. Also interesting that linked lists aren't used by Apple† (and Ian Beer's suggestion that they ought to be), but that's neither here nor there. Getting kernel memory read/write is also very interesting, albeit (again) a bit tedious. At the end of the day, it all started with this:

> Using two MacOS laptops and enabling AirDrop on both of them I used a kernel debugger to edit the SyncTree TLV sent by one of the laptops, which caused the other one to kernel panic due to an out-of-bounds memmove.

How did this even pass the _smell_ test? How did it get through code reviews and auditing? You're copying with a length taken from an untrusted source. It's like memory management 101. I mean, my goodness, it's from a wireless source, at that.

† In this specific scenario, namely the list of `IO80211AWDLPeer`s.

> How did this even pass the _smell_ test?

Because attackers only have to find one place where the implementation got unlucky, and hence defenders are burdened with eliminating every last one of them.

This is why implementing your network protocols in unsafe languages is bad. Testing can only find some bugs; it cannot ensure the absence of bugs.

If it's not one thing, it's another.

https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=rust

Now I know it's deeply comforting to think that if you just had "safety" you could write all the code you want with abandon and the computer would tell you if you did it wrong. But this is a sophomoric attitude that you will either abandon when you have the right experiences, or you will carry into management, where the abject truth in this statement will be used to keep programmer salaries in the gutter and piss-poor managers in a job. Meanwhile, these "safe" languages will give you nothing but shadows you'll mistake for your own limitations.

My suggestion is just to learn how to write secure code in C. It's an unknown-unknown for you at the moment, so you're going to have to learn how to tackle that sort of thing, but the good news is that (with the right strategy) many unknown-unknowns can be attacked using the same tricks. That means if you do learn how to write secure code in C, the skills you develop will be transferable to other languages and other domains, and if you do end up in management, those skills will even be useful there.

> https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=rust

Almost all of those are due to code in unsafe blocks. In other words, not safe Rust.

A few are cryptographic errors. No argument there, Rust won't save you from that.

FWIW Rust does badly need a standardized unsafe-block auditing mechanism. Like "show me all the unsafe blocks in my code or any of the libraries it uses, except the standard library". If that list is too long to read, that's a bug in your project.

Related to what you're looking for: https://github.com/rust-secure-code/cargo-geiger analyzes the dependency tree for unsafe code, but AFAIK it doesn't actually show each individual block.

The readme is quite good.