DARPA is funding this. Good.
They haven't reached inter-procedural static analysis yet, which means they can't solve the big problem: how big is an array? Most of the troubles in C come from that. Whoever creates the array knows how big it is. Everybody else is guessing.
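To make the "everybody else is guessing" problem concrete, here's an illustrative sketch (names are mine, not from the article) of what both C and transpiled unsafe Rust look like: a pointer carries no length, so the callee just trusts whatever n the caller passes.

```rust
// Sketch of the pointer+length idiom. Nothing ties `n` to the
// actual allocation behind `arr`; the callee trusts the caller.
unsafe fn sum(arr: *const i32, n: usize) -> i32 {
    let mut total = 0;
    for i in 0..n {
        total += *arr.add(i); // no bounds check happens here
    }
    total
}

fn main() {
    let data = [1, 2, 3, 4];
    // Only this scope knows the real length. Passing n = 8 would
    // compile just as happily and read past the end of `data`.
    let s = unsafe { sum(data.as_ptr(), data.len()) };
    assert_eq!(s, 10);
}
```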
A bit of machine learning might help here. If you see
void dosomethingwitharray(int arr[], size_t n) {}
a good conjecture is that n is the length of arr. So, the question is, if
this is translated to fn dosomethingwitharray(arr: &[i32]) {}
does it break anything? Both caller and callee have to be analyzed. The C caller
has the constraint assert_eq!(arr.len(), n);
That's a proof goal. If a simple SMT-type prover can prove it true, then the call can be simplified to use an ordinary Rust slice. If not, the conversion has to drop to those ugly C pointer forms, preferably with a comment inserted. So you need something that makes good guesses, which is a large-language-model kind of thing, and something that checks them, which is a formalism kind of thing.

The process can be assisted by putting asserts in the original C, as checks on the C and hints to the conversion process. That's probably the cleanest way to provide human assistance.
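The two translation outcomes described above can be sketched like this (hypothetical names and bodies; not actual c2rust output). If the prover discharges arr.len() == n at every call site, pointer and length collapse into one slice; if not, the translation keeps the raw pointer form and the caller is left holding the invariant.

```rust
// Outcome 1: the prover succeeded, so (pointer, length) becomes a slice.
// The length travels inside the slice; no separate `n` argument.
fn do_something_with_array(arr: &mut [i32]) {
    for x in arr.iter_mut() {
        *x += 1;
    }
}

// Outcome 2: the prover failed, so the translation drops to the ugly
// C pointer form, with no compiler-checked link between arr and n.
unsafe fn do_something_with_array_raw(arr: *mut i32, n: usize) {
    for i in 0..n {
        *arr.add(i) += 1;
    }
}

fn main() {
    let mut v = [1, 2, 3];
    do_something_with_array(&mut v);
    assert_eq!(v, [2, 3, 4]);

    let mut w = [1, 2, 3];
    // The unsafe form re-creates the original hazard: the caller must
    // uphold the invariant the prover could not discharge.
    assert_eq!(w.len(), 3);
    unsafe { do_something_with_array_raw(w.as_mut_ptr(), w.len()) };
    assert_eq!(w, [2, 3, 4]);
}
```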
I've wanted this for conversion of OpenJPEG code to Rust. That's a tangle of code doing wavelet transforms, with long blocks of touchy subscripting and arithmetic, plus encoders and decoders for an overly complex binary format containing offsets and lengths. Someone recently ran it through c2rust. The unsafe Rust code works. It's compatible with the original C - it segfaults on the same test cases that cause the C code to segfault. This is why a naive transpiler isn't too helpful.
(The date at the bottom of the article is 2022-06-13. Has there been further progress?)
The article links to their github repo:
https://github.com/immunant/c2rust
There are commits within the last hour, so at least some sign of life.