And that's where model checking / formal verification is a tremendous help (or would be if it were easier to use).

There are many properties of multithreaded/distributed systems that are emergent and that current compilers' static checks don't address.

In distributed systems it's becoming more common to use TLA+[1] to model the system and make sure there are no bugs at the design level (rather than just at the implementation level).
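
For anyone who hasn't touched TLA+: its model checker (TLC) essentially does an exhaustive, breadth-first exploration of every reachable state of your design and checks invariants along the way. Just to illustrate the idea (this is not TLA+ itself, and the "two processes doing a non-atomic increment" design is a made-up toy example), a bare-bones version of that exploration in plain Rust could look like:

    use std::collections::{HashSet, VecDeque};

    // One interleaved state of a toy design: two processes each increment
    // a shared counter non-atomically (read, then write).
    // pc = per-process program counter: 0 = about to read, 1 = about to write, 2 = done.
    #[derive(Clone, PartialEq, Eq, Hash)]
    struct State {
        counter: u32,
        pc: [u8; 2],
        local: [u32; 2],
    }

    fn successors(s: &State) -> Vec<State> {
        let mut next = Vec::new();
        for p in 0..2 {
            let mut n = s.clone();
            match s.pc[p] {
                0 => { n.local[p] = s.counter; n.pc[p] = 1; }     // read step
                1 => { n.counter = s.local[p] + 1; n.pc[p] = 2; } // write step
                _ => continue,                                    // done: no step
            }
            next.push(n);
        }
        next
    }

    fn main() {
        // Breadth-first exploration of every interleaving, checking the
        // invariant "once both processes are done, counter == 2".
        let init = State { counter: 0, pc: [0, 0], local: [0, 0] };
        let mut seen = HashSet::new();
        let mut queue = VecDeque::from([init]);
        while let Some(s) = queue.pop_front() {
            if !seen.insert(s.clone()) {
                continue;
            }
            if s.pc == [2, 2] && s.counter != 2 {
                println!("design bug found: final counter = {}", s.counter);
            }
            queue.extend(successors(&s));
        }
    }

It finds the lost-update interleaving without ever running the real code, which is exactly the kind of bug that only exists at the design level.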

In particular for message passing, I'm quite curious about the state of the research on Communicating Finite State Machines[2] and Petri Nets[3], and whether there are actual products that could be integrated into regular languages rather than being of academic interest only.
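
I don't know of a production-ready CFSM/Petri-net checker embedded in a mainstream language either, but a small slice of the idea (making illegal message orders unrepresentable for a single endpoint) can already be encoded with ordinary types. A hypothetical Rust sketch, with the protocol and all names made up for illustration, and none of the global properties (deadlock freedom, liveness) that a real checker would give you:

    use std::marker::PhantomData;

    // One endpoint of a request/response protocol encoded as a typestate:
    // sending out of order is a compile error rather than a runtime bug.
    struct Idle;
    struct AwaitingReply;

    struct Endpoint<State> {
        _state: PhantomData<State>,
    }

    impl Endpoint<Idle> {
        fn new() -> Self {
            Endpoint { _state: PhantomData }
        }
        // Sending a request consumes the Idle endpoint and yields one that
        // can only wait for the reply.
        fn send_request(self, payload: &str) -> Endpoint<AwaitingReply> {
            println!("-> request: {payload}");
            Endpoint { _state: PhantomData }
        }
    }

    impl Endpoint<AwaitingReply> {
        // Receiving the reply returns the endpoint to Idle; calling this
        // on an Idle endpoint simply does not type-check.
        fn recv_reply(self) -> (String, Endpoint<Idle>) {
            ("ok".to_string(), Endpoint { _state: PhantomData })
        }
    }

    fn main() {
        let ep = Endpoint::<Idle>::new();
        let ep = ep.send_request("ping");
        let (reply, _ep) = ep.recv_reply();
        println!("<- reply: {reply}");
        // Calling recv_reply before send_request would not compile.
    }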

I expect that Ada/SPARK[4] has some support for this pattern, though Ada's multithreading story isn't quite there yet (?).

I'm quite excited about the future Z3 prover integration in Nim[5] and the potential to write concurrent programs free of design bugs[6], addressing both ends of the concurrency-bug spectrum:

- bugs in low-level concurrent data structures, via formal verification (something I used in the past to prove that my design was bug-free and that the deadlock I experienced was caused by a condition variable bug in glibc[7], which forced me to use futexes directly)

- bugs in the high-level design, implemented as an event loop / finite state machine reacting to events received over channels (see the sketch below)
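
To make that second point concrete, the shape I have in mind is roughly the following. This is a minimal std-only Rust sketch with made-up events and states, not code from any of the projects above: the worker owns its state, and the only way the state changes is through a pure transition function driven by channel messages, which makes the design easy to test or model-check in isolation.

    use std::sync::mpsc;
    use std::thread;

    // Made-up events and states, just to show the event-loop / FSM shape.
    enum Event {
        Start,
        Data(u64),
        Shutdown,
    }

    #[derive(Debug, PartialEq)]
    enum State {
        Idle,
        Running { processed: u64 },
        Stopped,
    }

    // Pure transition function: touches no shared memory and no locks,
    // so it can be exercised exhaustively on its own.
    fn step(state: State, event: Event) -> State {
        match (state, event) {
            (State::Idle, Event::Start) => State::Running { processed: 0 },
            (State::Running { processed }, Event::Data(_)) => {
                State::Running { processed: processed + 1 }
            }
            (_, Event::Shutdown) => State::Stopped,
            (state, _) => state, // ignore events that are invalid in this state
        }
    }

    fn main() {
        let (tx, rx) = mpsc::channel();

        let worker = thread::spawn(move || {
            let mut state = State::Idle;
            // The event loop: the only way the state changes is via `step`.
            for event in rx {
                state = step(state, event);
                if state == State::Stopped {
                    break;
                }
            }
            state
        });

        tx.send(Event::Start).unwrap();
        tx.send(Event::Data(42)).unwrap();
        tx.send(Event::Shutdown).unwrap();

        println!("final state: {:?}", worker.join().unwrap());
    }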

Are there people working on model checkers or formal verification in Rust? Like the author, I think addressing memory bugs / the borrow checker is only a part of the "fearless concurrency" story, and preventing design bugs would be extremely valuable.

[1]: https://lamport.azurewebsites.net/tla/tla.html

[2]: https://en.wikipedia.org/wiki/Communicating_finite-state_mac...

[3]: https://en.wikipedia.org/wiki/Petri_net

[4]: https://www.adacore.com/sparkpro

[5]: https://nim-lang.org/docs/drnim.html

[6]: https://github.com/nim-lang/RFCs/issues/222

[7]: https://github.com/mratsim/weave/issues/56

And shameless plug: my talk at NimConf on debugging multithreaded programs (data structures and design): https://www.youtube.com/watch?v=NOAI2wH9Cf0&list=PLxLdEZg8DR...

Slides: https://docs.google.com/presentation/d/e/2PACX-1vRq2wvd4q_mo...

Holy smokes, that Z3 prover for Nim programs with inline invariant conditions is mindblowing, thank you for sharing this.

Also, your username looked familiar -- you're the creator of Arraymancer and Laser!!

You wrote the most performant, memory-efficient fast-math libs in existence today (as far as I know), and your work on data-parallel operations is super fascinating.

Thanks for the kind words. I think that crown belongs to MKL-DNN / oneDNN for now though ;), at least in the deep learning domain.

I also expect Halide[1], Taichi[2], DaCe[3] and Tiramisu[4] to be able to reach those performance levels more easily, which is why I want to add a compiler to Laser (implemented at compile time via Nim macros). The prerequisite, however, is excellent and composable multithreading support, which OpenMP didn't provide.

In statistics / data analysis, though, my PCA (Principal Component Analysis) beats all the implementations I've seen from SciPy, Facebook and R.

[1]: https://halide-lang.org/

[2]: https://github.com/taichi-dev/taichi

[3]: https://github.com/spcl/dace

[4]: http://tiramisu-compiler.org/