This is Leslie Lamport's writing, and he's complaining that nobody uses his TLA+ formal specification language. (Here's the TLA book.[1] Read the introduction, and it's much the same rant.)

I used to work on formal specifications and program proofs, around the time Lamport was developing temporal logic. I took the position that, rather than trying to make programmers learn math, we should automate and divide up the problem so that they didn't have to.[2][3] This meant minimizing the formalism and making the verification information look like ordinary programming. It also meant carving the harder math problems out into a separate system, where someone using a theorem prover would prove the necessary theorems in a way that allowed their use and reuse by ordinary programmers. The underlying math is still there, but not in your face so much.

There's a mathematician mindset: Terseness is good. Cleverness is good. Elegance is good. Generality is good. Case analysis is ugly. This is unhelpful for program verification.

Program verification generates huge numbers of formulas to be proven. Most of the formulas are very dumb, and can be dealt with automatically by a simple theorem prover. Where it's necessary to prove something hard, our approach was to encourage users to write more assertions in the code to narrow the thing to be proven, until they were down to "assert(expra); assert(exprb);". Getting to that point is much like ordinary debugging.
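As a rough sketch of what this narrowing looks like (my own illustration, not from the original system; the function and variable names are made up), the programmer sandwiches the hard step between two assertions, so the only thing left to prove is that the first implies the second:

```python
# Hypothetical illustration: narrowing a verification condition with
# assertions. "expra" is the first assert, "exprb" the second; everything
# else reduces to routine checks a simple prover can discharge.

def sum_positive(items: list[int]) -> int:
    assert all(x > 0 for x in items)       # expra: precondition
    total = sum(items)
    assert len(items) == 0 or total > 0    # exprb: fact needed downstream
    return total

print(sum_positive([3, 4, 5]))  # prints 12
```

Once the code is in this shape, the remaining obligation is the implication between the two asserted expressions, which no longer mentions the program at all.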

Now they just needed a theorem, independent of the program, that could prove "exprb" given "expra". That job could be handed off to a theorem-proving expert. A library of proved and machine-checked theorems would be built up. Some would be general, some project-specific. This is the software engineer's approach to the problem.
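To make that hand-off concrete (again my own sketch, with invented names), the lemma "expra implies exprb" can be stated as a standalone object, independent of any program. A real library entry would carry a machine-checked proof; here it is merely sanity-checked by exhaustive testing over a small domain:

```python
# Hypothetical illustration: a library lemma stated independently of the
# program. A theorem-proving expert would prove it once; programmers
# would then reuse it wherever the same expra/exprb pair appears.
from itertools import product

def expra(xs):
    # every element is positive
    return all(x > 0 for x in xs)

def exprb(xs):
    # the sum is at least the length (each positive int is >= 1)
    return sum(xs) >= len(xs)

def lemma_holds(xs):
    # the lemma: expra implies exprb
    return (not expra(xs)) or exprb(xs)

# Exhaustive sanity check over all lists of length 0..3 with values -2..2
# (a stand-in for the machine-checked proof a real library would hold).
assert all(lemma_holds(list(xs))
           for n in range(4)
           for xs in product(range(-2, 3), repeat=n))
```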

The mathematician's approach is to capture much of the program semantics in mathematical form and work on them as big math formulas. Most of the work is done in the formula space, where mathematicians are comfortable, not the code space, where programmers are comfortable. This seems to hold back the use of formal methods in program verification.

Much effort has gone into trying to find some clever way to make program verification elegant. "A great variety of program logics are in use in computer science for the specification of (software and/or hardware) systems; for example: modal and temporal logics, dynamic logic, Hoare logic, separation logic, spatial logic, nominal logic, lax logic, linear logic, intuitionistic type theory, LCF."[4] We used to call this the "logic of the month club".

[1] http://research.microsoft.com/en-us/um/people/lamport/tla/bo... [2] http://www.animats.com/papers/verifier/verifiermanual.pdf [3] http://portal.acm.org/citation.cfm?id=567074 [4] http://homepages.inf.ed.ac.uk/als/PSPL2010/

I know I've posted this a few times, but I don't think I've ever seen you in a thread about this. Do you have any simple way for a computer scientist to learn about proven methods?

More precisely, how do I engage in the dark incantations required to conjure up my own proven methods? For my critical code, right now I just write unit tests. Do you know some place that can shed some light on what I should actually do? I mean, I could just tell my coworkers "yeah, I formally proved this, it won't have bugs", but I don't think that's quite in good spirit.

Not the parent, but indeed, as you suggest, unit tests can only show the presence of bugs, never their absence.

To be able to state that a piece of software is bug-free and formally correct, we first need a notion of correctness: that the algorithm follows a [formal] specification.

The first step would be to formally specify what the software/algorithm should do. Then, the next step would be to use formal methods and mathematical tools (e.g. set theory and logic) to prove that your algorithm does exactly as the specification says.
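To make the "specify first" step concrete (my own toy example, not from the thread), a specification is a statement of *what* the result must satisfy, kept separate from any particular algorithm. For sorting, the spec is "an ordered permutation of the input", and any implementation can be checked against it:

```python
# Hypothetical illustration: a specification stated separately from the
# implementation. Any sorting algorithm can be judged against this spec.
from collections import Counter

def meets_sort_spec(inp, out):
    """True iff `out` is a sorted permutation of `inp`."""
    return (Counter(inp) == Counter(out)                      # permutation
            and all(a <= b for a, b in zip(out, out[1:])))    # ordered

assert meets_sort_spec([3, 1, 2], sorted([3, 1, 2]))   # spec satisfied
assert not meets_sort_spec([3, 1, 2], [1, 2])          # element dropped
```

Proving correctness then means showing the implementation satisfies this predicate for *all* inputs, not just the ones you thought to test.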

TLA+ is a formal method that can be used for writing specifications and model-checking them. While a big step forward from testing, model-checking still doesn't "prove" anything.

For proving, you can use automated provers like the Z3 [0] SMT solver (or a formal framework built on such provers) to do proof checking.
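As a rough illustration (mine, not from the thread), here is what discharging a simple proof obligation looks like in SMT-LIB, the standard input format Z3 accepts. The variables and property are made up; the trick is to assert the hypotheses together with the *negation* of the goal, so that "unsat" means the goal follows:

```smt2
; Hypothetical sketch: proving (x > 0 and y > 0) implies (x + y > 0).
; We assert the hypotheses and the negated conclusion; if no model
; exists, the implication is valid.
(declare-const x Int)
(declare-const y Int)
(assert (and (> x 0) (> y 0)))   ; hypotheses
(assert (not (> (+ x y) 0)))     ; negated conclusion
(check-sat)                      ; Z3 answers: unsat
```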

The Event-B [1] method is one such formal method that uses SMT solvers and allows you to model systems starting with a very abstract and high-level model, state invariants about your system, and prove that those invariants hold as you refine your models and make them more concrete.

Feel free to shoot me an email if you'd like to talk more about formal methods :)

[0]: https://github.com/Z3Prover/z3

[1]: http://www.event-b.org/