A formal spec isn't just ordinary source code by another name; it's at a quite different level of abstraction, and (hopefully) it will be proven that its invariants always hold. (This is a separate step from proving that the model corresponds to the ultimate deliverable of the formal development process, be that source code or a binary.)
Bugs in the formal spec aren't impossible, but use of formal methods doesn't prevent you from doing acceptance testing as well. In practice, there's a whole methodology at work, not just blind trust in the formal spec.
Software developed using formal methods is generally assured to be free of runtime errors at the level of the target language (divide-by-zero, dereferencing NULL, out-of-bounds array access, etc). This is a pretty significant advantage, and applies even if there's a bug in the spec.
> A formal spec isn't just ordinary source-code by another name, it's at a quite different level of abstraction
This is the fallacy people have when thinking they can "prove" anything useful with formal systems. Code is _already_ a kind of formal specification of program behavior. For example, `printf("Hello world");` is a specification of a program that prints hello world. And we already have an abundance of tooling for applying all kinds of abstractions imaginable to code. Any success at "proving" correctness using formal methods can probably be transformed into a way to write programs that ensure correctness. For example, Rust has pretty much done so for a large class of bugs prevalent in C/C++.
The mathematician's wet dream of applying "mathematical proof" to computer code will not work. That said, the approach of inventing better abstractions and making it hard if not impossible for the programmer to write the wrong thing (as in Rust) is likely the way forward. I'd argue the Rust approach is in a very real way equivalent to a formal specification of program behavior that ensures the program does not have the various bugs that plague C/C++.
Of course, as long as the programming language is Turing-complete you can't make it impossible for the programmer to mistakenly write something they didn't intend. No amount of formalism can prevent a programmer from writing `printf("hello word")` when they intended "hello world". Computers _already_ "do what I say", and "do what I mean" is impossible unless people invent a way for minds to telepathically transmit their intentions (at which point you'd have to wonder whether the intention is the conscious one or a subconscious one).
> thinking they can "prove" anything useful with formal systems
As I already said in my reply to xmprt, formal methods have been used successfully in developing life-critical code, although it remains a tiny niche. (It's a lot of work, so it's only worth it for that kind of code.) Google should turn up some examples.
> Code is _already_ a kind of formal specification of program behavior.
Not really. Few languages even have an unambiguous language-definition spec. The behaviour of C code may vary between different standards-compliant compilers/platforms, for example.
The SPARK Ada language, on the other hand, is unambiguous and is amenable to formal reasoning. That's by careful design, and it's pretty unique. It's also an extremely minimal language.
> `printf("Hello world");` is a specification of a program that prints hello world
There's more to the story even here. Reasoning precisely about printf isn't as trivial as it appears. It will attempt to print Hello world in a character-encoding determined by the compiler/platform, not by the C standard. It will fail if the stdout pipe is closed or if it runs into other trouble. Even a printf call has plenty of complexity we tend to just ignore in day to day programming, see https://www.gnu.org/ghm/2011/paris/slides/jim-meyering-goodb...
> Any success at "proving" correctness using formal methods can probably be transformed into a way to write programs that ensure correctness
You've roughly described SPARK Ada's higher 'assurance levels', where each function and procedure has not only an ordinary body, written in SPARK Ada, but also a formal specification.
SPARK is pretty challenging to use, and there can be practical limitations on what properties can be proved with today's provers, but still, it is already a reality.
> Rust has pretty much done so for a large class of bugs prevalent in C/C++
Most modern languages improve upon the appalling lack of safety in C and C++. You're right that Rust (in particular the Safe Rust subset) does a much better job than most, and is showing a lot of success in its safety features. Programs written in Safe Rust don't have memory safety bugs, which is a tremendous improvement on C and C++, and it manages this without a garbage collector. Rust doesn't really lend itself to formal reasoning, though; it doesn't even have a proper language spec.
> The mathematician's wet dream of applying "mathematical proof" on computer code will not work
Again, formal methods aren't hypothetical.
> I'd argue the Rust approach is in a very real way equivalent to a formal specification of program behavior that ensures the program does not have the various bugs that plague C/C++.
It is not. Safe languages offer rock-solid guarantees that certain kinds of bugs can't occur, yes, and that's very powerful, but that is not equivalent to full formal verification.
It's great to eliminate whole classes of bugs relating to initialization, concurrency, types, and object lifetime. That doesn't verify the specific behaviour of the program, though.
> No amount of formalism can prevent a programmer from writing `printf("hello word")` when they intended "hello world"
That comes down to the question of how you get the model right. See the first PDF I linked above. The software development process won't blindly trust the model. Bugs in the model are possible, but in practice it seems uncommon for them to go unnoticed for long, and they are not a showstopper for using formal methods to develop ultra-low-defect software.
> "do what I mean" is impossible unless people invent a way for minds to telepathically transmit their intentions
It's not clear what your point is here. No software development methodology can operate without a team that understands the requirements, and has the necessary contact with the requirements-setting customer, and domain experts, etc.
I suggest taking a look at both the PDFs I linked above, by way of an introduction to what formal methods are and how they can be used. (The Formal methods article on Wikipedia is regrettably rather dry.)
I think the reason formal proofs haven't really caught on is that they just add more complexity and more stuff to maintain. The list of things that need to be maintained keeps growing: code, tests, deployment tooling, configs, environments, etc. And now add a formal proof on top of that. If the user changes their requirements, the proof needs to change; a lot of code changes will probably necessitate a proof change as well. And it doesn't even eliminate bugs, because the formal proof could contain a bug too. I suppose it could help in trivial cases, like sanity-checking that a value isn't null or that a lock is only held by a single thread, but it seems like a lot of those checks are already integrated into build tooling in one way or another.
Yes, with the current state of the art, adopting formal methods means adopting a radically different approach to software development. For 'rapid application development' work, it isn't going to be a good choice. It's only a real consideration if you're serious about developing ultra-low-defect software (to use a term from the AdaCore folks).
> it doesn't even eliminate bugs because the formal proof could include a bug too
This is rather dismissive. Formal methods have been successfully used in various life-critical software systems, such as medical equipment and avionics.
As I said above, formal methods can eliminate all 'runtime errors' (like out-of-bounds array access), and there's a lot of power in formally guaranteeing that the model's invariants are never broken.
> I suppose it could help in trivial cases like sanity checking that a value isn't null or that a lock is only held by a single thread
No, this doesn't accurately reflect how formal methods work. I suggest taking a look at the PDFs I linked above. For one thing, formal modelling is not done using a programming language.
You're mixing up the development problem with the computational problem.
If you can't use formal proof where it's supposed to be necessary, just because the user can't be arsed to wait, then the software project was simply not well conceived.
Disclaimer: I'm very much not an expert.
Interesting reading:
* An interesting case-study, albeit from a non-impartial source [PDF] https://www.adacore.com/uploads/downloads/Tokeneer_Report.pd...
* An introduction to the Event-B formal modelling method [PDF] https://www.southampton.ac.uk/~tsh2n14/publications/chapters...