Because imperative programming is not very interesting from a programming language theory point of view. If this were a list about compiler construction, it would look very different.
As more and more language features originating from functional programming trickle into mainstream languages, this research clearly hasn't gone to waste, even though not everyone is programming in a functional language.
imperative programming is not very interesting from a programming language theory point of view.
I'm afraid I disagree; the opposite is the case. Modern PL theory is mostly about program correctness, and that is much easier for pure functional languages. Until recently, nobody had a handle on program logics for imperative languages, or reasoning techniques for operational semantics. The only available reasoning techniques used to be based on denotational semantics, following Scott's breakthrough domain-theoretic semantics of lambda-calculus. That's why so much theory was developed in a functional programming context, because FP is in some ways much simpler than imperative or concurrent programming (even though compiling FP languages is harder). The available textbooks reflect this state of affairs.
All that has changed in the last decade or so, but it will take another decade or so before recent research insights into non-functional languages percolate down to accessible textbooks.
This is the real answer. The restrictions on functional languages that make them (perhaps) harder to use for everyday programming also make them much easier to reason about and prove properties of.
Oh, I agree with this. "Interesting" was a poor choice of words. It's definitely interesting, but then again, as you say, this changed in the past decade or so. Getting introduced to a new topic by exploring the latest state of the art without going through the basic theory first isn't a good way to learn most subjects.
I know it is possible. But I wonder what stops us from doing it for most programs we write from day to day. And if proving correctness is indeed important (I believe it is), what can we do next to make proving correctness feasible in our daily jobs?
What stops us is cost. Formally verifying non-trivial properties (in the sense of CompCert) with current technology is several orders of magnitude more expensive and time-consuming than programming and a bit of testing. And this price is almost never worth it.
We hope that this cost can be lowered in the future.
The commonplace languages in use are difficult to reason about. It would take a serious paradigm shift for most companies to stop what they're doing and develop a toolchain for wholesale program verification. Additionally, it takes a special breed of programmer to be able and willing to write code in a way that enables verification, and then actually take the time to verify it. Functional programming is a jump for most mainstream engineers; it is an altogether different matter to recruit or develop a team of engineers familiar with dependent type theory and formal verification.
The industry tends to lag behind academia by at least a couple of decades. In the '90s, lots of research was being devoted to functional programming and the various applications of type systems. Now we see various companies starting to use languages like Haskell and OCaml. Nowadays, it seems like dependent types and program verification are popular in academia, so maybe we'll see some of that by 2030.
For an example of what mafibre is talking about: even proving that a BubbleSort implementation is correct is quite tricky.
First of all, you need to correctly define the specification. Saying "the returned list is sorted" isn't enough, because that would also accept outputs that have nothing to do with the input, like [1,2,3,4,5]. Saying that the returned list only contains elements from the input isn't enough either, because that would still accept a result with duplicated or missing elements. To write a specification that models what you actually want, you need to say that the returned list is a sorted permutation of the input list.
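To make that concrete, here is a rough sketch of that specification as an executable check, written in Java (SortSpec and its method names are just made up for the example):

    import java.util.*;

    // Sketch of the "sorted permutation of the input" specification.
    final class SortSpec {

        // The output is non-decreasing.
        static boolean isSorted(List<Integer> xs) {
            for (int i = 1; i < xs.size(); i++) {
                if (xs.get(i - 1) > xs.get(i)) return false;
            }
            return true;
        }

        // The output contains exactly the same elements as the input,
        // counting duplicates, which rules out dropped or invented elements.
        static boolean isPermutationOf(List<Integer> input, List<Integer> output) {
            Map<Integer, Integer> counts = new HashMap<>();
            for (int x : input)  counts.merge(x, 1, Integer::sum);
            for (int y : output) counts.merge(y, -1, Integer::sum);
            return counts.values().stream().allMatch(c -> c == 0);
        }

        // The full specification: sorted AND a permutation of the input.
        static boolean satisfiesSpec(List<Integer> input, List<Integer> output) {
            return isSorted(output) && isPermutationOf(input, output);
        }
    }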
Then, once you manage to get a formal specification that is actually what you want, proving it is also hard. In the bubble sort case you need to figure out the right invariant, and you need to write down the proofs in detail, down to the point where you use the fact that comparison is a transitive relation.
Because imperative programming is not very interesting from a programming language theory point of view.
I don't know, I find GCL and predicate transformer semantics to be quite interesting. Esterel's approach to synchronous reactive systems, too.
I don't see why the research potential of imperative languages couldn't be just as ripe. The semantic possibilities are large. A ton of PL and OS research from ETH Zurich, Xerox PARC, and Bell Labs alike has come out of imperative language designs.
In practice, most programming languages are compiled via SSA or even Array SSA, i.e., via a pure functional intermediate representation. And understanding it thoroughly is a huge advantage.
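To illustrate (a hand-written sketch, not any actual compiler's IR): once every reassignment of a variable gets its own fresh single-assignment name, the code reads like a chain of pure let-bindings.

    // Not any real compiler's IR, just a sketch of why SSA form reads
    // like pure functional code.
    class SsaSketch {

        // Imperative version: x is reassigned in place.
        static int beforeSsa(int a) {
            int x = a + 1;
            x = x * 2;
            x = x - 3;
            return x;
        }

        // SSA-style version: every assignment introduces a fresh,
        // single-assignment name, exactly like nested let-bindings.
        static int afterSsa(int a) {
            final int x1 = a + 1;
            final int x2 = x1 * 2;
            final int x3 = x2 - 3;
            return x3;
        }
    }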
Note the word "enlightenment" in the title. Everything on the list is about functional programming languages because the author believes that, once you are enlightened, of course you're going to use functional programming languages.
If you find that condescending and/or narrow-minded, you're not alone. There are more things in programming language theory than are dreamt of in their philosophy (to steal a line from Shakespeare).
PL theory has never been dominated by OOP, except for those theories specifically related to it. Just take a look at the POPL '95 proceedings. FP starts from math; for OO, math had to be shoehorned into successful existing work that didn't care about it all that much.
Not just that, but we still don't really know how to combine subtyping and generics very well. The soundness question when considering variance is still quite open, and it shows in the poor support for type inference in OO languages (in spite of heroic efforts made in languages like Scala).
Much of this has to do with the root of the problem: most of our PL theory was basically designed for FP, and applying it to OOP has been predictably difficult. Most PL theoreticians are more interested in FP than OOP, so progress is quite slow (though see the work of Ross Tate, Igarashi, and others for progress).
I suspect that combining generics with subtyping might always lead you into the land of undecidable type inference. After all, type inference for pure genericity (System F) is already undecidable.
It is possible that he's referring to the combination of mutability and covariance being unsound.
As an example, let's assume that you have two classes, Cat and Dog, both inheriting from Animal, and that you have a covariant mutable List implementation.
Say you have a value of type List[Cat]. Since List is covariant, this value can legally be passed to a function that expects a parameter of type List[Animal].
Now, imagine you have a function that expects a List[Animal] and adds a Dog to it. That's a legal operation: Dog extends Animal and can thus be used this way.
Putting the two together, you have a perfectly legal way of adding a Dog to a List[Cat], which a sound type system should not allow. An example of this is Java's arrays, which are both covariant and mutable, and which as a result can yield type errors at runtime.
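In Java that failure mode looks roughly like this (Animal, Cat and Dog are placeholder classes made up for the example):

    // Minimal demonstration of the unsoundness described above.
    class Animal {}
    class Cat extends Animal {}
    class Dog extends Animal {}

    public class CovariantArrayDemo {
        public static void main(String[] args) {
            Cat[] cats = new Cat[1];

            // Allowed, because Java arrays are covariant:
            // Cat[] is treated as a subtype of Animal[].
            Animal[] animals = cats;

            // Also allowed at compile time, since a Dog is an Animal. But the
            // underlying array really holds Cats, so this line throws
            // java.lang.ArrayStoreException at runtime.
            animals[0] = new Dog();
        }
    }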