What that quote means is that you can take a Haskell program and compile it down to a small lambda-calculus core, and that is more or less what GHC does.
This has important implications. From a theoretical perspective, it's easier to prove that the language remains sound when you add new features. But also from a more practical point of view, it's easier to work on compiler backends for a more minimal language, to target new CPUs or for applying optimizations.
These issues are very well known to language designers. E.g. one of the big changes in Scala 3 / Dotty is that it's supposedly based on "DOT Calculus". Having the ability to "desugar" the language into a simpler language that is provably type safe is pretty awesome.
What makes Haskell interesting is that the developers using Haskell also become acquainted with such issues. Haskell is a language and ecosystem that raises the knowledge ceiling for its users. And there aren't many languages around that do that ;-)
The choir already understands all that. You lost everyone else at the words "lambda calculus".
Go into a random PHP shop, and I would be surprised if a single person would know what lambda calculus is.
I'm not complaining about the message, I'm complaining about the language you are using. If you want to evangelize for Haskell, you need to use words that someone who failed out of differential calculus will recognize and understand.
I was almost 100% certain it was a joke until I read the paragraph that quote is from.
I want to reassure you that most Haskell evangelists (myself included) that say stuff like that usually mean it as a joke. Most people who seek to evangelize haskell do not lose sight of the fact that it's pretty daunting at first and has a relatively steep learning curve.
That said, Haskell is simple, but it's simple in an unintuitive way (as most other languages obscure the ideas). For example, I'd argue that when people first hear "enum", what they really want is a sum type (i.e. `data Something = OneThing {...} | TheOther {...}`) and not what you get in most languages, which is enums-that-are-just-named-numbers or enums-that-are-just-named-strings.
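To make the contrast concrete, here's a minimal sketch (names invented for illustration) of a sum type whose alternatives carry different data -- something a named-numbers enum can't express:

```haskell
-- Each constructor carries its own payload; the compiler makes sure
-- every pattern match handles both cases.
data Shape
  = Circle Double        -- radius
  | Rect Double Double   -- width, height
  deriving (Eq, Show)

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h
```

A `Circle` and a `Rect` need different fields, and the type tracks which is which -- there's no shared bag of optional fields and no magic integer behind the scenes.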
Most languages are coming around to the way haskell/other ML languages view the world however:
- non-nullable + option types
- function composition (a bunch of languages get stuck in the filter/map phase but never get to actual composition)
- typeclasses + data types over classes + interfaces/abstract classes
- pattern matching
- Monads
- The Free(R) Monad and attempts to make programs more like state machines and encode it at the type level
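A tiny sketch (invented names) of how a few of those bullets look in practice -- an option type, pattern matching, and plain function composition:

```haskell
import Data.Char (toUpper)

-- Absence is explicit in the type: no nulls, and the compiler insists
-- that both cases are handled somewhere.
lookupNickname :: String -> Maybe String
lookupNickname "ada" = Just "The Countess"
lookupNickname _     = Nothing

-- function composition: append "!" first, then uppercase everything
shout :: String -> String
shout = map toUpper . (++ "!")

-- pattern matching on the Maybe forces us to handle the Nothing case
greet :: String -> String
greet name = case lookupNickname name of
  Just nick -> shout nick
  Nothing   -> "who?"
```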
I'm probably preaching to the choir in this thread but Haskell's type system is like the Mercedes of production-ready languages these days -- eventually the features trickle down to other languages. There are more advanced options out there like Idris (which already has good dependent type support) but it's going to take a while for any of them to get the support and ecosystem Haskell fought hard for over all these years.
One highlight of Haskell's malleability and flexibility is the work around linear types in Haskell[0] -- they basically give you rust-like semantics (reference tracking for compile-time "automatic" memory management/etc) without having to rewrite haskell from scratch. Turns out you can classify rust's main feature under the super generic problem of "ascrib[ing] more precise types". Turns out with a good enough type system, and lots of patience/determination, you can solve a lot of common software issues at the type level.
As the famous saying goes, the future is here, it's just not evenly distributed.
I think the large amount of effort the Rust community puts into explaining what they're doing with their type system, making it a first-class feature that's included in standard tutorials and having a lot of ergonomic effort put into error messages, is actually essential to making it work for ordinary users. (And even then, just barely; fighting the borrow checker is a thing.)
A language that allows someone to build this as a library is much more scary. What other incomprehensible type hackery could someone do? Haskell can be fun but it's always going to look like a somewhat more practical research language to me.
> I think the large amount of effort the Rust community puts into explaining what they're doing with their type system, making it a first-class feature that's included in standard tutorials and having a lot of ergonomic effort put into error messages, is actually essential to making it work for ordinary users. (And even then, just barely; fighting the borrow checker is a thing.)
100% agree -- they've learned a lot from other languages and have padded their steep learning curve (due to the ownership/borrowing scheme and advanced type system), and it's done wonders for them. I spend a lot of my time these days trying to decide between Haskell and Rust for new projects; there are seemingly a lot of people in both camps who move freely between these two languages because of their similarities.
> A language that allows someone to build this as a library is much more scary. What other incomprehensible type hackery could someone do? Haskell can be fun but it's always going to look like a somewhat more practical research language to me.
I think it's a bit more than a library (many of the things in the original post and in linear types require deeper changes), and a ton of these features require extensions to GHC (via language extension pragmas in the source file), but I do agree, the type trickery involved is intense.
In my opinion type-level hackery in haskell is not the same as when you do crazy class hierarchies/patterns in other languages (let's say Java) -- for a few reasons:
- Haskell is pretty darn legible, for example, servant[0] is a library that lets you express an API as a type. It does a lot at the type level with advanced techniques, but here's an example of an API:
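(The code example didn't survive the quoting here, so what follows is a hypothetical reconstruction in servant's style. The combinators are stubbed out with empty data declarations so the snippet stands alone; in real code `:>`, `:<|>`, `Capture`, `Get`, etc. come from the servant package, and `Task`, `Partial`, and `WithUUID` are invented for illustration.)

```haskell
{-# LANGUAGE DataKinds, KindSignatures, PolyKinds, TypeOperators #-}

import Data.Kind (Type)
import Data.Proxy (Proxy (..))
import GHC.TypeLits (Symbol)

-- Stubbed-out servant-style combinators, just enough to kind-check:
data (a :: k) :> (b :: Type)
infixr 4 :>
data a :<|> b
infixr 3 :<|>
data Capture (name :: Symbol) (a :: Type)
data ReqBody (formats :: [Type]) (a :: Type)
data Get (formats :: [Type]) (a :: Type)
data Post (formats :: [Type]) (a :: Type)
data JSON
data UUID
data Task
data Partial (t :: Type)
data WithUUID (t :: Type)

-- The API, expressed as a type: list tasks, create a task from a
-- partial payload, fetch one task by UUID.
type TaskAPI =
       "tasks" :> Get '[JSON] [WithUUID Task]
  :<|> "tasks" :> ReqBody '[JSON] (Partial Task) :> Post '[JSON] (WithUUID Task)
  :<|> "tasks" :> Capture "uuid" UUID :> Get '[JSON] (WithUUID Task)

-- a value-level witness, so the type is actually checked end-to-end
taskAPI :: Proxy TaskAPI
taskAPI = Proxy
```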
Despite the fancy type-level trickery that servant is doing under the covers, this is impressively readable. In many instances the advanced type machinery (like `Partial t`, `WithUUID t`, `Capture "uuid" UUID`) actually makes the code easier to understand rather than detracting from readability.
- You can step up the levels of expressiveness on your own terms -- the simpler way to write things is always right next to the more complicated way, and you can choose to push more into the type level when you choose. Example:
    data Task = Task { tName  :: TaskName
                     , tDesc  :: TaskDesc
                     , tState :: TaskStateValue
                     } deriving (Eq, Show, Read, Generic)

    -- The "f" below can be swapped for a polymorphic type like "Maybe" or
    -- "Identity" (which simply wraps the value it holds). You can think about
    -- this by literally replacing the f with the word "Maybe" everywhere you
    -- see it in the definition below -- it even reads sensibly as plain English.
    data TaskF f = TaskF { tfName  :: f TaskName
                         , tfDesc  :: f TaskDesc
                         , tfState :: f TaskStateValue
                         }

    data TaskFInState (state :: TaskState) f where
      FinishedT   :: f TaskName -> f TaskDesc -> TaskFInState 'Finished f
      InProgressT :: f TaskName -> f TaskDesc -> TaskFInState 'InProgress f
      NotStartedT :: f TaskName -> f TaskDesc -> TaskFInState 'NotStarted f
      -- | The case where we don't know what the state actually is.
      -- Ex. when we pull a value from the DB, we can't be polymorphic over
      -- state with the other constructors, but the database *has* to know
      -- what was stored for the state. Once we have an UnknownStateT we can
      -- write functions that try to translate to what we expect/require and
      -- fail otherwise.
      UnknownStateT :: f TaskName -> f TaskDesc -> f TaskStateValue -> TaskFInState state f
      -- | Similar case, but for when we need to fix the state type variable to *something*
      SomeStateT    :: f TaskName -> f TaskDesc -> f TaskStateValue -> TaskFInState 'Some f
This code isn't the greatest but it's what I've been working with in a recent blog series with an in-depth guide to writing a simple rest-ish API. You can go from `Task` to `TaskF` to `TaskFInState` as you graduate, and even convert between these types, using whichever level of precision a given situation requires.
Want to write a function that only works on fully-specified Tasks that are in a very specific state? Specialize the types! For example: `cancelTask :: TaskFInState 'InProgress Identity -> TaskFInState 'NotStarted Identity`. Just reading this signature tells you important information about the function.
More on-topic, the "hello world" program for a dependently typed language is usually length-typed vectors (i.e. `Vector Int 5` to represent a list of ints with 5 elements) -- from my experience people start trying to get more sophisticated with types when they see cool advanced type trickery that they think would be useful, and start doing the reading/trudging uphill to figure it out. This probably is a bit difficult for new developers thrust into a codebase with tricks they don't understand, but again the legibility of haskell code and the clarity provided by type signatures and the syntax helps.
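For reference, here's a minimal version of that length-indexed vector "hello world" (a sketch with a hand-rolled `Nat`; real libraries usually build on GHC.TypeLits instead):

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

-- the length lives in the type: Vec ('S ('S 'Z)) Int is a 2-element vector
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- only defined for non-empty vectors, so there is no runtime emptiness
-- check and no possibility of a "head of empty list" crash
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x

-- mapping preserves the length, and the type says so
vmap :: (a -> b) -> Vec n a -> Vec n b
vmap _ VNil         = VNil
vmap f (VCons x xs) = VCons (f x) (vmap f xs)
```

`vhead VNil` is rejected at compile time rather than crashing at runtime.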
Finding Haskell difficult is nothing to be ashamed of. However it's going to appear more difficult than it ought to be if you're approaching it as an experienced programmer with strong opinions. It can be humbling to realize that there's a whole branch of programming you're completely new to and know little about. It's going to be difficult to climb that mountain and you will feel like a beginner again and that's okay.
> Finding Haskell difficult is nothing to be ashamed of
Agreed.
> It can be humbling to realize that there's a whole branch of programming you're completely new to and know little about.
Agreed.
> It's worth the effort!
To learn it? Sure. To use it? I disagree. I was more unproductive with Haskell than I've ever been with C, which is usually the language I love to bash on. I can look up opcodes and write assembly faster. "Ah, but with experience!" - I don't see why I'd bother.
I write mostly functional code in any language I'm working in as it is. The functional parts of Haskell don't really have anything for me, at least not to compensate the pain of not getting anything done.
Take lens, for instance. It's an incredibly complicated solution to a problem that AFAIK only plagues Haskell! I'd love to know of any exceptions.
Writing networking code was an exercise in frustration where I had to try multiple libraries and faff about with converting between strict and lazy ByteStrings.
I'm glad I bothered to try and learn Haskell. I just don't see me ever using it in production, let alone try to hire anyone who knows it. It's hard enough finding Python programmers who know what generator expressions are.
If we're only talking experiences here then I can say that I've written plenty of networking code in C. I grew up on C. I have 15+ years' experience in C-like languages across the gamut; more if you count the years when I was a youth and hacking together games and homework assignments for fun.
I've definitely felt the pain you're describing and have even thought of lens in the way you describe. How can anyone be productive in a language where logging is so difficult to add? I can add logging to a Python application in 3 lines. And as you say... converting from lazy to strict, from Text to ByteString to String, depending on the library. So annoying! Nothing like those clean examples they show you in the tutorials!
Every time I went down that rabbit hole was because I was getting frustrated with feeling like a beginner again. What was I getting from Haskell for all of this hassle? Why were these simple things in other languages so hard? What's the benefit?
Peace of mind.
It's easier to get started with a C or Python or Javascript program. Partly because of experience. Partly because those languages do nothing to isolate side effects, constrain mutation, or check my code in any effective way. I stick to a particular style (usually functional), I write tests first, and I usually end up paying for that easy start later in the project when, despite our intentions and efforts, side effects, mutation, and plain old type errors creep in. What I gain in efficiency early on in the project I pay for later with interest. The interest rate is only compounded when we start adding team members.
My experience with Haskell has been that it is frustrating and sometimes slower to get started with, especially when I was first starting out, but that it has been worth the effort in the long run. Down the road when my project started to grow I was better equipped to manage the complexity because I had a type system that kept me honest and guided me towards strong designs. I had a tool that would ensure that only the parts I was really sure about could perform IO actions or use shared mutable state. And best of all I could not touch a module for months and when I came back to it there's a good chance I could understand what it was doing, make the change I needed to make, refactor it, and push it to production without any worries.
I think the human side of software development was the nicest feature of working with Haskell. As I added new team members I didn't have to worry about junior developers breaking the build as much. Training was much more straight forward. The documentation was much easier to write as we could focus on the higher-level designs and let the type system document the details. Testing was more effective as we could focus on problems and tests that brought more value to the business problems.
I still choose other languages for various reasons but Haskell has been worth it in my experience. Painful but worth it.
> Brainfuck is simple, but nobody would choose to write a real project in it.
Sure. But it's worth knowing whether a language is simple or not (and what tradeoffs have been made to get there).
> Most programmers struggle trying to understand what a monad is. That's not easy.
It's easier than achieving Haskell-like defect rates in languages that have ad-hoc built in solutions to the same problems, IME.
> The free monad is not easy.
Disagree. Certainly it's a lot easier than solving the same problem by hand.
> Monad transformers are not easy.
Agreed, but again, easier than achieving the same defect rates without them.
> Understanding foldable/traversable/arrows/applicative is not easy.
Easier than understanding dozens of ad-hoc informally specified implementations of half of them, which is what working in an ecosystem without them boils down to.
> Lens is/are not easy.
Perhaps not, but easier than achieving the same defect rate without them.
> It's easier than achieving Haskell-like defect rates in languages that have ad-hoc built in solutions to the same problems, IME.
My first Haskell program type-checked and passed all tests. It was incredibly broken. As in it didn't work at all. I understood monads and it didn't help one jot.
Monads in other languages _can_ be helpful. It pays off to know what one is. But in Haskell, more often than not they're being used to solve problems other languages don't even have in the first place.
> Disagree. Certainly it's a lot easier than solving the same problem by hand.
How? Anyone knows how to write a test double by hand. Most people can use a mocking framework. The visitor pattern is in the GoF book. These are well known and don't involve completely changing the type signature of the code you're writing just so you can interpret it. And let's not even mention Lisp.
> Agreed, but again, easier than achieving the same defect rates without them.
Not really. Write pure code, test it, have a thin layer of side-effects around it. The transformer stack is solving, again, problems only Haskell has. I don't need the state monad anywhere else, so I don't need to layer it on top of anything. In a language with exceptions, ditto. And so on.
> Easier than understanding dozens of ad-hoc informally specified implementations of half of them, which is what working in an ecosystem without them boils down to.
I would bet good money that in my old team I wouldn't be able to explain what an applicative functor is to 90% of them if I had one month of full time training.
> Perhaps not, but easier than achieving the same defect rate without them.
> Monads in other languages _can_ be helpful. It pays off to know what one is. But in Haskell, more often than not they're being used to solve problems other languages don't even have in the first place.
Every language faces the same problems; your only options are to not solve the problem at all, solve it in an ad-hoc way, or solve it with a general feature.
E.g. every language faces the problem that error handling can be quite boilerplatey. Bad languages don't solve this at all. Somewhat better languages offer an ad-hoc language feature like exceptions, which are superficially nice (indeed superficially nicer than monadic error handling). But every ad-hoc language feature is one more thing to keep in mind when working or debugging, so they all come at a penalty to your defect rate: exceptions lead to "magic" control flow which can lead to resources not getting closed on error paths, seemingly-innocuous refactors changing behavior, and so on.
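For contrast, a small sketch of the monadic alternative: with `Either`, the failure path is visible in every type signature and short-circuits without any hidden control flow (`parseAge`/`averageAge` are invented for illustration):

```haskell
import Text.Read (readMaybe)

-- each failure mode produces an explicit Left with a message
parseAge :: String -> Either String Int
parseAge s = case readMaybe s of
  Just n
    | n >= 0    -> Right n
    | otherwise -> Left ("negative age: " ++ s)
  Nothing       -> Left ("not a number: " ++ s)

-- do-notation stops at the first Left; no exceptions, no magic unwinding
averageAge :: String -> String -> Either String Int
averageAge a b = do
  x <- parseAge a
  y <- parseAge b
  return ((x + y) `div` 2)
```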
> Anyone knows how to write a test double by hand. Most people can use a mocking framework.
Until you ask them to test code that relies on something async, and then they'll write flaky tests, or no tests at all. And I don't think I've ever seen a decent test double for something like a database (most people give up and use an in-memory one). Even those who write tests tend to end up with tests that are much more boilerplatey than main code and not kept up to date, because mocking frameworks rely on magic reflection that means you can't refactor code that uses them the normal way.
> Write pure code, test it, have a thin layer of side-effects around it.
And when the business logic you're implementing is inherently coupled to effectful questions and effect actions, what then? Distort your code, invent some ad-hoc datastructure to represent a half-complete business decision that needs to rely on the answer from some other service?
> I don't need the state monad anywhere else, so I don't need to layer it on top of anything. In a language with exceptions, ditto. And so on.
Well, you need state, you need error-handling; either you do these things manually and directly (painfully verbose and unmaintainable, leading to high defect rates), you do them in a "magical" unmanaged way (high defect rates because you can no longer know what code interferes with what other code), or you manage them explicitly, at which point you need a way to interleave managed effects.
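A minimal sketch of that interleaving with a transformer stack (a hypothetical `withdraw` example; `StateT` comes from the transformers package that ships with GHC). State and failure are both managed, and a failure discards the in-progress state instead of leaking a half-updated value:

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.State (StateT, get, put, runStateT)

type Balance = Int

-- managed state (the balance) layered over managed errors (Either)
withdraw :: Int -> StateT Balance (Either String) ()
withdraw amount = do
  balance <- get
  if amount > balance
    then lift (Left "insufficient funds")  -- fails the whole computation
    else put (balance - amount)
```

`runStateT (withdraw 30 >> withdraw 50) 100` yields `Right ((), 20)`, while `runStateT (withdraw 200) 100` yields `Left "insufficient funds"` -- no partially updated balance survives the failure.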
> I would bet good money that in my old team I wouldn't be able to explain what an applicative functor is to 90% of them if I had one month of full time training.
Bet you could. I simply don't believe that anyone who can understand the "visitor pattern" or "observables" or "factories" or whatever the OO pattern of the week is could struggle with applicative functors. It's an interface consisting of two functions of at most two arguments. It's really not that hard.
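Concretely, the interface in question is just two functions (a sketch, applied to `Maybe`):

```haskell
-- The two members of Applicative:
--   pure  :: a -> f a                  -- lift a plain value
--   (<*>) :: f (a -> b) -> f a -> f b  -- apply a wrapped function
-- Using them to apply a two-argument function across two Maybe values:
addMaybes :: Maybe Int -> Maybe Int -> Maybe Int
addMaybes mx my = pure (+) <*> mx <*> my
```

If either argument is `Nothing`, the whole result is `Nothing`; otherwise the addition goes through.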
> Solves, yet again, a problem only Haskell has.
Not so; every substantial system I've seen has ended up needing some way to pass a "patch" or "change" around between different bits of code. The only choice is whether you do it in an ad-hoc way or a standard one.
And that would prove what, exactly? That no language is perfect we already know. My assertion is that, for the vast majority of programmers, Haskell is fundamentally harder than the languages they already know or could learn instead.
Comparing Haskell to brainfuck is a little hyperbolic -- you do not have to fully understand Monads and their underpinnings to be productive in Haskell. Just like when new programmers write "public static void main" and have no idea what it does, or how JAR files work, or how cout works, newbies can type `do` and get off and running. In Haskell most of your time is spent writing non-monadic code -- writing pure functions that don't care about the outside world.
Another important distinction I want to draw is that while you're right that Haskell isn't easy, the difficulty it contains is fundamental to computation itself.
Haskell uses its type system to buddy up with category theory, which is considered a dual to typed lambda calculus[0]. There are spooky (to me at least, less so to actual mathematicians) natural equivalences here, in that there are certain things you can talk about at the level of Haskell types (category theory) that are universal, and since we're talking about computation, they start becoming universal truths about computation. This is a powerful concept because it changes the way you think about problems. Regardless of whether it's useful in your day job (which might be plumbing two RESTful APIs together) -- I'm not trying to convince anyone to use Haskell for everything (you should use the right tool for the job given the constraints), I'm trying to encourage people to try it at least once to see the difference.
Haskell in my mind offers a step stool to understand the deeper theory/mathematics underpinning of computation as a whole, by providing a language that can essentially do both -- express complex concepts and actually do useful things -- it's somewhere on the opposite end of the spectrum from assembly, but not as far as a pen and paper/chalkboard/your mind.
To follow on the point above, all the concepts you're discussing as hard are what we're building in other languages -- the other languages are just giving us versions of the power/expressiveness with corners cut for ergonomics sake, or in the worst cases, drastically reduced power. There is a tradeoff, and it obviously does not always make sense to use Haskell's version of some abstraction, if you do not understand it or it takes you 5x as long to code up. That said, knowing the underlying concepts that underpin these things is the kind of knowledge you can take from project to project -- concepts like traversable show up in file/folder walking, graph traversal, all these problems that you might think were not generically solvable if you only ever dealt with libraries that worked at the lower level (as in one library for walking graphs and the other for walking the filesystem).
You're right that understanding the concepts isn't easy, but this is the best kind of understanding to struggle for and attain -- it's fundamental to the field itself. No one is going to invent a replacement for Monad/Foldable/Traversable tomorrow; at best, people will discover new concepts of the same kind.
> I have personally worked with dozens of programmers that would never be able to write "proper" Haskell.
As long as these programmers learn the paradigms I don't care, use another language that's easier to write. The problem is that you can happily write language x for years and never scratch the surface of these deeper truths/paradigms/approaches, but that doesn't mean it's a good idea for your own personal development as a developer/computer scientist/whatever else. If you don't see value in exploring different, likely fundamental ways of thinking about computing then you do you.
Maybe it's just me, but I see value in languages that expand the way I think -- just like that aha moment when I figured out how to use map/filter/reduce instead of writing for loops. I learned to think in transformations, one step abstracted from the imperative reality.
Haskell may not be for everyone, but the paradigms it exposes are for all computer scientists (as far as I can tell anyway), and thus software developers/engineers who want to work smarter and not harder.
> you do not have fully understand Monads and their underpinnings to be productive in Haskell
I don't think that's true. Most programming is done in a team, and someone will have written a transformer stack. I was told to use the free monad to solve my testing problem in my first Haskell program. "Hello world" doesn't count.
> the difficulty contained within is fundamental to computation itself.
Not always. The examples I gave are all of what I consider to be "unforced errors". I don't need monad transformers in any other language I've written code in. Or lens. Or arrows. Or free monads. Or...
> I'm trying to encourage people to try it at least once to see the difference.
I did. Hence me even knowing what lens is.
> concepts like traversable show up in file/folder walking, graph traversal, all these problems that you might think were not generically solvable if you only ever dealt with libraries that worked at the lower level (as in one library for walking graphs and the other for walking the filesystem).
Honest question: how is knowing Haskell's traversables (which are hard) going to add to one's knowledge if one has already worked with any of the following:
* C++ iterators
* Python iterables
* D ranges
* C++20 ranges
* (I think) Clojure's transducers?
Sure, Python and Clojure are less applicable due to being dynamic, but D and C++? Generic. Same goes for foldable.
> As long as these programmers learn the paradigms
They wouldn't be able to. I say this as someone who somehow managed to take a student who didn't know how to add fractions together and tutor him to the point of (just) passing a university Linear Algebra class.
> Maybe it's just me, but I see value in languages that expand the way I think
So do I. Trying to learn Haskell was enlightening. I can't see me picking it as the tool of choice for pretty much any project though.
> software developers/engineers who want to work smarter and not harder.
When I wrote Haskell I found myself working harder, writing bugs anyway and wondering what the point was. It's intellectually interesting, and worth looking into. I just don't want to write in it.
> I don't think that's true. Most programming is done in a team, and someone will have written a transformer stack. I was told to use the free monad to solve my testing problem in my first Haskell program. "Hello world" doesn't count.
Right, but there isn't any other language where a production application will be obvious to newcomers anyway. There is always some cruft/enterprise design pattern/whatever else that will need to be learned, or at least hand-waved for early productivity, and deeply understood later. My point was that you don't have to understand what `do` does. If you get some simple feature (let's say, update the user wiggle count when any friend wiggles), the chance you're going to be editing legible, clear Haskell (given you understand how to read Haskell) is high. Something like:
    -- | Do a user wiggle and return the wiggle count
    doWiggle :: User -> IO WiggleCount
    doWiggle user = do
      _ <- saveWiggleToDB user
      wiggle <- updateWiggleCount user
      -- here's where the newbie would probably start thinking to add some code
      return wiggle
While this code is just a fake example, it's concise and very legible -- you don't need to understand the complexities of the IO monad or whatever stack would be there; Haskell can mimic the simplicity of a completely imperative, monad-less environment, down to the `return`.
> Not always. The examples I gave are all of what I consider to be "unforced errors". I don't need monad transformers in any other language I've written code in. Or lens. Or arrows. Or free monads. Or...
But see, this is the whole point of the discussion of the relationship between lambda calculus and category theory. You're using most of these concepts already, you're just calling them something different, and wrapping them in some unnecessary ceremony/enterprise design pattern. Or even worse, you're using weaker, partial, worse approximations of these concepts and don't know it. Ignorance is not bliss in our line of work -- whether or not you use the concepts, I (at least) expect good programmers to know the underpinnings.
> I did. Hence me even knowing what lens is.
I can't argue with your personal experience -- if you don't think learning that stuff was beneficial then there's nothing I could say to change your mind. My argument hinges on the fact that those concepts are beneficial and haskell shows them to you without much adulteration faster than other languages.
> Honest question: how is knowing Haskell's traversables (which are hard) going to add to one's knowledge if one has already worked with any of the following:
It adds to one's knowledge the same way knowing that the Iterator/Iterable pattern exists adds to it beyond only ever having written for loops in those languages. It adds to one's knowledge the same way reading the gang of four book does, or any theoretical computer science does. Traversable is roughly identical to those iterator patterns but it is the purified form of the concept -- you don't get bogged down in how Python or some other language built their particular iterator implementation.
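A small sketch of that "purified iterator": the same `traverse` call walks a list and a hand-rolled tree, because the traversal logic lives in the `Traversable` interface rather than in any one container (`halve` and `Tree` are invented for illustration):

```haskell
{-# LANGUAGE DeriveFoldable, DeriveFunctor, DeriveTraversable #-}

data Tree a = Leaf | Node (Tree a) a (Tree a)
  deriving (Eq, Show, Functor, Foldable, Traversable)

-- an effectful step: fails (Nothing) on odd input
halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

-- the exact same generic traversal works for both containers:
halveList :: [Int] -> Maybe [Int]
halveList = traverse halve

halveTree :: Tree Int -> Maybe (Tree Int)
halveTree = traverse halve
```

If any single element fails, the whole traversal fails, for lists and trees alike -- no per-container plumbing required.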
> They wouldn't be able to. I say this as someone who somehow managed to take a student who didn't know how to add fractions together and tutor him to the point of (just) passing a university Linear Algebra class.
I don't consider myself an optimist, but I think you're wrong. If you were right, growth would be impossible for humankind. A more useful statement IMO might be that the amount of effort and focus one would have to put in is beyond what they could exert in the amount of time they have to devote (or their lifetime) -- but even then that's a hard statement to prove.
> So do I. Trying to learn Haskell was enlightening. I can't see me picking it as the tool of choice for pretty much any project though.
I mean that's OK though -- hopefully you picked up some things that were useful to you. Haskell ergonomics or practicality (basically, language features) is something that the Haskell community has to work on and make better.
> When I wrote Haskell I found myself working harder, writing bugs anyway and wondering what the point was. It's intellectually interesting, and worth looking into. I just don't want to write in it.
Again, I can't argue with your personal experience, but I can tell you that if you're using Haskell's type system at all correctly, what you're saying just can't be true. It's just not how it works -- for example, you've probably never gotten an NPE in Haskell; it's a class of errors that for the most part doesn't exist. The errors you were getting (or maybe compiler feedback?) aren't the same as in other languages.
> For example I'd argue that when people when people first hear "enum", what they really want is a sum type
I'll be honest, I'm pretty sure the first time I heard "enum" I had exactly zero idea of what it would be, so it can be whatever the language designer wants; it will always work out for someone.
Well, both System Fc and Haskell Core are quite simple and consistent. It's much more concise than any imperative language with dozens of corner cases, yet quite powerful.
> Really, I feel like everyone who is going to read that article will either already know that, or will have no idea what that sentence means.
Yeah, that seems to explain the reaction to it here. But I'm not sure writing things you expect your audience to understand is such a bad thing. This is probably just the wrong audience for the post.
"Haskell, at its core, is simple: it is just a polymorphic lambda calculus with lazy evaluation plus algebraic data types and type classes."
In a different context, I would have interpreted that as a sarcastic parody of Haskell evangelists.
Really, I feel like everyone who is going to read that article will either already know that, or will have no idea what that sentence means.