> For example, when writing an Elm program I might at some point decide that users should have an admin flag. I will then try to use that flag in a function at which point the compiler will tell me that I have failed to add it to the User model. I will add it to the model at which point the compiler will tell me that I have failed to account for it in my main update function.
This beautifully explains why types are so useful. I don't know if the number of bugs goes down, but it sure is useful to have an assistant that checks all the implications of a change I want to make, and does so in a second.
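In Elm terms the loop looks something like this (a minimal sketch, all names made up): add the field to the model, and the compiler then points at every record literal and every function that hasn't accounted for it yet.

type alias User =
    { name : String
    , isAdmin : Bool -- the newly added flag
    }

-- Using the flag in a function...
canDelete : User -> Bool
canDelete user =
    user.isAdmin

-- ...means every place that still constructs a User without the flag, e.g.
newUser : String -> User
newUser name =
    { name = name }
-- is now a compile error until `isAdmin = False` (or similar) is added.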
Absolutely agree, this is one of the biggest benefits I have found since I started working with TypeScript (that, and the ease of refactoring).
Of course Flow/TypeScript's approach to typing isn't as complete as Elm's, but you can still gain some of the benefits by using one of them; and for many of us, introducing Flow/TS at work is a lot more likely to happen than introducing Elm.
Ironically, Elm's type system is actually one of the simplest out there. No subtyping (unions or intersections), no flow-sensitive typing, just plain first-order type constructors, Hindley-Milner type inference and nice records. Much of the complication in Flow and TypeScript's type systems comes from the need to accommodate existing JavaScript idioms.
Good point. I suppose what I meant was more that types are at the core of Elm and pervade every part of the language, whereas TS/Flow add types on top of a dynamic language, and there are parts where it may be difficult or impossible to maintain type safety to the same degree (especially when interacting with non-TS/Flow code).
No one has really figured out nominal subtyping + type parameters with good type inference support. The best we have achieved seems to be Scala, which still requires more type annotations than many are comfortable with.
The real issue is not nominal vs structural typing. It's the subsumption rule (which says that you can freely use a subtype anywhere a supertype would be expected).
Mixing subtyping with subsumption and parametric polymorphism (type parameters) is really very difficult, though. From a theoretic point of view, type inference and type checking become harder. From a usability perspective, the programmer now has to deal with variance annotations (for example, in Scala `class List[+T]` means the T parameter is covariant) and bounded polymorphism (for example, things like <? extends T> in Java).
Sure. As a member of the extended ML/Haskell family, Elm is inherently typeful, whereas in TS/Flow types are opt-in. But, as it turns out, pragmatically, typefulness actually limits the extent to which a language can become complicated, because the complexity ends up showing in the type system, and that's useful feedback for the language designer that he or she is going in the wrong direction. OTOH, the designer of a non-typeful language can “hide the ball” by not reflecting complexity in types.
They do go down, and static types also allow good AOT code generation to native code in languages that have such toolchains, whereas dynamic languages really need a JIT.
After working at a startup that had a server architecture similar to AOL Server, and seeing some heavy Zope deployments (late '90s), I never came to understand the use of dynamic languages for large codebases.
It's weird. I can't imagine working on large (and critical) code bases in dynamic languages, just from past experiences working on those systems, yet for the project I'm working on now (which has the potential to be critical if the MVP goes well and I put it into production) I've resigned myself to using Elixir.
Even though I keep getting bit by bugs that a proper type checker would catch (Dialyzer is not sufficient for some of these bugs unfortunately) the problem is such a perfect use case for the BEAM VM that I don't really have a lot of choice without increasing the complexity by a lot.
Lisp also has good AOT compilers, but you need to provide the necessary (optional) type annotations.
So in the end you end up with code that looks no different from code in static languages.
With ML-like type inference and support for dynamic types, statically typed languages have hardly any disadvantage, and you get the tooling support.
Dynamic languages can have very nice tooling, and we have yet to regain what was possible in the Xerox and Genera environments, but your environment needs a live image of the code to produce good results, hence JIT.
It's refreshing to see a piece written by someone who's new to functional programming, discovering the beauty of types and pure functions for the first time.
I very much agree with the author; learning functional programming idioms has had a profound impact on my ability to model problems in code, regardless of the language I'm using.
I admire Elm so much for putting an emphasis on the user experience, recognizing that it has been one of the biggest blockers to making functional programming mainstream.
At first glance, Elm's type system looks like it borrows heavily from Haskell [0]. Learning Haskell's type system is a great mental exercise, even if you don't end up coding in the language.
Elm's type system is a lot simpler than Haskell's. No higher kinds, no type classes, and certainly no crazy GHC extensions. OTOH, Elm has much nicer records, though Haskell sets the bar very low.
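For example, Elm's extensible records let a function accept any record that has the fields it needs, with no type-class machinery (a small sketch, names made up):

greet : { r | name : String } -> String
greet someone =
    "Hello, " ++ someone.name

This typechecks against any record with a `name : String` field, whatever other fields it carries.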
Elm has two things in common with JavaScript: strict evaluation and being designed for client-side Web development. In every other respect, Elm is closer to Haskell and diametrically opposite to JavaScript:
(0) statically and strongly typed, with type inference
(1) algebraic data types with pattern matching and exhaustiveness checking (see the sketch below)
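To make (1) concrete, a minimal sketch (names made up): remove any branch from the case expression and the compiler rejects the program, telling you which pattern is missing.

type Visibility
    = All
    | Active
    | Completed

label : Visibility -> String
label visibility =
    case visibility of
        All ->
            "Show all"
        Active ->
            "Show active"
        Completed ->
            "Show completed"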
That's good to know. I wasn't sure if it was just a thin layer on top of JavaScript, or if it was stricter about this. Thanks!
So the way it interacts with the DOM seems to be similar to what I've read about React, in that it always maps the model to a view rather than messing around with existing DOM elements. That seems like something that is easy to understand and to reason about.
The downside seems to be that it comes with quite a bit of JS weight, slightly larger than jQuery, if I understand it correctly.
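For what it's worth, the view in Elm really is just a function from the model to a description of the DOM; a minimal sketch (Msg and the model's fields are made up):

import Html exposing (Html, div, text)

view : Model -> Html Msg
view model =
    div [] [ text model.title ]

The runtime then diffs the virtual DOM this produces against the previous one, which is presumably where the extra JS weight comes from.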
PureScript is a fully static, strict functional language with a type system that is more advanced than Haskell's in that it lets you model all JS side effects. It is based on the ideas behind the type system in Koka [1]. That in turn makes it easier to integrate PureScript with JavaScript than Elm, but the language itself is anything but a thin layer over JavaScript.
This is not snark. Why Koka when ML and Haskell exist? What does it bring to the table that is new? What Haskell calls monads Koka calls effects? But Haskell has lazy evaluation, right -- and Koka is strict (like ML) and tracks effects (like Haskell). Does Koka complete this reductive functional programming matrix?
       | Monads  | No monads†
-------+---------+-----------
Strict | Koka    | ML
Lazy   | Haskell | Miranda
In a lazy language like Haskell, you often define pure functions whose result is an effectful monadic action. In a strict language, it makes more sense to qualify functions themselves as being pure or effectful. So you end up working with Kleisli arrows, which are still related to monads.
Also, the way Haskell handles composing effects (monad transformers), while precise, sometimes leaves a lot to be desired in terms of usability. That's why other approaches to managing effectful computations are being explored, like algebraic effects.
Koka tags effects using something like row types. This may very well be easier to use. There's no reason Haskell couldn't do this if it had row types, but it doesn't, so it doesn't.
My google-fu is not turning up any definition of `row type'. Are we talking record types here? Probably not because doesn't Haskell have a record type (much maligned though it is).
A row is, roughly, a sequence of labelled entries at the type level. Using made-up syntax: if a variant type is built from a row of cases, say `type ParseResult = MkVariant Cases`, the pieces have these kinds:
MkVariant :: Row Type -> Type
Cases :: Row Type
ParseResult :: Type
Rows may contain things other than proper types, for example:
row Effects = [foo: State s, bar: Except e]
type Action = MkFunction Effects
Which have the kinds:
-- function types now take three arguments:
-- - row of effects
-- - argument type
-- - return type
MkFunction :: Row Effect -> Type -> Type -> Type
Effects :: Row Effect
Action :: Type -> Type -> Type
row = sequence
record = named conjunctive sequence
variant = named disjunctive sequence
:)
Can you explain what you mean when you say "proper" types? (So I can get what you mean from there on out.) Are all your examples using regular standard Haskell syntax? (I get that you are making up type (and kind) names for illustration purposes). Thanks!
I got a little bit creative with syntax. Haskell doesn't have rows at all, and its kind of proper types is called “*”, rather than “Type”. Other than that, the syntax is pretty similar.
Yes, "record type" is probably a better way of describing it. I think "row type" might be used when using record types in the context of database tables.
When I was gathering information about typed languages that compile to JS, I found Elm rather nice, but calling JS libs was rather cumbersome. Somewhere I read that PureScript has a better integration with JS, so I thought it was a "thinner" layer :D
PureScript doesn't have a runtime, unlike Elm, so in that sense it may be a thinner layer. However, the language itself is more advanced than Elm, allowing for more powerful abstractions but also having the side effect (haha) of the same steep learning curve as Haskell.
Also, PureScript's FFI is really easy to use (unlike Elm's); afaik this was one of its original design goals.
One interesting application of effect types à la Koka/PureScript is that they can model type-safe private exceptions that can only be caught by code that has full access to the implementation. In Haskell, exceptions are unsafe and can trivially be used to subvert the type system.
"At first glance, Elm's type system looks like it borrows heavily from Haskell"
I stumbled on Elm while looking for a good language (Haskell) in which to build a new language. Try compiling Elm. Hint: it's written in Haskell, and if you squint, the type system is a simpler derivation of Haskell's.
I feel jealous instead. Being a bit old myself, I learned static typing from C and Pascal. It somewhat naturally led me to the conclusion that static type systems are these primitive, repetitive annotations which guarantee almost nothing while cluttering the code. It took me many years - and learning many languages - to recover from this perception.
Probably because, even in 2016, most programmers have never heard of the other possible choices?
I started with C instead of SML, which was available then, I think, because everyone I knew coded in C and they all said it's ok and a good language to learn.
The same is probably true today: you first need to know that a language exists before trying to learn it, and beginner programmers are going to only know about languages their peers (other beginners, probably) use.
As if "strong" and "weak" weren't overloaded enough when it comes to type systems, many people will use "stronger" and "weaker" when talking about the relative amount of guarantees a static type system can give you. "C has a fairly weak static type system, and Haskell has a very strong static type system."
Since the "strong" vs "weak" axis of type systems is already not extremely useful, as very few languages are truly weakly typed, it's usually fine.
Okay, then let me define a new axis: useful vs. useless type systems. A type system is useful to the extent that its types rule out implementation errors and/or miscommunication between the implementer and the user of an abstraction:
(0) Algebraic data types, pattern matching and exhaustiveness checking rule out forgetting to handle any of the qualitatively different possible outcomes of an operation.
(1) Statically typed effects rule out sneaking effects into computations without making them knowable to the user.
(2) Substructural type systems rule out resource leaks and uses after free.
(3) Rust's borrow checker rules out concurrent modifications of the same object in memory.
Sigh. You're right. There's also the matter of how you use the language. One can make C a lot stronger by using structs as ADTs. Just have the struct include only one field (for example, a minutes struct that contains only one int). That gives you a lot of compile-time guarantees (you can't add minutes and seconds without using a function designed to do so).
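The same trick is idiomatic in Elm via single-constructor union types (a sketch, names made up):

type Minutes
    = Minutes Int

type Seconds
    = Seconds Int

addMinutes : Minutes -> Minutes -> Minutes
addMinutes (Minutes a) (Minutes b) =
    Minutes (a + b)

`addMinutes (Minutes 1) (Seconds 2)` is then a compile-time error.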
> A big gotcha for me was understanding the -> syntax. How can a function that accepts two arguments possibly have a type annotation like this?
connectWords : String -> String -> String
This is one of the most maddening things for me about Haskell-like languages. I can never remember if -> is left or right associative. I mean there is only one way that makes sense:
String -> (String -> String)
But it could also be
(String -> String) -> String
Of course you get used to it after a while, but a nagging feeling remains. I would really prefer a bit of syntactic sugar.
If you consider the case of currying and the fact that Haskell functions can be partially applied by just not providing all of the parameters, it becomes clear that only the first version you wrote is correct. It is a single-parameter function with a return type of another single-parameter function.
I think when these type annotations are used, at least traditionally in e.g. Haskell, the functions are curried by default. In other words, all functions only take one argument. If all functions only take one argument, then it has to be the first interpretation, not the second.
Thanks to ncd for clarifying, here are my own comments too. When you write
f : Int -> Int -> Int
f a b = a + b
the compiler (at least in Haskell) creates f as a function that takes a single argument, a, and outputs a function. That outputted function takes a single argument, b, and outputs an Int.
When you apply the function f:
f 5 7
what's really happening here is that f is first applied to 5. This creates a new function, call it g, taking in a single integer and outputting that integer plus 5. Then, this new function is applied to 7. (The compiler may optimize a lot of this away, but that is how you should think of what's happening in my opinion.)
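To make the desugaring explicit, here is the same function written as nested one-argument lambdas (Elm-style syntax):

add : Int -> Int -> Int
add a b =
    a + b

-- exactly equivalent:
add2 : Int -> (Int -> Int)
add2 =
    \a -> \b -> a + b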
I guess this comes back full circle to the original point: function application is left-associative (while the -> in type annotations is right-associative), so by default the application above is interpreted as (f 5) 7.
It's a function that takes a String and returns a new function which takes a String and returns a String. All functions in Haskell/Elm are arity 1.
So in order to construct functions that accept more than one argument, you actually return successive functions that consume successive arguments, a technique known as currying.
This is really important when looking at it from "the outside" (or when one is too lazy to look up the documentation ;-). Without knowing that, it could be a function that takes a string and compiles it to a function that takes two strings (i.e., a macro, I guess). Or one that takes one string and returns a string. Or...
Does this arity-1 convention have performance implications for Haskell? Does the compiler lower (or is it raise? inline?) the functions to produce efficient machine code?
In this case, you think of `connectWords` as a function that takes two arguments. But since it is curried, you can also do this:
let
    prefix = connectWords "Hello "
    world = prefix "world"
    bob = prefix "bob"
in
    ...
`world` is "Hello world", and `bob` is "Hello bob". That is the power of currying. A perhaps more useful example is specifying the mapping function in `List.map` without supplying the list to map over. This allows you to use the same mapping with multiple lists.
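Something like this (a small sketch):

double : List Int -> List Int
double =
    List.map (\n -> n * 2)

-- double [ 1, 2, 3 ] == [ 2, 4, 6 ]
-- double [ 10, 20 ] == [ 20, 40 ]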
Having currying by default allows you to create very readable code using function composition, which is the primary way of transforming data through multiple steps in FP.
For example, compare a typical imperative approach, where each function is applied in turn and intermediate results are held in temporary variables, with a haskell-style JS that combines currying and function composition (`.` composes functions in Haskell):
// price :: Product -> Int
const price = p =>
    p === 'book' ? 10 : 20

// shipping :: Product -> (Int -> Int)
const shipping = p => x =>
    p === 'book' ? x + 10 : x + 25

// tax :: Int -> Int
const tax = x => x + (x * 0.13)

// compose plays the role of Haskell's (.)
const compose = (g, f) => x => g(f(x))

// total :: Product -> Int
const total = p => compose(tax, shipping(p))(price(p))
Alternatively, you can easily create object-specific total functions.
Note how in the FP `total` version the data is not held in temporary variables but passed directly to the next function, which could be a performance gain.
Another benefit is that it's easier to work with curried functions when using map/reduce and list comprehensions, two other fundamental building blocks of FP programs. For example, you can pass the curried `shipping` function directly to map, whereas an uncurried `addShipping` would require wrapping in an anonymous function (since it takes two arguments).
The performance question is largely a question of the implementation and compiler optimizations. But considering we're working in a browser environment, the bottleneck is going to be DOM interaction, not passing curried functions around.
Additionally, in FP you are much more likely to create functions with single arguments rather than multiple, so that composition works cleanly.
They work in the live editor on elm-lang.org, but not locally. Maybe my environment isn't set up correctly?
I tried the first one (the counter). Copy/pasted directly into a dir as 'counter.elm'. Initialized the dir with 'elm package install' to get the core library.
$ elm make counter.elm
I cannot find module 'Html'.
Module 'Main' is trying to import it.
Potential problems could be:
* Misspelled the module name
* Need to add a source directory or new dependency to elm-package.json
Edit/update: source-directories is ["."] in elm-package.json. Dependency "elm-lang/core": "4.0.1 <= v < 5.0.0" is in elm-package.json. elm-version is "0.17.0 <= v < 0.18.0". elm-make is elm-make 0.17 (Elm Platform 0.17.0).
macbook:elm-learning me$ which elm
/usr/local/bin/elm
Edit2: so, I then guessed that Html wasn't a part of the core lib, and tried to install evancz/html, but apparently that doesn't work with 0.17 (I get an error about version constraints). I'm guessing the elm-lang.org editor doesn't use 0.17?
Got the same issue initially. The guide does not document the additional packages that need to be installed. You need to install the elm-lang/html package.
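If I remember the 0.17 tooling correctly, that's something like

$ elm package install elm-lang/html

after which the `import Html` line should resolve.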