How it feels to join an all-Haskell startup (wagonhq.com)
89 points by begriffs on June 12, 2015 | 55 comments


How was that snippet an improvement to your coding style? The original was extremely clear, and now someone who reads your code has to understand what (,), <$>, and <*> do. IMO this tendency toward code-golf one-upmanship is a huge deterrent for anyone wanting to learn the language.


I was about to try to talk about the genuine elegance of Applicatives—and I'm more than happy to defend them in general—but then, yeah. In this instance, I'm not sure I'd be able to argue that the change was such a good choice.

    liftIO (liftA2 (,) randomIO getCurrentTime)
Eh, I'm not sure I could defend it. Honestly, I'd probably just do

    bid <- liftIO randomIO
    now <- liftIO getCurrentTime



It's worth noting that there's a small potential performance penalty to this if it's being called all the time and it's being lifted through a large transformer stack.


The change they made was from a monad to an applicative, which is less "hands-on" and thus preferred. It's kind of like preferring map over hand-written loops. (Although the golfing is definitely there in the community; see pointfree/pointless style: https://wiki.haskell.org/Pointfree)
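A minimal sketch of that analogy (`doubleAll` is a made-up example):

    -- map abstracts the recursion pattern away:
    doubleAll :: [Int] -> [Int]
    doubleAll = map (* 2)

    -- versus writing the loop by hand:
    doubleAll' :: [Int] -> [Int]
    doubleAll' []       = []
    doubleAll' (x : xs) = x * 2 : doubleAll' xs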


I prefer comprehensions over either mapping or explicit recursion. Mapping is a win for abstraction but a loss for clarity.
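For example (a toy sketch, given some xs :: [Int]):

    -- comprehension: generator and filter read left to right
    doubled  = [x * 2 | x <- xs, even x]

    -- mapping: the same thing by composing map and filter
    doubled' = map (* 2) (filter even xs)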


This to me is the biggest barrier to reading anyone else's Haskell code. It seems like it's always full of unfamiliar, ungooglable infix operators.


If you are hired at a company exclusively doing Haskell, there is no way you haven't seen <$>, <*>, or (,) before.

In this case, the reason the OP didn't think of writing it with Applicative isn't because he didn't know about them (I assume), but just because he didn't think to use it in that particular case.



thanks


One of our Haskellers once tried to explain the bizarre internal consistency of Haskell's many lens operators. It's basically just trolling in API form:

https://hackage.haskell.org/package/lens-3.8.5/docs/Control-...


It's tragic people feel that way. They're actually very well designed, but if you don't like operator soup then no amount of design will convince you otherwise. Which is fine, and is why the lens package can be imported operator-free.


My problem with the operators is that they screw up function composition. `over`, `set`, `view`, `firstOf`, `toListOf` compose normally, with the data structure on the right. The operators flip this and therefore require the dumb `&`.
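Concretely (a sketch with hypothetical `address` and `street` lenses on a `person`):

    -- prefix functions take the structure last and compose normally:
    view (address . street) person
    set  (address . street) "Elm St" person

    -- the operators take the structure first, hence the need for (&):
    person ^. address . street
    person & address . street .~ "Elm St"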

It's kind of like diagrams with `#`. I don't accept the argument that if it's a lens or a diagram, our meager brains can't comprehend right-to-left composition anymore.

Worst of all is when there isn't a non-infix alternative, effectively forcing you to use the soup. Case in point: `%=`.


The non-infix names are a bit more conservative, yes, but it's easy to define your own of course. And to even argue back about $ and & is pretty bike-sheddy. It's just a choice that was made.


That is one point of having code reviews: to get everyone in a team or an organization to write (mostly) the same type of code.


It's somewhat debatable whether the refactor is an improvement, but there is at least one good argument that it is, and (in my opinion) only a much poorer argument that it makes things less clear.

If the applicative code is better, it's because applicatives are strictly less powerful than monads, in the sense that everything you can do with applicatives you can also do with monads, but not vice-versa.

Many Haskellers prefer the abstraction that is "just powerful enough", or, what's the same thing, they prefer the abstraction that is "least powerful"—that is, Haskellers try to write code in a way that lets them do what they want to do, but at the same time in a way that doesn't allow them to do things they didn't mean to do. And for just that reason: using the least powerful abstraction prevents you from introducing bugs by keeping you from writing some of the things that you didn't intend to write.

Thus the least powerful abstraction is the best abstraction, in the sense that it's the "best fit" (and, yes, also in an aesthetic sense). So if you can write what you want using applicatives rather than monads, you should prefer applicatives, the argument goes. (And if you can write it with functors, prefer functors to applicatives.) For the same reason, many Haskellers prefer to use restrictions of the IO type that limit what you can do rather than IO itself, whenever it's convenient.
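A minimal sketch of the difference, assuming System.Random for randomIO:

    import System.Random (randomIO)

    -- Monad: the second action depends on the first result, so a
    -- reader has to consider both branches.
    monadic :: IO String
    monadic = do
      ok <- randomIO
      if ok then getLine else pure "skipped"

    -- Applicative: the shape is fixed up front; a reader knows
    -- getLine always runs. There is simply less that can happen.
    applicative :: IO (Bool, String)
    applicative = (,) <$> randomIO <*> getLine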

Applicative style, in this case, also keeps you from having to name a couple of things, which means two fewer "hard things" (in Dijkstra's estimation) for the programmer to deal with—and just two fewer things, which means two fewer things to screw up, period.

The counter-argument, that applicatives are somehow more complicated or less clear than monads, is a poor one, I think. Haskell is not a superficial language. It's not one you're meant to be able to read and understand just because you're a competent programmer in some other, unrelated language. If you want to program in Haskell, you need to understand the language's core abstractions—and, yes, applicative style is one of them at this point. That does mean there's more to learn. But Haskell is all about giving programmers the ability to recognize and use the right abstraction; the time spent learning is worth your while.

Further, if you're not a Haskell programmer, I think you should beware of pretending that you understand the monadic "before" code and therefore thinking that it's simpler than the applicative "after" code that you don't pretend to understand. First, because (syntactic subterfuge notwithstanding) you don't really understand the monadic code, unless you've learned the underlying concepts; and second, because it's not simpler. It's just not. There is more going on in the monadic code, not less. Now, if you're a beginner and you understand monads but not applicatives, you have an argument—the monadic code is familiar, and the applicative code isn't—but that just means you have more to learn. Go learn it. We're here to help.


> at this point

Which is a huge problem. Optimizing around the current fashion is generally a mistake.

As for the rest: the argument seems to come down to "there are no junior developers in Haskell, because if you don't understand the deep and dark abstractions you're really hardly better than a barbarian anyway."

This translates to: Haskell is doomed as a production language in the real world beyond a few niche/fetish applications, because "real" Haskell devs are always going to be scarce and expensive.

This was the selling point of Java back in the day: you could hire junior Java devs to do much the same things as senior C++ devs because the language was so much safer. I saw this in action. It was impressive. It was also around the time when the current fashion in Haskell was lazy evaluation, which I've seen modern Haskell fashionistas pronounce "not really so important after all".

Fashions change, and Haskell is a highly fashion-driven language, which means there will be a lot of unmaintainable Haskell code out there in five or ten years.

That's my prediction at least. Let's look at the issue again in a decade and see if I'm right or wrong!


> > at this point

> Which is a huge problem. Optimizing around the current fashion is generally a mistake.

I think this invokes the invention vs. discovery debate. There are many "design patterns" in Haskell, some of which might be regarded as clever inventions (for example conduits and pipes), in which case I agree that there's an element of "fashion" involved, which leads to incompatibilities and churn.

However, Monad and Applicative are clear examples of discoveries. Even if we try to write code without them, the pattern will still be lurking in there somewhere. Applicative is basically a "static" data-flow graph; i.e. the graph is fixed, and data flows through it. Monad is a "dynamic" graph; parts of the graph can be generated from the data. These patterns will still be present even if we use a completely different language.
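A small, self-contained illustration of that distinction, using Maybe:

    env :: [(String, String)]
    env = [("a", "b"), ("b", "42")]

    -- Applicative: a static graph; both lookups are fixed in advance.
    static :: Maybe (String, String)
    static = (,) <$> lookup "a" env <*> lookup "b" env

    -- Monad: a dynamic graph; the second key is computed from the
    -- first result, which Applicative alone cannot express.
    dynamic :: Maybe String
    dynamic = do
      key <- lookup "a" env
      lookup key env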


Fashion? Haskell's core abstractions are not design patterns du jour. They're math. They're not going away.

The fashionistas are the programmers who want to pick up a bit of Haskell to impress their colleagues but don't want to learn the abstractions that make it a language worth learning in the first place. They're the people who read about it on Hacker News and decide it's worth a few hours—enough that they can slap it on their résumé and garner a few more calls from recruiters—but no more. They're the people who insist there's something wrong with your code if a Javascript programmer can't immediately comprehend what's going on, and who can't be bothered to learn how and why to use applicatives.

Despite its minor vogue, Haskell as a language is, by its nature, about as far from fashionable as you can get. And, yes, that's true in part because it takes a lot of time and effort to learn it to a productive level. Haskell has never claimed Java's selling points. There's a lot to learn. But it rewards the effort you put in, eventually.


>Fashion? Haskell's core abstractions are not design patterns du jour. They're math. They're not going away.

That doesn't really make any sense. Just because the abstractions have some kind of Platonic existence doesn't mean that people are going to keep using them.

Significant use of applicatives in day-to-day code is a relatively new development in the Haskell community. I used to write a fair bit of Haskell code around five or six years ago (well beyond basic Haskell 98), but the operators in that code were not familiar to me. (They may have been once, but I clearly didn't come across them often enough to remember them.) It's easy enough to understand the code once you look up the operators, but Haskell does seem to be gradually collecting a lot of abstractive cruft.

There's something to be said for deploying the fancier abstractions when they significantly reduce code size, rather than whenever you possibly can.


Yes, applicative code is a relatively new development in Haskell: applicative functors were discovered only in 2008.


It has nothing to do with fashion. The Haskell language has been around for decades and has grown in different directions based on experience and knowledge earned throughout its lifetime. There are apps and libraries on Hackage which are 5-10 years old and still in wide use among Haskellers and others (Parsec, XMonad, and Pandoc, to name a few).


You have some points which are, to a degree, true. But you present them in such a ridiculously hyperbolic way that it seems divorced from reality.

Applicative is a pretty standard type class which is often used in introductory resources. It is not likely to be replaced by anything any time soon. If you think it is a fashion, then you should have an idea of what it is likely to be replaced by. So, please do tell.

If anything, Applicative might be less controversial than Monad. I haven't really seen many complaints about the downsides of using Applicative.

"Barbarian" - of course any fairly level-headed explanation of Haskell concepts gets regarded as elitist. It's practically a cliché at this point.

Lazy evaluation - this was pretty much the whole point of the language. A fashion? It permeates the language, being the default evaluation strategy after all. But it is controversial whether it is better than strict (eager?) evaluation. If opinions change about this, it might be because someone unearths some way to get more of the benefits of lazy evaluation with fewer of the space leaks. And potential discoveries are kind of the point of research languages like this.
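The canonical example of that trade-off (a sketch; foldl' is from Data.List):

    import Data.List (foldl')

    -- Lazy accumulator: foldl builds a chain of unevaluated thunks
    -- before anything is forced -- the classic space leak.
    sumLeaky :: [Int] -> Int
    sumLeaky = foldl (+) 0

    -- Strict accumulator: foldl' forces the running sum at each
    -- step, so it runs in constant space.
    sumStrict :: [Int] -> Int
    sumStrict = foldl' (+) 0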


(,), <$>, and <*> are very common functions that every Haskeller knows... the revision is understandable if you write Haskell.


This is a common complaint I see about the Haskell community: there is always an attempt at cleverness over clarity.


The use of Applicative is actually an attempt at clarity: it restricts the kinds of things the code can do.

As its name implies, Applicative is a kind of "effectful application". There have been proposals to implement a more natural syntax that makes the connection clearer.
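The analogy, side by side (a sketch; f, x, y, mx, my are placeholders):

    r  = f x y            -- ordinary application
    mr = f <$> mx <*> my  -- "effectful" application: same shape,
                          -- each argument wrapped in a context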


Unfortunately I think it's the language, not the community. It definitely happens with my own code as well; no hints from external developers are required :-) I miss Python in this regard.


Kind of ironic, seeing as Python takes quite a few things from Haskell, like comprehensions and zip.


They're about equivalent IMO, which yes, means golf.

However, I don't see what's so terrible about learning 3 new functions. Sorry it doesn't look like Javascript anymore.


I feel that it's important to have code that is immediately readable and understandable, without requiring much context. Smalltalkers had it right:

> If a system is to serve the creative spirit, it must be entirely comprehensible to a single individual.

Imagine you're reading a musical score. Suddenly there's an unfamiliar symbol. Oh right, you need to turn to page 5 to figure out how it's defined. Oh no, it's defined in terms of more symbols. There goes your flow.

The tools that enable the most creativity are the tools that you can hold in your head completely, while you focus on the task at hand. C is a tool like that, regardless of its other drawbacks. Unfortunately, Haskell isn't quite like that, due to its culture of extracting every little thing and giving it a unique name. Now you need to memorize all these names, instead of e.g. understanding the concept of a "for loop" once and applying it forever.


Sheet music is positively packed with unfamiliar, unpronounceable, ungooglable squiggles.

Imagine you're reading a musical score with no fancy symbols, just a waveform of the sound you should produce. What could be simpler, you say?


I think this is a bad comparison. If you're, say, reading music in order to play it, why would you need things like functions and renamings in order to read it? Just read it straight off - one bar at a time. There isn't any need for some kind of "pattern" or "abstraction" that compresses 20 bars into "this one thing, more or less". And you would have to read all of it to play the piece faithfully anyway. Maybe there is some utility in being able to say "I don't care about the nuances; just show me the general structure right now". I don't know, I'm not a composer.

On the other hand, it is of course tremendously useful in programming to look at some lines of code, be able to say "Oh, so this code does this and that", and then move on. If you want more details, you dive into that section. You soar over the code to get an overview and dive down for specifics when you need to. A music piece can be read (and played) from beginning to end, but that is less useful in programming. Maybe for a late evening with a bottle of wine when you want to appreciate the beauty of a code base that you really like, I guess.


And imagine trying to compose a 45-minute sonata, laboriously penciling one note at a time, instead of an overlay of broad strokes -- melody lines, crescendos, codas, etc.


I don't think the issue is learning 3 new functions. The author appears to be an experienced Haskeller, so I'm sure they're aware of applicative functors. I think the issue is whether the usage of applicative functors was necessary or just for the sake of being clever for no added benefit.


Of the three, I'd contend <$> is the most reasonable, since it's very similar to $ (which is commonly used) but applied to a boxed argument (a Functor).


On the other hand, <*> is closer in spirit to what $ is.
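Side by side, with Maybe as the functor (a sketch):

    succ $    3           -- 4
    succ <$>  Just 3      -- Just 4: plain function, boxed argument
    Just succ <*> Just 3  -- Just 4: boxed function, boxed argument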


I agree. I could understand the first snippet; the second, not so much. I'm appreciating Javascript more and more because stuff like that doesn't happen there.


That refactor is of a common pattern (I've seen it before; I've probably written it myself) into a bog-standard use of Applicative. This use is pretty much the introductory example for Applicative. Do you use Haskell? Because Applicative is fairly well-known, the operators aren't considered esoteric, and they are not something you are likely to have to learn on the spot when you encounter them. If you don't use Haskell, on the other hand, I don't see how you are in a position to critique the readability of a refactor.

The philosophy at play here is probably not code golf as much as it is about the principle of least power[1]. Applicative is strictly less powerful than Monad. So when reading Applicative code, there is less stuff to look out for -- it is great to be able to know what an expression can't do, when reading it.

Maybe the do-notation makes it look prettier, but I don't know if it makes it more readable. Maybe it superficially looks more like imperative code, which makes one think, "Oh, I get this." But that might be a false sense of safety. In any case, I guess Applicative do-notation can be used (at some point).

[1] http://en.wikipedia.org/wiki/Rule_of_least_power


As someone who also recently joined an all-Haskell team, but hadn't coded professionally in Haskell before, I'll add the biggest thing I've noticed to the "how it feels" list:

On a biweekly basis I'm shown something new causing my brain to melt, leak out of my ear & pool onto the floor near my desk. It's awesome[0]!

Today's was impredicative types, plus the special case `$` has for dealing with them[1].

[0]: ...some number of hours later when I've collected my thoughts and think "OHHH... duh"

[1]: https://ghc.haskell.org/trac/ghc/wiki/ImpredicativePolymorph...
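The classic instance of that special case (a sketch using Control.Monad.ST):

    import Control.Monad.ST (runST)
    import Data.STRef (newSTRef, readSTRef)

    -- runST's argument has a higher-rank type (forall s. ST s a), so a
    -- naive instantiation of ($) :: (a -> b) -> a -> b can't accept it;
    -- GHC special-cases `f $ x` so this still type-checks.
    counter :: Int
    counter = runST $ do
      ref <- newSTRef (41 :: Int)
      n   <- readSTRef ref
      pure (n + 1)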


Habitually rewriting things in applicative and/or pointfree is a sure sign of a junior Haskell dev. One who has never had to maintain someone else's code.

There are cases where it helps (e.g. structured parsing), but all too often this is simply obfuscation. Even if the motivation is good -- learning new structures -- you still have to weigh the cost/benefit of each change in structure.

Keep it simple, folks.


I'm generally in favour of introducing applicatives, but I don't think the code in the article is a good example:

    (bid, now) <- liftIO $ do
        b <- randomIO
        n <- getCurrentTime
        return (b, n)

    (bid, now) <- liftIO $ (,)
        <$> randomIO
        <*> getCurrentTime
The applicative code could be considered better, since it avoids one-shot variables. However, that's not the biggest smell with this code. To me, the problems are:

- Constructing then immediately destructuring a pair; is it really necessary?

- Use of `$`; is there a way to avoid having to alter the precedence?

- Combining two seemingly unrelated actions into one; `randomIO` should be completely uncorrelated to the current time, so why put them in the same action?

- Potential laziness issues; it looks like we're using the `(bid, now)` combination to ensure the two IO actions are executed together, but will they be? WHNF will give us `(_ , _)`; if we force `bid`, will `now` also be forced?

Not saying I have answers to these, but I would say those are more "smelly" than the do-notation with a return.


Code review 4 lines: hundreds of comments

Code review 1000 lines: "Looks fine"


Well, my main comment is that the most pungent smells in this code depend on its context and what it's trying to achieve. If I encountered it, I would immediately look at the surrounding context and usage to see if it's a reasonable approach to the problem.

Without that context, all that can be done are superficial changes like monad/applicative; which in this case are very minor. In other words, the original code wasn't a bad approach to the problem; but is it solving the right problem?


How would you write it? Creating a pair does seem weird, unless those two values are going to be passed around through multiple method calls together for use in multiple locations.


> How would you write it?

As I said, I don't know; I'd have to see what the context is. As I've written in another comment, these few lines don't seem worth "fixing"; they're not bad. Yet they might be part of some larger arrangement that's overly complex, redundant, etc.

> Creating a pair does seem weird, unless those two values are going to be passed around through multiple method calls together for use in multiple locations.

Yet the pair is destructured immediately into two variables `bid` and `now`; if we want to pass a pair around, we'd need to create a new one using `(bid, now)`, just as if they were created separately.

If we need to use the pair, we should keep it; eg. `bidNow = liftIO $ ...`.


Could someone explain 'applicative'? To me it seems like throwing some magic syntax at a simple usecase to make it more terse and less readable - but maybe I misunderstand and it's something every Haskell programmer knows.


There's no actual magic syntax, just some ordinary operators.

    (<$>) is infix "fmap"
    (<*>) is infix "ap"
The "fmap" function lets you apply something of the form (a -> b) to a value of the form (m a), to get something of the form (m b). Specialized to lists, it's the familiar "map" function, but there are a lot of other things it can apply to.

The "ap" function lets you apply a "wrapped" function to a "wrapped" value - "apply this list of functions to that list of values".

The way these combine, along with currying, means you get a well-known pattern for applying a many-argument function to many wrapped values:

    <$> before the first argument
    <*> before every remaining argument
It works out like this:

    let add3 :: Integer -> Integer -> Integer -> Integer
        add3 x y z = x + y + z

    (add3 5) :: Integer -> Integer -> Integer
    (add3 <$> Just 5) :: Maybe (Integer -> Integer -> Integer)
    (add3 <$> Just 5 <*> Just 3) :: Maybe (Integer -> Integer)
    (add3 <$> Just 5 <*> Just 3 <*> Just 9) :: Maybe Integer
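    -- evaluating the last line: Just (5 + 3 + 9) = Just 17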


An Applicative functor is a type constructor which lets you lift functions of arbitrary numbers of arguments to functions whose arguments are wrapped with that type constructor.

Concretely, if you think about promises in JavaScript, for example, if you have a function A -> B -> C -> D, say, and you have three promises of types Promise A, Promise B and Promise C, you can construct a Promise of type Promise D by running the three in parallel, and applying your three-argument function when they are all done. So you've turned a function of type A -> B -> C -> D into one of type Promise A -> Promise B -> Promise C -> Promise D. If you can do that sensibly for any number of function arguments, you have an Applicative. ("sensibly" here means that there are type class laws which have to hold)

Every Monad is also an Applicative functor, since you could use do notation to compose your promises instead, but there are other interesting Applicatives which do not come from Monads.
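ZipList is a concrete example: a lawful Applicative (it pairs elements positionwise) with no corresponding Monad instance:

    import Control.Applicative (ZipList(..))

    zipped :: [Int]
    zipped = getZipList ((+) <$> ZipList [1, 2, 3] <*> ZipList [10, 20, 30])
    -- [11, 22, 33]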


In most cases (like the example), it simply allows you to call "effectful" code (often monadic, IO, etc.) to populate the arguments of a function.

My favorite example is "I want to grab three web pages concurrently and return them in a single triple." To the haters who think this is just golf, just TRY to write this as beautifully in any other language.

  (page1, page2, page3)
     <- runConcurrently $ (,,)
     <$> Concurrently (getURL "url1")
     <*> Concurrently (getURL "url2")
     <*> Concurrently (getURL "url3")
(taken from the excellent haddocks for Control.Concurrent.Async https://hackage.haskell.org/package/async-2.0.1.4/docs/Contr...)


Alice ML:

    val (page1, page2, page3) = (spawn getURL "url1",
                                 spawn getURL "url2",
                                 spawn getURL "url3")
The requests are concurrent. Spawn returns a future, so 'page1', etc. are futures immediately. When the value of 'page1', etc. is requested later, the requesting process either blocks until the corresponding getURL is complete, or the future transforms into the value implicitly if it's already done.

I think it's just as nice as the Haskell example. I do agree that Haskell is a great language though.


I don't know Alice ML, but one advantage of using abstractions like Applicative is that you can write code which is polymorphic in the particular applicative you choose. So you can mock your concurrent requests using the Identity or ZipList applicative, and use the type class laws to prove things which are true about both. I'm guessing, but it looks like the spawn syntax is built in here. Not that it's not as elegant as the Concurrently version, but it's likely not as powerful from an abstraction point of view.


Correct, spawn is built into Alice ML. Haskell wins on all the ways it can be used beyond built-in functionality.


Applicatives are definitely something that every Haskell programmer knows. I would expect everyone on the team to understand the recommended new version, but it's less clear whether it is actually an improvement. There is a proposal to allow applicatives to use the same syntax as monads (the original code, before the change), which would render the question moot: https://ghc.haskell.org/trac/ghc/wiki/ApplicativeDo
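A sketch of what that proposal would allow -- with the extension on, a do block whose bindings are independent desugars to the applicative form automatically:

    {-# LANGUAGE ApplicativeDo #-}
    import System.Random (randomIO)
    import Data.Time (UTCTime, getCurrentTime)

    -- neither binding depends on the other, so this desugars to
    -- (,) <$> randomIO <*> getCurrentTime
    pair :: IO (Bool, UTCTime)
    pair = do
      b <- randomIO
      n <- getCurrentTime
      pure (b, n)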


Love it or hate it, those two screenshots are the life of a Haskeller:

1. ever-more compact abstractions with awkward names

2. chasing down space leaks with janky gnuplot'd reports.


As much as I would love to use Haskell professionally, I always wonder whether teams/projects using Haskell don't also struggle with all the sociology involved in developing software...

Sure, being able to use Applicative reduces a lot of code duplication, brings better clarity and all... but what if the product owner is still a jerk without a vision?

In all the teams I have been on so far, I can honestly say that the _language_ itself was never _the_ problem. It was always some combination of communication, skill, or product vision.

The only thing I can imagine is that by choosing Haskell you tend to get better-skilled developers - so only communication and product vision remain as things that could ruin your project/product.

Any thoughts?



