Oh god no, this draft is actually pretty good, don't ruin it by infecting it with context. Context is the worst library in Go: it's a pile of hacks to get cancellation and goroutine-local storage by manually passing new heap-allocated objects through every function interface in every library, from low-level IO up to task management. It infects everything, obliterating separation of concerns and making even libraries that should have nothing at all to do with timeouts include code for it. And it includes an untyped key-value store implemented as a linked list of pairs (!!!!), because why not?
If you can't tell, I don't like context. I've said before [0] I really hope that Go 2 comes up with an actual solution to the cancellation and task-local storage problems and deprecates context. Some comments in that thread pointed to alternatives that looked pretty decent, I wonder what state they're in these days.
I quite like the concept of contexts, I just wish they were implicit rather than explicit. The idea of a function taking place in a dynamic context makes perfect sense (because, after all, it does); the only thing which doesn't really make sense is having to pass it along by hand. Imagine how terrible it would be if we had to explicitly pass the return stack everywhere[0]!
It needs to 'infect' (as you put it) everything because it needs to pervade everything, so a library can pass it through to anything it calls. This is a good argument for making it implicit.
> And it includes an untyped key-value store implemented as a linked-list of pairs (!!!!), because why not?
That is just a cons chain, famously the foundation of the Lisp programming language, and there is nothing wrong with it. Note that it is unfair to say that it is untyped: the values themselves are strongly typed, and the cells too are typed — interface{} (or T in Lisp terms) is still a type!
A cons chain has advantages for inheritance of values in a DAG of calls. It is not necessarily the most efficient, but simplicity is a virtue.
0: Continuation-passing style is both really powerful and well-nigh unreadable, for a reason.
I love contexts. They're a simple solution to a complicated problem. They take up space in function signatures but they're sooo easy to read and to use.
I'm familiar with go team's compatibility plan for "Go 2", or whatever it will be called, which is why I used "deprecate" not "delete". Though I agree that there's no requirement to have a new major version in order to introduce big fixes like this.
Everything old is new again. This challenge is basically why monads exist: a monad lets you cleanly separate the state in which the algorithm is running from the algorithm itself. Reminds me of: https://philipnilsson.github.io/Badness10k/posts/2017-05-07-...
I wonder what poor abstraction the Go authors will come up with instead?
Every "monadic solution" prints the same code block without explaining how it would work, the types of the various variables, the semantics of the <- operator. I didn't leave the page with an understanding of how monads achieve these tasks.
Monads are just monoids in the category of endofunctors! /s
The issue is pretty much a language barrier. All these articles talking about benefits of / interesting ways to use monads are written by people who speak the language, assuming the reader speaks the language as well. As with many functional programming topics, the fundamentals aren't incredibly easy to wrap your head around. But if you already understood monads, can you imagine how annoying it would be if every resource, discussion, article, etc. relating to them started with a pages-long introduction on What is a Monad?
In Haskell, this is a fundamental topic. Monads are used everywhere. If people always explained how they worked when discussing them, it would be like looking up sorting algorithms and having every algorithm description start with a long-winded explanation of how for loops work.
> But if you already understood monads, can you imagine how annoying it would be if every resource, discussion, article, etc. relating to them started with a pages-long introduction on What is a Monad?
I agree that it would be unreasonable for every article that uses monads to describe what they are. But I don't think it would be unreasonable for every one of them to link to another article that does explain them.
Would you say the same about for loops? Depending on your audience, it's totally reasonable to expect they know certain things. Further, figuring out what to recommend isn't always easy, and I'd usually rather authors put their efforts into presenting the material that they have to share.
I just skimmed but IIUC, the article is a little more of an in-joke than an explanation and does seem to expect its audience to already understand (or maybe be intrigued enough to learn more from other sources?).
That said, I can try to explain here:
Haskell has a bit of syntax available called "do notation". You can write Haskell without it, but it makes some things read better (as a matter of common but not universal opinion).
There's a simple, purely syntactic translation from do notation into regular application of functions. "Syntactic sugar causes cancer of the semicolon." There are four rules, none of which is complicated, and only two of which are relevant here:
First, a single expression is just that expression, nothing magic happens.
do
expr
simply becomes
expr
Next, the arrows:
do
x <- m
... more stuff, which might use x ...
becomes
bind m (\ x -> do
... more stuff, which might use x ...
)
or in a few other syntaxes:
bind(m, x => do ... more stuff, which might use x ...)
(bind m (lambda (x) do ... more stuff, which might use x ...))
m.bind(|x| { do ... more stuff, which might use x ... })
bind(m, [] (auto x) { do ... more stuff, which might use x ... })
That internal do is then expanded recursively.
So to translate the whole block that's repeated throughout the article:
do
a <- getData
b <- getMoreData a
c <- getMoreData b
d <- getEvenMoreData a c
print d
becomes
bind getData (\ a -> do
b <- getMoreData a
c <- getMoreData b
d <- getEvenMoreData a c
print d)
which becomes
bind getData (\ a ->
bind (getMoreData a) (\ b -> do
c <- getMoreData b
d <- getEvenMoreData a c
print d))
which then becomes:
bind getData (\ a ->
bind (getMoreData a) (\ b ->
bind (getMoreData b) (\ c -> do
d <- getEvenMoreData a c
print d)))
and then:
bind getData (\ a ->
bind (getMoreData a) (\ b ->
bind (getMoreData b) (\ c ->
bind (getEvenMoreData a c) (\ d -> do
print d))))
and finally
bind getData (\ a ->
bind (getMoreData a) (\ b ->
bind (getMoreData b) (\ c ->
bind (getEvenMoreData a c) (\ d ->
print d))))
Which is "just" a bunch of chained functions combining lambdas.
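To see that final form actually run, here's a sketch with hypothetical getData/getMoreData/getEvenMoreData stubs in the Maybe monad. Real Haskell spells bind as the (>>=) operator, so that's what this uses:

```haskell
-- Hypothetical stubs so the desugared chain can run.
getData :: Maybe Int
getData = Just 1

getMoreData :: Int -> Maybe Int
getMoreData n = Just (n + 1)

getEvenMoreData :: Int -> Int -> Maybe Int
getEvenMoreData a c = Just (a + c)

main :: IO ()
main =
  -- The chain from above; print is an IO action, so the final Maybe
  -- result is inspected with a case instead.
  case getData >>= (\ a ->
       getMoreData a >>= (\ b ->
       getMoreData b >>= (\ c ->
       getEvenMoreData a c))) of
    Just d  -> print d            -- prints 4
    Nothing -> putStrLn "some step failed"
```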
So... why bother? and how does it do so many different things? and how does it know which to do? and what does it even do?
The key is that we're overloading "bind", picking the behavior that we want. You could do this in most languages by passing in a choice of function. We could have "do" take a parameter, like
do(bind)
or in an OO language you might hang bind on the objects involved. We often see this for particular instances - for instance, .then for promises/futures.
Haskell does it with a mechanism called "type classes", where you can specify an implementation of an interface for a given type, and the compiler will figure out which implementation to provide. This is very similar to Traits in rust, implicits in Scala, etc. You can usually avoid specifying the types manually because of inference.
So in Haskell, Monad is an interface providing two functions:
class Monad m where
bind :: m a -> (a -> m b) -> m b
pure :: a -> m a
We've already come across `bind`, which lets us operate "inside" a context in a way that combines the contexts.
The other function included, `pure`, takes a value and gives it the minimal possible context. What that means is implied by the "monad laws", which tell us how `bind` and `pure` must interact.
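For reference, here are those monad laws, written as equations (not runnable code) using the bind/pure names from above:

```haskell
-- Left identity:   binding a pure value just applies the function
--     bind (pure a) f    ==  f a
-- Right identity:  binding pure back in changes nothing
--     bind m pure        ==  m
-- Associativity:   the nesting of binds doesn't matter
--     bind (bind m f) g  ==  bind m (\ x -> bind (f x) g)
```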
For each type that "is a monad", we tell the Haskell compiler how that type implements the interface:
-- optionality:
instance Monad Maybe where
bind Nothing _ = Nothing
bind (Just x) f = f x
pure x = Just x
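As a runnable sanity check, here's the same thing as a standalone bindMaybe (real Haskell spells bind as (>>=), and a real instance needs Functor/Applicative boilerplate too), plus a hypothetical step that can fail:

```haskell
-- Standalone version of the Maybe bind sketched above.
bindMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
bindMaybe Nothing  _ = Nothing
bindMaybe (Just x) f = f x

-- A hypothetical fallible step: halving an odd number has no answer.
half :: Int -> Maybe Int
half n = if even n then Just (n `div` 2) else Nothing

main :: IO ()
main = do
  -- Chained binds short-circuit at the first Nothing:
  print (Just 8 `bindMaybe` half `bindMaybe` half)  -- Just 2
  print (Just 6 `bindMaybe` half `bindMaybe` half)  -- Nothing (3 is odd)
```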
-- state:
newtype State s a = State (s -> (a, s))
instance Monad (State s) where
-- pure gives the State action that produces x and leaves the state unchanged
pure x = State (\ s -> (x, s))
-- bind threads the state through the chained actions
bind (State f0) f1 = State (\ s ->
let (x, s') = f0 s
State f2 = f1 x
in f2 s')
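Here's that machinery as standalone functions so it can actually run without the Functor/Applicative boilerplate a real instance needs; tick is a hypothetical action that returns the current counter and increments it:

```haskell
newtype State s a = State (s -> (a, s))

runState :: State s a -> s -> (a, s)
runState (State f) = f

-- pure: produce x and leave the state unchanged
pureS :: a -> State s a
pureS x = State (\ s -> (x, s))

-- bind: thread the state through the chained actions
bindS :: State s a -> (a -> State s b) -> State s b
bindS (State f0) f1 = State (\ s ->
  let (x, s') = f0 s
      State f2 = f1 x
  in f2 s')

-- Hypothetical example action: return the counter, then bump it.
tick :: State Int Int
tick = State (\ n -> (n, n + 1))

main :: IO ()
main = do
  -- Two ticks threaded through bindS, starting from 0:
  let prog = tick `bindS` \ a -> tick `bindS` \ b -> pureS (a, b)
  print (runState prog 0)  -- ((0,1),2)
```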
-- lists:
instance Monad [] where
pure x = [x]
-- bind for lists is concatMap
bind [] _ = []
bind (x:xs) f = f x ++ bind xs f
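A quick runnable check that this really is concatMap (again with a standalone bindL, since the real class uses (>>=)):

```haskell
-- Standalone version of the list bind sketched above.
bindL :: [a] -> (a -> [b]) -> [b]
bindL []     _ = []
bindL (x:xs) f = f x ++ bindL xs f

main :: IO ()
main = do
  print ([1, 2, 3] `bindL` \ x -> [x, x * 10])      -- [1,10,2,20,3,30]
  print (concatMap (\ x -> [x, x * 10]) [1, 2, 3])  -- the same list
```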
... that got long. Please ask any questions you're left with :)
newtype Reader r a = Reader { runReader :: r -> a }
instance Monad (Reader r) where
pure x = Reader (\ _ -> x)
m >>= f = Reader (\ r -> runReader (f (runReader m r)) r)
which allows us to define:
getContext :: Reader r r
getContext = Reader (\ r -> r)
and which spares us threading the context manually.
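As a standalone, runnable sketch (skipping the type-class boilerplate), with a hypothetical describeDeadline that reads an ambient deadline instead of taking it as a parameter:

```haskell
newtype Reader r a = Reader { runReader :: r -> a }

-- Standalone bind for Reader; the real instance spells this (>>=).
bindR :: Reader r a -> (a -> Reader r b) -> Reader r b
bindR m f = Reader (\ r -> runReader (f (runReader m r)) r)

getContext :: Reader r r
getContext = Reader (\ r -> r)

-- Hypothetical: read a deadline (in ms) from the ambient context,
-- never passed by hand.
describeDeadline :: Reader Int String
describeDeadline =
  getContext `bindR` \ deadline ->
    Reader (\ _ -> "deadline: " ++ show deadline ++ "ms")

main :: IO ()
main = putStrLn (runReader describeDeadline 500)  -- deadline: 500ms
```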
I don't think that's such a big deal, so long as there's only one thing we're threading.
In some languages, we can be generic over "contexts which provide the thing I want, whether they do other things or not", which is sometimes a much bigger win.
Context is like Schrödinger's cat: you don't know whether the Context you're working in is alive or dead (i.e. cancelled, timed out, etc.), but you have to keep passing it forward, one intermediate network call after the next, as long as its status is uncertain. Only when the request fully returns, or is actively cancelled, do you know whether it is alive or dead.
The Maybe monad deals with the same issue. You have a series of function calls, each of which might or might not be legal, because you never know if you actually have the parameter for the next function call. Maybe you do, maybe you don't.
If you take the naive approach to solving the issue, you pass along Schrödinger's Cat each step along the way. You intertwine the concern of the algorithm you're trying to write with the uncertainty of whether you're carrying a live cat or a dead one. It can work, but it's ugly.
The monadic approach allows you to separate these concerns. You write your algorithm as if you know for a fact that the cat is alive. If the cat were ever to be revealed to be dead along the path, it doesn't matter, the monad separated the consequences of dealing with the live or dead cat from the rest of your algorithm. The rest of your algorithm simply doesn't get run.
Context is the same. You get to write code as if the context is always valid. You don't have to worry about context being cancelled, or timed out, or anything else. You take all those concerns and relate to them separately, in one place, where they can be neatly dealt with.
The whole point of the monadic pattern is to propagate state in such a way that it doesn't interfere with the pure algorithm which you're trying to write. You write the pure algorithm separately, and then use it within the monadic context of Context, so to speak.
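In Haskell terms that separation looks like this: the do block reads as if every step succeeds, and a Nothing anywhere means the rest simply doesn't run. checkAlive and the numbers here are made up for illustration:

```haskell
-- Hypothetical stand-in for "is the context still live?"
checkAlive :: Int -> Maybe Int
checkAlive n = if n > 0 then Just n else Nothing

-- The algorithm is written as if the cat is always alive.
algorithm :: Int -> Maybe Int
algorithm start = do
  a <- checkAlive start
  b <- checkAlive (a - 1)
  pure (a + b)

main :: IO ()
main = do
  print (algorithm 2)  -- Just 3
  print (algorithm 1)  -- Nothing: the second step found a "dead cat"
```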
[0]: https://news.ycombinator.com/item?id=18561884