There Will Be Monads

My sabbatical is coming to an end, and I just finished off my “modern programming languages” course at the Ho Chi Minh City University of Technology with a dose of monads. The good news is that I am not going to tell you how monads are just like Vietnamese spring rolls. Just think of them as a design pattern for composing actions. 

We covered metaprogramming (with Ruby and Rails), continuations (with Racket and Scala), and concurrency (with Scala and Clojure), before turning to Haskell to see type inference and type classes in action. Here are the hardy souls who stuck it out until the end. (All but one, that is; one is behind the camera.)

What's the fun of teaching if you don't also get to learn something new? It was about time for me to understand the fuss about monads. The blogosphere is awash in articles proclaiming that monads are elephants, monads are burritos, monads are space suits, and so on. Naturally, these analogies left me unenlightened.

Along the way, I made the mistake of looking for tutorials that tried to explain monads using Scala (such as this one). I should have known better. A few years ago, I was baffled by blog posts that tried to teach continuations with C++ or Java, with a lot of pointless machinery and little to show for it. Finally, I took a deep breath and figured out how continuations work in Scheme. It is always best to learn these things in their natural habitat.

OK, time to figure out monads in Haskell. This article is very good, but I kept scratching my head at the monad laws:

    return x >>= f ≡ f x
    c >>= return ≡ c
    c >>= (λx . f x >>= g) ≡ (c >>= f) >>= g
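To see the laws in a concrete case, here is what such a check looks like for the Maybe monad (my own example, with a made-up helper half):

    half :: Int -> Maybe Int
    half x = if even x then Just (x `div` 2) else Nothing

    -- Left identity: return x >>= f ≡ f x
    -- ghci> (return 4 >>= half) == half 4
    -- True
    -- Right identity: c >>= return ≡ c
    -- ghci> (Just 4 >>= return) == Just 4
    -- True
    -- Associativity: c >>= (λx . f x >>= g) ≡ (c >>= f) >>= g
    -- ghci> (Just 16 >>= (\x -> half x >>= half)) == ((Just 16 >>= half) >>= half)
    -- True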

I could verify the laws for a bunch of monads, but they seemed arbitrary and ugly. Then I ran across this page, with a casual remark that the laws look much better when formulated in terms of the “Kleisli composition operator” >=>, where f >=> g is defined as λx . f x >>= g:

    f >=> return ≡ f
    return >=> g ≡ g
    (f >=> g) >=> h ≡ f >=> (g >=> h)

My ah-ha moment came when I saw the signature of that mysterious operator:

    (a -> m b) -> (b -> m c) -> (a -> m c)

Here, we have two functions that take ordinary values and turn them into values in some alternate universe (Maybe t or IO t or whatever). And now we want to compose two of these functions to get another. Of course we do.
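To make that concrete, here is a small sketch in the Maybe universe (the functions safeRecip and safeSqrt are my own invented examples; >=> itself lives in Control.Monad):

    import Control.Monad ((>=>))

    -- Two functions from ordinary values into the Maybe universe:
    safeRecip :: Double -> Maybe Double
    safeRecip 0 = Nothing
    safeRecip x = Just (1 / x)

    safeSqrt :: Double -> Maybe Double
    safeSqrt x = if x < 0 then Nothing else Just (sqrt x)

    -- Kleisli composition yields a third such function; any Nothing
    -- along the way propagates automatically:
    recipSqrt :: Double -> Maybe Double
    recipSqrt = safeRecip >=> safeSqrt

    -- ghci> recipSqrt 4
    -- Just 0.5
    -- ghci> recipSqrt 0
    -- Nothing

Neither function needs to know anything about the other's failure handling; the composition operator threads it through.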

I instantly realized that, like so many other people, I had invented monads before. It was in a cell phone library that was shared between Android and BlackBerry apps. I had to deal with multi-step user interface actions. In each step, the user was asked some question, and the answer only emerged when the user completed some platform-specific input. I needed a mechanism for running multiple steps in a row, so I built a mini-framework of actions, each of which had an event handler for delivering the answer. Actions could be composed: the first handler kicked off the second action, and the second handler returned the final answer. Yes, composition was associative :-) Unbeknownst to me, that was my first monad.
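The original framework lived in platform-specific event handlers, but its shape is easy to sketch in Haskell (all names here are mine; this is a reconstruction, not the actual library). An action delivers its eventual answer to a handler, and >>= wires the first handler to kick off the second action:

    -- An Action eventually delivers an answer of type a to a handler.
    newtype Action a = Action { run :: (a -> IO ()) -> IO () }

    instance Functor Action where
        fmap f (Action r) = Action (\handler -> r (handler . f))

    instance Applicative Action where
        pure x = Action (\handler -> handler x)
        Action rf <*> Action rx =
            Action (\handler -> rf (\f -> rx (handler . f)))

    instance Monad Action where
        -- The first action's handler kicks off the second action,
        -- whose handler delivers the final answer:
        Action r >>= f = Action (\handler -> r (\x -> run (f x) handler))

    -- A stand-in for a platform-specific input dialog:
    ask :: String -> Action String
    ask prompt = Action (\handler -> putStrLn prompt >> getLine >>= handler)

    -- Multi-step user interface actions now compose with do notation:
    survey :: Action String
    survey = do
        name <- ask "Name?"
        city <- ask "City?"
        return (name ++ " from " ++ city)

    main :: IO ()
    main = run survey putStrLn

Readers who know their Haskell will recognize this as the continuation monad with result type IO ().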

What's the point? Had I known what monads are, I would have arrived at my design more quickly, and I would have picked off-the-shelf convenience functions instead of inventing my own as the need became apparent. Monads aren't burritos; they are a design pattern.
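To give one hypothetical example: once the Action type from the sketch above is declared as a Monad, generic helpers such as replicateM from Control.Monad work on it unchanged:

    import Control.Monad (replicateM)

    -- Ask the same question three times and collect the answers,
    -- with no new framework code at all:
    threeAnswers :: Action [String]
    threeAnswers = replicateM 3 (ask "Your guess?")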

I could go on and explain, but I won't, for the reason that is so ably expressed here.