In my previous post I talked about categories, functors, endofunctors, and a little about monads. Why didn’t I just start with a Haskell definition of a monad and skip the math? Because I believe that monads are bigger than that. I don’t want to pigeonhole monads based on some specific language or application. If you know what to look for you’ll find monads everywhere. Some languages support them better, as a single general abstraction, others require you to re-implement them from scratch for every use case. I want you to be able to look at a piece of C++ code and say, “Hey, that looks like a monad.” That’s why I started from category theory, which sits above all programming languages and provides a natural platform to talk about monads.

In this installment I will talk about some specific programming challenges that, on the surface, seem to have very little in common. I will show how they can be solved and how from these disparate solutions a pattern emerges that can be recognized as a monad. I’ll talk in some detail about the Maybe monad and the List monad, since they are simple and illustrate the basic ideas nicely. I will leave the really interesting examples, like the state monad and its variations, to the next installment.

Challenges of Functional Programming

You know what a function is: called with the same argument it always returns the same result. A lot of useful computations map directly to functions. Still, a lot don’t. We often have to deal with notions of computation that are not functions. We can describe them in words or implement them as procedures with side effects. In non-functional languages such computations are often called “functions” anyway– I’ll enclose such usage in quotation marks. Eugenio Moggi, the guy who introduced monads to computing, listed a few examples:

  • Partial “functions”: For some arguments they never terminate (e.g., go into an infinite loop). These are not really functions in the mathematical sense.
  • Nondeterministic “functions”: They don’t return a single result but a choice of results. Non-deterministic parsers are like this. When given an input, they return a set of possible parses. Which of them is right depends on the context. These are not really functions because a function must return a single value.
  • “Functions” with side effects: They access and modify some external state and, when called repeatedly, return different results depending on that state.
  • “Functions” that throw exceptions: They are not defined for certain arguments (or states) and they throw exceptions instead of returning a result.
  • Continuations.
  • Interactive input: getchar is a good example. Its result depends on what hits the keyboard.
  • Interactive output: putchar is a good example. It’s not a function because it doesn’t return anything (if it returns an error code, it doesn’t depend on the argument). Still, it can’t be just optimized away if we want any output to appear on the screen.

The amazing thing is that, using some creative thinking, all these problems can be solved using pure functional methods. I won’t discuss all of them, but I’ll show you a series of progressively more interesting examples.

Error Propagation and Exceptions

Who doesn’t deal with error propagation? Let’s dissect the problem: You are defining a computation that can return a valid result only for a subset of possible arguments. So what happens when the argument is “wrong”? Well, you can return an error.

In the simplest case you might add one extra bit, success or failure, to your result. Sometimes you can spare a bit in the result type: If the correct result is always a non-negative number, you may return negative one as an error. Or, if the result is a pointer, you might return a null pointer instead. But these are just small hacks introduced for the sake of performance. And, like most hacks, they can lead to dangerous code. Consider this (likely incorrect) code:

size_t off = fileName.find('.');
string ext = fileName.substr(off, fileName.length() - off);

If fileName contains no dot, the result is a special value npos signifying an error. The problem is that npos is of the same type as a non-error result, so it can be quietly passed to string::substr, causing undefined behavior.

A more general and safer solution is to change– extend– the type of the result to include an error bit. You can do it for any type of result. In Haskell, you just define a type constructor called Maybe:

data Maybe a = Nothing | Just a

It takes an arbitrary type a and defines a new type that adds the special value Nothing to the set of possible values of a. In C++ that would be equivalent to a template:

template<class T>
struct Maybe {
    T just; // valid if 'nothing' is false
    bool nothing;
};

(This is just an example. I’m not arguing that Maybe would be very useful in C++, which has other mechanisms for error propagation.)

So here we have a way to turn a computation that’s not defined for all values into a function that’s defined over the whole domain, but which returns a new richer type.
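
Here’s a minimal Haskell sketch of that idea– safeDiv is my own illustration, not a library function. Plain division has no result for a zero divisor, but its Maybe-returning version is defined everywhere:

-- Division is partial: it has no result for a zero divisor.
-- The Maybe-enriched version is total.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- safeDiv 10 2 ==> Just 5
-- safeDiv 10 0 ==> Nothing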

The next question is, do these new things– functions returning Maybe– compose? What should the caller do with the result of such a function when it’s part of a larger computation? One thing is for sure: this result cannot be passed directly to an unsuspecting function– the error cannot be easily ignored– which is good. If we replace find with a new function safe_find that returns a Maybe<size_t>, the client won’t call substr with it. The types wouldn’t match. Instead, the result of safe_find must be unpacked and (much more likely than before) tested.

Maybe<size_t> off = safe_find(fileName, '.');
string ext;
if (!off.nothing)
    ext = fileName.substr(off.just, fileName.length() - off.just);

Notice what happened here: By changing the return type of a function we broke the natural way functions compose– the result of one becoming the input of the next. On the other hand, we have come up with a new way of composing such functions. Let me chain a few such compositions for better effect:

Maybe<Foo> foo = f(x);
if (!foo.nothing) {
    Maybe<Bar> bar = g(foo.just);
    if (!bar.nothing) {
        Maybe<Baz> baz = h(bar.just);
        if (!baz.nothing) {
            ...
        }
    }
}

These repeated tests and unpackings are the elements of the “glue” used to compose our new, more error-conscious functions. Notice how this boilerplate glue gets in the way of code clarity. Ideally, we would like to write something like this:

DO
{
    auto y = f(x);
    auto v = g(y);
    auto z = h(v);
    return z;
}

where DO would magically provide the glue that’s implicit in the chaining of f, g, and h.

It’s not easy to do this– abstract the glue away– in C++. I’m not even going to try. But in Haskell it’s a different story. Let’s start with the almost direct translation of the C++ code into Haskell:

compose n =
    let m = f n in
    case m of
    Nothing -> Nothing
    Just n1 ->
        let m1 = g n1 in
        case m1 of
        Nothing -> Nothing
        Just n2 -> 
            let n3 = h n2 in
            n3

The if statements are replaced by Haskell’s pattern matching (case x of). The ms are the Maybes and the ns are their contents (if they aren’t Nothings).

Notice the characteristic cascading style of this code– the nested conditionals or pattern matches. Let’s analyze one level of such a cascade. We start with a Maybe value (one returned by f n). We unpack it and examine the contents. If the result is not an error (Just n1), we continue with the rest of the body of compose– everything from the inner let m1 = g n1 onward.

What’s also important is that, if the result is an error (the Nothing branch of the pattern match), the whole “rest of the body” is skipped. In order to abstract the glue, we have to abstract the “rest of the body” as well. Such an abstraction is called a continuation. Let’s write this continuation as a lambda (lambdas in Haskell are written using the backslash, which is nothing but the Greek letter λ missing one leg):

\ n1 -> 
    let m1 = g n1 in
    case m1 of
    Nothing -> Nothing
    Just n2 -> 
        let n3 = h n2 in
        n3

And here’s the trick: We can abstract the glue as a (higher-order) function that takes a Maybe value and a continuation. Let’s call this new function bind and rewrite compose with its help:

compose n =
    let m = f n in
    -- the first argument is m, the second is the whole continuation
    bind m (\n1 ->
                let m1 = g n1 in
                case m1 of
                Nothing -> Nothing
                Just n2 -> 
                    let n3 = h n2 in
                    n3)

Here’s how bind is implemented:

bind m f = 
    case m of 
    Nothing -> Nothing  -- bypass the continuation
    Just n -> f n       -- pass n to the continuation

or, more compactly,

bind Nothing cont  = Nothing
bind (Just n) cont = cont n
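
Assuming the bind just defined, here are its two cases on concrete values (ex1, ex2, and the add-one continuation are my own throwaway examples):

-- The success case passes 4 to the continuation:
ex1 = bind (Just 4) (\n -> Just (n + 1))   -- ==> Just 5
-- The failure case bypasses the continuation entirely:
ex2 = bind Nothing (\n -> Just (n + 1))    -- ==> Nothing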

Figure 1 illustrates the complete code transformation. The result of f n, which is a Maybe, is passed to bind. Inside bind the Maybe is unpacked. If its value is Nothing, the continuation is bypassed and the result is Nothing. If its value is Just n1, the rest of the code, the continuation, is called with the argument n1. The continuation itself is rewritten using bind, and so on. The final continuation calls return, which I will explain shortly.

Fig 1. Composition of functions that return Maybe results

The position of bind in Figure 1, between its Maybe argument and the continuation, suggests that infix notation might be more appropriate. Indeed, in Haskell bind is represented by the infix operator >>=:

Nothing  >>= cont = Nothing
(Just x) >>= cont = cont x

(The left-hand side of the equal sign is the operator between its two arguments [actually, patterns], and the right-hand side is the body of the function.) We can express the type signature of the bind function as:

(>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b

It takes a Maybe a and a function from a to Maybe b (that’s our continuation). It returns a Maybe b. When dealing with higher-order functions I find type signatures extremely useful; so much so that I often use them as comments in my C++ meta-code.

The complete rewrite of compose using >>= looks like this:

compose1 n =
    f n >>= \n1 ->
        g n1 >>= \n2 ->
            h n2 >>= \n3 ->
                return n3

Now is the time to explain the mysterious return at the end of the function. No, it’s not the keyword for returning a result from a function. It’s a function that takes an argument of type a and turns it into a Maybe a:

return n = Just n

We need return because the result of compose is a Maybe. If any of the functions returns Nothing, compose returns Nothing (it’s part of the definition of >>=). Only when all functions succeed do we call return with the correct result, n3. It turns n3 into Just n3, thus announcing the success of the computation and encapsulating the final result.

You might say that using return instead of Just n3 is overkill, but there are good reasons to do that. One is to encapsulate and hide direct access to the implementation of Maybe. The other has to do with the ways this pattern is generalized beyond Maybe.
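
To watch the short-circuiting in action, here’s a tiny self-contained pipeline; safeHalf is a made-up stand-in for f, g, and h:

-- safeHalf succeeds only on even numbers
safeHalf :: Int -> Maybe Int
safeHalf n
    | even n    = Just (n `div` 2)
    | otherwise = Nothing

-- Halve three times, failing as soon as an odd number shows up
threeHalves :: Int -> Maybe Int
threeHalves n =
    safeHalf n >>= \n1 ->
        safeHalf n1 >>= \n2 ->
            safeHalf n2 >>= \n3 ->
                return n3

-- threeHalves 24 ==> Just 3
-- threeHalves 12 ==> Nothing (the third step receives an odd 3)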

The Maybe Monad

What does Maybe have to do with monads? Let’s see what we have done so far from a more general perspective.

We started from a computation which didn’t behave like a function– here, it was not defined for all arguments. We found a clever way to turn it into a function by enriching its return type. Such general “enriching” of types can be expressed as a type constructor.

From a category-theoretical point of view, defining a type constructor, which is a mapping from types to types, gets us half way towards defining an endofunctor. The other part is the mapping of functions (morphisms). For any function that takes values of type a and returns values of type b we need a corresponding function that acts between the enriched types. In Haskell, the mapping of functions to functions can be expressed as a higher-order function. When this mapping is part of an endofunctor, the corresponding higher-order function is called fmap.

Let’s call the type constructor M. It constructs the enriched type M a from any type a. The function fmap has the following signature:

fmap :: (a -> b) -> (M a -> M b)

It maps a function from a to b to a function from M a to M b. In the case of Maybe we already know the type constructor part, and fmap will follow soon.

Let’s go back to our computation turned function returning an enriched type. In general, its signature is:

f :: a -> M b

Since we care about composability of computations, we need a way to glue the enriched functions together. In the process of gluing we usually have to inspect the enriched type and make decisions based on its value. One such decision might be whether to continue with the rest of the computation or not. So the general bind operation must take two arguments: the result of the previous enriched function and the rest of the computation– a continuation:

bind :: M a -> (a -> M b) -> M b

If you squint enough, you can see a similarity between bind and fmap.

Here’s how you do it: Imagine that you supply bind with its second argument, the continuation. There is still one free parameter of the type M a. So what’s left is a function that takes M a and returns M b. (This is not exactly currying, since we are binding the second argument rather than the first, but the idea is the same.) That’s exactly the type of function that occurs on the right hand side of the signature of fmap. Seen from that point of view, bind maps functions into functions:

(a -> M b) -> (M a -> M b)

The function on the left-hand side is our continuation; the one on the right-hand side is bind with its second argument fixed to that continuation.
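
Spelled out in code for Maybe (liftK is my own name for this partial application):

-- Fixing bind's second argument k yields a function from
-- Maybe a to Maybe b:
liftK :: (a -> Maybe b) -> (Maybe a -> Maybe b)
liftK k = \ma -> ma >>= k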

This signature is really close to that of fmap, but not quite. Luckily, we still have a second family of functions at our disposal, the polymorphic return. The signature of return is:

return :: a -> M a

Using both bind and return we can lift any function f:

f :: a -> b

to a function g:

g :: M a -> M b

Here’s the magic formula that defines g in terms of f, bind, and return (the dot denotes regular function composition):

g ma = bind ma (return . f)

Or, using infix notation:

g ma = ma >>= (return . f)

This mapping of f into g becomes our definition of fmap. Of course, fmap must obey some axioms, in particular it must preserve function composition and interact correctly with unit (which I will discuss shortly). These axioms translate back into corresponding conditions on bind and return, which I’m not going to discuss here.
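
Specialized to Maybe, the formula behaves exactly as fmap should (fmapMaybe is my own name for the derived mapping):

-- fmap derived from bind and return, for the Maybe monad:
fmapMaybe :: (a -> b) -> Maybe a -> Maybe b
fmapMaybe f ma = ma >>= (return . f)

-- fmapMaybe (+1) (Just 2) ==> Just 3
-- fmapMaybe (+1) Nothing  ==> Nothing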

Now that we have recovered the functor, we need the two other parts of a monad: unit and join. Guess what, return is just another name for unit. For any type, it lifts an element of that type to an element of the enriched type in the most natural manner (incidentally, the word natural has a very precise meaning in category theory; both unit and join are natural transformations of the functor (M, fmap)).

To recover join, let’s look at its type signature in Haskell:

join :: M (M a) -> M a

It collapses one layer of the type constructor. In the case of Maybe it should map a double Maybe to a single Maybe. It’s easy to figure out that it should map Just (Just x) into Just x and anything else to Nothing. But there is a more general way of defining join using bind combined with the identity function:

join :: M (M a) -> M a
join mmx = mmx >>= id

We are binding a value of type M (M a) to the function id. This function, when acting on M a, returns M a. You can convince yourself that these signatures match the general signature of bind (see the Appendix); and that this definition, when applied to Maybe, produces the correct result.
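
Here’s that check carried out for Maybe (joinMaybe is my own name, chosen to avoid clashing with the library function):

-- The direct definition: collapse one layer of Maybe
joinMaybe :: Maybe (Maybe a) -> Maybe a
joinMaybe (Just (Just x)) = Just x
joinMaybe _               = Nothing

-- joinMaybe (Just (Just 3)) ==> Just 3
-- joinMaybe (Just Nothing)  ==> Nothing
-- joinMaybe Nothing         ==> Nothing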

To summarize: What we have done here is to show that Maybe together with bind and return is a monad in the category-theory sense. Actually, we have shown something far more general: Any triple that consists of a type constructor and two functions bind and return that obey certain identities (axioms) define a monad. In category theory this triple is called a Kleisli triple and can be used as an alternative definition of a monad.

Syntactic Sugar

So far you’ve seen only the Maybe example, so it might seem like we are pulling out a monadic cannon to kill a mosquito. But, as you’ll see later, this same pattern pops up in a lot of places in various guises. In fact it’s so common in functional languages that it acquired its own syntactic sugar. In Haskell, this sugar is called the do notation. Let’s go back to the implementation of our function compose:

compose n =
    f n >>= \n1 ->
        g n1 >>= \n2 ->
            h n2 >>= \n3 ->
                return n3

Here’s exactly the same code in do notation:

compose n = do
    n1 <- f n
    n2 <- g n1
    n3 <- h n2
    return n3

This looks deceptively similar to an imperative program. In fact here’s a C++ version of it (for the sake of the argument let’s assume that f takes a pointer to Foo and h returns an integer):

int compose(Foo * n)
{
    auto n1 = f(n);
    auto n2 = g(n1);
    auto n3 = h(n2);
    return n3;
}

Uh, one more thing. This is how you would call it:

try {
    compose(pFoo);
}
catch(...) {
    // error handling
}

In C++ you get virtually the same functionality not by modifying the return type, but by throwing an exception. (Now you may utter your first cry, “Hey, that looks like a monad!”, when looking at C++ code.)

Just like in our Haskell example, once any of the functions reports an error (throws an exception), the rest of the body of compose is skipped. You might say that C++ exceptions offer more power than the Maybe monad and you’ll be right. But that’s because I haven’t shown you the Error monad and the Exception monad.

Where Haskell’s Exception monad beats C++ exceptions is in type checking. Remember the unfortunate attempt at adding exception specifications to the C++ type system? (Java went the same way.) Here’s what the C++ standard says:

An implementation shall not reject an expression merely because when executed it throws or might throw an exception that the containing function does not allow.

In other words, exception specifications are bona fide comments. Not so in Haskell! If a function returns a Maybe or Exception, that becomes part of its type signature, which is checked both at the call site and at the function definition site. No cheating is allowed, period.

But the major difference between Haskell’s and C++’s approach is that the do notation generalizes to all monads, whereas C++’s neat try/catch syntax applies only to exceptions.

A word of caution when reading Haskell monadic code. Despite similarities in usage, Haskell’s left arrow is not an assignment. The identifier on the left-hand side corresponds to the argument of the continuation (which is the rest of the do block). It is the result of unpacking the outcome of the right-hand side expression. This unpacking always happens inside bind.

Non-deterministic Computations

In the previous post I introduced the list monad by defining a functor and two families of functions. The type constructor part of the functor mapped any type a into the list of a, [a]. The part that acted on functions (now we know that, in general, it’s called fmap; but for lists it’s just map) worked by applying the function to each element of the list. The unit and the join were polymorphic functions. unit was defined as:

unit x = [x]

and join as concat.

Now I can tell you that the list monad provides the solution to the problem of implementing non-deterministic computations. Imagine parsing a non-deterministic grammar. A production might offer multiple alternative parse trees for the same input.

These types of computations may be simulated with functions returning lists of possible results. Here’s the same trick again: whatever type is returned by a deterministic parser (e.g., a parse-tree type), it’s now turned into an enriched type (a parse tree list).

We’ve already seen the category-theory construction of the list monad but here we are facing a slightly different problem: how to string together functions that return lists. We know that we have to define bind. Its signature, in this case, would be:

bind :: [a] -> (a -> [b]) -> [b]

It takes a list (which is the result of a previous function) and a continuation, (a -> [b]), and must produce another list. The obvious thing is to apply the continuation to each element of the input list. But since the continuation also produces a list, we’ll end up with a list of lists. To get a single list as the result, we’ll simply concatenate the sublists. Here’s the final implementation:

xs >>= cont = concat (map cont xs)

The application of the continuation to each element of the input list is done using map.
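
A quick demonstration– expand is my own example, with a continuation that arbitrarily produces two outputs per input:

-- map yields a list of two-element sublists; concat flattens them
expand :: [Int] -> [Int]
expand xs = xs >>= \x -> [x, x * 10]

-- expand [1, 2, 3] ==> [1,10,2,20,3,30]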

Is it a coincidence that we were able to define bind in terms of join and fmap? Not at all. The general formula for converting the functor definition of a monad to a Kleisli triple is:

  • Take the object-mapping part of the functor (the type constructor)
  • Define bind as
    bind x f = join ((fmap f) x)

    where fmap is the part of the functor that maps morphisms.

  • Define return as unit

Now we know how to move back and forth between those two definitions.

Just as in the case of Maybe, we can apply the do notation to functions returning lists:

toss2Dice = do
    n <- tossDie
    m <- tossDie
    return (n + m)

Here, tossDie returns the list of all possible outcomes of a die toss, and toss2Dice returns the list of all possible sums of the outcomes of a two-die toss.
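
tossDie isn’t defined in the text; a plausible definition is just the list of faces:

tossDie :: [Int]
tossDie = [1 .. 6]

-- With that, toss2Dice enumerates all 36 (ordered) outcomes:
-- toss2Dice ==> [2,3,4,5,6,7,3,4,5,6,7,8, ...]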

An interesting observation is that the list monad is closely related to list comprehensions, down to the use of left arrows (in Haskell). For instance, the above example is equivalent to:

toss2Dice = [n + m | n <- tossDie, m <- tossDie]

Conclusions

There is a large class of computations that convert input to output in a non-functional way. Many of those can be expressed as functions that return “enriched” output. This enrichment of the output can be expressed in terms of a type constructor. This type constructor defines the first component of a monad.

Computations have to be composable, so we need a way of composing the enriched functions. This introduces the second component of the monad, the bind family of higher-order functions.

Finally, we should be able to construct functions that return enriched types, and for that we need the third component of the monad, the return family of functions. They convert regular values into their enriched counterparts.

I’ve shown you two examples of Haskell monads in action, but the real treat will come in the third installment of my mini-series. I’ll show you how to define and compose functions that, instead of returning regular values, return functions that calculate those values.

Appendix: Join From Bind

By popular demand (see comments), I decided to explain in more detail the typing of the formula:

join :: M (M a) -> M a
join mmx = mmx >>= id

At first glance it seems like id doesn’t have the right signature for the second parameter to bind:

bind :: M a' -> (a' -> M b') -> M b'

(I changed the names of the type arguments to a' and b' to avoid confusion.) However, notice that here we are calling bind with the first argument, mmx, of type M (M a). It means that a' in this particular instantiation of bind is (M a). The second argument to bind is id with the signature:

id :: c -> c

Here, id will be called with c equal to (M a), so its return type will also be (M a). With those substitutions, bind will also return the type (M a):

M (M a) -> (M a -> M a) -> M a

Thus the whole combination (mmx >>= id) ends up with the right type signature for join.
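
As a sanity check, here is the evaluation traced for the three possible shapes of a Maybe (Maybe a) value:

-- join (Just (Just 3)) = Just (Just 3) >>= id = id (Just 3) = Just 3
-- join (Just Nothing)  = Just Nothing  >>= id = id Nothing  = Nothing
-- join Nothing         = Nothing       >>= id = Nothing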

Bibliography

  1. Eugenio Moggi, Notions of Computation and Monads. This is a hard core research paper that started the whole monad movement in functional languages.
  2. Philip Wadler, Monads for Functional Programming. The classic paper introducing monads into Haskell.
  3. Tony Morris, What Does Monad Mean?
  4. Brian Beckman, Don’t fear the Monad
  5. Graham Hutton, Erik Meijer, Monadic Parsing in Haskell.
  6. Brian McNamara, Yannis Smaragdakis, Syntax sugar for FC++: lambda, infix, monads, and more. A C++ template library for functional programming that imitates Haskell pretty closely.