
Recursion schemes, categorically!

Traditional algebras deal with two things:

  1. Building and parsing expressions
  2. Evaluating expressions

Building expressions

Example of an expression:

$x^2 + 3x + 4$

When we parse such an expression we create an expression tree. We start with

[figure: expression tree]

Expanding it recursively, we get

[figure: expanded expression tree]

3 and 4 are constants of type Double. Variables like x are strings. Plus and times are binary operators.

sealed trait Expr
case class Const(r: Double) extends Expr
case class Var(a: String) extends Expr
case class Times(l: Expr, r: Expr) extends Expr
case class Plus(l: Expr, r: Expr) extends Expr

This is a recursive definition.

We have non-recursive leaves, Const and Var, and recursive binary nodes Times and Plus.

Our example is encoded as

def expr: Expr =
  Plus(Times(Var("x"), Var("x")),
       Plus(Times(Const(3.0), Var("x")),
            Const(4.0)))

which can be visualized as

[figure: expression tree for expr]

Evaluating expressions

An obvious choice to evaluate this expression is to turn it into a single Double value.

We evaluate bottom up starting with leaves. A Const leaf can be evaluated to the value it contains.

def eval(expr: Expr): Double = {
  expr match {
    case Const(x) => x
    case _        => ??? // remaining cases filled in below
  }
}

For evaluation purposes, the variable node can be assigned any value of type Double, for instance x = 5. For simplicity, we'll assume that any other variable is initialized to zero

def eval(expr: Expr): Double = {
  expr match {
    case Var(s) => s match {
        case "x" => 5
        case _   => 0
      }
    case _ => ??? // Const, Times and Plus handled in the other snippets
  }
}

In general, we can write many evaluators, or we can define an evaluator that is parameterized by the value of "x".

To evaluate the nodes, we first recursively evaluate the child nodes, producing values of type Double

def eval(expr: Expr): Double = {
  expr match {
    case Times(l, r) => eval(l) * eval(r)
    case Plus(l, r)  => eval(l) + eval(r)
    case _           => ??? // leaf cases shown above
  }
}

The complete evaluator is then given by

def eval(expr: Expr): Double = {
  expr match {
    case Const(x)    => x
    case Var(s)      => s match {
                          case "x" => 5
                          case _   => 0
                        }
    case Times(l, r) => eval(l) * eval(r)
    case Plus(l, r)  => eval(l) + eval(r)
  }
}
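
As mentioned above, instead of hard-coding x = 5 we can parameterize the evaluator by an environment of variable values. Here is a minimal sketch (evalWith and env are names introduced here, not from the original):

def evalWith(env: Map[String, Double])(expr: Expr): Double =
  expr match {
    case Const(x)    => x
    case Var(s)      => env.getOrElse(s, 0.0) // unknown variables default to zero
    case Times(l, r) => evalWith(env)(l) * evalWith(env)(r)
    case Plus(l, r)  => evalWith(env)(l) + evalWith(env)(r)
  }

evalWith(Map("x" -> 5.0))(expr)  // 44.0: x^2 + 3x + 4 at x = 5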

But there are other choices for the result type of the evaluator, for instance, a string. A pretty printer is such an evaluator. The Const leaf is evaluated to a string representation of the Double it contains, a Var leaf to the name of the variable, and the nodes are evaluated recursively

def pretty(expr: Expr): String = {
  expr match {
    case Const(x)    => s"$x"
    case Var(s)      => s
    case Times(l, r) => s"${pretty(l)} * ${pretty(r)}"
    case Plus(l, r)  => s"${pretty(l)} + ${pretty(r)}"
  }
}
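
Running it on our example expression should give the following (constants are rendered with Scala's default Double formatting):

pretty(expr)  // "x * x + 3.0 * x + 4.0"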

The problem is that different evaluators mix the recursive logic with the logic of combining the results of evaluating the child trees. We are repeating the same boilerplate. We would like to factor these things out.

Factorizing out the recursion

When implementing an evaluator, we had to first choose the result type and then define a function of a particular type. We had a pair Double and eval and another pair String and pretty.

There are two orthogonal concerns here: one is the recursive nature of evaluation, and the other is combining the results of the evaluation at every step. If the children are evaluated to Double, we add or multiply the numbers. If they are evaluated to String we concatenate them with the appropriate operator symbol between them.

We'd like to create a data structure that would have placeholders for the results of the evaluation of the children. To do that, let's parameterize our Expr by the result type R

sealed trait ExprF[R]

The leaves have no children

sealed trait ExprF[R]
case class ConstF[R](a: Double) extends ExprF[R]
case class VarF[R](a: String) extends ExprF[R]

whereas the nodes do

case class TimesF[R](l: R, r: R) extends ExprF[R]
case class PlusF[R](l: R, r: R) extends ExprF[R]

In the original recursive definition the children were full-blown expression trees; here they are replaced by values of some general type R. R can stand for the result of the evaluation, or it could stand for the whole expression, as we'll see later when we define a fixed point.

Altogether we get a data structure parameterized by R

sealed trait ExprF[R]
case class ConstF[R](a: Double) extends ExprF[R]
case class VarF[R](a: String) extends ExprF[R]
case class TimesF[R](l: R, r: R) extends ExprF[R]
case class PlusF[R](l: R, r: R) extends ExprF[R]

With this definition we can implement our two evaluators as

def evalF(expr: ExprF[Double]): Double = {
  expr match {
    case ConstF(x)    => x
    case VarF(s)      => s match {
                          case "x" => 5
                          case _   => 0
                        }
    case TimesF(l, r) => l * r
    case PlusF(l, r)  => l + r
  }
}
def prettyF(expr: ExprF[String]): String = {
  expr match {
    case ConstF(x)    => s"$x"
    case VarF(s)      => s
    case TimesF(l, r) => l + " * " + r
    case PlusF(l, r)  => l + " + " + r
  }
}

Notice that these implementations are no longer recursive. The idea is that such evaluators deal with a single level (either leaf or node) of an expression tree. They assume that child trees have already been evaluated, and they only combine those results. They only implement the business logic of the evaluator.

But how do we use them to evaluate recursive expressions? For that we need a bit of category theory.

F-algebras

We now have two evaluators, one producing a Double and another a String. We know that there are functions that can turn a Double into a String. Can we use such a function, for instance toString, to define a mapping of one evaluator to another?

We can try two different ways of doing that:

  • We can take an ExprF[Double], evaluate it using evalF, and then apply our function toString to obtain a string.
  • Or we can take an ExprF[Double], transform it to ExprF[String], and then use the evaluator prettyF.

In the latter, we have to transform ExprF[Double] to ExprF[String]. We can do this if ExprF is a functor. And, indeed, it is

import cats._
import cats.implicits._

implicit val functorForExprF: Functor[ExprF] = new Functor[ExprF] {
  override def map[A, B](fa: ExprF[A])(f: A => B): ExprF[B] = fa match {
    case ConstF(x)    => ConstF(x)
    case VarF(a)      => VarF(a)
    case TimesF(l, r) => TimesF(f(l), f(r))
    case PlusF(l, r)  => PlusF(f(l), f(r))
  }
}

Note: we are using the cats library instead of implementing our own Functor.

We can now use map(_.toString) to turn an ExprF[Double] into an ExprF[String].

Unfortunately, the two ways of evaluating an ExprF[Double] to a String produce different results.

For instance, take PlusF(1.0, 2.0)

  • Apply evalF to it. You get 3.0, and then toString turns it to "3.0".
  • Apply map(_.toString) to it. You get PlusF("1.0", "2.0"). Then prettyF turns it to "1.0 + 2.0".

Try it!

val sum: ExprF[Double] = PlusF(1.0, 2.0)

val s1 = evalF(sum).toString
val s2 = prettyF(sum.map(_.toString))
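// s1 == "3.0"
// s2 == "1.0 + 2.0"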

We have to conclude that these two evaluators cannot be transformed into each other using toString in a consistent way. In other words, toString doesn't preserve "the structure" of our evaluator.

On the other hand, some evaluators can be transformed into each other.

Here's an evaluator which produces a pair (String, Double)

def evalPair(expr: ExprF[(String, Double)]): (String, Double) = expr match {
  case ConstF(x) => ("Done", x)
  case VarF("x") => ("Done", 5)
  case VarF(_)   => ("Done", 0)
  case TimesF((s, l), (_, r)) => (s, l * r)
  case PlusF((s, l), (_, r))  => (s, l + r)
}

We'll use the following function to map one type to another

def mkDone(x: Double): (String, Double) = ("Done", x)

We can check that the two ways of evaluating our test expression coincide

val v1 = mkDone(evalF(sum))
val v2 = evalPair(sum.map(mkDone))
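// both should come out as ("Done", 3.0)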

We'll say that mkDone preserves the structure of the evaluator.

A combination of a type and an evaluator for a given functor F is called an algebra

type Algebra[F[_], A] = F[A] => A

The type A is called the carrier of the algebra, and the evaluator is called the structure map.

So far we've been concentrating on one such functor, ExprF, but algebras can be defined for any functor.

We've seen three algebras for ExprF, with the carriers, respectively, Double, String and (String, Double)

def evalF   : Algebra[ExprF, Double]
def prettyF : Algebra[ExprF, String]
def evalPair: Algebra[ExprF, (String, Double)]

Functions between carriers that preserve the algebra structure are called algebra morphisms. We've seen one such algebra morphism, mkDone, which maps the ExprF-algebra <Double, evalF> to <(String, Double), evalPair>. The function toString, on the other hand, is not an algebra morphism.

The condition that has to be satisfied by an F-algebra morphism $(A, a) \to (B, b)$ corresponding to a function $f : A \to B$ is often represented by a commuting diagram, where $F(f)$ is the lifting of $f$ by the functor $F$ (which is done with map in Scala). We say that a diagram commutes if the two paths yield the same result. Here it is

$$b \circ F(f) = f \circ a$$

In our example, $f$ was mkDone, $a$ was evalF, and $b$ was evalPair. Composition of functions is written as $f \circ g$ and read "f after g". In Scala we often invert the order and say g andThen f.
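
The commuting condition can also be checked directly in code. Here is a small sketch (the helper commutes is mine, not part of the original):

// Checks b ∘ F(f) == f ∘ a for one sample value of F[A]
def commutes[F[_]: Functor, A, B](a: Algebra[F, A], b: Algebra[F, B])(f: A => B)(
    sample: F[A]): Boolean =
  b(sample.map(f)) == f(a(sample))

commutes(evalF _, evalPair _)(mkDone _)(PlusF(1.0, 2.0))             // true
commutes(evalF _, prettyF _)((_: Double).toString)(PlusF(1.0, 2.0))  // false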

By pasting two such diagrams together, we can easily convince ourselves that the composition $g \circ f$ of two algebra morphisms is again an algebra morphism: the outer rectangle commutes because the two inner squares commute.

The identity function is also an algebra morphism, because $\mathrm{id} \circ a = a \circ \mathrm{id}$ and, by the functor laws, $F(\mathrm{id}) = \mathrm{id}$, so the morphism condition $a \circ F(\mathrm{id}) = \mathrm{id} \circ a$ holds.

Composition and identity are part of the definition of a category and, indeed, algebras and algebra morphisms for a given functor form a category (there is also the requirement of associativity of composition). The important part is that, in a category, we can define something called an initial object. Here, the objects are algebras. An initial algebra is an algebra that has a unique outgoing algebra morphism to every other algebra.

If we call the initial algebra <I, j>, with the carrier I and the structure map $j : F I \to I$, then for any other algebra <A, a> we have a unique $m : I \to A$ such that the morphism diagram commutes

$$a \circ F(m) = m \circ j$$

This might seem unlikely, considering how hard it is to satisfy the commutation condition. But it turns out that our functor ExprF has an initial algebra. In fact, we've seen it already: its carrier type is the recursive data type Expr.

Indeed, we can easily define an evaluator ExprF[Expr] => Expr

def j: Algebra[ExprF, Expr] = {
  case ConstF(x)    => Const(x)
  case VarF(s)      => Var(s)
  case TimesF(l, r) => Times(l, r)
  case PlusF(l, r)  => Plus(l, r)
}

The key result that explains this is Lambek's lemma. It states that the structure map (the evaluator) of the initial algebra is an isomorphism. Indeed, if you look at our implementation of j, you see that it can be easily inverted. The proof of Lambek's lemma is pretty straightforward and we leave it to the appendix.
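
To make "easily inverted" concrete, here is the obvious inverse, obtained by reading j backwards (a sketch; the name jInv is mine):

def jInv: Expr => ExprF[Expr] = {
  case Const(x)    => ConstF(x)
  case Var(s)      => VarF(s)
  case Times(l, r) => TimesF(l, r)
  case Plus(l, r)  => PlusF(l, r)
}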

Since $j$ is an isomorphism, we can write

$$F I \cong I$$

In other words, $I$ is a fixed point of $F$: the action of $F$ on $I$ gives us back the same $I$.

This is how we can express the fixed point of a functor F in Scala

case class Fix[F[_]](unfix: F[Fix[F]])

The isomorphism is witnessed by two functions, in and out.

object Fix {
  def in[F[_]]: F[Fix[F]] => Fix[F] = ff => new Fix[F](ff)
  def out[F[_]]: Fix[F] => F[Fix[F]] = f => f.unfix
}

Notice that in is the evaluator for the F-algebra with the carrier Fix[F], and out is its inverse.

Fix[ExprF] is equivalent to our recursive data structure Expr. It contains exactly the same information.

type Ex = Fix[ExprF]

We can define helper functions called smart constructors to help us build recursive expressions of the type Fix[ExprF]

def v(s: String): Ex = Fix(VarF(s))
def num(d: Double): Ex = Fix(ConstF(d))

def mul(l: Ex, r: Ex): Ex = Fix(TimesF(l, r))
def add(l: Ex, r: Ex): Ex = Fix(PlusF(l, r))

Our original expression $x^2 + 3x + 4$ can be written as

val expr = add(mul(v("x"), v("x")),
               add(mul(num(3), v("x")), 
                   num(4)))

These smart constructors can also be used to implement a mapping from Expr to Ex. We'll implement the inverse mapping later using a catamorphism.
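
Here is a sketch of that mapping, built with the smart constructors (the name exprToEx is mine):

def exprToEx(e: Expr): Ex = e match {
  case Const(x)    => num(x)
  case Var(s)      => v(s)
  case Times(l, r) => mul(exprToEx(l), exprToEx(r))
  case Plus(l, r)  => add(exprToEx(l), exprToEx(r))
}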

Catamorphisms

By definition of an initial object, there exists a unique mapping from the initial algebra to any other algebra

Using Lambek's lemma, we can express the carrier of the initial algebra as the fixed point Fix[F], with the evaluator in and its inverse out.

Following the arrows of the commuting diagram, we can express m, recursively, as the composition

$$m = \mathrm{alg} \circ F(m) \circ \mathrm{out}$$

The unique function $m$ is called a catamorphism for the given algebra. This formula translates directly to code.

def cata[F[_] : Functor, A](alg: Algebra[F, A]): Fix[F] => A = {
  ex => Fix.out[F].andThen(_.map(cata(alg))).andThen(alg)(ex)
}
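
Equivalently (a restatement of the same formula, not from the original), the three steps can be applied explicitly:

// out, then map the catamorphism over the children, then apply the algebra
def cata2[F[_]: Functor, A](alg: Algebra[F, A]): Fix[F] => A =
  fix => alg(fix.unfix.map(cata2(alg)))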

Let's analyze what happens when we apply a catamorphism to a particular expression

val expr = add(mul(v("x"), v("x")),
               add(mul(num(3), v("x")), 
                   num(4)))

First, we apply out, which exposes the top level node

def out: Fix[F] => F[Fix[F]]

In this case, expr was created by applying Fix to a PlusF node containing two expressions

def add(l: Ex, r: Ex): Ex = Fix(PlusF(l, r))

Applying out to it exposes the node PlusF(l, r).

We then apply the catamorphism to this node's children using _.map(cata(alg)). This is where the recursion kicks in. But since the children are smaller than the original tree, the recursion is well founded: we are eventually bound to hit the leaves, at which point the recursion terminates (see the action of map on leaves).

This recursive application reduces the children to values of the carrier type. We can then apply the evaluator to our top-level node and obtain the final value.

So that's the idea: you evaluate the children and then combine the results within a node. The recipe for combining the results is the algebra.

We are now ready to apply catamorphisms to the algebras we have previously defined. For instance, we can use our prettyF

def prettyF: Algebra[ExprF, String] = {
    case ConstF(x)    => s"$x"
    case VarF(s)      => s
    case TimesF(l, r) => l + " * " + r
    case PlusF(l, r)  => l + " + " + r
}

to pretty print the expression

val expr =
  add(mul(v("x"), v("x")),
      add(mul(num(3), v("x")),
          num(4)))
cata(prettyF).apply(expr)
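
Any other algebra can be run through the same catamorphism. For instance, reusing evalF from above (which fixes x = 5), we should get the numeric value:

cata(evalF).apply(expr)  // 44.0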

As we mentioned before, the conversion from the fixed point form to Expr can also be done using a catamorphism

val toExpr: Algebra[ExprF, Expr] = {
  case ConstF(x)    => Const(x)
  case VarF(s)      => Var(s)
  case TimesF(l, r) => Times(l, r)
  case PlusF(l, r)  => Plus(l, r)
}

def mkExpr = cata(toExpr)
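
Applied to our fixed-point expression, this should recover the original recursive Expr tree:

mkExpr(expr)
// Plus(Times(Var("x"), Var("x")), Plus(Times(Const(3.0), Var("x")), Const(4.0)))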

You can find the code here

Appendix. Lambek's lemma

Suppose that <I, j> is the initial F-algebra with the structure map $j : F I \to I$. It means that for any other algebra <A, a> there is a unique function $m : I \to A$ such that the following diagram commutes

$$a \circ F(m) = m \circ j$$

It so happens that <F I, F j> is also an F-algebra, because the lifting of j has the right type

$$F(j) : F(F I) \to F I$$

There must, therefore, be a unique algebra morphism $m : I \to F I$ from <I, j> to it, making the following diagram commute

$$F(j) \circ F(m) = m \circ j$$

We could draw another diagram, stating that j itself is an algebra morphism from <F I, F j> to <I, j>, which trivially commutes, because the two paths are identical: both are $j \circ F(j)$.

We can paste the two diagrams together to form a larger commuting rectangle

The bottom of this rectangle, $j \circ m$, is therefore also an algebra morphism, but it's an algebra morphism from the initial algebra to the initial algebra. The initiality condition tells us that there can be only one such morphism, and we know that the identity function satisfies this condition. Therefore $j \circ m$ must be the same as the identity

$$j \circ m = \mathrm{id}$$

We can now lift this equation to get

$$F(j) \circ F(m) = \mathrm{id}$$

(functor laws ensure that the lifting of a composition is a composition of liftings, and that the lifting of the identity is the identity). But we have established earlier that the morphism diagram for m commutes, that is, $F(j) \circ F(m) = m \circ j$. This is only possible if

$$m \circ j = \mathrm{id}$$

This proves that $m$ is both the left and the right inverse of $j$ and, therefore, that $j$ is an isomorphism.