
ScalaUA, 2020, Plan

Traditional algebra deals with two things:

  1. Building and parsing expressions
  2. Evaluating expressions

Building expressions

Example of an expression:

x² + 3x + 4

When we parse such an expression we create an expression tree

Draw an expression tree
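
Roughly, with the sum associated to the right (matching the encoding below), the tree looks like this:

         Plus
        /    \
    Times     Plus
    /   \    /    \
   x     x Times   4
           /   \
          3     x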

3 and 4 are constants of the type Double. Variables like x are strings. Plus and times are binary operators.

sealed trait Expr
case class Const(r: Double) extends Expr
case class Var(a: String) extends Expr
case class Times(l: Expr, r: Expr) extends Expr
case class Plus(l: Expr, r: Expr) extends Expr

This is a recursive definition.

We have non-recursive leaves, Const and Var, and recursive binary nodes Times and Plus.

Our example is encoded as


// expr = x*x + 3*x + 4
def expr: Expr =
  Plus(Times(Var("x"), Var("x")),
       Plus(Times(Const(3.0), Var("x")),
            Const(4.0)))

Evaluating expressions

An obvious choice is to evaluate this expression to Double.

We evaluate bottom up. We have to start with the leaves. A Const leaf is evaluated to the value it contains.

def eval(expr: Expr): Double = {
  expr match {
    case Const(x) => x
    case _        => ??? // other cases to come
  }
}

The variable node can be assigned any value of type Double, for instance, let's say x=5 and any other variable is zero

def eval(expr: Expr): Double = {
  expr match {
    case Var(s) => s match {
        case "x" => 5
        case _   => 0
      }
    case _      => ??? // other cases elided
  }
}

In general, we can write many evaluators or an evaluator that is parameterized by the value of "x".

To evaluate the nodes, we have to first evaluate the children to Double

def eval(expr: Expr): Double = {
  expr match {
    case Times(l, r) => eval(l) * eval(r)
    case Plus(l, r)  => eval(l) + eval(r)
    case _           => ??? // leaf cases as above
  }
}

The complete evaluator is then given by

def eval(expr: Expr): Double = {
  expr match {
    case Const(x)    => x
    case Var(s)      => s match {
                          case "x" => 5
                          case _   => 0
                        }
    case Times(l, r) => eval(l) * eval(r)
    case Plus(l, r)  => eval(l) + eval(r)
  }
}
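
Coming back to the remark above about parameterizing the evaluator by the value of "x": here is a minimal sketch of such an evaluator (the name evalWith and the Map-based environment are illustrative, not part of the original code).

// A sketch: the environment maps variable names to values; unknown variables default to 0.
def evalWith(env: Map[String, Double])(expr: Expr): Double =
  expr match {
    case Const(x)    => x
    case Var(s)      => env.getOrElse(s, 0.0)
    case Times(l, r) => evalWith(env)(l) * evalWith(env)(r)
    case Plus(l, r)  => evalWith(env)(l) + evalWith(env)(r)
  }

With env = Map("x" -> 5.0) it behaves exactly like eval above.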

But there are other choices for the result type of the evaluator, for instance, a string. A pretty printer is such an evaluator.

The Const leaf would be evaluated to a string representation of a Double, a Var to the name of the variable, and the nodes would be evaluated recursively

def pretty(expr: Expr): String = {
  expr match {
    case Const(x)    => s"$x"
    case Var(s)      => s
    case Times(l, r) => s"${pretty(l)} * ${pretty(r)}"
    case Plus(l, r)  => s"${pretty(l)} + ${pretty(r)}"
  }
}

The problem is that different evaluators mix the recursive logic with the logic of combining the results of evaluating the child trees.
And we are repeating the same boilerplate.

We would like to factor these things out.

Factoring out the recursion

When implementing an evaluator, we had to choose a result type and a function of a particular type. We had the pair Double and eval, and another pair, String and pretty.

There are two orthogonal concerns here: one is the recursive nature of evaluation, and the other is combining the results of the evaluation at every step. If the children evaluate to Double, we add or multiply the numbers. If they evaluate to String we concatenate them with the appropriate operator symbol between them.

We have to create a data structure that has placeholders for the results of evaluating the children. Let's parameterize our Expr by the result type R

sealed trait ExprF[R]

The leaves have no children

sealed trait ExprF[R]
case class ConstF[R](a: Double) extends ExprF[R]
case class VarF[R](a: String) extends ExprF[R]

whereas the nodes become

case class TimesF[R](l: R, r: R) extends ExprF[R]
case class PlusF[R](l: R, r: R) extends ExprF[R]

Altogether we get a data structure parameterized by an arbitrary type R

sealed trait ExprF[R]
case class ConstF[R](a: Double) extends ExprF[R]
case class VarF[R](a: String) extends ExprF[R]
case class TimesF[R](l: R, r: R) extends ExprF[R]
case class PlusF[R](l: R, r: R) extends ExprF[R]

With this definition we can implement our two evaluators as

def evalF(expr: ExprF[Double]): Double = {
  expr match {
    case ConstF(x)    => x
    case VarF(s)      => s match {
                          case "x" => 5
                          case _   => 0
                        }
    case TimesF(l, r) => l * r
    case PlusF(l, r)  => l + r
  }
}
def prettyF(expr: ExprF[String]): String = {
  expr match {
    case ConstF(x)    => s"$x"
    case VarF(s)      => s
    case TimesF(l, r) => l + " * " + r
    case PlusF(l, r)  => l + " + " + r
  }
}

Notice that these implementations are no longer recursive. The idea is that such evaluators deal with a single level (either leaf or node) of an expression tree. They assume that child trees have already been evaluated and they only combine those results.

But how do we use them to evaluate recursive expressions? For that we need a bit of category theory.

F-algebras

We now have two evaluators, one producing a Double and the other a String. There are functions that can turn a Double into a String. Can we use such a function, for instance toString, to map one evaluator to the other? There are two ways of doing that. We can take an ExprF[Double], evaluate it using evalF, and then apply our function to obtain a String. Or we can take an ExprF[Double], transform it into an ExprF[String], and then use the evaluator prettyF.

But how do we transform an ExprF[Double] into an ExprF[String]? We can do this if ExprF is a functor. And, indeed, it is

import cats._
import cats.implicits._

implicit val functorForExprF: Functor[ExprF] = new Functor[ExprF] {
  override def map[A, B](fa: ExprF[A])(f: A => B): ExprF[B] = fa match {
    case ConstF(x)    => ConstF(x)
    case VarF(a)      => VarF(a)
    case TimesF(l, r) => TimesF(f(l), f(r))
    case PlusF(l, r)  => PlusF(f(l), f(r))
  }
}

However, the two ways of evaluating an ExprF[Double] to a String produce different results.

Example: take PlusF(1.0, 2.0) and map _.toString over it. You get PlusF("1.0", "2.0"), which prettyF turns into "1.0 + 2.0". On the other hand, if we first apply evalF to it, we get 3.0, which toString turns into "3.0".

s1 = prettyF (fmap show (PlusF 1.0 2.0))
s2 = show (evalF (PlusF 1.0 2.0))

val plus: ExprF[Double] = PlusF(1.0, 2.0)

val s1 = prettyF(plus.map(_.toString))

val s2 = evalF(plus).toString

These two evaluators cannot be transformed into each other using toString in a consistent way. In other words, toString doesn't preserve "the structure" of our evaluators.

On the other hand, some evaluators can be transformed into each other

evalF' :: ExprF (String, Double) -> (String, Double)
evalF' (ConstF x)   = ("Done", x)
evalF' (VarF "x")   = ("Done", 5)
evalF' (VarF s)     = ("Done", 0)
evalF' (TimesF (s, l) (_, r)) = (s, l * r)
evalF' (PlusF (s, l) (_, r))  = (s, l + r)

mkDone :: Double -> (String, Double)
mkDone x = ("Done", x)

plusNode = PlusF 1.0 2.0

v1 = evalF' (fmap mkDone plusNode)
v2 = mkDone (evalF plusNode)

def evalF_(expr: ExprF[(String, Double)]): (String, Double) = expr match {
  case ConstF(x) => ("Done", x)
  case VarF("x") => ("Done", 5)
  case VarF(_)   => ("Done", 0)
  case TimesF((s, l), (_, r)) => (s, l * r)
  case PlusF((s, l), (_, r)) => (s, l + r)
}

def mkDone(x: Double) = ("Done", x)

evalF_(plus.map(mkDone))
mkDone(evalF(plus))
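
Both expressions produce ("Done", 3.0): mapping mkDone over the children and then combining them with evalF_ agrees with evaluating with evalF first and applying mkDone afterwards. This is exactly the structure preservation that toString lacked.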

A combination of a type and an evaluator for a given functor F is called an algebra

type Algebra[F[_], A] = F[A] => A

The type is called the carrier of the algebra, and the evaluator is called the structure map.

So far we've been concentrating on one such functor, ExprF, but algebras can be defined for any functor. We've seen three algebras for ExprF, with the carriers, respectively, Double, String and (String, Double)

evalF   :: Algebra ExprF Double
prettyF :: Algebra ExprF String
evalF'  :: Algebra ExprF (String, Double)

def evalF: Algebra[ExprF, Double]
def prettyF: Algebra[ExprF, String]
def evalF_: Algebra[ExprF, (String, Double)]

Functions between carriers that preserve the algebra structure are called algebra morphisms. We've seen one such algebra morphism: mkDone maps the ExprF-algebra <Double, evalF> to <(String, Double), evalF'>. The function toString, on the other hand, is not an algebra morphism.

The condition that has to be satisfied by an F-algebra morphism (A, a) → (B, b) corresponding to a function f: A → B is often represented by a commuting diagram

​​​​F[A] --F(f)--> F[B]
​​​​  |             |
​​​​  a             b
​​​​  |             |
​​​​  v             v
​​​​  A --- f --->  B

where F(f) is the lifting of f by the functor F, done with map in Scala.
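
In equations, commutativity of this square says that f ∘ a = b ∘ F(f).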

In our example, f was mkDone, a was evalF, and b was evalF'.

Using such diagrams, we can easily convince ourselves that a composition of two algebra morphisms, g ∘ f, is again an algebra morphism

​​​​F[A] --F(f)--> F[B]--F(g)--> F[C]
​​​​  |             |             |
​​​​  a             b             c
​​​​  |             |             |
​​​​  v             v             v
​​​​  A --- f --->  B --- g --->  C

and that the identity function is an algebra morphism

​​​​ F[A] ---id--> F[A]
​​​​  |             |
​​​​  a             a
​​​​  |             |
​​​​  v             v
​​​​  A --- id ---> A

(using the fact that a functor acting on identity is again an identity).

Composition and identity are part of the definition of a category and, indeed, algebras and algebra morphisms for a given functor form a category. The important part is that in a category we can define something called an initial object. Here, the objects are algebras. An initial algebra is an algebra that has a unique outgoing algebra morphism to any other algebra.

If we call the initial algebra <I, j>, with j: F[I] → I, then for any other algebra <A, a> we have a unique m: I → A such that the following diagram commutes

​​​​F[I] -- F(m) -> F[A]
​​​​ |               |
​​​​ j               a
​​​​ |               |
​​​​ v               v
​​​​ I  ---  m  ---> A

This might seem unlikely, considering how hard it is to satisfy the commutation condition. It turns out that our functor ExprF has an initial algebra. In fact, we've seen it already: its carrier type is the recursive data type Expr.

We can easily define an evaluator ExprF[Expr] => Expr

j :: Algebra ExprF Expr

j (ConstF x)   = Const x
j (VarF s)     = Var s
j (TimesF l r) = Times l r
j (PlusF l r)  = Plus l r

def j: Algebra[ExprF, Expr] = {
  case ConstF(x) => Const(x)
  case VarF(s) => Var(s)
  case TimesF(l, r) => Times(l, r)
  case PlusF(l, r) => Plus(l, r)
}

The key result that explains this is Lambek's lemma. It states that the structure map (the evaluator) of the initial algebra is an isomorphism. Indeed, if you look at our implementation of j, you see that it can be easily inverted. The proof of Lambek's lemma is pretty straightforward and we leave it to the appendix.
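
To see how easily, here is a sketch of the inverse (the name jInv is ours, not from the original code); it just pattern matches one layer of Expr back into ExprF:

// A sketch of the inverse of j: peel one layer of the recursive type back off.
def jInv: Expr => ExprF[Expr] = {
  case Const(x)    => ConstF(x)
  case Var(s)      => VarF(s)
  case Times(l, r) => TimesF(l, r)
  case Plus(l, r)  => PlusF(l, r)
}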

Since j is an isomorphism, we can write

F[I] ≅ I

In other words, I is a fixed point of F: the action of F on I gives us back the same I.

This is how we can express the fixed point of a functor F in Scala

case class Fix[F[_]](unfix: F[Fix[F]])

The isomorphism is witnessed by two functions, in and out.

object Fix {
  def in[F[_]]: F[Fix[F]] => Fix[F] = ff => new Fix[F](ff)

  def out[F[_]]: Fix[F] => F[Fix[F]] = f => f.unfix
}

Notice that in is the evaluator for the F-algebra with the carrier Fix[F], and out is its inverse.
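
In other words (a small sketch, using the Algebra alias from above; the name fixAlgebra is ours):

// Fix.in is an F-algebra whose carrier is the fixed point Fix[F].
def fixAlgebra[F[_]]: Algebra[F, Fix[F]] = Fix.in[F]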

Fix ExprF is equivalent to our recursive data structure Expr.
It contains exactly the same information.

type Ex = Fix ExprF
type Ex = Fix[ExprF]

We can define helper functions called smart constructors to help us build recursive expressions of the type Fix[ExprF]

var :: String -> Ex
var s = Fix (VarF s)

num :: Double -> Ex
num x = Fix (ConstF x)

mul :: Ex -> Ex -> Ex
mul e e' = Fix (TimesF e e')

add :: Ex -> Ex -> Ex
add e e' = Fix (PlusF e e')

def v: String => Ex = s => Fix(VarF(s))
def num: Double => Ex = d => Fix(ConstF(d))

def mul: Ex => Ex => Ex = l => r => Fix(TimesF(l, r))
def add: Ex => Ex => Ex = l => r => Fix(PlusF(l, r))

Our original expression x² + 3x + 4 can be written as

expr' = add (mul (var "x")(var "x"))
            (add (mul (num 3) (var "x")) 
                 (num 4))

expr = add(mul(v("x"))(v("x")))
          (add(mul(num(3))(v("x")))
              (num(4)))

These smart constructors can be also used to implement a mapping from Expr to Ex. We'll implement the inverse using a catamorphism.
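
Here is a minimal sketch of that mapping (the name fromExpr is ours); it recursively rebuilds the tree with the smart constructors:

// A sketch: convert the original recursive Expr into its fixed-point form Ex.
def fromExpr(e: Expr): Ex = e match {
  case Const(x)    => num(x)
  case Var(s)      => v(s)
  case Times(l, r) => mul(fromExpr(l))(fromExpr(r))
  case Plus(l, r)  => add(fromExpr(l))(fromExpr(r))
}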

Catamorphisms

By definition, there exists a unique mapping from the initial algebra to any other algebra

​​​​F[I] -- F(m) -> F[A]
​​​​ |               |
​​​​ j              alg
​​​​ |               |
​​​​ v               v
​​​​ I  ---  m  ---> A

We can express the carrier of the initial algebra as a fixed point Fix F with the evaluator in and its inverse out

​​​​F[Fix F]--F(m)-> F[A]
​​​​  ^               |
​​​​  |               |
​​​​ out             alg
​​​​  |               |
​​​​  |               v
​​​​Fix F --- m  ---> A

This diagram allows us to express m, recursively, as the composition

m = alg ∘ F(m) ∘ out

This formula translates directly to code

def cata[F[_] : Functor, A](alg: Algebra[F, A]): Fix[F] => A = {
  ex => Fix.out[F].andThen(_.map(cata(alg))).andThen(alg)(ex)
}

Let's analyze what happens when we apply a catamorphism to a fixed-point expression, like

expr = add(mul(v("x"))(v("x")))
          (add(mul(num(3))(v("x")))
              (num(4)))

First, we apply out, which exposes the top level node

def out: Fix[F] => F[Fix[F]]

In this case, expr was created by applying Fix to a PlusF node containing two expressions

add :: Ex -> Ex -> Ex
add e e' = Fix (PlusF e e')
def add: Ex => Ex => Ex = l => r => Fix(PlusF(l, r))

Applying out to it exposes PlusF e e'.

We then apply the catamorphism to this node's children using map(cata(alg)). This is where the recursion kicks in. But since the children are smaller than the original tree, the recursion is well founded: we are eventually bound to hit the leaves, at which point the recursion terminates (see the action of map on the leaves).

This recursive application reduces the children to values of the carrier type. We can then apply the evaluator to our top-level node and obtain the final value. So that's the idea: you evaluate the children and then combine the results within a node. The recipe for combining results is the algebra.
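
As a small worked example (a sketch; the intermediate values are written informally), this is how cata(evalF) unfolds on the tiny expression add(num(1.0))(num(2.0)):

// cata(evalF)(add(num(1.0))(num(2.0))) proceeds roughly as follows:
// out : Fix(PlusF(num(1.0), num(2.0)))  ==>  PlusF(num(1.0), num(2.0))
// map : PlusF(num(1.0), num(2.0))       ==>  PlusF(1.0, 2.0)
//       (cata(evalF)(num(1.0)) = evalF(ConstF(1.0)) = 1.0, and similarly for num(2.0))
// alg : evalF(PlusF(1.0, 2.0))          ==>  3.0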

We are now ready to apply catamorphisms to the algebras we have previously defined. For instance, we can use our prettyF

def prettyF: Algebra[ExprF, String] = {
    case ConstF(x) => s"$x"
    case VarF(s) => s
    case TimesF(l, r) => l + " * " + r
    case PlusF(l, r) => l + " + " + r
}
 

to pretty print the expression

*O: in Scala you have to write it in one line though*
val expr =
  add(mul(v("x"))(v("x")))
     (add(mul(num(3))(v("x")))
         (num(4)))

O: unfortunately, we have to do apply

cata(prettyF).apply(expr)
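
Similarly, running the Double algebra over the same expression (assuming evalF is passed as an Algebra[ExprF, Double] and x is still bound to 5):

cata(evalF).apply(expr)  // 5*5 + 3*5 + 4 = 44.0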

Conversion from the fixed point form of an expression to Expr can be done using a catamorphism

toExpr :: Algebra ExprF Expr
toExpr (PlusF l r)  = Plus l r
toExpr (TimesF l r) = Times l r
toExpr (ConstF x)   = Const x
toExpr (VarF x)     = Var x

mkExpr :: (Fix ExprF) -> Expr
mkExpr = cata toExpr

val toExpr: Algebra[ExprF, Expr] = {
  case ConstF(x) => Const(x)
  case VarF(s) => Var(s)
  case TimesF(l, r) => Times(l, r)
  case PlusF(l, r) => Plus(l, r)
}

def mkExpr = cata(toExpr)

O: find code here https://scastie.scala-lang.org/oli-kitty/IeCA2xIIRjWN7XH3N3zoow/147

Appendix. Lambek's lemma

Suppose you have an algebra <A, f>, where f is a morphism F[A] => A.
We call an algebra <I, j>, where j is a morphism F[I] => I, an initial algebra if for any other algebra [in our category] there is a unique algebra morphism m: I => A from <I, j> to it.

​​​​ F[I]--Fm--> F[A]
​​​​ |            |
​​​​ j            f
​​​​ |            |
​​​​ V            V
​​​​ I -----m---> A

(Here Fm stands for F(m): the functor F applied to the morphism m, i.e. the lifting of m by F.)

Now let's construct an algebra with F[I] as the carrier: <F[I], g>, where g = F(j) is a morphism F[F[I]] => F[I]. By initiality, there is a unique algebra morphism m from <I, j> to this new algebra.

​​​​ F[I]--Fm--> F[F[I]]
​​​​ |            |
​​​​ j            g = Fj
​​​​ |            |
​​​​ V            V
​​​​ I -----m---> F[I]

We could draw another diagram

​​​​F[F[I]]--Fj--> F[I]
​​​​ |            |
​​​​ Fj           j
​​​​ |            |
​​​​ V            V
​​​​ F[I] ---j---> I

The diagram is in fact tautological: it uses the same morphisms on both paths, therefore it commutes! Let's combine the two diagrams together

​​​​ F[I]--Fm--> F[F[I]]----Fj--> F[I]
​​​​ |            |                 |
​​​​ j            g = Fj            j
​​​​ |            |                 |
​​​​ V            V                 V
​​​​ I -----m---> F[I] -----j-----> I

The outer rectangle commutes, so the composite j ∘ m is an algebra morphism from the initial algebra <I, j> to itself. By initiality, the only such morphism is the identity, hence j ∘ m = id. The first square then gives m ∘ j = F(j) ∘ F(m) = F(j ∘ m) = F(id) = id. So m is the inverse of j, and j is an isomorphism, which proves Lambek's lemma.