
Structure Preserving Maps

Universal Algebra

Algebraic Data Types

(not in final form)

Part 2 of a short course on monads.

In the previous lecture, we discussed monoids as a paradigmatic example of an algebraic structure. The importance of paradigmatic examples is that they can be generalised in different ways. One important way to generalise monoids is to say that a monoid is made from operations and equations: operations for the neutral element (empty word) and multiplication (concatenation), and equations for identity and associativity.

To capture more diverse structures, we should generalise to other kinds of operations and equations. The area of mathematics that studies mathematical objects defined by operations and equations is known as Universal Algebra, or also General Algebra. The objects of study are called algebras.

An important application of algebra to programming are algebraic data types. We have seen examples in the previous lecture and will see more in this one.

In the lecture we went through the Example, the Axioms, some More Examples and then discussed the program algebraic-data-types. At the beginning of the next lecture I explained the notion of termalgebra and that it makes mathematically precise over which domain we do induction when we program with algebraic data types. Finally, I briefly touched upon the remaining points made in these notes.

Programs: induction, algebraic-data-types, magma.

Example

The clock is a function

clock : Z → Z/12Z

(We make various assumptions here, such as that time can be measured in discrete units and stretches from negative infinity to infinity.)

A clock that is neither fast nor slow has the additional property of preserving the structure of time. We can model this mathematically by equipping

Z with the structure (Z, 0, +) of addition. Similarly, we introduce the notation Z/12Z for the set {0, …, 11} with addition as on the clock.

That the clock preserves the structure of addition can now be expressed by saying that the following diagram

commutes. Algebraically, this is just an equation

clock(x+1)=clock(x)+1

where

+1 on the left refers to integer addition and +1 on the right refers to addition modulo 12. Moreover, it seems reasonable to require

clock(0) = 0

Let us call, just for the purposes of this example, a function that satisfies these equations a proper clock.

If we put these equations into words we can say that "one unit of time advances the clock by one unit". So the following should be more or less obvious, but one can also prove it by induction.

Exercise: There are infinitely many functions

Z → Z/12Z, but there is only one function satisfying the equations above. [1]
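To make the exercise concrete, here is a sketch of the one proper clock in Haskell, using integer `mod` (the name `clock` is ours, not from the course code):

```haskell
-- the unique "proper clock": reduce an integer time to hours on the dial
clock :: Integer -> Integer
clock x = x `mod` 12  -- Haskell's mod always returns a value in {0,...,11}
```

Both equations can then be checked directly: clock 0 == 0, and clock (x+1) is one step past clock x on the dial.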

In many programming and mathematical problems, determining the relevant structure can be a big step towards a solution. Even in the example of the clock, we can think of different structures that we may want to preserve.

For example, both

Z and
Z/12Z
are monoids under addition. So we could ask, in addition to
clock(0)=0
, for the following diagram to commute.

Algebraically, that would be written as

clock(x+y)=clock(x)+clock(y)

The purpose of the exercise is to show that this defines a more permissive notion of structure preserving map. But let us first pause and recognise that we just met what is known as a monoid homomorphism. [2]

Definition: Let

(M, e, ⋅) and
(M′, e′, ⋅′)
be two monoids. Then f : M → M′ is a monoid homomorphism if

f(e) = e′
f(x ⋅ y) = f(x) ⋅′ f(y)

Exercise: Find the 5 different monoid homomorphisms

(Z, 0, +) → (Z/12Z, 0, +). [3]

We have introduced monoids as a paradigmatic example of an algebra. The importance of paradigmatic examples is that they can be generalised in various different ways. A monoid is made from operations and equations: operations for the neutral element (empty word) and multiplication (concatenation), and equations for identity and associativity.

The area of mathematics that studies mathematical objects defined by operations and equations is known as Universal Algebra, or also General Algebra. The objects of study are then just called algebras. This is what we will look at next.

Axioms: Universal Algebra

(essential)

The purpose of this section is to generalise the previous example to algebraic structures defined by arbitrary operations.

First, let us go back to automata-as-algebras, the introductory example of the first lecture. As we have seen, we can think of the alphabet

Σ as a set of unary operation symbols. An automaton interprets each input symbol as a function Q → Q. In other words, automata can be seen as algebras of unary operations. We now generalise this to operations which can take any number of arguments.

Definition: A signature is a set

Σ of operation symbols together with a function arity : Σ → N.

Example: The signature of

  • monoids is Σ = {e, ⋅} with arity(e) = 0 and arity(⋅) = 2.

  • lists is

    Σ = {Nil} ∪ {Cons x | x ∈ a}

    where Nil is a constant (arity 0) and the Cons x are unary (arity 1).
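The definition of a signature can be transcribed into Haskell directly; the following sketch (names are ours) represents the monoid signature as a data type of symbols together with its arity function:

```haskell
-- the two operation symbols of the monoid signature
data MonoidSym = E | MultSym deriving (Eq, Show)

-- the arity function of the signature
arity :: MonoidSym -> Int
arity E       = 0  -- neutral element: a constant
arity MultSym = 2  -- multiplication: binary
```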

Remark: Once we have a signature we can form terms from operation symbols and variables.

Example: In the signature of

  • monoids we have terms such as e ⋅ x and x ⋅ (y ⋅ z).
  • lists terms are lists.

Remark: An equation is a pair of terms.

Definition: An algebra for a signature

Σ, or a Σ-algebra, consists of a set A together with an n-ary function (= operation)

A^n → A

for each n-ary operation symbol of Σ.

We will see many examples of algebras and structure preserving maps (homomorphisms) in the next section. But first another definition.

Definition: Let

Σ be a signature, A and B two algebras, and f : A → B a function between their underlying sets. f is called a homomorphism if, for all operations σ of arity n, we have

f(σ_A(x_1, …, x_n)) = σ_B(f(x_1), …, f(x_n))

where

σ_A is the interpretation of σ on A and σ_B is the interpretation of σ on B.

Definition:

f : A → B is an isomorphism if f has an inverse, that is, if there is a homomorphism g : B → A such that g ∘ f = id and f ∘ g = id.

Remark:

f is an isomorphism iff f is a bijective homomorphism.

Remark on many-sorted universal algebra: For programming applications of universal algebra we want "heterogeneous" or "many-sorted" universal algebra. Indeed, we need algebras that do not have only one "carrier" A but a family (A_1, A_2, …) of carrier sets (or sorts or types). For example, the signature of the length function involves two sorts (types) and the signature of the member function three:

len :: List a -> Integer
member :: a -> List a -> Bool

Some mathematics texts go the extra mile and extend universal algebra to this more general setting. But all the maths we need can be more easily explained in the one-sorted setting. As a rule of thumb, all results extend to the many-sorted setting, so we are free to apply them to programming with many types.

More Examples: Induction

(essential)

The examples of this section explore the following important ideas.

  • The notion of free monoid generalises to algebras for arbitrary signatures.
  • Free algebras come with a principle of induction that can be used for reasoning and computation.
  • Free algebras are unique up to isomorphism.

Again, all of our inductive definitions are implemented in Haskell for you to experiment with in induction.

Natural Numbers via Zero and Successor

Choose the signature

Σ = {O, S} with arity(O) = 0 and arity(S) = 1, that is, O is a constant and S is a unary operation symbol.

The so-called

Σ-termalgebra, or initial algebra (see the next section for more on termalgebras, initial algebras and free algebras), consists of all terms one can form from the operations of the signature: O, SO, SSO, etc. The operation S takes a term such as SO to SSO. In Haskell, this initial algebra can be defined as

data NN = O | S NN

Maths Exercise: Show that the natural numbers

N with 0 and +1 are isomorphic to the Σ-termalgebra.

Answer. [4]
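A sketch of the isomorphism in Haskell (toInt/fromInt are our names; fromInt assumes a non-negative argument):

```haskell
data NN = O | S NN deriving (Eq, Show)

-- one direction of the isomorphism: count the S's
toInt :: NN -> Integer
toInt O     = 0
toInt (S n) = 1 + toInt n

-- the other direction, defined by induction on non-negative integers
fromInt :: Integer -> NN
fromInt 0 = O
fromInt n = S (fromInt (n - 1))
```

The two functions are mutually inverse, which is exactly what the exercise asks you to prove.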

Induction on natural numbers: The principle of definition by induction goes as follows. For any

i ∈ N and s : N → N there is a unique function f : N → N with

f(0) = i
f(n + 1) = s(f(n))

Maths Exercise: More generally, for any

Σ-algebra (A, O, S), there is a unique Σ-homomorphism f : N → A defined by the same two equations above.

Answer. [5]

In pseudo Haskell, we can define this induction principle as (see also the code in induction)

induction :: a -> (a -> a) -> (Integer -> a)
induction i s 0 = i
induction i s (n+1) = s (induction i s n)

Note how lines 2 and 3 are exactly the same as the two equations above if we identify

f with induction i s.
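Since n+1 patterns are no longer legal in modern Haskell, a runnable variant of the same principle can use a guard instead (a sketch, not the course code):

```haskell
-- runnable induction principle: f 0 = i, f (n+1) = s (f n)
induction :: a -> (a -> a) -> Integer -> a
induction i s 0         = i
induction i s n | n > 0 = s (induction i s (n - 1))
```

For example, induction 1 (*2) computes powers of two.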

Natural Numbers as a Free Monoid

This example shows that we get a different induction principle for the natural numbers if we change the signature.

Maths Exercise: The natural numbers

(N, 0, +) are isomorphic to the free monoid over Σ = {s}.

Answer. [6]

The induction principle: For any monoid

(M, e, ⋅) and any m ∈ M there is a unique monoid morphism f : N → M satisfying

f(0) = e
f(n + 1) = m ⋅ f(n)

f(n) is more commonly written as m^n. What the inductive definition explains is why this notation not only makes sense for numbers but generalises to any monoid.

Maths Exercise: Prove the uniqueness of

f.

Answer. [7]

In pseudo Haskell, the induction principle can be defined as (see also the code in induction)

induction2 :: m -> (m->m->m) -> m -> Int -> m
induction2 e mult m 0 = e
induction2 e mult m (n+1) = mult m (induction2 e mult m n)

Again, identifying

f in the two equations above with induction2 e mult m we have a perfect match between code and maths.
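As with induction above, a runnable variant replaces the n+1 pattern by a guard (a sketch under the same caveat):

```haskell
-- runnable version: f 0 = e, f (n+1) = m `mult` f n, i.e. f n = m^n
induction2 :: m -> (m -> m -> m) -> m -> Int -> m
induction2 e mult m 0         = e
induction2 e mult m n | n > 0 = mult m (induction2 e mult m (n - 1))
```

In the multiplicative monoid (Integer, 1, *) this computes m^n; in (Integer, 0, +) it computes n * m.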

Exercise: Adapt the definitions above to the situation where one does not have a neutral element. [8]

Induction over Words/Lists

Words (= lists) are algebras for two different signatures. First, we have the signature where each letter is a unary operation. Second, we have the signature of monoids. Induction over words usually refers to the first signature. Let us review an example.

If we write

data List a = Nil | Cons a (List a)

we define List a as a particular algebra (in fact the initial algebra or termalgebra, see below) for the signature [9]

Σ = {Nil} ∪ {Cons x | x ∈ a}

where

Nil is a constant (arity 0) and the Cons x are unary (arity 1). When we define the length of a list via

len :: List a -> Integer
len Nil = 0
len (Cons x xs) = 1 + len xs

we define len as a structure preserving map, in fact a

Σ-algebra homomorphism.

Exercise: For which algebraic structure on Integer is len a

Σ-algebra homomorphism?

Answer. [10]

We used the following induction principle to extend the transition function of an automaton from letters to words.

Let

(M, e, ⋅) be a monoid. Any function f : Σ → M has a unique extension f* : Σ* → M.

In more detail: for any f : Σ → M there is a unique monoid homomorphism

f* : Σ* → M

such that

f*([a]) = f(a).

In Haskell, the induction principle can be defined as (see also the code in induction)

fold :: m -> (m->m->m) -> (a->m) -> List a -> m
fold e mult f Nil = e
fold e mult f (Cons x xs) = mult (f x) (fold e mult f xs)  

The first two arguments are the monoid

M and the third argument is the function
f:ΣM
.

Programming Exercise: Implement the following functions using fold.

  • a function List Int -> Int computing the sum of a list of integers.
  • a function List Int -> Int computing the product of a list of integers.
  • a function List a -> Int computing the length of a list.
  • a function List (List a) -> List a that flattens a list (like concat does for Haskell's predefined lists).
  • a function List (List a) -> Int that computes the sum of the lengths of the lists in a list.

Btw, instead of a programming exercise, this can also be seen as an exercise in writing down mathematical definitions.
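As a sketch of how such solutions look, here is the first item (the sum of a list of integers) written with fold; the remaining items follow the same pattern with a different choice of monoid:

```haskell
data List a = Nil | Cons a (List a)

fold :: m -> (m -> m -> m) -> (a -> m) -> List a -> m
fold e mult f Nil         = e
fold e mult f (Cons x xs) = mult (f x) (fold e mult f xs)

-- sum of a list of integers: the monoid (Int, 0, +) and f = id
sumList :: List Int -> Int
sumList = fold 0 (+) id
```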

Programming Exercise: Redo the exercises above with Haskell's predefined lists.

Recursion over Algebraic Data Type

Every signature gives rise to a recursion principle. We can take for example binary trees.

By the way, here we see again why for applications it is usually necessary/convenient to generalise our definitions so that they allow for many-sorted/typed algebras. For example, for binary trees over integers, we want two sorts/types, say, T for trees and Z for integers, and operations [11]

Nil : 1 → T
Node : Z × T × T → T

The first makes an empty tree and the second makes a bigger tree from a left subtree, a right subtree and an integer.
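In Haskell, with the integers taken as a parameter, the two operations become the two constructors of a data type, and every operation of the signature gives one clause of a structural recursion (Tree and sumTree are our illustrative names, and the argument order of Node is a choice):

```haskell
-- Nil : 1 -> T  and  Node : Z x T x T -> T, as constructors
data Tree = Nil | Node Integer Tree Tree deriving Show

-- a structural recursion: one clause per operation of the signature
sumTree :: Tree -> Integer
sumTree Nil          = 0
sumTree (Node n l r) = n + sumTree l + sumTree r
```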

See the Homework on Algebraic Data Types below for more.

Theory

(initial and free algebras will play an essential role in the next two lectures)

We continue from the section on axioms and universal algebra.

The principle of induction we have seen at work in the previous examples is induction over the set of all terms formed from the operations of a signature. The set of these terms forms again an algebra. For example, the set

{O, SO, SSO, …} is closed under taking successors.

Definition: Let

Σ be a signature and let X be a set of "variables". The set Term_{Σ,X} of all "terms" is the smallest set closed under the rules

    t_1, …, t_n ∈ Term_{Σ,X}  ⟹  σ(t_1, …, t_n) ∈ Term_{Σ,X}    (σ ∈ Σ, arity(σ) = n)
    x ∈ X  ⟹  x ∈ Term_{Σ,X}

We write

TermΣ if
X
is empty.
TermΣ,X
is a
Σ
-algebra, the termalgebra over
Σ
and
X
, under the operations
σ:(TermΣ,X)nTermΣ,X

defined by the rule on the left above.
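As an illustration, terms over a signature can be represented in Haskell along the following lines (a sketch; the representation with operation symbols as strings is ours):

```haskell
-- terms over a signature: either a variable from X, or an operation
-- symbol applied to a list of subterms
data Term = Var String | Op String [Term] deriving (Eq, Show)

-- example: the monoid term x . (y . z)
example :: Term
example = Op "mult" [Var "x", Op "mult" [Var "y", Var "z"]]

-- closed terms (no variables) form the termalgebra Term_Sigma
closed :: Term -> Bool
closed (Var _)   = False
closed (Op _ ts) = all closed ts
```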

Proposition: Fix a signature

Σ.

  1. For any algebra A there is a unique homomorphism

     Term_Σ → A

  2. For any algebra A, set of variables X, and function f : X → A, there is a unique homomorphism

     f* : Term_{Σ,X} → A

     such that f*(x) = f(x).

Maths Exercise: Prove the proposition by induction on the structure of the termalgebra.

Maths Exercise: In the section on examples on induction and the program induction, we saw the definition

fold :: m -> (m->m->m) -> (a->m) -> List a -> m
fold e mult f Nil = e
fold e mult f (Cons x xs) = mult (f x) (fold e mult f xs)

Show that fold is the unique homomorphism from the free monoid List a over a to a monoid (m, e, mult) with respect to f :: a -> m, that is,

fold e mult f = f*

Answer. [12]

The existence and uniqueness of the homomorphism is called the universal property of the termalgebra. An algebra with Property 1 is also called an initial algebra and an algebra with Property 2 a free algebra over X.

Notice that the fact that the termalgebra has the universal property is proved by induction. Conversely, when we study category theory in the next lecture, we will take the universal property of an initial algebra as an axiomatisation of induction.

Applications

(can be skipped if the importance of structure preserving maps is appreciated)

We will present two applications of the concept of structure preserving maps, one to mathematics and one to theoretical computer science. An application to programming will be presented as homework below.

  1. The mathematical application uses the notion of structure preserving map to answer the question of whether there are more fractions than integers in two different, seemingly contradictory ways. The apparent paradox will be resolved by paying attention to the notion of structure implicit in the two answers.

  2. The theoretical computer science application uses the notion of structure preserving map to give a mathematical answer to the following question. What is the meaning of a programming language?

  3. The application to programming is to show that the unique homomorphism from the termalgebra can be programmed by recursion over an algebraic data type.

Are there more rational numbers than integers?

What do you think: Are there more fractions than integers?

(This is one of those questions where both yes and no can be a correct answer depending on how you justify it.)

Convention: I restrict my attention to positive rationals and positive integers and I denote them by

Q and N, respectively. The answers remain true if we extend to all rationals and integers.

Answer 1: There are strictly more rationals than integers

It seems obvious that there are more rationals than integers.

We can visualise this by looking at the numberline where there are gaps between integers, but the rationals are dense.

In symbolic notation, we look at the inclusion

N → Q

which is injective but not surjective, hence not bijective: Every integer is a rational, but there are rationals that are not integers.

Answer 2: There are at least as many integers as rationals

Can we encode rationals by integers? After all, we know that rationals are stored in a computer as sequences of bits and that every sequence of bit is an integer in binary notation. This suggests that there is an injective function

Q → N

showing that there are at least as many rationals as integers. Equivalently, and easier to explain, we can also look for a surjective function

N → Q

We guess, from the fact that rationals can be represented on a digital computer, that such a map must exist.

But what is an easy way to describe such a map explicitly?

It was found by Cantor and is most easily given by a picture. You can derive from the picture an algorithm that computes the functions

Q → N and
N → Q
via going through a while loop. You can also challenge your math-muscles and derive an explicit formula that does not need going through a loop.

Remark: The map

N → Q given by the picture assigns more than one code to each rational, e.g. 1/2 = 2/4. Thus the picture gives a surjective but not bijective map N → Q. But we can define a bijective map from it by eliminating duplicate codes.
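The diagonal walk can be sketched in Haskell: enumerate the pairs (p, q) of positive integers along the diagonals p + q = d, read each pair as the fraction p/q, and filter out unreduced fractions to eliminate the duplicate codes (the names pairs/reduced are ours):

```haskell
-- Cantor's diagonals: all pairs (p, q) of positive integers
pairs :: [(Integer, Integer)]
pairs = [ (p, d - p) | d <- [2 ..], p <- [1 .. d - 1] ]

-- keep only reduced fractions, eliminating duplicate codes like 2/4 for 1/2
reduced :: [(Integer, Integer)]
reduced = [ (p, q) | (p, q) <- pairs, gcd p q == 1 ]
```

The position of (p, q) in pairs gives the code of the rational p/q; reduced enumerates each positive rational exactly once.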

It seems that we reached a contradiction between Answer 1 and Answer 2.

Thinking about it, it looks like Answer 2 is winning, because in reaching Answer 1 we made a mistake. We argued that a map

A → B that is injective but not surjective proves that there are strictly more B than A. This is true for finite sets, but wrong for infinite sets.

Nevertheless, Answer 1 looks intuitively correct. So can we restore our intuition and justify it by an explanation that stands up to scientific standards of scrutiny?

The answer is yes and what we need for the justification is the notion of structure preserving map.

Exercise/Activity: Can you lead the development we abandoned with the last sentence to a satisfying conclusion?

Denotational Semantics of Programming Languages

The abstract syntax of a programming language can be described in terms of universal algebra. For example, the abstract syntax of

λ-calculus

data Exp = Var String | App Exp Exp | Abs String Exp

corresponds to a two-sorted universal algebra, one sort String for variables and one sort for expressions, as well as two operations, one for application and one for abstraction.

Before we come back to

λ-calculus let us first look at the example of arithmetic. For example, if we take as abstract syntax of a simple language of arithmetic

data Exp = One | Add Exp Exp | Mult Exp Exp

this corresponds to a signature

Σ = {1, +, ⋅}. The meaning of this language can be described as the unique homomorphism

Term_Σ → (N, 1, +, ⋅)
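This homomorphism can be sketched as a runnable evaluator, writing One for the constant (1 is not a legal Haskell constructor name); eval has exactly one clause per operation symbol:

```haskell
data Exp = One | Add Exp Exp | Mult Exp Exp

-- the unique homomorphism Term_Sigma -> (N, 1, +, *)
eval :: Exp -> Integer
eval One        = 1
eval (Add s t)  = eval s + eval t
eval (Mult s t) = eval s * eval t
```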

For more on this perspective see my lecture on operational and denotational semantics.

Now let us quickly come back to the question why we need to go beyond universal algebra, if we want to extend these techniques from arithmetic to

λ-calculus and other programming languages.

  • Universal algebra only supports operations of type
    A^n → A
    . To treat typed functional programming languages, we need to extend universal algebra with a more expressive type system, and this is one reason why category theory, to be introduced in the next lecture, is of such importance in programming language research.
  • A more subtle point is that universal algebra does not provide us with tools to deal with variable binding, the distinction between free and bound variables,
    α
    -equivalence, capture avoiding substitution. [13] [14] These issues are pursued in an area of theoretical computer science known as nominal sets or nominal techniques. [15]

For more on variables and binding see my lecture.

Homework

(Maths) X* is the free monoid over X

(prove that List a is the free monoid over a)

In the previous lecture we defined

X* (or List a) and called it the free monoid. In the next lecture we will see an independent definition of free algebra. To show that these two definitions agree, one has to show the following.

Let

(M, e, ⋅) be a monoid. For any map f : X → M there is a unique monoid homomorphism f* : X* → M such that f*([x]) = f(x).

  • Define multiplication (= concatenation) on lists inductively.
  • Define f* inductively.
  • Show that f*([x]) = f(x).
  • Show that f*([]) = e.
  • Show that f* preserves multiplication.
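A sketch of the first two items in Haskell (fstar is our name for the extension f*; the monoid is passed as e and op):

```haskell
data List a = Nil | Cons a (List a) deriving (Eq, Show)

-- multiplication (= concatenation), defined inductively
mult :: List a -> List a -> List a
mult Nil         ys = ys
mult (Cons x xs) ys = Cons x (mult xs ys)

-- the extension f* of f to lists, for a monoid given by e and op
fstar :: m -> (m -> m -> m) -> (a -> m) -> List a -> m
fstar e op f Nil         = e
fstar e op f (Cons x xs) = op (f x) (fstar e op f xs)
```

The remaining items are then equational proofs about these two definitions.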

Remark: We will see in Lecture 4 that all free algebras correspond to monads, and vice versa. But there is something special about the example of lists:

  • X* is the free algebra over the empty set for the signature Σ = {[]} ∪ X, where the arity of [] is 0 and the arity of the elements x ∈ X is 1.
  • X* is the free monoid over X.

It may be interesting to think about the analogy between List a and BTree a, see below, and to try to turn BTree into a monad/free algebra over a.

(Programming) Recursion over Algebraic Data Types

(if you want to do some programming, this homework illustrates that inductively-defined structure-preserving maps on termalgebras can be run on a computer)

As an application I will show that we can program with the universal property of initial algebras, at least if no equations are involved. I will use Haskell, but everything is entirely general and can be made to work in any programming language (possibly in a more cumbersome notation).

Head over to algebraic-data-types to run and implement some Haskell functions defined by the universal property of initial algebras.

Type :load main.hs into the console on the right to call individual functions and use "run" to execute the test defined in main.

We have seen that the natural numbers come with at least two induction principles, one for the signature of zero and successor and one for the natural numbers as monoids. The first one has the advantage that the definition of the natural numbers via zero and successor does not involve any equations. For such structures Haskell can run definitions by induction automatically.

The first line defines the natural numbers as an initial algebra (algebraic data type) in Haskell:

data NN = Zero | Succ NN

Mathematically, we would write this as two functions

Zero : 1 → N
Succ : N → N

If you already know about addition on

N, you want Succ(n) = 1 + n. But the perspective we are taking here is that we define the natural numbers from scratch.

So we go the other way round and define addition by induction on the structure that defines

N, namely zero and successor:

Zero + y = y
Succ(n) + y = Succ(n + y)

Exercise: Show, by induction on

n, that
Succ(n)=1+n
.

In Haskell, the definition of addition looks almost the same: [16]

add Zero y = y
add (Succ n) y = Succ (add n y)

One advantage of the Haskell syntax is that it actually runs. Enter in the console

add (Succ Zero) (Succ Zero)

to obtain the result

Succ (Succ Zero)

Next, we define in Haskell the free monoid over a type a

data List a = Nil | Cons a (List a)

Mathematically, replacing now a with

X, we would write this as two functions [17]

Nil : 1 → X*
Cons : X × X* → X*

At this point, I slowly start to prefer Haskell to maths. Judge for yourself, but if you head over to adt.hs, definitions such as

len Nil = 0
len (Cons x xs) = 1 + len xs

are neat enough that I don't want to repeat them in mathematical notation. (But try it as an exercise, just to make sure you can translate between the two ways of writing.)

After we have done this, we will also be in a position to understand better the remark that these operations are generic. For example, replay what we did so far for the data type of binary trees

data BTree a = Nil | Node (BTree a) (BTree a) a 

and compare BTree and List.

With binary trees one can define heap sort. The idea is to transform a list into a heap and then to flatten the heap into a sorted list.

Here is a definition of a heap:

  • Nil is a heap.
  • Node t s n is a heap if t and s are heaps and all nodes in t and in s are ≤ n.

Define the following functions.

data BTree a = Nil | Node (BTree a) (BTree a) a 

member :: Eq a => BTree a -> a -> Bool

max_tree :: BTree Int -> Int

insert :: BTree Int -> Int -> BTree Int

heapify :: [Int] -> BTree Int

is_heap :: BTree Int -> Bool

flatten :: BTree Int -> [Int]

They should satisfy the property that

flatten(heapify(xs)) is sorted, as well as the equations

flatten(heapify(xs)) = xs    if xs is sorted
is_heap(heapify(xs)) = True
...
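As a warm-up, here is a sketch of the first function, member, by plain structural recursion (it does not yet exploit the heap property; using max_tree one could prune the search):

```haskell
data BTree a = Nil | Node (BTree a) (BTree a) a

-- membership by structural recursion over the tree
member :: Eq a => BTree a -> a -> Bool
member Nil          _ = False
member (Node l r x) y = x == y || member l y || member r y
```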

(Programming) A Binary Tree monad

For the binary trees above there is no obvious way to define a mu :: BTree (BTree a) -> BTree a. But if we store data only in the leaves, that changes. At magma, you can implement a monad structure on

data Magma a = Leaf a | Node (Magma a) (Magma a)

From a programming point of view Magma a is a kind of binary tree. But Magma a is also the free magma over a. [18]
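One possible monad structure (a sketch, not necessarily the solution expected at magma): return builds a leaf, and bind substitutes a tree for every leaf, i.e. grafts trees onto leaves; mu is then bind with the identity.

```haskell
data Magma a = Leaf a | Node (Magma a) (Magma a) deriving (Eq, Show)

instance Functor Magma where
  fmap f (Leaf a)   = Leaf (f a)
  fmap f (Node l r) = Node (fmap f l) (fmap f r)

instance Applicative Magma where
  pure = Leaf
  Leaf f   <*> m = fmap f m
  Node l r <*> m = Node (l <*> m) (r <*> m)

instance Monad Magma where
  Leaf a   >>= f = f a                          -- substitute at the leaf
  Node l r >>= f = Node (l >>= f) (r >>= f)     -- graft recursively

-- mu flattens a tree of trees by grafting the inner trees onto the leaves
mu :: Magma (Magma a) -> Magma a
mu m = m >>= id
```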

Summary and Outlook

In this lecture, we have already seen a lot of category theory without calling it that. Indeed, by now, many of the main protagonists have appeared: categories, functors, natural transformations, initial objects, free algebras and left adjoints, monads, universal properties. So it is time to reveal this structure in the next lecture.

References and Further Reading

ask me if you are interested


  1. Why is

    x ↦ 2x not a proper clock? Which function corresponds to a clock that has run out of battery? ↩︎

  2. It is constructed from two Greek words meaning "similar form". ↩︎

  3. We use the same symbols

    0 and + for two different operations. This kind of overloading is widely used both in mathematics and in programming. ↩︎

  4. The

    Σ-termalgebra has elements O, SO, SSO, etc. Define a function from N to the termalgebra by induction: f(0) = O and f(n + 1) = S f(n). This function is bijective, hence an isomorphism. ↩︎

  5. By the property of a homomorphism, we must have

    f(0) = O and f(n + 1) = s(f(n)). This determines f uniquely. ↩︎

  6. The free monoid

    M over Σ = {s} has elements ϵ, s, ss, sss, …. Define f : M → N by f(w) = length(w). Then f is a bijection (why?). And f preserves the monoid structure. Indeed, f(ϵ) = 0 and f(wv) = length(wv) = length(w) + length(v) = f(w) + f(v). ↩︎

  7. Consider

    f′ with f′(1) = m. We have to show f = f′. We know f(0) = f′(0). (Why?) Moreover, for n = 1 + ⋯ + 1, n > 0, we have f(n) = f(1 + ⋯ + 1) = f(1) ⋅ ⋯ ⋅ f(1) = f′(1) ⋅ ⋯ ⋅ f′(1) = f′(1 + ⋯ + 1) = f′(n). ↩︎

  8. The structure arising from dropping the neutral element and identity laws from the definition of monoids is known as semi-group. ↩︎

  9. Instead of

    x ∈ a we could also write x :: a, using Haskell's notation x :: a to indicate that x is an element of type a. ↩︎

  10. To understand len as a

    Σ-algebra homomorphism, we need to think of Integer as a Σ-algebra, that is, we need an interpretation on Integer of the operation symbols Nil and Cons x. The Haskell definition of len does this implicitly: Nil is interpreted as 0 and all Cons x are interpreted as 1+. ↩︎

  11. From a universal algebra point of view, there are two ways to treat

    Z in the example of binary trees. We can take Z as a parameter of the definition. Or we can take Z as a second sort and explicitly add the integer operations with their equations to a 2-sorted algebra. There can be subtle differences between these two ways of modelling, but they do not need to concern us here. In case you want to know: for what we are doing here, I find it easier to consider Z as a parameter. ↩︎

  12. We let

    Σ be the signature of monoids (one constant, one binary operation). Then List a is isomorphic to the termalgebra Term_{Σ,a}.

    If e :: m and mult :: m -> m -> m satisfy the equations of a monoid, then (m, e, mult) is a monoid. Therefore, by the universal property of the termalgebra, any f :: a -> m induces a unique monoid homomorphism

    f* :: Term_{Σ,a} → (m, e, mult).

    It remains to show that f* is indeed fold e mult f. This is similar to this exercise. ↩︎

  13. For example, the capture avoiding substitution of

    λ-calculus is very different from the substitution into free variables one has in universal algebra. ↩︎

  14. Nevertheless, universal algebraic techniques can be used to prove powerful results about

    λ-calculus, see eg the work of Antonino Salibra. ↩︎

  15. See the work by Gabbay and Pitts. ↩︎

  16. If you want to understand why Haskell syntax is different there are two (related) reasons. The first is historical: Haskell inherited its syntax from the

    λ-calculus. The second is that Haskell syntax is superior to the conventional maths syntax if one works with higher order functions. ↩︎

  17. If you wonder why the Haskell notation and the mathematical notation are much closer for the functions like addition than for the definition of the data types: We will be able to remedy this in the next lecture when we introduce functors. In fact, a definition like data List a is best understood as defining a functor

    X ↦ List(X). ↩︎

  18. Let

    Σ be a signature with one binary operation symbol. Then Magma a is Term_{Σ,a}. ↩︎