
Term Rewriting Systems (TRSs)

Introduction

In this lecture we can put together almost everything that we have seen so far. Let us look again at one of our examples.

  • In one of the first lectures, we discussed the high-school algebra problem that was asking whether the equations we know from high-school algebra are enough to do all possible computations allowed in the natural numbers.

  • In the meantime, we have done enough work to be able to unpack the question above. By now we can be precise about all of the following.

    • "equations"
    • "are enough"
    • "all possible"
    • "computations"
    • "allowed in the natural numbers"

Exercise: Explain precisely the meaning of "equations", "are enough", "all possible", "computations", "allowed in the natural numbers".

Part of the answer to the exercise above was computation as rewriting to normal form. So far we took as a model of computation an abstract reduction system (ARS). While each computer, virtual machine, or program can be understood as an ARS, there are many ARSs that are not models of computation. Therefore, in computer science various refinements of this notion are studied.

In this lecture, we are refining ARSs as follows. The elements of an ARS $(A,\to)$ are now terms, that is, $A=\mathsf{Term}_\Sigma(X)$ for some signature $\Sigma$ and some set $X$ of variables. A rewrite rule will be an equation in $\mathsf{Term}_\Sigma(X)$. And the reduction relation $\to$ is now generated by a set of rewrite rules/equations $E$. To emphasise that $\to$ is generated by $E$ we write $\to_E$.

To summarise the lecture, we will explain the move from ARSs

$$(A,\to)$$

to TRSs

$$(\mathsf{Term}_\Sigma(X),\to_E).$$

As it will turn out, a TRS is completely specified by its equations $E$. In a previous lecture, we presented the mathematical background on equations, but I hope that most of the lecture will be readable with an intuitive understanding of equations.

Aims

The aim of the lecture is to answer precisely, among others, the following questions.

  • How are the ARSs from the exercises examples of TRSs?

  • In what sense is computation in a TRS a particular case of reasoning in equational logic?

  • In what sense are TRSs models of computation?

After the lecture you should test yourself by answering these questions.

Examples of TRSs

To see what is at stake let us look back at the exercises.

Finitely many rewrite rules vs infinitely many reductions

In the exercises on string rewriting we defined an ARS $(A,\to)$ via

    ab -> ba
    ba -> ab
    aa ->
    b ->

Notice that by writing these 4 rewrite rules, we define infinitely many possible reductions $w\to v$ for infinitely many words $w,v$. For example, the first rule not only defines a reduction $ab\to ba$ but also reductions such as $aabb\to abab$, etc. In fact, for all words $w,w'$, we have a corresponding reduction $wabw'\to wbaw'$.

Summary: The reason to go from an ARS $(A,\to)$ to a TRS is that we can make precise how finitely many rewrite rules generate infinitely many reductions. From a computational point of view, this is crucial: to implement an ARS, we must be able to decide whether a term can be reduced or not. If there are only finitely many rewrite rules, then this can be done mechanically. We will have to say more about this in the Theorem below.
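The phrase "finitely many rules, infinitely many reductions" can be made concrete in a few lines of Python. The sketch below is my own illustration (not code from the lecture): it represents the four rules as pairs of strings and enumerates all one-step reductions of a word by applying each rule at every position where its left-hand side occurs.

```python
# Sketch (my own illustration): the four string-rewrite rules,
# each represented as a pair (left-hand side, right-hand side).
RULES = [("ab", "ba"), ("ba", "ab"), ("aa", ""), ("b", "")]

def one_step(word):
    """All words reachable from `word` in one reduction step,
    i.e. some rule l -> r applied at some position where l occurs."""
    results = []
    for lhs, rhs in RULES:
        for i in range(len(word) - len(lhs) + 1):
            if word[i:i + len(lhs)] == lhs:
                results.append(word[:i] + rhs + word[i + len(lhs):])
    return results

# The rule ab -> ba alone applies inside arbitrarily long words w ab w',
# so the four fixed rules generate infinitely many reductions.
print(one_step("aabb"))
```

Running `one_step` on longer and longer words produces ever more reductions, even though `RULES` stays fixed.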

Signature, Variables, Equations

Recall that an equation is a pair $(t,t')\in\mathsf{Term}_\Sigma(X)\times\mathsf{Term}_\Sigma(X)$. In this lecture, the set of equations of interest will be denoted by $E$ and we write, as before, $t\approx_E t'$ to say that $(t,t')\in E$. The font has been chosen in order to make $t\approx_E t'$ resemble $t\to_E t'$.

Note that, because an equation is just a pair of terms, every equation can also be understood as a rewrite rule. And vice versa, every rewrite rule should be an equation.

Let us look at how this plays out in an example.

Activity 0: In the example

    ab -> ba
    ba -> ab
    aa ->
    b ->

what is the signature $\Sigma$, what is the set of variables $X$, and what is the set of equations? (Hint: Imagine you want to program a text editor and think about what basic operations are needed to create a text file. One certainly needs to be able to create an empty text and then to add letters as one types; this should lead to the answer about the signature. Once you have the signature and the variables, you know what the terms are, hence also pairs of terms, hence also equations.) [1]

Termination

An important question in the discussion of ARSs was termination.

Let us see what we have achieved so far by refining ARSs to TRSs.

In most interesting examples of an ARS $(A,\to)$, the reduction relation $\to$ has infinitely many elements. In general, if there are infinitely many reductions, we cannot be sure to be able to determine whether an element of $A$ is a normal form, as there could be infinitely many reductions to check.

On the other hand, we have seen in Activity 0 an infinite relation $\to$ which is generated by a finite set of equations $E$. In this case checking whether $a\in A$ is a normal form is decidable: $a$ is not a normal form if and only if some left-hand side of one of the finitely many equations matches $a$.
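For the string-rewriting system above, this decision procedure is essentially a one-liner. The following Python snippet is my own sketch: a word is in normal form exactly when no left-hand side occurs in it.

```python
# Sketch: the left-hand sides of the four rules ab->ba, ba->ab, aa->, b->
LEFT_HAND_SIDES = ["ab", "ba", "aa", "b"]

def is_normal_form(word):
    """A word is a normal form iff no left-hand side occurs in it."""
    return not any(lhs in word for lhs in LEFT_HAND_SIDES)

print(is_normal_form("a"))   # the word a cannot be reduced further
print(is_normal_form("ab"))  # ab reduces, e.g. by ab -> ba
```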

We will come back to this observation in the Theorem below.

Computing in a TRS

Maybe you have noticed that by going to a TRS, we have turned the meta-variables into object-variables.

In previous examples, for instance in the equations of the lectures on syntax and semantics, we wrote variables $X,Y,Z$ in capital letters instead of the more familiar $x,y,z$ in order to emphasise that these variables are not part of the object language but rather are placeholders for terms of the object language.

On the other hand, in the term algebra $\mathsf{Term}_\Sigma(\{x,y,z\})$, the variables $x,y,z$ are themselves terms, so part of the object language.

(In software engineering, sometimes the term reification is used for "turning meta into object". So we could say here that we reified variables.)

This opens up the possibility of performing the operation of pattern matching inside the calculus of equational logic.

In fact, as we will see, the rules of substitution and congruence are exactly what is needed for pattern matching.

Let us do a simple computation. (I drop some brackets to make the notation easier to read.)

            max(ss0,sss0) -> s(max(s0,ss0))
                          -> ss(max(0,s0))
                          -> sss0

Activity 1: Justify each of the steps. [2]

Summary: As an outcome of the activity, we now understand that if we formulate our rewrite rules/equations with variables and use the rules of equational logic called substitution and congruence, then we can generate infinitely many rewrite rules from a finite set of equations containing variables, using pattern matching.

As a slogan, the rules of substitution and congruence compute pattern matchings.

What we still need to explain better is what we mean by "generating new rewrite rules from equations using substitution and congruence". But first we should recall again what we mean by "use the rules of equational logic".

Equational logic, recalled

As we have seen in the example above, using just the three rewrite-rules/equations

            max(sx,sy) -> s(max(x,y))
            max(0,x) -> x
            max(x,0) -> x

we can compute the max of arbitrarily large numbers. For example, we can compute max(1024,2048)=2048, where I write 1024 for a string of 1024 s's followed by a 0.
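To make this concrete, here is a self-contained Python sketch (my own illustration; the term representation and function names are not from the lecture or from Baader-Nipkow). Terms are nested tuples whose first component is the operation symbol, variables are plain strings, `match` computes the substitution needed to apply a rule, and `normal_form` chains rewrite steps.

```python
# Sketch (my own illustration): terms are nested tuples
# ("f", arg1, ..., argn); variables are plain strings like "x".

def is_var(t):
    return isinstance(t, str)

def match(pattern, term, subst=None):
    """Return a substitution sigma with pattern instantiated by sigma
    equal to term, or None if no such substitution exists."""
    if subst is None:
        subst = {}
    if is_var(pattern):
        if pattern in subst:
            return subst if subst[pattern] == term else None
        subst[pattern] = term
        return subst
    if is_var(term) or pattern[0] != term[0] or len(pattern) != len(term):
        return None
    for p, t in zip(pattern[1:], term[1:]):
        subst = match(p, t, subst)
        if subst is None:
            return None
    return subst

def apply_subst(t, subst):
    """Apply a substitution to a term."""
    if is_var(t):
        return subst.get(t, t)
    return (t[0],) + tuple(apply_subst(arg, subst) for arg in t[1:])

# The three rewrite rules for max, with object-variables "x" and "y".
RULES = [
    (("max", ("s", "x"), ("s", "y")), ("s", ("max", "x", "y"))),
    (("max", ("0",), "x"), "x"),
    (("max", "x", ("0",)), "x"),
]

def rewrite_step(t):
    """One rewrite step at the root or inside a subterm;
    return the rewritten term, or None if t is a normal form."""
    if is_var(t):
        return None
    for lhs, rhs in RULES:
        sigma = match(lhs, t)
        if sigma is not None:
            return apply_subst(rhs, sigma)
    for i, arg in enumerate(t[1:], start=1):
        new_arg = rewrite_step(arg)
        if new_arg is not None:
            return t[:i] + (new_arg,) + t[i + 1:]
    return None

def normal_form(t):
    """Rewrite until no rule applies."""
    while (step := rewrite_step(t)) is not None:
        t = step
    return t

def num(n):
    """The Peano numeral s(s(...s(0)...)) with n occurrences of s."""
    t = ("0",)
    for _ in range(n):
        t = ("s", t)
    return t

print(normal_form(("max", num(2), num(3))) == num(3))  # prints True
```

Because the three rules are terminating, the `while` loop in `normal_form` always stops; confluence guarantees that the result does not depend on which redex is chosen first.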

As we explained earlier, each step in such a computation involves the application of a rule of equational logic. Here are these rules again.

The first three rules say that $=$ is an equivalence relation. From a computational point of view the important rule is (trans), which allows us to chain computations together: transitivity corresponds to having a loop in computational terms. The last two rules are what will allow us to do pattern matching, as explained below.

$$\frac{}{t = t}\ (\text{refl}) \qquad \frac{t_1 = t_2}{t_2 = t_1}\ (\text{sym}) \qquad \frac{t_1 = t_2 \quad t_2 = t_3}{t_1 = t_3}\ (\text{trans})$$

and, for all $n$-ary operations $f$, a congruence rule

$$\frac{t_1 = t_1' \quad \dots \quad t_n = t_n'}{f(t_1,\dots,t_n) = f(t_1',\dots,t_n')}\ (\text{cong})$$

and a rule for substitution

$$\frac{t_1 = t_2}{t_1[x \mapsto t] = t_2[x \mapsto t]}\ (\text{subst})$$
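The rule (subst) is what turns an equation containing variables into its instances. For example (a worked instance of my own, not from the lecture), applying (subst) with the substitution $[x\mapsto s0]$ to the equation max(0,x) = x from the list above gives

$$\frac{\mathit{max}(0,x) = x}{\mathit{max}(0,s0) = s0}\ (\text{subst})$$

which is exactly the kind of ground instance needed when rewriting concrete numerals.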

Also recall $=_E$ for the set of equations that can be derived from a set of "axioms" $E$ using the rules above.

Pattern matching

Informally, I understand by pattern matching the following. If we have a structure $S$ and a pattern $P$ containing variables from a set $X$, is there a substructure $S'$ of $S$ and a substitution such that $P$, after applying the substitution, equals $S'$?

It is common to think of the variables in $P$ as "holes" and of the substitution as "filling the holes".

In our situation, we can be more precise.

The structure will be a term $t$, typically not containing variables, and the pattern will be the left-hand side $l$ of a rewrite rule/equation $(l,r)$, typically containing variables. We then are looking for a subterm $t'$ of $t$ and a substitution $\sigma$ such that $t'=l\sigma$.

$t'=l\sigma$ is also called a redex, for reducible expression.

To decide whether, given $t$ and $l$, there is a substitution $\sigma$ such that $t=l\sigma$, and to compute $\sigma$ if it exists, is called the matching problem; see also Baader-Nipkow, Section 4.1.

If there are only finitely many equations, then it is decidable whether a term $t$ contains a redex.

The rewrite relation associated to a set of equations

Activity 2: Looking back at the derivation

            max(ss0,sss0) -> s(max(s0,ss0))
                          -> ss(max(0,s0))
                          -> sss0

from the point of view of equational logic, how many steps of equational reasoning does it take to prove that max(ss0,sss0) = sss0? [3]

The number coming out of the activity is considerably bigger than 3. But we would like to say that we took 3 rewrite steps. The solution is to define from the equations $E$ a new relation $\to_E$ so that, in the example above, if we let $E$ be the equations

            max(sx,sy) = s(max(x,y))
            max(0,x) = x
            max(x,0) = x

then $\to_E$ is defined in such a way that, taking `->` to be $\to_E$, the derivation considered in Activity 2 takes exactly three steps.

So let us define $\to_E$.

Definition: Let $E$ be a set of equations in a signature $\Sigma$ and a set $X$ of variables. Let $t_1,t_2\in\mathsf{Term}_\Sigma(X)$. We define

$$t_1\to_E t_2$$

if there is an equation $l=r$ in $E$ and the equation $t_1=t_2$ can be derived from $l=r$ with one application of the substitution rule and any number of applications of the congruence rule.

Activity 3: What is the relationship between $\to_E$ and $=_E$? [4]

We can now summarise what we learned from Activity 1 by saying that if $E$ is finite and $\to_E$ is confluent and terminating, then it is decidable whether two terms are equal according to $E$.

We will formulate this as a theorem in the next section.

Term rewriting

We summarise the work we have done in the examples above by listing again the most important definitions and results. As we only have time to present the bare bones of term rewriting, we recommend Baader-Nipkow to the reader interested in more detail.

Our definition of a TRS is the same as Def.4.2.1 in Baader-Nipkow.

Definition (TRS): Fix a signature and a set of variables. Let $\mathsf{Term}$ be the corresponding term algebra. Then a rewrite rule $l\to r$ is an equation on $\mathsf{Term}$.

Recall the section on substitution.

Definition (matching): We say that a term $l$ matches $t$ if there is a substitution $\sigma$ such that $\sigma(l)=t$.

Proposition: Matching is decidable.

For more on matching see Section 4.1 in Baader-Nipkow.

The rewrite relation $\to_E$ associated to a set of equations $E$ is Def.3.1.8 in Baader-Nipkow. We give a more conceptual, less technical definition.

Definition ($\to_E$): Let $E$ be a set of equations in a signature $\Sigma$ and a set $X$ of variables. Let $t_1,t_2\in\mathsf{Term}_\Sigma(X)$. We define

$$t_1\to_E t_2$$

if there is an equation $(l,r)$ in $E$ and the equation $t_1=t_2$ can be derived from $l=r$ with one application of the substitution rule and any number of applications of the congruence and reflexivity rules.

The next theorem shows that every confluent and terminating TRS is a model of computation. This is Thm 4.1.1 in Baader-Nipkow.

Theorem: If $E$ is finite and $\to_E$ is confluent and terminating, then it is decidable whether two terms are equal according to $E$.

Proof: We have seen in Activity 3 that $=_E$ equals $\leftrightarrow^*_E$. So in order to decide whether $t_1=_E t_2$ we use confluence and termination to rewrite $t_1,t_2$ to normal form using $\to_E$. This takes a finite amount of time for the following reasons. First, $E$ is finite. Second, to check whether an equation $l=r$ in $E$ can be used to generate a rewrite $t\to_E t'$ for a given term $t$ takes only a finite amount of time. Third, $\to_E$ is terminating. Now $t_1=_E t_2$ holds if and only if the normal forms of $t_1$ and $t_2$ are identical.


  1. $\Sigma=\{\epsilon,a,b\}$ where $\epsilon$ is a constant and $a,b$ are unary. For the set of variables we can choose $X=\{x\}$ and then the equations are $a(b(x))\approx_E b(a(x))$, $b(a(x))\approx_E a(b(x))$, $a(a(x))\approx_E x$, $b(x)\approx_E x$. ↩︎

  2. In the first step, we apply the rewrite-rule/equation max(sx,sy) -> s(max(x,y)). In order to do this we need to apply the substitution x:=s0, y:=ss0, or, written in more mathematical notation, $[x\mapsto s0,\ y\mapsto ss0]$. This gives us the rewrite-rule/equation max(ss0,sss0) -> s(max(s0,ss0)) as needed for the first step.
    In the second step, substitution gives us max(s0,ss0) -> s(max(0,s0)), from which, in turn, we derive the rewrite-rule/equation s(max(s0,ss0)) -> ss(max(0,s0)). The rule of equational logic that justifies this step is called the congruence rule. ↩︎

  3. If I counted correctly, it is 8 applications of rules of equational reasoning: (subst), (subst), (cong), (subst), (cong), (cong), (trans), (trans). ↩︎

  4. The answer is that $=_E$ equals $\leftrightarrow^*_E$ (this is Theorem 3.1.12 in Baader-Nipkow). For the definition of $\leftrightarrow^*$ see the lecture on ARSs. ↩︎