$\newcommand{\sem}[1]{[\![#1]\!]}$

# Computation as Proof (under construction)

There is a deep connection between computation and mathematical proof. In the lecture on [Denotational and Operational Semantics](https://hackmd.io/@alexhkurz/Hkf6BTL6P) we discussed the adequacy of operational and denotational semantics. In this lecture we will recast this as a result of logic: the soundness and completeness of the high-school algebra equations for natural numbers.

The connection between the computational view and the logical view comes from the fact that in this example every computation corresponds to a mathematical proof and, vice versa, every mathematical equality has a proof "by computation". In more sophisticated programming languages and logics this correspondence is known as the **Curry–Howard isomorphism**. In case you want to pursue this fascinating topic, a quick look at [Wikipedia](https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence) can give you a rough idea of the scope of the subject. This [Gentle Introduction to Curry-Howard Isomorphism](https://dev.to/donaldkellett/a-gentle-introduction-to-curry-howard-isomorphism-10in) picks up on ideas we have already seen in this course and should be a good starting point. Let me know if you are interested in following this up.

## Introduction

### Recap: Denotational Semantics

As discussed in [a previous lecture](https://hackmd.io/@alexhkurz/BJkd1qSDS), we often describe the meaning of a language (or, more generally, of any kind of formalism) with the help of a meaning function $$\sem{-} : \cal L \to \cal D$$ where $\cal L$ is a "language" that needs to be explained and $\cal D$ is the domain of interpretation.

**Examples:** When Descartes said in $\mathcal L=$ Latin "cogito ergo sum", we interpret this in $\cal D=$ English as "I think therefore I am". Here are some more examples.
$\cal L$ | $\cal D$
:---:|:---:
German | English
Java | Java Virtual Machine
C++ | LLVM
Arithmetic | Mathematical Numbers

**Exercise:** Make your own examples. Explain the calculator you implemented in terms of the table above.

### Meaning via a Computable Equivalence Relation

To take up the example of natural language translation again: How did you learn your first language? Certainly not by translating it into another language. Or how does a [thesaurus](https://www.thesaurus.com/browse/small) work? Or, for example, if you wanted to look up a word like [epistemology](https://www.merriam-webster.com/dictionary/epistemology) in a dictionary, a translation into another language would probably not help. Instead we are given an explanation in the same language, that is, we are given an equivalent expression.

**Remark:** Note that when I write "is equivalent to" above, I am referring to an [equivalence relation](https://hackmd.io/@alexhkurz/SJ1cc-dDr). We will denote equivalence relations by various symbols such as $=$ or $\approx$ or $\equiv$.

One idea of this lecture is that we can describe a meaning function $$\sem{-} : \cal L \to \cal D$$ without having to rely on an a priori understanding of a known domain $\cal D$, if we can describe the equivalence relation $t\equiv t'$ defined as $$ t\equiv t' \ \stackrel{\mathrm{def}}{\Longleftrightarrow} \ \sem{t}=\sem{t'}$$ in a computable way.

**Notation:** We used $\approx$ before as a symbol for equivalence in our concrete examples, but it is also quite common to use $\equiv$ as a symbol for an equivalence relation. We tend to use $\equiv$ if we think of the equivalence relation as specified by a meaning function. But this is only a difference in point of view. As we will see in this lecture, every meaning function gives an equivalence relation and, conversely, every equivalence relation defines a meaning function.

## Recap: Equivalence Relations

**Exercise:**

- Let $f:A\to B$ be a function.
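The idea that a meaning function induces a computable equivalence can be sketched in a few lines of Python. This is only an illustration, not part of the lecture: the names `meaning` and `equivalent` are made up here, and we simply borrow Python's own evaluator as the domain of interpretation.

```python
# A hypothetical sketch: a meaning function [[-]] and the equivalence it induces.
# The names `meaning` and `equivalent` are illustrative, not from the lecture.

def meaning(expr: str) -> int:
    """Denotational semantics: interpret an arithmetic string as a number."""
    return eval(expr)  # for this sketch, Python itself plays the role of D

def equivalent(t: str, t_prime: str) -> bool:
    """t == t' holds iff [[t]] = [[t']]: the equivalence induced by `meaning`."""
    return meaning(t) == meaning(t_prime)

print(equivalent("(1+1)+1", "1+(1+1)"))  # True: different syntax, same meaning
print(equivalent("1+1", "1*1"))          # False: [[1+1]] = 2 but [[1*1]] = 1
```

Note that `equivalent` never compares the strings themselves; it only compares their meanings, which is exactly the definition of the induced equivalence relation.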
  Show that $$a\equiv a' \ \stackrel{\mathrm{def}}{\Longleftrightarrow} \ f(a)=f(a')$$ defines an equivalence relation, the ***equivalence relation induced by*** $f$.
- Show that if $f$ is onto, then the induced function $$A/{\equiv}\to B$$ is a bijection.

Why is it important to know that we get a bijection? Because this says that there is a perfect correspondence (one-to-one and onto, also known as an ***isomorphism***) between $A/{\equiv}$ and $B$. In other words, up to a bijective renaming, the set $A/{\equiv}$ is exactly the same set as $B$.

## Soundness and Completeness of Equational Logic

We will now come to the main topic of this lecture, namely a logical reformulation of the adequacy of [operational and denotational semantics](https://hackmd.io/@alexhkurz/BJkd1qSDS). Let us [recall](https://hackmd.io/@alexhkurz/BJkd1qSDS) the formal language

    exp ::= 1 | exp + exp | exp * exp

and its denotational semantics $$\sem{-} :\mathcal L \to \mathbb N$$ which maps an expression to the corresponding number. For example, $(1+1)+1$ and $1+(1+1)$ are mapped to the same number 3, but are different expressions. On the other hand, we also had an operational semantics given by a set of rewrite rules

    X + ( Y + Z ) -> ( X + Y ) + Z
    X * 1         -> X
    X * ( Y + Z ) -> X * Y + X * Z

The connection between the computational rewrite rules and logic is made simply by turning the rewrite rules into equations

\begin{align}
X + ( Y + Z ) & \approx ( X + Y ) + Z \\
X \cdot 1 &\approx X \\
X \cdot ( Y + Z ) & \approx X \cdot Y + X \cdot Z \\
X \cdot ( Y \cdot Z ) & \approx ( X \cdot Y ) \cdot Z \\
X + Y & \approx Y + X \\
X \cdot Y & \approx Y \cdot X
\end{align}

**Question:** Are all the equations above needed? Is there one missing?

**Remark:** In the [discrete maths lecture](https://hackmd.io/@alexhkurz/SJ1cc-dDr) we not only learned about equivalence relations but emphasised equivalence relations that are the equivalence closure $\stackrel{\ast}{\leftrightarrow}$ of a "one-step-computation" relation $\to$.
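To make the correspondence between rewriting and meaning concrete, here is a small Python sketch (not from the lecture). It represents expressions of the grammar above as nested tuples — `1`, `('+', a, b)`, `('*', a, b)` is an assumed encoding chosen for this sketch — computes the denotational semantics, and applies the three rewrite rules at the root of an expression.

```python
# Expression trees for  exp ::= 1 | exp + exp | exp * exp,
# encoded (by assumption, for this sketch) as 1, ('+', a, b), ('*', a, b).

def sem(e):
    """Denotational semantics [[-]]: map an expression tree to its number."""
    if e == 1:
        return 1
    op, a, b = e
    return sem(a) + sem(b) if op == '+' else sem(a) * sem(b)

def step(e):
    """Apply one of the three rewrite rules at the root, or return None."""
    if isinstance(e, tuple):
        op, a, b = e
        if op == '+' and isinstance(b, tuple) and b[0] == '+':
            return ('+', ('+', a, b[1]), b[2])            # X+(Y+Z) -> (X+Y)+Z
        if op == '*' and b == 1:
            return a                                      # X*1 -> X
        if op == '*' and isinstance(b, tuple) and b[0] == '+':
            return ('+', ('*', a, b[1]), ('*', a, b[2]))  # X*(Y+Z) -> X*Y+X*Z
    return None

e = ('+', 1, ('+', 1, 1))      # the expression 1 + (1 + 1)
print(step(e))                 # ('+', ('+', 1, 1), 1), i.e. (1 + 1) + 1
print(sem(e) == sem(step(e)))  # True: a rewrite step preserves the meaning
```

The last line is the computational face of soundness: each rewrite step changes the syntax but not the denotation.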
During the next lectures, we will investigate when such a relation can be used to "rewrite to unique normal form". This is important: if every equivalence class has a unique normal form, then this normal form can be used to represent every element in the class. For example, write $n$ for the normal form of the equivalence class of elements $e$ such that $\sem{e}=3$. We can then identify $n$ with $3$, or, in other words, we can consider $n$ just as a different notation, or encoding, of 3 itself. Thus, as a slogan: what was syntax has become semantics. We have found meaning in syntax.

## Soundness and Completeness

Having put in the work of making precise the meaning of our formal language, we can now also make precise our two questions from the beginning of the lecture. First recall the equations

\begin{align}
X + ( Y + Z ) & \approx ( X + Y ) + Z \\
X \cdot 1 &\approx X \\
X \cdot ( Y + Z ) & \approx X \cdot Y + X \cdot Z \\
X \cdot ( Y \cdot Z ) & \approx ( X \cdot Y ) \cdot Z \\
X + Y & \approx Y + X \\
X \cdot Y & \approx Y \cdot X
\end{align}

We now replace `=` by $\approx$ in order to emphasise that these are equations we impose on syntax. And as syntax, $3+2$ is different from $2+3$. Nevertheless we can impose as a requirement that $3+2\approx 2+3$ and then use this knowledge to compute with, or reason about, the syntax.

A consequence of the equations above is that the syntax "modulo the equations" agrees exactly with the semantics. For example, while $((1+1)+1)+(1+1)$ is different, as a string or a tree, from $(1+1)+((1+1)+1)$, we can now use the equation $X + Y \approx Y + X$ to compute/reason that $$((1+1)+1)+(1+1) \approx (1+1)+((1+1)+1)$$ which we know to be true for natural numbers. In programming-languages jargon, this agreement between syntax and semantics is expressed by saying that the equations are sound and complete.
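The slogan "what was syntax has become semantics" can be tried out in code. The following self-contained Python sketch (illustrative names; the tuple encoding `1`, `('+', a, b)`, `('*', a, b)` is an assumption of this sketch) rewrites anywhere in an expression tree until no rule applies. For these rules the normal forms are left-nested sums of 1s, so two expressions receive the same normal form exactly when they denote the same number.

```python
# A sketch of "rewriting to unique normal form" for the three rewrite rules.

def step(e):
    """Try the three rewrite rules at the root; return None if none applies."""
    if isinstance(e, tuple):
        op, a, b = e
        if op == '+' and isinstance(b, tuple) and b[0] == '+':
            return ('+', ('+', a, b[1]), b[2])            # X+(Y+Z) -> (X+Y)+Z
        if op == '*' and b == 1:
            return a                                      # X*1 -> X
        if op == '*' and isinstance(b, tuple) and b[0] == '+':
            return ('+', ('*', a, b[1]), ('*', a, b[2]))  # X*(Y+Z) -> X*Y+X*Z
    return None

def rewrite(e):
    """One rewrite step anywhere in the tree; None if e is in normal form."""
    r = step(e)
    if r is not None:
        return r
    if isinstance(e, tuple):
        op, a, b = e
        ra = rewrite(a)
        if ra is not None:
            return (op, ra, b)
        rb = rewrite(b)
        if rb is not None:
            return (op, a, rb)
    return None

def normalize(e):
    """Rewrite until no rule applies; the result is a left-nested sum of 1s."""
    while (r := rewrite(e)) is not None:
        e = r
    return e

two_times_two = ('*', ('+', 1, 1), ('+', 1, 1))  # (1+1)*(1+1)
two_plus_two  = ('+', ('+', 1, 1), ('+', 1, 1))  # (1+1)+(1+1)
print(normalize(two_times_two) == normalize(two_plus_two))  # True: both normalize
                                                            # to ((1+1)+1)+1
```

Here the normal form `((1+1)+1)+1` is, up to notation, the number 4 itself: the syntactic representative has taken over the role of the semantic value.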
But we still need to say what precisely we mean by the equations being sound and complete:

##### Soundness:

$\quad t\approx t' \ \Longrightarrow \ \sem{t}=\sem{t'} \quad$ for all $\ t,t'\in\mathcal L$.

##### Completeness:

$\quad \sem{t}=\sem{t'} \ \Longrightarrow \ t\approx t' \quad$ for all $\ t,t'\in\mathcal L$.

##### Exercise:

Convince yourself that the equations are sound.

##### Remark on Completeness:

Remember how we came up with the language and the equations. We started from our knowledge of natural numbers and defined the formal language accordingly. Then we discovered the equations by needing them for computations/proofs. For example, we needed associativity to get $3+2=5$. After adding multiplication, we also needed distributivity. And then we added some more equations we knew to be true. But how do we know that we have enough? How can we be sure that we can now do all computations in our language with this small number of equations?[^first] Or can we do with fewer equations?

[^first]: Of course, everybody who has ever used a calculator knows that the answer is yes. But let us forget for a moment that we know the answer in this example. Let us start thinking from scratch and discover a big idea that also applies to much more difficult examples.
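For the soundness exercise, a proof requires checking each equation against the definition of $\sem{-}$, but we can at least gain confidence by testing. The following Python sketch (illustrative names, not from the lecture; expressions are encoded, by assumption, as `1`, `('+', a, b)`, `('*', a, b)`) instantiates each of the six equations with randomly generated expressions and checks that both sides denote the same number.

```python
import random

def sem(e):
    """Denotational semantics: map an expression tree to its number."""
    if e == 1:
        return 1
    op, a, b = e
    return sem(a) + sem(b) if op == '+' else sem(a) * sem(b)

def random_expr(depth):
    """Generate a random expression tree of bounded depth."""
    if depth == 0 or random.random() < 0.3:
        return 1
    return (random.choice('+*'), random_expr(depth - 1), random_expr(depth - 1))

# The six equations, each given as a pair (left-hand side, right-hand side).
equations = [
    (lambda x, y, z: ('+', x, ('+', y, z)), lambda x, y, z: ('+', ('+', x, y), z)),
    (lambda x, y, z: ('*', x, 1),           lambda x, y, z: x),
    (lambda x, y, z: ('*', x, ('+', y, z)), lambda x, y, z: ('+', ('*', x, y), ('*', x, z))),
    (lambda x, y, z: ('*', x, ('*', y, z)), lambda x, y, z: ('*', ('*', x, y), z)),
    (lambda x, y, z: ('+', x, y),           lambda x, y, z: ('+', y, x)),
    (lambda x, y, z: ('*', x, y),           lambda x, y, z: ('*', y, x)),
]

# Soundness, tested: every instance of every equation preserves the denotation.
for _ in range(1000):
    x, y, z = (random_expr(3) for _ in range(3))
    for lhs, rhs in equations:
        assert sem(lhs(x, y, z)) == sem(rhs(x, y, z))
print("all six equations sound on 1000 random instances")
```

Note what such testing can and cannot show: it supports soundness (no counterexample found) but says nothing about completeness, which asks whether *every* semantic equality is derivable from these six equations.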