# Automatic Differentiation - Lecture 1
##### Author: Luc Paoli (lucpaoli)
Today's lecture will mostly focus on scalar examples.
## 1. Simple scalar example
### Problem:
Compute $$\frac{dz}{dx} \quad\text{and}\quad \frac{dz}{dy}$$
For a program where
> Input $x,y$
> 1) $a = \sin x$
> 2) $b = a/y$
> 3) $z = b + x$
>
> return $z$
This can be done manually without too much difficulty, since
$$z = \frac{\sin x}{y} + x$$
The derivatives can be computed as
$$\frac{dz}{dx} = \frac{\cos x }{y} + 1$$
$$\frac{dz}{dy} = \frac{-\sin x}{y^2}$$
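As a quick numerical sanity check (my own finite-difference sketch with made-up sample values, not from the lecture):
```julia
# Finite-difference spot check of the hand-derived formulas.
f(x, y) = sin(x)/y + x

x0, y0, h = 1.2, 0.7, 1e-6

# Central finite differences
dzdx_fd = (f(x0 + h, y0) - f(x0 - h, y0)) / (2h)
dzdy_fd = (f(x0, y0 + h) - f(x0, y0 - h)) / (2h)

# Compare against the symbolic derivatives from above
@show dzdx_fd ≈ cos(x0)/y0 + 1
@show dzdy_fd ≈ -sin(x0)/y0^2
```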
## 2. Computational (Directed Acyclic) Graph
I do not know how to draw graphs in LaTeX, sorry. It would be great if someone could add this, as the picture is very instructive.
$$x\to a \to b \to z$$
with
$$y \to b$$
$$x \to z$$
This is a visual interpretation of the computation that just occurred. Any computer program can be expressed as a computational graph.
The 1-step derivatives are then represented on the edges. To find these, we look at each individual operation, one line of the program at a time.
$x \to a$ : $\cos x$
$a \to b$ : $1/y$
$b \to z$ : 1
with
$y \to b$ : $-a/y^2$
$x \to z$ : $1$
We can then say that there are two paths from $x \to z$:
1) $1$, from the direct edge $x \to z$
2) $1\cdot \frac{1}{y}\cdot \cos x$, through $a$ and $b$

The sum of these path products is the symbolic derivative $\frac{dz}{dx}$.
There is one path from $y \to z$
1) $1\cdot\frac{-a}{y^2}$
giving us the derivative $\frac{dz}{dy} = \frac{-\sin x}{y^2}$ once we substitute $a = \sin x$.
Make sure you note that the computations are written **right to left**.
This can be justified from the chain rule.
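Written out, summing the products of edge weights along each path is exactly the chain rule:
$$\frac{dz}{dx} = \underbrace{1}_{x \to z} + \underbrace{1 \cdot \frac{1}{y} \cdot \cos x}_{x \to a \to b \to z} = \frac{\cos x}{y} + 1$$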
What you have seen here is **forward-mode automatic differentiation** (Auto Diff).
### Implementation details
Note: some may be wondering about dual numbers, epsilon notation, or the adjoint method. These are other ways of "visualising" this, and are coming in future lectures.
#### One arrow case
For the one arrow case
$$x \xrightarrow{f^\prime(x)} z$$
We accumulate an **ordered pair** (value now, product of edge weights), or $(x, p)$. This is a **dual number**. The transformation from $x\to z$ is given by
$$(x, p) \to (f(x), f^\prime(x)\cdot p)$$
A key question is **how should this be initialised?**
We should begin with $(x, 1)$, since the derivative of the input with respect to itself is $dx/dx = 1$.
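As a concrete illustration, the one-arrow rule for $f = \sin$ can be coded directly (a minimal sketch in Julia; the ```Dual``` name and layout are my own, not the notebook's):
```julia
# Minimal sketch of the one-arrow rule (x, p) -> (f(x), f'(x) * p),
# here for f = sin.
struct Dual
    val::Float64   # value now
    der::Float64   # product of edge weights so far
end

Base.sin(d::Dual) = Dual(sin(d.val), cos(d.val) * d.der)

x = Dual(1.2, 1.0)       # initialise with p = 1, since dx/dx = 1
a = sin(x)
@show a.val ≈ sin(1.2)   # the value
@show a.der ≈ cos(1.2)   # the derivative
```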
#### Multi-arrow case
To do the multi-arrow case, we begin with another problem
> input $x$
> 1) $a = \sin x$
> 2) $b = \log x$
> 3) $z = ab$
>
> return $z$
To do this the "18.01 way", we use the product rule to compute the entire expression:
$$z = \sin x \log x$$
$$\frac{dz}{dx} = \cos x \log x + \frac{\sin x}{x}$$
In the graph representation, this problem is seen as
$$x\xrightarrow[\sin x]{\cos x} a \xrightarrow[ab]{b} z$$
$$x\xrightarrow[\log x]{1/x} b \xrightarrow[ab]{a} z$$
Here we use the representation $\xrightarrow[\text{operation}]{\text{derivative}}$. This is not quite faithful to the lecture, where the operations are placed at the nodes of the graph rather than along the edges.
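Summing the two path products recovers the "18.01" answer from above, since $a = \sin x$ and $b = \log x$:
$$\frac{dz}{dx} = \underbrace{b \cdot \cos x}_{x \to a \to z} + \underbrace{a \cdot \frac{1}{x}}_{x \to b \to z} = \cos x \log x + \frac{\sin x}{x}$$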
More generally, for $z = f(a, b)$,
$$(a,p)\xrightarrow[f(a,b)]{\partial z/\partial a} z$$
$$(b,q)\xrightarrow[f(a,b)]{\partial z/\partial b} z$$
Accumulating across this whole graph, we have
$$(a, p), (b, q) \to \left(z, \frac{\partial z}{\partial a}p + \frac{\partial z}{\partial b}q \right)$$
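For example, taking $z = f(a, b) = ab$, where $\partial z/\partial a = b$ and $\partial z/\partial b = a$, this rule is exactly the product rule:
$$(a, p), (b, q) \to \left(ab,\; bp + aq \right)$$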
##### Notation
This is notation Prof. Edelman is trying:
$\partial$ : One-step derivative (one line of a program)
$d$ : Full derivative (entire program)
Comment: I have used $d$ rather than $\mathrm d$ throughout these notes
Computation moves from **left to right**
The derivative moves from **right to left**. This is important when considering matrices. For scalars, this does not matter as they all commute.
### What is in a computer program?
i) Constants
ii) +,-,*,$\div$
iii) sqrt, sin, log, ...
iv) control (e.g. if, for, while, ...)
We then draw a difference between **constants** and **inputs**. When initialised as dual numbers, these appear as $(c, 0)$ and $(x, 1)$ respectively.
For example, take a function of two variables like ```+``` or ```-```:
$$a \to a\pm b$$
$$b \to a\pm b$$
is then represented as
$$(a, p) \to (a\pm b, p\pm q)$$
$$(b, q) \to (a\pm b, p\pm q)$$
For multiplication:
$$a \to a\cdot b$$
$$b \to a\cdot b$$
is then represented as
$$(a, p) \to (a\cdot b, bp + aq)$$
$$(b, q) \to (a\cdot b, bp + aq)$$
For division:
$$a \to a/b$$
$$b \to a/b$$
is then represented as
$$(a, p) \to (a/b, (bp - aq)/b^2)$$
$$(b, q) \to (a/b, (bp - aq)/b^2)$$
We will represent this in Julia by **overloading** our operators.
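A minimal sketch of what this overloading might look like, extending the ```Dual``` type from the one-arrow sketch above (illustrative; not the notebook's exact code):
```julia
# Binary rules from above, as operator overloads on the Dual sketch.
Base.:+(a::Dual, b::Dual) = Dual(a.val + b.val, a.der + b.der)
Base.:-(a::Dual, b::Dual) = Dual(a.val - b.val, a.der - b.der)
Base.:*(a::Dual, b::Dual) = Dual(a.val * b.val, b.val * a.der + a.val * b.der)
Base.:/(a::Dual, b::Dual) = Dual(a.val / b.val, (b.val * a.der - a.val * b.der) / b.val^2)

# Mixing with ordinary numbers: a constant carries derivative 0.
Base.:+(a::Dual, b::Real) = Dual(a.val + b, a.der)
Base.:+(a::Real, b::Dual) = Dual(a + b.val, b.der)
Base.:/(a::Dual, b::Real) = Dual(a.val / b, a.der / b)

# The program from section 1, run unchanged on duals
function z(x, y)
    a = sin(x)
    b = a / y
    return b + x
end

x0, y0 = 1.2, 0.7
dzdx = z(Dual(x0, 1.0), Dual(y0, 0.0)).der   # seed x as the input, y as a constant
dzdy = z(Dual(x0, 0.0), Dual(y0, 1.0)).der   # and vice versa
@show dzdx ≈ cos(x0)/y0 + 1
@show dzdy ≈ -sin(x0)/y0^2
```
Note how ```z``` itself is the unmodified program from section 1; only the number type fed into it has changed.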
Within Julia, the base rules for ```log, sqrt, ...``` are all implemented with [ChainRules.jl](https://github.com/JuliaDiff/ChainRules.jl).
Interesting note: in deep learning, because so many operations are composed, care must be taken to scale gradients properly, as otherwise it is easy for them to overflow or underflow your floats (exploding or vanishing gradients).
Look at ```AutoDiff.ipynb``` for examples of the implementation.
In this notebook, we saw the calculation of the derivative of the Babylonian algorithm for the square root. It is interesting to note that a mathematically equivalent symbolic expression can be evaluated (in this case using SymPy), but this is **not** what the computer is doing.
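For reference, the Babylonian iteration itself is tiny. With the ```Dual``` overloads sketched above, the same generic code also carries the derivative along (again a sketch, not the notebook's exact code):
```julia
# Babylonian (Newton) iteration for sqrt(x); runs unchanged on Duals.
function babylonian(x; N = 10)
    t = (1 + x) / 2            # initial guess
    for _ in 2:N
        t = (t + x / t) / 2    # average t and x/t
    end
    return t
end

x0 = 2.0
d = babylonian(Dual(x0, 1.0))
@show d.val ≈ sqrt(x0)             # the value converges to √x
@show d.der ≈ 1 / (2 * sqrt(x0))   # the derivative converges to 1/(2√x)
```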
## Reverse-mode autodiff
For forward-mode, we accumulated terms from **right to left**, while traversing the graph **forwards**. In reverse-mode, we accumulate terms **left to right**, while traversing the graph **backwards**.
Because multiplication is associative, even for matrices, the result is independent of the order in which the terms are accumulated.
Important to note is that the computation still runs in the same direction; it is just the derivative that is calculated "backwards".
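For a single chain $x \to a \to b \to z$, the two modes are just two groupings of the same chain-rule product:
$$\frac{dz}{dx} = \frac{\partial z}{\partial b}\left(\frac{\partial b}{\partial a}\frac{\partial a}{\partial x}\right) = \left(\frac{\partial z}{\partial b}\frac{\partial b}{\partial a}\right)\frac{\partial a}{\partial x}$$
Forward-mode builds the inner product starting from the input; reverse-mode builds the second grouping starting from the output.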
$$(x,q) \xrightarrow{f^\prime(x) = \partial z /\partial x} (z,p)$$
where $p$ is the accumulated product of **all previous paths** from the output back to $z$. To bring in this edge, we propagate backwards:
$$(z, p) \to (x, p\,f^\prime(x))$$
so that $q = p\,f^\prime(x)$.
This is one rule of reverse-mode; the rest will be picked up on Thursday.
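As a preview (my own worked example applying the rule above), propagating backwards along the chain $z \leftarrow b \leftarrow a \leftarrow x$ from section 1, seeded with $(z, 1)$:
$$(z, 1) \to (b, 1\cdot 1) \to \left(a, 1\cdot\frac{1}{y}\right) \to \left(x, \frac{\cos x}{y}\right)$$
Adding the contribution of the direct edge $x \to z$ (weight $1$) gives $\frac{dz}{dx} = \frac{\cos x}{y} + 1$, the same result as forward-mode.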