# Kokoyi Operator Manual
## Arithmetic operators
| Kokoyi | Kokoyi render | LaTeX render | Meaning |
|:---------------:|:---------:|:------:|:-----:|
| `a + b` | $a + b$ | $a + b$ | Add two arrays element-wise |
| `a - b` | $a - b$ | $a - b$ | Subtract two arrays element-wise |
| `a * b` | $a \circ b$ | $a * b$ | Multiply two arrays element-wise |
| `a / b` | $a / b$ | $a / b$ | Divide two arrays element-wise |
| `a @ b` | $a \cdot b$ | $a @ b$ | Matrix multiplication or vector inner product |
| `a ** b` | $a ^ b$ | $a ** b$ | Raise `a` to the power of `b` element-wise |
| `\sqrt{a}` | $\sqrt{a}$ | $\sqrt{a}$ | Square root of each element of an array |
| `\frac{a}{b}` | $\frac{a}{b}$ | $\frac{a}{b}$ | Divide two arrays element-wise |
| `\lceil a \rceil` | $\lceil a \rceil$ | $\lceil a \rceil$ | Round-up each element of the array |
| `\lfloor a \rfloor` | $\lfloor a \rfloor$ | $\lfloor a \rfloor$ | Round-down each element of the array |
| `\exp(a)` | $\exp(a)$ | $\exp(a)$ | $e^a$. Can also use `\Exp(a)` |
| `\sin(a)` | $\sin(a)$ | $\sin(a)$ | Sine |
| `\cos(a)` | $\cos(a)$ | $\cos(a)$ | Cosine |
| `\tan(a)` | $\tan(a)$ | $\tan(a)$ | Tangent |
| `\sinh(a)` | $\sinh(a)$ | $\sinh(a)$ | Hyperbolic Sine |
| `\cosh(a)` | $\cosh(a)$ | $\cosh(a)$ | Hyperbolic Cosine |
| `\tanh(a)` | $\tanh(a)$ | $\tanh(a)$ | Hyperbolic Tangent |
| `\maximum(a, b)` | $\mathrm{maximum}(a, b)$ | n/a | Element-wise maximum of two arrays |
| `\minimum(a, b)` | $\mathrm{minimum}(a, b)$ | n/a | Element-wise minimum of two arrays |
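Several entries above defer to `torch` semantics, so the following is a minimal PyTorch sketch of how these operators behave, assuming the element-wise and matrix-product semantics stated in the table; the exact lowering Kokoyi performs is not specified here.

```python
import torch

a = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
b = torch.tensor([[5.0, 6.0], [7.0, 8.0]])

a + b           # element-wise add
a - b           # element-wise subtract
a * b           # element-wise multiply (rendered as a ∘ b)
a / b           # element-wise divide, same as \frac{a}{b}
a @ b           # matrix product; inner product for two vectors
a ** b          # element-wise power
torch.sqrt(a)   # \sqrt{a}
torch.floor(a)  # \lfloor a \rfloor (torch.ceil for \lceil a \rceil)
torch.exp(a)    # \exp(a)
torch.maximum(a, b)  # \maximum(a, b), element-wise
```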
## Comparison operators
| Kokoyi | Kokoyi render | LaTeX render | Meaning |
|:---------------:|:---------:|:------:|:-----:|
| `a < b` | $a < b$ | $a < b$ | Element-wise smaller |
| `a > b` | $a > b$ | $a > b$ | Element-wise greater |
| `a \leq b` | $a \leq b$ | $a \leq b$ | Element-wise smaller or equal |
| `a \geq b` | $a \geq b$ | $a \geq b$ | Element-wise greater or equal |
| `a = b` | $a = b$ | $a = b$ | Element-wise equal |
| `a \neq b` | $a \neq b$ | $a \neq b$ | Element-wise not equal |
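Assuming these lower to the corresponding PyTorch comparisons (an assumption consistent with the element-wise semantics above), each yields a boolean array of the same shape:

```python
import torch

a = torch.tensor([1, 2, 3])
b = torch.tensor([3, 2, 1])

a < b   # tensor([ True, False, False])
a >= b  # a \geq b: tensor([False,  True,  True])
a == b  # Kokoyi `a = b`: tensor([False,  True, False])
a != b  # a \neq b: tensor([ True, False,  True])
```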
## Logical operators
| Kokoyi | Kokoyi render | LaTeX render | Meaning |
|:---------------:|:---------:|:------:|:-----:|
| `a \and b` | $a \wedge b$ | n/a | Element-wise logical and |
| `a \or b` | $a \vee b$ | n/a | Element-wise logical or |
| `\not a` | $\neg a$ | n/a | Element-wise logical not |
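Under the same assumption, the logical operators act element-wise on boolean arrays, e.g. via `torch.logical_*`:

```python
import torch

p = torch.tensor([True, True, False])
q = torch.tensor([True, False, False])

torch.logical_and(p, q)  # p \and q -> tensor([ True, False, False])
torch.logical_or(p, q)   # p \or q  -> tensor([ True,  True, False])
torch.logical_not(p)     # \not p   -> tensor([False, False,  True])
```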
## Reducers
| Kokoyi | Kokoyi render | LaTeX render | Meaning |
|:---------------:|:---------:|:------:|:-----:|
| `\sum_{i=l}^{u}{f(i)}` | $\sum_{i=l}^{u}{ f(i) }$ | $\sum_{i=l}^{u}{ f(i) }$ | Sum of arrays $f(l) \dots f(u)$. <br> Can also use `\Sum`. |
| `\mean_{i=l}^{u}{f(i)}` | $\mathrm{mean}_{i=l}^{u}{ f(i) }$ | n/a | Average of arrays $f(l) \dots f(u)$. <br> Can also use `\Mean`. |
| `\prod_{i=l}^{u}{f(i)}` | $\prod_{i=l}^{u}{ f(i) }$ | $\prod_{i=l}^{u}{ f(i) }$ | Product of arrays $f(l) \dots f(u)$. <br> Can also use `\Prod`. |
| `\concat_{i=l}^{u}{f(i)}` | $\|\|_{i=l}^{u}{ f(i) }$ | n/a | Concatenation of arrays $f(l) \dots f(u)$ <br> along the leading dimension. |
| `\max_{i=l}^{u}{f(i)}` | $\max_{i=l}^{u}{ f(i) }$ | $\max_{i=l}^{u}{ f(i) }$ | Maximum of arrays $f(l) \dots f(u)$. <br> Can also use `\Max`. |
| `\min_{i=l}^{u}{f(i)}` | $\min_{i=l}^{u}{ f(i) }$ | $\min_{i=l}^{u}{ f(i) }$ | Minimum of arrays $f(l) \dots f(u)$. <br> Can also use `\Min`. |
| `\argmax_{i=l}^{u}{f(i)}` | $\mathrm{argmax}_{i=l}^{u}{ f(i) }$ | n/a | Argmax over arrays $f(l) \dots f(u)$. <br> Can also use `\Argmax`. |
| `\argmin_{i=l}^{u}{f(i)}` | $\mathrm{argmin}_{i=l}^{u}{ f(i) }$ | n/a | Argmin over arrays $f(l) \dots f(u)$. <br> Can also use `\Argmin`. |
All reducers support the `x \in D` syntax.
Examples:
| Kokoyi | Kokoyi render |
|:---------------:|:---------:|
| `\sum_{x\in D}{f(x)}` | $\sum_{x\in D}{f(x)}$ |
| `\prod_{x\in D}{f(x)}` | $\prod_{x\in D}{f(x)}$ |
Reducers are essentially functions on arrays; you can call them on any array just like normal functions (we recommend the capitalized function names such as `\Sum` and `\Mean` for this form).
Examples:
| Kokoyi | Kokoyi render |
|:---------------:|:---------:|
| `\Sum(X)` | $\mathrm{Sum}(X)$ |
| `\Mean(X)` | $\mathrm{Mean}(X)$ |
In Kokoyi, the `Op_{i=l}^{u}{f(i)}` syntax $Op_{i=l}^{u}{f(i)}$ is in fact syntactic sugar for `Op(X) \where X \gets \{f(i)\}_{i=l}^{u}` $Op(X) \quad\textbf{where}\quad X \gets \{f(i)\}_{i=l}^{u}$.
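To make the desugaring concrete, here is a small Python sketch of one plausible evaluation of a reducer, stacking the bodies $f(l) \dots f(u)$ (bounds inclusive, as the table states) into `X` and then applying the function form; the use of `torch` is an assumption based on the `torch` references elsewhere in this manual:

```python
import torch

def f(i: int) -> torch.Tensor:
    # hypothetical reducer body: any expression producing an array
    return torch.tensor([float(i), float(i) ** 2])

l, u = 1, 3
# X \gets \{f(i)\}_{i=l}^{u}: collect the bodies into one stacked array
X = torch.stack([f(i) for i in range(l, u + 1)])

X.sum(dim=0)     # \Sum(X)  == \sum_{i=l}^{u}{f(i)}
X.mean(dim=0)    # \Mean(X) == \mean_{i=l}^{u}{f(i)}
X.prod(dim=0)    # \Prod(X)
X.argmax(dim=0)  # \Argmax(X)
torch.cat([f(i) for i in range(l, u + 1)], dim=0)  # \concat, leading dim
```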
## Normalizers
| Kokoyi | Kokoyi render | LaTeX render | Meaning |
|:---------------:|:---------:|:------:|:-----:|
| `\|a\|` | $\|\|a\|\|$ | $\|\|a\|\|$ | L-2 Norm |
| `\|a\|_p` | $\|\|a\|\|_p$ | $\|\|a\|\|_p$ | L-p Norm |
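These have the usual definitions: for a vector $a$, $||a|| = (\sum_i a_i^2)^{1/2}$ and $||a||_p = (\sum_i |a_i|^p)^{1/p}$. A minimal PyTorch sketch, assuming the mapping to `torch.linalg.norm`:

```python
import torch

a = torch.tensor([3.0, 4.0])

torch.linalg.norm(a)         # \|a\|:   L-2 norm -> 5.0
torch.linalg.norm(a, ord=1)  # \|a\|_1: L-1 norm -> 7.0
```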
## Shape operators
| Kokoyi | Kokoyi render | LaTeX render | Meaning |
|:-------:|:--------------:|:------:|:----------------:|
| `\|a\|` | $\|a\|$ | $\|a\|$ | Length of an array. If `a` has multiple dimensions, returns the size of the leading dimension. |
| `\trans{a}` | $a^\top$ | n/a | Transpose of matrix `a`. If `a` is a vector, it is returned unchanged |
| `a \|\| b` | $a\|\|b$ | $a\|\|b$ | Concatenate two arrays along the leading dimension |
| `\GetShape(a)` | $\mathrm{GetShape}(a)$ | n/a | Get the shape of an array |
| `\Reshape(a, (d_1, d_2))` | $\mathrm{Reshape}(a, (d_1, d_2))$ | n/a | Reshape an array |
| `\Flatten(a)` | $\mathrm{Flatten}(a)$ | n/a | Flatten an array to a vector |
| `\PadConstant(a, pad, val)` | $\mathrm{PadConstant}(a, pad, val)$ | n/a | See `torch.nn.functional.pad` |
| `\PadReflect(a, pad, val)` | $\mathrm{PadReflect}(a, pad, val)$ | n/a | See `torch.nn.functional.pad` |
| `\PadReplicate(a, pad, val)` | $\mathrm{PadReplicate}(a, pad, val)$ | n/a | See `torch.nn.functional.pad` |
| `\PadCircular(a, pad, val)` | $\mathrm{PadCircular}(a, pad, val)$ | n/a | See `torch.nn.functional.pad` |
| `\InterpolateNearest(a, size)` | $\mathrm{InterpolateNearest}(a, size)$ | n/a | See `torch.nn.functional.interpolate` |
| `\InterpolateLinear(a, size)` | $\mathrm{InterpolateLinear}(a, size)$ | n/a | See `torch.nn.functional.interpolate` |
| `\InterpolateBilinear(a, size)` | $\mathrm{InterpolateBilinear}(a, size)$ | n/a | See `torch.nn.functional.interpolate` |
| `\InterpolateBicubic(a, size)` | $\mathrm{InterpolateBicubic}(a, size)$ | n/a | See `torch.nn.functional.interpolate` |
| `\InterpolateTrilinear(a, size)` | $\mathrm{InterpolateTrilinear}(a, size)$ | n/a | See `torch.nn.functional.interpolate` |
| `\InterpolateArea(a, size)` | $\mathrm{InterpolateArea}(a, size)$ | n/a | See `torch.nn.functional.interpolate` |
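Since most rows above point at `torch.nn.functional`, here is a sketch of the presumed PyTorch equivalents; the one-to-one mapping is an assumption beyond the references given in the table:

```python
import torch
import torch.nn.functional as F

a = torch.rand(2, 3)
b = torch.rand(4, 3)

len(a)                    # |a|: size of the leading dimension -> 2
a.T                       # \trans{a}, shape (3, 2)
torch.cat([a, b], dim=0)  # a || b, shape (6, 3)
a.shape                   # \GetShape(a) -> torch.Size([2, 3])
a.reshape(3, 2)           # \Reshape(a, (3, 2))
a.flatten()               # \Flatten(a), shape (6,)
F.pad(a, (1, 1), mode="constant", value=0.0)  # \PadConstant

img = torch.rand(1, 1, 8, 8)  # F.interpolate expects (N, C, H, W)
F.interpolate(img, size=(16, 16), mode="bilinear")  # \InterpolateBilinear
```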
## Array indexing
| Kokoyi | Kokoyi render | LaTeX render | Meaning |
|:---------------:|:---------:|:------:|:-----:|
| `A[i]` | $A_{[i]}$ | $A[i]$ | Array indexing |
| `A[i, j]` | $A_{[i, j]}$ | $A[i, j]$ | Multiple indices |
| `A[l:u]` | $A_{[l:u]}$ | $A[l:u]$ | Slice array from a range $[l, u)$ |
| `A[:u]` or `A[l:]` | $A_{[:u]}$ or $A_{[l:]}$ | $A[:u]$ or $A[l:]$ | Slice bounds can be omitted |
| `A[i, :, j]` | $A_{[i,:,j]}$ | $A[i, :, j]$ | Mixing slicing and indexing |
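Indexing follows Python/PyTorch conventions, with `[l:u]` covering the half-open range $[l, u)$ as noted above:

```python
import torch

A = torch.arange(24).reshape(2, 3, 4)

A[1]        # A[i]: shape (3, 4)
A[1, 2]     # A[i, j]: shape (4,)
A[0:2]      # A[l:u]: indices 0 and 1; u is excluded
A[:1]       # lower bound omitted
A[1, :, 2]  # mixing indexing and slicing: shape (3,)
```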
## Array creation
| Kokoyi | Kokoyi render | LaTeX render | Meaning |
|:---------------:|:---------:|:------:|:-----:|
| `\Rand((d_1, d_2))` | $Rand((d_1, d_2))$ | n/a | Create a random array (see `torch.rand`) |
| `\RandInt(l, u)` | $RandInt(l, u)$ | n/a | Create an array containing one random integer from $[l, u)$ (see `torch.randint`) |
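Following the `torch.rand` and `torch.randint` references:

```python
import torch

torch.rand((3, 2))          # \Rand((d_1, d_2)): uniform samples from [0, 1)
torch.randint(0, 10, (1,))  # \RandInt(l, u): one integer from [l, u)
```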
## NN
### Functions
| Kokoyi | Kokoyi render | LaTeX render | Meaning |
|:---------------:|:---------:|:------:|:-----:|
| `\ReLU(a)` | $ReLU(a)$ | n/a | ReLU |
| `\LeakyReLU(a)` | $LeakyReLU(a)$ | n/a | LeakyReLU |
| `\Sigmoid(a)` | $Sigmoid(a)$ | n/a | Sigmoid |
| `\Dropout(a)` | $Dropout(a)$ | n/a | Dropout |
| `\Linear(a, w, b)` | $Linear(a, w, b)$ | n/a | Linear transformation (see `torch.nn.functional.linear`) |
| `\Softmax(a)` | $Softmax(a)$ | n/a | Softmax |
| `\LogSoftmax(a)` | $LogSoftmax(a)$ | n/a | Log Softmax |
| `\LayerNorm(a)` | $LayerNorm(a)$ | n/a | See `torch.nn.functional.layer_norm` |
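A sketch of the `torch.nn.functional` calls these rows correspond to; defaults such as the softmax dimension and the layer-norm shape are assumptions:

```python
import torch
import torch.nn.functional as F

a = torch.randn(4, 8)
w = torch.randn(5, 8)  # F.linear takes weight of shape (out, in)
b = torch.randn(5)

F.relu(a)                 # \ReLU(a)
F.leaky_relu(a)           # \LeakyReLU(a)
torch.sigmoid(a)          # \Sigmoid(a)
F.dropout(a, p=0.5)       # \Dropout(a)
F.linear(a, w, b)         # \Linear(a, w, b), shape (4, 5)
F.softmax(a, dim=-1)      # \Softmax(a); last dim assumed
F.log_softmax(a, dim=-1)  # \LogSoftmax(a)
F.layer_norm(a, (8,))     # \LayerNorm(a); normalized shape assumed
```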
### Loss
| Kokoyi | Kokoyi render | LaTeX render | Meaning |
|:---------------:|:---------:|:------:|:-----:|
| `\BCELoss(x, t)` | $BCELoss(x, t)$ | n/a | See `torch.nn.functional.binary_cross_entropy` |
| `\BCELossWithLogits(x, t)` | $BCELossWithLogits(x, t)$ | n/a | See `torch.nn.functional.binary_cross_entropy_with_logits` |
| `\CrossEntropy(x, t)` | $CrossEntropy(x, t)$ | n/a | See `torch.nn.functional.cross_entropy` |
| `\NLLLoss(x, t)` | $NLLLoss(x, t)$ | n/a | See `torch.nn.functional.nll_loss` |
| `\MSELoss(x, t)` | $MSELoss(x, t)$ | n/a | See `torch.nn.functional.mse_loss` |
| `\L1Loss(x, t)` | $L1Loss(x, t)$ | n/a | See `torch.nn.functional.l1_loss` |
| `\SmoothL1Loss(x, t)` | $SmoothL1Loss(x, t)$ | n/a | See `torch.nn.functional.smooth_l1_loss` |
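Each loss row names its `torch.nn.functional` counterpart; a usage sketch with prediction `x` and target `t` (shapes chosen only for illustration):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, 3)            # x: scores for 3 classes
classes = torch.tensor([0, 2, 1, 0])  # t: class indices

F.cross_entropy(logits, classes)                    # \CrossEntropy(x, t)
F.nll_loss(F.log_softmax(logits, dim=-1), classes)  # \NLLLoss(x, t)

probs = torch.sigmoid(torch.randn(4))         # x in (0, 1)
targets = torch.tensor([0.0, 1.0, 1.0, 0.0])  # t
F.binary_cross_entropy(probs, targets)        # \BCELoss(x, t)

pred = torch.randn(4)
F.mse_loss(pred, targets)        # \MSELoss(x, t)
F.l1_loss(pred, targets)         # \L1Loss(x, t)
F.smooth_l1_loss(pred, targets)  # \SmoothL1Loss(x, t)
```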