# Understanding program control in pure functional programming with monads + comonads!?

To be fair, the terminology is probably off-putting. However, you've probably already used `Array` in JS, and it just so happens to be a monad. `Promise` is almost monadic (its `then` is like `flatMap`, but it doesn't have `join`). Also, if you've programmed in Rust and used a `Result` or `Option`, you've used a monad. Svelte Writables almost have a comonadic structure, but lack the explicit state object on which lenses operate. The category theory language may be difficult, but the concept is easy. The following is summarized from Pareigis (1970) and Loregian (2015).

## Objects

It's what it says on the tin: "object". It can be anything in particular: a number, a set, a function, an arrow, a color, a blanket, a book, a parent, a category, etc. The key to understanding it is that it is a very particular thing, not just a generic type. However, these are typically thought to all belong to the same class. So within any particular category, all the specific objects will belong to the same class of object. When we write down objects, we use capital letters, like $A$ or $Z$.

## Arrows (morphisms)

In category theory, one can consider everything a graph. All the objects are nodes. All the arrows are directed edges that let one traverse from one object to another. If one wants to go from one object to another one far away, all one needs to do is traverse the composition of arrows between all the objects. When we write down arrows, we typically use lowercase letters, like $f$.

Let's assume we have a class of the objects 1, 2, 3, 4, 5, 6, 7 and the arrows a, b, c, d, e, f:

```mermaid
graph LR
1 --a--> 2 --b--> 3 --c--> 4 --d--> 5 --e--> 6 --f--> 7
```

If one wants to go from 1 to 2, they just need to use the arrow a.
If one wants to go from 1 to 3, then they need to compose the arrows a and b into another arrow:

```mermaid
graph LR
1 --a--> 2 --b--> 3 --c--> 4 --d--> 5 --e--> 6 --f--> 7
1 --"b(a)"--> 3
```

To get from 1 to 4, one needs to compose c(b(a)):

```mermaid
graph LR
1 --a--> 2 --b--> 3 --c--> 4 --d--> 5 --e--> 6 --f--> 7
1 --"c(b(a))"--> 4
```

To go from 1 to 7, one needs to compose all of the arrows into another one:

```mermaid
graph LR
1 --a--> 2 --b--> 3 --c--> 4 --d--> 5 --e--> 6 --f--> 7
1 --"f(e(d(c(b(a)))))"--> 7
```

Since we can see this is a graph, it wouldn't really make much sense to try to compose f(b), since you can't directly go from the end of the arrow b to the start of the arrow f. This is technically because the domain of f, which happens to be 6, is not the same as the codomain of b, which happens to be 3.

You might have learned the terms _domain_ and _range_ in high school algebra while studying functions. In that particular way of thinking, the set of numbers that form the input to a function is the domain, while the set of numbers that form the output of a function is the range (codomain). However, in category theory we use different names; this is partly to distinguish how arrows point to a specific object.

An **arrow** (or **morphism**) is a mapping from one object to another object. The **domain** is the object that the arrow starts from. The **codomain** is the object to which the arrow points.
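This domain/codomain bookkeeping is exactly what a typed language can check for us. A minimal sketch in TypeScript (the `Arrow` alias, `compose`, `double`, and `show` are illustrative names, not anything standard):

```typescript
// An arrow with domain A and codomain B is just a typed function.
type Arrow<A, B> = (a: A) => B

// compose(g, f) is g ∘ f: apply f first, then g.
// It only typechecks when the codomain of f matches the domain of g.
const compose = <A, B, C>(g: Arrow<B, C>, f: Arrow<A, B>): Arrow<A, C> =>
  (a: A) => g(f(a))

const double: Arrow<number, number> = (n) => n * 2
const show: Arrow<number, string> = (n) => `value: ${n}`

const doubleThenShow = compose(show, double)
// doubleThenShow(21) === "value: 42"
// compose(double, show) would be rejected by the compiler, because the
// codomain of `show` (string) is not the domain of `double` (number) —
// the same reason f(b) above cannot be composed.
```
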
In math, we write down an arrow $f$ with a domain of $A$ and a codomain of $B$ like so:

$$f: A \to B$$

```mermaid
graph LR
A --f--> B
```

And an arrow $g$ with a domain of $B$ and a codomain of $C$:

$$g: B \to C$$

```mermaid
graph LR
B --g--> C
```

Then we can also write the composition with the composition operator $\circ$:

$$h: A \to C \\ h = g \circ f$$

```mermaid
graph LR
A --f--> B --g--> C
A --h--> C
```

The great thing about some of the terminology is that it doesn't really change much between programming languages (especially those that have borrowed category theory or functional programming models). TypeScript is an example of one of those languages. One of the bits of similarity is **arrow functions**, one of the newer JavaScript and TypeScript ways of writing lambda functions. To define the type of an arrow function in TypeScript, one simply has to write:

```typescript
type Arrow<A, B> = (_: A) => B
```

This is almost equivalent to writing out the mathematical statement $f : A \to B$. If you see that statement in math, you can now imagine it as the above type. Of course, this is not an exact translation of category theory, since that would actually define an arrow that could only go from the exact object A to the exact object B, and it would not treat A and B as sets like the type definition above treats them. However, if we imagine that A and B are singleton classes, then it is an exact match. Or if we imagine that our objects are supposed to be sets, then it is also an exact match. However, arrows in a category are more like directed lines in a graph than like procedural functions. Defining such a function also looks confusingly similar:

```typescript
const f = (a: A): B => toB(a) // `toB` stands in for whatever produces a B from an A
```

This defines a lambda function that takes a value of type A and produces a value of type B. This is also not exactly what is meant by $f : A \to B$.
However, if we defined a function $f: \mathbb{Z} \to \mathbb{Z}$ (a function that maps from the set of integers to the set of integers), then we could write the function

```typescript
const f = (a: bigint): bigint => a + 1n
```

which takes an element of the set of integers, adds one to it, and returns an element of the set of integers.

## Categories

A **category** is a class of objects with a family of morphisms between those objects which can be composed into other morphisms. The category must obey the laws of associativity and identity. When a category is written down, we use a boldface, or sometimes a script font: either $\mathbf{C}$ or $\mathscr{C}$.

Formally, a category $\mathbf{C}$ consists of:

* A **class** $\mathbf{C}_O$. The elements of this class are **objects**. These objects are typically written down in capital Latin letters. By writing $A, B, C, ... \in \mathbf{C}_O$, we mean that the objects $A, B, C$ are in the class $\mathbf{C}_O$.
* A collection of sets of **morphisms** (or **arrows**), $\mathbf{C}(A, B)$, between objects in the category $A, B \in \mathbf{C}_O$. In this case the **domain** is $A$ and the **codomain** is $B$. An example arrow could be written down as $f : A \to B$, where $f \in \mathbf{C}(A, B)$. By saying $f \in \mathbf{C}(A, B)$, we mean that the arrow $f$ is in the collection of arrows between the objects $A$ and $B$.
* The law of **associativity** of composition:
  $$A, B, C, D \in \mathbf{C}_O \\ f \in \mathbf{C}(A, B), g \in \mathbf{C}(B, C), h \in \mathbf{C}(C, D) \\ fgh = (fg)h = f(gh)$$
  Composition can also be represented as:
  $$\circ : \mathbf{C}(B, C) \times \mathbf{C}(A, B) \to \mathbf{C}(A, C)$$
  In which case the associative law is written:
  $$h \circ g \circ f = (h \circ g) \circ f = h \circ (g \circ f)$$

  ```mermaid
  graph LR
  A -- "g∘f" --> C
  C -- h --> D
  A -- f --> B
  B -- "h∘g" --> D
  ```
* The **identity** law: For every object $A \in \mathbf{C}_O$ there is an identity morphism $id_A \in \mathbf{C}(A, A)$ such that for every $A, B \in \mathbf{C}_O$ and $f \in \mathbf{C}(A, B)$ we have $f \circ id_A = f = id_B \circ f$. Visually, we can see this equivalence in the following two diagrams; both compositions are equal to the arrow $f$:

  ```mermaid
  flowchart LR
  A -- id<sub>A</sub> --> A
  A -- f --> B
  ```

  ```mermaid
  flowchart LR
  A -- f --> B
  B -- id<sub>B</sub> --> B
  ```

If we go back to the example category that was defined in the "Arrows" section, we can elaborate every single arrow, including the identity arrows, the original arrows between objects, and all the possible compositions of those arrows:

```mermaid
graph LR
1 --a--> 2 --b--> 3 --c--> 4 --d--> 5 --e--> 6 --f--> 7
1 --"id<sub>1</sub>"--> 1
2 --"id<sub>2</sub>"--> 2
3 --"id<sub>3</sub>"--> 3
4 --"id<sub>4</sub>"--> 4
5 --"id<sub>5</sub>"--> 5
6 --"id<sub>6</sub>"--> 6
7 --"id<sub>7</sub>"--> 7
1 --"b(a)"--> 3
1 --"c(b(a))"--> 4
1 --"d(c(b(a)))"--> 5
1 --"e(d(c(b(a))))"--> 6
1 --"f(e(d(c(b(a)))))"--> 7
2 --"c(b)"--> 4
2 --"d(c(b))"--> 5
2 --"e(d(c(b)))"--> 6
2 --"f(e(d(c(b))))"--> 7
3 --"d(c)"--> 5
3 --"e(d(c))"--> 6
3 --"f(e(d(c)))"--> 7
4 --"e(d)"--> 6
4 --"f(e(d))"--> 7
5 --"f(e)"--> 7
```

## Functors

A **functor** is a way of embedding the structure of objects and morphisms from one category in another category.
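As a programming preview of this structure-preserving embedding, here is a sketch using only the standard `Array.prototype.map`; `f`, `g`, and `xs` are illustrative names:

```typescript
// Array<_> acts like a functor: it sends each type A to Array<A>,
// and map sends each function f: A -> B to a function Array<A> -> Array<B>.
const f = (n: number): string => n.toString()
const g = (s: string): number => s.length

const id = <A>(a: A): A => a

const xs = [1, 22, 333]

// Identity is preserved: mapping the identity arrow changes nothing.
const mappedId = xs.map(id) // same elements as xs

// Composition is preserved: map(g ∘ f) equals map(f) followed by map(g).
const composedFirst = xs.map((n) => g(f(n)))
const mappedInSteps = xs.map(f).map(g)
// both are [1, 2, 3]
```
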
Given two categories $\mathbf{C}$ and $\mathbf{D}$ we can write down a functor $F$ as $F : \mathbf{C} \to \mathbf{D}$, which means that the functor $F$ takes the category $\mathbf{C}$ to the category $\mathbf{D}$. If you remember what an arrow is, the simplest definition is that a functor is an arrow between two categories. However, that's not the full definition. A functor actually does more than that. It takes all the structure (objects and arrows) from one category and embeds that structure in another category.

The formal definition of a functor $F : \mathbf{C} \to \mathbf{D}$ is a pair $(F_0, F_1)$ where:

* $F_0 : \mathbf{C}_O \to \mathbf{D}_O$ is a function sending an object $C \in \mathbf{C}_O$ to the object $FC \in \mathbf{D}_O$.
* $F_1$ is a collection of functions $F_{AB} : \mathbf{C}(A, B) \to \mathbf{D}(FA, FB)$ for each pair of objects $A, B \in \mathbf{C}_O$ that sends each morphism $f : A \to B$ to the morphism $Ff : FA \to FB$ so that the identity and composition laws are preserved.

* A functor $F : \mathbf{C} \to \mathbf{C}$ is called an **endofunctor**, since it works inside of the same category and not in a different one.
* A functor that doesn't or cannot preserve all of the mappings between objects is called a **forgetful functor**.
* A functor that preserves all mappings between objects and all structure is called a **full & faithful functor**.
* Functors can also be either **full functors** or **faithful functors**; the difference between the two is more technical than necessary for this post.

```mermaid
graph TB
subgraph C
A -- id<sub>A</sub> --> A
A -- f --> B
B -- id<sub>B</sub> --> B
end
A -- F --> FA
B -- F --> FB
subgraph FC
FA -- id<sub>FA</sub> --> FA
FA -- Ff --> FB
FB -- id<sub>FB</sub> --> FB
end
FA -- F --> FFA
FB -- F --> FFB
subgraph FFC
FFA -- id<sub>FFA</sub> --> FFA
FFA -- FFf --> FFB
FFB -- id<sub>FFB</sub> --> FFB
end
```

## Monoids

As an aside, let's take a look at what a monoid is.
Anytime you were doing math in elementary school or even spelling out words, you were secretly using a monoid. How is this so? Well, you probably learned that $1 + 0 = 1$. That actually has a few monoid concepts in it. One is that there is a binary operation, in this case called addition. There's an identity/unit element: $0$. Also, you probably learned that it doesn't matter how you group adding more than two numbers; this is associativity. With the operation of addition comes the commutative property, but not all monoids have this. For instance, I mentioned spelling out words. One can't spell a word just by calling out its letters in any order, so spelling words is not commutative. But you can spell out "spell" by adding up its parts in any grouping: "s"+"p"+"e"+"l"+"l" = ("s"+"p")+("e"+"l"+"l") = ("s"+"p"+"e"+"l")+"l" = etc. Also, adding the identity "" results in the same word: "spell"+"" = "spell".

The formal definition of a **monoid** is a set $M$ (not a category) equipped with the additional structure of

* a **binary operation** $(-) \cdot (-) : M \times M \to M$
* a **neutral element** $1 \in M$

such that

* there is **associativity** $$x,y,z \in M \vdash (x \cdot y) \cdot z = x \cdot (y \cdot z)$$
* there is **unitality** $$x \in M \vdash 1 \cdot x = x = x \cdot 1$$

(summarized from [nLab.
Monoid](https://ncatlab.org/nlab/show/monoid))

:::info
Since the expression includes the identity element 0, this demonstrates both associativity and unitality with addition: (0+2)+3 = 5
:::

```mermaid
flowchart LR
subgraph "First op"
0
2
plus1{\+}
end
subgraph "Second op"
3
plus2{\+}
0 & 2 ----> plus1
3 -----> plus2
plus1 -- 2 --> plus2
end
plus2 --> 5
```

:::info
Since the expression includes the identity element 0, this demonstrates both associativity and unitality with addition: 0+(2+3) = 5
:::

```mermaid
flowchart LR
subgraph "First op"
2
3
op1{\+}
end
subgraph "Second op"
op2{\+}
2 & 3 ----> op1
0 -----> op2
op1 -- 5 --> op2
end
op2 --> 5
```

This doesn't just apply to addition; here's an example of multiplication:

```mermaid
flowchart LR
subgraph "First op"
1
2
op1{\*}
end
subgraph "Second op"
3
op2{\*}
1 & 2 ----> op1
3 -----> op2
op1 -- 2 --> op2
end
op2 --> 6
```

Or an example of string concatenation:

```mermaid
flowchart LR
subgraph "First op"
i{"''"}
He
op1{.}
end
subgraph "Second op"
llo
op2{.}
i & He ----> op1
llo -----> op2
op1 -- "He" --> op2
end
op2 --> Hello
```

## Monads

A monad is a larger structure that has a few objects and relationships in a category $\mathbf{C}$. The typical category theory definition from Petricek, 2018 is:

> A monad over a category $\mathbf{C}$ is a triple $(T,\eta,\mu)$ where $T : \mathbf{C} \to \mathbf{C}$ is a functor, $\eta : id_{\mathbf{C}} \to T$ and $\mu : T^2 \to T$ are natural transformations such that:
> - $\mu_A \circ T\mu_A = \mu_A \circ \mu_{TA}$
> - $\mu_A \circ \eta_{TA} = id_{TA} = \mu_A \circ T\eta_A$

But this is much too opaque to understand. One needs to see the commutative diagram to which this belongs and fully understand how functors get applied to and transform the morphisms of the category. So let's see if we can make this any clearer. These equations assume that there is some object $A \in \mathbf{C}$.
The endofunctor $T$ brings that object and all its associated morphisms, including the identity morphism, into the image $T\mathbf{C}$ (which, since $T$ is an endofunctor, lies inside $\mathbf{C}$ itself). We also have the two natural transformations $\mu$ and $\eta$, which each have very specific roles to play.

The natural transformation $\mu$ is called `compose` or `multiply`. While it may not be very clear, $\mu_A \circ T\mu_A = \mu_A \circ \mu_{TA}$ is basically the same formulation as the associativity condition on the monoid's binary operation. We can take any two applications of the functor $T$ and compose them into a single application of the functor $T$. It might help to add the domains and codomains to get a clearer picture of what is being mapped to what:

- $\mu_A : TTA \to TA$ (the component of the natural transformation $\mu$ at the object $A$)
- $T\mu_A : TTTA \to TTA$ (the functor $T$ applied to the morphism $\mu_A$)
- $\mu_{TA} : TTTA \to TTA$ (the component of $\mu$ at the object $TA$)
- $\mu_A \circ T\mu_A : TTTA \to TA$
- $\mu_A \circ \mu_{TA} : TTTA \to TA$
- The paths traced out go from $TTTA \to TTA \to TA$.
- If we chased the diagram, we would see that it is a square whose two paths agree; this is associativity. We go from a functor applied three times to the category to one applied only once, which is analogous to reducing a product $a \cdot b \cdot c$ to a single value $d$.

```mermaid
graph TB
TTTA
TTA1["TTA"]
TTA2["TTA"]
TA
TTTA -- Tμ<sub>A</sub> --> TTA1
TTTA -- μ<sub>TA</sub> --> TTA2
TTA1 -- μ<sub>A</sub> --> TA
TTA2 -- μ<sub>A</sub> --> TA
```

The natural transformation $\eta$ is also called the `unit`. If you squint just right, you'll see that the second equation $\mu_A \circ \eta_{TA} = id_{TA} = \mu_A \circ T\eta_A$ is actually the condition of unitality from the monoid. Also, the neutral element has become an identity morphism.
```mermaid
graph LR
TTA
TA1["TA"]
TA2["TA"]
TA1 -- η<sub>TA</sub> --> TTA
TTA -- μ<sub>A</sub> --> TA2
TA1 -- id<sub>TA</sub> --> TA2
```

```mermaid
graph LR
TTA
TA1["TA"]
TA2["TA"]
TA1 -- Tη<sub>A</sub> --> TTA
TTA -- μ<sub>A</sub> --> TA2
TA1 -- id<sub>TA</sub> --> TA2
```

Given this definition, we can make a comparison to the structure of the monoid we defined above and clearly see that, indeed, "a monad is a monoid in the category of endofunctors". While monoids are simple objects like numbers, strings, or vector spaces, monads represent containers or process composition. Both of them have roughly the same outline when it comes to composing, multiplying, or adding various objects together.

:::info
To truly see the monoid structure of the monad, we can construct compositions of the functors to show that functor composition works the same way as addition (or the binary operation) from a monoid. The main difference is that instead of using the binary operation directly, we use the various natural transformations that are created by repeated application of the functor: TTTA. This graph has the same structure as the monoid graph above.
:::

```mermaid
flowchart LR
subgraph "First compose op"
T1["T"]
T2["T"]
compose1{"μ<sub>TA</sub>"}
end
subgraph "Second compose op"
T3["T"]
TA
compose2{"μ<sub>A</sub>"}
T1 & T2 ----> compose1
T3 -----> compose2
compose1 -- T --> compose2
end
compose2 --> TA
```

## Composing computation with monads

From Perrone (2019):

> A Kleisli morphism of $T$ from $X$ to $Y$ is a morphism $f : X \to TY$ of $\mathbf{C}$.
>
> In mathematics it often happens that one would like to obtain a function from $X$ to $Y$ from some construction (for example, a limit), but the result is not always well-defined or unique, or not always existing. Maybe the output of a process is subject to fluctuations, so that a probabilistic output models the situation better. Or, there could be extra side effects...and so on.
> Allowing more general results or outputs is sometimes a better description of the system, that is, replacing $Y$ with the extension $TY$. Generalized functions include ordinary functions in the same way as generalized elements include ordinary elements: via the map $\eta$. A function $f : X \rightarrow Y$ uniquely defines a map $X \rightarrow TY$ given by $\eta \circ f$. Note that this is different from extending an existing $f : X \rightarrow Y$ to $TY$: we are not extending an existing function to generalized elements, we are allowing more general functions on $X$ which _take [on]_ values in elements of $TY$ which may not come from $Y$. In particular, a generalized element can be seen as a constant generalized function.

From this description, we can see that Kleisli arrows can model programs/subroutines/functions where the returned value may be undefined or not present, outside of a required range/domain, or an error condition; or where it represents side effects or an IO operation. This is important to keep in mind. It means we can compose functions that each return a monadic value and handle extra possibilities through function composition.

Of course, at this point we need to understand how function composition would actually work with Kleisli arrows, because as the arrows are currently defined, they won't cleanly compose. The domain of every Kleisli arrow is a normal object, while the codomain is an object with the functor applied. None of these Kleisli arrows can be composed directly. We need what the Computer Science Monad™ or Haskell Monad™ defines as `bind`. The method `bind` takes a Kleisli arrow and the monadic object, transforms the arrow into an arrow $TX \to TY$, then applies the function to the monadic object. These new arrows can then be composed together without having to think about the error states (until after thinking through all of what they mean, since the monad should let you act on those states if you need to).
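Here is a minimal sketch of `bind` for a hypothetical `Option` type in TypeScript (this is not any particular library's API; `parse` and `sqrt` are illustrative Kleisli arrows):

```typescript
// A computation's result may be absent: this is the functor T applied to A.
type Option<A> = { tag: "some"; value: A } | { tag: "none" }

// unit (η): wrap a plain value.
const some = <A>(value: A): Option<A> => ({ tag: "some", value })
const none = <A>(): Option<A> => ({ tag: "none" })

// bind: take a monadic value and a Kleisli arrow A -> Option<B>,
// and apply the arrow only if a value is present.
const bind = <A, B>(oa: Option<A>, f: (a: A) => Option<B>): Option<B> =>
  oa.tag === "some" ? f(oa.value) : { tag: "none" }

// Two Kleisli arrows that can each fail:
const parse = (s: string): Option<number> => {
  const n = Number(s)
  return Number.isNaN(n) ? none() : some(n)
}
const sqrt = (n: number): Option<number> =>
  n < 0 ? none() : some(Math.sqrt(n))

// The happy path composes cleanly; any failure short-circuits.
const r1 = bind(bind(some("49"), parse), sqrt) // some(7)
const r2 = bind(bind(some("oops"), parse), sqrt) // none
```

Neither caller ever checks for failure explicitly between steps; the `none` state simply flows through the rest of the chain.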
:::info
In the mathematical papers around monads and comonads, they define an operation called ${}^*$ which can be applied to a Kleisli arrow, say $f : A \to TB$, producing another function $f^* : TA \to TB$. This can then be used to directly compose the arrows the way one would normally compose them. In Haskell one uses a different `bind` operator, written in the language as `>>=`, which has a different type signature but does the same thing through partial application. The signature of `>>=` (`bind`) is written `T x -> (x -> T y) -> T y`. When it is called with a function, it passes the contents of the monad to the function and returns what the function returns.
:::

```mermaid
graph LR
TA[/"TA"/] --- f{"(η∘f)<sup>*</sup>"} --> TB[/"TB"/] --- g{"(η∘g)<sup>*</sup>"} --> TC[/"TC"/] --- h{"(η∘h)<sup>*</sup>"} --> TD[/"TD"/]
f ------> TBe[/"TB<sub>err</sub>"/]
g ----> TCe[/"TC<sub>err</sub>"/]
h --> TDe[/"TD<sub>err</sub>"/]
```

This represents some process that isn't as straightforward or "pure" as something where every step is guaranteed to succeed or have a well-defined answer. However, it lets us define simple blocks of computation that might fail and string them together without having to think too much about those failures. All we have to do is think about the steps, how the data is transformed at each step, and what the next step expects; some people call this the happy path.

## What does this mean in code?

We still need to make sense of this in code. There are vital differences in the structures. The ideas don't cleanly map onto each other. Petricek, 2018 says:

> A mathematical monad and a programming monad are related, but they are not the same. In fact, philosophers would say that they are of a different kind [8], the first being a priori and the second being a posteriori kind of knowledge.

Some aspects of what a monad is change as they bridge the gap.
Moggi introduced monads for reasoning about effectful programs, but Wadler uses them to implement effectful programs in a purely functional programming language. Even a very direct translation subtly changes the purpose of a monad. Uustalu & Vene, 2008 say:

> Since the seminal work by Moggi in the late 80s [25], monads, more precisely, strong monads, have become a generally accepted tool for structuring effectful notions of computation, such as computation with exceptions, output, computation using an environment, state-transforming, nondeterministic and probabilistic computation etc. The idea is to use a Kleisli category as the category of impure, effectful functions, with the Kleisli inclusion giving an embedding of the pure functions from the base category.
>
> But monads do not capture all meaningful kinds of impure computation. In particular, they exclude some natural notions where, instead of producing effects, computations consume something beyond just values (“contexts” of values). Hence it is natural to ask whether comonads, likely with some additional structure (as strength for monads), can deliver a solution for such cases. (Combinations of monads and comonads via distributive laws can then hopefully account for notions of computation that are both effectful and context-dependent.)

```mermaid
graph LR
TA[/"TA"/] --- f{"bind(T, η∘f)"} --> TB[/"TB"/] --- g{"bind(T, η∘g)"} --> TC[/"TC"/] --- h{"bind(T, η∘h)"} --> TD[/"TD"/]
f ------> TBe[/"TB<sub>err</sub>"/]
g ----> TCe[/"TC<sub>err</sub>"/]
h --> TDe[/"TD<sub>err</sub>"/]
```

Connecting this to programming we have:

- A functor (an endofunctor, to be precise) $T : \mathbf{C} \rightarrow \mathbf{C}$, such as `Array<_>`. If you are familiar with the concept of a functor, this is simple. If you are not: a functor takes one category and maps it to another category. If it maps from one category to the same category, it is an endofunctor.
  - In programming, this is basically a type `T` with a `map` method and two additional methods.
  - The method `map` on a `T` (such as `Array<_>`) is the functor's action on arrows, $map : (A \to B) \to (TA \to TB)$. Using a function `f: (a: A) => B` and `map`, we can go from an `Array<A>` to an `Array<B>`. Or by using the identity function, we can map from `Array<A>` to an `Array<A>` and preserve the same object identity.
- A natural transformation, $\eta : id_{\mathbf{C}} \rightarrow T$, called `unit`. This transforms an object in category _**C**_ with identity Id<sub>_**C**_</sub> into the functor T.
  - In code, this is the constructor of type T, such as `Array<A>`, that takes an object of category _**C**_ and transforms it into a type `Array`, also in the category _**C**_.
  - For instance, if the object `42` is passed into the constructor for `Array`, then we have the object `[42]`.
- A natural transformation, $\mu : T^2 \rightarrow T$, called `join` (also called `composition` or `multiply`), which takes a nested object $T^2$ and flattens it into a single $T$.
  - In programming, this method on an `Array<_>` is `flat`, which takes type `Array<Array<_>>` and produces an object of type `Array<_>`.
- The method `bind`, $bind : T \times Mor_{\mathbf{C}}(\mathbf{C}, T) \Rightarrow Mor_{\mathbf{C}}(T, T)$, where the $Mor_{\mathbf{C}}(\mathbf{C}, T)$ are Kleisli morphisms. This can be thought of as the lambda `(arrA: Array<A>, f: (a: A) => Array<B>): Array<B> => join(map(arrA, f))`, where `map(Array<A>, f: (a: A) => Array<B>) => Array<Array<B>>`. Since the function `f` returns an array, it could model a failed computation where the array may be empty.
  - In programming with `Array`s, `flatMap` is used to chain together multiple functions that each produce `Array<_>` objects. It is like applying the monad operation `map(f: (a: A) => Array<B>)` and then `join`. It differs from `bind` only in that the monadic value is the receiver rather than an argument.
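The bullets above can be made concrete with `Array`. In this sketch, `unit`, `join`, and `bind` are names used for illustration; the built-ins they correspond to are the array literal, `Array.prototype.flat`, and `Array.prototype.flatMap`:

```typescript
// unit (η): put a plain value into the functor.
const unit = <A>(a: A): Array<A> => [a]

// join (μ): flatten one level of nesting, T²A -> TA.
const join = <A>(aas: Array<Array<A>>): Array<A> => aas.flat()

// bind: lift a Kleisli arrow A -> Array<B> and apply it — i.e. map, then join.
const bind = <A, B>(as: Array<A>, f: (a: A) => Array<B>): Array<B> =>
  join(as.map(f))

// A Kleisli arrow that can "fail" by returning an empty array:
const halve = (n: number): Array<number> => (n % 2 === 0 ? [n / 2] : [])

const a = bind(unit(8), halve) // [4]
const b = bind(a, halve)       // [2]
const c = bind(bind(unit(7), halve), halve) // [] — failure propagates

// bind(as, f) behaves like the built-in as.flatMap(f):
const viaFlatMap = unit(8).flatMap(halve) // [4]
```
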
A comonad is a structure containing the opposite arrows in a category $\mathbf{C}$:

- A functor $T : \mathbf{C} \to \mathbf{C}$.
- A natural transformation $\varepsilon : T \rightarrow id_{\mathbf{C}}$, called the `counit` (or `extract`). This transforms the functor T back to the object referred to by the arrow $id_{\mathbf{C}}$.
  - In programming, this is equivalent to 'unwrapping' the value in a monad. In a list, this is the `head` operator. In a Svelte Readable, this is `get`.
- A natural transformation $\nu : T \rightarrow T^2$, called `comultiplication` (or `duplicate`). This is the opposite of the `join` operation. It takes a functor T and produces a functor $T^2$. In other words, it takes a container and wraps it in another of the same container.
- An operation `extend`, $extend : T \times Mor_{\mathbf{C}}(T, \mathbf{C}) \Rightarrow (T \to T)$ (the dual of `bind`), which takes a function `(a: T<A>) => B` and produces a function `(a: T<A>) => T<B>`.
  - Now we can compose functions `f: (a: T<A>) => B` and `g: (b: T<B>) => C` by using `extend`: `compose(g, extend(f))`. Functions like `f` and `g` are called co-Kleisli arrows (or morphisms).
  - Using these co-Kleisli arrows, we can lazily compose chains of computation that can be executed at some later time, much like we can with `bind` above.

This means that using `Array` (or, really, a more suitable data type like `Promise` or `async` functions, even though, unlike `Array`, they don't follow all the monadic laws), we can build chains of execution that pipe into each other:

```ts
type A = string
type B = number[]
type C = number
type D = number
type E = number

type Step1 = (a: A) => Array<B>
type Step2 = (b: B) => Array<C>
type Step3 = (c: C) => Array<D>
type Step4 = (d: D) => Array<E>

declare const f: Step1
declare const g: Step2
declare const h: Step3
declare const i: Step4

const hi = "Greetings, earthlings!"
const result = f(hi).flatMap(g).flatMap(h).flatMap(i)
```

This is what I started to do in my Fiber library that I need to extract into its own library:

- [the library](https://github.com/lightningrodlabs/rea-playspace/blob/main/modules/data-providers/src/lib/fiber.ts)
- [usage](https://github.com/lightningrodlabs/rea-playspace/blob/0041995b94dba01121f7d1b847afb8600360954c/modules/data-providers/src/holochain/project/ProjectProvider.ts#L95)

Extra credit reading on Monads:

- [Kühl, Felix. (2022) A Monad is just a Monoid in the Category of Endofunctors — Let’s actually unravel this.](https://medium.com/@felix.kuehl/a-monad-is-just-a-monoid-in-the-category-of-endofunctors-lets-actually-unravel-this-f5d4b7dbe5d6)
- [Lippert, Eric. (2013) Monads, part one. 21 Feb.](https://ericlippert.com/2013/02/21/monads-part-one/)
- [Marsden, D. (2022) First steps into a larger world of monads.](https://stringdiagram.com/2022/10/09/first-steps-into-a-larger-world-of-monads/)
- [Bicategories, monoids, internal and enriched categories.](https://stringdiagram.com/2022/10/16/bicategories-monoids-internal-and-enriched-categories/)
- [Young, Siaw. (2020) Promises are Almost Monads. 8 Nov.](https://www.siawyoung.com/promises-are-almost-monads/)
- [async/await is just the do-notation of the Promise monad](https://gist.github.com/VictorTaelin/bc0c02b6d1fbc7e3dbae838fb1376c80)
- [The Task Monad in Javascript: pure asynchronous effects you can compose](https://kwijibo.github.io/task-monad-in-javascript/)
- [Tasks, Monads, and LINQ](https://devblogs.microsoft.com/pfxteam/tasks-monads-and-linq/)

Extra credit reading on Comonads:

- [Marsden, D. (2022) Comonads – Monads Upside Down.](https://stringdiagram.com/2022/10/30/comonads-monads-upside-down/)
- [Jordan, E. (2018) Life Is A Comonad.](https://eli-jordan.github.io/2018/02/16/life-is-a-comonad/) &mdash; this is actually a pretty clear discussion of comonads in Scala.
- [Gibbons, J.
(2011) Lenses are the coalgebras for the costate comonad.](https://patternsinfp.wordpress.com/2011/01/31/lenses-are-the-coalgebras-for-the-costate-comonad/)
- [Milewski, B. (2017) Comonads. 2 Jan.](https://bartoszmilewski.com/2017/01/02/comonads/)
- [Gonzalez, Gabriella. (2013) Comonads are objects. Feb.](https://www.haskellforall.com/2013/02/you-could-have-invented-comonads.html)

Profunctor Lenses in JS:

* [Functional lenses for contemporary frameworks - Andre Staltz](https://www.youtube.com/watch?v=5R3l2r1XxKI)
* [James McNamara - Functional Lenses In JavaScript with Shades](https://www.youtube.com/watch?v=_D3IPecC0S8)
* [Quick Introduction to Lenses with Optics.js](https://www.youtube.com/watch?v=vf3P_i1IMtU)
* [The power of lenses – Juhana Laurinharju](https://www.youtube.com/watch?v=hnROywmy_HI)

More advanced, but more accurate reading:

- [Leinster, T. (2016). Basic category theory. arXiv preprint arXiv:1612.09375.](https://arxiv.org/abs/1612.09375)
- [Fong, B., & Spivak, D. I. (2018). Seven sketches in compositionality: An invitation to applied category theory. arXiv preprint arXiv:1803.05316.](https://arxiv.org/abs/1803.05316)
- [Pareigis, B. (1970). Categories and functors. Pure and Applied Mathematics, 39.](https://epub.ub.uni-muenchen.de/7244/)
- [Barr, M., & Wells, C. (1990). Category theory for computing science (Vol. 1). New York: Prentice Hall.](http://www.tac.mta.ca/tac/reprints/articles/22/tr22.pdf)
- [Lawvere, F. W., & McLarty, C. (2005). An elementary theory of the category of sets (long version) with commentary. Reprints in Theory and Applications of Categories, 11, 1-35.](http://www.tac.mta.ca/tac/reprints/articles/11/tr11.pdf)
- [Loregian, F. (2015). Coend calculus. arXiv preprint arXiv:1501.02503.](https://arxiv.org/abs/1501.02503) &mdash; Appendix A has a good summary of Category Theory basics.
- [Perrone, P. (2019). Notes on Category Theory with examples from basic mathematics.
arXiv preprint arXiv:1912.10642.](https://arxiv.org/pdf/1912.10642.pdf) &mdash; Good overview of the monad/comonad in the last chapter.
- [Riehl, E. (2017). Category theory in context. Courier Dover Publications.](https://math.jhu.edu/~eriehl/context.pdf)
- [nLab. Functor.](https://ncatlab.org/nlab/show/functor)
- [nLab. Monad.](https://ncatlab.org/nlab/show/monad)
- [nLab. Module over a monad.](https://ncatlab.org/nlab/show/module+over+a+monad)
- [nLab. Comonad.](https://ncatlab.org/nlab/show/comonad)
- [nLab. Lens (in computer science).](https://ncatlab.org/nlab/show/lens+%28in+computer+science%29#LensesAreCostateCoalgebras)
- [Petricek, T. (2018). What we talk about when we talk about monads. arXiv preprint arXiv:1803.10195.](https://tomasp.net/academic/papers/monads/monads-programming.pdf) &mdash; This is probably the most accurate of all the links.
- [Pickering, M., Gibbons, J., & Wu, N. (2016). Profunctor Optics.](https://www.cs.ox.ac.uk/people/jeremy.gibbons/publications/poptics.pdf)
- [Moggi, E. (1991). Notions of computation and monads. Information and Computation, 93(1), 55-92.](https://doi.org/10.1016/0890-5401(91)90052-4)
- [Orchard, D. (2012). Should I use a monad or a comonad? Unpublished draft.](https://www.cs.kent.ac.uk/people/staff/dao7/drafts/monad-or-comonad-orchard11-draft.pdf)
- [Uustalu, T., & Vene, V. (2008). Comonadic notions of computation. Electronic Notes in Theoretical Computer Science, 203(5), 263-284.](https://doi.org/10.1016/j.entcs.2008.05.029) &mdash; this is quite good, but also quite advanced.
  - > In this paper, we proceed directly from the motivation to treat some important notions of context-dependent computation, namely notions of dataflow computation (stream-based computation) and notions of computation on trees such as tree relabellings in attribute evaluation. We demonstrate that a rather elegant framework for working with these notions of computation is given by symmetric (semi)monoidal comonads.
Reassuringly, strong monads appear associated with symmetric monoidal comonads also in works on the categorical semantics of intuitionistic linear and modal logic [4,7]. We describe some aspects of the structure of coKleisli categories corresponding to symmetric (semi)monoidal comonads and describe then a general interpretation of languages for context-dependent computation into such categories.
- [Silva, A. (2015). A short introduction to the coalgebraic method. ACM SIGLOG News, 2(2), 16-27.](https://repository.ubn.ru.nl/bitstream/handle/2066/147292/147292.pdf)
- [Gibbons, J., & Johnson, M. (2012). Relating algebraic and coalgebraic descriptions of lenses. Electronic Communications of the EASST, 49.](https://journal.ub.tu-berlin.de/eceasst/article/view/726/742)
- [Jakl, T., & Reggio, L. (2022, draft) An Invitation to Game Comonads.](https://kam.mff.cuni.cz/~jaklt/tmp/esslli2022/notes.pdf)