###### tags: `generative models` `numerical analysis` `one-offs`
# Splitting Integrators and Normalising Flows
**Overview**: In this note, I will describe the technique of *vector field splitting* as applied to the numerical solution of differential equations. I will then describe how similar principles implicitly underlie a number of constructions which have been used to devise tractable normalising flows.
## The Numerical Solution of Differential Equations
Consider a time-independent ordinary differential equation initial value problem which is specified as
\begin{align}
\frac{dx}{dt} &= f(x(t)) \\
x(0) &= x_0.
\end{align}
There is a wealth of scenarios in which the solution of this equation is of key interest. Unfortunately, in many of these scenarios, an analytical solution is not forthcoming. As such, it is desirable to design reliable methods for numerical approximation of the solution.
In a standard course on numerical analysis, one will encounter a number of generic methods for solving such equations. Readers may be familiar with examples such as:
* Euler's Methods (Explicit, Implicit, ...)
* Linear Multistep Methods (Adams-Bashforth, Adams-Moulton, ...)
* Runge-Kutta Methods (Explicit, Implicit, Diagonally-Implicit, ...).
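To fix ideas, the simplest of these, the explicit Euler method, can be sketched in a few lines of Python. The vector field $f(x) = -x$ and the step count below are my own illustrative choices; this problem has the known solution $x(t) = x_0 e^{-t}$, against which the approximation can be checked.

```python
import math

def euler_solve(f, x0, t_final, n_steps):
    """Explicit Euler: repeatedly step x <- x + h * f(x)."""
    h = t_final / n_steps
    x = x0
    for _ in range(n_steps):
        x = x + h * f(x)
    return x

# f(x) = -x has the exact solution x(t) = x0 * exp(-t).
approx = euler_solve(lambda x: -x, 1.0, 1.0, 1000)
exact = math.exp(-1.0)
```

With 1000 steps, the approximation agrees with the exact solution to roughly the step size, as expected for a first-order method.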
A strength of these methods is their generality: given access to evaluations of $f$ at desired input points $x$, one can use these methods to construct accurate approximate solutions for a wide range of problems.
A necessary drawback of this generality is that these methods may fail to take advantage of any additional structure which is available in a specific problem. As such, they cannot be the whole story on the numerical solution of differential equations.
## Differential Equations with Decomposable Structure
A common feature of many differential equations which one encounters in practice is that there is some level of *modularity* to the problem. Some examples (borrowed from the [review paper](https://www.massey.ac.nz/~rmclachl/an.pdf) of McLachlan and Quispel):
* In fluid dynamics, the PDEs which describe the evolution of a fluid comprise terms which correspond to advection, diffusion, and pressure.
* In dynamical systems, some components of the system may be (conditionally) linear, and others may be nonlinear.
* In quantum mechanics, the Schrödinger equation is made up of a term which is most easily handled by working in frequency space (via the Fourier transform), and a term which is most easily handled in the original space.
For a more contemporary example, one can consider problems of sampling and optimisation which involve objective functions which are built by combining the contributions of many individual observations, as well as structural regularisation functionals.
Going forward, I will refer to a differential equation as having a *decomposable* structure if it can be written in the form
\begin{align}
\frac{dx}{dt} &= f(x(t)) \\
f(x) &= \sum_{i \in I} f_i (x),
\end{align}
where there is a tacit assumption that each of the component vector fields $f_i$ is in some sense simpler than the aggregate vector field $f$.
## Solving Decomposable Differential Equations by Splitting
Suppose that we are tasked with the numerical solution of a decomposable differential equation with the above structure. For each $i \in I$, define the $i^\text{th}$ flow map $\phi_i^t$ by
\begin{align}
x_0 &= x \\
\dot{x}_s &= f_i (x_s) \quad \text{for}\, 0\leqslant s \leqslant t \\
x_t &=: \phi_i^t (x).
\end{align}
Our standing assumption will be that each of these flow maps $\{ \phi_i^t \}_{i \in I, t \in \mathbf{R}}$ can be evaluated cheaply, either due to their availability in closed form, or amenability to rapid approximation.
Given the availability of these maps, a natural route to approximating the solution of our original problem is the following:
1. Fix an ordering $I = \{ i_1, \ldots, i_M \}$.
2. Set $\hat{x}_0 = x_0$.
3. For $m = 1, \ldots, M$,
    1. Set $\hat{x}_m = \phi_{i_m}^t \left( \hat{x}_{m-1} \right)$.
4. Output $\hat{x}_M \approx x(t)$.
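The steps above can be sketched as follows, for the illustrative linear problem $\dot{x} = ax + c$ (my own choice of example), split into $f_1(x) = ax$ and $f_2(x) = c$, both of whose flow maps are available in closed form. In practice one applies the composition repeatedly over short substeps for accuracy, as done here.

```python
import math

def splitting_step(x, h, a, c):
    """One Lie-Trotter step for dx/dt = a*x + c: compose the exact
    flows of f_1(x) = a*x (flow: exp(a*h) * x) and f_2(x) = c
    (flow: x + c*h), each over a time interval of length h."""
    x = math.exp(a * h) * x   # phi_1^h
    x = x + c * h             # phi_2^h
    return x

def splitting_solve(x0, t_final, n_steps, a, c):
    h = t_final / n_steps
    x = x0
    for _ in range(n_steps):
        x = splitting_step(x, h, a, c)
    return x

# Exact solution of dx/dt = a*x + c is x(t) = (x0 + c/a) * exp(a*t) - c/a.
a, c, x0, t = -1.0, 1.0, 2.0, 1.0
approx = splitting_solve(x0, t, 1000, a, c)
exact = (x0 + c / a) * math.exp(a * t) - c / a
```

Note that each substep only ever evaluates the two component flows exactly; no evaluation of the aggregate vector field is needed.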
This would be described as a *splitting method* for our problem, as we have *split* up our original vector field into a number of more tractable components.
For the interested reader, it should be noted that there are some quite related (though not identical) notions in the world of numerical analysis, such as multirate integrators and multiscale integrators, as well as partitioned and additive Runge-Kutta methods.
Another appealing feature of splitting methods is that they are natural building blocks for *structure-preserving* methods. Suppose that our dynamical system of interest possesses some interesting structural property, e.g.
* the flow is volume-preserving
* the flow conserves or dissipates a certain energy functional
* the flow is confined to a submanifold of the ambient space
* ... etc.
It is then often of interest to construct a numerical solution which reproduces these features. It can be shown that if one splits $f$ into sub-vector fields which each reproduce these features individually, then an appropriate composition of their exact flow maps also will!
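As a small illustration of this point (the pendulum-type potential $V(q) = -\cos(q)$ is my own choice of example), splitting a Hamiltonian vector field into its kinetic and potential parts yields two shear maps, each of which is exactly volume-preserving, and hence so is their composition:

```python
import math

def kinetic_flow(q, p, h):
    """Exact flow of dq/dt = p, dp/dt = 0: a shear in (q, p)."""
    return q + h * p, p

def potential_flow(q, p, h):
    """Exact flow of dq/dt = 0, dp/dt = -V'(q) with V(q) = -cos(q):
    also a shear, since q is held fixed."""
    return q, p - h * math.sin(q)

def symplectic_euler_step(q, p, h):
    q, p = kinetic_flow(q, p, h)
    return potential_flow(q, p, h)

def jac_det(step, q, p, h, eps=1e-6):
    """Finite-difference estimate of the Jacobian determinant of
    the map (q, p) -> step(q, p, h)."""
    q0, p0 = step(q, p, h)
    q1, p1 = step(q + eps, p, h)
    q2, p2 = step(q, p + eps, h)
    a, b = (q1 - q0) / eps, (q2 - q0) / eps
    c, d = (p1 - p0) / eps, (p2 - p0) / eps
    return a * d - b * c

# Each shear is volume-preserving, so the composition should be too.
det = jac_det(symplectic_euler_step, 0.3, -0.7, 0.1)
```

The numerically estimated Jacobian determinant of the composed step is $1$ up to finite-difference error, even though the step is only a first-order approximation of the true flow.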
## Normalising Flows and Transformation-Based Measure Approximation
A popular research topic in recent years has been the application of invertible transformations to tasks in measure approximation (usually density estimation and approximate inference). The key mathematical tool underlying this endeavour is the following observation: if $p$ is a probability density, and $T$ is an invertible transformation, then the density of $y = T(x), x \sim p$ can be written explicitly as
\begin{align}
q(y) &= \left( T_\# p \right)(y) \\
&= p \left( T^{-1} (y) \right) \cdot \left| \det \left( \frac{\partial T^{-1}}{\partial y} (y) \right) \right|.
\end{align}
In principle, one can then construct elaborate densities which admit exact sampling and density computations, simply by drawing a sample from a tractable distribution, and pushing it through an invertible transformation.
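As a minimal sketch of this formula (the affine map $T(x) = 2x + 1$ and the standard normal base density are my own choices), pushing a standard normal through an invertible affine map reproduces the expected Gaussian density:

```python
import math

def p(x):
    """Standard normal density: the tractable base density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

# Invertible map T(x) = 2*x + 1, with inverse T^{-1}(y) = (y - 1)/2
# and |d T^{-1} / dy| = 1/2.
def q(y):
    x = (y - 1.0) / 2.0
    return p(x) * 0.5

# Sanity check target: q should be the density of N(1, 2^2).
def normal_density(y, mu, sigma):
    z = (y - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))
```

Here the Jacobian factor is a constant, but for nonlinear $T$ it varies with $y$, which is exactly where the computational difficulties discussed below arise.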
In fact, one may iterate this procedure, and apply several such transformations. Specify $L$ invertible transformations $\{ T_\ell \}_{\ell = 1}^L$, and define recursively $q_0 = p$, $q_\ell = \left( T_\ell \right)_\# q_{\ell - 1}$ for $1 \leqslant \ell \leqslant L$. Writing $y_L = y$ and $y_{\ell - 1} = T_\ell^{-1} (y_\ell)$ for the intermediate points, it then holds that
\begin{align}
q_L(y) &= \left( \left(T_L \right)_\# q_{L-1} \right)(y) \\
&= q_{L-1} \left( T_L^{-1} (y) \right) \cdot \left| \det \left( \frac{\partial T_L^{-1}}{\partial y} (y) \right) \right| \\
&= \left( \left(T_{L-1} \right)_\# q_{L-2} \right) \left( T_L^{-1} (y) \right) \cdot \left| \det \left( \frac{\partial T_L^{-1}}{\partial y} (y) \right) \right| \\
&= q_{L-2} \left( T_{L-1}^{-1} \circ T_L^{-1} (y) \right) \cdot \left| \det \left( \frac{\partial T_{L - 1}^{-1}}{\partial y} (T_L^{-1}(y)) \right) \right| \cdot \left| \det \left( \frac{\partial T_L^{-1}}{\partial y} (y) \right) \right| \\
&= \cdots \\
&= q_0 (y_0) \cdot \prod_{\ell = 1}^L \left| \det \left( \frac{\partial T_\ell^{-1}}{\partial y} (y_\ell) \right) \right|.
\end{align}
This principle is essentially that of *normalising flows* (known in some circles as *transport maps*).
In practice, a challenge is that the cost of computing the determinant of a matrix grows rapidly with the dimension of the ambient space. As such, there has been much effort spent in parametrising transformations $T$ for which the corresponding Jacobian determinant is much cheaper to evaluate.
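The iterated construction above can be sketched with a stack of scalar affine layers (the layer parameters are arbitrary illustrative choices); the log-density of the final distribution is evaluated by inverting the layers in reverse order and accumulating the log-Jacobian terms:

```python
import math

# A stack of scalar affine layers T_l(x) = a_l * x + b_l
# (parameters chosen arbitrarily for illustration).
params = [(2.0, 1.0), (0.5, -3.0), (1.5, 0.25)]

def base_log_density(x):
    """Log density of the standard normal base distribution q_0."""
    return -0.5 * x * x - 0.5 * math.log(2.0 * math.pi)

def log_q(y):
    """Evaluate log q_L(y): invert the layers in reverse order,
    accumulating log |d T_l^{-1} / dy| = -log |a_l| at each stage."""
    log_det = 0.0
    for a, b in reversed(params):
        y = (y - b) / a          # T_l^{-1}(y)
        log_det -= math.log(abs(a))
    return base_log_density(y) + log_det
```

For these affine layers, the overall composition is again affine (here $y = 1.5 x - 3.5$), so the result can be checked against the corresponding Gaussian log-density.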
## Invertible Transformations by ODE Flows
It can be shown that the flow map corresponding to a well-posed ordinary differential equation is generically an invertible transformation. While not every invertible transformation can be obtained in this way (consider the mapping $x \mapsto -x$ on the real line), the class of flow maps is nevertheless quite flexible, and so can serve as a rich source of inspiration for the construction of normalising flows.
Of course, as detailed above, the exact evaluation of ODE flow maps is typically challenging. With this in mind, we turn to the numerical approximation of such maps.
## Normalising Flows via Vector Field Splitting
Consider again the solution of a generic ODE IVP
\begin{align}
\frac{dx}{dt} &= f(x(t)) \\
x(0) &= x.
\end{align}
Even without assuming the a priori existence of a decomposition of $f$ into meaningful parts, we can still concoct decompositions after the fact. For example, writing $\{ e_i \}_{i \in [D]}$ for the coordinate vectors in $\mathbf{R}^D$, we can say that
\begin{align}
f &= \sum_{i \in [D]} e_i f_i \\
f_i &= \langle e_i, f \rangle.
\end{align}
This essentially corresponds to the statement that a vector field on $\mathbf{R}^D$ can be written out in coordinates.
Without any assurances that the $f_i$ obtained in this fashion are any more tractable than our original $f$, we might simply approximate the flow map of $f_i$ by an Euler step, i.e.
\begin{align}
\left( \hat{\phi}_i^t (x) \right)_j = \begin{cases}
x_j, & \text{for } j \neq i\\
x_i + t \cdot f_i (x), & \text{for } j = i.
\end{cases}
\end{align}
As we have constructed it, the Jacobian of this flow map will be the identity matrix, with the exception of the $i^\text{th}$ row. Some simple manipulations confirm that the determinant of this matrix is given by $\left( 1 + t \cdot \frac{\partial f_i}{\partial x_i} \right)$. For appropriate $f_i$ and sufficiently small $t$, one can guarantee that this quantity will be bounded away from $0$, uniformly in $x$.
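A sketch of this composition of per-coordinate Euler maps, with the diagonal derivative $\frac{\partial f_i}{\partial x_i}$ estimated by finite differences (the two-dimensional vector field below is my own illustrative choice):

```python
import math

def coordinate_euler_flow(f, x, t):
    """Compose the per-coordinate Euler maps: for each i, update only
    x[i] <- x[i] + t * f_i(x), accumulating the log |Jacobian det|,
    which for each map is log |1 + t * d f_i / d x_i|."""
    x = list(x)
    log_det = 0.0
    eps = 1e-6
    for i in range(len(x)):
        fi = f(x)[i]
        # Finite-difference estimate of d f_i / d x_i at the current x.
        x_pert = list(x)
        x_pert[i] += eps
        dfi = (f(x_pert)[i] - fi) / eps
        log_det += math.log(abs(1.0 + t * dfi))
        x[i] = x[i] + t * fi
    return x, log_det

# Illustrative vector field on R^2 (my own choice):
f = lambda x: [math.tanh(x[0] + x[1]), 0.5 * x[1] ** 2]
y, ld = coordinate_euler_flow(f, [0.4, -0.2], 0.1)
```

Note that the total log-determinant only involves the $D$ diagonal derivatives, not a full $D \times D$ determinant, which is the computational point of the construction.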
Interpreted appropriately, this is essentially the structure of *autoregressive flows* (AFs). Typical AFs make additional simplifying assumptions on the structure of the $f_i$ (e.g. that $\frac{\partial f_i}{\partial x_j} \equiv 0$ for $j > i$ ), but the preceding derivations demonstrate that this restriction is not strictly needed, provided that one is only interested in guaranteeing a well-behaved Jacobian matrix.
One can generalise this idea further, by instead decomposing $[D]$ into larger subsets, e.g.
\begin{align}
[D] &= A_1 \sqcup \cdots \sqcup A_M \\
f &= \sum_{m \in [M]} e_{A_m} f_m \\
f_m &= \langle e_{A_m}, f \rangle.
\end{align}
In the case where $M = 2$, this is roughly the structure of NICE / RealNVP. A distinction is that these methods additionally constrain $f_m$ to be semi-linear, in the sense that
\begin{align}
f_m ( x ) = A(x_{-m}) \, x_{A_m} + b(x_{-m}),
\end{align}
where $x_{A_m}$ denotes the coordinates of $x$ indexed by $A_m$, $x_{-m}$ denotes the remaining coordinates, and $A$ and $b$ are matrix-valued and vector-valued functions respectively. A further restriction is that the matrix $A(x_{-m})$ is taken to be diagonal.
While the specific restrictions are not necessarily intrinsically important, the need for some kind of restriction is. This can be seen by noting that as the size of each $A_m$ grows, the Jacobian matrix of the corresponding transformation will be nondegenerate on a subspace of dimension $|A_m|$, and so the complexity of determinant operations will creep back up again if the mappings are constructed carelessly.
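For concreteness, here is a sketch of a RealNVP-style coupling step with $M = 2$, in which the scale matrix is diagonal so that the log-determinant reduces to a sum of log-scales. The toy conditioner functions `s` and `t` below are stand-ins for the neural networks used in practice.

```python
import math

def coupling_forward(x, s, t, split):
    """Coupling step on a flat list: the first `split` coordinates
    pass through unchanged and parametrise an elementwise affine map
    of the rest. log |det J| is the sum of the log-scales."""
    x_a, x_b = x[:split], x[split:]
    scales = s(x_a)   # one log-scale per entry of x_b
    shifts = t(x_a)
    y_b = [xb * math.exp(si) + ti for xb, si, ti in zip(x_b, scales, shifts)]
    return x_a + y_b, sum(scales)

def coupling_inverse(y, s, t, split):
    """Exact inverse: the pass-through block lets us recompute the
    conditioners and undo the affine map."""
    y_a, y_b = y[:split], y[split:]
    scales = s(y_a)
    shifts = t(y_a)
    x_b = [(yb - ti) * math.exp(-si) for yb, si, ti in zip(y_b, scales, shifts)]
    return y_a + x_b

# Toy conditioners (stand-ins for neural networks):
s = lambda xa: [0.3 * math.tanh(xa[0]), -0.1 * xa[1]]
t = lambda xa: [xa[0] + xa[1], xa[0] - xa[1]]

x = [0.5, -1.2, 2.0, 0.7]
y, log_det = coupling_forward(x, s, t, split=2)
x_back = coupling_inverse(y, s, t, split=2)
```

Both the inverse and the log-determinant are available without ever inverting `s` or `t`, which is why the conditioners can be arbitrarily complicated functions.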
A third example is to explicitly construct vector fields $f$ which admit favourable decompositions a priori. For example, one might assert that $f$ have a "nonlinear singular value decomposition" of the form
\begin{align}
f(x) &= \sum_{m \in [M]} U_m f_m (V_m^* x),
\end{align}
where each of the $(U_m, V_m)$ is a pair of rectangular matrices with orthonormal columns, and $f_m$ is a vector field of appropriate dimension. This connects to the ideas underpinning both *planar flows* and their successors, *Sylvester flows*.
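As a sketch of the simplest instance of this idea, a planar flow perturbs the identity along a single direction $u$, and the matrix determinant lemma reduces its Jacobian determinant to a scalar expression (the inputs below are arbitrary illustrative values):

```python
import math

def planar_flow(x, u, w, b):
    """Planar flow T(x) = x + u * tanh(w.x + b). The Jacobian is
    I + u h'(w.x + b) w^T, a rank-one update of the identity, so by
    the matrix determinant lemma
        det = 1 + h'(w.x + b) * (u.w)."""
    a = sum(wi * xi for wi, xi in zip(w, x)) + b
    h = math.tanh(a)
    h_prime = 1.0 - h * h
    y = [xi + ui * h for xi, ui in zip(x, u)]
    udotw = sum(ui * wi for ui, wi in zip(u, w))
    log_abs_det = math.log(abs(1.0 + h_prime * udotw))
    return y, log_abs_det

y, ld = planar_flow([0.2, -0.4], u=[0.5, 0.1], w=[1.0, -2.0], b=0.3)
```

The cost of the determinant here is independent of the ambient dimension; Sylvester flows extend the same trick to rank-$r$ updates via the corresponding determinant identity.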
## Conclusion
In this note, I have drawn some connections between the notion of vector field splitting in the numerical solution of differential equations, and the construction of invertible transformations in modern machine learning tasks. Of course, this all comes after the advent of Neural ODEs and continuous normalising flows, which work directly with an ODE. As such, the benefits of hindsight are substantial in this case. Still, the links to ODE solvers may be conceptually useful for other reasons.
An alternative and parallel perspective, which I hope to detail elsewhere, is to skip over the infinitesimal genesis of these transformations, and focus directly on the structure of the corresponding Jacobian matrices. Instead of focusing on techniques from the solution of differential equations, one can instead draw upon the rich theory of (numerical) linear algebra to devise transformations with structured Jacobians.