###### tags: `one-offs` `optimisation` `smoothness`
# Convex Taylor Polynomials
**Overview**: In this note, I discuss a nice result about Taylor expansions of convex functions, and how regularisation can restore their convexity.
## From Smoothness to Majorants
There's a neat result from a couple of years ago whose proof is simple enough that I think it's worth presenting widely. The topic is approximating convex functions by (regularised) Taylor polynomials, and the (near-)preservation of convexity under taking Taylor expansions.
To begin with: it is not so hard to see that Taylor approximations of convex functions can easily fail to be convex. Consider any convex function with a non-vanishing third derivative, and take its third-order Taylor polynomial: the resulting cubic cannot be convex on the whole space. As such, if we are interested in preserving convexity, it is natural to consider constraints and regularisation.
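For a concrete instance, take $f\left(x\right)=\exp\left(x\right)$, whose third-order Taylor polynomial at $0$ is
\begin{align}
1+x+\frac{x^{2}}{2}+\frac{x^{3}}{6},
\end{align}
with second derivative $1+x$, which is negative for $x<-1$.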
Focus now on convex functions whose derivatives of order $p$ are Lipschitz continuous, i.e.
\begin{align}
\left\Vert \left(\mathrm{D}^{p}f\right)\left(x\right)-\left(\mathrm{D}^{p}f\right)\left(y\right)\right\Vert _{\mathrm{op}}\leqslant L\cdot\left\Vert x-y\right\Vert _{2}.
\end{align}
This is a sensible constraint to impose on $f$, in the sense that such functions will deviate from local approximations by controllable amounts; for example, taking $p=1$ recovers the familiar class of functions with Lipschitz-continuous gradients.
Fortunately, this constraint also implies a number of additional bounds on the growth of $f$, obtained by integration. For example, the difference between $\left( \mathrm{D}^{p-k} f \right) \left( y \right)$ and the corresponding derivative of the $p$th-order Taylor approximation of $f$ around $x$, evaluated at $y$, can be bounded in operator norm by $L\cdot\frac{\left\Vert x-y\right\Vert _{2}^{k+1}}{\left(k+1\right)!}$. In particular, taking $k = p - 2$, we see that the Hessian of $f$ at $y$ can be bounded from above (in semidefinite order) by the Hessian of the $p$th-order Taylor approximation around $x$, plus $L\cdot\frac{\left\Vert x-y\right\Vert _{2}^{p-1}}{\left(p-1\right)!}\cdot I_{d}$.
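In symbols, writing $T\left(y;x,p,f\right)$ for the $p$th-order Taylor polynomial of $f$ around $x$, evaluated at $y$, this reads
\begin{align}
\left(\mathrm{D}^{2}f\right)\left(y\right)\preceq\mathrm{D}_{y}^{2}\,T\left(y;x,p,f\right)+L\cdot\frac{\left\Vert x-y\right\Vert _{2}^{p-1}}{\left(p-1\right)!}\cdot I_{d}.
\end{align}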
Separately, one can prove that the Hessian of the mapping $y \mapsto p\cdot L\cdot\frac{\left\Vert x-y\right\Vert _{2}^{p+1}}{\left(p+1\right)!}$ is bounded from below (again in semidefinite order) by this same matrix $L\cdot\frac{\left\Vert x-y\right\Vert _{2}^{p-1}}{\left(p-1\right)!}\cdot I_{d}$; a short computation verifying this is sketched below. As such, one can deduce that the Hessian of the regularised Taylor polynomial
\begin{align}
y\mapsto T\left(y;x,p,f\right)+p\cdot L\cdot\frac{\left\Vert x-y\right\Vert _{2}^{p+1}}{\left(p+1\right)!}
\end{align}
dominates the Hessian of $f$ everywhere. Recalling now that $f$ is convex, so that its Hessian is positive semidefinite, it follows that this regularised approximation is also convex! Moreover, it is also a global upper bound for $f$. This upper-bound property actually holds already for the more lightly-regularised
\begin{align}
y\mapsto T\left(y;x,p,f\right)+L\cdot\frac{\left\Vert x-y\right\Vert _{2}^{p+1}}{\left(p+1\right)!}.
\end{align}
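As for the Hessian lower bound used above: for $y\neq x$, writing $r=\left\Vert y-x\right\Vert _{2}$ and $u=\left(y-x\right)/r$, a direct computation gives
\begin{align}
\mathrm{D}_{y}^{2}\left[\frac{r^{p+1}}{\left(p+1\right)!}\right]=\frac{r^{p-1}}{p!}\cdot\left(I_{d}+\left(p-1\right)uu^{\top}\right)\succeq\frac{r^{p-1}}{p!}\cdot I_{d},
\end{align}
and multiplying through by $p\cdot L$ yields the claimed lower bound of $L\cdot\frac{r^{p-1}}{\left(p-1\right)!}\cdot I_{d}$ (the case $y=x$ being immediate).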
As a consequence, we see that given a smooth, convex function $f$, it is feasible to form an approximation to it at a point $x$ which is accurate to high order in a neighbourhood of $x$, and *globally* convex. The practical implication of this is that one can construct improved optimisation methods based on these high-order majorants, with realistic hopes of minimising said majorants at each step, by virtue of their convexity.
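To make this slightly more tangible, here is a minimal one-dimensional sketch of such a scheme for $p = 2$, in which the majorant is the cubic-regularised second-order model; the test function, the constant `L`, and the use of `scipy.optimize.minimize_scalar` for the inner minimisation are my own illustrative choices, not prescriptions from the paper.

```python
# Minimal sketch: iteratively minimising the convex majorant for p = 2, where
# the regularised Taylor polynomial is the cubic-regularised second-order model
#     y -> f(x) + f'(x)(y - x) + (1/2) f''(x)(y - x)^2 + p * L * |y - x|^3 / 3!.
# The test function and the constant L are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize_scalar

f = lambda x: np.log(np.cosh(x))       # convex, with minimum at x = 0
df = lambda x: np.tanh(x)              # first derivative
d2f = lambda x: 1.0 / np.cosh(x) ** 2  # second derivative, Lipschitz with constant < 1
L = 1.0                                # a (conservative) Lipschitz constant for f''

def majorant(y, x):
    """Regularised Taylor polynomial of f around x, evaluated at y (p = 2)."""
    h = y - x
    return f(x) + df(x) * h + 0.5 * d2f(x) * h ** 2 + 2 * L * abs(h) ** 3 / 6

x = 3.0  # arbitrary starting point
for k in range(6):
    # The majorant is convex in y, so this inner problem is benign.
    res = minimize_scalar(majorant, args=(x,), bounds=(x - 10.0, x + 10.0), method="bounded")
    x = res.x
    print(f"step {k}: x = {x: .6f}, f(x) = {f(x):.6e}")
```

Each outer step minimises a globally convex upper bound on $f$ which touches $f$ at the current iterate, so the objective values decrease monotonically; in higher dimensions, the tractability of this inner minimisation is exactly what makes the construction algorithmically interesting.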
The main result here is given in Nesterov's [paper](https://alfresco.uclouvain.be/alfresco/service/guest/streamDownload/workspace/SpacesStore/aabc2323-0bc1-40d4-9653-1c29971e7bd8/coredp2018_05web.pdf?guest=true), and the algorithmic implications are explored in a number of related works by the same author and coauthors.