###### tags: `one-offs` `rejection sampling`
# Univariate Log-Concave Rejection Sampling is Solved
**Overview**: In this note, I describe some results of Luc Devroye on rejection sampling from log-concave densities in dimension one. I also comment on some more recent results which show that Devroye's construction is essentially optimal.
## Prelude
Today, I stumbled upon a paper of Luc Devroye entitled "Random variate generation for the generalized inverse Gaussian distribution" (link [here](http://luc.devroye.org/Devroye-RNG-GIG-2012.pdf)). Despite the title giving a fairly narrow impression of its scope, the paper actually contains a very nice result which is much more general, essentially tackling the problem of log-concave rejection sampling in one dimension. I have since found out that this particular result is contained in earlier work of Devroye, though I do not know the earliest appropriate reference.
## The Result
The solution is essentially as follows: with a well-chosen piecewise-log-linear envelope to the target, using only three pieces, one can obtain a good acceptance rate. The construction is actually remarkably simple: first, find the mode of the density. Then, looking to either side of the mode, find points at which the density is reduced by a constant factor (say $\mathrm{e}^{-1}$), relative to the density at the mode. Draw tangents to the log-density at the mode and at these two points, and use these three tangent lines to form a piecewise-log-linear upper envelope to the target. Optimising this "constant factor" can be done agnostically to the target, and gives a good method, in the sense that $\mathbf{P}\left(\mathrm{accept}\right)\geqslant1-\mathrm{e}^{-1}>0$, for **any** log-concave density in one dimension.
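As a concrete illustration, here is a minimal Python sketch of this construction. The interface (`logpdf`, `dlogpdf`, `mode` as user-supplied inputs) and the doubling-plus-bisection search for the drop points are my own choices for the demonstration, not Devroye's notation; the drop factor is fixed at $\mathrm{e}^{-1}$ rather than optimised.

```python
import math
import random

def make_envelope_sampler(logpdf, dlogpdf, mode, seed=0):
    """Rejection sampler for a one-dimensional log-concave density.

    Builds the three-piece log-linear envelope described above: tangents
    to the (possibly unnormalised) log-density at the mode and at the two
    points where it has dropped by 1 from its value at the mode.
    """
    rng = random.Random(seed)
    lm = logpdf(mode)

    def drop_point(direction):
        # Expand outward until the log-density has dropped by at least 1,
        # then bisect to locate the crossing.
        step = 1.0
        while logpdf(mode + direction * step) > lm - 1.0:
            step *= 2.0
        lo, hi = mode, mode + direction * step
        for _ in range(80):
            mid = 0.5 * (lo + hi)
            if logpdf(mid) > lm - 1.0:
                lo = mid
            else:
                hi = mid
        return hi

    a, b = drop_point(-1.0), drop_point(+1.0)
    sa, sb = dlogpdf(a), dlogpdf(b)   # slopes: sa > 0, sb < 0
    # The side tangents meet the flat tangent (height lm) at xl and xr,
    # since logpdf(a) = logpdf(b) = lm - 1.
    xl = a + 1.0 / sa
    xr = b + 1.0 / sb

    # Piece masses, up to the common factor exp(lm).
    w_left, w_mid, w_right = 1.0 / sa, xr - xl, -1.0 / sb
    total = w_left + w_mid + w_right

    def sample():
        while True:
            u = rng.random() * total
            if u < w_mid:                     # flat piece: uniform on [xl, xr]
                x, h = xl + u, lm
            elif u < w_mid + w_left:          # left exponential tail
                x = xl + math.log(1.0 - rng.random()) / sa
                h = lm + sa * (x - xl)
            else:                             # right exponential tail
                x = xr + math.log(1.0 - rng.random()) / sb
                h = lm + sb * (x - xr)
            # Accept with probability exp(logpdf(x) - h).
            if math.log(1.0 - rng.random()) < logpdf(x) - h:
                return x

    return sample
```

For a standard Gaussian target (`logpdf = -x**2/2`), the drop points are $\pm\sqrt{2}$ and the envelope mass works out to about $2.83$ against a true mass of $\sqrt{2\pi}\approx2.51$, i.e. an acceptance rate near $0.89$, comfortably above the universal $1-\mathrm{e}^{-1}$ bound.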
## Some Recent Advances
A recent work of Chewi and coauthors studies the complexity of this algorithm in the setting of well-conditioned log-concave targets. The extra work for their positive result is to show that, under these assumptions, one can find these 'points of lower density' efficiently. They also use a slightly different envelope, which instead has Gaussian tails, though I suspect that this is not strictly necessary for controlling the acceptance rate. They then prove matching complexity lower bounds, verifying that this method is rate-optimal.
## Conclusion
For me, this says that if a sampling problem is reducible to sampling from a one-dimensional log-concave probability measure, then the problem can be solved simply and efficiently. It is not a surprising result, but it is nice to know which problems can safely be internalised as 'solved', and this result essentially accomplishes that.
