###### tags: `one-offs` `monte carlo` `sampling` `diagnostics`
# Convergence Diagnostics for MCMC
**Overview**: In this note, I describe some aspects of the convergence diagnosis problem for MCMC, and make some observations on how the existing literature approaches it. I then outline some perspectives on why certain approaches might be appealing at a conceptual or theoretical level.
## Introduction
I'm finding it interesting to read about convergence diagnostics for MCMC. These algorithms are of course being used in practice (with or without explicit diagnostics), and as a result, the development seems to have initially been at a more intuitive and algorithmic level, with a focus on sensible and practical contributions. There are works around the topic which are theoretical in character, but (without having fully digested the literature) one gets the impression that there aren't so many results which are rigorous **and** tell you what you'd honestly like to know about a given diagnostic.
It bears acknowledging that this is true for large parts of the theory of *any* class of practical algorithms. Nevertheless, for me, it felt a bit more pointed in this case; there seem to be few examples of proving nice things under strong-but-non-fictional assumptions.
## Algorithm Structure: Acknowledge or Ignore?
Interestingly, there is also a challenging tension over how much structure to use. At one end, one could attempt to use the structure of the algorithm, in terms of a stationary, Markovian process, to inform some diagnostic. It seems that this approach is prone to generating diagnostics which lean too heavily on asymptotic intuition. The difficulty here is that stationary Markov chains have many nice properties, but for the problem of convergence diagnosis, the interesting case is non-stationary chains, or at least chains which lie close to the boundary of stationarity. One then wonders whether, for the purposes of convergence diagnosis, the structure of the algorithm might be something of a red herring.
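To make the "near the boundary" regime concrete, here is a minimal (and entirely toy) illustration: an AR(1) chain with autocorrelation `rho` close to 1. Its stationary distribution is Gaussian, but the bias from the initialization decays only geometrically, so a chain started far away remains visibly non-stationary for a very long time. The specific values (`rho = 0.999`, `x0 = 50`) are my own illustrative choices, not from the note.

```python
# Toy example: an AR(1) chain x_t = rho * x_{t-1} + noise, with rho near 1.
# The stationary distribution is N(0, 1 / (1 - rho**2)), but a chain
# initialized at x0 has deterministic bias E[x_t] = x0 * rho**t, which
# decays very slowly when rho is close to 1 -- precisely the regime in
# which stationary (asymptotic) intuition is least reliable.
rho, x0 = 0.999, 50.0

for t in (100, 1_000, 10_000):
    print(f"t = {t:>6}: E[x_t] = {x0 * rho**t:.4f}")
```

After a thousand steps the expected state is still around 18, an order of magnitude above the stationary mean of 0, even though every stationarity-based summary of the chain's transition kernel is perfectly well behaved.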
The other end of the spectrum is then to consider the problem as quasi-structureless, i.e. acknowledge only that your algorithm generates samples in some way, broadly trust that it is consistent as the number of iterations grows, and accept that the role of the Markov structure is mostly to give you this trust, rather than to diagnose the practical performance. This is more of a black-box approach, in which some information about the algorithm is thrown away; one expects that this might lead to more conservative procedures in general, at least if taken seriously.
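A familiar diagnostic in this black-box spirit is the (classic, non-rank-normalized) split-$\hat{R}$ of Gelman and Rubin: run several chains from dispersed starting points and compare between-chain to within-chain variability, using the draws alone. A minimal sketch, under the assumption of equal-length chains stored as a 2-D array:

```python
import numpy as np

def split_rhat(chains):
    """Classic split-R-hat on an (m, n) array of m chains of length n.

    The sampler is treated as a black box: only the draws are used.
    Each chain is split in half so that within-chain drift also shows
    up as between-"chain" disagreement.
    """
    m, n = chains.shape
    half = n // 2
    split = chains[:, : 2 * half].reshape(2 * m, half)
    within = split.var(axis=1, ddof=1).mean()            # W
    between = half * split.mean(axis=1).var(ddof=1)      # B
    var_plus = (half - 1) / half * within + between / half
    return float(np.sqrt(var_plus / within))

rng = np.random.default_rng(1)

# Four chains drawing from the same distribution: R-hat should be near 1.
good = rng.standard_normal((4, 1000))
print(split_rhat(good))

# Chains stuck near two separated modes: R-hat should be well above 1.
bad = rng.standard_normal((4, 1000)) + np.array([[0.0], [0.0], [5.0], [5.0]])
print(split_rhat(bad))
```

Note how little this uses the Markov property itself: the structure enters only implicitly, through the belief that each chain, if left long enough, would agree with the others.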
## Some Weak and Partial Conclusions
I don't really know which way is most sensible. One of the few things in which I have a bit of confidence is that the tests themselves should be relatively weak, e.g. it is better to test that your algorithm is getting some simple, relevant moments correct than to undertake some more stringent nonparametric test of equality of distributions. It seems sensible that the asymptotic theory of Markov processes should be present in the tests in some sense (e.g. as a backstop to ensure that, under some conditions, you will eventually diagnose that a convergent chain is converging), but it's hard to gauge how much more of a role it should play.
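As one concrete instance of such a weak test, here is a sketch (my own illustrative construction, not a standard named diagnostic) that checks whether a chain's estimate of a known mean is within a few Monte Carlo standard errors of the target, with the standard error estimated by batch means so that autocorrelation is roughly accounted for:

```python
import numpy as np

def batch_means_se(x, n_batches=20):
    """Batch-means estimate of the Monte Carlo standard error of the mean.

    Contiguous batches much longer than the chain's autocorrelation time
    have approximately independent means, so their spread estimates the
    uncertainty in the overall average.
    """
    n = len(x) // n_batches * n_batches
    batch_means = x[:n].reshape(n_batches, -1).mean(axis=1)
    return batch_means.std(ddof=1) / np.sqrt(n_batches)

def moment_check(x, target_mean, z=3.0):
    """Weak test: is the chain's mean within z standard errors of target?"""
    return abs(x.mean() - target_mean) <= z * batch_means_se(x)

rng = np.random.default_rng(2)

# A correlated chain with the correct stationary mean of 0 (AR(1), rho = 0.9).
x = np.empty(10_000)
x[0] = 0.0
for t in range(1, len(x)):
    x[t] = 0.9 * x[t - 1] + rng.standard_normal()

print(moment_check(x, target_mean=0.0))   # should typically pass
print(moment_check(x + 5.0, target_mean=0.0))  # badly biased: should fail
```

This is deliberately far weaker than, say, a two-sample Kolmogorov–Smirnov test between chains; it only asks that the one moment you actually care about is plausible, which is the spirit of the recommendation above.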
Anyway, it seems like there is much to be done. I'm not sure what other problems in numerical computation have this flavour; convergence diagnosis for deterministic problems is typically more transparent, and for methods like crude Monte Carlo, there is no 'convergence' per se, so much as there is basic control of variance, mean-square error, and so on. It would be insightful to identify similar problems, though, especially those for which satisfactory diagnostics have been developed.