Abstract
Markov chain Monte Carlo (MCMC) methods to sample from a probability distribution $\pi$ defined on a space $(\Theta,\mathcal{T})$ consist of simulating realisations of Markov chains $\{\theta_{n},n\geq1\}$ with invariant distribution $\pi$ and such that the distribution of $\theta_{n}$ converges to $\pi$ as $n\rightarrow\infty$. In practice one is typically interested in computing expectations of functions, say $f$, with respect to $\pi$, and it is also required that the averages $M^{-1}\sum_{n=1}^{M}f(\theta_{n})$ converge to the expectation of interest. The iterative nature of MCMC makes it difficult to develop generic methods that take advantage of parallel computing environments when the aim is to reduce time to convergence. While numerous approaches have been proposed to reduce the variance of ergodic averages, including averaging over independent realisations of $\{\theta_{n},n\geq1\}$ simulated on several computers, techniques to reduce the "burn-in" of MCMC are scarce. In this paper we explore a simple and generic approach to improve convergence to equilibrium of existing algorithms that rely on the Metropolis-Hastings (MH) update, the main building block of MCMC. The main idea is to use averages of the acceptance ratio with respect to multiple realisations of the random variables involved, while preserving $\pi$ as the invariant distribution. The methodology requires limited changes to existing code, is naturally suited to parallel computing, and is shown in our examples to provide substantial performance improvements both in terms of convergence to equilibrium and variance of ergodic averages. In some scenarios gains are observed even on a serial machine.
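The flavour of the idea can be illustrated in the pseudo-marginal setting mentioned in the keywords, where the acceptance ratio involves an unbiased likelihood estimator built from auxiliary random variables: averaging the estimator over multiple independent realisations (each computable on a separate worker) keeps it unbiased, so the chain still targets the exact posterior. The sketch below is a minimal toy illustration under that assumption, not the paper's exact construction; the model, the flat prior, and the names `likelihood_estimate` and `pm_mh` are all hypothetical.

```python
import numpy as np

# Hypothetical toy model: y ~ N(theta + z, 1) with latent z ~ N(0, 1).
# We pretend the marginal likelihood p(y | theta) = N(y; theta, 2) is
# intractable and only use the unbiased estimator
#   p_hat(y | theta) = (1/N) * sum_i N(y; theta + u_i, 1),  u_i ~ N(0, 1).

def normal_pdf(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

def likelihood_estimate(theta, y, n_replicates, rng):
    """Average of n_replicates independent unbiased likelihood estimates.
    Each replicate could be simulated on a separate worker in parallel."""
    u = rng.standard_normal(n_replicates)  # auxiliary variables
    return normal_pdf(y, theta + u, 1.0).mean()

def pm_mh(y, n_iters, n_replicates, step=1.0, seed=0):
    """Pseudo-marginal MH with an averaged likelihood estimator.
    Flat prior on theta and a symmetric random-walk proposal, so the
    acceptance ratio reduces to the ratio of likelihood estimates."""
    rng = np.random.default_rng(seed)
    theta = 0.0
    lik_hat = likelihood_estimate(theta, y, n_replicates, rng)
    chain = np.empty(n_iters)
    for n in range(n_iters):
        prop = theta + step * rng.standard_normal()
        lik_prop = likelihood_estimate(prop, y, n_replicates, rng)
        # The ratio uses *averaged* estimates; each average is still an
        # unbiased estimator, so the exact posterior stays invariant.
        if rng.uniform() < lik_prop / lik_hat:
            theta, lik_hat = prop, lik_prop
        chain[n] = theta
    return chain

chain = pm_mh(y=1.5, n_iters=5000, n_replicates=32)
print(chain.mean())  # posterior here is N(y, 2), so roughly 1.5
```

Increasing `n_replicates` reduces the noise in the acceptance ratio, which is the mechanism behind the improved convergence to equilibrium; with the replicates farmed out to parallel workers this costs little extra wall-clock time.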
Original language | English
---|---
Journal | arXiv
Publication status | Published - 29 Dec 2020
Keywords
- doubly intractable distributions
- intractable likelihood
- Markov chain Monte Carlo
- pseudo-marginal Metropolis-Hastings
- reversible jump Monte Carlo
- sequential Monte Carlo
- state-space models
- particle MCMC