On the Improvements and Innovations of Monte Carlo Methods

  • Chang Zhang

Student thesis: Doctoral Thesis, Doctor of Philosophy (PhD)

Abstract

Monte Carlo methods have played a central role in computational statistics for many years. While they have become powerful tools for solving scientific problems over years of development, Monte Carlo algorithms still suffer from several limitations that make them inefficient on more challenging problems. This thesis is concerned with improving existing Monte Carlo algorithms, as well as developing novel ideas that could potentially motivate extensive future work in the field of Monte Carlo methods.

In Chapter 1, we review the existing Monte Carlo methods and algorithms that are related to the work presented in this thesis. In Chapter 2, we focus on developing an improved algorithm for making inferences on piecewise-deterministic Markov processes (PDMPs). We combine the idea of block sampling (Doucet et al., 2006) with the existing particle filter for PDMPs (Godsill and Vermaak, 2005) to obtain an improved algorithm. Simulations show that the new algorithm is better able to locate the jumps that are likely to be missed by the existing particle filter. A particle Gibbs sampler based on the new algorithm is also developed in that chapter.
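To make the baseline concrete, the sketch below shows a standard bootstrap particle filter on a toy jump process (Bernoulli jump times, Gaussian jump sizes and observation noise). The model and all tuning constants are assumptions made purely for illustration and are not taken from the thesis.

```python
# A minimal bootstrap particle filter for a toy jump process, intended only to
# illustrate the kind of baseline filter that Chapter 2 improves on.  The model
# and parameter values below are illustrative assumptions, not the thesis's model.
import numpy as np

rng = np.random.default_rng(0)

T, N = 100, 500               # time steps, particles
p_jump, jump_sd = 0.05, 2.0   # jump probability and jump size scale
obs_sd = 0.5                  # observation noise scale

# Simulate a piecewise-constant latent signal observed in Gaussian noise.
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = x_true[t - 1] + (jump_sd * rng.standard_normal()
                                 if rng.random() < p_jump else 0.0)
y = x_true + obs_sd * rng.standard_normal(T)

# Bootstrap particle filter: propagate from the prior dynamics, weight by the
# likelihood of the new observation, then resample.
particles = np.zeros(N)
filter_mean = np.zeros(T)
for t in range(T):
    if t > 0:
        jumps = rng.random(N) < p_jump
        particles = particles + jumps * jump_sd * rng.standard_normal(N)
    logw = -0.5 * ((y[t] - particles) / obs_sd) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    filter_mean[t] = np.sum(w * particles)
    particles = rng.choice(particles, size=N, p=w)   # multinomial resampling

print("RMSE of filter mean:", np.sqrt(np.mean((filter_mean - x_true) ** 2)))
```

Because the jumps are proposed blindly from the prior dynamics, a jump that only becomes evident from later observations is easily missed; block-sampling strategies, which re-propose a recent block of states in light of new data, are designed to mitigate exactly this issue.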
In Chapters 3 and 4, we develop an ABC-SMC algorithm based on Del Moral et al. (2012). Inspired by the fact that many problems solved by ABC-type algorithms involve a generator whose output depends on simulated latent random variables, we develop a modified ABC-SMC algorithm that, instead of generating these latent random variables from scratch every time, targets a specific joint distribution of the latent random variables. Under the same computational budget, we show numerically that the new algorithm achieves a substantial improvement and obtains more accurate approximations of the true posteriors than the standard ABC-SMC algorithm. Simulations also show that the new algorithm scales well in high dimensions.
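The following is a minimal sketch of the underlying idea of treating the simulator's latent random variables as part of the target: an ABC-MCMC chain on the joint state (theta, u), with a preconditioned Crank-Nicolson move on u so that its Gaussian prior is preserved and u is perturbed rather than regenerated from scratch. The toy model, distance and tuning constants are assumptions for illustration and this is not the ABC-SMC algorithm of Chapters 3 and 4.

```python
# Sketch: ABC targeting the joint distribution of (theta, u), where u are the
# latent random variables driving the simulator.  Toy model and tuning
# constants are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(1)

n = 50
theta_true = 1.5
y_obs = theta_true + rng.standard_normal(n)      # observed data

def simulate(theta, u):
    # Deterministic simulator given (theta, u): y_i = theta + u_i, u ~ N(0, I).
    return theta + u

def distance(y_sim):
    return abs(y_sim.mean() - y_obs.mean())

eps = 0.05                # ABC tolerance
beta = 0.2                # pCN step size for u
sigma_theta = 0.1         # random-walk step size for theta

# Initialise from the prior theta ~ N(0, 2^2), u ~ N(0, I), until the
# ABC constraint is satisfied.
theta, u = 0.0, rng.standard_normal(n)
while distance(simulate(theta, u)) > eps:
    theta, u = 2.0 * rng.standard_normal(), rng.standard_normal(n)

samples = []
for it in range(20000):
    theta_p = theta + sigma_theta * rng.standard_normal()
    # pCN proposal keeps N(0, I) invariant, so u's prior cancels in the ratio;
    # only the theta prior N(0, 2^2) remains.
    u_p = np.sqrt(1.0 - beta ** 2) * u + beta * rng.standard_normal(n)
    log_ratio = (theta ** 2 - theta_p ** 2) / (2 * 2.0 ** 2)
    if distance(simulate(theta_p, u_p)) <= eps and np.log(rng.random()) < log_ratio:
        theta, u = theta_p, u_p
    samples.append(theta)

print("ABC posterior mean of theta ~", np.mean(samples[5000:]))
```

In the thesis's setting this kind of joint target is embedded in an SMC sampler in the spirit of Del Moral et al. (2012), with the tolerance decreased over iterations; the sketch above only shows the joint-target construction.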
In Chapter 5, we turn to novel Monte Carlo approaches in light of Dau and Chopin (2020). We develop a novel SMC algorithm that samples Markov process snippets whose states, with proper weights, can be used to approximate expectations with respect to the target. Numerical examples indicate that this novel algorithm offers a significant performance improvement over its competitor. Lastly, possible directions for future work based on the thesis are discussed in Section 5.6.
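For orientation, the sketch below implements the related waste-free SMC idea of Dau and Chopin (2020), in which each resampled particle is extended into a short chain of MCMC states (a snippet) and every state along the chain is retained for estimation. The algorithm developed in Chapter 5 differs in how snippets are generated and weighted, so this is background rather than the thesis's method; the tempered bimodal target and all tuning constants are assumptions for illustration.

```python
# Sketch in the spirit of waste-free SMC: resample M ancestors, extend each
# into a snippet of P MCMC states, and keep all N = M * P states.
# Target, tempering schedule and tuning constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

def log_pi0(x):                       # reference distribution: N(0, 3^2)
    return -0.5 * (x / 3.0) ** 2

def log_pi(x):                        # unnormalised bimodal target
    return np.logaddexp(-0.5 * ((x - 3.0) / 0.5) ** 2,
                        -0.5 * ((x + 3.0) / 0.5) ** 2)

def log_gamma(x, lam):                # tempered path between pi0 and pi
    return (1.0 - lam) * log_pi0(x) + lam * log_pi(x)

M, P = 100, 10                        # ancestors, snippet length; N = M * P
N = M * P
lambdas = np.linspace(0.0, 1.0, 21)   # fixed tempering schedule

x = 3.0 * rng.standard_normal(N)      # start from the reference distribution
for lam_prev, lam in zip(lambdas[:-1], lambdas[1:]):
    # Incremental weights pi_t / pi_{t-1} for every kept state.
    logw = log_gamma(x, lam) - log_gamma(x, lam_prev)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    ancestors = rng.choice(x, size=M, p=w)          # resample M ancestors
    # Extend each ancestor into a snippet of P states via random-walk
    # Metropolis targeting the new tempered distribution; keep every state.
    snippets = [ancestors]
    cur = ancestors.copy()
    for _ in range(P - 1):
        prop = cur + 1.0 * rng.standard_normal(M)
        accept = np.log(rng.random(M)) < log_gamma(prop, lam) - log_gamma(cur, lam)
        cur = np.where(accept, prop, cur)
        snippets.append(cur.copy())
    x = np.concatenate(snippets)                    # N states approximating pi_t

# All N states, not just the final one per chain, are used for estimation.
print("E[x] ~ %.3f, E[|x|] ~ %.3f" % (x.mean(), np.abs(x).mean()))
```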
Date of Award: 13 Oct 2022
Original language: English
Awarding Institution
  • University of Bristol
Supervisors: Mark A Beaumont (Supervisor) & Christophe Andrieu (Supervisor)
