Abstract
We develop a practical approach to establishing the stability, that is, the recurrence in a given set, of a large class of controlled Markov chains. These processes arise in various areas of applied science and encompass important numerical methods. We show in particular how individual Lyapunov functions and associated drift conditions for the parametrised family of Markov transition probabilities and for the parameter update can be combined to form Lyapunov functions for the joint process, leading to a proof of the desired stability property. Of particular interest is the fact that the approach applies even in situations where the two components of the process exhibit a time-scale separation, a crucial feature in practice. We then show how such a recurrence property can be used in the context of stochastic approximation to prove the convergence of the parameter sequence, including when the so-called stepsize is adaptively tuned. We finally show that the results apply to various algorithms of interest in computational statistics and cognate areas.
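As a rough illustration of the kind of ingredient the abstract refers to (the precise assumptions are those of the paper and are not reproduced here), a geometric drift condition for each kernel $P_\theta$ of the parametrised family is commonly written as

```latex
% Illustrative geometric drift condition for the parametrised kernels P_theta;
% the constants and the set C may in general depend on theta.
P_\theta V(x) \;\le\; \lambda_\theta\, V(x) \;+\; b_\theta\, \mathbf{1}_C(x),
\qquad \lambda_\theta \in (0,1), \quad b_\theta < \infty .
```

In the usual stochastic-approximation notation, a companion Lyapunov-type condition on the parameter update $\theta_{n+1} = \theta_n + \gamma_{n+1} H(\theta_n, X_{n+1})$ is then combined with the above to obtain a Lyapunov function, and hence recurrence, for the joint chain $(X_n, \theta_n)$.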
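For concreteness, the sketch below shows one generic controlled Markov chain of this type: a random-walk Metropolis chain whose proposal scale is tuned by a Robbins-Monro update towards a target acceptance rate. The Gaussian target, the function names and the deterministic stepsize schedule are illustrative assumptions, not the algorithms analysed in the paper; in particular, the adaptively tuned stepsizes covered by the results are not implemented in this toy example.

```python
import numpy as np

rng = np.random.default_rng(0)


def log_target(x):
    # Hypothetical target: standard Gaussian log-density (up to a constant).
    return -0.5 * x**2


def sa_adaptive_rwm(n_iter=5000, target_acc=0.44):
    """Stochastic-approximation tuning of a random-walk Metropolis scale.

    X_n evolves under the kernel P_theta (Gaussian random walk with
    log-scale theta), while theta follows a Robbins-Monro update.
    """
    x, theta = 0.0, 0.0          # chain state and controlled parameter
    for n in range(1, n_iter + 1):
        # One step of the parametrised kernel P_theta.
        prop = x + np.exp(theta) * rng.normal()
        log_alpha = min(0.0, log_target(prop) - log_target(x))
        if np.log(rng.uniform()) < log_alpha:
            x = prop
        # Parameter update H(theta, X_{n+1}) with a decaying stepsize gamma_n.
        gamma = 1.0 / n**0.6
        theta += gamma * (np.exp(log_alpha) - target_acc)
    return x, theta


if __name__ == "__main__":
    x, theta = sa_adaptive_rwm()
    print(f"final state {x:.3f}, adapted log-scale {theta:.3f}")
```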
| Original language | English |
| --- | --- |
| Number of pages | 31 |
| Journal | Annals of Applied Probability |
| DOIs | |
| Publication status | Published - 1 Feb 2015 |