This correspondence analyzes the almost sure rate of convergence of linear stochastic approximation algorithms. The main result demonstrates that their almost sure rate of convergence is equivalent to the almost sure rate of convergence of the averages of their input data sequences. In contrast to most existing results on the rate of convergence of stochastic approximation, which cover only algorithms whose noise decomposes as the sum of a martingale-difference sequence, a vanishing sequence, and a telescoping sequence, the main results of this correspondence hold under assumptions that do not require the input data sequences to admit any particular decomposition. Although no such decomposition is required, the results on the almost sure rate of convergence of linear stochastic approximation algorithms obtained here are as tight as the rate of convergence in the law of the iterated logarithm. Moreover, the main result yields the law of the iterated logarithm for linear stochastic approximation whenever the law of the iterated logarithm holds for the input data sequences. The general results are illustrated with two nontrivial examples in which the input data sequences are strongly mixing strictly stationary random processes or functions of a uniformly ergodic Markov chain. The results are also applied to the analysis of least mean square (LMS) algorithms.
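A minimal sketch (not from the paper) of the connection the abstract describes: with step sizes gamma_n = 1/(n+1), the scalar linear stochastic approximation recursion theta_{n+1} = theta_n + gamma_n (x_n - theta_n) coincides exactly with the running average of the input data, so the iterate inherits the almost sure convergence rate of the sample averages. The step-size choice and the scalar setting are illustrative assumptions, not the paper's general framework.

```python
import random

def linear_sa(samples):
    """Scalar linear stochastic approximation with gamma_n = 1/(n+1).

    With this step size the recursion
        theta_{n+1} = theta_n + gamma_n * (x_n - theta_n)
    unrolls to the running sample mean of the inputs, illustrating
    (in the simplest case) how the iterate tracks the input averages.
    """
    theta = 0.0
    for n, x in enumerate(samples):
        gamma = 1.0 / (n + 1)
        theta += gamma * (x - theta)
    return theta

# The iterate agrees with the plain sample average of the inputs.
random.seed(0)
data = [random.gauss(1.0, 1.0) for _ in range(10000)]
assert abs(linear_sa(data) - sum(data) / len(data)) < 1e-9
```

The general results of the paper concern vector-valued recursions with a driving matrix and weaker step-size and noise conditions; the sketch above only shows the averaging mechanism in its most transparent form.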
Translated title of the contribution: On the almost sure rate of convergence of linear stochastic approximation algorithms
Pages (from-to): 401-409
Number of pages: 9
Journal: IEEE Transactions on Information Theory
Publication status: Published - Feb 2004
Bibliographical note: Publisher: IEEE-Inst Electrical Electronics Engineers Inc
Other identifier: IDS number 773WY