We define a Bayesian neural network that evolves over time, called a Hidden Markov neural network. The weights of a feed-forward neural network are modelled as the hidden states of a Hidden Markov model, whose observed process is given by the available data. A filtering algorithm is used to learn a variational approximation to the time-evolving posterior over the weights. Training proceeds through a sequential version of Bayes by Backprop (Blundell et al., 2015), enriched with a stronger regularization technique called variational DropConnect. The experiments test variational DropConnect on MNIST and demonstrate the performance of Hidden Markov neural networks on time series.
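The core mechanism described above can be sketched for a single scalar weight: the previous variational posterior is propagated through the Hidden Markov model's transition to give the prior at the next time step, and weights are drawn via the reparameterised sampling used in Bayes by Backprop. This is a minimal illustrative sketch, not the paper's implementation; the linear-Gaussian transition with coefficient `a` and noise variance `q` is an assumption made purely for demonstration.

```python
import numpy as np

# Hedged sketch: one scalar "weight" treated as the hidden state of a
# linear-Gaussian HMM, with a Gaussian variational posterior N(mu, var).
# The transition w_t = a * w_{t-1} + noise (variance q) is an illustrative
# assumption, not taken from the paper.

def propagate_posterior(mu, var, a=0.95, q=0.01):
    """Push the time-(t-1) Gaussian posterior through the HMM transition
    to obtain the time-t prior over the weight."""
    return a * mu, a**2 * var + q

def sample_weight(mu, var, rng):
    """Reparameterised sample w = mu + sigma * eps (Bayes-by-Backprop style)."""
    return mu + np.sqrt(var) * rng.standard_normal()

rng = np.random.default_rng(0)
mu, var = 0.0, 1.0
for t in range(3):
    mu, var = propagate_posterior(mu, var)  # prior for step t
    w = sample_weight(mu, var, rng)         # weight used in the forward pass
    # ...a gradient step on (mu, var) against the data at time t would go here
```

In the full method, each propagation step would be followed by a variational update of `(mu, var)` against the observation at that time, so the posterior tracks a weight that is itself allowed to drift.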
Submitted: 20 Jun 2020
Related thesis: High-dimensional hidden Markov models: methodology, computational issues, solutions and applications
Author: Rimella, L., 28 Sept 2021
Supervisor: Whiteley, N.
Student thesis: Doctoral Thesis › Doctor of Philosophy (PhD)