Abstract
We define a time-evolving Bayesian neural network called a Hidden Markov neural network. The weights of a feed-forward neural network are modelled as the hidden states of a Hidden Markov model, whose observed process is given by the available data. A filtering algorithm is used to learn a variational approximation to the time-evolving posterior over the weights. Training proceeds through a sequential version of Bayes by Backprop (Blundell et al., 2015), enriched with a stronger regularization technique called variational DropConnect. The experiments test variational DropConnect on MNIST and demonstrate the performance of Hidden Markov neural networks on time-series data.
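To make the construction concrete, below is a minimal PyTorch sketch of the sequential Bayes-by-Backprop step described in the abstract, under assumptions the abstract does not spell out: a mean-field Gaussian variational family and a Gaussian random-walk transition kernel over the weights. The class name `SequentialBayesLinear` and all hyperparameters are hypothetical, and the variational DropConnect regularizer is not shown; this is an illustrative sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SequentialBayesLinear(nn.Module):
    """Linear layer (bias omitted for brevity) with a Gaussian variational
    posterior over its weights. At each time step the filtered posterior,
    propagated through a Gaussian random-walk transition kernel, becomes
    the prior for the next Bayes-by-Backprop update."""

    def __init__(self, in_features, out_features, transition_std=0.1):
        super().__init__()
        # Variational parameters of q(W_t): mean and (softplus-transformed) scale.
        self.mu = nn.Parameter(torch.zeros(out_features, in_features))
        self.rho = nn.Parameter(torch.full((out_features, in_features), -3.0))
        # Prior starts as a standard Gaussian; updated after every time step.
        self.register_buffer("prior_mu", torch.zeros(out_features, in_features))
        self.register_buffer("prior_std", torch.ones(out_features, in_features))
        self.transition_std = transition_std

    def std(self):
        return F.softplus(self.rho)

    def forward(self, x):
        # Reparameterised weight sample (Bayes by Backprop).
        w = self.mu + self.std() * torch.randn_like(self.mu)
        return x @ w.t()

    def kl(self):
        # KL between the variational posterior and the propagated prior.
        q = torch.distributions.Normal(self.mu, self.std())
        p = torch.distributions.Normal(self.prior_mu, self.prior_std)
        return torch.distributions.kl_divergence(q, p).sum()

    @torch.no_grad()
    def step_time(self):
        # Propagate the filtered posterior through the random-walk kernel
        # p(W_t | W_{t-1}) = N(W_{t-1}, transition_std^2 I) to form the
        # prior for the next time step.
        self.prior_mu.copy_(self.mu)
        self.prior_std.copy_(torch.sqrt(self.std() ** 2 + self.transition_std ** 2))

# Toy usage: a stream of regression batches, one per time step.
layer = SequentialBayesLinear(4, 1)
opt = torch.optim.Adam(layer.parameters(), lr=1e-2)
for t in range(10):
    x_t, y_t = torch.randn(32, 4), torch.randn(32, 1)  # placeholder data
    for _ in range(50):  # inner Bayes-by-Backprop iterations at time t
        loss = F.mse_loss(layer(x_t), y_t) + layer.kl() / x_t.shape[0]
        opt.zero_grad()
        loss.backward()
        opt.step()
    layer.step_time()  # propagate the filtered posterior to time t + 1
```

The key design point in this sketch is that filtering reduces to reusing the propagated posterior as the next prior, so the network can track non-stationary data without retraining from scratch.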
| Original language | English |
| --- | --- |
| Publication status | Submitted - 20 Jun 2020 |