Dynamic Bayesian Neural Networks

Lorenzo Rimella, Nick Whiteley

Research output: Contribution to conference › Conference Paper

Abstract

We define a Bayesian neural network whose parameters evolve in time, called a Hidden Markov neural network. The weights of a feed-forward neural network are modelled as the hidden states of a Hidden Markov model, whose observed process is given by the available data. A filtering algorithm is used to learn a variational approximation to the time-evolving posterior over the weights. Training proceeds through a sequential version of Bayes by Backprop (Blundell et al., 2015), enriched with a stronger regularization technique called variational DropConnect. The experiments test variational DropConnect on MNIST and demonstrate the performance of Hidden Markov neural networks on time series.
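To make the recursion concrete, the sketch below shows one plausible predict/update cycle under simplifying assumptions: a diagonal-Gaussian variational posterior over the weights, a linear-Gaussian AR(1) transition, and a reparameterised Monte Carlo ELBO in the style of Bayes by Backprop. It is an illustrative reconstruction, not the authors' implementation; the toy architecture, the transition parameters A and S, and all optimisation settings are invented for the example, and variational DropConnect is omitted.

```python
# Minimal sketch (not the authors' code) of one filtering step for a
# Hidden Markov neural network: weights follow a Gaussian random walk,
# and a diagonal-Gaussian variational posterior is first propagated
# through the transition (predict) and then refined on the newest data
# batch with a Bayes-by-Backprop-style reparameterised ELBO (update).
import torch
import torch.nn.functional as F

D_IN, D_H, D_OUT = 4, 16, 1   # toy architecture (assumption)
A, S = 0.98, 0.05             # assumed transition: w_t = A*w_{t-1} + N(0, S^2)

def flat_dim():
    return D_IN * D_H + D_H + D_H * D_OUT + D_OUT

def forward(x, w):
    """Feed-forward pass using a flat weight vector w."""
    i = 0
    W1 = w[i:i + D_IN * D_H].view(D_IN, D_H); i += D_IN * D_H
    b1 = w[i:i + D_H]; i += D_H
    W2 = w[i:i + D_H * D_OUT].view(D_H, D_OUT); i += D_H * D_OUT
    b2 = w[i:i + D_OUT]
    return torch.tanh(x @ W1 + b1) @ W2 + b2

def filter_step(mu, sigma, x_t, y_t, n_iters=50, lr=1e-2, n_mc=5):
    """One predict/update cycle of the variational filter."""
    # Predict: push N(mu, sigma^2) through the linear-Gaussian transition,
    # which stays Gaussian in closed form.
    mu_prior = A * mu
    sigma_prior = torch.sqrt(A**2 * sigma**2 + S**2)
    # Update: fit q_t = N(mu_q, sigma_q^2) to the new batch by maximising
    # E_q[log p(y_t | x_t, w)] - KL(q_t || prediction) via reparameterisation.
    mu_q = mu_prior.clone().requires_grad_(True)
    rho_q = torch.log(torch.expm1(sigma_prior)).clone().requires_grad_(True)
    opt = torch.optim.Adam([mu_q, rho_q], lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        sigma_q = F.softplus(rho_q)
        nll = 0.0
        for _ in range(n_mc):  # Monte Carlo estimate of the likelihood term
            w = mu_q + sigma_q * torch.randn_like(mu_q)
            nll = nll + F.mse_loss(forward(x_t, w), y_t, reduction="sum")
        nll = nll / n_mc
        # KL divergence between two diagonal Gaussians.
        kl = (torch.log(sigma_prior / sigma_q)
              + (sigma_q**2 + (mu_q - mu_prior)**2) / (2 * sigma_prior**2)
              - 0.5).sum()
        (nll + kl).backward()
        opt.step()
    return mu_q.detach(), F.softplus(rho_q).detach()

# Usage: a stream of batches, one filtering step per time index.
mu = torch.zeros(flat_dim())
sigma = 0.1 * torch.ones(flat_dim())
for t in range(3):
    x_t = torch.randn(32, D_IN)
    y_t = torch.sin(x_t.sum(dim=1, keepdim=True)) + 0.1 * torch.randn(32, 1)
    mu, sigma = filter_step(mu, sigma, x_t, y_t)
```

The linear-Gaussian transition is what keeps the predict step closed-form here; the KL is then taken against the predicted prior rather than a fixed one, which is what makes the Bayes-by-Backprop update sequential.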
Original language: English
Publication status: Submitted - 20 Jun 2020
