We present a Bayesian reinforcement learning model with a working memory module that can solve some non-Markovian decision processes. The model is tested, and compared against SARSA(λ), on a standard working-memory task from the psychology literature. Our method uses the Kalman temporal difference framework, and its extension to stochastic state transitions, to obtain posterior distributions over state-action values. This framework provides a natural mechanism for using reward information to update more than the current state-action pair, and thus obviates the need for eligibility traces. Furthermore, the availability of full posterior distributions allows the use of Thompson sampling for action selection, which in turn removes the need to choose an appropriately parameterised action-selection method.
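To make the two key ideas in the abstract concrete, here is a minimal sketch, not the authors' implementation: a tabular agent that keeps a Gaussian posterior (mean and variance) over each state-action value, applies a scalar Kalman-filter update to the TD target, and selects actions by Thompson sampling. All names and the noise parameters (`obs_noise`, `process_noise`) are illustrative assumptions; the paper's Kalman temporal difference framework maintains a joint covariance over all state-action values, which is what allows a single reward to update correlated state-action pairs, whereas the per-entry scalar filter below is a simplification.

```python
import numpy as np

n_states, n_actions = 5, 2
gamma = 0.9            # discount factor
obs_noise = 1.0        # assumed reward-observation noise variance
process_noise = 1e-3   # assumed drift of the values over time

mean = np.zeros((n_states, n_actions))       # posterior means of Q-values
var = np.full((n_states, n_actions), 10.0)   # posterior variances (uncertainty)

def select_action(state):
    # Thompson sampling: draw one sample per action from the posterior
    # and act greedily on the samples; no epsilon or temperature to tune.
    samples = np.random.normal(mean[state], np.sqrt(var[state]))
    return int(np.argmax(samples))

def kalman_td_update(s, a, r, s_next):
    # Scalar Kalman-filter update on the TD target for the pair (s, a).
    target = r + gamma * mean[s_next].max()
    prior_var = var[s, a] + process_noise
    gain = prior_var / (prior_var + obs_noise)    # Kalman gain
    mean[s, a] += gain * (target - mean[s, a])    # innovation-weighted update
    var[s, a] = (1.0 - gain) * prior_var          # posterior variance shrinks
```

Because exploration falls out of the posterior (actions with uncertain values are sampled optimistically often enough to be tried), there is no exploration schedule to tune, which is the point the abstract makes about Thompson sampling.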
| Field | Value |
| --- | --- |
| Title of host publication | 2015 IEEE Symposium Series on Computational Intelligence |
| Subtitle of host publication | Proceedings of a meeting held 7-10 December 2015, Cape Town, South Africa |
| Publisher | Institute of Electrical and Electronics Engineers (IEEE) |
| Number of pages | 8 |
| Publication status | Published - Apr 2016 |