Recent models of learning in games have attempted to produce individual-level learning algorithms that are asymptotically characterised by the replicator dynamics of evolutionary game theory. In contrast, we describe a population-level model which is characterised by the smooth best response dynamics, a system which is intrinsic to the theory of adaptive behaviour in individuals. This model is novel in that the population members are not required to make any game-theoretic calculations; instead they simply assess the values of actions based upon observed rewards. We prove that this process must converge to the Nash distribution in several classes of games, including zero-sum games, games with an interior ESS, partnership games and supermodular games. A numerical example confirms the value of our approach for the Rock-Scissors-Paper game.
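The smooth best response dynamics mentioned above can be sketched numerically for the Rock-Scissors-Paper example. The following is a minimal illustration, not the paper's own implementation: the logit form of the smooth best response, the noise parameter `beta`, the step size, and the initial state are all assumptions chosen for demonstration.

```python
import numpy as np

# Zero-sum Rock-Scissors-Paper payoff matrix (row player's payoffs).
A = np.array([[0., -1., 1.],
              [1., 0., -1.],
              [-1., 1., 0.]])

def smooth_br(x, beta=2.0):
    """Logit (softmax) smoothed best response to population state x.

    beta is an assumed noise parameter: larger beta means a sharper
    (closer to exact) best response.
    """
    u = beta * (A @ x)            # expected payoff to each action
    e = np.exp(u - u.max())       # shift for numerical stability
    return e / e.sum()

# Euler discretisation of the smooth best response dynamics
#   x' = BR(x) - x
# starting from an arbitrary interior state.
x = np.array([0.8, 0.1, 0.1])
dt = 0.05
for _ in range(20000):
    x = x + dt * (smooth_br(x) - x)

print(np.round(x, 3))
```

In this zero-sum example the trajectory settles at the logit (Nash distribution) equilibrium, which by symmetry is the uniform mixture (1/3, 1/3, 1/3); under exact best responses the same initial state would instead cycle.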
Translated title of the contribution: Population-level reinforcement learning resulting in smooth best response dynamics
Publication status: Published - 2002