Abstract
We consider a decentralized multi-agent Multi-Armed Bandit (MAB) setup consisting of N agents, solving the same MAB instance to minimize individual cumulative regret. In our model, agents collaborate by exchanging messages through pairwise gossip-style communications. We develop two novel algorithms, where each agent only plays from a subset of all the arms. Agents use the communication medium to recommend only arm IDs (not samples), and thus update the set of arms from which they play. We establish that, if agents communicate Ω(log(T)) times through any connected pairwise gossip mechanism, then every agent's regret is a factor of order N smaller compared to the case of no collaboration. Furthermore, we show that the communication constraints have only a second-order effect on the regret of our algorithm. We then analyze this second-order term of the regret to derive bounds on the regret-communication tradeoff. Finally, we empirically evaluate our algorithm and conclude that the insights are fundamental and not artifacts of our bounds. We also show a lower bound establishing that the regret scaling obtained by our algorithm cannot be improved even in the absence of any communication constraints. Our results demonstrate that even a minimal level of collaboration among agents greatly reduces regret for all agents.
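To make the mechanism described above concrete, here is a minimal toy sketch, not the paper's exact algorithm: each agent runs UCB1 over a small active arm set and, at gossip epochs, receives a single arm ID from a uniformly random peer, inserting it into its active set and eliminating its empirically worst arm. All function names, constants, and the specific recommend/eliminate rules here are illustrative assumptions.

```python
import math
import random


def simulate(n_agents=5, n_arms=20, horizon=2000, subset_size=4,
             comm_interval=100, seed=0):
    """Toy gossip-based arm-recommendation simulation (illustrative sketch only)."""
    rng = random.Random(seed)
    # Bernoulli arms with evenly spaced means; the last arm is best.
    means = [0.2 + 0.6 * a / (n_arms - 1) for a in range(n_arms)]
    best = max(means)
    # Each agent starts by playing only a random subset of arms.
    active = [rng.sample(range(n_arms), subset_size) for _ in range(n_agents)]
    counts = [[0] * n_arms for _ in range(n_agents)]
    sums = [[0.0] * n_arms for _ in range(n_agents)]
    regret = [0.0] * n_agents

    def ucb_pick(i, t):
        # UCB1 restricted to agent i's current active set.
        for a in active[i]:
            if counts[i][a] == 0:
                return a
        return max(active[i], key=lambda a: sums[i][a] / counts[i][a]
                   + math.sqrt(2 * math.log(t + 1) / counts[i][a]))

    for t in range(horizon):
        for i in range(n_agents):
            a = ucb_pick(i, t)
            reward = 1.0 if rng.random() < means[a] else 0.0
            counts[i][a] += 1
            sums[i][a] += reward
            regret[i] += best - means[a]
        if (t + 1) % comm_interval == 0:
            # Gossip epoch: each agent asks one random peer for its empirically
            # best arm ID; only the ID is communicated, never reward samples.
            recs = []
            for i in range(n_agents):
                j = rng.choice([k for k in range(n_agents) if k != i])
                recs.append(max(active[j],
                                key=lambda a: sums[j][a] / max(counts[j][a], 1)))
            for i, rec in enumerate(recs):
                if rec not in active[i]:
                    # Insert the recommended arm and eliminate the worst-looking
                    # arm, so the active set stays small.
                    worst = min(active[i],
                                key=lambda a: sums[i][a] / max(counts[i][a], 1))
                    active[i].remove(worst)
                    active[i].append(rec)
    return regret, active


regret, active = simulate()
```

Because good arm IDs spread through gossip while bad arms are eliminated, each agent explores far fewer arms than it would alone, which is the intuition behind the order-N regret reduction claimed in the abstract.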
Original language: English
Title of host publication: Proceedings of AISTATS
Number of pages: 44
Publication status: Accepted/In press, 17 Jan 2020
Fingerprint: research topics of 'The Gossiping-Insert-Eliminate Algorithm for Multi-Agent Bandits'.
Profiles

Dr A J Ganesh, Associate Professor, School of Mathematics
Statistical Science
Probability, Analysis and Dynamics
Cabot Institute for the Environment
Probability