Abstract
Motivated by the emerging need for learning algorithms in large-scale networked and decentralized systems, we introduce a distributed version of the classical stochastic Multi-Armed Bandit (MAB) problem. Our setting consists of a large number n of agents that collaboratively and simultaneously solve the same instance of a K-armed MAB to minimize the average cumulative regret over all agents. The agents can communicate and collaborate with each other only through a pairwise asynchronous gossip-based protocol that exchanges a limited number of bits. In our model, at each point an agent decides (i) which arm to play, (ii) whether to communicate, and if so, (iii) what to communicate and with whom. Agents in our model are decentralized: their actions depend only on their own observed history. We develop a novel algorithm in which agents, whenever they choose to communicate, send only arm-ids, and not samples, to another agent chosen uniformly and independently at random. The per-agent regret scaling achieved by our algorithm is $O\left( \frac{\lceil K/n \rceil + \log(n)}{\Delta} \log(T) + \frac{\log^3(n) \log\log(n)}{\Delta^2} \right)$, where $\Delta$ is the gap between the mean rewards of the best and second-best arms. Furthermore, any agent in our algorithm communicates (arm-ids, to a uniformly and independently chosen agent) only a total of $\Theta(\log(T))$ times over a time interval of $T$. We compare our results to two benchmarks: one where there is no communication among agents, and one corresponding to complete interaction, where an agent has access to the entire system history of arms played and rewards obtained by all agents. We show, both theoretically and empirically, that our algorithm achieves a significant reduction both in per-agent regret compared to the case where agents do not collaborate and each plays the standard MAB problem (where regret would scale linearly in K), and in communication complexity compared to the full-interaction setting, which requires T communication attempts by an agent over T arm pulls.
Our result thus demonstrates that even a minimal level of collaboration among the different agents enables a significant reduction in per-agent regret.
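The gossip idea described in the abstract can be illustrated with a toy simulation; the sketch below is a hypothetical illustration, not the paper's exact algorithm (the function name, arm means, and scheduling constants are invented for this example). Each agent runs UCB1 over a small "active" subset of arms and, at exponentially spaced times (so only O(log T) communication rounds in total), sends the id of its empirically best arm to one uniformly chosen agent, which adds that arm-id to its own active set.

```python
import math
import random

def simulate_gossip_bandits(n_agents=10, n_arms=20, horizon=2000, seed=0):
    """Toy sketch of gossip-based collaborative bandits (illustrative only).

    Each agent runs UCB1 over its own small active arm set. At times t that
    are powers of two (O(log T) rounds), each agent sends its empirically
    best arm-id -- no reward samples -- to one uniformly random agent.
    Returns the average (per-agent) cumulative regret.
    """
    rng = random.Random(seed)
    # Hypothetical instance: arm 0 is best, gap Delta = 0.1 to all others.
    means = [0.9] + [0.8] * (n_arms - 1)
    best_mean = max(means)

    # Each agent starts with a distinct slice of roughly ceil(K/n) arms.
    per = math.ceil(n_arms / n_agents)
    active = [set(range(i * per, min((i + 1) * per, n_arms))) or {0}
              for i in range(n_agents)]
    counts = [[0] * n_arms for _ in range(n_agents)]
    sums = [[0.0] * n_arms for _ in range(n_agents)]
    regret = 0.0

    for t in range(1, horizon + 1):
        for a in range(n_agents):
            # UCB1 over this agent's active arms; play untried arms first.
            untried = [k for k in active[a] if counts[a][k] == 0]
            if untried:
                arm = untried[0]
            else:
                arm = max(active[a],
                          key=lambda k: sums[a][k] / counts[a][k]
                          + math.sqrt(2 * math.log(t) / counts[a][k]))
            reward = 1.0 if rng.random() < means[arm] else 0.0
            counts[a][arm] += 1
            sums[a][arm] += reward
            regret += best_mean - means[arm]

        # Gossip at exponentially spaced times: t a power of two.
        if (t & (t - 1)) == 0:
            for a in range(n_agents):
                played = [k for k in active[a] if counts[a][k] > 0]
                if not played:
                    continue
                best = max(played, key=lambda k: sums[a][k] / counts[a][k])
                target = rng.randrange(n_agents)
                active[target].add(best)  # only an arm-id is exchanged

    return regret / n_agents  # per-agent regret
```

In this sketch, good arm-ids spread through the population by gossip, so each agent ends up exploring far fewer than K arms, mirroring the ⌈K/n⌉ term in the regret bound above.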
Original language: English
Article number: 53
Number of pages: 35
Journal: Proceedings of the ACM on Measurement and Analysis of Computing Systems
Volume: 3
Issue number: 3
Publication status: Published, 17 Dec 2019
Fingerprint
Dive into the research topics of 'Social learning in multi-agent multi-armed bandits'.

Profiles

Dr A J Ganesh
School of Mathematics, Associate Professor
Statistical Science
Probability, Analysis and Dynamics
Cabot Institute for the Environment
Probability
Person: Academic, Member