Distributed Possibilistic Learning in Multi-Agent Systems

Michael Crosscombe*, Jonathan Lawry, David Harvey

*Corresponding author for this work

Research output: Contribution to conference › Conference Paper (peer-reviewed)

Abstract

Possibility theory is proposed as an uncertainty representation framework for distributed learning in multi-agent systems and robot swarms. In particular, we investigate its application to the best-of-n problem, in which a population of agents aims to identify the highest-quality of n options through local interactions between individuals and limited direct feedback from the environment. In this context we claim that possibility theory provides efficient mechanisms by which agents can learn about the state of the world, and by which they can handle inconsistencies between their own beliefs and those of others by varying the imprecision of their beliefs. We introduce a discrete-time model of a population of agents applying possibility theory to the best-of-n problem. Simulation experiments are then used to investigate the accuracy of possibility theory in this context, as well as its robustness to noise under varying amounts of direct evidence. Finally, we compare this possibilistic approach with a similar probabilistic approach.
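The abstract describes agents fusing possibilistic beliefs during local interactions and increasing imprecision when beliefs conflict. A minimal sketch of one standard operator of this kind, the Dubois-Prade adaptive combination rule (the paper's exact fusion operator is not specified here, so this is an illustrative assumption, not the authors' method):

```python
# Hedged sketch: adaptive possibilistic fusion in the style of Dubois & Prade.
# Each agent's belief over the n options is a possibility distribution:
# a list of values in [0, 1] whose maximum is 1.

def fuse(pi1, pi2):
    """Combine two possibility distributions over the same n options.

    The degree of consistency h is the height (maximum) of the
    conjunctive (pointwise min) combination. When the beliefs are
    consistent (h > 0), the min combination is renormalised by h;
    when they are fully inconsistent (h == 0), the rule falls back
    to the disjunctive (pointwise max) combination, which yields a
    more imprecise belief rather than discarding either agent's view.
    """
    conj = [min(a, b) for a, b in zip(pi1, pi2)]
    h = max(conj)  # degree of consistency between the two agents
    if h > 0:
        return [c / h for c in conj]
    return [max(a, b) for a, b in zip(pi1, pi2)]

# Two partially agreeing agents: option 3 remains fully possible.
print(fuse([1.0, 0.25, 0.5], [0.25, 1.0, 0.5]))  # -> [0.5, 0.5, 1.0]

# Two fully conflicting agents: fusion widens to total imprecision.
print(fuse([1.0, 0.0], [0.0, 1.0]))  # -> [1.0, 1.0]
```

The fallback to the max rule under total conflict is one way an agent can "vary the level of imprecision" of its beliefs, as the abstract suggests.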
Original language: English
Publication status: Published - 2019
Event: The 3rd International Symposium on Swarm Behavior and Bio-Inspired Robotics
Duration: 20 Nov 2019 - 20 Nov 2019

Conference

Conference: The 3rd International Symposium on Swarm Behavior and Bio-Inspired Robotics
Abbreviated title: SWARM 2019
Period: 20/11/19 - 20/11/19
