Imprecise belief fusion improves multi-agent social learning

Zixuan Liu*, Jonathan Lawry, Michael Crosscombe

*Corresponding author for this work

Research output: Contribution to journal › Article (Academic Journal) › peer-review

Abstract

In social learning, agents learn not only from direct evidence but also through interactions with their peers. We investigate the role of imprecision in such interactions and ask whether it can improve the effectiveness of the collective learning process. To that end we propose a model of social learning where beliefs are equivalent to formulas in a propositional language, and where agents learn from each other by combining their beliefs according to a fusion operator. The latter is parameterised so as to allow for different levels of imprecision, where a more imprecise fusion operator tends to generate a more imprecise fused belief when the two combined beliefs differ. In this context we describe both difference equation models and agent-based simulations of social learning under a variety of conditions and with different initial biases. The results presented suggest that for populations with a strong initial bias towards incorrect beliefs some level of imprecision in fusion can improve learning accuracy across a range of learning conditions. Furthermore, such benefits of imprecision are consistent with a stability analysis of the fixed points of the proposed difference equation models.
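The paper's fusion operator is not specified in this abstract, but the idea it describes can be illustrated with a minimal sketch. Assume (as a simplification) that a belief is represented as a set of candidate states of the world, so that two beliefs are consistent when their sets overlap. The `imprecision` parameter below is a hypothetical stand-in for the paper's parameterisation: when beliefs conflict, higher imprecision makes the fused belief retain more candidates rather than forcing a precise resolution.

```python
import random

def fuse(belief_a, belief_b, imprecision=0.5, rng=random):
    """Combine two beliefs, each a frozenset of candidate states.

    If the beliefs overlap, the fused belief is their intersection
    (agreement narrows the belief).  If they conflict, `imprecision`
    controls the outcome: with probability `imprecision` all candidates
    are kept (union, a maximally imprecise result); otherwise one of
    the two beliefs is adopted wholesale (precise but arbitrary).
    """
    common = belief_a & belief_b
    if common:
        return frozenset(common)
    if rng.random() < imprecision:
        return frozenset(belief_a | belief_b)
    return frozenset(rng.choice([belief_a, belief_b]))

# Agreement on state 2 narrows both beliefs to it:
# fuse(frozenset({1, 2}), frozenset({2, 3})) -> frozenset({2})
```

This is only one of many ways to make a fusion operator "more imprecise under disagreement"; the paper's own operator, defined over formulas in a propositional language, may differ in detail.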
Original language: English
Article number: 130424
Number of pages: 17
Journal: Physica A: Statistical Mechanics and its Applications
Volume: 664
Early online date: 8 Feb 2025
DOIs
Publication status: Published - 15 Apr 2025

Bibliographical note

Publisher Copyright:
© 2025 The Authors.
