Abstract
A bounded confidence model is introduced for probabilistic social learning in which individuals learn both by regularly pooling their beliefs with those of others and by updating their beliefs in light of noisy evidence they have received directly. Furthermore, an individual pools only with those others in their social network whose current beliefs are sufficiently similar to their own. Two measures of similarity between beliefs are considered, based on statistical distance and Kullback-Leibler (KL) divergence. Agent-based simulations are then used to investigate the efficacy of the proposed model for detecting zealot agents, who do not learn from evidence or from their peers but instead constantly promote a fixed opinion. Results indicate that, given appropriate similarity thresholds, both metrics can be effective, but that statistical distance is less sensitive to the choice of threshold, suggesting that it depends less on prior knowledge about the type of zealots present in the population. A central result is that the effectiveness of this form of collective reliability assessment is significantly reduced if learning agents have a high level of distrust of evidence. The model is then extended by incorporating weighted pooling and by restricting interactions to small-world networks. Finally, bounded confidence is evaluated in multi-hypothesis scenarios in which a heterogeneous population of zealots advocates for different hypotheses.
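The following is a minimal sketch of the bounded confidence pooling step described above, assuming beliefs are represented as discrete probability distributions over the hypotheses. The function names (`statistical_distance`, `kl_divergence`, `bounded_confidence_pool`) and the use of simple linear averaging as the pooling operator are illustrative assumptions only; the abstract does not specify the paper's exact pooling rule or belief representation.

```python
import numpy as np

def statistical_distance(p, q):
    """Total variation (statistical) distance between two discrete beliefs."""
    return 0.5 * np.sum(np.abs(p - q))

def kl_divergence(p, q, eps=1e-12):
    """KL divergence D(p || q); eps guards against zero probabilities."""
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    return np.sum(p * np.log(p / q))

def bounded_confidence_pool(belief, neighbour_beliefs, threshold,
                            distance=statistical_distance):
    """Pool an agent's belief with those neighbours whose beliefs fall
    within the similarity threshold. Linear averaging is an assumed
    stand-in for the paper's pooling operator."""
    similar = [b for b in neighbour_beliefs if distance(belief, b) <= threshold]
    if not similar:
        return belief  # no sufficiently similar neighbours; belief unchanged
    pooled = np.mean([belief] + similar, axis=0)
    return pooled / pooled.sum()  # renormalise to a probability distribution

# Hypothetical example: a zealot's fixed extreme belief is excluded,
# while a nearby peer's belief is pooled in.
agent = np.array([0.6, 0.4])
zealot = np.array([0.999, 0.001])
peer = np.array([0.5, 0.5])
print(bounded_confidence_pool(agent, [zealot, peer], threshold=0.2))
```

In a full simulation each agent would also update on its own noisy evidence each round, whereas a zealot skips both steps and keeps broadcasting the same fixed belief, which is what eventually pushes it outside other agents' similarity thresholds.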
| Original language | English |
|---|---|
| Title of host publication | ALIFE 2024 |
| Subtitle of host publication | Proceedings of the 2024 Artificial Life Conference |
| Publisher | Massachusetts Institute of Technology (MIT) Press |
| Pages | 89-98 |
| Number of pages | 9 |
| Publication status | Published - 22 Jul 2024 |