A bounded confidence model is introduced for probabilistic social learning in which individuals learn by regularly pooling their beliefs with those of others, and also by updating their beliefs based on noisy evidence received directly. Furthermore, an individual only pools with those others in their social network whose current beliefs are sufficiently similar to their own. Two measures of similarity between beliefs are considered, based on statistical distance and Kullback-Leibler (KL) divergence. Agent-based simulations are then used to investigate the efficacy of the proposed model for detecting zealot agents, who do not learn from evidence or from their peers but instead constantly promote a fixed opinion. Results indicate that, given appropriate similarity thresholds, both metrics can be effective, but that statistical distance is less sensitive to the choice of threshold, suggesting that it depends less on prior knowledge about the type of zealots present in the population. A central result is that the effectiveness of this form of collective reliability assessment is significantly reduced if learning agents have a high level of distrust of evidence. Extensions to the model are then investigated by incorporating weighted pooling and by restricting interactions to small-world networks. Finally, bounded confidence is evaluated in multi-hypothesis scenarios in which a heterogeneous population of zealots advocates for different hypotheses.
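To make the described learning cycle concrete, the following is a minimal sketch of one possible bounded-confidence update step, assuming beliefs are discrete probability distributions over hypotheses. The linear pooling rule, the use of total variation as the "statistical distance", the threshold values, and all function names are illustrative assumptions rather than the paper's exact specification.

```python
# Illustrative sketch only: beliefs are probability vectors over hypotheses;
# an agent pools with peers whose beliefs are within a similarity threshold,
# then performs a Bayesian update on noisy evidence.
import numpy as np

def statistical_distance(p, q):
    """Total variation distance between two discrete distributions (assumed metric)."""
    return 0.5 * float(np.sum(np.abs(p - q)))

def kl_divergence(p, q, eps=1e-12):
    """KL divergence D(p || q), with a small epsilon to avoid log(0)."""
    p = np.clip(p, eps, None)
    q = np.clip(q, eps, None)
    return float(np.sum(p * np.log(p / q)))

def pool_beliefs(own, neighbour_beliefs, similarity, threshold):
    """Average own belief with those neighbours whose beliefs lie within the threshold."""
    close = [b for b in neighbour_beliefs if similarity(own, b) <= threshold]
    pooled = np.mean([own] + close, axis=0)
    return pooled / pooled.sum()

def evidence_update(belief, likelihoods):
    """Bayesian update of the belief given the likelihoods of the observed evidence."""
    posterior = belief * likelihoods
    return posterior / posterior.sum()

# Example: two hypotheses; one learner, one ordinary peer, and one zealot-like
# peer fixed near hypothesis 0 that falls outside the confidence threshold.
learner = np.array([0.5, 0.5])
peers = [np.array([0.6, 0.4]), np.array([0.99, 0.01])]
learner = pool_beliefs(learner, peers, statistical_distance, threshold=0.3)
learner = evidence_update(learner, likelihoods=np.array([0.3, 0.7]))
print(learner)
```

In this sketch the zealot-like peer is excluded from pooling because its belief lies beyond the similarity threshold, which is the mechanism the abstract describes for collectively discounting unreliable agents.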

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode.