Jonathan Lawry
Proceedings Papers
ALIFE 2024: Proceedings of the 2024 Artificial Life Conference, 3 (July 22–26, 2024). doi:10.1162/isal_a_00711
Abstract
The collective perception problem is widely studied in swarm robotics, and many solutions have been proposed. However, the impact of faulty agents on the efficacy of these decision-making strategies has received less attention, and few mitigations for their negative effects have been suggested. This paper introduces a decentralised, immuno-inspired ‘check, track, mark’ (CTM) routine and tests its efficacy in mitigating the effect of faulty agents in the collective perception problem. The CTM routine is inspired by macrophages in the human immune system and their role in preventing pathogens from infecting healthy cells. We test the routine using three previously established decision-making strategies and a model of a detection algorithm with saturating true-positive and false-positive rates. We find that the proposed approach improves the ability of agents to reach an accurate consensus in the presence of faulty agents across all three decision-making strategies tested, with increases in accuracy of between 15% and 213%. For one strategy, the CTM routine also allows an accurate consensus to be reached in fewer timesteps, with a median decrease in time to consensus of 29%. Of the parameters associated with the CTM routine, we find the interval between initial checks to be the most significant in affecting the speed and accuracy with which the group reaches consensus.
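As a rough illustration of the mechanism described in this abstract, the sketch below implements a minimal ‘check, track, mark’ loop in Python. All names and parameter values (CHECK_INTERVAL, FAILS_TO_MARK, the form of the saturating rate) are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a 'check, track, mark' routine; parameter names and
# values are illustrative assumptions, not taken from the paper.
CHECK_INTERVAL = 10   # timesteps between initial checks of a neighbour
FAILS_TO_MARK = 3     # failed checks before a tracked neighbour is marked

def saturating_rate(k, r_max, half_sat):
    """A detection rate that saturates as the number of observations k
    grows, loosely mirroring a saturating true/false-positive model."""
    return r_max * k / (k + half_sat)

class Checker:
    """Per-agent bookkeeping: check a neighbour, track repeat offenders,
    mark (and thereafter ignore) agents judged faulty."""
    def __init__(self):
        self.tracked = {}    # neighbour id -> count of failed checks
        self.marked = set()  # neighbours whose opinions are ignored

    def check(self, neighbour_id, looks_faulty):
        if neighbour_id in self.marked:
            return
        if looks_faulty:
            fails = self.tracked.get(neighbour_id, 0) + 1
            self.tracked[neighbour_id] = fails
            if fails >= FAILS_TO_MARK:
                self.marked.add(neighbour_id)
        else:
            self.tracked.pop(neighbour_id, None)  # passed: stop tracking
```

In a full simulation, checks would fire every CHECK_INTERVAL timesteps and marked agents' opinions would simply be excluded from the decision-making strategy's pooling step.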
Proceedings Papers
ALIFE 2024: Proceedings of the 2024 Artificial Life Conference, 89 (July 22–26, 2024). doi:10.1162/isal_a_00833
Abstract
A bounded confidence model is introduced for probabilistic social learning in which individuals learn by regularly pooling their beliefs with those of others, and also by updating their beliefs based on noisy evidence received directly. Furthermore, an individual only pools with those others in their social network whose current beliefs are sufficiently similar to their own. Two measures of similarity between beliefs are considered, based on statistical distance and KL divergence. Agent-based simulations are then used to investigate the efficacy of the proposed model for detecting zealot agents, who do not learn from evidence or from their peers but instead constantly promote a fixed opinion. Results indicate that, given appropriate similarity thresholds, both metrics can be effective, but that statistical distance is less sensitive to the choice of similarity threshold, suggesting that it is less dependent on prior knowledge about the type of zealots present in the population. A central result is that the effectiveness of this form of collective reliability assessment is significantly reduced if learning agents have a high level of distrust of evidence. Extensions to the model are then investigated by incorporating weighted pooling and restricting interactions to small-world networks. Finally, bounded confidence is evaluated in multi-hypothesis scenarios in which a heterogeneous population of zealots advocates for different hypotheses.
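The two similarity measures named here are standard, so a small sketch can make the bounded confidence gate concrete. The pooling operator used below (a normalised product) is an assumption for illustration; the paper's operator may differ.

```python
import math

def statistical_distance(p, q):
    """Total variation distance between discrete distributions p and q."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def kl_divergence(p, q, eps=1e-12):
    """KL divergence D(p || q); eps guards against log of zero."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def bounded_confidence_pool(p, q, threshold, metric=statistical_distance):
    """Pool only with sufficiently similar beliefs; otherwise keep p.
    The normalised-product pooling here is an illustrative choice."""
    if metric(p, q) > threshold:
        return p  # too dissimilar: refuse to pool
    prod = [pi * qi for pi, qi in zip(p, q)]
    z = sum(prod)
    if z == 0:
        return p  # no overlap in support: keep own belief
    return [x / z for x in prod]
```

The detection mechanism the abstract describes falls out of the gate: as evidence-driven learners drift away from a zealot's fixed belief, the zealot increasingly fails the threshold test and is effectively excluded from pooling.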
Proceedings Papers
ALIFE 2022: The 2022 Conference on Artificial Life, 3 (July 18–22, 2022). doi:10.1162/isal_a_00480
Abstract
Social learning is an important collective behaviour in many biological and artificial systems. We investigate a model of social learning that combines two distinct processes: one governing how individuals adapt their beliefs as a result of interacting with their peers, and one governing when they search for evidence and how they learn from it directly. For each process we introduce conservative and open-minded behaviours, and combine these to obtain four social learning behaviour types. A simple truth-seeking task is considered and a three-valued model of belief states is adopted. By means of difference equation models and agent-based simulations we then investigate the performance of the different learning behaviours. We show that certain heterogeneous mixtures of behaviours give the most robust performance across a variety of learning rates and initial conditions, and that such mixtures are well suited to social learning in dynamic environments.
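A three-valued belief model of this kind is easy to sketch. The fusion table and the conservative/open-minded updates below are illustrative assumptions, not necessarily the operators used in the paper.

```python
# Belief states: 0 (false), 0.5 (undecided), 1 (true).

def fuse(b1, b2):
    """Illustrative consensus operator: agreement is kept, an undecided
    agent defers to its peer, and outright disagreement yields undecided."""
    if b1 == b2:
        return b1
    if b1 == 0.5:
        return b2
    if b2 == 0.5:
        return b1
    return 0.5  # 0 meets 1

def evidential_update(belief, evidence, open_minded):
    """Update from direct evidence in {0, 1}. An open-minded agent adopts
    the evidence outright; a conservative one moves a single step along
    the ladder 0 <-> 0.5 <-> 1 towards it."""
    if open_minded:
        return float(evidence)
    if belief < evidence:
        return belief + 0.5
    if belief > evidence:
        return belief - 0.5
    return belief
```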
Proceedings Papers
ALIFE 2022: The 2022 Conference on Artificial Life, 39 (July 18–22, 2022). doi:10.1162/isal_a_00522
Abstract
Collective behaviour research has tended to focus on behaviour arising in populations of homogeneous agents, yet humans, animals, robots and software agents typically exhibit various forms of heterogeneity. In natural systems, this heterogeneity has often been associated with improved performance. In this work, we ask whether spatial interference within a population of co-operating mobile agents can be managed effectively via conflict resolution mechanisms that exploit the population’s intrinsic heterogeneity. An idealised model of foraging is presented in which a population of simulated ant-like agents is tasked with making as many journeys as possible back and forth along a route that includes tunnels wide enough for only one agent. Four conflict resolution schemes are used to determine which agent has priority when two or more meet within a tunnel. These schemes are tested in heterogeneous populations of varying size. The findings demonstrate that a conflict resolution mechanism that exploits agent heterogeneity can significantly reduce the impact of spatial interference. However, whether a particular scheme succeeds depends on how the heterogeneity it exploits is implicated in the population-wide dynamics that underpin system-level performance.
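The contrast between a heterogeneity-blind and a heterogeneity-exploiting priority rule can be sketched in a few lines. The trait (speed) and both schemes below are purely illustrative; the paper defines four schemes of its own.

```python
import random

class Agent:
    """An ant-like agent with one fixed heterogeneous trait (illustrative)."""
    def __init__(self, agent_id, speed):
        self.id = agent_id
        self.speed = speed

def resolve_random(a, b):
    """Heterogeneity-blind baseline: a coin flip decides who proceeds."""
    return a if random.random() < 0.5 else b

def resolve_by_trait(a, b):
    """Heterogeneity-exploiting scheme: the agent with the higher trait
    value keeps the tunnel, so the same agents consistently yield."""
    return a if a.speed >= b.speed else b
```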
Proceedings Papers
ALIFE 2021: The 2021 Conference on Artificial Life, 77 (July 18–22, 2021). doi:10.1162/isal_a_00407
Abstract
A decentralised collective learning problem is investigated in which a population of agents attempts to learn the true state of the world based on direct evidence from the environment and belief fusion carried out during local interactions between agents. A parameterised fusion operator is introduced that returns beliefs of varying levels of imprecision. This is used to explore the effect of fusion imprecision on learning performance in a series of agent-based simulations. In general, the results suggest that imprecise fusion operators are optimal when the frequency of fusion is high relative to the frequency with which evidence is obtained from the environment.
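One way to picture a fusion operator with a tunable level of imprecision is as interval-valued beliefs whose width is controlled by a parameter. The operator below is purely illustrative and is not the operator defined in the paper.

```python
def imprecise_fuse(interval_a, interval_b, gamma):
    """Fuse two interval-valued beliefs [l, u] about a proposition.
    gamma in [0, 1] controls imprecision: gamma = 0 returns the precise
    pooled midpoint, gamma = 1 the full hull of the two intervals.
    An illustrative operator, not the one defined in the paper."""
    la, ua = interval_a
    lb, ub = interval_b
    lo, hi = min(la, lb), max(ua, ub)   # hull of the two intervals
    mid = (la + ua + lb + ub) / 4       # pooled point estimate
    return (mid + gamma * (lo - mid), mid + gamma * (hi - mid))

# Example: precise vs maximally imprecise fusion of two beliefs.
print(imprecise_fuse((0.6, 0.7), (0.4, 0.5), 0.0))  # approx. (0.55, 0.55)
print(imprecise_fuse((0.6, 0.7), (0.4, 0.5), 1.0))  # approx. (0.4, 0.7)
```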