Results 1-3 of 3 for author Amine Boumaza
Proceedings Papers
ALIFE 2024: Proceedings of the 2024 Artificial Life Conference, 38 (July 22–26, 2024). doi:10.1162/isal_a_00756
Abstract
Online Embodied Evolution is a distributed learning method for heterogeneous robotic swarms, in which evolution is carried out in a decentralized manner. These algorithms are well suited to open-ended evolution, where the goal is to evolve efficient survival strategies in a priori unknown and changing environments. In this work, we are interested in analyzing the features of the environment that favor the evolution of certain behaviors. We hypothesize that certain types of perceivable artifacts in the environment provide affordances that exert a selection pressure toward the fixation of certain behaviors. We test this hypothesis in simulation in two different settings: an open-ended environment without any selection pressure, and one in which originality of behaviors is enforced. The experiments show that in the open-ended environment, agents evolved a type of behavior that exploited affordances, hinting at the existence of a selection pressure exerted by the affordances, as claimed in ecological psychology for natural systems. We present different objective and subjective measures to support this claim.
Proceedings Papers
ECAL 2017, the Fourteenth European Conference on Artificial Life, 162-161 (September 4–8, 2017). doi:10.1162/isal_a_028
Abstract
In this paper, we study how a swarm of robots adapts over time to solve a collaborative task using a distributed Embodied Evolutionary approach, in which each robot runs an evolutionary algorithm and the robots locally exchange genomes and fitness values. In particular, we study a collaborative foraging task, where the robots are rewarded for collecting food items that are too heavy to be collected individually and require at least two robots. Further, the robots also need to display a signal matching the color of the item with an additional effector. Our experiments show that the distributed algorithm is able to evolve swarm behavior that collects items cooperatively. The experiments also reveal that effective cooperation is due mostly to the ability of robots to jointly reach food items, while learning to display the right color matching the item is done suboptimally. However, a closer analysis shows that, even without a mechanism to avoid neglecting any kind of item, robots collect all of them, which means that there is some degree of learning to choose the right value for the color effector depending on the situation.
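The decentralized loop the abstract describes (each robot runs its own evolutionary algorithm and locally exchanges genomes and fitness values with neighbors) can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: the `Robot` class, the toy quadratic fitness standing in for foraging success, and the fully connected exchange (real robots would only reach radio-range neighbors) are all assumptions made for the sake of a runnable example.

```python
import random

random.seed(0)

class Robot:
    """One agent in a decentralized Embodied Evolution swarm (toy sketch)."""
    def __init__(self, genome_len=4):
        self.genome = [random.uniform(-1, 1) for _ in range(genome_len)]
        self.fitness = 0.0
        self.inbox = []  # (genome, fitness) pairs received from neighbors

    def evaluate(self):
        # Placeholder fitness: higher is better, optimum at the zero genome.
        # In the paper this would come from foraging success instead.
        self.fitness = -sum(g * g for g in self.genome)

    def broadcast(self, neighbors):
        # Locally exchange genome and fitness with nearby robots.
        for other in neighbors:
            other.inbox.append((list(self.genome), self.fitness))

    def select_and_mutate(self, sigma=0.1):
        # Adopt the best genome seen locally (own genome included), then mutate.
        candidates = self.inbox + [(list(self.genome), self.fitness)]
        best_genome, _ = max(candidates, key=lambda c: c[1])
        self.genome = [g + random.gauss(0.0, sigma) for g in best_genome]
        self.inbox.clear()

swarm = [Robot() for _ in range(10)]
for generation in range(50):
    for r in swarm:
        r.evaluate()
    for r in swarm:
        # Fully connected here for simplicity.
        r.broadcast([o for o in swarm if o is not r])
    for r in swarm:
        r.select_and_mutate()

for r in swarm:
    r.evaluate()
best = max(r.fitness for r in swarm)
```

Note the design point this makes concrete: no robot ever sees global state, yet selection still operates because fitness information diffuses through local exchanges.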
Proceedings Papers
ALIFE 14: The Fourteenth International Conference on the Synthesis and Simulation of Living Systems, 282-289 (July 30–August 2, 2014). doi:10.1162/978-0-262-32621-6-ch046