Hiroyuki Mizuno
Proceedings Papers
isal2024, ALIFE 2024: Proceedings of the 2024 Artificial Life Conference, 9 (July 22–26, 2024). doi: 10.1162/isal_a_00720
Abstract
To achieve autonomous cooperation among heterogeneous agents, we propose the artificial minimal self. The artificial minimal self is implemented based on the free energy principle (FEP), which all living organisms are supposed to follow. In the standard FEP, the generative model configured for each physical agent consists only of its own observations and actions. The key to autonomous cooperation among heterogeneous agents is to extend the sense of self to others, so that interaction with others is incorporated into the self. In this study, drawing on Gallagher’s minimal self, in which the self is extended to others, the generative model integrates the agent’s own observations and actions with those of other heterogeneous agents. Multiple heterogeneous physical agents have different embodiments and therefore different types of observations and actions. To integrate these, we define observations and actions in terms of environmental information that is independent of each agent’s embodiment. Integrating the observations and actions of multiple physical agents into a single generative model extends the sense of self to others, leading to autonomous cooperation between heterogeneous agents. We demonstrated cooperative object transport by two heterogeneous robot arms implementing the artificial minimal self. The results confirmed that each robot arm can autonomously share the task and transport the object when given only the common goal of placing the object at the goal location. Furthermore, the process of self-extension to others reproduced the behavior of observing the characteristics of others that is seen in human cooperation.
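
A minimal sketch of the idea described above, under assumptions not stated in the abstract: the object state, the goal, the candidate actions, and the use of squared distance-to-goal as a stand-in for free energy are all illustrative. Both agents' actions are expressed as displacements of the object in a shared world frame (embodiment-independent), and each agent plans on a joint generative model that also contains the partner's actions.

# Illustrative sketch only, not the authors' implementation.
import numpy as np

GOAL = np.array([1.0, 0.5])           # common goal: where the object should end up

class Agent:
    def __init__(self, reach):
        self.reach = reach             # embodiment-specific constraint (max step size)

    def candidate_actions(self):
        # Actions are displacements of the *object* in the world frame, not joint
        # commands, so both agents' actions live in the same space.
        steps = np.linspace(-self.reach, self.reach, 5)
        return [np.array([dx, dy]) for dx in steps for dy in steps]

def free_energy(predicted_obj, goal=GOAL):
    # Stand-in for (expected) free energy: squared distance of the predicted
    # object position from the preferred (goal) position.
    return np.sum((predicted_obj - goal) ** 2)

def step(obj, agents):
    # Each agent plans on the joint model: it predicts the object pose resulting
    # from its own action plus the best action it attributes to the partner,
    # i.e. the "self" is extended to include the other agent.
    actions = []
    for i, me in enumerate(agents):
        other = agents[1 - i]
        best, best_f = None, np.inf
        for a_me in me.candidate_actions():
            f = min(free_energy(obj + a_me + a_ot) for a_ot in other.candidate_actions())
            if f < best_f:
                best, best_f = a_me, f
        actions.append(best)
    return obj + sum(actions)

obj = np.array([0.0, 0.0])
agents = [Agent(reach=0.05), Agent(reach=0.15)]    # two different embodiments
for _ in range(20):
    obj = step(obj, agents)
print("final object position:", obj)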
Proceedings Papers
isal2023, ALIFE 2023: Ghost in the Machine: Proceedings of the 2023 Artificial Life Conference, 139 (July 24–28, 2023). doi: 10.1162/isal_a_00573
Abstract
As AI and robots become more widespread, their social and ethical nature will become more important. Behavioral models that follow us, humans, as social creatures are one promising approach. We assume that human sociality, including the punishment of free-riders that a stable society requires, is deeply related to emotions, especially emotions toward others and empathy. Based on this idea, we incorporated social emotions into active inference, a model of human behavior from cognitive science, and investigated the social behavior of the resulting models in a virtual game.
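
A minimal sketch of how a social-emotion term might enter policy selection in active inference. The game, the payoff table, and the empathy and anger weights below are illustrative assumptions, not the paper's model.

# Illustrative sketch only: other-regarding terms added to policy evaluation.
import numpy as np

policies = ["cooperate", "defect", "punish"]

def my_payoff(policy, other_cooperates):
    # (payoff if the other cooperates, payoff if the other free-rides)
    table = {"cooperate": (3.0, 0.0), "defect": (5.0, 1.0), "punish": (2.0, -1.0)}
    return table[policy][0 if other_cooperates else 1]

def other_payoff(policy, other_cooperates):
    # Predicted payoff of the partner under my policy.
    base = 3.0 if other_cooperates else 5.0
    if policy == "punish" and not other_cooperates:
        base -= 4.0                     # punishment removes the free-rider's gain
    return base

def policy_value(policy, p_other_coop, empathy=0.5, anger=0.8):
    # Negative expected free energy of a policy, augmented with social emotions:
    # empathy values the other's predicted payoff; anger penalises policies that
    # leave free-riding unpunished. Expectation is over my belief about the other.
    value = 0.0
    for coop, p in ((True, p_other_coop), (False, 1.0 - p_other_coop)):
        own = my_payoff(policy, coop)
        social = empathy * other_payoff(policy, coop)
        if not coop:
            social -= anger * (0.0 if policy == "punish" else 1.0)
        value += p * (own + social)
    return value

p_other_coop = 0.3                      # belief that the partner cooperates
G = np.array([policy_value(p, p_other_coop) for p in policies])
probs = np.exp(G) / np.exp(G).sum()     # softmax policy selection, as in active inference
print(dict(zip(policies, probs.round(3))))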
Proceedings Papers
isal2022, ALIFE 2022: The 2022 Conference on Artificial Life, 18 (July 18–22, 2022). doi: 10.1162/isal_a_00496
Abstract
This paper proposes a method for an artificial agent to behave socially by controlling it with active inference extended with an empathy mechanism. Active inference is a Bayesian hypothesis for understanding the mechanism of a biological agent’s cognitive activities and is basically defined for single-agent cases. We extend active inference to the case of an agent surrounded by other agents. These other agents are not only objects of recognition but also sources of social perceptions and actions. An agent controlled with the proposed method infers the others’ expectations toward itself on the basis of an empathy mechanism and tries to act in response to those expectations. Since proper sociality differs by situation and is difficult to define in general, we define sociality as an agent behaving as others expect. Accordingly, the surrounding others serve as teachers from which the agent learns proper sociality, so an agent controlled with the proposed method can learn proper sociality in a variety of situations in a unified manner. We evaluated the proposed method by controlling autonomous mobile robots (AMRs) and assessed sociality from the AMRs’ trajectories. The evaluation results show that an agent controlled with the proposed method behaves more socially than an agent controlled by standard active inference. In the two-agent case, the agent controlled with the proposed method behaved socially, decreasing the other agent’s travel distance by 13.7% and increasing the margin between the agents by 25.8%, even though its own travel distance increased by 8.2%. The results also indicate that an agent controlled with the proposed method behaves more socially when surrounded by altruistic others but less socially when surrounded by selfish others.
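
A minimal sketch of the empathy step under assumed one-dimensional dynamics: the agent simulates the other to infer where the other expects it to be, then adds that inferred expectation to its own preferences when scoring candidate actions. The corridor setup, the margin, the sociality weight, and the candidate actions are illustrative assumptions, not the paper's experimental configuration.

# Illustrative sketch only: empathy term added to expected free energy.
import numpy as np

def simulate_other(other_pos, other_goal, my_pos):
    # Empathy: predict the other's next move toward its goal, then infer the
    # position it would prefer me to occupy (keeping a margin from its path).
    other_next = other_pos + 0.1 * np.sign(other_goal - other_pos)
    preferred_margin = 0.5
    expected_me = other_next + preferred_margin * np.sign(my_pos - other_next)
    return expected_me

def expected_free_energy(my_next, my_goal, expected_me, sociality=1.0):
    # Own preference (reach my goal) plus the inferred expectation the other
    # holds about my position.
    return (my_next - my_goal) ** 2 + sociality * (my_next - expected_me) ** 2

my_pos, my_goal = 0.0, 2.0
other_pos, other_goal = 2.0, 0.0        # the two AMRs approach head-on
actions = np.linspace(-0.2, 0.2, 9)     # candidate displacements

for _ in range(15):
    expected_me = simulate_other(other_pos, other_goal, my_pos)
    G = np.array([expected_free_energy(my_pos + a, my_goal, expected_me) for a in actions])
    my_pos += actions[np.argmin(G)]     # act to satisfy own goal and inferred expectation
    other_pos += 0.1 * np.sign(other_goal - other_pos)
print("final positions:", my_pos, other_pos)

With the sociality term present, the agent yields and keeps a margin from the other's predicted path instead of pushing straight toward its goal, which is the qualitative behavior (larger margin, shorter travel for the other, slightly longer travel for itself) reported in the abstract.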