Wataru Noguchi

1–9 of 9 results
Proceedings Papers
isal2024, ALIFE 2024: Proceedings of the 2024 Artificial Life Conference, 90 (July 22–26, 2024). 10.1162/isal_a_00709
Proceedings Papers
isal2023, ALIFE 2023: Ghost in the Machine: Proceedings of the 2023 Artificial Life Conference, 74 (July 24–28, 2023). 10.1162/isal_a_00687
Abstract
Body representations, which have multimodal receptive fields in the peripersonal space where individuals interact with the environment within reach, show plasticity through tool use and are necessary for the adaptive and skillful use of external tools. In this study, we propose a neural network model that develops a multimodal, body-centered peripersonal space representation together with a plastic body representation through tool use, whereas previous developmental models could explain the plastic body representation only as a non-body-centered one. Our model integrates visual and tactile sensations through a Transformer based on a self-attention mechanism and then reconstructs them conditioned on proprioceptive sensations. By learning from a simulated robot's camera vision, arm touch, and proprioception of camera and arm postures, the model developed a body representation that localizes tactile sensations on a simultaneously developed peripersonal space representation. In particular, learning during tool use makes the body representation plastic, and the peripersonal space representation is shared across modalities because the visual and tactile decoders share part of their modules. As a result, the model obtains a plastic body representation on a body-centered, multimodal peripersonal space representation.
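The listing includes no code, but the integration scheme the abstract describes can be illustrated. A minimal PyTorch sketch, assuming one token per modality and placeholder feature sizes (all module names and dimensions are assumptions, not the paper's):

```python
import torch
import torch.nn as nn

class MultimodalIntegration(nn.Module):
    """Fuse vision, touch, and proprioception tokens with self-attention,
    then decode vision and touch back from the fused representation."""

    def __init__(self, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        # Per-modality encoders projecting raw features to a shared width
        # (256 / 32 / 8 are placeholder feature sizes).
        self.vision_enc = nn.Linear(256, d_model)
        self.touch_enc = nn.Linear(32, d_model)
        self.proprio_enc = nn.Linear(8, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.integrator = nn.TransformerEncoder(layer, n_layers)
        # Shared decoder trunk: the abstract notes the visual and tactile
        # decoders share part of their modules.
        self.shared_dec = nn.Linear(d_model, d_model)
        self.vision_dec = nn.Linear(d_model, 256)
        self.touch_dec = nn.Linear(d_model, 32)

    def forward(self, vision, touch, proprio):
        # One token per modality: (batch, 3, d_model).
        tokens = torch.stack([self.vision_enc(vision),
                              self.touch_enc(touch),
                              self.proprio_enc(proprio)], dim=1)
        fused = self.integrator(tokens).mean(dim=1)
        h = torch.relu(self.shared_dec(fused))
        return self.vision_dec(h), self.touch_dec(h)

model = MultimodalIntegration()
v_hat, t_hat = model(torch.randn(4, 256), torch.randn(4, 32), torch.randn(4, 8))
```

The shared trunk before the two decoder heads mirrors the abstract's point that sharing decoding modules is what lets the peripersonal space representation be shared across modalities.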
Proceedings Papers
isal2023, ALIFE 2023: Ghost in the Machine: Proceedings of the 2023 Artificial Life Conference, 46 (July 24–28, 2023). 10.1162/isal_a_00642
Abstract
The rubber hand illusion is a phenomenon in which a rubber hand is perceived as part of one's own body. Its occurrence is evaluated by subjective report and by proprioceptive drift, a shift in the perceived position of the hand. Proprioceptive drift and the sense of body ownership are assumed to be related; however, some research results have cast doubt on this relationship. We built a deep neural network model that simulates the rubber hand experiment to investigate the principles behind proprioceptive drift. The model was trained on consistent multisensory data and tested with inconsistent data, as in the rubber hand illusion. It successfully predicted proprioceptive drift, suggesting that simple predictive learning mechanisms can account for this phenomenon.
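The train-consistent/test-inconsistent protocol can be sketched in toy form. In the following PyTorch snippet (the task, sizes, and the 0.15 displacement are illustrative assumptions, not the paper's setup), a network trained to report hand position from congruent vision and proprioception is probed with a displaced visual hand:

```python
import torch
import torch.nn as nn

# A predictive network trained on congruent visuo-proprioceptive data;
# "drift" is read out as the shift in its position estimate when the
# visual hand is displaced.
net = nn.Sequential(
    nn.Linear(2 + 2 + 1, 64),  # visual hand xy + proprio xy + touch flag
    nn.Tanh(),
    nn.Linear(64, 2),          # estimated hand position
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(2000):  # consistent data: vision matches proprioception
    pos = torch.rand(64, 2)
    x = torch.cat([pos, pos, torch.ones(64, 1)], dim=1)
    loss = ((net(x) - pos) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Inconsistent test input: the seen (rubber) hand is offset from the real one.
real = torch.tensor([[0.5, 0.5]])
rubber = real + torch.tensor([[0.15, 0.0]])
estimate = net(torch.cat([rubber, real, torch.ones(1, 1)], dim=1))
print(estimate - real)  # expected to shift toward the rubber hand
```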
Proceedings Papers
isal2022, ALIFE 2022: The 2022 Conference on Artificial Life, 46 (July 18–22, 2022). 10.1162/isal_a_00529
Abstract
This study reveals what kinds of temporal and spatial patterns form when two individuals learn in an adversarial relationship. The model was implemented by coupling generative adversarial networks, which are well known in machine learning. For time-series learning, the resulting temporal patterns were chaotic, with a positive Lyapunov exponent, whereas spatial pattern learning produced structured patterns with a higher fractal dimension, not merely higher-entropy complexity.
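The abstract does not specify how the two GANs are coupled. One plausible reading, sketched below in PyTorch purely as an assumption, is that each agent's discriminator treats the rival agent's generated pattern as the "real" data, so the two networks chase each other:

```python
import torch
import torch.nn as nn

def make_agent(dim=16):
    gen = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, dim))
    disc = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))
    return gen, disc

(gen_a, disc_a), (gen_b, disc_b) = make_agent(), make_agent()
opt = {m: torch.optim.Adam(m.parameters(), lr=1e-4)
       for m in (gen_a, disc_a, gen_b, disc_b)}
bce = nn.BCEWithLogitsLoss()
real, fake = torch.ones(32, 1), torch.zeros(32, 1)

for step in range(1000):
    for gen, disc, rival in [(gen_a, disc_a, gen_b), (gen_b, disc_b, gen_a)]:
        z = torch.randn(32, 8)
        # Discriminator update: the rival's output plays the role of data.
        d_loss = (bce(disc(rival(z).detach()), real)
                  + bce(disc(gen(z).detach()), fake))
        opt[disc].zero_grad()
        d_loss.backward()
        opt[disc].step()
        # Generator update: try to pass for the rival's output.
        g_loss = bce(disc(gen(z)), real)
        opt[gen].zero_grad()
        g_loss.backward()
        opt[gen].step()
```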
Proceedings Papers
isal2022, ALIFE 2022: The 2022 Conference on Artificial Life, 62 (July 18–22, 2022). 10.1162/isal_a_00548
Abstract
Chemotaxis is a phenomenon whereby organisms such as amoebae direct their movements in response to environmental gradients, a behaviour often called gradient climbing. It is considered to be the origin of the self-movement that characterizes life forms. In this work, we simulated gradient-climbing behaviour with Neural Cellular Automata (NCA), which were recently proposed as a model of morphogenesis. An NCA is a cellular automaton whose learnable update rule is a deep network; it generates a target cell pattern from a single cell through local interactions among cells. Our model, Gradient Climbing Neural Cellular Automata (GCNCA), adds the ability to move a generated pattern by responding to a gradient injected into its cell states.
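The injection mechanism can be illustrated with a minimal NCA step. In this PyTorch sketch (the channel count, perception filters, and grid size are assumptions), one cell-state channel is overwritten with the environmental gradient before each local update:

```python
import torch
import torch.nn as nn

CH = 16  # cell-state channels; the last one carries the injected gradient

# Depthwise 3x3 "perception" followed by a 1x1 update network, as in
# standard NCA; the learned kernels here stand in for the Sobel-style
# filters often used.
perceive = nn.Conv2d(CH, CH * 3, 3, padding=1, groups=CH, bias=False)
update = nn.Sequential(nn.Conv2d(CH * 3, 64, 1), nn.ReLU(),
                       nn.Conv2d(64, CH, 1))

def nca_step(state, gradient):
    # Overwrite the last channel with the environmental gradient so the
    # local update rule can react to it (the gradient-climbing signal).
    state = torch.cat([state[:, :-1], gradient], dim=1)
    delta = update(perceive(state))
    # Stochastic per-cell update mask, as in standard NCA.
    mask = (torch.rand_like(state[:, :1]) < 0.5).float()
    return state + mask * delta

state = torch.zeros(1, CH, 32, 32)
state[:, :, 16, 16] = 1.0  # single seed cell
gradient = torch.linspace(0, 1, 32).view(1, 1, 1, 32).expand(1, 1, 32, 32)
for _ in range(20):
    state = nca_step(state, gradient)
```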
Proceedings Papers
isal2022, ALIFE 2022: The 2022 Conference on Artificial Life, 51 (July 18–22, 2022). 10.1162/isal_a_00535
Abstract
Several studies deal with the bottom-up acquisition of concepts from experiences in physical space, but few address the bidirectional interaction between symbolic operations and experiences in the physical world. Previous work showed that a neural network with a shared module can generate a bottom-up spatial representation of the external world without being trained on explicit signals of the spatial structure. Furthermore, the module can understand an external map as a symbol grounded in its spatial representation, and top-down navigation can be performed using the map. In this study, we extended this model and proposed a simulation model that unifies the emergence of a number representation, the learning of symbol manipulation on that representation, and the top-down application of symbol manipulation to the physical world. Our results show that what is learned through symbol manipulation can be applied to physical-world prediction, and that our proposed model succeeds in grounding symbol manipulation in physical experiences.
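The abstract leaves the shared-module architecture abstract, but the core idea, one representation serving both a bottom-up prediction path and a top-down symbolic path, can be sketched. Everything in this PyTorch snippet (the GRU, the two heads, the sizes) is an illustrative assumption:

```python
import torch
import torch.nn as nn

# One shared recurrent module feeds both a bottom-up head (predicting the
# next physical observation) and a top-down head (producing the result of
# a symbolic operation), so the symbol space stays tied to experience.
shared = nn.GRU(input_size=10, hidden_size=64, batch_first=True)
physical_head = nn.Linear(64, 10)  # next-observation prediction
symbol_head = nn.Linear(64, 10)    # symbolic-operation result

obs = torch.randn(4, 5, 10)              # a short observation sequence
h_seq, h_last = shared(obs)
next_obs = physical_head(h_seq[:, -1])   # bottom-up path
sym_out = symbol_head(h_last.squeeze(0)) # top-down path
# Training both heads through the same GRU is what grounds the symbolic
# path in physical experience in this sketch.
```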
Proceedings Papers
isal2019, ALIFE 2019: The 2019 Conference on Artificial Life, 531–532 (July 29–August 2, 2019). 10.1162/isal_a_00216
Proceedings Papers
alife2018, ALIFE 2018: The 2018 Conference on Artificial Life, 147–154 (July 23–27, 2018). 10.1162/isal_a_00035
Abstract
Animals develop spatial recognition through visuomotor integrated experiences. In nature, animals change their behavior during development while developing spatial recognition. The developmental process of spatial recognition has been studied previously; however, it is unclear how behavior during development affects it. To investigate the effect of movement pattern (behavior) on spatial recognition, we simulated its development using controlled behaviors. Hierarchical recurrent neural networks (HRNNs) with multiple time scales were trained to predict the visuomotor sequences of a simulated mobile agent, and the spatial recognition they developed was compared across different degrees of randomness in the agent's movement. The experimental results show that spatial recognition developed not when the randomness of movement was too small or too large, but at intermediate randomness.
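The multiple-time-scale mechanism can be sketched as leaky-integrator RNN layers with layer-specific time constants, in the spirit of multiple-timescale RNNs. A minimal PyTorch version (the time constants, layer sizes, and input width are assumptions):

```python
import torch
import torch.nn as nn

class MultiTimescaleRNN(nn.Module):
    """Stacked RNN layers with leaky-integrator states: a larger time
    constant tau makes a layer's units change more slowly."""

    def __init__(self, in_dim=12, sizes=(64, 32, 16), taus=(2.0, 8.0, 32.0)):
        super().__init__()
        self.taus = taus
        dims = [in_dim] + list(sizes)
        self.cells = nn.ModuleList(
            nn.RNNCell(dims[i], dims[i + 1]) for i in range(len(sizes)))

    def forward(self, x, hs):
        new_hs, inp = [], x
        for cell, h, tau in zip(self.cells, hs, self.taus):
            # Leak toward the new activation at rate 1/tau.
            h = (1 - 1 / tau) * h + (1 / tau) * cell(inp, h)
            new_hs.append(h)
            inp = h  # feed upward: fast -> medium -> slow
        return new_hs

rnn = MultiTimescaleRNN()
hs = [torch.zeros(1, s) for s in (64, 32, 16)]
for t in range(10):
    hs = rnn(torch.randn(1, 12), hs)  # visuomotor input would go here
```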
Proceedings Papers
ecal2017, ECAL 2017, the Fourteenth European Conference on Artificial Life, 324–331 (September 4–8, 2017). 10.1162/isal_a_055
Abstract
Spatial recognition is the ability to recognize the environment and generate goal-directed behaviors, such as navigation. Animals develop spatial recognition by integrating their subjective visual and motion experiences. We propose a model consisting of hierarchical recurrent neural networks with multiple time scales (fast, medium, and slow) that shows how spatial recognition can be obtained from visual and motion experiences alone. For high-dimensional visual sequences, a convolutional neural network (CNN) was used to recognize and generate vision. The model, applied to a simulated mobile agent, was trained to predict future visual and motion experiences and to generate goal-directed sequences toward destinations indicated by photographs. Through this training, the model was able to achieve spatial recognition, predict future experiences, and generate goal-directed sequences by integrating subjective visual and motion experiences. An internal-state analysis showed that the internal states of the slow recurrent networks self-organized according to the agent's position. Furthermore, this representation was obtained efficiently, as it was independent of the prediction and generation processes.
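The prediction loop the abstract describes, CNN-encoded vision plus motion feeding a recurrent predictor, can be sketched as follows in PyTorch (all layer sizes, and the closed-loop decoding elided here, are assumptions):

```python
import torch
import torch.nn as nn

# CNN encodes each camera frame to a feature vector; a recurrent cell
# predicts the next visual feature and motion from the current pair.
encoder = nn.Sequential(
    nn.Conv2d(3, 16, 4, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
    nn.Flatten(), nn.LazyLinear(64))
rnn = nn.GRUCell(64 + 2, 128)   # visual feature + 2-D motion command
predict_vis = nn.Linear(128, 64)
predict_mot = nn.Linear(128, 2)

h = torch.zeros(1, 128)
image, motion = torch.randn(1, 3, 32, 32), torch.randn(1, 2)
for t in range(5):
    feat = encoder(image)
    h = rnn(torch.cat([feat, motion], dim=1), h)
    next_feat, motion = predict_vis(h), predict_mot(h)
    # The full model would decode next_feat back to an image (and use a
    # goal photograph to drive generation); random frames stand in here.
    image = torch.randn(1, 3, 32, 32)
```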