Lana Sinapayen
Proceedings Papers
isal2024, ALIFE 2024: Proceedings of the 2024 Artificial Life Conference, 68 (July 22–26, 2024). 10.1162/isal_a_00798
Abstract
ALife is primed to address the biggest challenges in astrobiology by simulating systems which capture the most general and fundamental features of living systems. One such challenge is how to detect life outside of the solar system, especially without making strong assumptions about how life would manifest and interact with its planetary environment. Here we explore an ALife model meant to overcome this problem by focusing on what life may do, rather than what life may be: life can spread between planetary systems (panspermia) and can modify planetary characteristics (terraformation). Our model shows that as life propagates across the galaxy, correlations emerge between planetary characteristics and location, and these correlations can function as a biosignature. This biosignature is agnostic because it is independent of strong assumptions about any particular instantiation of life or planetary characteristic. We demonstrate (and evaluate) a way to prioritize specific planets for further observation based on their potential for containing life. We consider obstacles that must be overcome to practically implement our approach, including identifying specific ways in which better understanding astrophysical and planetary processes would improve our ability to detect life.
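As a rough illustration of the idea, the sketch below is a toy simulation in the spirit of the model described in the abstract. Every detail here (a 2D "galaxy", a single scalar planetary characteristic, the spread radius and terraformation rate) is an assumption chosen for brevity, not the paper's actual setup: life spreads to nearby planets, nudges the characteristic of its hosts, and a correlation between characteristic and location emerges that could serve as the agnostic biosignature.

```python
# Toy panspermia + terraformation sketch (illustrative assumptions, not the paper's model).
import numpy as np

rng = np.random.default_rng(0)
n_planets = 500
pos = rng.uniform(0.0, 1.0, size=(n_planets, 2))   # planet locations in a flat "galaxy"
trait = rng.uniform(0.0, 1.0, size=n_planets)      # one abstract planetary characteristic
alive = np.zeros(n_planets, dtype=bool)
alive[rng.integers(n_planets)] = True              # seed life on one planet

spread_radius, terraform_target, terraform_rate = 0.08, 0.9, 0.05

for _ in range(200):
    # Terraformation: inhabited planets drift toward a preferred characteristic.
    trait[alive] += terraform_rate * (terraform_target - trait[alive])
    # Panspermia: life can spread to uninhabited planets near an inhabited one.
    d = np.linalg.norm(pos[:, None, :] - pos[None, alive, :], axis=-1).min(axis=1)
    candidates = (d < spread_radius) & ~alive
    alive |= candidates & (rng.random(n_planets) < 0.5)

# The emerging "agnostic biosignature": planetary characteristics become
# correlated with location (here, distance to the nearest inhabited planet).
d_alive = np.linalg.norm(pos[:, None, :] - pos[None, alive, :], axis=-1).min(axis=1)
print("corr(characteristic, distance to inhabited region):",
      np.corrcoef(trait, d_alive)[0, 1])
```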
Proceedings Papers
isal2023, ALIFE 2023: Ghost in the Machine: Proceedings of the 2023 Artificial Life Conference, 14 (July 24–28, 2023). 10.1162/isal_a_00591
Abstract
This paper reports on patterns exhibiting self-replication with spontaneous, inheritable mutations and exponential genetic drift in Neural Cellular Automata. Despite the models not being explicitly trained for mutation or inheritability, the descendant patterns exponentially drift away from ancestral patterns, even when the automaton is deterministic. While this is far from being the first instance of evolutionary dynamics in a cellular automaton, it is the first to do so by exploiting the power and convenience of Neural Cellular Automata, arguably increasing the space of variations and the opportunity for Open Ended Evolution.
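For readers unfamiliar with Neural Cellular Automata, the sketch below shows the generic update rule they rely on (perception filters followed by a small per-cell network applied as a residual update) and measures how far a pattern drifts under repeated deterministic updates. The channel count, Sobel-style filters, and random weights are stand-in assumptions; this is not the trained models from the paper and does not reproduce their self-replication results.

```python
# Generic Neural Cellular Automaton update step (a sketch with assumed parameters).
import numpy as np

def perceive(state, kernels):
    """Convolve every channel of `state` (H, W, C) with each 3x3 kernel (toroidal grid)."""
    h, w, _ = state.shape
    padded = np.pad(state, ((1, 1), (1, 1), (0, 0)), mode="wrap")
    feats = []
    for k in kernels:
        out = np.zeros_like(state)
        for dy in range(3):
            for dx in range(3):
                out += k[dy, dx] * padded[dy:dy + h, dx:dx + w, :]
        feats.append(out)
    return np.concatenate(feats, axis=-1)            # (H, W, C * len(kernels))

def nca_step(state, w1, w2, kernels):
    """One deterministic update: perception -> per-cell two-layer net -> residual change."""
    x = perceive(state, kernels)
    hidden = np.maximum(x @ w1, 0.0)                 # ReLU
    return state + hidden @ w2

# Identity + Sobel filters are the usual NCA perception choice (an assumption here).
kernels = [
    np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]], float),
    np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float) / 8.0,
    np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float) / 8.0,
]
rng = np.random.default_rng(0)
channels, hidden_size = 8, 32
w1 = rng.normal(0.0, 0.02, (channels * len(kernels), hidden_size))
w2 = rng.normal(0.0, 0.02, (hidden_size, channels))

state = rng.random((32, 32, channels))
ancestor = state.copy()
for _ in range(20):
    state = nca_step(state, w1, w2, kernels)
print("drift from the initial pattern:", np.linalg.norm(state - ancestor))
```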
Proceedings Papers
isal2019, ALIFE 2019: The 2019 Conference on Artificial Life, 454–460 (July 29–August 2, 2019). 10.1162/isal_a_00201
Abstract
Is cognition a collection of loosely connected functions tuned to different tasks, or is it more like a general learning algorithm? If such a hypothetical general algorithm did exist, tuned to our world, could it adapt seamlessly to a world with different laws of nature? We consider the theory that predictive coding is such a general rule, and falsify it for one specific neural architecture known for high-performance predictions on natural videos and for its replication of human visual illusions: PredNet. Our results show that PredNet's high performance generalizes, without retraining, to a completely different natural video dataset. Yet PredNet cannot be trained to reach even mediocre accuracy on an artificial video dataset created with the rules of the Game of Life (GoL). We also find that a submodule of PredNet, a Convolutional Neural Network trained alone, has excellent accuracy on the GoL while having mediocre accuracy on natural videos, showing that PredNet's architecture itself might be responsible for both the high performance on natural videos and the loss of performance on the GoL. Just as humans cannot predict the dynamics of the GoL, our results suggest that there could be a trade-off between high performance on sensory inputs governed by different sets of rules.
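The artificial dataset mentioned in the abstract is straightforward to generate; the sketch below produces Game of Life "video" frames of that kind. The grid size, initial density, and sequence length are assumptions, and PredNet itself is not included here.

```python
# Generate a Game of Life frame sequence as an artificial video (parameters are assumptions).
import numpy as np

def gol_step(grid):
    """One Game of Life update on a toroidal boolean grid."""
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    return (neighbors == 3) | (grid & (neighbors == 2))

rng = np.random.default_rng(0)
frames = [rng.random((64, 64)) < 0.3]          # random initial frame, ~30% of cells alive
for _ in range(19):
    frames.append(gol_step(frames[-1]))
video = np.stack(frames).astype(np.float32)    # (20, 64, 64) binary frames
# A frame predictor such as PredNet would be trained to map video[:-1] onto video[1:].
```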
Proceedings Papers
alife2018, ALIFE 2018: The 2018 Conference on Artificial Life, 163–170 (July 23–27, 2018). 10.1162/isal_a_00037
Abstract
Our previous study showed that embodied cultured neural networks and spiking neural networks with spike-timing dependent plasticity (STDP) can learn a behavior that avoids external stimulation. In a sense, the embodied neural network can autonomously change its activity to avoid external stimuli. In this paper, as a result of our experiments using cultured neurons, we find that there is a second property allowing the network to avoid stimulation: if the network cannot learn to avoid the external stimuli, it tends to decrease the stimulus-evoked spikes, as if to ignore the input neurons. We also show that this behavior is reproduced by spiking neural networks with asymmetric STDP. We consider that these properties can be regarded as an autonomous regulation of self and non-self by the network.
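To make the plasticity rule concrete, here is the standard pair-based asymmetric STDP weight update of the kind referred to above; the amplitudes and time constants are placeholder assumptions, not the values used in the paper.

```python
# Pair-based asymmetric STDP (generic textbook form; parameter values are assumptions).
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for a single pre/post spike pair, as a function of their timing."""
    dt = t_post - t_pre
    if dt > 0:    # pre fires before post: potentiation
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    if dt < 0:    # post fires before pre: depression
        return -A_MINUS * np.exp(dt / TAU_MINUS)
    return 0.0

# Under stimulation avoidance, synapses whose pre spikes (driven by the external
# stimulus) reliably precede the post spikes that end the stimulation are strengthened.
print(stdp_dw(t_pre=10.0, t_post=15.0))   # > 0: causal pairing, weight grows
print(stdp_dw(t_pre=15.0, t_post=10.0))   # < 0: anti-causal pairing, weight shrinks
```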
Proceedings Papers
ecal2017, ECAL 2017, the Fourteenth European Conference on Artificial Life, 380–387 (September 4–8, 2017). 10.1162/isal_a_065
Abstract
We propose the Epsilon Network (ε-network), a network that automatically adjusts its size to the complexity of a stream of data while performing online learning. The network optimises its topology during training, simultaneously adding and removing neurons and weights: it adds neurons where they can raise performance, and removes redundant neurons while preserving performance. The network is a neural realisation of the ε-machine devised by Crutchfield and Young (1989). In this paper our network is trained to predict video frames; we evaluate it on simple, complex, and noisy videos and show that the final number of neurons is a good indicator of the complexity and predictability of the data stream.
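The grow-and-prune idea can be sketched in a few lines. The skeleton below adds a hidden neuron when the online error stays high and removes neurons whose outgoing weights have decayed; the thresholds, initialisation, and single-hidden-layer form are assumptions for illustration, not the ε-network's actual criteria.

```python
# Skeleton of an online grow-and-prune regressor (illustrative, not the ε-network itself).
import numpy as np

rng = np.random.default_rng(0)

class GrowPruneRegressor:
    def __init__(self, n_in, n_out, n_hidden=4, lr=0.01):
        self.w1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.w2 = rng.normal(0.0, 0.1, (n_hidden, n_out))
        self.lr = lr

    def forward(self, x):
        self.h = np.tanh(x @ self.w1)
        return self.h @ self.w2

    def online_step(self, x, y, grow_threshold=0.5, prune_threshold=1e-2):
        err = self.forward(x) - y
        # Plain gradient step on the squared error.
        self.w1 -= self.lr * np.outer(x, (err @ self.w2.T) * (1.0 - self.h ** 2))
        self.w2 -= self.lr * np.outer(self.h, err)
        # Grow: add a neuron while the instantaneous error stays high.
        if np.mean(err ** 2) > grow_threshold:
            self.w1 = np.hstack([self.w1, rng.normal(0.0, 0.1, (self.w1.shape[0], 1))])
            self.w2 = np.vstack([self.w2, rng.normal(0.0, 0.1, (1, self.w2.shape[1]))])
        # Prune: drop neurons whose outgoing weights have become negligible.
        keep = np.abs(self.w2).max(axis=1) > prune_threshold
        self.w1, self.w2 = self.w1[:, keep], self.w2[keep]
        return float(np.mean(err ** 2))

model = GrowPruneRegressor(n_in=3, n_out=1)
for _ in range(2000):
    x = rng.normal(size=3)
    model.online_step(x, np.array([np.sin(x.sum())]))   # a simple stream to predict
print("final number of hidden neurons:", model.w1.shape[1])
```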
Proceedings Papers
ecal2017, ECAL 2017, the Fourteenth European Conference on Artificial Life, 275–282 (September 4–8, 2017). 10.1162/isal_a_048
Abstract
Spiking neural networks with spike-timing dependent plasticity (STDP) can spontaneously learn to avoid external stimulation. This principle is called "Learning by Stimulation Avoidance" (LSA) and can be used to reproduce learning experiments on cultured biological neural networks. LSA has promising potential, but its applications and limitations have not been studied extensively. This paper focuses on the scalability of LSA to large networks and shows that LSA works well in small networks (100 neurons) and can be scaled up to networks of approximately 3,000 neurons.
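As a very rough illustration of what such a scalability study involves, the sketch below runs an LSA-style protocol (a stochastic spiking network with trace-based asymmetric STDP, where strong activity in a designated output group switches the external stimulation off) at several network sizes and reports how much time each network spends under stimulation. The neuron model, wiring, and all parameters are assumptions chosen for brevity; the sketch is not expected to reproduce the paper's quantitative results.

```python
# LSA-style protocol swept over network sizes (illustrative assumptions throughout).
import numpy as np

def lsa_trial(n_neurons, steps=1000, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.uniform(0.0, 2.0 / n_neurons, (n_neurons, n_neurons))  # random excitatory weights
    np.fill_diagonal(w, 0.0)
    spikes = np.zeros(n_neurons)
    pre_trace = np.zeros(n_neurons)
    post_trace = np.zeros(n_neurons)
    n_group = max(1, n_neurons // 10)
    inputs = slice(0, n_group)                   # neurons receiving the external stimulation
    outputs = slice(n_group, 2 * n_group)        # neurons whose activity removes it
    a_plus, a_minus, decay = 0.002, 0.0024, 0.9  # asymmetric STDP parameters (assumed)
    stim_on, stimulated_steps = True, 0

    for _ in range(steps):
        drive = w.T @ spikes
        if stim_on:
            drive[inputs] += 2.0
            stimulated_steps += 1
        p = 1.0 / (1.0 + np.exp(-(drive - 1.0)))          # stochastic spiking probability
        new_spikes = (rng.random(n_neurons) < p).astype(float)
        # Trace-based asymmetric STDP: w[i, j] is the synapse from neuron i to neuron j.
        pre_trace = decay * pre_trace + spikes
        post_trace = decay * post_trace + new_spikes
        w += a_plus * np.outer(pre_trace, new_spikes) - a_minus * np.outer(spikes, post_trace)
        np.clip(w, 0.0, 5.0 / n_neurons, out=w)
        # LSA contingency: strong output-group activity switches the stimulation off.
        stim_on = new_spikes[outputs].sum() < 0.5 * n_group
        spikes = new_spikes
    return stimulated_steps / steps               # lower = more avoidance

for n in (100, 300, 1000):
    print(n, "neurons -> fraction of time under stimulation:", round(lsa_trial(n), 2))
```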
Proceedings Papers
ecal2015, ECAL 2015: the 13th European Conference on Artificial Life, 373–380 (July 20–24, 2015). 10.1162/978-0-262-33027-5-ch067
Proceedings Papers
ecal2015, ECAL 2015: the 13th European Conference on Artificial Life, 175–182 (July 20–24, 2015). 10.1162/978-0-262-33027-5-ch037