Jakob Foerster
Proceedings Papers
ALIFE 2024: Proceedings of the 2024 Artificial Life Conference (isal2024), 47 (July 22–26, 2024). doi:10.1162/isal_a_00770
Abstract
Human intelligence emerged through the process of natural selection and evolution on Earth. We investigate what it would take to re-create this process in silico. While past work has often focused on low-level processes (such as simulating physics or chemistry), we instead take a more targeted approach, aiming to evolve agents that can accumulate open-ended culture and technologies across generations. Towards this, we present JaxLife: an artificial life simulator in which embodied agents, parameterized by deep neural networks, must learn to survive in an expressive world containing programmable systems. First, we describe the environment and show that it can facilitate meaningful Turing-complete computation. We then analyze the emergent behavior of the evolved agents, such as rudimentary communication protocols, agriculture, and tool use. Finally, we investigate how complexity scales with the amount of compute used. We believe JaxLife takes a step towards studying evolved behavior in more open-ended simulations.
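As a rough illustration of the architecture the abstract describes (agents parameterized by neural networks, stepped in parallel), the sketch below shows how a population of small policy networks might be initialised and evaluated with jax.vmap. All names here (init_agent, policy, the dimensions) are hypothetical, not JaxLife's actual API.

```python
# Minimal sketch, assuming hypothetical names: a population of embodied
# agents, each parameterised by a small neural network, acting in parallel.
import jax
import jax.numpy as jnp

N_AGENTS, OBS_DIM, HID, N_ACTIONS = 64, 8, 16, 5

def init_agent(key):
    k1, k2 = jax.random.split(key)
    return {
        "w1": jax.random.normal(k1, (OBS_DIM, HID)) * 0.1,
        "w2": jax.random.normal(k2, (HID, N_ACTIONS)) * 0.1,
    }

def policy(params, obs):
    h = jnp.tanh(obs @ params["w1"])
    return jnp.argmax(h @ params["w2"])  # greedy action from the agent's network

key = jax.random.PRNGKey(0)
keys = jax.random.split(key, N_AGENTS)
pop = jax.vmap(init_agent)(keys)                   # a population of network parameters
obs = jax.random.normal(key, (N_AGENTS, OBS_DIM))  # stand-in local observations
actions = jax.vmap(policy)(pop, obs)               # all agents act in one vectorised step
```

Vectorising over the whole population like this is what makes large JAX-based simulators tractable on accelerators; the actual environment dynamics and selection pressure are, of course, where the paper's substance lies.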
Proceedings Papers
ALIFE 2023: Ghost in the Machine: Proceedings of the 2023 Artificial Life Conference (isal2023), 67 (July 24–28, 2023). doi:10.1162/isal_a_00674
Abstract
Meta-learning, the notion of learning to learn, enables learning systems to quickly and flexibly solve new tasks. This usually involves defining a set of outer-loop meta-parameters that are then used to update a set of inner-loop parameters. Most meta-learning approaches use complicated and computationally expensive bi-level optimisation schemes to update these meta-parameters. Ideally, systems should perform multiple orders of meta-learning, i.e., learn to learn to learn and so on, to accelerate their own learning. Unfortunately, standard meta-learning techniques are often inappropriate for these higher-order meta-parameters because the meta-optimisation procedure becomes too complicated or unstable. Inspired by the higher-order meta-learning we observe in real-world evolution, we show that simple population-based evolution implicitly optimises arbitrarily high-order meta-parameters. First, we theoretically prove and empirically show that population-based evolution implicitly optimises meta-parameters of arbitrarily high order in a simple setting. We then introduce a minimal self-referential parameterisation, which in principle enables arbitrary-order meta-learning. Finally, we show that higher-order meta-learning improves performance on time-series forecasting tasks.
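The core idea, that selection on fitness alone implicitly tunes meta-parameters of any order, can be illustrated with a toy example. In the sketch below (illustrative only, not the paper's code), each genome carries its own mutation scale (a first-order meta-parameter) and a mutation scale for that scale (second-order); truncation selection only ever looks at fitness, yet the meta-levels adapt because good values hitchhike with fit parameters.

```python
# Minimal sketch of population-based evolution implicitly optimising
# higher-order meta-parameters (all names and constants are illustrative).
import jax
import jax.numpy as jnp

POP, DIM, ELITE = 256, 4, 64
target = jnp.ones(DIM)

def fitness(theta):
    return -jnp.sum((theta - target) ** 2)  # higher is better

def mutate(key, genome):
    theta, log_sigma, log_meta = genome
    k1, k2, k3 = jax.random.split(key, 3)
    # parameters mutate at scale sigma; sigma mutates at scale meta;
    # meta mutates at a fixed scale (the hierarchy could be extended further)
    theta = theta + jnp.exp(log_sigma) * jax.random.normal(k1, theta.shape)
    log_sigma = log_sigma + jnp.exp(log_meta) * jax.random.normal(k2)
    log_meta = log_meta + 0.05 * jax.random.normal(k3)
    return (theta, log_sigma, log_meta)

key = jax.random.PRNGKey(0)
pop = (jnp.zeros((POP, DIM)), jnp.zeros(POP), jnp.full(POP, -2.0))
for _ in range(200):
    fit = jax.vmap(fitness)(pop[0])
    elite = jnp.argsort(-fit)[:ELITE]  # truncation selection on fitness only
    parents = jax.tree_util.tree_map(
        lambda x: x[elite][jnp.arange(POP) % ELITE], pop)
    key, sub = jax.random.split(key)
    pop = jax.vmap(mutate)(jax.random.split(sub, POP), parents)
```

Note that fitness never inspects log_sigma or log_meta directly; their distributions shift only because lineages with well-adapted mutation scales produce fitter offspring, which is the implicit higher-order optimisation the abstract refers to.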