Abstract
AI is currently making a transformational impact through advances in Large Language Models (LLMs) such as ChatGPT. Here we summarize the arguments of Harvey (2024) relating these advances to AI’s adoption of techniques grounded in Artificial Life (ALife) and Cybernetics methodology. We argue that these borrowings by AI are so far limited to the development of problem-solving tools for humans to use, ignoring wider aspects of cognition, such as agency and motivation, that ALife studies can address. One fear expressed by some (e.g. Hinton 2023) is that the prospect of machines being ‘more intelligent’ than humans poses a new ‘RTO-Existential Threat’ to humans (RTO: ‘the Robots may Take Over’). We argue that such concerns are currently misplaced, since these robots and AI machines have no self-derived agency or motivations. Robots are not legally responsible agents; all robot and AI actions can and should be legally attributed to the human developers of such tools. Cooperative human-robot symbiosis is more likely than the apocalyptic vision. Assessment of the (very real) non-RTO societal risks can be aided by studies of agency and motivation. Robots should be ‘Our Friends’, not ‘The Enemy’. Life is not a zero-sum game.