Simon McGregor
Proceedings Papers
isal2023, ALIFE 2023: Ghost in the Machine: Proceedings of the 2023 Artificial Life Conference, 133, (July 24–28, 2023) 10.1162/isal_a_00597
Abstract
This article argues that the application of an embodied cognitive science perspective does not require us to distinguish between systems that have a physically tangible body and systems that do not. I consider the specific case of ChatGPT, a large language model specialised for interactive dialogue, and argue that ChatGPT can potentially be seen as embodied, albeit with a very unfamiliar type of embodiment. I propose that we should explicitly distinguish between two notions of physicality: on the one hand, whether a system’s body is tangible or not (roughly, whether we imagine it as providing us with tactile-kinesthetic affordances); on the other hand, whether a system is physically situated or not (i.e. whether or not it interacts physically with the rest of the Universe). I discuss whether or not tangibility should be accorded any major theoretical weight within cognitive science, by considering several theoretical issues relating to embodiment: six from the previous literature, and four that I raise myself. My conclusion is that (at least in regard to these aspects of embodied cognition) there is no good theoretical reason to treat tangible bodies as a prerequisite for embodied cognition. Hence, I argue that an interactive language model like ChatGPT can, in principle, perceive the world and interact with it just as physically as a squid or robot does (albeit less tangibly) through text channels, which serve as its physical sensors and actuators. Whether or not we should understand it as doing so depends on its behaviour, not on its substrate.
Proceedings Papers
ecal2017, ECAL 2017, the Fourteenth European Conference on Artificial Life, 283–289, (September 4–8, 2017) 10.1162/isal_a_049
Abstract
Discourse in the representation debate within cognitive science employs a problematic model of explanations, which needs to be challenged. What makes a good explanation, even on a philosophically realist interpretation, depends not only on the agreed facts regarding a phenomenon, but also intrinsically on the purposes of the theorist. While one might expect traditional cognitivists to be oblivious to these considerations, it is ironic that outspoken proponents of enactivism ignore them. It is also open for debate whether there is any objective fact of the matter regarding whether a given system makes use of representational content or not. Intentional systems theory is compatible with anti-realism regarding representations, as are philosophies of science such as constructive empiricism. I argue that, due to the diversity of the discipline, different theorists within cognitive science have legitimately different explanatory needs, and that this merits both nonrepresentational and representational explanations of the very same system.
Proceedings Papers
alife2014, ALIFE 14: The Fourteenth International Conference on the Synthesis and Simulation of Living Systems, 498–505, (July 30–August 2, 2014) 10.1162/978-0-262-32621-6-ch080