According to much of theoretical linguistics, a fair amount of our linguistic knowledge is innate. One of the best-known (and most contested) kinds of evidence for a large innate endowment is the argument from the poverty of the stimulus (APS). An APS obtains when human learners systematically make inductive leaps that are not warranted by the linguistic evidence. A weakness of the APS has been that it is very hard to assess what is warranted by the linguistic evidence. Current artificial neural networks appear to offer a handle on this challenge, and a growing literature has started to explore the potential implications of such models for questions of innateness. We focus on Wilcox, Futrell, and Levy's (2024; WFL) use of several different networks to examine the available evidence as it pertains to wh-movement, including island constraints. WFL conclude that the (presumably linguistically neutral) networks acquire an adequate knowledge of wh-movement, thus undermining an APS in this domain. We examine the evidence further, looking in particular at parasitic gaps and across-the-board movement, and argue that current networks do not succeed in acquiring or even adequately approximating wh-movement from training corpora roughly the size of the linguistic input that children receive. We also show that the performance of one of the models improves considerably when the training data are artificially enriched with instances of parasitic gaps and across-the-board movement. This finding suggests, albeit tentatively, that the networks' failure when trained on natural, unenriched corpora is due to the insufficient richness of the linguistic input, thus supporting the APS.
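The evaluation paradigm alluded to here (standard in this literature) compares a model's surprisal on minimal pairs: a network that has learned a filler-gap dependency should assign lower surprisal to the grammatical member of the pair. A minimal toy sketch of that comparison follows; the add-one-smoothed bigram "model" and the tiny corpus are illustrative stand-ins for a trained neural language model, not the setup used by WFL or by the present authors.

```python
import math
from collections import Counter, defaultdict

# Toy illustration of the minimal-pair surprisal paradigm used to probe
# language models for knowledge of filler-gap dependencies. A real study
# would use a trained neural LM; the bigram model and corpus below are
# stand-ins chosen only to keep the example self-contained.

def train_bigram(corpus):
    """Count bigrams over whitespace-tokenized sentences padded with <s>/</s>."""
    bigrams = defaultdict(Counter)
    for sent in corpus:
        toks = ["<s>"] + sent.split() + ["</s>"]
        for prev, cur in zip(toks, toks[1:]):
            bigrams[prev][cur] += 1
    return bigrams

def surprisal(bigrams, sentence, vocab_size, alpha=1.0):
    """Total add-alpha-smoothed surprisal (in bits) of a sentence."""
    toks = ["<s>"] + sentence.split() + ["</s>"]
    total = 0.0
    for prev, cur in zip(toks, toks[1:]):
        counts = bigrams[prev]
        p = (counts[cur] + alpha) / (sum(counts.values()) + alpha * vocab_size)
        total += -math.log2(p)
    return total

corpus = [
    "what did you read",
    "what did you see",
    "you read the book",
]
vocab = {w for s in corpus for w in s.split()} | {"<s>", "</s>"}
model = train_bigram(corpus)

# A (loose) minimal pair: an attested wh-question order vs. a scrambled variant.
good = surprisal(model, "what did you read", len(vocab))
bad = surprisal(model, "did what read you", len(vocab))
print(good < bad)  # prints True: the attested order is less surprising
```

In actual studies the contrast is far subtler (e.g. a licensed vs. an unlicensed gap inside a parasitic-gap or across-the-board configuration), and surprisal is typically measured at the critical region rather than summed over the whole sentence.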
August 30 2024
Large Language Models and the Argument from the Poverty of the Stimulus
In Special Collection: CogNet
Nur Lan, Laboratoire de Sciences Cognitives et Psycholinguistique, Ecole Normale Supérieure; and Department of Linguistics, Tel Aviv University, nur.lan@ens.psl.eu
Emmanuel Chemla, Ecole Normale Supérieure, EHESS, PSL University, CNRS, emmanuel.chemla@ens.psl.eu
Roni Katzir, Department of Linguistics and Sagol School of Neuroscience, Tel Aviv University, rkatzir@tauex.tau.ac.il
Online ISSN: 1530-9150
Print ISSN: 0024-3892
© 2024 by the Massachusetts Institute of Technology
Linguistic Inquiry 1–28.
Citation
Nur Lan, Emmanuel Chemla, Roni Katzir; Large Language Models and the Argument from the Poverty of the Stimulus. Linguistic Inquiry 2024; doi: https://doi.org/10.1162/ling_a_00533