Publication Date
| Period | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 2 |
| Since 2022 (last 5 years) | 5 |
| Since 2017 (last 10 years) | 24 |
| Since 2007 (last 20 years) | 90 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Language Processing | 81 |
| Second Language Learning | 30 |
| Language Acquisition | 28 |
| Models | 27 |
| Language Research | 22 |
| Linguistic Theory | 22 |
| Grammar | 20 |
| Bilingualism | 16 |
| Syntax | 15 |
| Psycholinguistics | 13 |
| Semantics | 13 |
Author
| Author | Count |
| --- | --- |
| McNamara, Danielle S. | 2 |
| Paivio, Allan | 2 |
| Paradis, Michel | 2 |
| Rayner, Keith | 2 |
| Aaron Stoller | 1 |
| Adger, David | 1 |
| Allen, Mark D. | 1 |
| Altarriba, Jeanette | 1 |
| Altmann, Gerry T.M. | 1 |
| Amaral, Luiz | 1 |
| Amaral, Luiz A. | 1 |
Publication Type
| Type | Count |
| --- | --- |
| Opinion Papers | 90 |
| Journal Articles | 89 |
| Reports - Evaluative | 26 |
| Reports - Descriptive | 15 |
Education Level
| Level | Count |
| --- | --- |
| Higher Education | 4 |
| Adult Education | 1 |
| Elementary Secondary Education | 1 |
| Postsecondary Education | 1 |
Aaron Stoller; Chris Schacht – Education and Culture, 2024
The emergence of Large Language Models has exposed composition studies' long-standing commitment to Cartesian assumptions that position writing as a nonmaterial, distinctly human activity. This paper develops a naturalized theory of composition grounded in Deweyan pragmatic naturalism that dissolves the nature/culture dualism embedded in…
Descriptors: Writing (Composition), Artificial Intelligence, Natural Language Processing, Writing Processes
Nathan Lindberg – Writing Center Journal, 2025
In this essay, I suggest that we should embrace generative artificial intelligence (GenAI) writing tools, particularly chatbots (e.g., ChatGPT, Copilot, Claude), because they can enable linguistic equity by leveling the academic playing field for English as an additional language students. As writing experts, we can find ways to use this…
Descriptors: Artificial Intelligence, Man Machine Systems, Natural Language Processing, Technology Uses in Education
Benjamin Luke Davies; Katherine Demuth – Language Learning and Development, 2024
When acquiring the English plural, children correctly produce plural words long before they develop an understanding of morphological structure. When acquiring Sesotho noun prefixes, children are aware of the multiple constraints governing variation from a young age. Both of these cases raise questions about the Shin and Miller (2022) account of…
Descriptors: African Languages, Morphology (Languages), Syntax, Second Language Learning
Zi Yang; Junjie Gavin Wu; Haoran Xie – Asia Pacific Journal of Education, 2025
The emergence of generative artificial intelligence (GAI) in the past two years is exerting profound effects throughout society. However, while this new technology undoubtedly promises substantial benefits, its disruptive nature also means that it poses a variety of challenges. The field of education is no exception. This position paper intends to…
Descriptors: Artificial Intelligence, Ethics, Technology Uses in Education, Natural Language Processing
McNamara, Danielle S. – Discourse Processes: A Multidisciplinary Journal, 2021
This article provides a commentary within the special issue, Integration: The Keystone of Comprehension. According to most contemporary frameworks, a driving force in comprehension is the reader's ability to generate the links among the words and sentences (ideas) in the texts and between the ideas in the text and what the readers already know. As…
Descriptors: Inferences, Language Processing, Reading Comprehension, Reading Research
Guasti, Maria Teresa – First Language, 2020
In this commentary on the Special Issue, I will address the question of what memory spans measure concerning language, as language has, at least, a linear and a hierarchical dimension. I suggest that if anything what is measured has to do with the linear dimension. Then, I will discuss the welcome results on bilingual children with language…
Descriptors: Bilingualism, Inhibition, Language Impairments, Short Term Memory
Finley, Sara – First Language, 2020
In this commentary, I discuss why, despite the existence of gradience in phonetics and phonology, there is still a need for abstract representations. Most proponents of exemplar models assume multiple levels of abstraction, allowing for an integration of the gradient and the categorical. Ben Ambridge's dismissal of generative models such as…
Descriptors: Phonology, Phonetics, Abstract Reasoning, Linguistic Theory
Stringer, David – Second Language Research, 2021
Westergaard (2021) presents an updated account of the Linguistic Proximity Model and the micro-cue approach to the parser as an acquisition device. The property-by-property view of transfer inherent in this approach contrasts with other influential models that assume that third language (L3) acquisition involves the creation of a full copy of only…
Descriptors: Transfer of Training, Linguistic Theory, Second Language Learning, Multilingualism
Lieven, Elena; Ferry, Alissa; Theakston, Anna; Twomey, Katherine E. – First Language, 2020
During language acquisition children generalise at multiple layers of granularity. Ambridge argues that abstraction-based accounts suffer from lumping (over-general abstractions) or splitting (over-precise abstractions). Ambridge argues that the only way to overcome this conundrum is in a purely exemplar/analogy-based system in which…
Descriptors: Language Acquisition, Children, Generalization, Abstract Reasoning
Koring, Loes; Giblin, Iain; Thornton, Rosalind; Crain, Stephen – First Language, 2020
This response argues against the proposal that novel utterances are formed by analogy with stored exemplars that are close in meaning. Strings of words that are similar in meaning or even identical can behave very differently once inserted into different syntactic environments. Furthermore, phrases with similar meanings but different underlying…
Descriptors: Language Acquisition, Figurative Language, Syntax, Phrase Structure
Demuth, Katherine; Johnson, Mark – First Language, 2020
Exemplar-based learning requires: (1) a segmentation procedure for identifying the units of past experiences that a present experience can be compared to, and (2) a similarity function for comparing these past experiences to the present experience. This article argues that for a learner to learn a language these two mechanisms will require…
Descriptors: Comparative Analysis, Language Acquisition, Linguistic Theory, Grammar
Zettersten, Martin; Schonberg, Christina; Lupyan, Gary – First Language, 2020
This article reviews two aspects of human learning: (1) people draw inferences that appear to rely on hierarchical conceptual representations; (2) some categories are much easier to learn than others given the same number of exemplars, and some categories remain difficult despite extensive training. Both of these results are difficult to reconcile…
Descriptors: Models, Language Acquisition, Prediction, Language Processing
McClelland, James L. – First Language, 2020
Humans are sensitive to the properties of individual items, and exemplar models are useful for capturing this sensitivity. I am a proponent of an extension of exemplar-based architectures that I briefly describe. However, exemplar models are very shallow architectures in which it is necessary to stipulate a set of primitive elements that make up…
Descriptors: Models, Language Processing, Artificial Intelligence, Language Usage
MacKenzie D. Sidwell; Landon W. Bonner; Kayla Bates-Brantley; Shengtian Wu – Intervention in School and Clinic, 2024
Oral reading fluency probes are essential for reading assessment, intervention, and progress monitoring. Due to the limited options for choosing oral reading fluency probes, it is important to utilize all available resources such as generative artificial intelligence (AI) like ChatGPT to create oral reading fluency probes. The purpose of this…
Descriptors: Artificial Intelligence, Natural Language Processing, Technology Uses in Education, Oral Reading