Publication Date
| Period | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 7 |
| Since 2022 (last 5 years) | 27 |
| Since 2017 (last 10 years) | 62 |
Author
| Author | Results |
| --- | --- |
| Lazarus, Sheryl S. | 3 |
| Mercer, Sterett H. | 3 |
| Thurlow, Martha L. | 3 |
| Huang, Yue | 2 |
| Keller-Margulis, Milena A. | 2 |
| Liu, Kristin K. | 2 |
| Matta, Michael | 2 |
| Rogers, Christopher M. | 2 |
| Wilson, Joshua | 2 |
| Zhang, Mo | 2 |
| Abbasian, Gholam-Reza | 1 |
Publication Type
| Publication Type | Results |
| --- | --- |
| Journal Articles | 51 |
| Reports - Research | 51 |
| Tests/Questionnaires | 6 |
| Information Analyses | 4 |
| Dissertations/Theses -… | 3 |
| Reports - Descriptive | 3 |
| Reports - Evaluative | 2 |
| Numerical/Quantitative Data | 1 |
| Speeches/Meeting Papers | 1 |
Audience
| Audience | Results |
| --- | --- |
| Teachers | 1 |
Location
| Location | Results |
| --- | --- |
| Iran | 5 |
| China | 3 |
| Japan | 3 |
| Germany | 2 |
| Italy | 2 |
| Saudi Arabia | 2 |
| Texas | 2 |
| United Kingdom | 2 |
| Utah | 2 |
| Vietnam | 2 |
| Australia | 1 |
Mo Zhang; Paul Deane; Andrew Hoang; Hongwen Guo; Chen Li – Educational Measurement: Issues and Practice, 2025
In this paper, we describe two empirical studies that demonstrate the application and modeling of keystroke logs in writing assessments. We illustrate two different approaches to modeling differences in writing processes: analysis of mean differences in handcrafted, theory-driven features and use of large language models to identify stable personal…
Descriptors: Writing Tests, Computer Assisted Testing, Keyboarding (Data Entry), Writing Processes
Matthew D. Coss – Language Learning & Technology, 2025
The extent to which writing modality (i.e., handwriting vs. keyboarding) impacts second-language (L2) writing assessment scores remains unclear. For alphabetic languages like English, research shows mixed results, documenting both equivalent and divergent scores between typed and handwritten tests (e.g., Barkaoui & Knouzi, 2018). However, for…
Descriptors: Computer Assisted Testing, Paper and Pencil Tests, Second Language Learning, Chinese
Jessie S. Barrot – Education and Information Technologies, 2024
This bibliometric analysis attempts to map out the scientific literature on automated writing evaluation (AWE) systems for teaching, learning, and assessment. A total of 170 documents published between 2002 and 2021 in Social Sciences Citation Index journals were reviewed along four dimensions, namely size (productivity and citations), time…
Descriptors: Educational Trends, Automation, Computer Assisted Testing, Writing Tests
Huawei, Shi; Aryadoust, Vahid – Education and Information Technologies, 2023
Automated writing evaluation (AWE) systems are developed based on interdisciplinary research and technological advances such as natural language processing, computer sciences, and latent semantic analysis. Despite a steady increase in research publications in this area, the results of AWE investigations are often mixed, and their validity may be…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Automation
Jussi S. Jauhiainen; Agustín Garagorry Guerra – Innovations in Education and Teaching International, 2025
The study highlights ChatGPT-4's potential in educational settings for the evaluation of university students' open-ended written examination responses. ChatGPT-4 evaluated 54 written responses, ranging from 24 to 256 words in English. It assessed each response using five criteria and assigned a grade on a six-point scale from fail to excellent,…
Descriptors: Artificial Intelligence, Technology Uses in Education, Student Evaluation, Writing Evaluation
Andrew Runge; Sarah Goodwin; Yigal Attali; Mya Poe; Phoebe Mulcaire; Kai-Ling Lo; Geoffrey T. LaFlair – Language Testing, 2025
A longstanding criticism of traditional high-stakes writing assessments is their use of static prompts, to which test takers compose a single text in response. Such static prompts do not allow measurement of the writing process. This paper describes the development and validation of an innovative interactive writing task. After the…
Descriptors: Material Development, Writing Evaluation, Writing Assignments, Writing Skills
Choi, Yun Deok – Language Testing in Asia, 2022
A much-debated question in the L2 assessment field is whether computer familiarity should be considered a potential source of construct-irrelevant variance in computer-based writing (CBW) tests. This study aims to make a partial validity argument for an online source-based writing test (OSWT) designed for English placement testing (EPT), focusing on…
Descriptors: Test Validity, Scores, Computer Assisted Testing, English (Second Language)
Yue Huang; Joshua Wilson – Journal of Computer Assisted Learning, 2025
Background: Automated writing evaluation (AWE) systems, used as formative assessment tools in writing classrooms, are promising for enhancing instruction and improving student performance. Although meta-analytic evidence supports AWE's effectiveness in various contexts, research on its effectiveness in the U.S. K-12 setting has lagged behind its…
Descriptors: Writing Evaluation, Writing Skills, Writing Tests, Writing Instruction
Almusharraf, Norah; Alotaibi, Hind – Technology, Knowledge and Learning, 2023
Evaluating written texts is believed to be a time-consuming process that can lack consistency and objectivity. Automated essay scoring (AES) can provide solutions to some of the limitations of human scoring. This research aimed to evaluate the performance of one AES system, Grammarly, in comparison to human raters. Both approaches' performances…
Descriptors: Writing Evaluation, Writing Tests, Essay Tests, Essays
Dan Song; Alexander F. Tang – Language Learning & Technology, 2025
While many studies have addressed the benefits of technology-assisted L2 writing, limited research has delved into how generative artificial intelligence (GAI) supports students in completing their writing tasks in Mandarin Chinese. In this study, 26 university-level Mandarin Chinese foreign language students completed two writing tasks on two…
Descriptors: Artificial Intelligence, Second Language Learning, Standardized Tests, Writing Tests
Steedle, Jeffrey T.; Cho, Young Woo; Wang, Shichao; Arthur, Ann M.; Li, Dongmei – Educational Measurement: Issues and Practice, 2022
As testing programs transition from paper to online testing, they must study mode comparability to support the exchangeability of scores from different testing modes. To that end, a series of three mode comparability studies was conducted during the 2019-2020 academic year with examinees randomly assigned to take the ACT college admissions exam on…
Descriptors: College Entrance Examinations, Computer Assisted Testing, Scores, Test Format
Mirjam de Vreeze-Westgeest; Sara Mata; Francisca Serrano; Wilma Resing; Bart Vogelaar – European Journal of Psychology and Educational Research, 2023
The current study aimed to investigate the effectiveness of an online dynamic test in reading and writing, differentiating between typically developing children (n = 47) and children diagnosed with dyslexia (n = 30) aged between nine and twelve years. In doing so, it was analysed whether visual working memory, auditory working memory, inhibition,…
Descriptors: Computer Assisted Testing, Reading Tests, Writing Tests, Executive Function
Joshua Kloppers – International Journal of Computer-Assisted Language Learning and Teaching, 2023
Automated writing evaluation (AWE) software is an increasingly popular tool for English second language learners. However, research on the accuracy of such software has been both scarce and largely limited in its scope. As such, this article broadens the field of research on AWE accuracy by using a mixed design to holistically evaluate the…
Descriptors: Grammar, Automation, Writing Evaluation, Computer Assisted Instruction
Yi Gui – ProQuest LLC, 2024
This study explores using transfer learning in machine learning for natural language processing (NLP) to create generic automated essay scoring (AES) models, providing instant online scoring for statewide writing assessments in K-12 education. The goal is to develop an instant online scorer that is generalizable to any prompt, addressing the…
Descriptors: Writing Tests, Natural Language Processing, Writing Evaluation, Scoring
Anneen Church – Perspectives in Education, 2023
Restrictions and challenges brought on by the COVID-19 pandemic pushed higher education institutions to innovate in order to keep meeting teaching and learning goals. In South Africa, existing social inequalities were exacerbated by pandemic restrictions, and many students faced severe challenges in terms of access and support to aid in their…
Descriptors: Foreign Countries, Writing Tests, Student Evaluation, COVID-19