Publication Date
| Date range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 5 |
| Since 2022 (last 5 years) | 19 |
| Since 2017 (last 10 years) | 44 |
| Since 2007 (last 20 years) | 114 |
Author
| Author | Count |
| --- | --- |
| Attali, Yigal | 8 |
| Mercer, Sterett H. | 7 |
| Wolfe, Edward W. | 5 |
| Kantor, Robert | 4 |
| Lee, Yong-Won | 4 |
| Bridgeman, Brent | 3 |
| Crehan, Kevin D. | 3 |
| Deane, Paul | 3 |
| Keller-Margulis, Milena A. | 3 |
| Matta, Michael | 3 |
| Matter, M. Kevin | 3 |
Location
| Location | Count |
| --- | --- |
| Canada | 9 |
| Arizona | 7 |
| Iran | 5 |
| Pennsylvania | 5 |
| Florida | 3 |
| California | 2 |
| Hong Kong | 2 |
| Indonesia | 2 |
| Iran (Tehran) | 2 |
| Taiwan | 2 |
| Turkey | 2 |
Laws, Policies, & Programs
| Law, Policy, or Program | Count |
| --- | --- |
| Individuals with Disabilities… | 1 |
| Kentucky Education Reform Act… | 1 |
Wheeler, Jordan M.; Engelhard, George; Wang, Jue – Measurement: Interdisciplinary Research and Perspectives, 2022
Objectively scoring constructed-response items on educational assessments has long been a challenge due to the use of human raters. Even well-trained raters using a rubric can inaccurately assess essays. Unfolding models measure raters' scoring accuracy by capturing the discrepancy between criterion and operational ratings by placing essays on an…
Descriptors: Accuracy, Scoring, Statistical Analysis, Models
Andrea Gjorevski; Mimi Li; Troy L. Cox – TESOL Quarterly: A Journal for Teachers of English to Speakers of Other Languages and of Standard English as a Second Dialect, 2025
Open access to novel AI tools offers unprecedented opportunities for human-AI collaboration in writing instruction and assessment. While research on using generative AI tools like ChatGPT in these contexts is emerging, more research is needed to understand their effectiveness as Automated Writing Evaluation (AWE) tools. This study explores the potential of…
Descriptors: Artificial Intelligence, Criterion Referenced Tests, Essay Tests, Automation
Somayeh Fathali; Fatemeh Mohajeri – Technology in Language Teaching & Learning, 2025
The International English Language Testing System (IELTS) is a high-stakes exam where Writing Task 2 significantly influences the overall scores, requiring reliable evaluation. While trained human raters perform this task, concerns about subjectivity and inconsistency have led to growing interest in artificial intelligence (AI)-based assessment…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Artificial Intelligence
Meaghan McKenna; Hope Gerde; Nicolette Grasley-Boy – Reading and Writing: An Interdisciplinary Journal, 2025
This article describes the development and administration of the "Kindergarten-Second Grade (K-2) Writing Data-Based Decision Making (DBDM) Survey." The "K-2 Writing DBDM Survey" was developed to learn more about current DBDM practices specific to early writing. A total of 376 educational professionals (175 general education…
Descriptors: Writing Evaluation, Writing Instruction, Preschool Teachers, Kindergarten
Katherine L. Buchanan; Milena Keller-Margulis; Amanda Hut; Weihua Fan; Sarah S. Mire; G. Thomas Schanding Jr. – Early Childhood Education Journal, 2025
There is considerable research regarding measures of early reading but much less on early writing. Nevertheless, writing is a critical skill for success in school, and early difficulties in writing are likely to persist without intervention. A necessary step toward identifying those students who need additional support is the use of screening…
Descriptors: Writing Evaluation, Evaluation Methods, Emergent Literacy, Beginning Writing
Huawei, Shi; Aryadoust, Vahid – Education and Information Technologies, 2023
Automated writing evaluation (AWE) systems are developed based on interdisciplinary research and technological advances such as natural language processing, computer sciences, and latent semantic analysis. Despite a steady increase in research publications in this area, the results of AWE investigations are often mixed, and their validity may be…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Automation
Khodi, Ali – Language Testing in Asia, 2021
The present study attempted to investigate factors which affect EFL writing scores using generalizability theory (G-theory). To this purpose, one hundred and twenty students participated in one independent and one integrated writing task. Their performances were then scored by six raters: one self-rating, three peer ratings, and…
Descriptors: Writing Tests, Scores, Generalizability Theory, English (Second Language)
Beseiso, Majdi; Alzubi, Omar A.; Rashaideh, Hasan – Journal of Computing in Higher Education, 2021
E-learning is gradually gaining prominence in higher education, with universities expanding provision and enrolling more students. The effectiveness of automated essay scoring (AES) thus holds strong appeal for universities seeking to manage growing learning demand and reduce the costs associated with human raters. The growth in…
Descriptors: Automation, Scoring, Essays, Writing Tests
Implications of Bias in Automated Writing Quality Scores for Fair and Equitable Assessment Decisions
Matta, Michael; Mercer, Sterett H.; Keller-Margulis, Milena A. – School Psychology, 2023
Recent advances in automated writing evaluation have enabled educators to use automated writing quality scores to improve assessment feasibility. However, there has been limited investigation of bias for automated writing quality scores with students from diverse racial or ethnic backgrounds. The use of biased scores could contribute to…
Descriptors: Bias, Automation, Writing Evaluation, Scoring
Almusharraf, Norah; Alotaibi, Hind – Technology, Knowledge and Learning, 2023
Evaluating written texts is believed to be a time-consuming process that can lack consistency and objectivity. Automated essay scoring (AES) can provide solutions to some of the limitations of human scoring. This research aimed to evaluate the performance of one AES system, Grammarly, in comparison to human raters. Both approaches' performances…
Descriptors: Writing Evaluation, Writing Tests, Essay Tests, Essays
Wang, Jue; Engelhard, George, Jr. – Educational and Psychological Measurement, 2019
The purpose of this study is to explore the use of unfolding models for evaluating the quality of ratings obtained in rater-mediated assessments. Two different judgmental processes can be used to conceptualize ratings: impersonal judgments and personal preferences. Impersonal judgments are typically expected in rater-mediated assessments, and…
Descriptors: Evaluative Thinking, Preferences, Evaluators, Models
Michael Matta; Milena A. Keller-Margulis; Sterett H. Mercer – Grantee Submission, 2022
Although researchers have investigated technical adequacy and usability of written-expression curriculum-based measures (WE-CBM), the economic implications of different scoring approaches have largely been ignored. The absence of such knowledge can undermine the effective allocation of resources and lead to the adoption of suboptimal measures for…
Descriptors: Cost Effectiveness, Scoring, Automation, Writing Tests
Katy Dyson; Laura Piestrzynski – Dimensions of Early Childhood, 2025
Emergent writing--the process where young children begin to experiment with written language--is an important contributor to the development of literacy skills. One way for teachers to support the development of writing skills in preschool-aged children is by integrating the Classroom Assessment Scoring System (CLASS) as a framework to foster…
Descriptors: Writing Instruction, Teaching Methods, Beginning Writing, Preschool Children
Keller-Margulis, Milena A.; Mercer, Sterett H.; Matta, Michael – Reading and Writing: An Interdisciplinary Journal, 2021
Existing approaches to measuring writing performance are insufficient in terms of both technical adequacy and feasibility for use as a screening measure. This study examined the validity and diagnostic accuracy of several approaches to automated text evaluation as well as written expression curriculum-based measurement (WE-CBM) to determine…
Descriptors: Writing Evaluation, Validity, Automation, Curriculum Based Assessment