Publication Date
| Date Range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 3 |
| Since 2017 (last 10 years) | 3 |
| Since 2007 (last 20 years) | 4 |
Descriptor
| Descriptor | Records |
| --- | --- |
| Predictive Validity | 5 |
| Scoring | 5 |
| Writing Tests | 5 |
| Automation | 3 |
| Elementary School Students | 3 |
| Essays | 3 |
| Grade 7 | 3 |
| Middle School Students | 3 |
| Bias | 2 |
| Curriculum Based Assessment | 2 |
| Grade 4 | 2 |
Author
| Author | Records |
| --- | --- |
| Keller-Margulis, Milena A. | 3 |
| Matta, Michael | 3 |
| Mercer, Sterett H. | 3 |
| James, Cindy L. | 1 |
| Merrigan, Teresa E. | 1 |
Publication Type
| Publication Type | Records |
| --- | --- |
| Reports - Research | 3 |
| Journal Articles | 2 |
| Dissertations/Theses -… | 1 |
| Reports - Evaluative | 1 |
Education Level
| Education Level | Records |
| --- | --- |
| Elementary Education | 4 |
| Grade 7 | 3 |
| Intermediate Grades | 3 |
| Junior High Schools | 3 |
| Middle Schools | 3 |
| Secondary Education | 3 |
| Grade 4 | 2 |
| Grade 6 | 1 |
| Grade 8 | 1 |
Location
| Location | Records |
| --- | --- |
| Pennsylvania | 1 |
Laws, Policies, & Programs
Assessments and Surveys
What Works Clearinghouse Rating
Implications of Bias in Automated Writing Quality Scores for Fair and Equitable Assessment Decisions
Matta, Michael; Mercer, Sterett H.; Keller-Margulis, Milena A. – School Psychology, 2023
Recent advances in automated writing evaluation have enabled educators to use automated writing quality scores to improve assessment feasibility. However, there has been limited investigation of bias in automated writing quality scores for students from diverse racial or ethnic backgrounds. The use of biased scores could contribute to…
Descriptors: Bias, Automation, Writing Evaluation, Scoring
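The truncated abstract does not show how such a bias check is run. As a rough illustration only, not the authors' method, the sketch below uses one common operationalization, differential prediction: fit a single regression from the automated score to a criterion score, then check whether residuals center on zero within each demographic group. All scores and group labels are invented.

```python
# Minimal differential-prediction sketch (illustrative, not the study's
# analysis): fit one OLS line predicting a criterion score from the
# automated score, then report the mean residual per group. A mean
# residual far from zero means the shared line systematically over-
# or under-predicts for that group.
from statistics import mean

def fit_line(x, y):
    """Ordinary least squares with one predictor: returns (slope, intercept)."""
    mx, my = mean(x), mean(y)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

def mean_residual_by_group(auto, criterion, groups):
    slope, intercept = fit_line(auto, criterion)
    residuals = [yi - (slope * xi + intercept) for xi, yi in zip(auto, criterion)]
    by_group = {}
    for g, r in zip(groups, residuals):
        by_group.setdefault(g, []).append(r)
    return {g: mean(rs) for g, rs in by_group.items()}

# Hypothetical data: automated score, human criterion score, group label.
auto      = [3.1, 2.4, 4.0, 3.5, 2.0, 3.8, 2.9, 3.3]
criterion = [3.0, 2.5, 4.2, 3.4, 2.2, 3.6, 3.4, 3.9]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(mean_residual_by_group(auto, criterion, groups))
```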
Michael Matta; Milena A. Keller-Margulis; Sterett H. Mercer – Grantee Submission, 2022
Although researchers have investigated the technical adequacy and usability of written-expression curriculum-based measures (WE-CBM), the economic implications of different scoring approaches have largely been ignored. The absence of such knowledge can undermine the effective allocation of resources and lead to the adoption of suboptimal measures for…
Descriptors: Cost Effectiveness, Scoring, Automation, Writing Tests
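The truncated abstract stops before the cost model, but the trade-off it points at is easy to make concrete. The sketch below is a generic fixed-versus-variable cost comparison under invented dollar figures, not the study's analysis.

```python
# Illustrative cost comparison (assumed figures, not the study's data):
# hand scoring has little fixed cost but a high per-student cost, while
# automated scoring typically reverses that.
def total_cost(fixed, per_student, n):
    return fixed + per_student * n

HAND = {"fixed": 0.0, "per_student": 4.00}    # e.g., rater time per sample
AUTO = {"fixed": 500.0, "per_student": 0.25}  # e.g., license / setup cost

for n in (50, 200, 1000):
    hand = total_cost(HAND["fixed"], HAND["per_student"], n)
    auto = total_cost(AUTO["fixed"], AUTO["per_student"], n)
    print(f"n={n:4d}  hand=${hand:7.2f}  auto=${auto:7.2f}")
# Past the break-even n, automated scoring costs less per administration.
```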
Merrigan, Teresa E. – ProQuest LLC, 2012
The purpose of the current study was to evaluate the psychometric properties of alternative approaches to administering and scoring curriculum-based measurement for written expression. Specifically, three response durations (3, 5, and 7 minutes) and six score types (total words written, words spelled correctly, percent of words spelled correctly,…
Descriptors: Curriculum Based Assessment, Testing, Scoring, Writing Tests
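Three of the six score types named above have simple, standard definitions that a short sketch can make concrete: total words written (TWW), words spelled correctly (WSC), and percent of words spelled correctly (%WSC). The tiny word list below stands in for a real spell-check dictionary and is purely illustrative.

```python
# Minimal sketch of three CBM written-expression score types. The word
# list is a toy stand-in for a dictionary or spell-checker.
KNOWN_WORDS = {"the", "dog", "ran", "fast", "down", "hill", "big", "was"}

def cbm_scores(sample: str):
    words = [w.strip(".,!?;:").lower() for w in sample.split()]
    words = [w for w in words if w]
    tww = len(words)                                  # total words written
    wsc = sum(1 for w in words if w in KNOWN_WORDS)   # words spelled correctly
    pwsc = 100 * wsc / tww if tww else 0.0            # percent spelled correctly
    return {"TWW": tww, "WSC": wsc, "%WSC": round(pwsc, 1)}

print(cbm_scores("The dog ran fsat down the big hill."))
# {'TWW': 8, 'WSC': 7, '%WSC': 87.5}
```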
James, Cindy L. – Assessing Writing, 2006
How do scores from writing samples generated by computerized essay scorers compare to those generated by "untrained" human scorers, and what combination of scores, if any, is more accurate at placing students in composition courses? This study endeavored to answer this two-part question by evaluating the correspondence between writing sample…
Descriptors: Writing (Composition), Predictive Validity, Scoring, Validity
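As a rough sketch of what such a correspondence check involves, not the study's procedure, the snippet below computes the Pearson correlation between computerized and human scores and their agreement on a placement decision at an assumed cutoff. Scores and the cutoff are hypothetical.

```python
# Correspondence between two sets of scores: Pearson r plus agreement
# on a binary placement decision at an assumed cutoff (illustrative).
from math import sqrt
from statistics import mean

def pearson(x, y):
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

machine = [4.0, 2.5, 3.5, 5.0, 2.0, 4.5]
human   = [3.5, 2.0, 4.0, 4.5, 2.5, 4.0]
CUTOFF = 3.0   # assumed threshold for placement into the standard course

r = pearson(machine, human)
agree = mean((m >= CUTOFF) == (h >= CUTOFF) for m, h in zip(machine, human))
print(f"r = {r:.2f}, placement agreement = {agree:.0%}")
```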

