Showing 1 to 15 of 20 results
Peer reviewed
Katherine L. Buchanan; Milena Keller-Margulis; Amanda Hut; Weihua Fan; Sarah S. Mire; G. Thomas Schanding Jr. – Early Childhood Education Journal, 2025
There is considerable research regarding measures of early reading but much less regarding early writing. Nevertheless, writing is a critical skill for success in school, and early difficulties in writing are likely to persist without intervention. A necessary step toward identifying those students who need additional support is the use of screening…
Descriptors: Writing Evaluation, Evaluation Methods, Emergent Literacy, Beginning Writing
Peer reviewed
Beseiso, Majdi; Alzubi, Omar A.; Rashaideh, Hasan – Journal of Computing in Higher Education, 2021
E-learning is gradually gaining prominence in higher education, with universities expanding their provision and enrolling more students. Automated essay scoring (AES) therefore holds strong appeal for universities as a way to manage this growing interest in learning while reducing the costs associated with human raters. The growth in…
Descriptors: Automation, Scoring, Essays, Writing Tests
Peer reviewed
Lestari, Santi B.; Brunfaut, Tineke – Language Testing, 2023
Assessing integrated reading-into-writing task performances is known to be challenging, and analytic rating scales have been found to better facilitate the scoring of these performances than other common types of rating scales. However, little is known about how specific operationalizations of the reading-into-writing construct in analytic rating…
Descriptors: Reading Writing Relationship, Writing Tests, Rating Scales, Writing Processes
Peer reviewed
Allen, Abigail A.; Jung, Pyung-Gang; Poch, Apryl L.; Brandes, Dana; Shin, Jaehyun; Lembke, Erica S.; McMaster, Kristen L. – Reading & Writing Quarterly, 2020
The purpose of this study was to investigate evidence of reliability, criterion validity, and grade-level differences of curriculum-based measures of writing (CBM-W) with 612 students in grades 1-3. Four scoring procedures (words written, words spelled correctly, correct word sequences, and correct minus incorrect word sequences) were used with…
Descriptors: Curriculum Based Assessment, Writing Tests, Test Reliability, Test Validity
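The four scoring procedures named in this abstract (total words written, words spelled correctly, correct word sequences, and correct minus incorrect word sequences) are standard CBM-W metrics. The sketch below is only a rough illustration of how they might be computed, not the scoring protocol used in the study: the whitespace tokenizer, the punctuation-stripping rule, and the dictionary lookup are simplifying assumptions, and operational scoring of correct word sequences also weighs capitalization, punctuation, and syntax, typically judged by trained scorers.

    # Minimal sketch of four common CBM-W scoring procedures.
    # "Spelled correctly" is approximated by membership in a supplied word list.

    def score_cbm_writing(sample: str, dictionary: set[str]) -> dict[str, int]:
        words = sample.split()                        # crude tokenization
        spelled_ok = [w.strip(".,!?;:").lower() in dictionary for w in words]

        tww = len(words)                              # total words written (TWW)
        wsc = sum(spelled_ok)                         # words spelled correctly (WSC)

        # Correct word sequences (CWS): adjacent pairs in which both words are
        # acceptable; here "acceptable" means correctly spelled only.
        cws = sum(1 for a, b in zip(spelled_ok, spelled_ok[1:]) if a and b)
        iws = max(len(words) - 1, 0) - cws            # incorrect word sequences
        ciws = cws - iws                              # correct minus incorrect (CIWS)

        return {"TWW": tww, "WSC": wsc, "CWS": cws, "CIWS": ciws}

    # Example with a hypothetical first-grade writing sample.
    print(score_cbm_writing("the dog ran fast", {"the", "dog", "ran", "fast"}))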
Peer reviewed; full text available on ERIC (PDF)
Uzun, N. Bilge; Alici, Devrim; Aktas, Mehtap – European Journal of Educational Research, 2019
The purpose of study is to examine the reliability of analytical rubrics and checklists developed for the assessment of story writing skills by means of generalizability theory. The study group consisted of 52 students attending the 5th grade at primary school and 20 raters in Mersin University. The G study was carried out with the fully crossed…
Descriptors: Foreign Countries, Scoring Rubrics, Check Lists, Writing Tests
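For readers unfamiliar with the G study mentioned in this abstract: generalizability theory partitions observed score variance into components (here persons, raters, and their interaction confounded with error). As a generic illustration, not a result reported by the authors, the relative generalizability coefficient for a simple fully crossed persons-by-raters design averaged over $n_r$ raters is

$$
E\rho^{2} \;=\; \frac{\sigma^{2}_{p}}{\sigma^{2}_{p} + \sigma^{2}_{pr,e}/n_{r}},
$$

where $\sigma^{2}_{p}$ is the variance component for persons and $\sigma^{2}_{pr,e}$ is the person-by-rater interaction confounded with residual error.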
Peer reviewed; full text available on ERIC (PDF)
Rios, Joseph A.; Sparks, Jesse R.; Zhang, Mo; Liu, Ou Lydia – ETS Research Report Series, 2017
Proficiency with written communication (WC) is critical for success in college and careers. As a result, institutions face a growing challenge to accurately evaluate their students' writing skills to obtain data that can support demands of accreditation, accountability, or curricular improvement. Many current standardized measures, however, lack…
Descriptors: Test Construction, Test Validity, Writing Tests, College Outcomes Assessment
Peer reviewed
Campbell, Heather; Espin, Christine A.; McMaster, Kristen – Reading and Writing: An Interdisciplinary Journal, 2013
The purpose of this study was to examine the validity and reliability of Curriculum-Based Measures in writing for English learners. Participants were 36 high school English learners with moderate to high levels of English language proficiency. Predictor variables were type of writing prompt (picture, narrative, and expository), time (3, 5, and 7…
Descriptors: Curriculum Based Assessment, Writing Tests, Test Validity, Test Reliability
Peer reviewed
Frame, Laura B.; Vidrine, Stephanie M.; Hinojosa, Ryan – Journal of Psychoeducational Assessment, 2016
The Kaufman Test of Educational Achievement, Third Edition (KTEA-3) is a revised and updated comprehensive academic achievement test (Kaufman & Kaufman, 2014). Authored by Drs. Alan and Nadeen Kaufman and published by Pearson, the KTEA-3 remains an individual achievement test normed for individuals of ages 4 through 25 years, or for those in…
Descriptors: Achievement Tests, Elementary Secondary Education, Test Validity, Test Reliability
Merrigan, Teresa E. – ProQuest LLC, 2012
The purpose of the current study was to evaluate the psychometric properties of alternative approaches to administering and scoring curriculum-based measurement for written expression. Specifically, three response durations (3, 5, and 7 minutes) and six score types (total words written, words spelled correctly, percent of words spelled correctly,…
Descriptors: Curriculum Based Assessment, Testing, Scoring, Writing Tests
Haberman, Shelby J. – Educational Testing Service, 2011
Alternative approaches are discussed for use of e-rater® to score the TOEFL iBT® Writing test. These approaches involve alternate criteria. In the 1st approach, the predicted variable is the expected rater score of the examinee's 2 essays. In the 2nd approach, the predicted variable is the expected rater score of 2 essay responses by the…
Descriptors: Writing Tests, Scoring, Essays, Language Tests
Peer reviewed
McMaster, Kristen L.; Du, Xiaoqing; Petursdottir, Anna-Lind – Journal of Learning Disabilities, 2009
The purpose of the two studies reported in this article was to examine technical features of curriculum-based measures for beginning writers. In Study 1, 50 first graders responded to word copying, sentence copying, and story prompts. In Study 2, 50 additional first graders responded to letter, picture-word, picture-theme, and photo prompts. In…
Descriptors: Curriculum Based Assessment, Grade 1, Writing Tests, Cues
Peer reviewed
Gebril, Atta – Assessing Writing, 2010
Integrated tasks are currently employed in a number of L2 exams since they are perceived as an addition to the writing-only task type. Given this trend, the current study investigates composite score generalizability of both reading-to-write and writing-only tasks. For this purpose, a multivariate generalizability analysis is used to investigate…
Descriptors: Scoring, Scores, Second Language Instruction, Writing Evaluation
Peer reviewed; full text available on ERIC (PDF)
Lee, Yong-Won; Gentile, Claudia; Kantor, Robert – ETS Research Report Series, 2008
The main purpose of the study was to investigate the distinctness and reliability of analytic (or multitrait) rating dimensions and their relationships to holistic scores and "e-rater"® essay feature variables in the context of the TOEFL® computer-based test (CBT) writing assessment. Data analyzed in the study were analytic and holistic…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Scoring
Wolfe, Edward W.; Kao, Chi-Wen – 1996
This paper reports the results of an analysis of the relationship between scorer behaviors and score variability. Thirty-six essay scorers were interviewed and asked to perform a think-aloud task as they scored 24 essays. Each comment made by a scorer was coded according to its content focus (i.e. appearance, assignment, mechanics, communication,…
Descriptors: Content Analysis, Educational Assessment, Essays, Evaluation Methods
Crehan, Kevin D. – 1997
Writing fits well within the realm of outcomes suitable for observation by performance assessments. Studies of the reliability of performance assessments have suggested that interrater reliability can be consistently high. Scoring consistency, however, is only one aspect of quality in decisions based on assessment results. Another is…
Descriptors: Evaluation Methods, Feedback, Generalizability Theory, Interrater Reliability