Showing all 7 results
Peer reviewed
McCurry, Doug – Assessing Writing, 2010
This article considers the claim that machine scoring of writing test responses agrees with human readers as much as humans agree with other humans. These claims about the reliability of machine scoring are usually based on specific and constrained writing tasks, and there is reason to ask whether machine scoring of writing requires…
Descriptors: Writing Tests, Scoring, Interrater Reliability, Computer Assisted Testing
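The comparison McCurry questions is usually reported with agreement statistics such as exact-agreement rates or quadratic weighted kappa between pairs of raters. The sketch below is only an illustration of how such human–human and human–machine agreement figures are computed; the scores are hypothetical and the metric choice is an assumption, not the article's own method.

```python
from collections import Counter

def quadratic_weighted_kappa(a, b, min_score, max_score):
    """Quadratic weighted kappa between two equal-length lists of integer scores."""
    assert len(a) == len(b)
    n = len(a)
    categories = range(min_score, max_score + 1)
    obs = Counter(zip(a, b))      # observed joint score counts
    marg_a, marg_b = Counter(a), Counter(b)
    span = (max_score - min_score) ** 2 or 1
    num = den = 0.0
    for i in categories:
        for j in categories:
            w = (i - j) ** 2 / span                      # quadratic disagreement weight
            num += w * obs.get((i, j), 0) / n            # observed weighted disagreement
            den += w * (marg_a[i] / n) * (marg_b[j] / n) # chance-expected disagreement
    return 1.0 - num / den

# Hypothetical essay scores on a 1-6 scale (not data from the article)
human_1 = [3, 4, 4, 5, 2, 3, 6, 4, 3, 5]
human_2 = [3, 4, 5, 5, 2, 3, 5, 4, 4, 5]
machine = [3, 4, 4, 5, 3, 3, 5, 4, 3, 5]

exact = lambda x, y: sum(p == q for p, q in zip(x, y)) / len(x)
print("human-human exact agreement:  ", exact(human_1, human_2))
print("human-machine exact agreement:", exact(human_1, machine))
print("human-human QWK:  ", round(quadratic_weighted_kappa(human_1, human_2, 1, 6), 3))
print("human-machine QWK:", round(quadratic_weighted_kappa(human_1, machine, 1, 6), 3))
```

Comparing the two agreement figures side by side is the kind of evidence typically cited when machine scoring is claimed to match human–human reliability; McCurry's point is that such figures depend heavily on how constrained the writing task is.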
Peer reviewed
Baker, Beverly A. – Assessing Writing, 2010
In high-stakes writing assessments, rater training in the use of a rating scale does not eliminate variability in grade attribution. This realisation has been accompanied by research that explores possible sources of rater variability, such as rater background or rating scale type. However, there has been little consideration thus far of…
Descriptors: Foreign Countries, Writing Evaluation, Writing Tests, Testing
Peer reviewed
Gebril, Atta – Assessing Writing, 2010
Integrated tasks are currently employed in a number of L2 exams since they are perceived as an addition to the writing-only task type. Given this trend, the current study investigates composite score generalizability of both reading-to-write and writing-only tasks. For this purpose, a multivariate generalizability analysis is used to investigate…
Descriptors: Scoring, Scores, Second Language Instruction, Writing Evaluation
Peer reviewed
Whithaus, Carl; Harrison, Scott B.; Midyette, Jeb – Assessing Writing, 2008
This article examines the influence of keyboarding versus handwriting in a high-stakes writing assessment. Conclusions are based on data collected from a pilot project to move Old Dominion University's Exit Exam of Writing Proficiency from a handwritten format into a dual-option format (i.e., the students may choose to handwrite or keyboard the…
Descriptors: Writing Evaluation, Handwriting, Pilot Projects, Writing Tests
Peer reviewed
Barkaoui, Khaled – Assessing Writing, 2007
Educators often have to choose among different types of rating scales to assess second-language (L2) writing performance. There is little research, however, on how different rating scales affect rater performance. This study employed a mixed-method approach to investigate the effects of two different rating scales on EFL essay scores, rating…
Descriptors: Writing Evaluation, Writing Tests, Rating Scales, Essays
Peer reviewed
Knoch, Ute; Read, John; von Randow, Janet – Assessing Writing, 2007
The training of raters for writing assessment through web-based programmes is emerging as an attractive and flexible alternative to the conventional method of face-to-face training sessions. Although some online training programmes have been developed, there is little published research on them. The current study aims to compare the effectiveness…
Descriptors: Writing Evaluation, Writing Tests, Professional Training, Interrater Reliability
Peer reviewed
Lee, H. K. – Assessing Writing, 2004
This study aimed to comprehensively investigate the impact of a word processor on an ESL writing assessment, covering comparisons of inter-rater reliability, the quality of written products, and the writing process across different testing occasions using different writing media, as well as students' perceptions of a computer-delivered test. Writing samples of…
Descriptors: Writing Evaluation, Student Attitudes, Writing Tests, Testing