Wenjing Guo – ProQuest LLC, 2021
Constructed response (CR) items are widely used in large-scale testing programs, including the National Assessment of Educational Progress (NAEP) and many district and state-level assessments in the United States. One unique feature of CR items is that they depend on human raters to assess the quality of examinees' work. The judgment of human…
Descriptors: National Competency Tests, Responses, Interrater Reliability, Error of Measurement
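The reliability of those human judgments is typically summarized with an interrater-agreement statistic. Below is a minimal sketch, not taken from the dissertation itself, of Cohen's kappa for two raters scoring the same constructed responses; the 0-3 rubric scores are invented.

```python
# Cohen's kappa: chance-corrected agreement between two raters.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same set of responses."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of responses scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement under independent raters with the
    # observed marginal score distributions.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(freq_a) | set(freq_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 0-3 rubric scores for ten constructed responses.
a = [0, 1, 2, 2, 3, 1, 0, 2, 3, 1]
b = [0, 1, 2, 3, 3, 1, 1, 2, 3, 1]
print(f"kappa = {cohens_kappa(a, b):.3f}")
```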
Smith, Grant S.; Paige, David D. – Reading Psychology, 2019
Becoming a fluent reader has been established as important to reading comprehension. Prosody (expression) is an indicator of fluent reading that is linked to improved comprehension in students across elementary, middle, and secondary grades. Fluent reading is most often evaluated by classroom teachers through the use of a rubric, with the most…
Descriptors: Interrater Reliability, Oral Reading, Reading Fluency, National Competency Tests
National Council on Teacher Quality, 2023
Up until 2020, National Assessment of Educational Progress (NAEP) reading scores had increased only slightly since the early 1990s with large achievement gaps for students of color and students living in poverty. Modest gains in fourth grade reading proficiency since 1992 were erased during the pandemic. The insufficient progress in reading even…
Descriptors: National Competency Tests, Reading Achievement, Reading Instruction, Scores
Bolaños, Daniel; Cole, Ron A.; Ward, Wayne H.; Tindal, Gerald A.; Hasbrouck, Jan; Schwanenflugel, Paula J. – Journal of Educational Psychology, 2013
This article describes a comprehensive approach to fully automated assessment of children's oral reading fluency (ORF), one of the most informative and frequently administered measures of children's reading ability. Speech recognition and machine learning techniques are described that model the 3 components of oral reading fluency: word accuracy,…
Descriptors: Oral Reading, Reading Fluency, Automation, Accuracy
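As a rough illustration of the word-accuracy and rate components, the sketch below scores a recognizer's transcript against the passage text and reports words correct per minute (WCPM), a standard ORF metric. It is not the authors' system: the passage, transcript, and timing are invented, and a simple text alignment stands in for time-aligned speech recognition.

```python
import difflib

def orf_scores(passage, transcript, seconds):
    """Words read correctly, word accuracy, and words-correct-per-minute."""
    ref = passage.lower().split()
    hyp = transcript.lower().split()
    # Align recognized words to the passage and count matches; a real
    # system would align recognizer output in time, not with difflib.
    matcher = difflib.SequenceMatcher(a=ref, b=hyp)
    correct = sum(block.size for block in matcher.get_matching_blocks())
    accuracy = correct / len(ref)
    wcpm = correct / (seconds / 60)
    return correct, accuracy, wcpm

passage = "the quick brown fox jumps over the lazy dog"
transcript = "the quick brown fox jumped over the dog"   # two errors
correct, accuracy, wcpm = orf_scores(passage, transcript, seconds=6.0)
print(f"{correct} correct, accuracy {accuracy:.0%}, WCPM {wcpm:.0f}")
```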
Smith, Margaret A. – School Administrator, 2012
The author's community, which includes parents of the 62,000 students attending schools in Volusia County, Florida, has been baffled by the conflict between Florida's A-F school grading system, based on the Florida Comprehensive Assessment Test, and the federal pass-or-fail system. While many of the 80 schools in her…
Descriptors: Grading, Elementary Secondary Education, Federal Legislation, Advocacy
McCurry, Doug – Assessing Writing, 2010
This article considers the claim that machine scoring of writing test responses agrees with human readers as much as humans agree with other humans. These claims about the reliability of machine scoring of writing are usually based on specific and constrained writing tasks, and there is reason for asking whether machine scoring of writing requires…
Descriptors: Writing Tests, Scoring, Interrater Reliability, Computer Assisted Testing
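The claim at issue is quantitative, so a minimal sketch may help: compare human-human agreement with machine-human agreement on the same essays. The scores below are invented; published studies typically report exact and adjacent agreement (and weighted kappa) on much larger samples.

```python
def agreement(x, y, tolerance=0):
    """Share of essays on which two score sets differ by <= tolerance."""
    return sum(abs(a - b) <= tolerance for a, b in zip(x, y)) / len(x)

# Hypothetical 1-6 essay scores from two humans and one machine scorer.
human_1 = [4, 3, 5, 2, 4, 3, 6, 4, 3, 5]
human_2 = [4, 4, 5, 2, 3, 3, 5, 4, 3, 5]
machine = [4, 3, 5, 3, 4, 4, 5, 4, 3, 5]

for tol, label in [(0, "exact"), (1, "adjacent")]:
    hh = agreement(human_1, human_2, tol)
    mh = agreement(machine, human_1, tol)
    print(f"{label}: human-human {hh:.0%}, machine-human {mh:.0%}")
```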
Tennessee Department of Education, 2012
In the summer of 2011, the Tennessee Department of Education contracted with the National Institute for Excellence in Teaching (NIET) to provide a four-day training for all evaluators across the state. NIET trained more than 5,000 evaluators intensively in the state model (districts using alternative instruments delivered their own training).…
Descriptors: Video Technology, Feedback (Response), Evaluators, Interrater Reliability

Page, Ellis Batten – Journal of Experimental Education, 1994
National Assessment of Educational Progress writing sample essays from 1988 and 1990 (495 and 599 essays) were subjected to computerized grading and human ratings. Cross-validation suggests that computer scoring is superior to a two-judge panel, a finding encouraging for large programs of essay evaluation. (SLD)
Descriptors: Computer Assisted Testing, Computer Software, Essays, Evaluation Methods
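For readers unfamiliar with Page's approach, the sketch below shows the general idea rather than his actual system: regress human ratings on surface features of the text (Page's "proxes," such as essay length and average word length), then estimate out-of-sample agreement by cross-validation. The feature matrix and ratings here are simulated.

```python
import numpy as np

def loo_predictions(X, y):
    """Leave-one-out predictions from an ordinary-least-squares fit."""
    X = np.column_stack([np.ones(len(y)), X])      # add intercept column
    preds = np.empty(len(y))
    for i in range(len(y)):
        mask = np.arange(len(y)) != i              # hold essay i out
        beta, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        preds[i] = X[i] @ beta
    return preds

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                      # stand-in surface features
y = X @ np.array([0.5, 1.0, -0.3]) + rng.normal(scale=0.5, size=200)
preds = loo_predictions(X, y)
print(f"cross-validated r with human scores: {np.corrcoef(preds, y)[0, 1]:.2f}")
```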

Smitherman, Geneva – Language and Education, 1992
Nearly 1,800 essays written by 17-year-old African-American students were examined in terms of the frequency and distribution of Black English Vernacular (BEV) and the covariance of BEV with rater scores. Results suggest that BEV has converged with Edited American/Standard English and that students were not penalized for BEV in…
Descriptors: Black Dialects, Black Students, Essays, Interrater Reliability
Webb, Melvin W., II; Miller, Eva R. – 1995
As constructed-response items become an integral part of educational assessments, setting student performance standards on such items has become an important issue. Two standard-setting methods, one used for setting standards on the National Assessment of Educational Progress (NAEP) in reading in grade 8 and the other used to set…
Descriptors: Comparative Analysis, Constructed Response, Criteria, Educational Assessment
Hawk, Anne W.; Cross, James Logan – 1987
This study involved the selection and adaptation of a writing assessment procedure for teachers and researchers in the Duval County Public Schools (Florida) to use in assessing changes in writing ability among elementary grade students. Through a review of the literature, four writing assessment procedures (analytic, holistic, focused holistic,…
Descriptors: Elementary Education, Elementary School Teachers, Evaluators, Holistic Evaluation
Reckase, Mark D.; And Others – 1995
The research reported in this paper was conducted to gain information to guide the selection of standard-setting procedures for use with polytomous items to set achievement levels on the National Assessment of Educational Progress (NAEP) assessments in U.S. history and geography. Standard-setting procedures were evaluated to determine the relative…
Descriptors: Academic Achievement, Educational Assessment, Elementary Secondary Education, Evaluation Methods
Kaplan, Bruce A.; Johnson, Eugene G. – 1992
Across the field of educational assessment, the case has been made for alternatives to the multiple-choice item type. Most of these alternative item types require a subjective evaluation by a rater, and the reliability of that subjective rating is a key component of such items. In this paper, measures of reliability are…
Descriptors: Educational Assessment, Elementary Secondary Education, Estimation (Mathematics), Evaluators
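One standard measure in this setting is the intraclass correlation. Below is a minimal sketch, not the paper's own analysis, of a one-way ICC computed from a small invented rating matrix (rows are responses, columns are raters).

```python
import numpy as np

def icc_oneway(ratings):
    """ICC(1): reliability of a single rating, one-way random effects."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    # Between-response and within-response mean squares from one-way ANOVA.
    ms_between = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_within = np.sum((ratings - row_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical scores: five responses, each rated by three raters.
ratings = np.array([[3, 3, 2],
                    [1, 2, 1],
                    [4, 4, 4],
                    [2, 1, 2],
                    [3, 4, 3]])
print(f"ICC(1) = {icc_oneway(ratings):.2f}")
```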
Linn, Robert L.; And Others – 1991
The statute authorizing the National Assessment of Educational Progress (NAEP) calls for the National Assessment Governing Board (NAGB) to set appropriate achievement levels in all areas and grades tested by the NAEP. These levels are intended to establish what students should know, not just what they do know. In 1990, the NAEP posited three…
Descriptors: Academic Achievement, Academic Standards, Credibility, Educational Assessment