Publication Date
| In 2026 | 0 |
| Since 2025 | 2 |
| Since 2022 (last 5 years) | 3 |
| Since 2017 (last 10 years) | 5 |
| Since 2007 (last 20 years) | 8 |
Author
| Darling-Hammond, Linda | 2 |
| Abeysinghe, Bhashithe | 1 |
| Choi, Joon Suh | 1 |
| Crossley, Scott | 1 |
| Hicks, Juanita | 1 |
| Holmes, Langdon | 1 |
| Martinez, Michael E. | 1 |
| McCurry, Doug | 1 |
| Ni, Congning | 1 |
| Page, Ellis Batten | 1 |
| Polikoff, Morgan S. | 1 |
Publication Type
| Journal Articles | 4 |
| Reports - Descriptive | 4 |
| Reports - Evaluative | 4 |
| Reports - Research | 2 |
| Books | 1 |
Education Level
| Elementary Secondary Education | 6 |
| Secondary Education | 3 |
| Elementary Education | 2 |
| Grade 8 | 2 |
| High Schools | 2 |
| Higher Education | 2 |
| Postsecondary Education | 2 |
| Grade 12 | 1 |
| Grade 4 | 1 |
| Junior High Schools | 1 |
| Middle Schools | 1 |
Audience
| Policymakers | 1 |
Location
| Australia | 2 |
| Connecticut | 2 |
| New Hampshire | 2 |
| New York | 2 |
| Rhode Island | 2 |
| United Kingdom (England) | 2 |
| Vermont | 2 |
| Singapore | 1 |
| United States | 1 |
Laws, Policies, & Programs
| Every Student Succeeds Act… | 2 |
| Elementary and Secondary… | 1 |
| No Child Left Behind Act of 2001 | 1 |
Assessments and Surveys
| National Assessment of… | 10 |
| New York State Regents… | 2 |
| Dynamic Indicators of Basic… | 1 |
Wesley Morris; Langdon Holmes; Joon Suh Choi; Scott Crossley – International Journal of Artificial Intelligence in Education, 2025
Recent developments in the field of artificial intelligence allow for improved performance in the automated assessment of extended response items in mathematics, potentially enabling these items to be scored cheaply and at scale. This study details the grand prize-winning approach to developing large language models (LLMs) to automatically…
Descriptors: Automation, Computer Assisted Testing, Mathematics Tests, Scoring
Congning Ni; Bhashithe Abeysinghe; Juanita Hicks – International Electronic Journal of Elementary Education, 2025
The National Assessment of Educational Progress (NAEP), often referred to as The Nation's Report Card, offers a window into the state of the U.S. K-12 education system. Since 2017, NAEP has transitioned to digital assessments, opening new research opportunities that were previously impossible. Process data tracks students' interactions with the…
Descriptors: Reaction Time, Multiple Choice Tests, Behavior Change, National Competency Tests
National Academies Press, 2022
The National Assessment of Educational Progress (NAEP) -- often called "The Nation's Report Card" -- is the largest nationally representative and continuing assessment of what students in public and private schools in the United States know and can do in various subjects and has provided policy makers and the public with invaluable…
Descriptors: Costs, Futures (of Society), National Competency Tests, Educational Trends
Darling-Hammond, Linda – Learning Policy Institute, 2017
After passage of the Every Student Succeeds Act (ESSA) in 2015, states assumed greater responsibility for designing their own accountability and assessment systems. ESSA requires states to measure "higher order thinking skills and understanding" and encourages the use of open-ended performance assessments, which are essential for…
Descriptors: Performance Based Assessment, Accountability, Portfolios (Background Materials), Task Analysis
Darling-Hammond, Linda – Council of Chief State School Officers, 2017
The Every Student Succeeds Act (ESSA) opened up new possibilities for how student and school success are defined and supported in American public education. States have greater responsibility for designing and building their assessment and accountability systems. These new opportunities to develop performance assessments are critically important…
Descriptors: Performance Based Assessment, Accountability, Portfolios (Background Materials), Task Analysis
Polikoff, Morgan S. – Center for American Progress, 2014
The Common Core State Standards (CCSS) were created in response to the shortcomings of No Child Left Behind era standards and assessments. Among those failings were the poor quality of content standards and assessments and the variability in content expectations and proficiency targets across states, as well as concerns related to the economic…
Descriptors: Common Core State Standards, Educational Legislation, Federal Legislation, Elementary Secondary Education
McCurry, Doug – Assessing Writing, 2010
This article considers the claim that machine scoring of writing test responses agrees with human readers as much as humans agree with other humans. These claims about the reliability of machine scoring of writing are usually based on specific and constrained writing tasks, and there is reason to ask whether machine scoring of writing requires…
Descriptors: Writing Tests, Scoring, Interrater Reliability, Computer Assisted Testing
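The machine–human agreement claims discussed in this entry are conventionally quantified with a chance-corrected agreement statistic. As a minimal illustration (not taken from the article itself), the sketch below implements quadratic weighted kappa, a standard interrater-reliability metric in automated essay scoring: it compares two raters' scores on an ordinal scale, weighting disagreements by their squared distance and correcting for chance agreement.

```python
from collections import Counter

def quadratic_weighted_kappa(rater_a, rater_b, min_rating, max_rating):
    """Chance-corrected agreement between two raters on an ordinal scale.

    Returns 1.0 for perfect agreement and 0.0 for chance-level agreement;
    disagreements are penalized by squared distance between the two scores.
    """
    assert len(rater_a) == len(rater_b), "both raters must score the same items"
    n_ratings = max_rating - min_rating + 1
    num_items = len(rater_a)

    # Observed joint distribution of (rater A score, rater B score)
    observed = [[0.0] * n_ratings for _ in range(n_ratings)]
    for a, b in zip(rater_a, rater_b):
        observed[a - min_rating][b - min_rating] += 1

    # Marginal score histograms, used to build the chance-expected distribution
    hist_a = Counter(rater_a)
    hist_b = Counter(rater_b)

    numerator = 0.0    # weighted observed disagreement
    denominator = 0.0  # weighted disagreement expected by chance
    for i in range(n_ratings):
        for j in range(n_ratings):
            weight = ((i - j) ** 2) / ((n_ratings - 1) ** 2)
            expected = hist_a[i + min_rating] * hist_b[j + min_rating] / num_items
            numerator += weight * observed[i][j]
            denominator += weight * expected
    return 1.0 - numerator / denominator
```

For example, two raters who score every essay identically get a kappa of 1.0, while two raters whose scores are statistically independent get a kappa near 0.0, which is why this metric is preferred over raw percent agreement when comparing machine scores to human panels.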
National Assessment Governing Board, 2010
The purpose of the 2011 NAEP (National Assessment of Educational Progress) Writing Framework is to describe how the new NAEP Writing Assessment is designed to measure students' writing at grades 4, 8, and 12. As the ongoing national indicator of the academic achievement of students in the United States, NAEP regularly collects information on…
Descriptors: Writing Achievement, Writing Skills, Writing Evaluation, National Competency Tests
Page, Ellis Batten – Journal of Experimental Education, 1994 (peer reviewed)
National Assessment of Educational Progress writing sample essays from 1988 and 1990 (495 and 599 essays) were subjected to computerized grading and human ratings. Cross-validation suggests that computer scoring is superior to a two-judge panel, a finding encouraging for large programs of essay evaluation. (SLD)
Descriptors: Computer Assisted Testing, Computer Software, Essays, Evaluation Methods
Martinez, Michael E.; And Others – 1990
Large-scale testing is dominated by the multiple-choice question format. Widespread use of the format is due, in part, to the ease with which multiple-choice items can be scored automatically. This paper examines automatic scoring procedures for an alternative item type: figural response. Figural response items call for the completion or…
Descriptors: Automation, Computer Assisted Testing, Educational Technology, Multiple Choice Tests