Showing 1 to 15 of 22 results
Peer reviewed
Guo, Wenjing; Wind, Stefanie A. – Journal of Educational Measurement, 2021
The use of mixed-format tests made up of multiple-choice (MC) items and constructed response (CR) items is popular in large-scale testing programs, including the National Assessment of Educational Progress (NAEP) and many district- and state-level assessments in the United States. Rater effects, or raters' scoring tendencies that result in…
Descriptors: Test Format, Multiple Choice Tests, Scoring, Test Items
Peer reviewed
Wind, Stefanie A.; Guo, Wenjing – Educational Assessment, 2021
Scoring procedures for the constructed-response (CR) items in large-scale mixed-format educational assessments often involve checks for rater agreement or rater reliability. Although these analyses are important, researchers have documented rater effects that persist despite rater training and that are not always detected in rater agreement and…
Descriptors: Scoring, Responses, Test Items, Test Format
Peer reviewed
Wang, Yan; Murphy, Kevin B. – National Center for Education Statistics, 2020
In 2018, the National Center for Education Statistics (NCES) administered two assessments--the National Assessment of Educational Progress (NAEP) Technology and Engineering Literacy (TEL) assessment and the International Computer and Information Literacy Study (ICILS)--to two separate nationally representative samples of 8th-grade students in the…
Descriptors: National Competency Tests, International Assessment, Computer Literacy, Information Literacy
Peer reviewed
Woodcock, Stuart; Howard, Steven J.; Ehrich, John – School Psychology, 2020
Standardized testing is ubiquitous in educational assessment, but questions have been raised about the extent to which these test scores accurately reflect students' genuine knowledge and skills. To more rigorously investigate this issue, the current study employed a within-subject experimental design to examine item format effects on primary…
Descriptors: Elementary School Students, Grade 3, Test Items, Test Format
National Academies Press, 2022
The National Assessment of Educational Progress (NAEP) -- often called "The Nation's Report Card" -- is the largest nationally representative and continuing assessment of what students in public and private schools in the United States know and can do in various subjects and has provided policy makers and the public with invaluable…
Descriptors: Costs, Futures (of Society), National Competency Tests, Educational Trends
National Assessment Governing Board, 2019
Since 1973, the National Assessment of Educational Progress (NAEP) has gathered information about student achievement in mathematics. The NAEP assessment in mathematics has two components that differ in purpose. One assessment measures long-term trends in achievement among 9-, 13-, and 17-year-old students by using the same basic design each time.…
Descriptors: National Competency Tests, Mathematics Achievement, Grade 4, Grade 8
Peer reviewed
Bijsterbosch, Erik – Geographical Education, 2018
Geography teachers' school-based (internal) examinations in pre-vocational geography education in the Netherlands appear to be in line with the findings in the literature, namely that teachers' assessment practices tend to focus on the recall of knowledge. These practices are strongly influenced by national (external) examinations. This paper…
Descriptors: Foreign Countries, Instructional Effectiveness, National Competency Tests, Geography Instruction
Peer reviewed
Tengberg, Michael – Language Testing, 2017
Reading comprehension tests are often assumed to measure the same, or at least similar, constructs. Yet, reading is not a single but a multidimensional form of processing, which means that variations in terms of reading material and item design may emphasize one aspect of the construct at the cost of another. The educational systems in Denmark,…
Descriptors: Foreign Countries, National Competency Tests, Reading Tests, Comparative Analysis
Reardon, Sean F.; Kalogrides, Demetra; Fahle, Erin M.; Podolsky, Anne; Zárate, Rosalía C. – Educational Researcher, 2018
Prior research suggests that males outperform females, on average, on multiple-choice items compared to their relative performance on constructed-response items. This paper characterizes the extent to which gender achievement gaps on state accountability tests across the United States are associated with those tests' item formats. Using roughly 8…
Descriptors: Test Items, Test Format, Gender Differences, Achievement Gap
Peer reviewed
Reardon, Sean; Fahle, Erin; Kalogrides, Demetra; Podolsky, Anne; Zarate, Rosalia – Society for Research on Educational Effectiveness, 2016
Prior research demonstrates the existence of gender achievement gaps and the variation in the magnitude of these gaps across states. This paper characterizes the extent to which the variation of gender achievement gaps on standardized tests across the United States can be explained by differing state accountability test formats. A comprehensive…
Descriptors: Test Format, Gender Differences, Achievement Gap, Standardized Tests
National Assessment Governing Board, 2017
The National Assessment of Educational Progress (NAEP) is the only continuing and nationally representative measure of trends in academic achievement of U.S. elementary and secondary school students in various subjects. For more than four decades, NAEP assessments have been conducted periodically in reading, mathematics, science, writing, U.S.…
Descriptors: Mathematics Achievement, Multiple Choice Tests, National Competency Tests, Educational Trends
National Assessment Governing Board, 2014
Since 1973, the National Assessment of Educational Progress (NAEP) has gathered information about student achievement in mathematics. Results of these periodic assessments, produced in print and web-based formats, provide valuable information to a wide variety of audiences. They inform citizens about the nature of students' comprehension of the…
Descriptors: National Competency Tests, Mathematics Achievement, Mathematics Skills, Grade 4
Peer reviewed
Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G. – Applied Psychological Measurement, 2012
When tests consist of multiple-choice and constructed-response items, researchers are confronted with the question of which item response theory (IRT) model combination will appropriately represent the data collected from these mixed-format tests. This simulation study examined the performance of six model selection criteria, including the…
Descriptors: Item Response Theory, Models, Selection, Criteria
DeStefano, Lizanne; Johnson, Jeremiah – American Institutes for Research, 2013
This paper describes one of the first efforts by the National Assessment of Educational Progress (NAEP) to improve measurement at the lower end of the distribution, including measurement for students with disabilities (SD) and English language learners (ELLs). One way to improve measurement at the lower end is to introduce one or more…
Descriptors: National Competency Tests, Measures (Individuals), Disabilities, English Language Learners
Hart, Ray; Casserly, Michael; Uzzell, Renata; Palacios, Moses; Corcoran, Amanda; Spurgeon, Liz – Council of the Great City Schools, 2015
Little data has been collected on how much testing actually goes on in America's schools and how the results are used. So in the spring of 2014, the Council staff developed and launched a survey of assessment practices. This report presents the findings from that survey and subsequent Council analysis and review of the data. It also offers…
Descriptors: Urban Schools, Student Evaluation, Testing Programs, Testing