Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 3 |
Since 2016 (last 10 years) | 12 |
Since 2006 (last 20 years) | 22 |
Descriptor
Test Format | 22 |
Test Items | 22 |
Grade 4 | 17 |
Mathematics Tests | 12 |
Grade 8 | 11 |
Difficulty Level | 10 |
National Competency Tests | 9 |
Achievement Tests | 8 |
Mathematics Achievement | 8 |
Item Response Theory | 7 |
Test Construction | 7 |
Author
Kalogrides, Demetra | 2 |
Podolsky, Anne | 2 |
Sayin, Ayfer | 1 |
Bulut, Hatice Cigdem | 1 |
Bulut, Okan | 1 |
Chang, Wanchen | 1 |
Cho, Hyun-Jeong | 1 |
Cormier, Damien C. | 1 |
DeStefano, Lizanne | 1 |
Dodd, Barbara G. | 1 |
Fahle, Erin | 1 |
Education Level
Grade 4 | 22 |
Elementary Education | 18 |
Grade 8 | 15 |
Intermediate Grades | 15 |
Middle Schools | 13 |
Junior High Schools | 11 |
Secondary Education | 9 |
Elementary Secondary Education | 8 |
Grade 12 | 5 |
Grade 3 | 5 |
Grade 7 | 5 |
Audience
Teachers | 2 |
Parents | 1 |
Policymakers | 1 |
Location
Turkey | 3 |
Louisiana | 1 |
Oregon | 1 |
Pennsylvania | 1 |
United States | 1 |
Assessments and Surveys
National Assessment of Educational Progress | 9 |
Trends in International Mathematics and Science Study | 2 |
Gates MacGinitie Reading Tests | 1 |
Program for International Student Assessment | 1 |
Stanford Achievement Tests | 1 |
Wechsler Individual Achievement Test | 1 |
Sayin, Ayfer; Bozdag, Sabiha; Gierl, Mark J. – International Journal of Assessment Tools in Education, 2023
The purpose of this study is to generate non-verbal items for a visual reasoning test using template-based automatic item generation (AIG). The research method followed the three stages of template-based AIG. An item from the 2016 4th-grade entrance exam of the Science and Art Center (known as BILSEM) was chosen as the…
Descriptors: Test Items, Test Format, Nonverbal Tests, Visual Measures
Bulut, Okan; Bulut, Hatice Cigdem; Cormier, Damien C.; Ilgun Dibek, Munevver; Sahin Kursad, Merve – Educational Assessment, 2023
Some statewide testing programs allow students to receive corrective feedback and revise their answers during testing. Despite its pedagogical benefits, the effects of providing revision opportunities remain unknown in the context of alternate assessments. Therefore, this study examined student data from a large-scale alternate assessment that…
Descriptors: Error Correction, Alternative Assessment, Feedback (Response), Multiple Choice Tests
Yilmaz, Haci Bayram – International Electronic Journal of Elementary Education, 2019
Open-ended and multiple-choice questions are commonly placed on the same tests; however, the effects of mixing item types on test and item statistics remain a matter of discussion. This study aims to compare model and item fit statistics in a mixed-format test where multiple-choice and constructed-response items are used together. In this…
Descriptors: Item Response Theory, Models, Goodness of Fit, Elementary School Science
Ilhan, Mustafa; Öztürk, Nagihan Boztunç; Sahin, Melek Gülsah – Participatory Educational Research, 2020
In this research, the effect of an item's type and cognitive level on its difficulty index was investigated. The data source of the study consisted of the responses of the 12,535 students in the Turkey sample of TIMSS 2015 (6,079 eighth-grade and 6,456 fourth-grade students). The responses covered a total of 215 items at the eighth-grade…
Descriptors: Test Items, Difficulty Level, Cognitive Processes, Responses
Wang, Ping – ProQuest LLC, 2021
According to the RAND model framework, reading comprehension test performance is influenced by readers' reading skills or reader characteristics, test properties, and their interactions. However, little empirical research has systematically compared the impacts of reader characteristics, test properties, and reader-test interactions across…
Descriptors: Reading Comprehension, Reading Tests, Reading Research, Test Items
National Assessment Governing Board, 2019
Since 1973, the National Assessment of Educational Progress (NAEP) has gathered information about student achievement in mathematics. The NAEP assessment in mathematics has two components that differ in purpose. One assessment measures long-term trends in achievement among 9-, 13-, and 17-year-old students by using the same basic design each time.…
Descriptors: National Competency Tests, Mathematics Achievement, Grade 4, Grade 8
Reardon, Sean F.; Kalogrides, Demetra; Fahle, Erin M.; Podolsky, Anne; Zárate, Rosalía C. – Educational Researcher, 2018
Prior research suggests that males outperform females, on average, on multiple-choice items compared to their relative performance on constructed-response items. This paper characterizes the extent to which gender achievement gaps on state accountability tests across the United States are associated with those tests' item formats. Using roughly 8…
Descriptors: Test Items, Test Format, Gender Differences, Achievement Gap
Martin, Michael O., Ed.; von Davier, Matthias, Ed.; Mullis, Ina V. S., Ed. – International Association for the Evaluation of Educational Achievement, 2020
The chapters in this online volume comprise the TIMSS & PIRLS International Study Center's technical report of the methods and procedures used to develop, implement, and report the results of TIMSS 2019. There were various technical challenges because TIMSS 2019 was the initial phase of the transition to eTIMSS, with approximately half the…
Descriptors: Foreign Countries, Elementary Secondary Education, Achievement Tests, International Assessment
Reardon, Sean; Fahle, Erin; Kalogrides, Demetra; Podolsky, Anne; Zarate, Rosalia – Society for Research on Educational Effectiveness, 2016
Prior research demonstrates the existence of gender achievement gaps and the variation in the magnitude of these gaps across states. This paper characterizes the extent to which the variation of gender achievement gaps on standardized tests across the United States can be explained by differing state accountability test formats. A comprehensive…
Descriptors: Test Format, Gender Differences, Achievement Gap, Standardized Tests
Schoen, Robert C.; Yang, Xiaotong; Liu, Sicong; Paek, Insu – Grantee Submission, 2017
The Early Fractions Test v2.2 is a paper-pencil test designed to measure mathematics achievement of third- and fourth-grade students in the domain of fractions. The purpose, or intended use, of the Early Fractions Test v2.2 is to serve as a measure of student outcomes in a randomized trial designed to estimate the effect of an educational…
Descriptors: Psychometrics, Mathematics Tests, Mathematics Achievement, Fractions
Kevelson, Marisol J. C. – ETS Research Report Series, 2019
This study presents estimates of Black-White, Hispanic-White, and income achievement gaps using data from two different types of reading and mathematics assessments: constructed-response assessments that were likely more cognitively demanding and state achievement tests that were likely less cognitively demanding (i.e., composed solely or largely…
Descriptors: Racial Differences, Achievement Gap, White Students, African American Students
National Assessment Governing Board, 2017
The National Assessment of Educational Progress (NAEP) is the only continuing and nationally representative measure of trends in academic achievement of U.S. elementary and secondary school students in various subjects. For more than four decades, NAEP assessments have been conducted periodically in reading, mathematics, science, writing, U.S.…
Descriptors: Mathematics Achievement, Multiple Choice Tests, National Competency Tests, Educational Trends
National Assessment Governing Board, 2014
Since 1973, the National Assessment of Educational Progress (NAEP) has gathered information about student achievement in mathematics. Results of these periodic assessments, produced in print and web-based formats, provide valuable information to a wide variety of audiences. They inform citizens about the nature of students' comprehension of the…
Descriptors: National Competency Tests, Mathematics Achievement, Mathematics Skills, Grade 4
Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G. – Applied Psychological Measurement, 2012
When tests consist of multiple-choice and constructed-response items, researchers are confronted with the question of which item response theory (IRT) model combination will appropriately represent the data collected from these mixed-format tests. This simulation study examined the performance of six model selection criteria, including the…
Descriptors: Item Response Theory, Models, Selection, Criteria
DeStefano, Lizanne; Johnson, Jeremiah – American Institutes for Research, 2013
This paper describes one of the first efforts by the National Assessment of Educational Progress (NAEP) to improve measurement at the lower end of the distribution, including measurement for students with disabilities (SD) and English language learners (ELLs). One way to improve measurement at the lower end is to introduce one or more…
Descriptors: National Competency Tests, Measures (Individuals), Disabilities, English Language Learners