Aiman Mohammad Freihat; Omar Saleh Bani Yassin – Educational Process: International Journal, 2025
Background/purpose: This study aimed to assess the accuracy of estimating multiple-choice test item parameters under item response theory models. Materials/methods: The researchers relied on measurement accuracy indicators, which express the absolute difference between the estimated and actual values of the…
Descriptors: Accuracy, Computation, Multiple Choice Tests, Test Items
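The accuracy indicator described above — the absolute difference between estimated and true parameter values — can be sketched in a few lines. The parameter values below are hypothetical, invented purely for illustration:

```python
# Hypothetical true vs. estimated item difficulty (b) parameters for five items.
true_b = [-1.2, -0.5, 0.0, 0.7, 1.4]
est_b  = [-1.1, -0.6, 0.1, 0.8, 1.3]

# Accuracy indicator: mean absolute difference between estimated and true values.
mae = sum(abs(e - t) for e, t in zip(est_b, true_b)) / len(true_b)
print(round(mae, 3))  # 0.1
```

In practice the "true" values would come from a simulation design and the estimates from an IRT calibration; the comparison itself is just this mean absolute error.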
Fadillah, Sarah Meilani; Ha, Minsu; Nuraeni, Eni; Indriyanti, Nurma Yunita – Malaysian Journal of Learning and Instruction, 2023
Purpose: Researchers discovered that when students were given the opportunity to change their answers, a majority changed their responses from incorrect to correct, and this change often increased overall test results. What prompts students to modify their answers? This study aims to examine answer modification on a scientific reasoning test, with…
Descriptors: Science Tests, Multiple Choice Tests, Test Items, Decision Making
Lions, Séverin; Dartnell, Pablo; Toledo, Gabriela; Godoy, María Inés; Córdova, Nora; Jiménez, Daniela; Lemarié, Julie – Educational and Psychological Measurement, 2023
Even though the impact of the position of response options on answers to multiple-choice items has been investigated for decades, it remains debated. Research on this topic is inconclusive, perhaps because too few studies have obtained experimental data from large-sized samples in a real-world context and have manipulated the position of both…
Descriptors: Multiple Choice Tests, Test Items, Item Analysis, Responses
Tremblay, Kathryn A.; Binder, Katherine S.; Ardoin, Scott P.; Talwar, Amani; Tighe, Elizabeth L. – Journal of Research in Reading, 2021
Background: Of the myriad of reading comprehension (RC) assessments used in schools, multiple-choice (MC) questions continue to be one of the most prevalent formats used by educators and researchers. Outcomes from RC assessments dictate many critical factors encountered during a student's academic career, and it is crucial that we gain a deeper…
Descriptors: Grade 3, Elementary School Students, Reading Comprehension, Decoding (Reading)
Woodcock, Stuart; Howard, Steven J.; Ehrich, John – School Psychology, 2020
Standardized testing is ubiquitous in educational assessment, but questions have been raised about the extent to which these test scores accurately reflect students' genuine knowledge and skills. To more rigorously investigate this issue, the current study employed a within-subject experimental design to examine item format effects on primary…
Descriptors: Elementary School Students, Grade 3, Test Items, Test Format
Kalkan, Ömür Kaya; Kara, Yusuf; Kelecioglu, Hülya – International Journal of Assessment Tools in Education, 2018
Missing data is a common problem in datasets that are obtained by administration of educational and psychological tests. It is widely known that existence of missing observations in data can lead to serious problems such as biased parameter estimates and inflation of standard errors. Most of the missing data imputation methods are focused on…
Descriptors: Item Response Theory, Statistical Analysis, Data, Test Items
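The bias problem the Kalkan et al. abstract describes can be seen even in a toy proportion-correct calculation. The responses below are invented for illustration; the point is that how omits are handled changes the estimate:

```python
# Ten hypothetical item responses from one examinee; None marks omitted items.
responses = [1, 0, 1, None, 1, 1, None, 0, 1, None]

# Available-case estimate: ignore missing responses entirely.
observed = [r for r in responses if r is not None]
p_observed = sum(observed) / len(observed)      # 5/7 ≈ 0.714

# Naive imputation: score every omitted item as incorrect (0).
p_zero_imputed = sum(r or 0 for r in responses) / len(responses)  # 0.5

print(p_observed, p_zero_imputed)
```

The two conventions disagree substantially (about 0.71 vs. 0.50 here), which is the kind of distortion that propagates into biased IRT parameter estimates and inflated standard errors.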
Kathryn A. Tremblay; Katherine S. Binder; Scott P. Ardoin; Amani Talwar; Elizabeth L. Tighe – Grantee Submission, 2021
Background: Of the myriad of reading comprehension (RC) assessments used in schools, multiple-choice (MC) questions continue to be one of the most prevalent formats used by educators and researchers. Outcomes from RC assessments dictate many critical factors encountered during a student's academic career, and it is crucial that we gain a deeper…
Descriptors: Reading Strategies, Eye Movements, Expository Writing, Grade 3
Gierl, Mark J.; Bulut, Okan; Guo, Qi; Zhang, Xinxin – Review of Educational Research, 2017
Multiple-choice testing is considered one of the most effective and enduring forms of educational assessment that remains in practice today. This study presents a comprehensive review of the literature on multiple-choice testing in education focused, specifically, on the development, analysis, and use of the incorrect options, which are also…
Descriptors: Multiple Choice Tests, Difficulty Level, Accuracy, Error Patterns
Young, Nicholas T.; Heckler, Andrew F. – Physical Review Physics Education Research, 2018
In the context of a generic harmonic oscillator, we investigated students' accuracy in determining the period, frequency, and angular frequency from mathematical and graphical representations. In a series of studies including interviews, free response tests, and multiple-choice tests developed in an iterative process, we assessed students in both…
Descriptors: Interviews, Accuracy, Multiple Choice Tests, Algebra
Liaghat, Farahnaz; Biria, Reza – International Journal of Instruction, 2018
This study aimed at exploring the impact of mentor text modelling on Iranian English as a Foreign Language (EFL) learners' accuracy and fluency in writing tasks with different cognitive complexity in comparison with two conventional approaches to teaching writing; namely, process-based and product-based approaches. To this end, 60 Iranian EFL…
Descriptors: Foreign Countries, Comparative Analysis, Teaching Methods, Writing Instruction
Collentine, Karina – Hispania, 2016
Tasks provide engaging ways to involve learners in meaningful, real-world activities with the foreign language (FL). Yet selecting classroom tasks suitable to learners' linguistic readiness is challenging, and task-based research is exploring the relationship between learners' overall abilities (e.g., reading, grammatical) and the complexity and…
Descriptors: Reading Ability, Second Language Learning, Task Analysis, College Students
Wolkowitz, Amanda A.; Skorupski, William P. – Educational and Psychological Measurement, 2013
When missing values are present in item response data, there are a number of ways one might impute a correct or incorrect response to a multiple-choice item. There are significantly fewer methods for imputing the actual response option an examinee may have provided if he or she had not omitted the item either purposely or accidentally. This…
Descriptors: Multiple Choice Tests, Statistical Analysis, Models, Accuracy
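Imputing the *specific option* an examinee would have chosen (rather than just correct/incorrect) is the harder problem Wolkowitz and Skorupski address. One naive baseline, sketched here with invented data, is to impute the modal option chosen by other examinees on that item:

```python
from collections import Counter

# Hypothetical options chosen by other examinees on one multiple-choice item.
choices = ['B', 'C', 'B', 'A', 'B', 'D', 'C', 'B']

# Baseline imputation: assign the most frequently chosen option to an
# examinee who omitted the item.
modal_option = Counter(choices).most_common(1)[0][0]
print(modal_option)  # B
```

More principled approaches would condition on the examinee's ability estimate and the item's option characteristic curves rather than the raw mode.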
Han, Kyung T. – Practical Assessment, Research & Evaluation, 2012
For several decades, the "three-parameter logistic model" (3PLM) has been the dominant choice for practitioners in the field of educational measurement for modeling examinees' response data from multiple-choice (MC) items. Past studies, however, have pointed out that the c-parameter of 3PLM should not be interpreted as a guessing…
Descriptors: Statistical Analysis, Models, Multiple Choice Tests, Guessing (Tests)
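The 3PLM that Han discusses models the probability of a correct response with discrimination (a), difficulty (b), and a lower asymptote (c) — the parameter that, as the abstract notes, is often mislabeled a "guessing" parameter. A minimal sketch of the standard formula:

```python
import math

def p_3pl(theta, a, b, c):
    """Probability of a correct response under the 3PL model:
    P(theta) = c + (1 - c) / (1 + exp(-a * (theta - b)))."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# At theta == b the probability is (1 + c) / 2, not 0.5: the lower
# asymptote shifts the whole curve upward.
print(p_3pl(theta=0.0, a=1.0, b=0.0, c=0.2))  # ~0.6
```

Because c raises the floor of the response probability for examinees at any ability level, interpreting it as the literal probability of a successful random guess (e.g., 1/k for k options) is exactly the misreading past studies have cautioned against.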
DiBartolomeo, Matthew – ProQuest LLC, 2010
Multiple factors have influenced testing agencies to more carefully consider the manner and frequency in which pretest item data are collected and analyzed. One potentially promising development is judges' estimates of item difficulty. Accurate estimates of item difficulty may be used to reduce pretest samples sizes, supplement insufficient…
Descriptors: Test Items, Group Discussion, Athletics, Pretests Posttests