Showing 1 to 15 of 23 results
Peer reviewed
PDF on ERIC
Ismail, Yilmaz – Educational Research and Reviews, 2020
This study draws on the understanding that when the correlation between variables is not known but a non-linear relationship between them is expected, non-linear measurement tools can be used. In education, possibility measurement tools can serve this purpose. Multiple-choice possibility measurement…
Descriptors: Multiple Choice Tests, Measurement Techniques, Student Evaluation, Test Items
Peer reviewed
Direct link
Abu-Ghazalah, Rashid M.; Dubins, David N.; Poon, Gregory M. K. – Applied Measurement in Education, 2023
Multiple choice results are inherently probabilistic outcomes, as correct responses reflect a combination of knowledge and guessing, while incorrect responses additionally reflect blunder, a confidently committed mistake. To objectively resolve knowledge from responses in an MC test structure, we evaluated probabilistic models that explicitly…
Descriptors: Guessing (Tests), Multiple Choice Tests, Probability, Models
Peer reviewed
Direct link
Smith, Trevor I.; Bendjilali, Nasrine – Physical Review Physics Education Research, 2022
Several recent studies have employed item response theory (IRT) to rank incorrect responses to commonly used research-based multiple-choice assessments. These studies use Bock's nominal response model (NRM) for applying IRT to categorical (nondichotomous) data, but the response rankings only utilize half of the parameters estimated by the model.…
Descriptors: Item Response Theory, Test Items, Multiple Choice Tests, Science Tests
Peer reviewed
PDF on ERIC
Foley, Brett P. – Practical Assessment, Research & Evaluation, 2016
There is always a chance that examinees will answer multiple choice (MC) items correctly by guessing. Design choices in some modern exams have created situations where guessing at random through the full exam--rather than only for a subset of items where the examinee does not know the answer--can be an effective strategy to pass the exam. This…
Descriptors: Guessing (Tests), Multiple Choice Tests, Case Studies, Test Construction
Peer reviewed
PDF on ERIC
Rakes, Christopher R.; Ronau, Robert N. – International Journal of Research in Education and Science, 2019
The present study examined the ability of content domain (algebra, geometry, rational number, probability) to classify mathematics misconceptions. The study was conducted with 1,133 students in 53 algebra and geometry classes taught by 17 teachers from three high schools and one middle school across three school districts in a Midwestern state.…
Descriptors: Mathematics Instruction, Secondary School Teachers, Middle School Teachers, Misconceptions
Peer reviewed
Direct link
Ting, Mu Yu – EURASIA Journal of Mathematics, Science & Technology Education, 2017
Using the capabilities of expert knowledge structures, the researcher prepared test questions on the university calculus topic of "finding the area by integration." The quiz is divided into two types of multiple choice items (one out of four and one out of many). After the calculus course was taught and tested, the results revealed that…
Descriptors: Calculus, Mathematics Instruction, College Mathematics, Multiple Choice Tests
Peer reviewed
Direct link
Maeda, Hotaka; Zhang, Bo – International Journal of Testing, 2017
The omega (ω) statistic is reputed to be one of the best indices for detecting answer copying on multiple choice tests, but its performance relies on the accurate estimation of copier ability, which is challenging because responses from the copiers may have been contaminated. We propose an algorithm that aims to identify and delete the suspected…
Descriptors: Cheating, Test Items, Mathematics, Statistics
Peer reviewed
PDF on ERIC
Papaieronymou, Irini – Athens Journal of Education, 2017
This paper presents the results of a study which examined the role of particular tasks implemented through two instructional methods on college students' achievement in probability. A mixed methods design that utilized a pre-test (with multiple-choice items) and post-test (with multiple-choice and open-ended items) in treatment and control groups…
Descriptors: Teaching Methods, College Students, Academic Achievement, Probability
National Assessment Governing Board, 2017
The National Assessment of Educational Progress (NAEP) is the only continuing and nationally representative measure of trends in academic achievement of U.S. elementary and secondary school students in various subjects. For more than four decades, NAEP assessments have been conducted periodically in reading, mathematics, science, writing, U.S.…
Descriptors: Mathematics Achievement, Multiple Choice Tests, National Competency Tests, Educational Trends
Rahman, Nazia – ProQuest LLC, 2013
Samejima hypothesized that non-monotonically increasing item response functions (IRFs) of ability might occur for multiple-choice items (referred to here as "Samejima items") if low ability test takers with some, though incomplete, knowledge or skill are drawn to a particularly attractive distractor, while very low ability test takers…
Descriptors: Multiple Choice Tests, Test Items, Item Response Theory, Probability
Peer reviewed
Direct link
Sadaghiani, Homeyra R.; Pollock, Steven J. – Physical Review Special Topics - Physics Education Research, 2015
As part of an ongoing investigation of students' learning in first semester upper-division quantum mechanics, we needed a high-quality conceptual assessment instrument for comparing outcomes of different curricular approaches. The process of developing such a tool started with converting a preliminary version of a 14-item open-ended quantum…
Descriptors: Science Instruction, Quantum Mechanics, Mechanics (Physics), Multiple Choice Tests
Peer reviewed
Direct link
Lee, Jihyun; Corter, James E. – Applied Psychological Measurement, 2011
Diagnosis of misconceptions or "bugs" in procedural skills is difficult because of their unstable nature. This study addresses this problem by proposing and evaluating a probability-based approach to the diagnosis of bugs in children's multicolumn subtraction performance using Bayesian networks. This approach assumes a causal network relating…
Descriptors: Misconceptions, Probability, Children, Subtraction
Peer reviewed
Direct link
Wang, Wen-Chung; Huang, Sheng-Yun – Educational and Psychological Measurement, 2011
The one-parameter logistic model with ability-based guessing (1PL-AG) has been recently developed to account for effect of ability on guessing behavior in multiple-choice items. In this study, the authors developed algorithms for computerized classification testing under the 1PL-AG and conducted a series of simulations to evaluate their…
Descriptors: Computer Assisted Testing, Classification, Item Analysis, Probability
Peer reviewed
Direct link
Veldkamp, Bernard P. – International Journal of Testing, 2008
Integrity[TM], an online application for testing both the statistical integrity of the test and the academic integrity of the examinees, was evaluated for this review. Program features and the program output are described. An overview of the statistics in Integrity[TM] is provided, and the application is illustrated with a small simulation study.…
Descriptors: Simulation, Integrity, Statistics, Computer Assisted Testing
Peer reviewed
Wilcox, Rand R. – Educational and Psychological Measurement, 1981
A formal framework is presented for determining which of the distractors of multiple-choice test items has a small probability of being chosen by a typical examinee. The framework is based on a procedure similar to an indifference zone formulation of a ranking and selection problem. (Author/BW)
Descriptors: Mathematical Models, Multiple Choice Tests, Probability, Test Items
Pages: 1 | 2