Abu-Ghazalah, Rashid M.; Dubins, David N.; Poon, Gregory M. K. – Applied Measurement in Education, 2023
Multiple choice results are inherently probabilistic outcomes, as correct responses reflect a combination of knowledge and guessing, while incorrect responses additionally reflect blunder, a confidently committed mistake. To objectively resolve knowledge from responses in an MC test structure, we evaluated probabilistic models that explicitly…
Descriptors: Guessing (Tests), Multiple Choice Tests, Probability, Models
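The abstract above treats a correct multiple-choice response as a mixture of knowledge and guessing. A minimal sketch of that idea is the classical correction-for-guessing estimator (not the authors' full probabilistic model, which also models blunders): assuming an examinee knows a fraction of the material and guesses uniformly among the k options otherwise, the observed proportion correct can be inverted to estimate knowledge.

```python
def knowledge_estimate(p_correct: float, k: int) -> float:
    """Correction-for-guessing sketch: if a fraction `know` of items is
    known and the rest are answered by a uniform random guess among k
    options, then p_correct = know + (1 - know) / k.
    Solving for `know` gives the estimator below, clamped to [0, 1]."""
    know = (p_correct - 1.0 / k) / (1.0 - 1.0 / k)
    return max(0.0, min(1.0, know))
```

For example, 70% correct on four-option items implies roughly 60% of the material known, since a quarter of the unknown items are answered correctly by chance.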
Laliyo, Lukman Abdul Rauf; Hamdi, Syukrul; Pikoli, Masrid; Abdullah, Romario; Panigoro, Citra – European Journal of Educational Research, 2021
One of the issues that hinder the students' learning progress is the inability to construct an epistemological explanation of a scientific phenomenon. Four-tier multiple-choice (hereinafter, 4TMC) instrument and Partial-Credit Model were employed to elaborate on the diagnosis process of the aforementioned problem. This study was to develop and…
Descriptors: Learning Processes, Multiple Choice Tests, Models, Test Items
Langbeheim, Elon; Ben-Eliyahu, Einat; Adadan, Emine; Akaygun, Sevil; Ramnarain, Umesh Dewnarain – Chemistry Education Research and Practice, 2022
Learning progressions (LPs) are novel models for the development of assessments in science education that often use a scale to categorize students' levels of reasoning. Pictorial representations are important in chemistry teaching and learning, and also in LPs, but the differences between pictorial and verbal items in chemistry LPs are unclear. In…
Descriptors: Science Instruction, Learning Trajectories, Chemistry, Thinking Skills
Chu, Wei; Pavlik, Philip I., Jr. – International Educational Data Mining Society, 2023
In adaptive learning systems, various models are employed to obtain the optimal learning schedule and review for a specific learner. Models of learning are used to estimate the learner's current recall probability by incorporating features or predictors proposed by psychological theory or empirically relevant to learners' performance. Logistic…
Descriptors: Reaction Time, Accuracy, Models, Predictor Variables
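The entry above describes logistic models that estimate a learner's recall probability from psychologically motivated features. A minimal, generic sketch follows; the feature names and weights are illustrative assumptions, not the authors' specification.

```python
import math

def recall_probability(weights, features, intercept=0.0):
    """Generic logistic model of recall: features might encode practice
    count, recency, or item difficulty (illustrative only). Returns the
    predicted probability that the learner recalls the item."""
    logit = intercept + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-logit))
```

With a zero logit the model predicts 0.5; positive feature weights push the prediction toward certain recall.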
Cook, Robert J.; Durning, Steven J. – AERA Online Paper Repository, 2016
In an effort to better align item development to goals of assessing higher-order tasks and decision making, complex decision trees were developed to follow clinical reasoning scripts and used as models on which multiple-choice questions could be built. This approach is compatible with best-practice assessment frameworks like Evidence Centered…
Descriptors: Multiple Choice Tests, Decision Making, Models, Task Analysis
Gierl, Mark J.; Lai, Hollis; Pugh, Debra; Touchie, Claire; Boulais, André-Philippe; De Champlain, André – Applied Measurement in Education, 2016
Item development is a time- and resource-intensive process. Automatic item generation integrates cognitive modeling with computer technology to systematically generate test items. To date, however, items generated using cognitive modeling procedures have received limited use in operational testing situations. As a result, the psychometric…
Descriptors: Psychometrics, Multiple Choice Tests, Test Items, Item Analysis
Karim, Aidah Abdul; Shah, Parilah M.; Din, Rosseni; Ahmad, Mazalah; Lubis, Maimun Aqhsa – International Education Studies, 2014
This study explored the psychometric properties of a locally developed information skills test for youth students in Malaysia using Rasch analysis. The test was a combination of 24 structured and multiple choice items with a 4-point grading scale. The test was administered to 72 technical college students and 139 secondary school students. The…
Descriptors: Foreign Countries, Information Skills, Item Response Theory, Psychometrics
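The Rasch analysis mentioned above rests on the one-parameter logistic model, in which the probability of a correct response depends only on the difference between person ability and item difficulty. A minimal sketch:

```python
import math

def rasch_p(theta: float, b: float) -> float:
    """Rasch (1PL) model: probability that a person with ability theta
    answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))
```

When ability equals difficulty the probability is exactly 0.5, which is what anchors the joint person-item scale in a Rasch calibration.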
Kim, Jee-Seon – Journal of Educational Measurement, 2006
Simulation and real data studies are used to investigate the value of modeling multiple-choice distractors on item response theory linking. Using the characteristic curve linking procedure for Bock's (1972) nominal response model presented by Kim and Hanson (2002), all-category linking (i.e., a linking based on all category characteristic curves…
Descriptors: Multiple Choice Tests, Test Items, Item Response Theory, Simulation
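The linking study above builds on Bock's (1972) nominal response model, which assigns each response category (including distractors) its own slope and intercept and normalizes over categories. A minimal sketch of the category probabilities:

```python
import math

def nominal_response_probs(theta, slopes, intercepts):
    """Bock's (1972) nominal response model: the probability of category
    k is exp(a_k * theta + c_k), normalized over all categories."""
    logits = [a * theta + c for a, c in zip(slopes, intercepts)]
    m = max(logits)  # subtract the max before exponentiating, for stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

"All-category linking" in the abstract refers to aligning these category characteristic curves, not just the curves for the correct options.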
Lei, Pui-Wa; Dunbar, Stephen B.; Kolen, Michael J. – Educational and Psychological Measurement, 2004
This study compares the parametric multiple-choice model and the nonparametric kernel smoothing approach to estimating option characteristic functions (OCCs) using an empirical criterion, the stability of curve estimates over occasions that represents random error. The potential utility of graphical OCCs in item analysis was illustrated with…
Descriptors: Nonparametric Statistics, Multiple Choice Tests, Item Analysis, Item Response Theory
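The nonparametric approach above estimates option characteristic curves by kernel smoothing. One standard way to realize this (a sketch, not necessarily the exact estimator used in the study) is a Nadaraya-Watson smooth of the option-choice indicator against ability:

```python
import math

def kernel_occ(theta_grid, abilities, option_chosen, bandwidth=0.5):
    """Nadaraya-Watson kernel estimate of an option characteristic curve:
    the smoothed proportion of examinees choosing a given option, as a
    function of ability. `option_chosen` holds 0/1 indicators and
    `abilities` the examinees' ability estimates."""
    def gauss(u):
        return math.exp(-0.5 * u * u)
    curve = []
    for t in theta_grid:
        w = [gauss((a - t) / bandwidth) for a in abilities]
        curve.append(sum(wi * y for wi, y in zip(w, option_chosen)) / sum(w))
    return curve
```

Unlike the parametric multiple-choice model, this estimator imposes no functional form, so curve shape is driven entirely by the data and the bandwidth.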

Trevisan, Michael S.; And Others – Educational and Psychological Measurement, 1994
The reliabilities of 2-, 3-, 4-, and 5-choice tests were compared through an incremental-option model on a test taken by 154 high school seniors. Creating the test forms incrementally more closely approximates actual test construction. The nonsignificant differences among the option choices support the three-option item. (SLD)
Descriptors: Distractors (Tests), Estimation (Mathematics), High School Students, High Schools
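Reliability comparisons like the one above are typically based on an internal-consistency coefficient. For dichotomously scored items this is KR-20 (equivalent to Cronbach's alpha); a minimal sketch, assuming a simple list-of-lists score matrix:

```python
def kr20(item_scores):
    """KR-20 internal-consistency reliability for dichotomous items.
    item_scores: one list of 0/1 item scores per examinee."""
    n_items = len(item_scores[0])
    n_people = len(item_scores)
    # proportion correct per item
    p = [sum(person[i] for person in item_scores) / n_people
         for i in range(n_items)]
    # population variance of total scores
    totals = [sum(person) for person in item_scores]
    mean_t = sum(totals) / n_people
    var_t = sum((t - mean_t) ** 2 for t in totals) / n_people
    return (n_items / (n_items - 1)) * (
        1.0 - sum(pi * (1.0 - pi) for pi in p) / var_t)
```

Comparing this coefficient across the 2-, 3-, 4-, and 5-option forms is what supports the study's three-option conclusion.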

Lord, Frederic M. – Psychometrika, 1974
Omitted items cannot properly be treated as wrong when estimating ability and item parameters. A convenient method for utilizing the information provided by omissions is presented. Theoretical and empirical justifications are presented for the estimates obtained by the new method. (Author)
Descriptors: Academic Ability, Guessing (Tests), Item Analysis, Latent Trait Theory
Bock, R. Darrell; And Others – 1993
In preparation for a study of essay questions and other forms of open-ended exercises in the California Golden State Examination for biology, the functioning of open-ended biology items in another examination was explored. The Golden State Examination program offers honors credit to students who wish to qualify for admission to programs in…
Descriptors: Academic Achievement, Biology, Educational Assessment, Equivalency Tests
Bejar, Isaac I.; And Others – 1977
The applicability of item characteristic curve (ICC) theory to a multiple choice test item pool used to measure achievement is described. The rationale for attempting to use ICC theory in an achievement framework is summarized, and the adequacy for adaptive testing of a classroom achievement test item pool in a college biology class is studied.…
Descriptors: Academic Achievement, Achievement Tests, Adaptive Testing, Biology