Showing 1 to 15 of 35 results
Rahman, Nazia – ProQuest LLC, 2013
Samejima hypothesized that non-monotonically increasing item response functions (IRFs) of ability might occur for multiple-choice items (referred to here as "Samejima items") if low ability test takers with some, though incomplete, knowledge or skill are drawn to a particularly attractive distractor, while very low ability test takers…
Descriptors: Multiple Choice Tests, Test Items, Item Response Theory, Probability
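The non-monotone item response functions Samejima hypothesized can be made concrete with a multiple-choice model of the kind introduced in the Thissen and Steinberg entry later in this list, in which a latent "don't know" category spreads guessing probability over the visible options. The sketch below uses invented parameter values chosen only to reproduce the shape described in the abstract; it is not drawn from the dissertation itself.

```python
import numpy as np

# Multiple-choice-model-style option probabilities (illustrative parameters only).
# Category 0 is a latent "don't know" state; examinees in it guess, spreading
# probability d_k over the m = 4 visible options.
a = np.array([-1.5, 2.0, 0.5, -0.5, -0.5])   # slopes: [DK, correct, attractive distractor, other, other]
c = np.array([ 0.0, -1.0, 1.0, -0.5, -0.5])  # intercepts (hypothetical values)
d = np.full(4, 0.25)                          # guessing shares for the 4 visible options

def option_probs(theta):
    """P(option k | theta), k = 1..4, under this sketch parameterization."""
    z = np.exp(a * theta + c)                 # multinomial-logit kernels, incl. DK
    return (z[1:] + d * z[0]) / z.sum()

for theta in (-3.0, -1.0, 0.0, 1.0, 2.0):
    p = option_probs(theta)
    print(f"theta={theta:+.1f}  P(correct)={p[0]:.3f}  P(attractive distractor)={p[1]:.3f}")

# With these values P(correct) falls toward a minimum near theta = 0 before
# rising -- the non-monotone shape described above, produced by an attractive
# distractor that draws in low-but-not-lowest-ability examinees.
```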
Peer reviewed
Wilcox, Rand R. – Educational and Psychological Measurement, 1981
A formal framework is presented for determining which of the distractors of multiple-choice test items has a small probability of being chosen by a typical examinee. The framework is based on a procedure similar to an indifference zone formulation of a ranking and selection problem. (Author/BW)
Descriptors: Mathematical Models, Multiple Choice Tests, Probability, Test Items
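Wilcox's framework is a formal decision procedure; the fragment below is only a rough empirical analogue of the underlying idea, flagging distractors whose observed selection proportion falls below a chosen threshold. The data and threshold are invented for illustration.

```python
from collections import Counter

# Hypothetical responses to one 4-option item (A is the keyed answer).
responses = list("AABCAADAABAACAAABDAA")
counts = Counter(responses)
n = len(responses)

delta = 0.12  # illustrative "too rarely chosen to be useful" threshold
for option in "BCD":  # distractors only
    p = counts.get(option, 0) / n
    flag = "  <-- rarely chosen" if p < delta else ""
    print(f"distractor {option}: proportion {p:.2f}{flag}")
```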
Peer reviewed
Haines, Christopher; Crouch, Rosalind – Teaching Mathematics and Its Applications: An International Journal of the IMA, 2005
In this research paper we discuss how some multiple-choice questions may be used to improve understanding, to develop and assess modelling capabilities, and as an aid to teaching.
Descriptors: Test Items, Multiple Choice Tests, Mathematical Applications, Mathematical Models
Eignor, Daniel R.; Douglass, James B. – 1982
This paper attempts to provide some initial information about the use of a variety of item response theory (IRT) models in the item selection process; its purpose is to compare the information curves derived from the selection of items characterized by several different IRT models and their associated parameter estimation programs. These…
Descriptors: Comparative Analysis, Latent Trait Theory, Mathematical Models, Multiple Choice Tests
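A concrete way to see why the choice of IRT model matters for item selection is to compare item information functions across models. The sketch below uses the standard logistic information formulas (the 3PL expression reduces to the 2PL when c = 0 and to a 1PL when additionally a = 1); the parameter values are made up for illustration.

```python
import numpy as np

def info_3pl(theta, a=1.0, b=0.0, c=0.0):
    """Fisher information of a 3PL item; with c = 0 this is the 2PL case,
    and with c = 0, a = 1 it is the 1PL (Rasch-type) case."""
    p = c + (1 - c) / (1 + np.exp(-a * (theta - b)))
    return a**2 * ((p - c) / (1 - c))**2 * (1 - p) / p

print("theta     1PL      2PL      3PL")
for t in np.linspace(-3, 3, 7):
    i1 = info_3pl(t, a=1.0, b=0.0, c=0.0)
    i2 = info_3pl(t, a=1.8, b=0.0, c=0.0)
    i3 = info_3pl(t, a=1.8, b=0.0, c=0.2)
    print(f"{t:+.1f}  {i1:7.3f}  {i2:7.3f}  {i3:7.3f}")

# The 3PL curve is lower, especially below b, because guessing adds noise;
# items chosen to maximize information can therefore differ across models.
```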
Peer reviewed
Thissen, David; And Others – Journal of Educational Measurement, 1989
An item response model for multiple-choice items is described and illustrated in item analysis. The model provides parametric and graphical summaries of the performance of each alternative associated with a multiple-choice item. The illustrative application of the model involves a pilot test of mathematics achievement items. (TJH)
Descriptors: Distractors (Tests), Latent Trait Theory, Mathematical Models, Mathematics Tests
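A purely descriptive counterpart to the graphical summaries mentioned in the abstract is to tabulate, within total-score groups, the proportion of examinees choosing each alternative. The sketch below does this for fabricated response data; it is not the estimation procedure of the article.

```python
import numpy as np

rng = np.random.default_rng(0)
n_examinees, n_items, n_options = 500, 20, 4
key = rng.integers(n_options, size=n_items)

# Fabricated responses: more able examinees pick the keyed option more often.
ability = rng.normal(size=n_examinees)
p_correct = 1 / (1 + np.exp(-ability[:, None]))
hit = rng.random((n_examinees, n_items)) < p_correct
responses = np.where(hit, key, rng.integers(n_options, size=(n_examinees, n_items)))

total = (responses == key).sum(axis=1)
group = np.digitize(total, np.quantile(total, [0.25, 0.5, 0.75]))  # 4 score groups

item = 0  # empirical trace lines for one item
print(f"item {item} (key = option {key[item]})")
for g in range(4):
    members = group == g
    props = [np.mean(responses[members, item] == k) for k in range(n_options)]
    print("score group", g, "option proportions:", np.round(props, 2))
```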
Samejima, Fumiko – 1981
In defense of retaining the "latent trait theory" term, instead of replacing it with "item response theory" as some recent research would have it, the following objectives are outlined: (1) investigation of theory and method for estimating the operating characteristics of discrete item responses using a minimum number of…
Descriptors: Adaptive Testing, Computer Assisted Testing, Factor Analysis, Latent Trait Theory
Peer reviewed
Kane, Michael; Moloney, James – Applied Psychological Measurement, 1978
The answer-until-correct (AUC) procedure requires that examinees respond to a multiple-choice item until they answer it correctly. Using a modified version of Horst's model for examinee behavior, this paper compares the effect of guessing on item reliability for the AUC procedure and the zero-one scoring procedure. (Author/CTM)
Descriptors: Guessing (Tests), Item Analysis, Mathematical Models, Multiple Choice Tests
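To make the comparison concrete, the simulation below scores the same simulated answer-until-correct responses two ways, zero-one (first attempt correct) and an AUC score that decreases with the number of attempts, and reports coefficient alpha for each. The simple knowledge-or-random-guessing generator and all values are stand-ins, not the modified Horst model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, J, m = 1000, 30, 4            # examinees, items, options per item

# Knowledge-or-guess generator: examinee i knows item j with probability knows_prob[i].
knows_prob = rng.beta(3, 3, size=n)
knows = rng.random((n, J)) < knows_prob[:, None]

# Attempts needed under answer-until-correct: 1 if known, else uniform on 1..m
# (guessing without replacement makes each attempt count equally likely).
attempts = np.where(knows, 1, rng.integers(1, m + 1, size=(n, J)))

zero_one = (attempts == 1).astype(float)
auc = (m - attempts) / (m - 1)   # full credit after 1 attempt, none after m

def alpha(scores):
    """Cronbach's alpha (KR-20 when the scores are 0/1)."""
    k = scores.shape[1]
    return k / (k - 1) * (1 - scores.var(axis=0, ddof=1).sum()
                          / scores.sum(axis=1).var(ddof=1))

print("alpha, zero-one scoring:", round(alpha(zero_one), 3))
print("alpha, AUC scoring:     ", round(alpha(auc), 3))
```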
Peer reviewed
Divgi, D. R. – Journal of Educational Measurement, 1986
This paper discusses various issues involved in using the Rasch Model with multiple-choice tests and questions the suitability of this model for multiple-choice items. Results of some past studies supporting the model are shown to be irrelevant. The effects of the model's misfit on test equating are demonstrated. (Author/JAZ)
Descriptors: Equated Scores, Goodness of Fit, Latent Trait Theory, Mathematical Models
Peer reviewed
Wilcox, Rand R.; And Others – Journal of Educational Measurement, 1988
The second response conditional probability model of decision-making strategies used by examinees answering multiple-choice test items was revised. Increasing the number of distractors, or providing distractors that gave examinees (N=106) the option to follow the model, improved results and gave a good fit to the data for 29 of 30 items. (SLD)
Descriptors: Cognitive Tests, Decision Making, Mathematical Models, Multiple Choice Tests
Hutchinson, T. P. – 1984
One means of learning about the processes operating in a multiple-choice test is to include some test items, called nonsense items, which have no correct answer. This paper compares two versions of a mathematical model of test performance to interpret test data that include both genuine and nonsense items. One formula is based on the usual…
Descriptors: Foreign Countries, Guessing (Tests), Mathematical Models, Multiple Choice Tests
Peer reviewed
Veale, James R.; Foreman, Dale I. – Journal of Educational Measurement, 1983
Statistical procedures for measuring heterogeneity of test item distractor distributions, or cultural variation, are presented. These procedures are based on the notion that examinees' responses to the incorrect options of a multiple-choice test provide more information concerning cultural bias than their correct responses. (Author/PN)
Descriptors: Ethnic Bias, Item Analysis, Mathematical Models, Multiple Choice Tests
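One simple way to operationalize heterogeneity of distractor distributions is a chi-square test of homogeneity on the incorrect responses of two groups. The sketch below is a generic illustration with invented counts, not the specific procedures developed by Veale and Foreman.

```python
from scipy.stats import chi2_contingency

# Counts of examinees choosing each distractor (correct responses excluded),
# cross-classified by group; the numbers are invented for illustration.
#            distractor B   C   D
observed = [[40, 25, 15],    # group 1
            [18, 30, 42]]    # group 2
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
# A small p-value suggests the groups are drawn to different distractors,
# the kind of signal the abstract associates with cultural variation.
```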
Samejima, Fumiko – 1980
Research related to the multiple-choice test item is reported, as it is conducted by educational technologists in Japan. Sato's number of hypothetical equivalent alternatives is introduced. The basic idea behind this index is that the expected uncertainty of the m events, or alternatives, be large and the number of hypothetical, equivalent…
Descriptors: Foreign Countries, Latent Trait Theory, Mathematical Models, Multiple Choice Tests
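Assuming Sato's index is, as the abstract suggests, an entropy-based count of how many equally attractive alternatives the observed response distribution is equivalent to, it can be sketched as the exponential of the Shannon entropy (the perplexity) of the option proportions. This is a hedged reconstruction, not a formula taken from the paper.

```python
import numpy as np

def hypothetical_equivalent_alternatives(props):
    """exp(Shannon entropy) of the option-choice proportions: equals m when all
    m alternatives are chosen equally often, and approaches 1 when a single
    alternative absorbs nearly all responses."""
    p = np.asarray(props, dtype=float)
    p = p[p > 0] / p.sum()
    return float(np.exp(-(p * np.log(p)).sum()))

print(hypothetical_equivalent_alternatives([0.25, 0.25, 0.25, 0.25]))  # 4.0
print(hypothetical_equivalent_alternatives([0.70, 0.10, 0.10, 0.10]))  # about 2.6
```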
Peer reviewed
Hutchinson, T. P. – Contemporary Educational Psychology, 1986
Qualitative evidence for the operation of partial knowledge is given by two findings. First, performance when second and subsequent choices are made is above the chance level. Second, it is positively related to first choice performance. A number of theories incorporating partial knowledge are compared quantitatively. (Author/LMO)
Descriptors: Difficulty Level, Feedback, Goodness of Fit, Mathematical Models
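The first finding is easy to check descriptively: among examinees whose first choice was wrong, the proportion whose second choice is correct should exceed 1/(m - 1) if partial knowledge is operating. A minimal check on invented counts:

```python
# Invented counts for a 4-option item administered with a second-choice
# (or answer-until-correct) procedure.
m = 4
wrong_first = 240        # examinees whose first choice was incorrect
correct_second = 110     # of those, correct on the second choice

observed = correct_second / wrong_first
chance = 1 / (m - 1)     # random guessing among the remaining options
print(f"second-choice accuracy {observed:.2f} vs chance {chance:.2f}")
# Accuracy above 1/(m - 1) is the qualitative signature of partial knowledge.
```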
Peer reviewed
Feldt, Leonard S. – Applied Measurement in Education, 1993
The recommendation that the reliability of multiple-choice tests will be enhanced if the distribution of item difficulties is concentrated at approximately 0.50 is reinforced and extended in this article by viewing the 0/1 item scoring as a dichotomization of an underlying normally distributed ability score. (SLD)
Descriptors: Ability, Difficulty Level, Guessing (Tests), Mathematical Models
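The flavor of the argument can be reproduced with a small simulation: dichotomize a noisy version of a common, normally distributed ability at thresholds that either keep every item difficulty near .50 or spread difficulties widely, and compare KR-20. The one-factor generator and all values below are illustrative assumptions, not Feldt's derivation.

```python
import numpy as np

rng = np.random.default_rng(2)
n, J, loading = 5000, 25, 0.6

def kr20_for(thresholds):
    """Simulate 0/1 items by dichotomizing a noisy copy of a common normal
    ability at the given thresholds, then return KR-20 (coefficient alpha)."""
    ability = rng.normal(size=(n, 1))
    latent = loading * ability + np.sqrt(1 - loading**2) * rng.normal(size=(n, J))
    x = (latent > thresholds).astype(float)
    return J / (J - 1) * (1 - x.var(axis=0, ddof=1).sum()
                          / x.sum(axis=1).var(ddof=1))

near_half = np.zeros(J)              # every item difficulty near .50
spread = np.linspace(-1.8, 1.8, J)   # difficulties from roughly .04 to .96
print("KR-20, difficulties near .50:", round(kr20_for(near_half), 3))
print("KR-20, spread difficulties:  ", round(kr20_for(spread), 3))
```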
Thissen, David; Steinberg, Lynne – 1983
An extension of the Bock-Samejima model for multiple-choice items is introduced. The model provides for varying probabilities of the response alternatives when the examinee guesses. A marginal maximum likelihood method is devised for estimating the item parameters, and likelihood ratio tests for comparing more and less constrained forms of the…
Descriptors: Ability, Estimation (Mathematics), Guessing (Tests), Latent Trait Theory
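For reference, a commonly cited form of a Bock-Samejima-type multiple-choice model that routes a latent "don't know" state through guessing weights is shown below; this is a hedged reconstruction consistent with the numerical sketch given after the first entry in this list, and the exact parameterization of the 1983 paper may differ.

```latex
% Category 0 is the latent "don't know" state; d_k are guessing weights.
P(u = k \mid \theta) =
  \frac{\exp(a_k\theta + c_k) + d_k \exp(a_0\theta + c_0)}
       {\sum_{h=0}^{m} \exp(a_h\theta + c_h)},
\qquad k = 1, \dots, m, \qquad \sum_{k=1}^{m} d_k = 1 .
```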