Showing all 12 results
Nakamura, Yasuyuki; Nishi, Shinnosuke; Muramatsu, Yuta; Yasutake, Koichi; Yamakawa, Osamu; Tagawa, Takahiro – International Association for Development of the Information Society, 2014
In this paper, we introduce mathematical models for collaborative learning and for the process of answering multiple-choice questions. The collaborative learning model is inspired by the Ising spin model, and the answering model is based on question difficulty. An intensive simulation study predicts the possibility of…
Descriptors: Mathematical Models, Cooperative Learning, Multiple Choice Tests, Mathematics Instruction
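The abstract does not give the model's equations, so the following Python sketch only illustrates the general flavor of an Ising-style collaborative dynamic: learners hold binary answer states and are nudged toward their neighbors' answers. The coupling J, field h, and Metropolis update rule are assumptions for illustration, not the authors' specification.

    import math
    import random

    # Illustrative Ising-style dynamic (assumed, not the authors' model):
    # s_i = +1 (correct) or -1 (incorrect); coupling J pulls a learner toward
    # neighboring learners' answers, field h encodes the item's easiness.
    def step(states, J=1.0, h=0.2, temperature=1.0):
        i = random.randrange(len(states))
        neighbors = states[(i - 1) % len(states)] + states[(i + 1) % len(states)]
        delta_e = 2 * states[i] * (J * neighbors + h)  # energy cost of flipping s_i
        if delta_e <= 0 or random.random() < math.exp(-delta_e / temperature):
            states[i] = -states[i]

    states = [random.choice([-1, 1]) for _ in range(50)]
    for _ in range(5000):
        step(states)
    print("fraction answering correctly:", states.count(1) / len(states))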
Rahman, Nazia – ProQuest LLC, 2013
Samejima hypothesized that non-monotonically increasing item response functions (IRFs) of ability might occur for multiple-choice items (referred to here as "Samejima items") if low ability test takers with some, though incomplete, knowledge or skill are drawn to a particularly attractive distractor, while very low ability test takers…
Descriptors: Multiple Choice Tests, Test Items, Item Response Theory, Probability
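A "Samejima item" in this sense has an IRF that dips in a low-ability band before rising. Below is a minimal sketch of such a curve, assuming a three-parameter-logistic base with a Gaussian "attractive distractor" pull subtracted; the parameterization is illustrative, not Rahman's or Samejima's.

    import math

    # Illustrative non-monotonic IRF: a logistic curve for the keyed answer,
    # depressed where an attractive distractor (assumed Gaussian pull) draws
    # low-ability examinees who have some partial knowledge.
    def p_correct(theta, a=1.2, b=0.0, c=0.25, pull=0.20, mu=-1.5, sd=0.5):
        logistic = c + (1 - c) / (1 + math.exp(-a * (theta - b)))
        distractor = pull * math.exp(-((theta - mu) ** 2) / (2 * sd ** 2))
        return max(0.0, logistic - distractor)

    for theta in (-3, -2, -1.5, -1, 0, 1, 2, 3):
        print(f"theta={theta:+.1f}  P(correct)={p_correct(theta):.3f}")

The printed values fall from theta = -3 to theta = -1.5 and rise thereafter, which is exactly the non-monotonicity at issue.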
Arnold, J. C. – Journal of Experimental Education, 1969
Descriptors: Difficulty Level, Guessing (Tests), Mathematical Models, Methods
Peer reviewed
Hutchinson, T. P. – Contemporary Educational Psychology, 1986
Qualitative evidence for the operation of partial knowledge is given by two findings. First, performance on second and subsequent choices is above the chance level. Second, such performance is positively related to first-choice performance. A number of theories incorporating partial knowledge are compared quantitatively. (Author/LMO)
Descriptors: Difficulty Level, Feedback, Goodness of Fit, Mathematical Models
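A quick simulation shows why above-chance second choices signal partial knowledge. The elimination mechanism below is an assumed illustration, not one of the theories the paper compares.

    import random

    # Assumed mechanism: an examinee with partial knowledge can rule out some
    # distractors before choosing; option 0 is the keyed correct answer.
    def attempts_until_correct(k=5, eliminated=2):
        options = list(range(k))[: k - eliminated]  # distractors ruled out
        random.shuffle(options)
        return options.index(0) + 1  # 1 = correct on the first choice

    trials = [attempts_until_correct() for _ in range(100_000)]
    wrong_first = [t for t in trials if t >= 2]
    print("P(correct on 2nd | wrong on 1st):",
          sum(1 for t in wrong_first if t == 2) / len(wrong_first))
    print("pure-chance rate, 1/(k-1):", 1 / 4)

With two of five options eliminated, the second-choice success rate comes out near 0.5, well above the 0.25 expected from blind guessing.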
Peer reviewed
Feldt, Leonard S. – Applied Measurement in Education, 1993
The recommendation that the reliability of multiple-choice tests will be enhanced if the distribution of item difficulties is concentrated at approximately 0.50 is reinforced and extended in this article by viewing the 0/1 item scoring as a dichotomization of an underlying normally distributed ability score. (SLD)
Descriptors: Ability, Difficulty Level, Guessing (Tests), Mathematical Models
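A small simulation in that spirit: dichotomize a latent normal ability at different cut points and compare KR-20 reliability. The noise level and sample size are assumed for illustration.

    import random
    import statistics

    # KR-20 = k/(k-1) * (1 - sum(p_j * q_j) / var(total)).
    def kr20(matrix):
        n_items = len(matrix[0])
        totals = [sum(row) for row in matrix]
        sum_pq = sum(
            (p := sum(row[j] for row in matrix) / len(matrix)) * (1 - p)
            for j in range(n_items)
        )
        return n_items / (n_items - 1) * (1 - sum_pq / statistics.pvariance(totals))

    def simulate(cuts, n_persons=2000, noise=1.0):
        data = []
        for _ in range(n_persons):
            theta = random.gauss(0, 1)  # latent normally distributed ability
            data.append([1 if theta + random.gauss(0, noise) > c else 0 for c in cuts])
        return data

    random.seed(1)
    print("difficulties near .50:", round(kr20(simulate([0.0] * 20)), 3))
    print("extreme difficulties :", round(kr20(simulate([-1.8, 1.8] * 10)), 3))

The first test, with every item cut at the median of ability, comes out noticeably more reliable than the second, whose items are all very easy or very hard.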
Choppin, Bruce H. – 1983
In the answer-until-correct mode of multiple-choice testing, respondents are directed to continue choosing among the alternatives to each item until they find the correct response. There is no consensus as to how to convert the resulting pattern of responses into a measure because of two conflicting models of item response behavior. The first…
Descriptors: Computer Assisted Testing, Difficulty Level, Guessing (Tests), Knowledge Level
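Two illustrative scoring conventions for the same answer-until-correct record, roughly matching the two views in conflict; the exact credit scheme is an assumption, not Choppin's proposal.

    # attempts[i] = number of choices item i took before the key was found.
    def first_choice_score(attempts):
        # Knowledge-or-random view: only an immediate correct answer counts.
        return sum(1 for a in attempts if a == 1)

    def partial_credit_score(attempts, k=4):
        # Partial-knowledge view: fewer attempts earn more credit.
        return sum(k - a for a in attempts)

    attempts = [1, 1, 2, 4, 3, 1]             # six 4-option items
    print(first_choice_score(attempts))       # 3
    print(partial_credit_score(attempts))     # 3 + 3 + 2 + 0 + 1 + 3 = 12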
Douglass, James B. – 1980
The three-, two-, and one-parameter (Rasch) logistic item characteristic curve models are compared for use in a large multi-section college course. Only the three-parameter model produced clearly unacceptable parameter estimates for 100-item tests with examinee samples ranging from 594 to 1082. The Rasch and two-parameter models were compared for…
Descriptors: Academic Ability, Achievement Tests, Course Content, Difficulty Level
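The three models differ only in which parameters are free, so they can be written as one function, with discrimination a and the lower asymptote c fixed or freed as needed.

    import math

    # P(theta) = c + (1 - c) / (1 + exp(-a * (theta - b)))
    # Rasch/1PL: a = 1, c = 0;  2PL: c = 0;  3PL: all three free.
    def icc(theta, a=1.0, b=0.0, c=0.0):
        return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

    theta = 0.5
    print("Rasch (1PL):", icc(theta))
    print("2PL        :", icc(theta, a=1.7))
    print("3PL        :", icc(theta, a=1.7, c=0.2))

The lower asymptote c is the parameter usually blamed for unstable three-parameter estimates at moderate sample sizes.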
Livingston, Samuel A. – 1986
This paper deals with test fairness regarding a test consisting of two parts: (1) a "common" section, taken by all students; and (2) a "variable" section, in which some students may answer a different set of questions from other students. For example, a test taken by several thousand students each year contains a common multiple-choice portion and…
Descriptors: Difficulty Level, Error of Measurement, Essay Tests, Mathematical Models
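One simple way a common section can be used to adjust variable-section scores is mean equating; this is an illustrative choice of method, not necessarily the one discussed in the paper.

    # Shift a group's variable-section scores by how far that group's common-
    # section mean sits below (or above) the reference mean for all examinees.
    def mean_equate(variable_scores, common_scores, common_ref_mean):
        shift = common_ref_mean - sum(common_scores) / len(common_scores)
        return [v + shift for v in variable_scores]

    # Hypothetical numbers: this group averaged 26 on the common section
    # against a reference mean of 28, so its variable scores move up by 2.
    print(mean_equate([10, 12, 15], [25, 27, 26], common_ref_mean=28.0))  # [12, 14, 17]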
Samejima, Fumiko – 1986
Item analysis data fitting the normal ogive model were simulated in order to investigate the problems encountered when applying the three-parameter logistic model. Binary item tests containing 10 and 35 items were created, and Monte Carlo methods simulated the responses of 2,000 and 500 examinees. Item parameters were obtained using Logist 5.…
Descriptors: Computer Simulation, Difficulty Level, Guessing (Tests), Item Analysis
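Under the normal ogive model the probability of a correct response is P(theta) = Phi(a(theta - b)). A minimal Monte Carlo generator in that form, with assumed item parameters:

    import math
    import random

    def phi(x):  # standard normal CDF
        return 0.5 * (1 + math.erf(x / math.sqrt(2)))

    def simulate(n_examinees, items):
        data = []
        for _ in range(n_examinees):
            theta = random.gauss(0, 1)
            data.append([1 if random.random() < phi(a * (theta - b)) else 0
                         for a, b in items])
        return data

    items = [(1.0, -1.0), (1.2, 0.0), (0.8, 1.0)]  # (a, b) pairs, assumed
    responses = simulate(500, items)
    print("observed proportions correct:",
          [round(sum(r[j] for r in responses) / len(responses), 2)
           for j in range(len(items))])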
Peer reviewed
Westers, Paul; Kelderman, Henk – Psychometrika, 1992
A method for analyzing test-item responses is proposed to examine differential item functioning (DIF) in multiple-choice items within the latent class framework. Different models for detection of DIF are formulated, defining the subgroup as a latent variable. An efficient estimation method is described and illustrated. (SLD)
Descriptors: Chi Square, Difficulty Level, Educational Testing, Equations (Mathematics)
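The paper's method treats the subgroup as latent; a much cruder manifest-group check, shown here only to fix ideas, is a 2x2 chi-square of correctness against known group membership (real DIF analysis would also condition on ability).

    # rows: group 1 / group 2; columns: correct / incorrect
    def chi_square_2x2(a, b, c, d):
        n = a + b + c + d
        expected = [(a + b) * (a + c) / n, (a + b) * (b + d) / n,
                    (c + d) * (a + c) / n, (c + d) * (b + d) / n]
        return sum((o - e) ** 2 / e for o, e in zip((a, b, c, d), expected))

    # Hypothetical counts: 300/400 correct in group 1 vs. 210/400 in group 2.
    print(round(chi_square_2x2(300, 100, 210, 190), 2))  # compare to 3.84 (df = 1)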
Huntley, Renee M.; Carlson, James E. – 1986
This study compared student performance on language-usage test items presented in two different formats: as discrete sentences and as items embedded in passages. Experimental units for the American College Testing (ACT) Program's Assessment were constructed to present 40 items in the two formats. Results suggest item presentation may not…
Descriptors: College Entrance Examinations, Difficulty Level, Goodness of Fit, Item Analysis
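At the item-statistics level, the comparison implied here reduces to classical difficulty (proportion correct) per item under each format. With hypothetical p-values, not ACT data:

    # Same four items, presented discretely vs. embedded in a passage.
    discrete = [0.62, 0.55, 0.71, 0.48]
    embedded = [0.60, 0.57, 0.69, 0.47]
    diffs = [d - e for d, e in zip(discrete, embedded)]
    print("mean difficulty difference:", round(sum(diffs) / len(diffs), 3))

A mean difference near zero is consistent with the suggestion that presentation format matters little.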
Izard, J. F. – 1977
This material provides a discussion of the construction and analysis of tests prepared for classroom use by teachers. The initial discussion is concerned with the purposes of evaluation and the specification of objectives. This is followed by an examination of theoretical and practical considerations in planning a test. The material on test item…
Descriptors: Criterion Referenced Tests, Difficulty Level, Educational Objectives, Evaluation Criteria
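Guides of this kind typically culminate in classical item analysis. A sketch of the two standard indices, difficulty p and upper-lower discrimination D, assuming the conventional 27% tail split:

    def item_analysis(scores_by_item, totals):
        ranked = sorted(range(len(totals)), key=lambda i: totals[i])
        cut = max(1, int(len(totals) * 0.27))
        low, high = ranked[:cut], ranked[-cut:]
        stats = []
        for item in scores_by_item:
            p = sum(item) / len(item)                     # difficulty index
            d = (sum(item[i] for i in high)               # discrimination index
                 - sum(item[i] for i in low)) / cut
            stats.append((round(p, 2), round(d, 2)))
        return stats

    # Five examinees, two items (1 = correct); totals are whole-test scores.
    print(item_analysis([[1, 1, 0, 1, 0], [0, 1, 0, 1, 1]], [2, 5, 1, 4, 3]))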