Peer reviewed
Sotaridona, Leonardo S.; Meijer, Rob R. – Journal of Educational Measurement, 2003
Proposed two new indices to detect answer copying on a multiple-choice test and conducted a simulation study to investigate the usefulness of both indices. Discusses conditions under which the proposed indices can be useful. (SLD)
Descriptors: Cheating, Multiple Choice Tests, Simulation, Testing Problems
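Sotaridona and Meijer's specific statistics are not reproduced in the abstract, but the family of copy indices they belong to shares one core idea: count matching incorrect answers between a suspected copier and a source, and ask how surprising that count is under independent responding. A minimal sketch of that idea, assuming uniform guessing over distractors (an assumption the published indices refine with proper models):

    # Hedged sketch of a generic matching-incorrect-answers check; this is
    # NOT the authors' proposed indices, only the shared underlying idea.
    from scipy.stats import binom

    def copy_index_p_value(copier, source, key, n_options=4):
        """P(at least the observed number of matching *incorrect* answers)
        if the two examinees had responded independently."""
        wrong_both = [(c, s) for c, s, k in zip(copier, source, key)
                      if c != k and s != k]
        matches = sum(c == s for c, s in wrong_both)
        # Crude baseline (assumption): wrong answers fall uniformly at
        # random among the n_options - 1 distractors.
        p_match = 1.0 / (n_options - 1)
        return binom.sf(matches - 1, len(wrong_both), p_match)

A small tail probability flags a pair for further review; the uniform-distractor baseline is the crudest part and is precisely what model-based indices replace.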
Peer reviewed
Lord, Frederic M. – Journal of Educational Measurement, 1977
Two approaches currently in the literature for determining the optimal number of choices for a test item are compared with two new approaches. (Author)
Descriptors: Forced Choice Technique, Latent Trait Theory, Multiple Choice Tests, Test Items
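The abstract does not spell out Lord's four approaches, but the trade-off they all address can be shown with simple arithmetic: under a fixed budget of total options an examinee can read, fewer options per item buys more items at the price of a higher chance-score floor. A hedged illustration (not Lord's method, and the budget of 120 options is hypothetical):

    # Hedged sketch of the basic item-format trade-off, not Lord's models.
    def tradeoff(total_options=120):
        for c in (2, 3, 4, 5):
            n_items = total_options // c          # more options -> fewer items
            chance_floor = 1.0 / c                # expected proportion correct by blind guessing
            print(f"{c} options: {n_items} items, chance floor {chance_floor:.2f}")

    tradeoff()

The interesting question, which the compared approaches formalize, is where along this trade-off test information is maximized.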
Peer reviewed
Rowley, Glenn L. – Journal of Educational Measurement, 1974
Descriptors: Achievement Tests, Anxiety, Educational Testing, Guessing (Tests)
Peer reviewed
Roberts, Dennis M. – Journal of Educational Measurement, 1987
This study examines a score-difference model for the detection of cheating, based on the difference between two scores for an examinee: one based on the appropriate scoring key and another based on an alternative, inappropriate key. It argues that the score-difference method could falsely accuse students of cheating. (Author/JAZ)
Descriptors: Answer Keys, Cheating, Mathematical Models, Multiple Choice Tests
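A hedged sketch of the score-difference statistic as the abstract describes it, D = (score under the appropriate key) − (score under the alternative key), plus a small null simulation; the response coding and number of options are assumptions. Simulating pure guessers shows how an honest examinee can land near a flagged D value by chance, which is the false-accusation risk Roberts raises:

    # Hedged sketch of the score-difference idea the paper critiques.
    import random

    def score(responses, key):
        return sum(r == k for r, k in zip(responses, key))

    def simulate_null_D(key, alt_key, n_options=4, n_sims=10000):
        """Distribution of D for examinees who guess blindly on every item."""
        opts = list(range(n_options))
        ds = []
        for _ in range(n_sims):
            resp = [random.choice(opts) for _ in key]
            ds.append(score(resp, key) - score(resp, alt_key))
        return ds

Any decision threshold on D must be checked against this null spread before anyone is labeled a cheater.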
Peer reviewed
Reiling, Eldon; Taylor, Ryland – Journal of Educational Measurement, 1972
The hypothesis that it is unwise to change answers to multiple-choice questions was tested using multiple regression analysis. The hypothesis was rejected: results showed that there are gains to be made by changing responses. (Author/CK)
Descriptors: Guessing (Tests), Hypothesis Testing, Measurement Techniques, Multiple Choice Tests
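The abstract does not give the regression specification, so the following is only a shape-of-the-analysis sketch: ordinary least squares of score gain on the number of answers changed, with clearly hypothetical toy data. A positive fitted slope corresponds to the paper's conclusion that changing responses pays on average:

    # Hedged OLS sketch; the data arrays are hypothetical illustrations,
    # not the authors' data, and this is not their exact specification.
    import numpy as np

    changes = np.array([0, 1, 2, 3, 4, 5, 6], dtype=float)       # answers changed
    gain    = np.array([0.0, 0.4, 0.9, 1.1, 1.8, 2.1, 2.4])      # score gain

    X = np.column_stack([np.ones_like(changes), changes])
    beta, *_ = np.linalg.lstsq(X, gain, rcond=None)
    print(f"intercept {beta[0]:.2f}, gain per change {beta[1]:.2f}")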
Peer reviewed
Veale, James R.; Foreman, Dale I. – Journal of Educational Measurement, 1983
Statistical procedures for measuring heterogeneity of test item distractor distributions, or cultural variation, are presented. These procedures are based on the notion that examinees' responses to the incorrect options of a multiple-choice test provide more information concerning cultural bias than their correct responses. (Author/PN)
Descriptors: Ethnic Bias, Item Analysis, Mathematical Models, Multiple Choice Tests
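One standard way to operationalize "heterogeneity of distractor distributions" (not necessarily Veale and Foreman's exact statistic) is a chi-square test on a distractor-by-group contingency table built from incorrect responses only. A minimal sketch with hypothetical counts:

    # Hedged sketch: do the two groups spread their WRONG answers over the
    # distractors differently? Counts below are hypothetical.
    import numpy as np
    from scipy.stats import chi2_contingency

    # Rows: distractors A, C, D (B is the keyed answer and is excluded);
    # columns: two examinee groups.
    table = np.array([[30, 12],
                      [25, 40],
                      [15, 18]])
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"chi2={chi2:.1f}, dof={dof}, p={p:.3f}")  # small p -> heterogeneous distractor use

This is exactly the abstract's point: two groups can have identical proportions correct yet reveal bias in how their errors distribute across the incorrect options.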
Peer reviewed
Bliss, Leonard B. – Journal of Educational Measurement, 1980
A mathematics achievement test with instructions to avoid guessing wildly was given to 168 elementary school pupils, who were later asked to complete all the questions using a differently colored pencil. Results showed that examinees, particularly the more able students, tended to omit too many items. (CTM)
Descriptors: Anxiety, Guessing (Tests), Intermediate Grades, Multiple Choice Tests
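The abstract does not state the scoring rule, but the finding is easiest to see under formula scoring, the usual companion to "avoid wild guessing" instructions: +1 for a right answer, −1/(c−1) for a wrong one, 0 for an omit. Under that assumption, a blind guess has expected gain exactly zero and any partial knowledge makes answering strictly better than omitting, which is why over-omission costs the more able students most:

    # Hedged arithmetic under an ASSUMED formula-scoring rule:
    # +1 right, -1/(c-1) wrong, 0 omitted.
    def expected_guess_gain(p_correct, n_options=4):
        penalty = 1.0 / (n_options - 1)
        return p_correct * 1.0 - (1.0 - p_correct) * penalty

    print(expected_guess_gain(0.25))  # blind guess, 4 options: 0.0 (no worse than omitting)
    print(expected_guess_gain(0.50))  # one distractor eliminated: clearly positive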
Peer reviewed
Budescu, David; Bar-Hillel, Maya – Journal of Educational Measurement, 1993
Test taking and scoring are examined from the normative and descriptive perspectives of judgment and decision theory. The number-right scoring rule is endorsed because it discourages omissions and is robust against variability in respondent motivations, item vagaries, and limitations in judgments of uncertainty. (SLD)
Descriptors: Elementary Secondary Education, Guessing (Tests), Knowledge Level, Multiple Choice Tests
Peer reviewed
Smith, Malbert, III; And Others – Journal of Educational Measurement, 1979
Results of multiple-choice tests in educational psychology were examined to discover the effects on students' scores of changing their original answer choices after reconsideration. Eighty-six percent of the students changed one or more answers, and six out of seven students who made changes improved their scores by doing so. (Author/CTM)
Descriptors: Academic Ability, Difficulty Level, Error Patterns, Guessing (Tests)
Peer reviewed
Martinez, Michael E. – Journal of Educational Measurement, 1991
Figural response items (FRIs) in science were administered to 347 fourth graders, 365 eighth graders, and 322 twelfth graders. Item and test statistics from parallel FRIs and multiple-choice questions show the FRIs to be more difficult and more discriminating. The relevance of guessing to FRIs and the diagnostic value of the item type are highlighted.…
Descriptors: Comparative Testing, Constructed Response, Elementary School Students, Elementary Secondary Education
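The abstract reports that FRIs were more difficult and "more discriminating" without naming the statistics; the classical p-value (proportion correct) and the point-biserial correlation between item score and total score are the conventional indices for such comparisons, so a hedged sketch of those (an assumption, not necessarily the study's method):

    # Hedged sketch of classical item statistics for comparing item types.
    import numpy as np

    def difficulty(item_scores):
        """Proportion correct (classical p-value); lower means harder."""
        return float(np.mean(item_scores))

    def point_biserial(item_scores, total_scores):
        """Correlation of a 0/1 item score with the total test score;
        higher means the item better separates strong from weak examinees."""
        item = np.asarray(item_scores, dtype=float)
        total = np.asarray(total_scores, dtype=float)
        return float(np.corrcoef(item, total)[0, 1])

On these indices, the paper's pattern would appear as lower difficulty values and higher point-biserials for the FRIs than for their multiple-choice parallels.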