Publication Date
| Publication Date | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 89 |
| Since 2022 (last 5 years) | 457 |
| Since 2017 (last 10 years) | 1245 |
| Since 2007 (last 20 years) | 2519 |
Audience
| Audience | Results |
| --- | --- |
| Practitioners | 122 |
| Teachers | 105 |
| Researchers | 64 |
| Students | 46 |
| Administrators | 14 |
| Policymakers | 7 |
| Counselors | 3 |
| Parents | 3 |
Location
| Location | Results |
| --- | --- |
| Canada | 134 |
| Turkey | 131 |
| Australia | 123 |
| Iran | 66 |
| Indonesia | 61 |
| Germany | 51 |
| United Kingdom | 51 |
| Taiwan | 46 |
| United States | 43 |
| China | 39 |
| California | 35 |
What Works Clearinghouse Rating
| Rating | Results |
| --- | --- |
| Meets WWC Standards without Reservations | 3 |
| Meets WWC Standards with or without Reservations | 5 |
| Does not meet standards | 6 |
Peer reviewed: Cross, Lawrence H.; Frary, Robert B. – Educational and Psychological Measurement, 1978
The reliability and validity of multiple choice test scores resulting from empirical choice-weighting of alternatives were examined under two conditions: (1) examinees were told not to guess unless choices could be eliminated; and (2) examinees were told the total score would be the total number correct. Results favored the choice-weighting…
Descriptors: Guessing (Tests), Higher Education, Multiple Choice Tests, Response Style (Tests)
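A minimal sketch of the general idea of empirical choice-weighting, in which each option of a multiple-choice item earns a weight estimated from the data (here, the mean number-correct score of the examinees who chose it) so that responses can earn partial credit. This is a generic illustration under assumed data; it is not the specific weighting procedure used by Cross and Frary.

```python
# Generic empirical option weighting: weight each option by the mean
# number-correct score of the examinees who selected it, then score a
# response string by summing the weights of the chosen options.
from collections import defaultdict

def option_weights(responses, key):
    """responses: list of answer strings (one character per item); key: answer key."""
    n_items = len(key)
    # Criterion score: plain number correct for each examinee.
    totals = [sum(r[i] == key[i] for i in range(n_items)) for r in responses]
    weights = []
    for i in range(n_items):
        sums, counts = defaultdict(float), defaultdict(int)
        for r, t in zip(responses, totals):
            sums[r[i]] += t
            counts[r[i]] += 1
        weights.append({opt: sums[opt] / counts[opt] for opt in counts})
    return weights

def weighted_score(response, weights):
    """Sum the empirically derived weight of each chosen option."""
    return sum(w.get(opt, 0.0) for opt, w in zip(response, weights))

if __name__ == "__main__":
    key = "ABCA"
    responses = ["ABCA", "ABCB", "ACCA", "BBCA", "ABBA"]
    w = option_weights(responses, key)
    for r in responses:
        print(r, round(weighted_score(r, w), 2))
```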
Peer reviewed: Kane, Michael; Moloney, James – Applied Psychological Measurement, 1978
The answer-until-correct (AUC) procedure requires that examinees respond to a multiple-choice item until they answer it correctly. Using a modified version of Horst's model for examinee behavior, this paper compares the effect of guessing on item reliability for the AUC procedure and the zero-one scoring procedure. (Author/CTM)
Descriptors: Guessing (Tests), Item Analysis, Mathematical Models, Multiple Choice Tests
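A minimal sketch contrasting conventional zero-one scoring with one common answer-until-correct crediting rule: on an m-option item answered correctly on attempt a, award (m - a) / (m - 1) of full credit. The crediting rule is an illustrative assumption, not necessarily the one analyzed by Kane and Moloney.

```python
# Zero-one scoring vs. one common answer-until-correct (AUC) crediting rule.
def zero_one_score(first_response, key):
    """Conventional scoring: 1 if the first response is correct, else 0."""
    return 1 if first_response == key else 0

def auc_score(attempts, key, n_options):
    """attempts: responses in order until the keyed answer is finally chosen."""
    assert attempts[-1] == key, "under AUC the last response is always correct"
    a = len(attempts)                       # attempt on which the key was found
    return max(0, n_options - a) / (n_options - 1)

if __name__ == "__main__":
    key, n_options = "C", 4
    print(zero_one_score("A", key))         # 0: wrong on the first try
    print(auc_score(["A", "C"], key, 4))    # 0.67: correct on the second attempt
    print(auc_score(["C"], key, 4))         # 1.0: correct immediately
```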
Peer reviewed: Remer, Rory – Journal of Educational Measurement, 1978
The relative efficiency and cost-effectiveness of three methods of producing and administering a work-sample simulation test of interpersonal communication competence, employing a multiple-choice response format, are explored. (Author/JKS)
Descriptors: Communication Skills, Cost Effectiveness, Higher Education, Interpersonal Competence
Peer reviewed: Levin, Joel R.; And Others – American Educational Research Journal, 1978
Children listened to sentences under two instructional sets (imagery or repetition) and answered multiple-choice items whose alternatives were either identical or similar in meaning to the correct information in the sentences, and either included or did not include previously presented irrelevant information. The sources of interference predicted from recognition memory…
Descriptors: Intermediate Grades, Learning Theories, Memory, Multiple Choice Tests
Peer reviewed: Eakin, Richard R.; Long, Clifford A. – Educational and Psychological Measurement, 1977
A scoring technique for true-false tests is presented. The technique, paired item scoring, involves combining two statements and having the student select one of the four possible combinations: true-true, false-true, true-false, and false-false. The combined item is treated as a multiple choice item. (Author/JKS)
Descriptors: Guessing (Tests), Measurement Techniques, Multiple Choice Tests, Objective Tests
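A minimal sketch of the paired-item idea described above: two true-false statements are combined into a single four-alternative item (TT, TF, FT, FF), which is then scored like an ordinary multiple-choice item. Anything beyond the pairing itself is an illustrative assumption.

```python
# Pair two true-false statements into one four-alternative item and score it
# as a single multiple-choice item.
from itertools import product

def pair_key(key1: bool, key2: bool) -> str:
    """Keyed alternative for a pair of true-false statements."""
    return ("T" if key1 else "F") + ("T" if key2 else "F")

def score_pairs(responses, keys):
    """responses/keys: lists of two-letter strings such as 'TF'; one point per pair."""
    return sum(r == k for r, k in zip(responses, keys))

if __name__ == "__main__":
    print(sorted("".join(p) for p in product("TF", repeat=2)))  # the four alternatives
    keys = [pair_key(True, False), pair_key(False, False)]      # ['TF', 'FF']
    print(score_pairs(["TF", "FT"], keys))                      # 1 of 2 pairs correct
```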
Peer reviewed: Stevens, J. M.; Harris, F. T. C. – Medical Education, 1977
An automated question bank maintained by the Department of Research and Services in Education at the Middlesex Hospital Medical School provides a printed copy of each of 25,000 multiple choice questions (95 percent relating to the whole spectrum of the medical curriculum). Problems with this procedure led to experimental work storing the data on…
Descriptors: Cost Effectiveness, Higher Education, Information Retrieval, Information Storage
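A minimal sketch of the kind of storage-and-retrieval task an automated question bank performs; the in-memory structure, field names, and example question are assumptions for illustration, not the system described in the article.

```python
# A tiny in-memory multiple-choice question bank with retrieval by topic.
from dataclasses import dataclass, field

@dataclass
class Question:
    stem: str
    options: list
    key: int                               # index of the correct option
    topics: set = field(default_factory=set)

class QuestionBank:
    def __init__(self):
        self._items = []

    def add(self, q: Question):
        self._items.append(q)

    def by_topic(self, topic: str):
        """Return every stored question tagged with the given topic."""
        return [q for q in self._items if topic in q.topics]

if __name__ == "__main__":
    bank = QuestionBank()
    bank.add(Question("Aspirin belongs to which drug class?",
                      ["NSAID", "Opioid", "Antibiotic", "Antihistamine"],
                      key=0, topics={"pharmacology"}))
    print(len(bank.by_topic("pharmacology")))   # 1
```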
Peer reviewed: Whitely, Susan E. – Journal of Educational Psychology, 1976
The results indicate that although relational concepts influence the cognitive aptitudes which are reflected in analogy item performance, success in solving analogies does not depend on individual differences in some major aspects of processing relationships. (Author/DEP)
Descriptors: Cognitive Measurement, Cognitive Processes, College Students, Individual Differences
Peer reviewed: Aiken, Lewis R. – Journal of Research and Development in Education, 1987
A critical review is presented of research conducted during the past 20 years on multiple-choice tests of achievement and aptitude. The design and use of multiple-choice tests is emphasized, but information concerning the socioeducational implications of relying on such tests is also included. (Author/CB)
Descriptors: Academic Achievement, Academic Aptitude, Educational Sociology, Multiple Choice Tests
Peer reviewed: Popovich, Nicholas G.; Rogers, Wallace J. – American Journal of Pharmaceutical Education, 1987
A study to determine student knowledge and confidence in that knowledge when answering multiple-choice examination questions in a nonprescription drug course is described. An alternate approach to methods of confidence testing was investigated. The knowledge and experience survey is appended. (Author/MLW)
Descriptors: College Students, Confidence Testing, Drug Education, Drug Use
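A minimal sketch of one common confidence-weighting scheme: the examinee picks an option and a confidence level, correct answers earn more credit at higher confidence, and wrong answers are penalized more. The particular payoff table is an assumption for illustration, not the approach studied in the article above.

```python
# Generic confidence-weighted scoring for a multiple-choice response.
PAYOFF = {              # confidence level -> (points if correct, points if wrong)
    "low":    (1, 0),
    "medium": (2, -1),
    "high":   (3, -2),
}

def confidence_score(chosen, key, confidence):
    correct, wrong = PAYOFF[confidence]
    return correct if chosen == key else wrong

if __name__ == "__main__":
    print(confidence_score("B", "B", "high"))    # 3
    print(confidence_score("A", "B", "high"))    # -2
    print(confidence_score("A", "B", "low"))     # 0
```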
Peer reviewed: Woodrow, Janice E. J. – Computers and Education, 1988
Describes the design and operation of two macros written in the macro programming language of Microsoft Excel for educational research applications. The first macro determines the frequency of responses to a Likert-type questionnaire or multiple-choice test; the second performs a one-way analysis of variance. (Author/LRW)
Descriptors: Analysis of Variance, Computer Software, Educational Research, Microcomputers
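A minimal Python analogue of the two computations the macros perform: tabulating response frequencies for a Likert-type or multiple-choice item, and computing the F statistic for a one-way analysis of variance. The data and group labels below are made up for illustration.

```python
# Response-frequency tabulation and a one-way ANOVA F statistic in plain Python.
from collections import Counter

def response_frequencies(responses):
    """Frequency count of each response category."""
    return Counter(responses)

def one_way_anova_F(groups):
    """F = (between-group mean square) / (within-group mean square)."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

if __name__ == "__main__":
    print(response_frequencies(["A", "B", "A", "D", "A", "C"]))
    groups = [[3, 4, 5, 4], [2, 3, 2, 3], [5, 5, 4, 6]]   # e.g. scores by class section
    print(round(one_way_anova_F(groups), 2))
```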
Peer reviewed: Divgi, D. R. – Journal of Educational Measurement, 1986
This paper discusses various issues involved in using the Rasch model with multiple-choice tests and questions the suitability of this model for multiple-choice items. Results of some past studies supporting the model are shown to be irrelevant. The effects of the model's misfit on test equating are demonstrated. (Author/JAZ)
Descriptors: Equated Scores, Goodness of Fit, Latent Trait Theory, Mathematical Models
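A minimal sketch of why guessing strains the Rasch model for multiple-choice items: the Rasch probability of a correct response falls toward 0 for low-ability examinees, while an item with a guessing parameter c (as in the three-parameter logistic model) levels off near c, roughly 1/k for a k-option item. The parameter values are illustrative; this is background for the misfit issue the article discusses, not its analysis.

```python
# Compare Rasch and 3PL item response curves at a few ability levels.
import math

def rasch(theta, b):
    """Rasch model: P(correct) = 1 / (1 + exp(-(theta - b)))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def three_pl(theta, a, b, c):
    """3PL model: P(correct) = c + (1 - c) / (1 + exp(-a * (theta - b)))."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

if __name__ == "__main__":
    b, c = 0.0, 0.25                       # item difficulty; guessing for 4 options
    for theta in (-3.0, -1.0, 0.0, 1.0, 3.0):
        print(f"theta={theta:+.1f}  Rasch={rasch(theta, b):.2f}  "
              f"3PL={three_pl(theta, 1.0, b, c):.2f}")
```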
Peer reviewed: Leal, Linda – Journal of Educational Psychology, 1987
The relationship between general metamemory and how well university students performed on in-class multiple choice exams was studied. Correlational and contingency analysis showed a positive relation between classroom performance and students' recommended use of organizational and self testing strategies when they planned study for a free-recall…
Descriptors: Academic Achievement, Correlation, Higher Education, Metacognition
Peer reviewed: Gillund, Gary; Shiffrin, Richard M. – Psychological Review, 1984
The Search of Associative Memory (SAM) model for recall is extended by assuming that a familiarity process is used for recognition. The model, formalized in a computer simulation program, correctly predicts a number of findings in the literature as well as results from an experiment on the word-frequency effect. (Author/BW)
Descriptors: Association (Psychology), Computer Simulation, Cues, Memory
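A highly simplified sketch of the familiarity idea in the SAM recognition extension: the familiarity of a test word is the sum, over stored memory images, of the product of the context-to-image and word-to-image retrieval strengths, and the item is judged "old" if that sum exceeds a criterion. All strength values below are made up for illustration and omit most of the model's machinery.

```python
# Familiarity-based recognition decision in the spirit of SAM.
def familiarity(context_strengths, word_strengths):
    """Sum over images of the product of cue-to-image strengths."""
    return sum(c * w for c, w in zip(context_strengths, word_strengths))

def recognize(context_strengths, word_strengths, criterion=1.0):
    return familiarity(context_strengths, word_strengths) > criterion

if __name__ == "__main__":
    context = [0.8, 0.7, 0.6]        # context cue strength to each stored image
    studied = [2.0, 0.1, 0.1]        # a studied word: strong link to its own image
    unstudied = [0.2, 0.1, 0.2]      # a new word: only weak residual strengths
    print(recognize(context, studied))     # True  (familiarity ~ 1.73)
    print(recognize(context, unstudied))   # False (familiarity ~ 0.35)
```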
Peer reviewed: Hodson, D. – Research in Science and Technological Education, 1984
Investigated the effect on student performance of changes in question structure and sequence on a GCE O-level multiple-choice chemistry test. One finding noted is that there was virtually no change in test reliability when the number of options per test item was reduced from five. (JN)
Descriptors: Academic Achievement, Chemistry, Multiple Choice Tests, Science Education
Peer reviewed: Schwartz, Sybil – Applied Psycholinguistics, 1983
Compares and contrasts the abilities of normal and learning disabled students to abstract spelling patterns in the course of their acquisition of spelling skills. The performance of the learning disabled was significantly below that of the normal students. In addition, error analysis indicates that the responses of the learning disabled spellers…
Descriptors: Cognitive Development, Comparative Analysis, Dictation, Language Acquisition


