Showing 241 to 255 of 532 results
Peer reviewed
Serlin, Ronald C.; Kaiser, Henry F. – Educational and Psychological Measurement, 1978
When multiple-choice tests are scored in the usual manner, giving each correct answer one point, information concerning response patterns is lost. A method for utilizing this information is suggested. An example is presented and compared with two conventional methods of scoring. (Author/JKS)
Descriptors: Correlation, Factor Analysis, Item Analysis, Multiple Choice Tests
Peer reviewed
Colgan, L. H. – International Journal of Mathematical Education in Science and Technology, 1977
Multiple-choice mathematics tests and other measures of achievement in a mathematics course for engineering students were compared. Although the results were highly correlated, there were some discrepancies. (SD)
Descriptors: Achievement Tests, Engineering Education, Higher Education, Mathematics Education
Peer reviewed
Frisbie, David A.; Druva, Cynthia A. – Journal of Educational Measurement, 1986
This study was designed to examine the level of dependence within multiple true-false test-item clusters by computing sets of item correlations with data from a test composed of both multiple true-false and multiple-choice items. (Author/LMO)
Descriptors: Cluster Analysis, Correlation, Higher Education, Multiple Choice Tests
Hester, D. A. – Didaskalos, 1976
The question of whether the range of assessment for a course in Greek literature in translation can be broadened to include multiple-choice tests is explored. An experimental test is described and evaluated. (RM)
Descriptors: Greek Literature, Language Instruction, Multiple Choice Tests, Objective Tests
Peer reviewed
Krauft, Conrad C.; Beggs, Donald L. – Journal of Experimental Education, 1973
The purpose of the study was to determine whether a subject-weighted (SW) multiple-choice test-taking procedure would result in higher and more reliable scores than the conventional (C) multiple-choice test-taking procedure, both overall and at different levels of risk taking. (Author)
Descriptors: Attitudes, Educational Research, Multiple Choice Tests, Questionnaires
Peer reviewed
Cureton, Edward E. – Educational and Psychological Measurement, 1971
A rebuttal of Frary's 1969 article in Educational and Psychological Measurement. (MS)
Descriptors: Error of Measurement, Guessing (Tests), Multiple Choice Tests, Scoring Formulas
Peer reviewed
Willson, Victor L. – Educational and Psychological Measurement, 1982
The Serlin-Kaiser procedure is used to complete a principal components solution for scoring weights for all options of a given item. Coefficient alpha is maximized for a given multiple choice test. (Author/GK)
Descriptors: Analysis of Covariance, Factor Analysis, Multiple Choice Tests, Scoring Formulas
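As background for the Serlin-Kaiser and Willson entries, which concern maximizing coefficient alpha through option scoring weights: the alpha statistic itself can be sketched as follows (an illustrative computation under the standard textbook definition, not code from either study; the choice of population variances, ddof=0, is one common convention).

```python
import numpy as np

def coefficient_alpha(scores: np.ndarray) -> float:
    """Cronbach's coefficient alpha for an examinees-by-items score matrix.

    alpha = (k/(k-1)) * (1 - sum of item variances / total-score variance).
    Population variances (ddof=0) are used throughout.
    """
    k = scores.shape[1]                          # number of items
    item_var_sum = scores.var(axis=0, ddof=0).sum()
    total_var = scores.sum(axis=1).var(ddof=0)   # variance of examinee totals
    return (k / (k - 1)) * (1.0 - item_var_sum / total_var)
```

When every examinee responds perfectly consistently across items, alpha reaches 1.0; option weighting schemes like those above search for item scores that push this value as high as possible.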
Peer reviewed
Bush, Martin E. – Quality Assurance in Education: An International Perspective, 2006
Purpose: To provide educationalists with an understanding of the key quality issues relating to multiple-choice tests, and a set of guidelines for the quality assurance of such tests. Design/methodology/approach: The discussion of quality issues is structured to reflect the order in which those issues naturally arise. It covers the design of…
Descriptors: Multiple Choice Tests, Test Reliability, Educational Quality, Quality Control
Peer reviewed
Oosterhof, Albert C.; Glasnapp, Douglas R. – Journal of Experimental Education, 1974
The present study was initiated to investigate the comparability of multiple-choice and true-false item formats when the time necessary to respond to each type of item was equated empirically. (Editor)
Descriptors: Data Analysis, Guessing (Tests), Multiple Choice Tests, Objective Tests
Frary, Robert B. – 1980
Ordinal response modes for multiple choice tests are those under which the examinee marks one or more choices in an effort to identify the correct choice, or to include it in a proper subset of the choices. Two ordinal response modes, answer-until-correct and Coombs' elimination of choices which examinees identify as wrong, were analyzed for scoring…
Descriptors: Guessing (Tests), Multiple Choice Tests, Responses, Scoring
Garvin, Alfred D.; Ralston, Nancy C. – 1970
Confidence Weighting (CW), after Ebel, and Multiple Responding (MR), after Coombs, are compared empirically to determine which improved test reliability more in the case of a course pretest derived from the final examination. It was hypothesized that MR, which purportedly measures partial knowledge, would be more effective than CW, which measures…
Descriptors: College Students, Educational Testing, Multiple Choice Tests, Objective Tests
Kane, Michael T.; Moloney, James M. – 1974
Gilman and Ferry have shown that when the student's score on a multiple choice test is the total number of responses necessary to get all items correct, substantial increases in reliability can occur. In contrast, similar procedures giving partial credit on multiple choice items have resulted in relatively small gains in reliability. The analysis…
Descriptors: Feedback, Guessing (Tests), Multiple Choice Tests, Response Style (Tests)
Schmeiser, Cynthia Board; Whitney, Douglas R. – 1973
Violations of four selected principles of writing multiple-choice items were introduced into an undergraduate religion course mid-term examination. Three of the flaws significantly increased test difficulty. KR-sub-20 values were lower for all of the tests containing the flawed items than for the "good" versions of the items but significantly so…
Descriptors: Item Analysis, Multiple Choice Tests, Research Reports, Test Construction
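As context for the KR-20 values the Schmeiser-Whitney entry compares across flawed and "good" item versions, the Kuder-Richardson Formula 20 for dichotomously scored items can be sketched as (illustrative only, not taken from the study; total-score variance here uses the population form, ddof=0).

```python
import numpy as np

def kr20(responses: np.ndarray) -> float:
    """KR-20 reliability for a 0/1 examinees-by-items response matrix.

    KR-20 = (k/(k-1)) * (1 - sum(p*q) / total-score variance),
    where p is the proportion of examinees answering each item correctly
    and q = 1 - p. Equivalent to coefficient alpha for binary items.
    """
    k = responses.shape[1]
    p = responses.mean(axis=0)       # proportion correct per item
    pq_sum = (p * (1.0 - p)).sum()   # summed binary item variances
    total_var = responses.sum(axis=1).var(ddof=0)
    return (k / (k - 1)) * (1.0 - pq_sum / total_var)
```

Flawed items that add difficulty without adding discrimination tend to depress this coefficient, which is the pattern the study reports.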
Peer reviewed
Frary, Robert B.; And Others – Journal of Experimental Education, 1977
To date, a theoretical basis has not been developed for determining changes in reliability when score points from random guessing are eliminated and those from non-random guessing are retained. This paper presents a derivation of an expression for the reliability coefficient which displays the effect of deleting score components due to random…
Descriptors: Data Analysis, Guessing (Tests), Multiple Choice Tests, Scoring Formulas
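The scoring formulas at issue in the Frary, Cureton, and guessing-correction entries above are variants of the classical correction for guessing, which can be sketched as (a standard textbook formula, not the specific expression derived in any study listed here):

```python
def formula_score(num_right: int, num_wrong: int, num_choices: int) -> float:
    """Classical correction-for-guessing score: R - W/(c-1).

    Omitted items are neither rewarded nor penalized, so an examinee
    guessing blindly on c-choice items expects zero net gain from guessing.
    """
    return num_right - num_wrong / (num_choices - 1)
```

For example, 20 right and 10 wrong on five-choice items yields 20 - 10/4 = 17.5; the debate in the entries above concerns when such corrections actually improve reliability.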
Peer reviewed
Lord, Frederic M. – Journal of Educational Measurement, 1977
Two approaches for determining the optimal number of choices for a test item, presently in the literature, are compared with two new approaches. (Author)
Descriptors: Forced Choice Technique, Latent Trait Theory, Multiple Choice Tests, Test Items