Showing 1 to 15 of 22 results
Kane, Michael T.; Moloney, James M. – 1974
Gilman and Ferry have shown that when the student's score on a multiple choice test is the total number of responses necessary to get all items correct, substantial increases in reliability can occur. In contrast, similar procedures giving partial credit on multiple choice items have resulted in relatively small gains in reliability. The analysis…
Descriptors: Feedback, Guessing (Tests), Multiple Choice Tests, Response Style (Tests)
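The number-of-responses scoring rule described in this abstract can be sketched in a few lines. The sketch below is illustrative only; the function and input names are hypothetical rather than taken from Gilman and Ferry or Kane and Moloney.

```python
def auc_response_count(attempts_per_item):
    """Answer-until-correct score as described in the abstract: the
    total number of responses an examinee needs to get every item
    correct (a lower total reflects more knowledge).

    `attempts_per_item` is a hypothetical input: for each item, the
    number of choices the examinee tried up to and including the
    correct one.
    """
    return sum(attempts_per_item)


# Example: on a 3-item test the examinee answers item 1 on the first
# try, item 2 on the third try, and item 3 on the second try.
print(auc_response_count([1, 3, 2]))  # 6 total responses
```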
Peer reviewed
Brandenburg, Dale C.; Whitney, Douglas R. – Journal of Educational Measurement, 1972
The primary purpose of this study was to investigate the effect of various scoring methods on the reliability and validity of the Primary Test of Economic Understanding (PTEU). The PTEU was designed to be scored using the matched pair procedure. (Authors)
Descriptors: Grade 3, Objective Tests, Response Style (Tests), Scoring Formulas
Cross, Lawrence H.; And Others – 1980
A new scoring procedure for multiple choice tests attempts to assess partial knowledge and to restrict guessing. It is a variant of Coombs' elimination scoring method, adapted for use with the carbon-shield answer sheets commonly used with answer-until-correct scoring. Examinees are directed to erase the carbon shields of choices they are certain…
Descriptors: Answer Sheets, Guessing (Tests), Higher Education, Multiple Choice Tests
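A minimal sketch of Coombs-style elimination scoring, which the procedure above adapts: the examinee eliminates the options believed to be wrong, earning credit for each incorrect option eliminated and a penalty if the keyed option is eliminated. The exact weights used by Cross et al. are not given in the abstract; the values below follow the commonly cited Coombs rule and should be read as an assumption.

```python
def elimination_score(eliminated, key, n_options):
    """Coombs-style elimination scoring (assumed weights):
    +1 for each incorrect option the examinee eliminates,
    -(n_options - 1) if the keyed (correct) option is eliminated.

    `eliminated` is the set of options crossed out (or, in the
    carbon-shield variant, erased); `key` is the correct option.
    """
    if key in eliminated:
        return -(n_options - 1)
    return len(eliminated)


# Examples for a 4-option item keyed 'c':
print(elimination_score({'a', 'd'}, 'c', 4))  # 2: two wrong options ruled out
print(elimination_score({'c'}, 'c', 4))       # -3: the correct option was eliminated
```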
Peer reviewed
Stauffer, A. J. – Educational and Psychological Measurement, 1974
Descriptors: Attitude Change, Attitude Measures, Comparative Analysis, Educational Research
Peer reviewed
Echternacht, Gary – Educational and Psychological Measurement, 1976
Compares various item option scoring methods with respect to coefficient alpha and a concurrent validity coefficient. Scoring methods compared were: formula scoring, a priori scoring, empirical scoring with an internal criterion, and two modifications of formula scoring. The empirically determined scoring system is seen as superior. (RC)
Descriptors: Aptitude Tests, Multiple Choice Tests, Response Style (Tests), Scoring Formulas
Peer reviewed
Albanese, Mark A. – Evaluation and the Health Professions, 1982
Findings regarding formats and scoring formulas for multiple-choice test items with more than one correct response are presented. Strong cluing effects in the Type K format, which inflate the percentage of correct responses and reduce test reliability, argue for use of the Type X format. Alternative scoring methods are discussed. (Author/CM)
Descriptors: Health Occupations, Multiple Choice Tests, Professional Education, Response Style (Tests)
Peer reviewed
Poizner, Sharon B.; And Others – Applied Psychological Measurement, 1978
Binary, probability, and ordinal scoring procedures for multiple-choice items were examined. In two situations, it was found that both the probability and ordinal scoring systems were more reliable than the binary scoring method. (Author/CTM)
Descriptors: Confidence Testing, Guessing (Tests), Higher Education, Multiple Choice Tests
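The abstract names the three scoring systems without defining them, so the sketch below is only one plausible reading: binary scoring as right/wrong, probability scoring as the probability the examinee assigns to the keyed option, and ordinal scoring based on the rank given to that option. The specific rules used in the study may differ.

```python
def binary_score(choice, key):
    # Conventional 1/0 scoring of a single choice.
    return 1 if choice == key else 0

def probability_score(probs, key):
    # One common confidence-testing rule (assumed here): credit equals
    # the probability the examinee assigned to the keyed option.
    return probs.get(key, 0.0)

def ordinal_score(ranking, key, n_options):
    # Assumed ordinal rule: full credit when the keyed option is ranked
    # first, decreasing linearly with its rank.
    rank = ranking.index(key) + 1  # 1 = most preferred
    return (n_options - rank) / (n_options - 1)


# Examples for a 4-option item keyed 'b':
print(binary_score('b', 'b'))                                            # 1
print(probability_score({'a': 0.1, 'b': 0.6, 'c': 0.2, 'd': 0.1}, 'b'))  # 0.6
print(ordinal_score(['b', 'c', 'a', 'd'], 'b', 4))                       # 1.0
```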
Peer reviewed
Birenbaum, Menucha; Tatsuoka, Kikumi K. – Journal of Educational Measurement, 1983
The outcomes of two scoring methods for free-response tests (one based on an error analysis and the other on a conventional method), compared in terms of reliability and dimensionality, indicate that the conventional method is inferior in both respects. (Author/PN)
Descriptors: Achievement Tests, Algorithms, Data, Junior High Schools
Cross, Lawrence H. – 1975
A novel scoring procedure was investigated in order to obtain scores from a conventional multiple-choice test that would be free of the guessing component or contain a known guessing component even though examinees were permitted to guess at will. Scores computed with the experimental procedure are based not only on the number of items answered…
Descriptors: Algebra, Comparative Analysis, Guessing (Tests), High Schools
Bayuk, Robert J. – 1973
An investigation was conducted to determine the effects of response-category weighting and item weighting on reliability and predictive validity. Response-category weighting refers to scoring in which, for each category (including omit and "not read"), a weight is assigned that is proportional to the mean criterion score of examinees selecting…
Descriptors: Aptitude Tests, Correlation, Predictive Validity, Research Reports
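The response-category weighting described above can be illustrated directly from the abstract: each category of each item (including omit) receives a weight proportional to the mean criterion score of the examinees who selected it. The sketch below is an assumed implementation; the variable names and the proportionality constant (taken as 1) are illustrative.

```python
from collections import defaultdict

def category_weights(responses, criterion):
    """Estimate response-category weights for one item.

    `responses` lists the category each examinee selected (e.g. 'a'-'e',
    'omit', 'not read'); `criterion` is the matching list of criterion
    scores. Each category's weight is the mean criterion score of the
    examinees who chose it, as the abstract describes.
    """
    totals, counts = defaultdict(float), defaultdict(int)
    for cat, y in zip(responses, criterion):
        totals[cat] += y
        counts[cat] += 1
    return {cat: totals[cat] / counts[cat] for cat in totals}


# Example: five examinees with scores on an external criterion measure.
weights = category_weights(['a', 'b', 'a', 'omit', 'b'],
                           [3.0, 1.0, 5.0, 2.0, 2.0])
print(weights)  # {'a': 4.0, 'b': 1.5, 'omit': 2.0}
```

An examinee's weighted score on the item is then simply the weight attached to the category he or she selected.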
Peer reviewed
Cross, Lawrence; Frary, Robert – Journal of Educational Measurement, 1977
Corrected-for-guessing scores on multiple-choice tests depend upon the ability and willingness of examinees to guess when they have some basis for answering, and to avoid guessing when they have no basis. The present study determined the extent to which college students were able and willing to comply with formula-scoring directions. (Author/CTM)
Descriptors: Guessing (Tests), Higher Education, Individual Characteristics, Multiple Choice Tests
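For reference, the correction-for-guessing (formula-scoring) rule behind studies like this one is the familiar rights-minus-a-fraction-of-wrongs score, with the number of options per item as a parameter. The sketch below is a generic illustration, not code from the study.

```python
def formula_score(rights, wrongs, n_options):
    """Conventional correction for guessing: S = R - W / (k - 1),
    where k is the number of options per item. Omitted items
    contribute nothing, which is why examinees are directed to omit
    rather than guess blindly.
    """
    return rights - wrongs / (n_options - 1)


# 60-item, 5-option test: 40 right, 12 wrong, 8 omitted.
print(formula_score(40, 12, 5))  # 37.0
```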
Kane, Michael T.; Moloney, James M. – 1976
The Answer-Until-Correct (AUC) procedure has been proposed in order to increase the reliability of multiple-choice items. A model for examinees' behavior when they must respond to each item until they answer it correctly is presented. An expression for the reliability of AUC items, as a function of the characteristics of the item and the scoring…
Descriptors: Guessing (Tests), Item Analysis, Mathematical Models, Multiple Choice Tests
Peer reviewed
Kansup, Wanlop; Hakstian, A. Ralph – Journal of Educational Measurement, 1975
Effects of logically weighting incorrect item options in conventional tests and different scoring functions with confidence tests on reliability and validity were examined. Ninth graders took conventionally administered Verbal and Mathematical Reasoning tests, scored conventionally and by a procedure assigning degree-of-correctness weights to…
Descriptors: Comparative Analysis, Confidence Testing, Junior High School Students, Multiple Choice Tests
Peer reviewed
Hakstian, A. Ralph; Kansup, Wanlop – Journal of Educational Measurement, 1975
A comparison of reliability and validity was made for three testing procedures: 1) responding conventionally to Verbal Ability and Mathematical Reasoning tests; 2) using a confidence weighting response procedure with the same tests; and 3) using the elimination response method. The experimental testing procedures were not psychometrically superior…
Descriptors: Comparative Analysis, Confidence Testing, Guessing (Tests), Junior High School Students
Donlon, Thomas F. – 1975
This study empirically determined the optimizing weight to be applied to the Wrongs Total score in scoring rubrics of the general form S = R - kW, where S is the Score, R the Rights Total, k the weight, and W the Wrongs Total, if reliability is to be maximized. As is well known, the traditional formula score rests on a theoretical framework which is…
Descriptors: Achievement Tests, Comparative Analysis, Guessing (Tests), Multiple Choice Tests
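Donlon's question, which value of k in S = R - kW maximizes reliability, can be illustrated with a simple grid search over candidate weights, computing coefficient alpha on item scores of +1 for a right and -k for a wrong (0 for an omit). This is an assumed, minimal reconstruction of the idea, not the author's procedure; the data coding and toy data are hypothetical.

```python
import numpy as np

def cronbach_alpha(item_scores):
    # item_scores: examinees x items matrix of scored responses.
    n_items = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1).sum()
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return n_items / (n_items - 1) * (1 - item_vars / total_var)

def best_wrongs_weight(status, weights=np.linspace(0.0, 1.0, 21)):
    """Grid-search the wrongs weight k in S = R - kW.

    `status` is an examinees x items matrix coded 1 = right, 0 = wrong,
    -1 = omitted (a hypothetical coding for this sketch). Each item is
    rescored as +1 for a right, -k for a wrong, 0 for an omit, and the
    k giving the highest coefficient alpha is returned.
    """
    best = (None, -np.inf)
    for k in weights:
        scored = np.where(status == 1, 1.0, np.where(status == 0, -k, 0.0))
        alpha = cronbach_alpha(scored)
        if alpha > best[1]:
            best = (k, alpha)
    return best


# Toy data: 300 examinees, 40 items, an ability factor driving correctness,
# and a 15% chance of omitting an item the examinee would otherwise miss.
rng = np.random.default_rng(0)
ability = rng.normal(size=(300, 1))
p_right = 1 / (1 + np.exp(-(ability - rng.normal(size=(1, 40)))))
u = rng.random((300, 40))
status = np.where(u < p_right, 1, np.where(u < p_right + 0.15, -1, 0))
print(best_wrongs_weight(status))  # (best k, alpha at that k)
```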