Showing 1 to 15 of 19 results
Peer reviewed
Claudy, John G. – Applied Psychological Measurement, 1978
Option weighting is an alternative to increasing test length as a means of improving the reliability of a test. The effects on test reliability of option weighting procedures were compared in two empirical studies using four independent sets of items. Biserial weights were found to be superior. (Author/CTM)
Descriptors: Higher Education, Item Analysis, Scoring Formulas, Test Items
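The biserial weighting the abstract favors can be sketched as follows. This is an illustrative stand-in, not Claudy's procedure: it uses the point-biserial correlation between "chose this option" and total score as each option's weight, and all names and data are invented.

```python
from math import sqrt

def pearson(x, y):
    """Plain Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def option_weights(responses, totals, options="ABCD"):
    """responses: the option each examinee chose on one item.
    totals: each examinee's total test score (internal criterion).
    Weight per option = correlation between the 0/1 indicator
    'chose this option' and the total score."""
    return {opt: pearson([1 if r == opt else 0 for r in responses], totals)
            for opt in options}

# Toy data: option A is keyed, so high scorers tend to choose it.
responses = ["A", "A", "B", "A", "C", "B", "A", "D"]
totals    = [9,   8,   4,   7,   3,   5,   9,   2]
w = option_weights(responses, totals)
```

Under this scheme the keyed option earns a positive weight and distractors chosen mainly by low scorers earn negative weights, so partial knowledge is rewarded at the option level rather than only at the item level.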
Peer reviewed
Bejar, Isaac I.; Weiss, David J. – Educational and Psychological Measurement, 1977
The reliabilities yielded by several differential option weighting scoring procedures were compared among themselves as well as against conventional testing. It was found that increases in reliability due to differential option weighting were a function of inter-item correlations. Suggestions for the implementation of differential option weighting…
Descriptors: Correlation, Forced Choice Technique, Item Analysis, Scoring Formulas
Peer reviewed
Reilly, Richard R. – Educational and Psychological Measurement, 1975
Because previous reports have suggested that the lowered validity of tests scored with empirical option weights might be explained by a capitalization of the keying procedures on omitting tendencies, a procedure was devised to key options empirically with a "correction-for-guessing" constraint. (Author)
Descriptors: Achievement Tests, Graduate Study, Guessing (Tests), Scoring Formulas
Peer reviewed
Collet, Leverne S. – Journal of Educational Measurement, 1971
The purpose of this paper was to provide an empirical test of the hypothesis that elimination scores are more reliable and valid than classical corrected-for-guessing scores or weighted-choice scores. The evidence presented supports the hypothesized superiority of elimination scoring. (Author)
Descriptors: Evaluation, Guessing (Tests), Multiple Choice Tests, Scoring Formulas
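Elimination scoring as studied here can be illustrated with a common Coombs-style rule (the specific credit schedule below is an assumption, not necessarily Collet's): the examinee crosses out every option believed wrong, earning +1 per distractor eliminated and a penalty of −(m − 1) for eliminating the keyed answer on an m-option item.

```python
def elimination_score(eliminated, key, options="ABCD"):
    """Coombs-style elimination scoring (illustrative rule).
    eliminated: the set of options the examinee crossed out."""
    m = len(options)
    score = 0
    for opt in eliminated:
        if opt == key:
            score -= m - 1   # heavy penalty for eliminating the answer
        else:
            score += 1       # credit for each distractor ruled out
    return score

full  = elimination_score({"B", "C", "D"}, key="A")  # full knowledge
part  = elimination_score({"D"}, key="A")            # partial knowledge
wrong = elimination_score({"A"}, key="A")            # misinformation
```

The rule distinguishes partial knowledge (some distractors eliminated) from both blind guessing (nothing eliminated, score 0) and misinformation, which is the property the reliability comparison in the paper turns on.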
Peer reviewed
Kane, Michael; Moloney, James – Applied Psychological Measurement, 1978
The answer-until-correct (AUC) procedure requires that examinees respond to a multiple-choice item until they answer it correctly. Using a modified version of Horst's model for examinee behavior, this paper compares the effect of guessing on item reliability for the AUC procedure and the zero-one scoring procedure. (Author/CTM)
Descriptors: Guessing (Tests), Item Analysis, Mathematical Models, Multiple Choice Tests
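The AUC versus zero-one contrast can be sketched concretely. The credit schedule here is illustrative rather than the paper's: with m options, an examinee who reaches the key on attempt a earns m − a points, so first-try success on a four-option item earns 3 and exhausting every option earns 0.

```python
def auc_score(attempts_to_correct, m=4):
    """Answer-until-correct credit (illustrative schedule):
    attempts_to_correct counts responses up to and including
    the one that hit the keyed option."""
    return m - attempts_to_correct

def zero_one_score(attempts_to_correct):
    """Conventional scoring: credit only a correct first response."""
    return 1 if attempts_to_correct == 1 else 0

first_try  = (auc_score(1), zero_one_score(1))
second_try = (auc_score(2), zero_one_score(2))  # AUC gives partial credit
last_try   = (auc_score(4), zero_one_score(4))
```

Zero-one scoring discards everything after the first response, while AUC converts the number of attempts into graded information about partial knowledge, which is why guessing affects the two procedures' item reliabilities differently.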
Peer reviewed
Echternacht, Gary – Educational and Psychological Measurement, 1976
Compares various item option scoring methods with respect to coefficient alpha and a concurrent validity coefficient. Scoring methods compared were: formula scoring, a priori scoring, empirical scoring with an internal criterion, and two modifications of formula scoring. The empirically determined scoring system is seen as superior. (RC)
Descriptors: Aptitude Tests, Multiple Choice Tests, Response Style (Tests), Scoring Formulas
Peer reviewed
Jacobs, Stanley S. – Journal of Educational Measurement, 1971
Descriptors: Guessing (Tests), Individual Differences, Measurement Techniques, Multiple Choice Tests
Hendrickson, Gerry F.; Green, Bert F., Jr. – 1972
It has been shown that Guttman weighting of test options results in marked increases in the internal consistency of a test. However, the effect of this type of weighting on the structure of the test is not known. Hence, the purpose of this study is to compare the factor structure of Guttman-weighted and rights-only-weighted tests and to relate the…
Descriptors: Analysis of Variance, Correlation, Factor Analysis, Item Analysis
Bayuk, Robert J. – 1973
An investigation was conducted to determine the effects of response-category weighting and item weighting on reliability and predictive validity. Response-category weighting refers to scoring in which, for each category (including omit and "not read"), a weight is assigned that is proportional to the mean criterion score of examinees selecting…
Descriptors: Aptitude Tests, Correlation, Predictive Validity, Research Reports
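Response-category weighting as the abstract defines it reduces to a simple computation (variable names below are mine): each category's weight, including an "omit" category, is the mean criterion score of the examinees who selected that category.

```python
from collections import defaultdict

def category_weights(responses, criterion):
    """responses: each examinee's category for one item
    ('A'..'D', 'omit', ...).  criterion: each examinee's
    criterion score.  Weight = mean criterion score of the
    examinees selecting each category."""
    sums, counts = defaultdict(float), defaultdict(int)
    for cat, score in zip(responses, criterion):
        sums[cat] += score
        counts[cat] += 1
    return {cat: sums[cat] / counts[cat] for cat in sums}

responses = ["A", "A", "B", "omit", "A", "B"]
criterion = [90,  80,  40,  55,     70,  50]
w = category_weights(responses, criterion)
```

Categories favored by high-criterion examinees receive high weights, so the resulting item score is, by construction, aligned with the external criterion, which is what ties the method to predictive validity.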
Peer reviewed
Kansup, Wanlop; Hakstian, A. Ralph – Journal of Educational Measurement, 1975
Effects of logically weighting incorrect item options in conventional tests and different scoring functions with confidence tests on reliability and validity were examined. Ninth graders took conventionally administered Verbal and Mathematical Reasoning tests, scored conventionally and by a procedure assigning degree-of-correctness weights to…
Descriptors: Comparative Analysis, Confidence Testing, Junior High School Students, Multiple Choice Tests
Sabers, Darrell L.; White, Gordon W. – 1971
A procedure for scoring multiple-choice tests by assigning different weights to every option of a test item is investigated. The weighting method used was based on that proposed by Davis, which involves taking the upper and lower 27% of a sample, according to some criterion measure, and using the percentages of these groups marking an item option…
Descriptors: Computer Oriented Programs, Item Analysis, Measurement Techniques, Multiple Choice Tests
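The Davis-style procedure the abstract outlines can be approximated in a few lines. This is a simplified stand-in, not Davis's method: his weights come from published tables, whereas here each option is weighted by the raw difference between the upper-group and lower-group percentages marking it, which captures only the direction of the idea.

```python
def davis_like_weights(responses, criterion, options="ABCD", frac=0.27):
    """Take the top and bottom 27% of examinees on the criterion
    and weight each option by the difference in the percentage of
    each group marking it (simplified stand-in for Davis weights)."""
    order = sorted(range(len(criterion)), key=lambda i: criterion[i])
    k = max(1, round(frac * len(criterion)))
    lower, upper = order[:k], order[-k:]

    def pct(group, opt):
        return 100.0 * sum(responses[i] == opt for i in group) / len(group)

    return {opt: pct(upper, opt) - pct(lower, opt) for opt in options}

responses = ["A", "A", "B", "C", "A", "B", "D", "A", "C", "B", "A", "D"]
criterion = [95,  88,  40,  35,  90,  45,  30,  85,  50,  42,  80,  25]
w = davis_like_weights(responses, criterion)
```

Options marked mostly by the upper group get positive weights and options marked mostly by the lower group get negative ones; the 27% cut is the classical choice that balances group size against group separation.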
Donlon, Thomas F. – 1975
This study empirically determined the optimizing weight to be applied to the Wrongs Total Score in scoring rubrics of the general form S = R - kW, where S is the Score, R the Rights Total, k the weight and W the Wrongs Total, if reliability is to be maximized. As is well known, the traditional formula score rests on a theoretical framework which is…
Descriptors: Achievement Tests, Comparative Analysis, Guessing (Tests), Multiple Choice Tests
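The scoring rule under study is simple to write out. The classical correction-for-guessing sets k = 1/(m − 1) for m-option items; Donlon's question is what value of k actually maximizes reliability, so k is left as a free parameter here.

```python
def formula_score(rights, wrongs, k):
    """S = R - k*W: rights count minus k times the wrongs count."""
    return rights - k * wrongs

# Four-option items: the traditional guessing correction is k = 1/3.
m = 4
k_trad = 1 / (m - 1)

s_number_right = formula_score(30, 9, k=0)       # k = 0: rights-only score
s_corrected    = formula_score(30, 9, k=k_trad)  # classical formula score
```

Note that omitted items count toward neither R nor W, which is why the formula penalizes wrong answers relative to omits rather than simply counting rights.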
Echternacht, Gary – 1971
Confidence testing has been used in varying forms over the past 40 years as a method for increasing the amount of information available from objective test items. This paper traces the development of the procedure from Hevner's beginning method up to the various methods in use today and describes both the testing procedures and scoring methods…
Descriptors: Confidence Testing, Guessing (Tests), Individual Characteristics, Measurement Techniques
Echternacht, Gary – 1973
Estimates for the variance of empirically determined scoring weights are given. It is shown that test item writers should write distractors that discriminate on the criterion variable when this type of scoring is used. (Author)
Descriptors: Item Analysis, Measurement Techniques, Multiple Choice Tests, Performance Criteria
Peer reviewed
Downey, Ronald G. – Applied Psychological Measurement, 1979
This research attempted to interrelate several methods of producing option weights (i.e., Guttman internal and external weights and judges' weights) and examined their effects on reliability and on concurrent, predictive, and face validity. It was concluded that option weighting offered limited, if any, improvement over unit weighting. (Author/CTM)
Descriptors: Achievement Tests, Answer Keys, Comparative Testing, High Schools