Showing all 6 results
Peer reviewed
Romig, John Elwood; Therrien, William J.; Lloyd, John W. – Journal of Special Education, 2017
We used meta-analysis to examine the criterion validity of four scoring procedures used in curriculum-based measurement of written language. A total of 22 articles representing 21 studies (N = 21) met the inclusion criteria. Results indicated that two scoring procedures, correct word sequences and correct minus incorrect sequences, have acceptable…
Descriptors: Meta Analysis, Curriculum Based Assessment, Written Language, Scoring Formulas
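The two sequence-based scores named in this abstract can be illustrated with a minimal sketch. It assumes the scorer has already judged each word in the writing sample as correct or not; this is a simplification, since real correct-word-sequence scoring also weighs spelling, capitalization, punctuation, and syntax, and counts boundary sequences at the start and end of the sample.

```python
def correct_word_sequences(is_correct):
    """Correct word sequences (CWS): count adjacent word pairs in
    which both words are judged correct. `is_correct` is a list of
    booleans, one per word, supplied by the scorer."""
    return sum(1 for a, b in zip(is_correct, is_correct[1:]) if a and b)

def correct_minus_incorrect_sequences(is_correct):
    """Correct minus incorrect word sequences (CIWS): CWS minus the
    number of adjacent pairs containing at least one incorrect word."""
    pairs = list(zip(is_correct, is_correct[1:]))
    cws = sum(1 for a, b in pairs if a and b)
    iws = len(pairs) - cws
    return cws - iws
```

For a four-word sample scored correct, correct, incorrect, correct, CWS is 1 (only the first pair is fully correct) and CIWS is 1 − 2 = −1.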
Budescu, David V. – 1979
This paper outlines a technique for differentially weighting the options of a multiple choice test in a fashion that maximizes the item's predictive validity. The rule can be applied with different numbers of categories, and the "optimal" number of categories can be determined by significance tests and/or through the R2 criterion. Our theoretical analysis…
Descriptors: Multiple Choice Tests, Predictive Validity, Scoring Formulas, Test Items
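The flavor of differential option weighting described here can be sketched as follows. The weights below (the mean criterion score of the examinees choosing each option) are the standard optimal-scaling solution that maximizes a single item's squared correlation (R²) with the criterion; this is a sketch of that general result, not necessarily Budescu's exact procedure.

```python
from collections import defaultdict

def optimal_option_weights(choices, criterion):
    """Assign each option the mean criterion score of the examinees
    who chose it; for a single item these weights maximize the squared
    correlation (R^2) between the weighted item score and the criterion.
    Returns the weight dictionary and the resulting R^2."""
    sums, counts = defaultdict(float), defaultdict(int)
    for opt, y in zip(choices, criterion):
        sums[opt] += y
        counts[opt] += 1
    weights = {opt: sums[opt] / counts[opt] for opt in sums}

    # Score every examinee with the fitted weights, then compute the
    # squared Pearson correlation with the criterion.
    scored = [weights[c] for c in choices]
    n = len(criterion)
    mx, my = sum(scored) / n, sum(criterion) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(scored, criterion))
    sxx = sum((x - mx) ** 2 for x in scored)
    syy = sum((y - my) ** 2 for y in criterion)
    return weights, sxy ** 2 / (sxx * syy)
```

Omitted or "not read" responses can be treated as just another category, which is how the response-category weighting studied by Bayuk (below) handles them.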
Peer reviewed
Aamodt, Michael G.; Pierce, Walter L., Jr. – Educational and Psychological Measurement, 1987
Data from five separate samples were weighted using the vertical percent method (England) and the rare response method (Telenson, Alexander, and Barrett) to investigate their relative effectiveness for scoring biographical information blanks. Vertical percent scoring yielded significant validity coefficients for all samples, while rare response…
Descriptors: Biographical Inventories, Employees, Job Performance, Predictive Validity
Bayuk, Robert J. – 1973
An investigation was conducted to determine the effects of response-category weighting and item weighting on reliability and predictive validity. Response-category weighting refers to scoring in which, for each category (including omit and "not read"), a weight is assigned that is proportional to the mean criterion score of examinees selecting…
Descriptors: Aptitude Tests, Correlation, Predictive Validity, Research Reports
Sabers, Darrell L.; White, Gordon W. – 1971
A procedure for scoring multiple-choice tests by assigning different weights to every option of a test item is investigated. The weighting method used was based on that proposed by Davis, which involves taking the upper and lower 27% of a sample, according to some criterion measure, and using the percentages of these groups marking an item option…
Descriptors: Computer Oriented Programs, Item Analysis, Measurement Techniques, Multiple Choice Tests
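The upper-and-lower-groups procedure this abstract describes can be sketched roughly as follows. The sketch ranks examinees on the criterion, takes the top and bottom 27%, and weights each option by the difference between the percentages of the two groups marking it; Davis's actual weights apply a further published transformation to these percentages, which is omitted here.

```python
def upper_lower_option_weights(choices, criterion, fraction=0.27):
    """Sketch of a Davis-style upper/lower-groups weighting scheme:
    weight each option by the percentage of the upper criterion group
    marking it minus the percentage of the lower group marking it."""
    order = sorted(range(len(criterion)), key=lambda i: criterion[i])
    k = max(1, int(len(criterion) * fraction))
    lower = [choices[i] for i in order[:k]]   # bottom `fraction`
    upper = [choices[i] for i in order[-k:]]  # top `fraction`
    return {opt: 100.0 * upper.count(opt) / k - 100.0 * lower.count(opt) / k
            for opt in set(choices)}
```

An option marked mostly by high-criterion examinees gets a large positive weight; one marked mostly by low-criterion examinees gets a negative weight. The same percent-difference idea underlies the vertical percent method for biographical data studied by Aamodt and Pierce above.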
Peer reviewed
Downey, Ronald G. – Applied Psychological Measurement, 1979
This research attempted to interrelate several methods of producing option weights (i.e., Guttman internal and external weights and judges' weights) and examined their effects on reliability and on concurrent, predictive, and face validity. It was concluded that option weighting offered limited, if any, improvement over unit weighting. (Author/CTM)
Descriptors: Achievement Tests, Answer Keys, Comparative Testing, High Schools