ERIC Number: ED051302
Record Type: RIE
Publication Date: 1971
Pages: 11
Abstractor: N/A
ISBN: N/A
ISSN: N/A
EISSN: N/A
Available Date: N/A
The Effect of Differential Weighting of Individual Item Responses on the Predictive Validity and Reliability of an Aptitude Test.
Sabers, Darrell L.; White, Gordon W.
A procedure for scoring multiple-choice tests by assigning a different weight to each option of a test item is investigated. The weighting method was based on that proposed by Davis: the upper and lower 27% of a sample are identified according to some criterion measure, and the percentages of each group marking an item option are entered into a weighting table to derive that option's weight. Weights assigned to one item need not resemble those of another; an incorrect response to a difficult question may carry more weight than a correct response to an easier question. Weights for scoring the Iowa Algebra Aptitude Test (IAAT) were determined by computer, using achievement tests and the IAAT itself administered to two groups of ninth-grade algebra students and two groups of ninth-grade modern mathematics students. Correlations between the pairs of weights served as a measure of the reliability of the choice weights. The data suggest that more than 1,000 examinees would be required to provide reliable scoring weights for the distractors in this test. Cross-validation of the weights indicates a limited increase in both predictive validity and reliability. It is suggested that the main utility of the technique may be to increase reliability where greater reliability of measurement is needed. (DG)
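The weighting procedure the abstract describes can be sketched briefly. The following Python sketch is illustrative only and is not the authors' program: it assumes responses are coded as option indices, splits the sample into upper and lower 27% groups on the criterion, and, because Davis's published weighting table is not reproduced here, substitutes a smoothed log-odds contrast of the two group percentages as a stand-in for the table lookup. All names (option_weights, score, tail) are hypothetical.

import numpy as np

def option_weights(responses, criterion, tail=0.27):
    """Derive a weight for every option of every item.

    responses : (n_examinees, n_items) array of chosen option indices.
    criterion : (n_examinees,) criterion scores used to split the sample.
    Returns a dict mapping (item, option) -> weight.
    """
    n = len(criterion)
    k = max(1, int(round(tail * n)))
    order = np.argsort(criterion)
    low, high = order[:k], order[-k:]   # lower and upper 27% groups

    weights = {}
    n_items = responses.shape[1]
    for item in range(n_items):
        for opt in np.unique(responses[:, item]):
            # Proportion of each criterion group choosing this option.
            p_high = np.mean(responses[high, item] == opt)
            p_low = np.mean(responses[low, item] == opt)
            # Davis entered these two percentages into a published
            # weighting table; as an assumed stand-in, a smoothed
            # log-odds contrast gives options favored by the high
            # group positive weights and vice versa.
            weights[(item, opt)] = float(np.log((p_high + 0.01) / (p_low + 0.01)))
    return weights

def score(responses, weights):
    """Sum the option weights an examinee's choices receive."""
    n, n_items = responses.shape
    return np.array([
        sum(weights[(item, responses[i, item])] for item in range(n_items))
        for i in range(n)
    ])

The 27% split is the classical choice for upper/lower group comparisons. The abstract's reliability check could be mirrored here by computing option_weights separately on two parallel groups and correlating the resulting weight sets.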
Publication Type: N/A
Education Level: N/A
Audience: N/A
Language: N/A
Sponsor: N/A
Authoring Institution: N/A
Grant or Contract Numbers: N/A
Author Affiliations: N/A