| Descriptor | Count |
| Research Reports | 6 |
| Scoring Formulas | 6 |
| Test Reliability | 6 |
| Scoring | 3 |
| Test Interpretation | 3 |
| Test Results | 3 |
| Weighted Scores | 3 |
| Item Analysis | 2 |
| Measurement Techniques | 2 |
| Multiple Choice Tests | 2 |
| Performance Criteria | 2 |
| Publication Type | Count |
| Reports - Research | 3 |
| Speeches/Meeting Papers | 2 |
| Laws, Policies, & Programs | Count |
| Elementary and Secondary… | 1 |
Bayuk, Robert J. – 1973
An investigation was conducted to determine the effects of response-category weighting and item weighting on reliability and predictive validity. Response-category weighting refers to scoring in which, for each category (including omit and "not read"), a weight is assigned that is proportional to the mean criterion score of examinees selecting…
Descriptors: Aptitude Tests, Correlation, Predictive Validity, Research Reports
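The response-category weighting described in this abstract can be illustrated with a short sketch (a minimal illustration under assumed data, not Bayuk's actual procedure): each response category of an item, including omit, receives a weight equal to the mean criterion score of the examinees who selected it.

```python
from collections import defaultdict

def category_weights(responses, criterion):
    """Assign each response category (including 'omit') a weight equal to
    the mean criterion score of the examinees who selected it.
    `responses` holds the chosen category per examinee for one item;
    `criterion` holds the matching criterion scores."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for cat, score in zip(responses, criterion):
        totals[cat] += score
        counts[cat] += 1
    return {cat: totals[cat] / counts[cat] for cat in totals}

# Hypothetical data: five examinees answering one item.
responses = ["A", "B", "A", "omit", "C"]
criterion = [52.0, 38.0, 61.0, 44.0, 35.0]
print(category_weights(responses, criterion))
# {'A': 56.5, 'B': 38.0, 'omit': 44.0, 'C': 35.0}
```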
Yap, Kim Onn – 1978
A simulation study was designed to assess the severity of regression effects when a set of selection scores is also used as pretest scores, as occurs under RMC Model A of the Elementary and Secondary Education Act Title I evaluation and reporting system. Data sets were created with various characteristics (varying data reliability and…
Descriptors: Achievement Gains, Analysis of Variance, Elementary Secondary Education, Low Achievement
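The regression effect at issue can be shown with a hypothetical simulation (made-up score distributions and error variances, not Yap's design): when a low-scoring group is selected on a fallible pretest and that same pretest serves as the baseline, the group appears to gain even with no treatment effect at all.

```python
import random

random.seed(1)

# Hypothetical model: true ability plus independent error on each occasion.
true_ability = [random.gauss(50, 10) for _ in range(10000)]
pretest  = [t + random.gauss(0, 8) for t in true_ability]   # also used for selection
posttest = [t + random.gauss(0, 8) for t in true_ability]   # no treatment effect added

# Select the lowest-scoring examinees on the (error-laden) pretest.
cutoff = sorted(pretest)[len(pretest) // 5]                  # bottom 20 percent
selected = [i for i, p in enumerate(pretest) if p <= cutoff]

pre_mean  = sum(pretest[i]  for i in selected) / len(selected)
post_mean = sum(posttest[i] for i in selected) / len(selected)
print(f"pretest mean {pre_mean:.1f}, posttest mean {post_mean:.1f}")
# The posttest mean drifts back toward 50: an apparent "gain" due to regression alone.
```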
Bejar, Issac I. – 1976
The concept of testing for partial knowledge is considered alongside the concept of tailored testing. Following the special usage of latent trait theory, the word validity is used to mean the correlation of a test with the construct the test measures. The concept of a method factor in the test is also considered as a part of the validity. The possible…
Descriptors: Achievement Tests, Adaptive Testing, Computer Assisted Testing, Confidence Testing
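Tailored (adaptive) testing, named in the descriptors above, can be sketched in a few lines (a toy illustration with made-up item difficulties and a crude step-size update, not the latent trait procedures the report analyzes): each item is chosen because its difficulty is closest to the current ability estimate, which moves up after a correct response and down after an incorrect one.

```python
def tailored_test(item_difficulties, answer_fn, n_items=5, start=0.0, step=0.5):
    """Toy adaptive test: repeatedly administer the unasked item whose
    difficulty is nearest the current ability estimate, then nudge the
    estimate up or down depending on the response."""
    remaining = dict(enumerate(item_difficulties))
    theta = start
    for _ in range(min(n_items, len(remaining))):
        item = min(remaining, key=lambda i: abs(remaining[i] - theta))
        correct = answer_fn(item, remaining.pop(item))
        theta += step if correct else -step
    return theta

# Hypothetical examinee who answers correctly whenever difficulty <= 0.8.
difficulties = [-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5]
estimate = tailored_test(difficulties, lambda i, d: d <= 0.8)
print(round(estimate, 2))
```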
Echternacht, Gary – 1973
Estimates for the variance of empirically determined scoring weights are given. It is shown that test item writers should write distractors that discriminate on the criterion variable when this type of scoring is used. (Author)
Descriptors: Item Analysis, Measurement Techniques, Multiple Choice Tests, Performance Criteria
Echternacht, Gary – 1973
This study compares various item option scoring methods with respect to coefficient alpha and a concurrent validity coefficient. The scoring methods under consideration were: (1) formula scoring, (2) a priori scoring, (3) empirical scoring with an internal criterion, and (4) two modifications of formula scoring. The study indicates a clear…
Descriptors: Item Analysis, Measurement Techniques, Multiple Choice Tests, Performance Criteria
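Of the methods compared in this study, formula scoring is the most standard and can be stated compactly (a generic sketch; the specific modifications examined in the report are not reproduced): with k options per item, the corrected score is rights minus wrongs divided by k - 1, so blind guessing yields zero in expectation.

```python
def formula_score(rights, wrongs, options_per_item):
    """Classical correction for guessing: R - W/(k - 1).
    Omitted items are neither rewarded nor penalized."""
    return rights - wrongs / (options_per_item - 1)

# Hypothetical example: 40 right, 10 wrong, 10 omitted on a 5-option test.
print(formula_score(rights=40, wrongs=10, options_per_item=5))  # 37.5
```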
Winters, Lynn – 1979
Four systems for scoring student essays were used to classify eleventh grade and undergraduate students according to writing ability. The reliabilities of the raters and the validities of the systems in classifying students were emphasized. Two analytic scoring systems--which assume that quality writing is characterized by the inclusion of certain…
Descriptors: Academic Ability, Analytical Criticism, Essays, Evaluation Criteria
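Rater reliability of the kind examined above is commonly summarized by the correlation between two raters' scores for the same essays (a generic illustration with hypothetical ratings; the specific reliability indices Winters reports are not given in the excerpt).

```python
from statistics import mean, pstdev

def interrater_correlation(rater1, rater2):
    """Pearson correlation between two raters' essay scores, one entry per essay."""
    m1, m2 = mean(rater1), mean(rater2)
    cov = mean((a - m1) * (b - m2) for a, b in zip(rater1, rater2))
    return cov / (pstdev(rater1) * pstdev(rater2))

# Hypothetical scores for six essays on a 1-6 holistic scale.
rater1 = [4, 5, 2, 6, 3, 4]
rater2 = [4, 6, 3, 5, 3, 5]
print(round(interrater_correlation(rater1, rater2), 2))  # roughly 0.82
```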


