Descriptor
| Comparative Analysis | 2 |
| Cutting Scores | 2 |
| Mathematical Models | 2 |
| Standard Setting (Scoring) | 2 |
| Computer Simulation | 1 |
| Estimation (Mathematics) | 1 |
| Evaluators | 1 |
| Generalizability Theory | 1 |
| Item Response Theory | 1 |
| Latent Trait Theory | 1 |
| Pass Fail Grading | 1 |
Source
| Journal of Educational… | 2 |
Publication Type
| Journal Articles | 2 |
| Reports - Descriptive | 1 |
| Reports - Research | 1 |
Peer reviewed. Kane, Michael T. – Journal of Educational Measurement, 1987
The use of item response theory models for analyzing the results of judgmental standard setting studies (the Angoff technique) for establishing minimum pass levels is discussed. A comparison of three methods indicates the traditional approach may not be best. A procedure based on generalizability theory is suggested. (GDC)
Descriptors: Comparative Analysis, Cutting Scores, Generalizability Theory, Latent Trait Theory
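The judgmental standard setting discussed above can be sketched in code. This is a minimal illustration of the standard Angoff formulation (each judge estimates, per item, the probability that a minimally competent examinee answers correctly; a judge's minimum pass level is the sum of those probabilities, and the cut score is the mean across judges). The ratings matrix is invented for illustration and is not data from the cited study.

```python
# Hypothetical ratings: ratings[j][i] is judge j's estimated probability
# that a minimally competent examinee answers item i correctly.
ratings = [
    [0.6, 0.8, 0.5, 0.9],  # judge 1
    [0.7, 0.7, 0.4, 0.8],  # judge 2
    [0.5, 0.9, 0.6, 0.9],  # judge 3
]

def angoff_cut_score(ratings):
    """Each judge's minimum pass level is the sum of their item
    probabilities; the cut score is the mean across judges."""
    per_judge = [sum(judge) for judge in ratings]
    return sum(per_judge) / len(per_judge)

print(round(angoff_cut_score(ratings), 2))
```

On a four-item test this yields a cut score on the raw-score scale (here about 2.77 out of 4); the paper's question is how such judgments behave when analyzed with item response theory or generalizability theory models.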
Peer reviewed. Plake, Barbara S.; Kane, Michael T. – Journal of Educational Measurement, 1991
Several methods for determining a passing score on an examination from individual raters' estimates of minimal pass levels were compared through simulation. The methods differed in the weight that each item's estimate received in the aggregation process. Reasons why the simplest procedure is preferred are discussed. (SLD)
Descriptors: Comparative Analysis, Computer Simulation, Cutting Scores, Estimation (Mathematics)
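The weighting contrast in this study can be illustrated with a sketch comparing an unweighted aggregation (each item's mean estimate counts equally) against a weighted one. The inverse-variance weighting shown here is a generic scheme assumed for illustration, not the specific methods compared in the study, and the ratings are invented.

```python
# Hypothetical ratings: ratings[j][i] is judge j's minimal-pass-level
# estimate for item i.
ratings = [
    [0.6, 0.8, 0.5, 0.9],  # judge 1
    [0.7, 0.7, 0.4, 0.8],  # judge 2
    [0.5, 0.9, 0.6, 0.9],  # judge 3
]

def unweighted_cut(ratings):
    """Simplest aggregation: sum of per-item mean estimates."""
    n_judges = len(ratings)
    n_items = len(ratings[0])
    item_means = [sum(r[i] for r in ratings) / n_judges
                  for i in range(n_items)]
    return sum(item_means)

def inverse_variance_cut(ratings):
    """Illustrative alternative: weight each item's mean estimate by the
    inverse of its variance across judges (items the judges agree on
    count more), then rescale so weights sum to the number of items."""
    n_judges = len(ratings)
    n_items = len(ratings[0])
    means, weights = [], []
    for i in range(n_items):
        col = [r[i] for r in ratings]
        m = sum(col) / n_judges
        var = sum((x - m) ** 2 for x in col) / n_judges
        means.append(m)
        weights.append(1.0 / (var + 1e-6))  # guard against zero variance
    total_w = sum(weights)
    return n_items * sum(w * m for w, m in zip(weights, means)) / total_w

print(round(unweighted_cut(ratings), 2), round(inverse_variance_cut(ratings), 2))
```

With these invented numbers the two rules give different cut scores, which is the kind of divergence a simulation study can evaluate; the abstract reports that the simplest (unweighted) procedure came out ahead.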


