Peer reviewed. Plake, Barbara S.; And Others – Educational Measurement: Issues and Practice, 1991
Possible sources of intrajudge inconsistency in standard setting are reviewed, and approaches are presented to improve the accuracy of rating. Procedures for providing judges with feedback through discussion or computerized communication are discussed. Monitoring and maintaining judges' consistency throughout the rating process are essential. (SLD)
Descriptors: Computer Assisted Instruction, Evaluators, Examiners, Feedback
Peer reviewed. Reid, Jerry B. – Educational Measurement: Issues and Practice, 1991
The training of judges to generate item ratings in standard setting, once the reference group has been defined, is discussed. It is proposed that sensitivity to the factors determining item difficulty can be improved through training. Three criteria for judging when training is sufficient are offered. (SLD)
Descriptors: Computer Assisted Instruction, Difficulty Level, Evaluators, Interrater Reliability