| Descriptor | Records |
| --- | --- |
| Criterion Referenced Tests | 2 |
| Cutting Scores | 2 |
| Evaluation Criteria | 2 |
| Standard Setting (Scoring) | 2 |
| Test Reliability | 2 |
| Academic Standards | 1 |
| Formative Evaluation | 1 |
| Mastery Tests | 1 |
| Minimum Competency Testing | 1 |
| Models | 1 |
| Performance Factors | 1 |
| Source | Records |
| --- | --- |
| Journal of Educational Measurement | 1 |
| Author | Records |
| --- | --- |
| Gross, Leon J. | 1 |
| Herman, Joan L. | 1 |
| Publication Type | Records |
| --- | --- |
| Journal Articles | 1 |
| Opinion Papers | 1 |
| Reports - Evaluative | 1 |
| Reports - Research | 1 |
Gross, Leon J. – Journal of Educational Measurement, 1982 (peer reviewed)
In response to Glass' argument (EJ 198 842) that a lack of interrater reliability is an inherent deficiency in the Nedelsky technique, poor rater training and the need for a group decision procedure are presented as the actual standard-setting problems. (CM)
Descriptors: Academic Standards, Criterion Referenced Tests, Cutting Scores, Evaluation Criteria
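For readers unfamiliar with the technique at issue in the record above, the Nedelsky procedure derives a cutting score from judges' item-level ratings: for each multiple-choice item, a judge marks the distractors a minimally competent examinee would rule out, the item contributes 1 divided by the number of options remaining, and the contributions are summed across items. The sketch below is a minimal illustration with hypothetical ratings; the function name and the example data are not from the indexed article.

```python
# Illustrative sketch of the Nedelsky standard-setting calculation.
# For each item, a judge records the total number of options and how many
# distractors a minimally competent examinee could eliminate; the item's
# contribution to the cutting score is 1 / (options remaining).

def nedelsky_cutting_score(items):
    """items: list of (total_options, distractors_eliminated) tuples,
    as judged by a single rater. Returns that rater's raw cutting score."""
    score = 0.0
    for total_options, eliminated in items:
        remaining = total_options - eliminated
        if remaining < 1:
            raise ValueError("At least the correct answer must remain.")
        score += 1.0 / remaining
    return score

# Hypothetical ratings for a five-item test with four options per item.
rater_a = [(4, 2), (4, 1), (4, 3), (4, 0), (4, 2)]
rater_b = [(4, 1), (4, 1), (4, 2), (4, 0), (4, 3)]

print(round(nedelsky_cutting_score(rater_a), 2))  # 2.58
print(round(nedelsky_cutting_score(rater_b), 2))  # 2.42
```

Because each judge may eliminate a different set of distractors, independent judges can arrive at noticeably different cutting scores, which is the interrater-reliability concern behind the abstract's emphasis on rater training and a group decision procedure.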
Herman, Joan L. – 1982
A formative evaluation model for minimum competency testing is examined. The model systematically uses assessment information to support and facilitate program improvement. Within the model, four interrelated qualities are essential to a sound testing program. The content validity perspective examines how well the district has defined competency…
Descriptors: Criterion Referenced Tests, Cutting Scores, Evaluation Criteria, Formative Evaluation


