Descriptor

| Descriptor | Count |
| --- | --- |
| Content Validity | 2 |
| Difficulty Level | 2 |
| Higher Education | 2 |
| Test Items | 2 |
| Beginning Teachers | 1 |
| Comparative Analysis | 1 |
| Cutting Scores | 1 |
| Education Majors | 1 |
| Elementary Secondary Education | 1 |
| Evaluators | 1 |
| Item Analysis | 1 |
Publication Type

| Publication Type | Count |
| --- | --- |
| Reports - Research | 2 |
| Journal Articles | 1 |
| Speeches/Meeting Papers | 1 |
Audience

| Audience | Count |
| --- | --- |
| Researchers | 1 |
Assessments and Surveys

| Assessment | Count |
| --- | --- |
| National Teacher Examinations | 1 |
Peer reviewed. Poggio, John P.; And Others – Educational and Psychological Measurement, 1987
College faculty served as judges to rate the instructional validity of items on the National Teacher Examinations Core Battery. The ratings were examined in relation to actual test performance and to panelists' ratings of item difficulty and relevance. (Author/GDC)
Descriptors: Beginning Teachers, Content Validity, Difficulty Level, Education Majors
Hambleton, Ronald K.; And Others – 1987
The study compared two promising item response theory (IRT) item-selection methods, optimal and content-optimal, with two non-IRT item-selection methods, random and classical, for use in fixed-length certification exams. The four methods were used to construct 20-item exams from a pool of approximately 250 items taken from a 1985 certification…
Descriptors: Comparative Analysis, Content Validity, Cutting Scores, Difficulty Level
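
The "optimal" IRT selection the abstract refers to is conventionally implemented by choosing the items with the greatest Fisher information at the cutting score. The sketch below illustrates that general idea under the two-parameter logistic (2PL) model with a hypothetical, randomly generated item pool; the study's actual model, parameters, and cut score are not given here, so this is an assumption-laden illustration, not the authors' procedure.

```python
import math
import random

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability level theta: a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def optimal_selection(pool, theta_cut, n_items=20):
    """Select the n_items that are most informative at the cutting score."""
    ranked = sorted(
        pool,
        key=lambda item: item_information(theta_cut, item["a"], item["b"]),
        reverse=True,
    )
    return ranked[:n_items]

# Hypothetical pool of ~250 items with discrimination (a) and difficulty (b);
# the real study drew items from a 1985 certification exam pool.
random.seed(0)
pool = [
    {"id": i, "a": random.uniform(0.5, 2.0), "b": random.gauss(0.0, 1.0)}
    for i in range(250)
]
exam = optimal_selection(pool, theta_cut=0.0)  # assumed cut score at theta = 0
```

A content-optimal variant would apply the same information ranking within content categories so the 20-item exam also satisfies a content blueprint, while the random and classical baselines ignore IRT information entirely.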


