Micceri, Theodore; Parasher, Pradnya; Waugh, Gordon W.; Herreid, Charlene – Online Submission, 2009
An extensive review of the research literature and a study comparing over 36,000 survey responses with archival true scores indicated that one should expect at least three percent random error for even the least ambiguous of self-report measures. The Gulliver Effect occurs when a small proportion of error in a sizable subpopulation exerts…
Descriptors: Error of Measurement, Minority Groups, Measurement, Computation
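
The truncated abstract above turns on a piece of arithmetic: even a small error rate in a large subpopulation can swamp the reported counts of a small one. The following is a minimal sketch of that arithmetic with invented numbers, not data from the study:

```python
# Illustrative sketch of the "Gulliver Effect" described above.
# All numbers are hypothetical, chosen only to show the arithmetic.

majority_n = 95_000   # large subpopulation (hypothetical)
minority_n = 5_000    # small subpopulation (hypothetical)
error_rate = 0.03     # the ~3% random self-report error floor the abstract cites

# Errors flowing out of the large group into the small group's category
false_minority = majority_n * error_rate                      # 2,850 misreports
reported_minority = minority_n * (1 - error_rate) + false_minority

share_error = false_minority / reported_minority
print(f"Reported minority count: {reported_minority:,.0f}")   # 7,700
print(f"Share of that count that is misreport: {share_error:.1%}")  # 37.0%
# A 3% error rate in the large group inflates the small group's reported
# count by more than a third: spillover from the large group dominates.
```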
Karkee, Thakur B.; Wright, Karen R. – Online Submission, 2004
Different item response theory (IRT) models may be employed for item calibration. A change of testing vendors, for example, may result in the adoption of a model different from the one previously used with a testing program. To provide scale continuity and preserve cut score integrity, item parameter estimates from the new model must be linked to the…
Descriptors: Measures (Individuals), Evaluation Criteria, Testing, Integrity
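
The abstract does not say which linking method the authors applied. As a hedged sketch, the classic mean/sigma transformation is one standard way to place new-model item parameter estimates on the old scale using common (anchor) items; alternatives include mean/mean, Haebara, and Stocking-Lord. All values below are hypothetical:

```python
# Mean/sigma linking sketch: estimate the linear transformation
# theta_old = A * theta_new + B from anchor-item difficulty estimates,
# then rescale the new calibration's parameters onto the old scale.
import numpy as np

def mean_sigma_link(b_old, b_new, a_new):
    A = np.std(b_old, ddof=1) / np.std(b_new, ddof=1)
    B = np.mean(b_old) - A * np.mean(b_new)
    # Difficulties shift and scale; discriminations scale inversely.
    return A * np.asarray(b_new) + B, np.asarray(a_new) / A, (A, B)

# Hypothetical anchor-item estimates from the old and new calibrations
b_old = np.array([-1.2, -0.4, 0.1, 0.8, 1.5])
b_new = np.array([-1.0, -0.2, 0.3, 1.0, 1.7])
a_new = np.array([0.9, 1.1, 1.3, 0.8, 1.0])

b_linked, a_linked, (A, B) = mean_sigma_link(b_old, b_new, a_new)
print(f"A = {A:.3f}, B = {B:.3f}")
```

Because the linked difficulties sit on the old scale, existing cut scores can be carried forward unchanged, which is the continuity concern the abstract raises.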


