Publication Date
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 0 |
| Since 2007 (last 20 years) | 1 |
Descriptor
| Models | 3 |
| Predictive Measurement | 3 |
| Item Response Theory | 2 |
| Academic Ability | 1 |
| Adaptive Testing | 1 |
| Adults | 1 |
| Armed Forces | 1 |
| Bayesian Statistics | 1 |
| Cognitive Measurement | 1 |
| Cognitive Processes | 1 |
| Comparative Analysis | 1 |
Source
| Applied Psychological Measurement | 3 |
Author
| Dunlap, William P. | 1 |
| Johnson, Matthew S. | 1 |
| Loken, Eric | 1 |
| Rulison, Kelly L. | 1 |
| Sinharay, Sandip | 1 |
| Stern, Hal S. | 1 |
Publication Type
| Journal Articles | 3 |
| Reports - Research | 3 |
Rulison, Kelly L.; Loken, Eric – Applied Psychological Measurement, 2009
A result that is difficult to interpret in computerized adaptive testing (CAT) occurs when an ability estimate initially drops and then rises continuously until the test ends, suggesting that the true ability may be higher than the final estimate implies. This study explains why this asymmetry occurs and shows that early mistakes by high-ability…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Academic Ability
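As an illustration of the asymmetry described above, the sketch below simulates a fixed-length Rasch-model CAT in which a high-ability examinee misses the first two items; the EAP ability estimate is tracked as each new item is chosen to match the current estimate. This is a minimal sketch under assumed settings (item bank, quadrature grid, forced early errors), not the authors' simulation design.

```python
# Minimal Rasch-model CAT sketch (illustrative assumptions, not the authors' code).
import numpy as np

rng = np.random.default_rng(0)

def p_correct(theta, b):
    """Rasch probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

bank = np.linspace(-3, 3, 200)       # assumed item bank of difficulties
grid = np.linspace(-4, 4, 161)       # quadrature grid for EAP estimation
prior = np.exp(-0.5 * grid**2)       # standard normal prior (unnormalised)

theta_true = 2.0                     # high-ability examinee
n_items, forced_errors = 20, 2       # first two responses forced incorrect
like = np.ones_like(grid)
used, estimates = [], []

for k in range(n_items):
    est = np.sum(grid * like * prior) / np.sum(like * prior)  # current EAP
    # administer the unused item whose difficulty is closest to the current estimate
    free = [i for i in range(len(bank)) if i not in used]
    j = min(free, key=lambda i: abs(bank[i] - est))
    used.append(j)
    b = bank[j]
    correct = False if k < forced_errors else rng.random() < p_correct(theta_true, b)
    like *= p_correct(grid, b) if correct else (1 - p_correct(grid, b))
    estimates.append(np.sum(grid * like * prior) / np.sum(like * prior))

print(np.round(estimates, 2))  # trajectory of EAP estimates across the test
```

Under these assumptions, the printed trajectory typically dips after the early errors and then climbs steadily without fully reaching the true ability before the fixed-length test ends.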
Sinharay, Sandip; Johnson, Matthew S.; Stern, Hal S. – Applied Psychological Measurement, 2006
Model checking in item response theory (IRT) is an underdeveloped area, and there is no universally accepted tool for checking IRT models. The posterior predictive model-checking method is a popular Bayesian model-checking tool because it has intuitive appeal, is simple to apply, has a strong theoretical basis, and can provide graphical or numerical…
Descriptors: Predictive Measurement, Item Response Theory, Bayesian Statistics, Models
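For readers unfamiliar with the method, the sketch below shows the basic posterior predictive model-checking loop for a Rasch model: draw from the posterior, generate replicated data from each draw, and compare a discrepancy measure computed on the observed and replicated data. The sampler, the discrepancy (standard deviation of raw scores), and the simulated data are illustrative assumptions, not the procedure evaluated in the article.

```python
# Minimal posterior predictive model-checking sketch for a Rasch model
# (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(1)

def p(theta, b):
    """Rasch probability matrix: persons in rows, items in columns."""
    return 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))

# simulate a small Rasch dataset
n_persons, n_items = 100, 10
theta_true = rng.normal(0, 1, n_persons)
b_true = rng.normal(0, 1, n_items)
y = (rng.random((n_persons, n_items)) < p(theta_true, b_true)).astype(int)

def loglik_rows(theta, b):       # log-likelihood contribution per person
    pr = p(theta, b)
    return np.sum(y * np.log(pr) + (1 - y) * np.log(1 - pr), axis=1)

def loglik_cols(theta, b):       # log-likelihood contribution per item
    pr = p(theta, b)
    return np.sum(y * np.log(pr) + (1 - y) * np.log(1 - pr), axis=0)

# Metropolis-within-Gibbs: persons are conditionally independent given b,
# and items given theta, so elementwise accept/reject is valid.
theta, b = np.zeros(n_persons), np.zeros(n_items)
draws = []
for it in range(3000):
    prop = theta + rng.normal(0, 0.5, n_persons)
    log_r = (loglik_rows(prop, b) - 0.5 * prop**2) - (loglik_rows(theta, b) - 0.5 * theta**2)
    theta = np.where(np.log(rng.random(n_persons)) < log_r, prop, theta)

    prop = b + rng.normal(0, 0.3, n_items)
    log_r = (loglik_cols(theta, prop) - 0.5 * prop**2) - (loglik_cols(theta, b) - 0.5 * b**2)
    b = np.where(np.log(rng.random(n_items)) < log_r, prop, b)

    if it >= 1000 and it % 20 == 0:        # thin after burn-in
        draws.append((theta.copy(), b.copy()))

# discrepancy: standard deviation of raw (number-correct) scores
obs_stat = np.std(y.sum(axis=1))
rep_stats = [np.std((rng.random(y.shape) < p(th, bb)).sum(axis=1)) for th, bb in draws]
ppp = np.mean(np.array(rep_stats) >= obs_stat)
print(f"posterior predictive p-value: {ppp:.2f}")
```

A posterior predictive p-value near 0 or 1 indicates that the model fails to reproduce that feature of the observed data; graphical comparisons of observed and replicated discrepancies are used in the same way.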
Dunlap, William P.; And Others – Applied Psychological Measurement, 1989 (peer reviewed)
The reliability of derived measures from four cognitive paradigms was studied using 19 Navy enlisted men (aged 18 to 24 years). The paradigms were graphemic and phonemic analysis, semantic memory retrieval, lexical decision making, and letter classification. Results indicate that derived scores may have low reliability. (SLD)
Descriptors: Adults, Armed Forces, Cognitive Measurement, Cognitive Processes
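The low reliability of derived scores has a simple classical-test-theory explanation when the derived score is a difference of two component scores, a common case in such paradigms; the sketch below is illustrative, with textbook-style numbers rather than results from the study.

```python
# Classical-test-theory reliability of a difference score D = X - Y
# (illustrative values, not data from the study).
def difference_score_reliability(rel_x, rel_y, r_xy, sd_x=1.0, sd_y=1.0):
    """Reliability of D = X - Y given component reliabilities and their correlation."""
    num = sd_x**2 * rel_x + sd_y**2 * rel_y - 2 * r_xy * sd_x * sd_y
    den = sd_x**2 + sd_y**2 - 2 * r_xy * sd_x * sd_y
    return num / den

# Two well-measured components that correlate highly yield an unreliable difference:
print(difference_score_reliability(rel_x=0.80, rel_y=0.80, r_xy=0.70))  # ~0.33
```

Even with component reliabilities of .80, a correlation of .70 between the components leaves the difference score with a reliability of only about .33, which is the general pattern the abstract describes.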
