Descriptor
| Adaptive Testing | 2 |
| Computer Assisted Testing | 2 |
| Item Banks | 2 |
| Statistical Distributions | 2 |
| Test Construction | 2 |
| Test Length | 2 |
| Ability Identification | 1 |
| Cognitive Ability | 1 |
| Cognitive Measurement | 1 |
| Comparative Testing | 1 |
| Computer Simulation | 1 |
Publication Type
| Journal Articles | 1 |
| Numerical/Quantitative Data | 1 |
| Reports - Evaluative | 1 |
| Reports - Research | 1 |
| Speeches/Meeting Papers | 1 |
Assessments and Surveys
| California Achievement Tests | 1 |
Peer reviewed: Wainer, Howard; And Others – Journal of Educational Measurement, 1992
Computer simulations were run to measure the relationship between testlet validity and factors of item pool size and testlet length for both adaptive and linearly constructed testlets. Making a testlet adaptive yields only modest increases in aggregate validity because of the peakedness of the typical proficiency distribution. (Author/SLD)
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Computer Simulation
Rizavi, Saba; Hariharan, Swaminathan – Online Submission, 2001
The advantages that computer adaptive testing offers over linear tests have been well documented. A computer adaptive test (CAT) is more efficient than a linear test because fewer items are needed to estimate an examinee's proficiency to a desired level of precision. In the ideal situation, a CAT will result in examinees answering…
Descriptors: Guessing (Tests), Test Construction, Test Length, Computer Assisted Testing
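Neither record includes code, but the efficiency claim in the second abstract can be illustrated with a minimal simulation. This is a sketch under assumed conditions, not either study's actual method: a Rasch (1PL) model, a hypothetical pool of 60 items with difficulties spread over [-3, 3], and a stopping rule based on the standard error of a Newton-Raphson maximum-likelihood ability estimate. The adaptive test administers the unused item nearest the current estimate; the "linear" comparison administers items in a fixed order centered at difficulty 0.

```python
import math
import random

def p(theta, b):
    """Rasch (1PL) probability of answering an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def ml_estimate(responses):
    """Newton-Raphson ML ability estimate from (difficulty, 0/1 score) pairs.
    Returns (theta_hat, test_information at theta_hat)."""
    theta = 0.0
    for _ in range(30):
        probs = [p(theta, b) for b, _ in responses]
        grad = sum(u - q for (_, u), q in zip(responses, probs))
        info = sum(q * (1 - q) for q in probs)
        if info < 1e-9:
            break
        step = max(-1.0, min(1.0, grad / info))      # damped Newton step
        theta = max(-4.0, min(4.0, theta + step))    # keep estimate bounded
        if abs(step) < 1e-6:
            break
    info = sum(p(theta, b) * (1 - p(theta, b)) for b, _ in responses)
    return theta, info

def administer(true_theta, order, target_se, rng, adaptive):
    """Give items until SE(theta_hat) <= target_se; return the item count."""
    remaining = list(order)
    responses, theta = [], 0.0
    while remaining:
        if adaptive:  # pick the unused item nearest the current estimate
            b = min(remaining, key=lambda x: abs(x - theta))
        else:         # fixed, pre-set order
            b = remaining[0]
        remaining.remove(b)
        u = 1 if rng.random() < p(true_theta, b) else 0
        responses.append((b, u))
        theta, info = ml_estimate(responses)
        if info > 0 and 1.0 / math.sqrt(info) <= target_se:
            break
    return len(responses)

rng = random.Random(42)
pool = [-3 + 6 * i / 59 for i in range(60)]  # assumed difficulties on [-3, 3]
linear_order = sorted(pool, key=abs)         # fixed test centered at theta = 0

cat_lens, lin_lens = [], []
for _ in range(200):                         # simulees drawn from N(0, 1)
    theta = rng.gauss(0, 1)
    cat_lens.append(administer(theta, pool, 0.55, rng, adaptive=True))
    lin_lens.append(administer(theta, linear_order, 0.55, rng, adaptive=False))

mean_cat = sum(cat_lens) / len(cat_lens)
mean_lin = sum(lin_lens) / len(lin_lens)
print(f"mean items to target SE: adaptive {mean_cat:.1f}, linear {mean_lin:.1f}")
```

For examinees near the center of the proficiency distribution the two designs administer similar items, but for extreme examinees the adaptive test moves quickly to well-matched (high-information) items, so its average length is shorter at the same precision — which also echoes the first abstract's point that gains from adaptivity are modest when the proficiency distribution is strongly peaked.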


