Stocking, Martha L.; And Others – Applied Psychological Measurement, 1993 (peer reviewed)
A method of automatically selecting items for inclusion in a test with constraints on item content and statistical properties was applied to real data. Tests constructed manually from the same data and constraints were compared to tests constructed automatically. Results show areas in which automated assembly can improve test construction. (SLD)
Descriptors: Algorithms, Automation, Comparative Testing, Computer Assisted Testing
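The entry above describes constraint-based automated item selection. As a rough illustration only (not the weighted-deviations optimization of Stocking et al.; the item pool, categories, and information values below are invented), a greedy sketch of picking items to maximize information while honoring content quotas might look like:

```python
# Simplified greedy sketch of automated test assembly under content
# constraints. Not the Stocking et al. method; data are invented.

def assemble_test(items, quotas, length):
    """Pick `length` items maximizing information subject to
    per-category quotas. items: list of (id, category, information)."""
    chosen, counts = [], {cat: 0 for cat in quotas}
    # Consider the most informative candidates first.
    for item_id, cat, info in sorted(items, key=lambda t: -t[2]):
        if len(chosen) == length:
            break
        if counts[cat] < quotas[cat]:  # content constraint
            chosen.append(item_id)
            counts[cat] += 1
    return chosen

pool = [(1, "algebra", 0.9), (2, "algebra", 0.8), (3, "geometry", 0.7),
        (4, "geometry", 0.6), (5, "algebra", 0.5)]
print(assemble_test(pool, {"algebra": 2, "geometry": 1}, 3))  # [1, 2, 3]
```

A production assembler would solve this as an integer program rather than greedily, which is part of what the automated-versus-manual comparison in the study is about.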
Ang, Cheng; Miller, M. David – 1993
The power of W. Stout's procedure to detect departures from essential unidimensionality in two-dimensional data was investigated for minor, moderate, and large departures, using criteria based on prior research. Test lengths of 20 and 40 items and sample sizes of 700 and 1,500 were…
Descriptors: Ability, Comparative Testing, Correlation, Item Response Theory
Chang, Yu-Wen; Davison, Mark L. – 1992
Standard errors and bias of unidimensional and multidimensional ability estimates were compared in a factorial, simulation design with two item response theory (IRT) approaches, two levels of test correlation (0.42 and 0.63), two sample sizes (500 and 1,000), and a hierarchical test content structure. Bias and standard errors of subtest scores…
Descriptors: Comparative Testing, Computer Simulation, Correlation, Error of Measurement
De Ayala, R. J.; Koch, William R. – 1987
A nominal response model-based computerized adaptive testing procedure (nominal CAT) was implemented using simulated data. Ability estimates from the nominal CAT were compared to those from a CAT based upon the three-parameter logistic model (3PL CAT). Furthermore, estimates from both CAT procedures were compared with the known true abilities used…
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Computer Simulation
De Ayala, R. J. – 1992
One important and promising application of item response theory (IRT) is computerized adaptive testing (CAT). The implementation of a nominal response model-based CAT (NRCAT) was studied. Item pool characteristics for the NRCAT as well as the comparative performance of the NRCAT and a CAT based on the three-parameter logistic (3PL) model were…
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Computer Simulation
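The two entries above compare CATs built on the nominal response model and the three-parameter logistic (3PL) model. As a minimal sketch of the 3PL side (item parameters below are invented; a real CAT would also update the ability estimate after each response), the item response function, its Fisher information, and maximum-information item selection can be written as:

```python
import math

# Sketch of the three-parameter logistic (3PL) item response function
# and the maximum-information item-selection step used in 3PL-based CAT.
# Item parameters (a, b, c) below are invented for illustration.

def p_3pl(theta, a, b, c):
    """Probability of a correct response under the 3PL model."""
    return c + (1 - c) / (1 + math.exp(-1.7 * a * (theta - b)))

def info_3pl(theta, a, b, c):
    """Fisher information of a 3PL item at ability theta."""
    p = p_3pl(theta, a, b, c)
    return (1.7 * a) ** 2 * ((1 - p) / p) * ((p - c) / (1 - c)) ** 2

# Adaptive step: administer the most informative remaining item.
pool = [(0.8, -1.0, 0.2), (1.2, 0.0, 0.2), (1.5, 1.0, 0.25)]
theta_hat = 0.1
best = max(pool, key=lambda abc: info_3pl(theta_hat, *abc))
print(best)  # (1.2, 0.0, 0.2): difficulty nearest the ability estimate
```

The nominal response model generalizes this by modeling a separate response curve for every option of a multiple-choice item, which is what the NRCAT comparisons in these studies exploit.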
Glowacki, Margaret L. – 1991
The purpose of this study was to determine which equating models are appropriate for the Alabama High School Graduation Examination (AHSGE) by equating two previously administered fall forms for each subject area of the AHSGE and determining whether differences exist in the test score distributions or passing scores resulting from the equating…
Descriptors: Comparative Testing, Cutting Scores, Equated Scores, Graduation Requirements
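Equating, as the entry above uses the term, converts scores on one form to the scale of another. One of the simplest equating models that such a study would compare is linear equating, which matches the means and standard deviations of the two forms. A minimal sketch (score data invented):

```python
import statistics

# Minimal linear-equating sketch: map Form X scores onto the Form Y
# scale by matching means and standard deviations. Scores are invented.

def linear_equate(x_scores, y_scores):
    """Return a function converting Form X scores to the Form Y scale."""
    mx, my = statistics.mean(x_scores), statistics.mean(y_scores)
    sx, sy = statistics.pstdev(x_scores), statistics.pstdev(y_scores)
    return lambda x: my + (sy / sx) * (x - mx)

form_x = [10, 12, 14, 16, 18]
form_y = [12, 14, 16, 18, 20]
eq = linear_equate(form_x, form_y)
print(eq(14))  # 16.0 -- the Form X mean maps to the Form Y mean
```

Choosing among equating models (mean, linear, equipercentile, IRT-based) and checking the resulting score distributions and passing scores is exactly the kind of comparison the AHSGE study describes.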
Green, Kathy E.; Kluever, Raymond C. – 1991
Item components that might contribute to the difficulty of items on the Raven Colored Progressive Matrices (CPM) and the Standard Progressive Matrices (SPM) were studied. Subjects providing responses to CPM items were 269 children aged 2 years 9 months to 11 years 8 months, most of whom were referred for testing as potentially gifted. A second…
Descriptors: Academically Gifted, Children, Comparative Testing, Difficulty Level
Du Bose, Pansy; Kromrey, Jeffrey D. – 1993
Empirical evidence is presented on the relative efficiency of two potential linkage plans for use when equivalent test forms are administered. Equating is a process by which scores on one form of a test are converted to scores on another form of the same test. A Monte Carlo study was conducted to examine equating stability and statistical…
Descriptors: Art Education, Comparative Testing, Computer Simulation, Equated Scores
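A Monte Carlo study of equating stability, in miniature, repeatedly draws samples, re-estimates the equating transformation, and summarizes how much it varies. The toy sketch below (all distributions and sample sizes are invented, and only the mean-equating constant is tracked) illustrates the idea:

```python
import random
import statistics

# Toy Monte Carlo sketch of equating stability: repeatedly sample from
# two simulated score distributions and track how much the mean-equating
# constant (Form Y mean minus Form X mean) varies across replications.
# Distributions, sample sizes, and the true shift of 2 are invented.

random.seed(42)

def mean_equating_constant(n):
    x = [random.gauss(50, 10) for _ in range(n)]
    y = [random.gauss(52, 10) for _ in range(n)]
    return statistics.mean(y) - statistics.mean(x)

constants = [mean_equating_constant(200) for _ in range(500)]
print(round(statistics.mean(constants), 2))   # close to the true shift, 2
print(round(statistics.pstdev(constants), 2)) # sampling variability
```

Comparing this replication-to-replication variability across linkage plans is how such a study judges which plan yields the more stable equating.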
Silva, Sharron J. – 1985
Test item selection techniques based on traditional item analysis methods were compared to techniques based on item response theory. The consistency of mastery classifications in criterion referenced reading tests was examined. Pretest and posttest data were available for 945 first and second grade students and for 1796 fourth to sixth grade…
Descriptors: Analysis of Variance, Comparative Testing, Criterion Referenced Tests, Elementary Education


