Peer reviewed: Cheng, Philip E.; Liou, Michelle – Applied Psychological Measurement, 2000
Reviewed methods of estimating theta suitable for computerized adaptive testing (CAT) and discussed the differences between Fisher and Kullback-Leibler information criteria for selecting items. Examined the accuracy of different CAT algorithms using samples from the National Assessment of Educational Progress. Results show when correcting for…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
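The record above contrasts Fisher and Kullback-Leibler (KL) item selection criteria. As a minimal sketch (not the authors' code; the item parameters and interval width are invented), both quantities can be computed for a 3PL item, with the KL criterion integrated over an interval around the provisional trait estimate in the style of Chang and Ying:

```python
import numpy as np

def p3pl(theta, a, b, c):
    """3PL probability of a correct response."""
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

def fisher_info(theta, a, b, c):
    """Fisher item information for the 3PL model."""
    p = p3pl(theta, a, b, c)
    return a**2 * ((1 - p) / p) * ((p - c) / (1 - c)) ** 2

def kl_info(theta0, theta, a, b, c):
    """KL divergence between the item response distributions at theta0 and theta."""
    p0, p = p3pl(theta0, a, b, c), p3pl(theta, a, b, c)
    return p0 * np.log(p0 / p) + (1 - p0) * np.log((1 - p0) / (1 - p))

# Hypothetical item and provisional trait estimate
a, b, c = 1.2, 0.0, 0.2
theta_hat = 0.2
print(fisher_info(theta_hat, a, b, c))

# KL index: integrate KL over an interval around theta_hat (trapezoid rule)
grid = np.linspace(theta_hat - 1.0, theta_hat + 1.0, 201)
kl = kl_info(theta_hat, grid, a, b, c)
print(np.sum(0.5 * (kl[1:] + kl[:-1]) * np.diff(grid)))
```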
Peer reviewed: Divgi, D. R. – Applied Psychological Measurement, 1989
Two methods for estimating the reliability of a computerized adaptive test (CAT) without using item response theory are presented. The data consist of CAT and paper-and-pencil scores from identical or equivalent samples, together with scores for all examinees on one or more covariates; the methods are illustrated with the Armed Services Vocational Aptitude Battery. (TJH)
Descriptors: Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics), Predictive Validity
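Divgi's two procedures are not spelled out in the abstract. As a hedged stand-in for the general idea of estimating reliability from score correlations alone, the classical three-variable (congeneric) identity can be applied to a CAT score, a paper-and-pencil score, and a covariate; this illustrates non-IRT reliability estimation and is not claimed to be Divgi's method:

```python
import numpy as np

def triad_reliability(x, y, z):
    """Reliability of x via the three-variable identity
    rel(x) = r(x,y) * r(x,z) / r(y,z), which holds for congeneric
    measures of one trait with uncorrelated errors."""
    r = np.corrcoef(np.vstack([x, y, z]))
    return r[0, 1] * r[0, 2] / r[1, 2]

# Quick check on simulated congeneric scores
rng = np.random.default_rng(0)
t = rng.normal(size=5000)                 # common true score
cat = t + rng.normal(0, 0.5, t.size)      # CAT score
pp = t + rng.normal(0, 0.7, t.size)       # paper-and-pencil score
cov = t + rng.normal(0, 1.0, t.size)      # covariate
print(triad_reliability(cat, pp, cov))    # ~ 1 / (1 + 0.25) = 0.8
```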
Peer reviewed: van der Linden, Wim J.; Reese, Lynda M. – Applied Psychological Measurement, 1998
Proposes a model for constrained computerized adaptive testing in which the information in the test at the trait level (theta) estimate is maximized subject to a number of possible constraints on the content of the test. Test assembly relies on a linear-programming approach. Illustrates the approach through simulation with items from the Law School Admission Test.
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics)
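The linear-programming assembly idea can be sketched with an off-the-shelf mixed-integer solver: maximize summed item information at the current theta estimate subject to a fixed test length and per-content-area bounds. Pool size, information values, and constraint bounds below are invented for illustration and do not reproduce the authors' formulation:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

rng = np.random.default_rng(0)
n_items = 100
info = rng.uniform(0.1, 1.5, n_items)       # item information at current theta (invented)
content = rng.integers(0, 3, n_items)       # three content areas

rows, lb, ub = [np.ones(n_items)], [20], [20]     # exactly 20 items in the test
for area in range(3):
    rows.append((content == area).astype(float))  # 5-8 items per content area
    lb.append(5)
    ub.append(8)

res = milp(c=-info,                               # milp minimizes, so negate to maximize
           constraints=LinearConstraint(np.array(rows), lb, ub),
           integrality=np.ones(n_items),          # binary select/don't-select decisions
           bounds=Bounds(0, 1))
selected = np.flatnonzero(res.x > 0.5)
print(len(selected), info[selected].sum())
```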
Peer reviewed: Chen, Shu-Ying; Ankenmann, Robert D.; Chang, Hua-Hua – Applied Psychological Measurement, 2000
Compared five item selection rules with respect to the efficiency and precision of trait (theta) estimation at the early stages of computerized adaptive testing (CAT). The Fisher interval information, Fisher information with a posterior distribution, Kullback-Leibler information, and Kullback-Leibler information with a posterior distribution…
Descriptors: Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics), Selection
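Two of the rules named in the abstract weight an information measure by the interim posterior for theta. A grid-based sketch (2PL items, a normal prior, and the response history are all assumptions for illustration):

```python
import numpy as np

def p2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

grid = np.linspace(-4, 4, 81)
prior = np.exp(-grid**2 / 2)

# Posterior after two hypothetical responses: correct on (a=1.0, b=-0.5), wrong on (1.3, 0.4)
lik = p2pl(grid, 1.0, -0.5) * (1 - p2pl(grid, 1.3, 0.4))
post = prior * lik
post /= post.sum()
theta_hat = np.sum(grid * post)          # interim EAP estimate

def fisher_posterior_index(a, b):
    """Fisher information averaged over the interim posterior."""
    p = p2pl(grid, a, b)
    return np.sum(post * a**2 * p * (1 - p))

def kl_posterior_index(a, b):
    """KL divergence from theta_hat, averaged over the posterior."""
    p0, p = p2pl(theta_hat, a, b), p2pl(grid, a, b)
    kl = p0 * np.log(p0 / p) + (1 - p0) * np.log((1 - p0) / (1 - p))
    return np.sum(post * kl)

print(fisher_posterior_index(1.2, 0.0), kl_posterior_index(1.2, 0.0))
```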
Peer reviewed: Folk, Valerie Greaud; Green, Bert F. – Applied Psychological Measurement, 1989
Some effects of using unidimensional item response theory (IRT) were examined when the assumption of unidimensionality was violated. Adaptive and nonadaptive tests were used. It appears that use of a unidimensional model can bias parameter estimation, adaptive item selection, and ability estimation for the two types of testing. (TJH)
Descriptors: Ability Identification, Adaptive Testing, Computer Assisted Testing, Computer Simulation
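A compact simulation in the spirit of the study (all numbers invented): generate responses from a two-dimensional compensatory model, then score examinees with a misspecified one-dimensional 2PL by grid-search MLE. Correlating theta_hat with each true dimension shows how the violation blends the two traits:

```python
import numpy as np

rng = np.random.default_rng(1)
n_items, n_people = 40, 500
a1 = rng.uniform(0.5, 1.5, n_items)        # slopes on dimension 1
a2 = rng.uniform(0.5, 1.5, n_items)        # slopes on dimension 2
d = rng.normal(0, 1, n_items)              # intercepts
th = rng.multivariate_normal([0, 0], [[1, 0.3], [0.3, 1]], n_people)

# Responses from the 2D compensatory model
p = 1 / (1 + np.exp(-(th[:, [0]] * a1 + th[:, [1]] * a2 - d)))
x = (rng.random(p.shape) < p).astype(int)

# Score with a misspecified 1D 2PL (a1 taken as the discriminations), grid-search MLE
grid = np.linspace(-4, 4, 161)
pg = 1 / (1 + np.exp(-(grid[:, None] * a1 - d)))      # grid x items
loglik = x @ np.log(pg).T + (1 - x) @ np.log(1 - pg).T
theta_hat = grid[np.argmax(loglik, axis=1)]

print(np.corrcoef(theta_hat, th[:, 0])[0, 1], np.corrcoef(theta_hat, th[:, 1])[0, 1])
```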
Peer reviewed: Nicewander, W. Alan; Thomasson, Gary L. – Applied Psychological Measurement, 1999
Derives three reliability estimates for the Bayes modal estimate (BME) and the maximum-likelihood estimate (MLE) of theta in computerized adaptive tests (CATs). Computes the three reliability estimates and the true reliabilities of both BME and MLE for seven simulated CATs. Results show the true reliabilities for BME and MLE to be nearly identical…
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
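The paper's three estimators are not reproduced in the abstract; a common information-based stand-in is the marginal ("empirical") reliability, which compares the error variance implied by the CAT standard errors with the observed variance of the trait estimates:

```python
import numpy as np

def empirical_reliability(theta_hat, se):
    """Marginal-reliability-style estimate: the share of observed
    variance in the trait estimates not attributable to error,
    using each examinee's CAT standard error."""
    theta_hat, se = np.asarray(theta_hat), np.asarray(se)
    obs_var = np.var(theta_hat, ddof=1)
    return (obs_var - np.mean(se**2)) / obs_var

# Illustration with invented CAT output
rng = np.random.default_rng(3)
theta_true = rng.normal(size=2000)
se = np.full(2000, 0.35)
theta_hat = theta_true + rng.normal(0, se)
print(empirical_reliability(theta_hat, se))    # ~ 1 / (1 + 0.35**2) ≈ 0.89
```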
Peer reviewed: Wang, Shudong; Wang, Tianyou – Applied Psychological Measurement, 2001
Evaluated the relative accuracy of the weighted likelihood estimate (WLE) of T. Warm (1989) compared to the maximum likelihood estimate (MLE), expected a posteriori estimate, and maximum a posteriori estimate. Results of the Monte Carlo study, which show the relative advantages of each approach, suggest that the test termination rule has more…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics)
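A grid-based sketch of the four estimators compared in the abstract, for a single 2PL response pattern (item parameters and prior are illustrative). Warm's WLE maximizes the likelihood weighted by the square root of the test information:

```python
import numpy as np

def estimates_2pl(x, a, b):
    """Grid versions of MLE, WLE (Warm, 1989), EAP, and MAP for a 2PL
    response pattern x; a production CAT would use Newton iterations."""
    grid = np.linspace(-4, 4, 401)
    p = 1 / (1 + np.exp(-(np.outer(grid, a) - a * b)))   # grid x items
    ll = (x * np.log(p) + (1 - x) * np.log(1 - p)).sum(axis=1)
    info = (a**2 * p * (1 - p)).sum(axis=1)              # test information
    log_prior = -grid**2 / 2                             # standard normal prior

    mle = grid[np.argmax(ll)]
    wle = grid[np.argmax(ll + 0.5 * np.log(info))]       # likelihood * sqrt(information)
    post = np.exp(ll + log_prior - (ll + log_prior).max())
    post /= post.sum()
    return mle, wle, np.sum(grid * post), grid[np.argmax(post)]

a = np.array([0.8, 1.2, 1.0, 1.5, 0.9])
b = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])
print(estimates_2pl(np.array([1, 1, 0, 1, 0]), a, b))
```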
Peer reviewed: Wang, Tianyou; Hanson, Bradley A.; Lau, Che-Ming A. – Applied Psychological Measurement, 1999
Extended the use of a beta prior in trait estimation to the maximum a posteriori (MAP) method of Bayesian estimation. This new method, essentially unbiased MAP, was compared with MAP, essentially unbiased expected a posteriori, weighted likelihood, and maximum-likelihood estimation methods. The new method significantly reduced bias in…
Descriptors: Adaptive Testing, Bayesian Statistics, Computer Assisted Testing, Estimation (Mathematics)
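A generic sketch of a MAP estimator with a beta prior placed on theta rescaled to a bounded interval; the interval, prior parameters, and item model are assumptions, and this is not claimed to reproduce the authors' essentially unbiased MAP:

```python
import numpy as np

def map_beta_prior(x, a, b, alpha=1.5, beta_=1.5, lo=-4.0, hi=4.0):
    """Grid MAP with a Beta(alpha, beta_) prior on (theta - lo) / (hi - lo),
    i.e., a bounded prior in place of the usual normal (illustrative only)."""
    grid = np.linspace(lo + 1e-3, hi - 1e-3, 401)
    p = 1 / (1 + np.exp(-(np.outer(grid, a) - a * b)))
    ll = (x * np.log(p) + (1 - x) * np.log(1 - p)).sum(axis=1)
    u = (grid - lo) / (hi - lo)                      # map theta onto (0, 1)
    log_prior = (alpha - 1) * np.log(u) + (beta_ - 1) * np.log(1 - u)
    return grid[np.argmax(ll + log_prior)]

a = np.array([1.0, 1.2, 0.8])
b = np.array([-0.5, 0.0, 0.7])
print(map_beta_prior(np.array([1, 0, 1]), a, b))
```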
Peer reviewed: Sykes, Robert C.; Ito, Kyoko – Applied Psychological Measurement, 1997
Evaluated the equivalence of scores and one-parameter logistic model item difficulty estimates obtained from computer-based and paper-and-pencil forms of a licensure examination taken by 418 examinees. Neither the order nor the mode of administration affected the equivalence of scores or difficulty estimates. (SLD)
Descriptors: Computer Assisted Testing, Estimation (Mathematics), Health Personnel, Item Response Theory
Peer reviewed: van der Linden, Wim J. – Applied Psychological Measurement, 1999
Proposes a procedure for empirical initialization of the trait (theta) estimator in adaptive testing that is based on the statistical relation between theta and background variables known prior to test administration. Illustrates the procedure for an adaptive version of a test from the Dutch General Aptitude Battery. (SLD)
Descriptors: Adaptive Testing, Aptitude Tests, Bayesian Statistics, Computer Assisted Testing
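The initialization idea can be sketched as a regression of previously observed trait estimates on background variables, with the fitted value serving as the starting theta (or prior mean) for a new examinee; all variables and coefficients below are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])      # intercept + 2 background vars
theta = X @ np.array([0.1, 0.5, -0.3]) + rng.normal(0, 0.6, n)  # past final estimates

beta_hat, *_ = np.linalg.lstsq(X, theta, rcond=None)            # fit the relation

x_new = np.array([1.0, 0.8, -0.2])      # new examinee's background variables
theta_init = x_new @ beta_hat           # empirical starting value for the CAT
print(theta_init)
```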
Peer reviewed: Hetter, Rebecca D.; And Others – Applied Psychological Measurement, 1994
Effects on computerized adaptive test scores of using a paper-and-pencil (P&P) calibration to select items and estimate scores were compared with the effects of using a computer-based calibration. Results with 2,999 Navy recruits support the use of item parameters calibrated from either P&P or computer administrations. (SLD)
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Estimation (Mathematics)
Peer reviewed: Vale, C. David; Gialluca, Kathleen A. – Applied Psychological Measurement, 1988
To determine which produced the most accurate item parameter estimates, four item response theory estimation methods were evaluated: (1) heuristic estimates; (2) the ANCILLES program; (3) the LOGIST program; and (4) the ASCAL program. LOGIST and ASCAL produced estimates of superior and essentially equivalent accuracy. (SLD)
Descriptors: Comparative Analysis, Computer Assisted Testing, Computer Software, Estimation (Mathematics)
Peer reviewed: Lunz, Mary E.; And Others – Applied Psychological Measurement, 1992
The effects of reviewing items and altering responses on the efficiency of computerized adaptive tests and resultant ability estimates of the examinees were explored for medical technology students (220 students could and 492 students could not review and alter their responses). Data do not support disallowing review. (SLD)
Descriptors: Ability, Adaptive Testing, Comparative Testing, Computer Assisted Testing
Peer reviewed: De Ayala, R. J. – Applied Psychological Measurement, 1992
A computerized adaptive test (CAT) based on the nominal response model (NR CAT) was implemented, and the performance of the NR CAT and a CAT based on the three-parameter logistic model was compared. The NR CAT produced trait estimates comparable to those of the three-parameter test. (SLD)
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Equations (Mathematics)
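The nominal response model underlying the study assigns each response category its own slope and intercept, with category probabilities given by a softmax over categories. A minimal sketch with an invented four-category item:

```python
import numpy as np

def nrm_probs(theta, a, c):
    """Bock's nominal response model: P(category k | theta) is
    exp(a_k * theta + c_k) normalized over all categories."""
    z = np.outer(np.atleast_1d(theta), a) + c
    ez = np.exp(z - z.max(axis=1, keepdims=True))   # numerically stable softmax
    return ez / ez.sum(axis=1, keepdims=True)

a = np.array([-1.0, -0.2, 0.4, 0.8])    # category slopes (invented)
c = np.array([0.0, 0.5, 0.3, -0.8])     # category intercepts (invented)
print(nrm_probs(0.0, a, c))             # probabilities for one examinee at theta = 0
```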


