Showing all 8 results
Peer reviewed
Diao, Qi; van der Linden, Wim J. – Applied Psychological Measurement, 2013
Automated test assembly uses the methodology of mixed integer programming to select an optimal set of items from an item bank. Automated test-form generation uses the same methodology to optimally order the items and format the test form. From an optimization point of view, production of fully formatted test forms directly from the item pool using…
Descriptors: Automation, Test Construction, Test Format, Item Banks
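As a concrete illustration of the mixed integer programming methodology this abstract describes, here is a minimal 0-1 assembly model in Python using the PuLP solver library. The item bank, information values, word counts, and constraint bounds are all invented for the sketch; the article's actual models are considerably richer.

```python
# Minimal sketch of automated test assembly as a 0-1 mixed integer program.
# All item data and constraint values below are illustrative assumptions.
import random
import pulp

random.seed(0)
n_items = 100
info = [random.uniform(0.2, 1.5) for _ in range(n_items)]   # item information at a target ability
words = [random.randint(40, 120) for _ in range(n_items)]   # reading load per item

prob = pulp.LpProblem("test_assembly", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(n_items)]

# Objective: maximize total information of the selected form.
prob += pulp.lpSum(info[i] * x[i] for i in range(n_items))

# Constraints: fixed test length and a cap on total reading load.
prob += pulp.lpSum(x) == 30
prob += pulp.lpSum(words[i] * x[i] for i in range(n_items)) <= 2400

prob.solve(pulp.PULP_CBC_CMD(msg=False))
selected = [i for i in range(n_items) if x[i].value() == 1]
print(len(selected), "items selected")
```

The binary variable x_i indicates whether item i enters the form; production assembly models add content-balancing, enemy-item, and information-target constraints on top of this skeleton.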
Peer reviewed
Hol, A. Michiel; Vorst, Harrie C. M.; Mellenbergh, Gideon J. – Applied Psychological Measurement, 2007
In a randomized experiment (n = 515), a conventional computerized test and a computerized adaptive test (CAT) are compared. The item pool consists of 24 polytomous motivation items. Although the items were carefully selected, calibration data show that Samejima's graded response model did not fit the data optimally. A simulation study is done to assess possible…
Descriptors: Student Motivation, Simulation, Adaptive Testing, Computer Assisted Testing
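For readers unfamiliar with Samejima's graded response model named in the abstract, a short sketch of how category probabilities arise as differences of cumulative logistic curves; the parameter values are made up for illustration, not the article's calibration results.

```python
# Category probabilities under Samejima's graded response model (GRM).
# Parameter values below are illustrative assumptions.
import math

def grm_probs(theta, a, thresholds):
    """Category probabilities for one item: P*(k) = logistic(a * (theta - b_k))."""
    cum = [1.0] + [1 / (1 + math.exp(-a * (theta - b))) for b in thresholds] + [0.0]
    return [cum[k] - cum[k + 1] for k in range(len(cum) - 1)]

# A 5-category motivation item: discrimination a, four ordered thresholds.
print(grm_probs(theta=0.5, a=1.2, thresholds=[-1.5, -0.5, 0.4, 1.3]))
```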
Peer reviewed
Armstrong, Ronald D.; Jones, Douglas H.; Koppel, Nicole B.; Pashley, Peter J. – Applied Psychological Measurement, 2004
A multiple-form structure (MFS) is an ordered collection or network of testlets (i.e., sets of items). An examinee's progression through the network of testlets is dictated by the correctness of an examinee's answers, thereby adapting the test to his or her trait level. The collection of paths through the network yields the set of all possible…
Descriptors: Law Schools, Adaptive Testing, Computer Assisted Testing, Test Format
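A toy illustration of how an examinee might be routed through a multiple-form structure: each node administers a testlet, and performance on it selects the next node. The three-node network and routing cutoff below are assumptions for the sketch, not the structures studied in the article.

```python
# Toy traversal of a multiple-form structure (MFS): a network of testlets in
# which the next testlet depends on the score on the current one.
# The layout and cutoffs are illustrative assumptions.
mfs = {
    "start": {"testlet": "T1",  "route": lambda s: "hard" if s >= 4 else "easy"},
    "easy":  {"testlet": "T2E", "route": lambda s: "end"},
    "hard":  {"testlet": "T2H", "route": lambda s: "end"},
}

def administer(testlet_scores):
    """Walk one path through the network; testlet_scores maps testlet -> score."""
    node, path = "start", []
    while node != "end":
        testlet = mfs[node]["testlet"]
        path.append(testlet)
        node = mfs[node]["route"](testlet_scores[testlet])
    return path

print(administer({"T1": 5, "T2H": 3}))   # -> ['T1', 'T2H']
```

Enumerating all paths through such a network yields the set of possible forms, which is what makes MFS designs amenable to up-front quality control.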
Peer reviewed
Neuman, George; Baydoun, Ramzi – Applied Psychological Measurement, 1998
Studied the cross-mode equivalence of paper-and-pencil and computer-based clerical tests with 141 undergraduates. Found no differences across modes for the two types of tests. Differences can be minimized when speeded computerized tests follow the same administration and response procedures as the paper format. (SLD)
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Higher Education
Peer reviewed
Sykes, Robert C.; Ito, Kyoko – Applied Psychological Measurement, 1997
Evaluated the equivalence of scores and one-parameter logistic model item difficulty estimates obtained from computer-based and paper-and-pencil forms of a licensure examination taken by 418 examinees. There was no effect of either order or mode of administration on the equivalences. (SLD)
Descriptors: Computer Assisted Testing, Estimation (Mathematics), Health Personnel, Item Response Theory
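The one-parameter logistic (Rasch) model named in the abstract has a compact closed form: the probability of a correct response depends only on the difference between ability and item difficulty. A minimal sketch, with made-up values:

```python
# One-parameter logistic (Rasch) model; values are illustrative.
import math

def rasch_prob(theta, b):
    """P(correct | ability theta, item difficulty b) under the 1PL model."""
    return 1 / (1 + math.exp(-(theta - b)))

print(rasch_prob(theta=0.0, b=-0.3))  # ~0.574 for an easier-than-average item
```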
Peer reviewed
Henly, Susan J.; And Others – Applied Psychological Measurement, 1989
A group of covariance structure models was examined to ascertain the similarity between conventionally administered and computerized adaptive versions of the Differential Aptitude Test (DAT). Results for 332 students indicate that the computerized version of the DAT is an adequate representation of the conventional test battery. (TJH)
Descriptors: Ability Identification, Adaptive Testing, Comparative Testing, Computer Assisted Testing
Peer reviewed
van der Linden, Wim J. – Applied Psychological Measurement, 2006
Two local methods for observed-score equating are applied to the problem of equating an adaptive test to a linear test. In an empirical study, the methods were evaluated against a method based on the test characteristic function (TCF) of the linear test and traditional equipercentile equating applied to the ability estimates on the adaptive test…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Format, Equated Scores
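Of the methods compared in the abstract, traditional equipercentile equating is the easiest to sketch: a score on one form is mapped to the score on the other form with the same percentile rank. The simulated score distributions below are illustrative, and the sketch omits the continuization of discrete score distributions used in practice.

```python
# Rough sketch of equipercentile equating with simulated score distributions.
import numpy as np

rng = np.random.default_rng(1)
x_scores = rng.binomial(40, 0.55, size=5000)   # observed scores on form X
y_scores = rng.binomial(40, 0.60, size=5000)   # observed scores on form Y

def equipercentile(x, x_dist, y_dist):
    """Form-Y equivalent of score x: match cumulative proportions."""
    p = np.mean(x_dist <= x)                   # percentile rank of x on form X
    return np.quantile(y_dist, p)              # score at the same rank on form Y

print(equipercentile(22, x_scores, y_scores))
```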
Peer reviewed
Mann, Irene T.; And Others – Applied Psychological Measurement, 1979
Several methodological problems (particularly the assumed bipolarity of scales, instructions regarding use of the midpoint, and concept-scale interaction) which may contribute to a lack of precision in the semantic differential technique were investigated. Results generally supported the use of the semantic differential. (Author/JKS)
Descriptors: Analysis of Variance, Computer Assisted Testing, Higher Education, Rating Scales