Publication Date
| Date Range | Records |
| In 2026 | 3 |
| Since 2025 | 206 |
| Since 2022 (last 5 years) | 1098 |
| Since 2017 (last 10 years) | 2172 |
| Since 2007 (last 20 years) | 3308 |
Location
| Location | Records |
| Australia | 111 |
| Turkey | 108 |
| China | 93 |
| United Kingdom | 93 |
| Germany | 87 |
| Iran | 71 |
| Spain | 66 |
| Taiwan | 66 |
| Canada | 65 |
| Indonesia | 57 |
| Netherlands | 54 |
What Works Clearinghouse Rating
| Rating | Records |
| Meets WWC Standards without Reservations | 2 |
| Meets WWC Standards with or without Reservations | 2 |
| Does not meet standards | 4 |
Peer reviewed: Moore, N. C.; And Others – Journal of Clinical Psychology, 1984
Assessed the attitudes of 59 maternity patients toward the use of a computer for psychological testing. Results showed patients were almost unanimous in finding the computer acceptable and easy to use, and most would be willing to use the computer again. (JAC)
Descriptors: Computer Assisted Testing, Hospitals, Mother Attitudes, Participant Satisfaction
Veldkamp, Bernard P.; Ariel, Adelaide – 2002
Several methods have been developed for constrained adaptive testing. Item pool partitioning, multistage testing, and testlet-based adaptive testing perform well for specific cases of adaptive testing. The weighted deviation model and the Shadow Test approach can be applied more generally. These methods are based on…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Test Construction
Ariel, Adelaide; Veldkamp, Bernard P.; van der Linden, Wim J. – 2002
Preventing items from being over- or underexposed is one of the main problems in computerized adaptive testing. Although the problem of overexposed items can be solved using a probabilistic item-exposure control method, such methods are unable to deal with the problem of underexposed items. Using a system of rotating item pools,…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Test Construction
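The rotating item pool idea referred to in this abstract balances exposure by carving several overlapping sub-pools out of a master pool and rotating which sub-pool an examinee draws from. A minimal sketch of one such construction (the window-rotation scheme, pool sizes, and item ids below are illustrative assumptions, not the authors' design):

```python
def rotating_item_pools(master_pool, n_pools, pool_size):
    """Sketch: carve n_pools overlapping sub-pools out of a master pool by
    rotating a fixed-size window, so items are spread evenly across pools."""
    n = len(master_pool)
    step = max(1, n // n_pools)
    return [[master_pool[(k * step + j) % n] for j in range(pool_size)]
            for k in range(n_pools)]

master = list(range(12))                      # illustrative item ids
for pool in rotating_item_pools(master, n_pools=4, pool_size=5):
    print(pool)
```

Because the windows overlap, every item ends up in roughly the same number of sub-pools, which is what spreads exposure across the master pool.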
Spray, Judith; Lin, Chuan-Ju; Chen, Troy T. – 2002
Automated test assembly is a technology for producing multiple, equivalent test forms from an item pool. An important test-security consideration in automated test assembly is the extent to which the same items appear on these multiple forms. Although it is possible to use item selection as a formal constraint in assembling forms, the number of…
Descriptors: Computer Assisted Testing, Item Banks, Test Construction, Test Format
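To make the overlap issue concrete, here is a minimal greedy sketch that assembles several forms while enforcing the strictest version of the constraint (zero shared items) and matching a target mean difficulty. The greedy heuristic and the difficulty-only matching are illustrative stand-ins for the formal constrained-optimization methods real assembly engines use; none of the values come from the study.

```python
import numpy as np

def assemble_parallel_forms(difficulties, n_forms, form_length, target=0.0):
    """Greedy sketch: build n_forms forms with zero item overlap, each time
    adding the unused item that keeps the form's mean difficulty closest to
    the target value."""
    available = set(range(len(difficulties)))
    forms = []
    for _ in range(n_forms):
        form = []
        for _ in range(form_length):
            best = min(available, key=lambda i: abs(
                np.mean([difficulties[j] for j in form] + [difficulties[i]]) - target))
            form.append(best)
            available.remove(best)
        forms.append(form)
    return forms

rng = np.random.default_rng(0)
pool_b = rng.normal(size=60)                  # illustrative difficulty parameters
print(assemble_parallel_forms(pool_b, n_forms=2, form_length=5))
```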
Reese, Lynda M.; Schnipke, Deborah L.; Luebke, Stephen W. – 1999
Most large-scale testing programs moving to computerized adaptive testing (CAT) face the challenge of maintaining extensive content requirements, but content constraints in CAT can compromise the precision and efficiency that could be achieved by a pure maximum-information adaptive testing algorithm. This…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Simulation
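The tension this abstract describes, between content constraints and a pure maximum-information algorithm, can be illustrated with a simple selection rule that first satisfies content quotas and only then maximizes information. The quota scheme, the 2PL information function, and all parameter values below are illustrative assumptions, not the study's algorithm:

```python
import numpy as np

def content_balanced_select(theta_hat, a, b, content, quotas, given):
    """Sketch: pick the content area furthest behind its quota, then take the
    most informative unused item from that area (2PL item information)."""
    counts = {c: sum(1 for i in given if content[i] == c) for c in quotas}
    area = max(quotas, key=lambda c: quotas[c] - counts[c])  # area most behind target
    p = 1.0 / (1.0 + np.exp(-a * (theta_hat - b)))
    info = a ** 2 * p * (1.0 - p)
    candidates = [i for i in range(len(a)) if content[i] == area and i not in given]
    return max(candidates, key=lambda i: info[i])

# Illustrative six-item pool split over two content areas.
a = np.array([1.0, 1.2, 0.9, 1.4, 1.1, 0.8])
b = np.array([-0.5, 0.3, 1.0, -1.2, 0.0, 0.6])
content = ["algebra", "algebra", "algebra", "geometry", "geometry", "geometry"]
quotas = {"algebra": 3, "geometry": 2}
print(content_balanced_select(0.0, a, b, content, quotas, given={0}))
```

Restricting the candidate set to one content area is exactly what can cost precision relative to unconstrained maximum-information selection.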
Glas, C. A. W. – 2003
In computerized adaptive testing, updating item parameter estimates using adaptive testing data is often called online calibration. This study investigated how to evaluate whether the adaptive testing data used for online calibration sufficiently fit the item response model used. Three approaches were investigated, based on a Lagrange multiplier…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Response Theory, Quality Control
Deng, Hui; Chang, Hua-Hua – 2001
The purpose of this study was to compare a proposed revised a-stratified (alpha-stratified) method of test item selection (USTR) with the original alpha-stratified multistage computerized adaptive testing approach (STR) and with maximum Fisher information selection (FSH), with respect to test efficiency and item pool usage, using simulated computerized…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Selection
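The maximum Fisher information criterion used here as the FSH baseline picks, at each step, the unused item whose information function is largest at the current ability estimate. A minimal sketch under the two-parameter logistic model (the item parameters are made up for illustration; the study's simulations are not reproduced here):

```python
import numpy as np

def fisher_information_2pl(theta, a, b):
    """Item information under the 2PL model: I(theta) = a^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1.0 - p)

def select_max_info_item(theta_hat, a_params, b_params, administered):
    """Index of the unused item with the largest information at theta_hat."""
    info = fisher_information_2pl(theta_hat, a_params, b_params)
    info[list(administered)] = -np.inf      # mask items already given
    return int(np.argmax(info))

# Illustrative pool of five items (discrimination a, difficulty b).
a_params = np.array([0.8, 1.2, 1.5, 0.9, 1.1])
b_params = np.array([-1.0, 0.0, 0.5, 1.0, -0.5])
print(select_max_info_item(0.2, a_params, b_params, administered={2}))
```

Because this rule always favors the most discriminating items near the current ability estimate, it tends to over-use a small subset of the pool, which is the usage problem stratified methods such as STR and USTR are meant to ease.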
Papanastasiou, Elena C. – 2002
Due to the increased popularity of computerized adaptive testing (CAT), many admissions tests, as well as certification and licensure examinations, have been transformed from their paper-and-pencil versions to computerized adaptive versions. A major difference between paper-and-pencil tests and CAT, from an examinee's point of view, is that in many…
Descriptors: Adaptive Testing, Cheating, Computer Assisted Testing, Review (Reexamination)
Peer reviewed: Davis, Laurie Laughlin; Pastor, Dena A.; Dodd, Barbara G.; Chiang, Claire; Fitzpatrick, Steven J. – Journal of Applied Measurement, 2003
Examined the effectiveness of the Sympson-Hetter technique and rotated content balancing relative to no exposure control and no content rotation conditions in a computerized adaptive testing system based on the partial credit model. Simulation results show the Sympson-Hetter technique can be used with minimal impact on measurement precision,…
Descriptors: Adaptive Testing, Computer Assisted Testing, Selection, Simulation
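The Sympson-Hetter technique evaluated in this study controls exposure by giving each item an exposure-control parameter k_i: when the item is selected as most informative, it is actually administered only with probability k_i, otherwise selection moves on to the next candidate. A minimal sketch of that administration step (the item ids and k values are illustrative; in practice the k_i are calibrated through iterative simulation):

```python
import random

def sympson_hetter_administer(ranked_items, exposure_k, rng=random):
    """Walk down the information-ranked candidates; administer item i with
    probability exposure_k[i], otherwise try the next candidate."""
    for item in ranked_items:
        if rng.random() < exposure_k.get(item, 1.0):
            return item
    return ranked_items[0]   # fall back if every candidate was skipped

# Illustrative: candidates ranked by information at the current ability
# estimate, with smaller exposure parameters for over-used items.
ranked = [17, 4, 52, 9]
k = {17: 0.35, 4: 0.80, 52: 1.0, 9: 1.0}
print(sympson_hetter_administer(ranked, k))
```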
Peer reviewed: Weber, Bernhard; Schneider, Barbara; Fritze, Jurgen; Gille, Boris; Hornung, Stefan; Kuhner, Thorsten; Maurer, Konrad – Computers in Human Behavior, 2003
Investigated the acceptance of computerized assessment, particularly compared to conventional paper-and-pencil techniques, in seriously impaired psychiatric inpatients. Describes the development of a self-rating questionnaire (OPQ, Operation and Preference Questionnaire) and reports results that showed computerized assessment was convincingly…
Descriptors: Comparative Analysis, Computer Assisted Testing, Intermode Differences, Questionnaires
Peer reviewed: Zwick, Rebecca; Thayer, Dorothy T. – Applied Psychological Measurement, 2002
Used a simulation to investigate the applicability of a differential item functioning (DIF) analysis method to computerized adaptive test data. Results show the performance of this empirical Bayes enhancement of the Mantel-Haenszel DIF analysis method to be quite promising. (SLD)
Descriptors: Adaptive Testing, Bayesian Statistics, Computer Assisted Testing, Item Bias
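The Mantel-Haenszel procedure that the empirical Bayes method enhances compares reference- and focal-group odds of a correct response across score strata matched on ability, summarized by a common odds ratio. A minimal sketch of that estimate (the 2x2 counts per stratum are invented for illustration):

```python
import math

def mantel_haenszel_odds_ratio(strata):
    """Common odds ratio across matched score strata.  Each stratum is
    (A, B, C, D): reference correct/incorrect, focal correct/incorrect."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Illustrative 2x2 counts at three matched score levels.
strata = [(40, 10, 35, 15), (30, 20, 25, 25), (20, 30, 15, 35)]
alpha = mantel_haenszel_odds_ratio(strata)
print(alpha, -2.35 * math.log(alpha))   # ETS delta-scale MH D-DIF
```

Values of the odds ratio far from 1.0 (equivalently, MH D-DIF far from 0) suggest the item functions differently for the two groups after matching on ability.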
Peer reviewed: Huba, George J. – Educational and Psychological Measurement, 1988
Alternate versions of the Western Personnel Test were administered via computer and in the standard paper and pencil format to two groups of adults (N=50). Results suggest that the two forms of administration yield comparable results, and that separate norms for the computer-administered version are not necessary. (TJH)
Descriptors: Aptitude Tests, Comparative Analysis, Computer Assisted Testing, Occupational Tests
Peer reviewed: Maguire, Kenneth B.; And Others – Psychology in the Schools, 1991
Administered Peabody Picture Vocabulary Test-Revised (PPVT-R) to 112 elementary school students in computer-automated and standard formats to investigate comparability of test results. Correlations found between standard and modified versions were positive, substantial, and acceptable for clinical use. Considers usefulness of adapted psychological…
Descriptors: Computer Assisted Testing, Elementary Education, Elementary School Students, Test Format
Peer reviewed: Allen, C. Christopher; And Others – Journal of Clinical Psychology, 1993
Explored construct validity of computer-assisted battery of neuropsychological tests with 82 psychiatric inpatients and 89 normal volunteers. Principal components analysis of inpatients' scores revealed simple reaction time, response accuracy, visuomotor skill, and complex processing and memory components. Found similar factorial structure in…
Descriptors: Computer Assisted Testing, Construct Validity, Institutionalized Persons, Psychiatric Hospitals
Peer reviewed: Campbell, Keith A.; Rohlman, Diane S.; Storzbach, Daniel; Binder, Laurence M.; Anger, W. Kent; Kovera, Craig A.; Davis, Kelly L.; Grossman, Sandra J. – Assessment, 1999
Administered 12 psychological and 7 neurobehavioral performance tests twice to nonclinical normative samples of 30 adults (computer format only) and 30 adults (computer and conventional administration) with one week between administrations. Results suggest that individual test-retest reliability is not affected when tests are administered as part…
Descriptors: Adults, Computer Assisted Testing, Neuropsychology, Psychological Testing


