Publication Date
| Date Range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 30 |
| Since 2022 (last 5 years) | 169 |
| Since 2017 (last 10 years) | 330 |
| Since 2007 (last 20 years) | 614 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Computer Assisted Testing | 1058 |
| Test Items | 1058 |
| Adaptive Testing | 449 |
| Test Construction | 386 |
| Item Response Theory | 255 |
| Item Banks | 223 |
| Foreign Countries | 194 |
| Difficulty Level | 166 |
| Test Format | 160 |
| Item Analysis | 158 |
| Simulation | 142 |
Audience
| Audience | Count |
| --- | --- |
| Researchers | 24 |
| Practitioners | 20 |
| Teachers | 13 |
| Students | 2 |
| Administrators | 1 |
Location
| Location | Count |
| --- | --- |
| Germany | 17 |
| Australia | 13 |
| Japan | 12 |
| Taiwan | 12 |
| Turkey | 12 |
| United Kingdom | 12 |
| China | 11 |
| Oregon | 10 |
| Canada | 9 |
| Netherlands | 9 |
| United States | 9 |
Laws, Policies, & Programs
| Law / Program | Count |
| --- | --- |
| Individuals with Disabilities… | 8 |
| Americans with Disabilities… | 1 |
| Head Start | 1 |
[Peer reviewed] Mooney, G. A.; Bligh, J. G.; Leinster, S. J. – Medical Teacher, 1998
Presents a system of classification for describing computer-based assessment techniques based on the level of action and educational activity they offer. Illustrates 10 computer-based assessment techniques and discusses their educational value. Contains 14 references. (Author)
Descriptors: Adaptive Testing, Classification, Computer Assisted Testing, Foreign Countries
[Peer reviewed] Vispoel, Walter P. – Journal of Educational Measurement, 1998
Studied effects of administration mode [computer adaptive test (CAT) versus self-adaptive test (SAT)], item-by-item answer feedback, and test anxiety on results from computerized vocabulary tests taken by 293 college students. CATs were more reliable than SATs, and administration time was less when feedback was provided. (SLD)
Descriptors: Adaptive Testing, College Students, Computer Assisted Testing, Feedback
[Peer reviewed] Reise, Steven P. – Applied Psychological Measurement, 2001
This book contains a series of research articles about computerized adaptive testing (CAT) written for advanced psychometricians. The book is divided into sections on: (1) item selection and examinee scoring in CAT; (2) examples of CAT applications; (3) item banks; (4) determining model fit; and (5) using testlets in CAT. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Goodness of Fit, Item Banks
Wise, Steven L.; Kong, Xiaojing – Applied Measurement in Education, 2005
When low-stakes assessments are administered, the degree to which examinees give their best effort is often unclear, complicating the validity and interpretation of the resulting test scores. This study introduces a new method, based on item response time, for measuring examinee test-taking effort on computer-based test items. This measure, termed…
Descriptors: Psychometrics, Validity, Reaction Time, Test Items
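
Wise and Kong's response-time approach is usually operationalized as the proportion of items answered with "solution behavior" rather than rapid guessing. A minimal Python sketch, assuming a simple per-item time threshold (the function name and the 3-second cutoff are illustrative assumptions, not the authors' calibrated values):

```python
# Hypothetical sketch of a response-time-based effort index: an item
# response counts as "solution behavior" when its response time meets
# a per-item threshold; the index is the proportion of such items.

def response_time_effort(response_times, thresholds):
    """Proportion of items answered with solution behavior."""
    if len(response_times) != len(thresholds):
        raise ValueError("one threshold per item is required")
    flags = [1 if rt >= cut else 0 for rt, cut in zip(response_times, thresholds)]
    return sum(flags) / len(flags)

# Five items, a uniform 3-second threshold, two rapid guesses -> 0.6
print(response_time_effort([12.4, 1.1, 8.0, 2.0, 15.3], [3.0] * 5))
```

In practice, thresholds are set item by item, for example from the gap between the rapid-guessing and solution-behavior modes of an item's observed response-time distribution.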
A Closer Look at Using Judgments of Item Difficulty to Change Answers on Computerized Adaptive Tests
Vispoel, Walter P.; Clough, Sara J.; Bleiler, Timothy – Journal of Educational Measurement, 2005
Recent studies have shown that restricting review and answer change opportunities on computerized adaptive tests (CATs) to items within successive blocks reduces time spent in review, satisfies most examinees' desires for review, and controls against distortion in proficiency estimates resulting from intentional incorrect answering of items prior…
Descriptors: Mathematics, Item Analysis, Adaptive Testing, Computer Assisted Testing
Brosvic, Gary M.; Epstein, Michael L.; Dihoff, Roberta E.; Cook, Michael L. – Psychological Record, 2006
The present studies were undertaken to examine the effects of manipulating delay-interval task (Study 1) and timing of feedback (Study 2) on acquisition and retention. Participants completed a 100-item cumulative final examination, which included 50 items from each laboratory examination, plus 50 entirely new items. Acquisition and retention were…
Descriptors: Individual Testing, Multiple Choice Tests, Feedback, Test Items
Threlfall, John; Pool, Peter; Homer, Matthew; Swinnerton, Bronwen – Educational Studies in Mathematics, 2007
This article explores the effect on assessment of "translating" paper and pencil test items into their computer equivalents. Computer versions of a set of mathematics questions derived from the paper-based end of key stage 2 and 3 assessments in England were administered to age appropriate pupil samples, and the outcomes compared.…
Descriptors: Test Items, Student Evaluation, Foreign Countries, Test Validity
Dorans, Neil J.; Schmitt, Alicia P. – 1991
Differential item functioning (DIF) assessment attempts to identify items or item types for which subpopulations of examinees exhibit performance differentials that are not consistent with the performance differentials typically seen for those subpopulations on collections of items that purport to measure a common construct. DIF assessment…
Descriptors: Computer Assisted Testing, Constructed Response, Educational Assessment, Item Bias
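
A common index in the standardization tradition Dorans and Schmitt work in is STD P-DIF: match examinees on a total score, then take the focal-group-weighted average difference in proportion-correct on the studied item across score levels. A minimal sketch with illustrative names and no smoothing of sparse score levels:

```python
# Standardization-style DIF for one dichotomous item. Examinees are
# matched on their total score; the statistic is the focal-weighted
# mean difference in item proportion-correct between groups.
from collections import defaultdict

def std_p_dif(scores, groups, item_correct):
    """scores: matching totals; groups: 'focal'/'reference'; item_correct: 0/1."""
    tally = defaultdict(lambda: {"focal": [0, 0], "reference": [0, 0]})
    for s, g, y in zip(scores, groups, item_correct):
        tally[s][g][0] += y   # correct responses at matching score s
        tally[s][g][1] += 1   # examinees at matching score s
    num = den = 0.0
    for cells in tally.values():
        f_correct, f_n = cells["focal"]
        r_correct, r_n = cells["reference"]
        if f_n == 0 or r_n == 0:
            continue  # skip score levels not observed in both groups
        num += f_n * (f_correct / f_n - r_correct / r_n)
        den += f_n
    return num / den if den else 0.0

# Toy example: two score levels; the item disadvantages the focal group.
scores       = [10, 10, 10, 10, 20, 20, 20, 20]
groups       = ["focal", "focal", "reference", "reference"] * 2
item_correct = [0, 1, 1, 1, 1, 1, 1, 1]
print(std_p_dif(scores, groups, item_correct))  # -0.25
```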
van der Linden, Wim J. – 1995
Dichotomous item response theory (IRT) models can be viewed as families of stochastically ordered distributions of responses to test items. This paper explores several properties of such distributions. The focus is on the conditions under which stochastic order in families of conditional distributions is transferred to their inverse distributions,…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Foreign Countries
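
For orientation, the ordering property at issue can be stated compactly: when each item response function is nondecreasing in ability and responses are locally independent, both single responses and number-correct scores are stochastically ordered in ability (a standard result; the paper's question is when such order transfers to the inverse distributions):

```latex
% Monotone dichotomous IRT model: P_i(\theta) = \Pr\{X_i = 1 \mid \theta\}
% nondecreasing in \theta, with local independence across items.
\theta_1 \le \theta_2
\;\Longrightarrow\;
\Pr\{X_i = 1 \mid \theta_1\} \le \Pr\{X_i = 1 \mid \theta_2\}
\quad\text{and}\quad
\Pr\Bigl\{\sum_i X_i \ge x \Bigm| \theta_1\Bigr\}
\le \Pr\Bigl\{\sum_i X_i \ge x \Bigm| \theta_2\Bigr\}.
```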
Bergstrom, Betty A.; Stahl, John A. – 1992
This paper reports a method for assessing the adequacy of existing item banks for computer adaptive testing. The method takes into account content specifications, test length, and stopping rules, and can be used to determine if an existing item bank is adequate to administer a computer adaptive test efficiently across differing levels of examinee…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Evaluation Methods
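
The abstract does not spell out the procedure, but one simple form of such an adequacy check can be sketched: for a Rasch-calibrated bank, ask whether enough sufficiently informative items exist at every ability level to fill a fixed-length adaptive test. A hypothetical sketch (the grid, information floor, and test length are illustrative assumptions, not the authors' criteria):

```python
# Adequacy check for a Rasch-calibrated item bank: at each ability
# level on a grid, count the items still reasonably informative and
# ask whether a fixed-length adaptive test could be filled from them.
import math

def rasch_information(theta, b):
    """Fisher information of a Rasch item with difficulty b at ability theta."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

def bank_is_adequate(difficulties, test_length, grid, min_info=0.15):
    """True if every grid point has at least test_length informative items."""
    return all(
        sum(1 for b in difficulties if rasch_information(theta, b) >= min_info)
        >= test_length
        for theta in grid
    )

bank = [-2.0, -1.5, -1.0, -0.5, 0.0, 0.3, 0.8, 1.2, 1.9, 2.4]
print(bank_is_adequate(bank, test_length=3, grid=[-2, -1, 0, 1, 2]))  # True
```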
Clariana, Roy B. – 1990
Research has shown that multiple-choice questions formed by transforming or paraphrasing a reading passage provide a measure of student comprehension. It is argued that similar transformation and paraphrasing of lesson questions is an appropriate way to form parallel multiple-choice items to be used as a posttest measure of student comprehension.…
Descriptors: Comprehension, Computer Assisted Testing, Difficulty Level, Measurement Techniques
Jelden, D. L. – 1987
A study of 696 undergraduates at the University of Northern Colorado was undertaken to determine the effects of computerized unit test item feedback on final examination scores. The study, which employed the PHOENIX computer managed instruction system, included students at all undergraduate levels enrolled in an Oceanography course. To determine…
Descriptors: College Students, Computer Assisted Instruction, Computer Assisted Testing, Feedback
Lee, Jo Ann; And Others – 1984
The difficulty of test items administered by paper and pencil was compared with the difficulty of the same items administered by computer. The study was conducted to determine if an interaction exists between mode of test administration and ability. An arithmetic reasoning test was constructed for this study. All examinees had taken the Armed…
Descriptors: Adults, Comparative Analysis, Computer Assisted Testing, Difficulty Level
Samejima, Fumiko – 1981
In defense of retaining the "latent trait theory" term, instead of replacing it with "item response theory" as some recent research would have it, the following objectives are outlined: (1) investigation of theory and method for estimating the operating characteristics of discrete item responses using a minimum number of…
Descriptors: Adaptive Testing, Computer Assisted Testing, Factor Analysis, Latent Trait Theory
[Peer reviewed] Wainer, Howard; Kiely, Gerard L. – Journal of Educational Measurement, 1987
The testlet, a bundle of test items, alleviates some problems associated with computerized adaptive testing: context effects, lack of robustness, and item difficulty ordering. While testlets may be linear or hierarchical, the most useful ones are four-level hierarchical units, containing 15 items and partitioning examinees into 16 classes. (GDC)
Descriptors: Adaptive Testing, Computer Assisted Testing, Context Effect, Item Banks
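
The counts quoted in the abstract follow from branching arithmetic, assuming each of the four levels branches two ways on a right/wrong response: level k must stock 2^(k-1) items, while examinees are classified by the path they take through the tree:

```latex
% d-level hierarchical testlet with binary (right/wrong) branching;
% each examinee answers d items along one root-to-leaf path.
\text{items stocked} = \sum_{k=1}^{d} 2^{k-1} = 2^{d} - 1,
\qquad
\text{terminal classes} = 2^{d},
\qquad
d = 4 \;\Rightarrow\; 15 \text{ items},\ 16 \text{ classes}.
```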

