Publication Date
| Date range | Records |
| --- | --- |
| In 2026 | 3 |
| Since 2025 | 240 |
| Since 2022 (last 5 years) | 1373 |
| Since 2017 (last 10 years) | 2831 |
| Since 2007 (last 20 years) | 4821 |
Descriptor
| Descriptor | Records |
| --- | --- |
| Computer Assisted Testing | 7218 |
| Foreign Countries | 2054 |
| Test Construction | 1112 |
| Student Evaluation | 1067 |
| Evaluation Methods | 1061 |
| Test Items | 1058 |
| Adaptive Testing | 1053 |
| Educational Technology | 905 |
| Comparative Analysis | 835 |
| Scores | 832 |
| Higher Education | 825 |
Audience
| Audience | Records |
| --- | --- |
| Practitioners | 182 |
| Researchers | 146 |
| Teachers | 122 |
| Policymakers | 40 |
| Administrators | 36 |
| Students | 15 |
| Counselors | 9 |
| Parents | 4 |
| Media Staff | 3 |
| Support Staff | 3 |
Location
| Location | Records |
| --- | --- |
| Australia | 170 |
| United Kingdom | 153 |
| Turkey | 126 |
| China | 117 |
| Germany | 108 |
| Canada | 106 |
| Spain | 94 |
| Taiwan | 89 |
| Netherlands | 73 |
| Iran | 72 |
| United States | 68 |
What Works Clearinghouse Rating
| Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does not meet standards | 5 |
van der Linden, Wim J. – 1997
The case of adaptive testing under a multidimensional logistic response model is addressed. An adaptive algorithm is proposed that minimizes the (asymptotic) variance of the maximum-likelihood (ML) estimator of a linear combination of abilities of interest. The item selection criterion is a simple expression in closed form. In addition, it is…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
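The criterion the paper describes reduces, under a multidimensional two-parameter logistic (M2PL) model, to choosing the item that minimizes c′I(θ)⁻¹c, the asymptotic variance of the ML estimate of the combination c′θ. A minimal sketch of that idea follows; the item parameters and weight vector c are made up, and where the paper derives a closed-form criterion, this sketch simply evaluates the variance for each candidate:

```python
import numpy as np

rng = np.random.default_rng(0)

def m2pl_prob(theta, a, d):
    """Probability of a correct response under a multidimensional 2PL:
    P = logistic(a . theta + d)."""
    return 1.0 / (1.0 + np.exp(-(a @ theta + d)))

def select_item(theta, pool, used, c, I_used):
    """Choose the unused item minimizing c' (I + I_item)^-1 c, the
    asymptotic variance of the ML estimate of c'theta."""
    best_j, best_var = None, np.inf
    for j, (a, d) in enumerate(pool):
        if j in used:
            continue
        p = m2pl_prob(theta, a, d)
        I_new = I_used + p * (1.0 - p) * np.outer(a, a)
        var = c @ np.linalg.solve(I_new, c)
        if var < best_var:
            best_j, best_var = j, var
    return best_j, best_var

# Illustrative two-dimensional pool; parameters are made up
pool = [(rng.uniform(0.5, 2.0, size=2), rng.normal()) for _ in range(50)]
c = np.array([0.7, 0.3])        # linear combination of abilities of interest
theta = np.zeros(2)             # provisional ability estimate
I_used = 1e-3 * np.eye(2)       # small ridge keeps the matrix invertible early
used = set()
for _ in range(10):
    j, var = select_item(theta, pool, used, c, I_used)
    a, d = pool[j]
    p = m2pl_prob(theta, a, d)
    I_used += p * (1.0 - p) * np.outer(a, a)
    used.add(j)
    # ... administer item j, update theta by ML, and continue ...
    print(f"item {j:2d}  var(c'theta) ~ {var:.3f}")
```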
Leung, Chi-Keung; Chang, Hua-Hua; Hau, Kit-Tai – 2001
The multistage alpha-stratified computerized adaptive testing (CAT) design advocates a new philosophy of pool management and item selection in which low-discriminating items are used first. Simulation studies have demonstrated it to be effective both in reducing the item overlap rate and in enhancing pool utilization for certain pool types. Based on…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Selection
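A minimal sketch of the alpha-stratified idea: sort the pool by discrimination, cut it into strata, and match on difficulty within whichever stratum is active at the current stage. Pool size, strata count, and parameters below are illustrative, not from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative 2PL pool (a = discrimination, b = difficulty)
a = rng.lognormal(0.0, 0.3, 200)
b = rng.normal(0.0, 1.0, 200)

def alpha_stratify(a, n_strata):
    """Sort items by discrimination and cut the pool into equal-sized
    strata: low-a items serve the early stages, high-a items the late
    stages, which spreads exposure across the whole pool."""
    return np.array_split(np.argsort(a), n_strata)

def pick(stratum, theta, used):
    """Within the active stratum, take the unused item whose
    difficulty is closest to the provisional ability estimate."""
    cands = [j for j in stratum if j not in used]
    return min(cands, key=lambda j: abs(b[j] - theta))

theta, used = 0.0, set()
for stage, stratum in enumerate(alpha_stratify(a, n_strata=4)):
    for _ in range(5):                  # 5 items per stratum (illustrative)
        j = pick(stratum, theta, used)
        used.add(j)
        # ... administer item j and update theta ...
    print(f"stage {stage}: mean discrimination used = {a[list(used)].mean():.2f}")
```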
van der Linden, Wim J.; Scrams, David J.; Schnipke, Deborah L. – 2003
This paper proposes an item selection algorithm that can be used to neutralize the effect of time limits in computer adaptive testing. The method is based on a statistical model for the response-time distributions of the test takers on the items in the pool; the model is updated each time a new item has been administered. Predictions from the model are…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Linear Programming
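One way such a constraint can enter item selection is sketched below, assuming the lognormal response-time model common in this line of work. The budgeting rule and all parameters are illustrative, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative 2PL pool with lognormal response-time intensities
n = 100
a = rng.lognormal(0.0, 0.3, n)          # discrimination
b = rng.normal(0.0, 1.0, n)             # difficulty
beta = rng.normal(4.0, 0.3, n)          # time intensity: E[log seconds]

def info(theta, j):
    """2PL Fisher information of item j at theta."""
    p = 1.0 / (1.0 + np.exp(-a[j] * (theta - b[j])))
    return a[j] ** 2 * p * (1 - p)

def select(theta, tau, used, time_left, items_left):
    """Among unused items whose predicted time fits the remaining
    per-item budget, take the most informative one; fall back to the
    whole pool if the constraint cannot be met. tau is the test
    taker's estimated speed (higher tau = faster responses)."""
    pred = np.exp(beta - tau)                    # predicted seconds per item
    budget = time_left / items_left
    ok = [j for j in range(n) if j not in used and pred[j] <= budget]
    cands = ok or [j for j in range(n) if j not in used]
    return max(cands, key=lambda j: info(theta, j))

theta, tau, used = 0.0, 0.0, set()
time_left, items_left = 1200.0, 20               # 20 minutes, 20 items
while items_left:
    j = select(theta, tau, used, time_left, items_left)
    used.add(j)
    time_left -= float(rng.lognormal(beta[j] - tau, 0.4))  # simulated time
    items_left -= 1
    # ... update theta from the response and tau from observed times ...
print(f"time remaining: {time_left:.0f} s")
```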
Pommerich, Mary; Segall, Daniel O. – 2003
Research discussed in this paper was conducted as part of an ongoing large-scale simulation study to evaluate methods of calibrating pretest items for computerized adaptive testing (CAT) pools. The simulation was designed to mimic the operational CAT Armed Services Vocational Aptitude Battery (ASVAB) testing program, in which a single pretest item…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Maximum Likelihood Statistics
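Calibrating a pretest item online can be sketched as maximum-likelihood estimation of the item's parameters, treating the abilities estimated from the operational items as known. A minimal 2PL version on simulated data (the operational ASVAB design is far more elaborate):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Simulated calibration sample: thetas are treated as known, as in
# online calibration where they come from the operational CAT items
theta = rng.normal(0.0, 1.0, 2000)
a_true, b_true = 1.2, 0.5
p = 1.0 / (1.0 + np.exp(-a_true * (theta - b_true)))
y = rng.binomial(1, p)                 # responses to the pretest item

def neg_loglik(params):
    """Negative 2PL log-likelihood of the pretest item's parameters."""
    a, b = params
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    p = np.clip(p, 1e-9, 1 - 1e-9)     # guard against log(0)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

fit = minimize(neg_loglik, x0=[1.0, 0.0], method="Nelder-Mead")
print(f"a = {fit.x[0]:.2f} (true {a_true}), b = {fit.x[1]:.2f} (true {b_true})")
```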
van Krimpen-Stoop, Edith M. L. A.; Meijer, Rob R. – 2000
Item scores that do not fit an assumed item response theory model may cause the latent trait value to be estimated inaccurately. For computerized adaptive tests (CATs) with dichotomous items, several person-fit statistics for detecting nonfitting item score patterns have been proposed. Both for paper-and-pencil (P&P) tests and CATs, detection of…
Descriptors: Adaptive Testing, Computer Assisted Testing, Goodness of Fit, Item Response Theory
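One widely used person-fit statistic for dichotomous response patterns is the standardized log-likelihood l_z; whether it is among the exact statistics the paper evaluates is not shown here, but it illustrates the general idea:

```python
import numpy as np

def lz_statistic(u, p):
    """Standardized log-likelihood person-fit statistic l_z for a
    dichotomous response pattern u given model probabilities p.
    Large negative values flag patterns the model fits badly."""
    u, p = np.asarray(u, float), np.asarray(p, float)
    l0 = np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))
    mean = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    var = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
    return (l0 - mean) / np.sqrt(var)

# Model probabilities for ten administered items (illustrative)
p = np.array([0.9, 0.85, 0.8, 0.7, 0.65, 0.6, 0.5, 0.4, 0.3, 0.2])
print(lz_statistic([1, 1, 1, 1, 1, 1, 0, 0, 0, 0], p))   # fitting pattern
print(lz_statistic([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], p))   # aberrant pattern
```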
Zhu, Daming; Fan, Meichu – 1999
The convention for selecting starting points (that is, initial items) on a computerized adaptive test (CAT) is to choose as starting points items of medium difficulty for all examinees. Selecting a starting point based on prior information about an individual's ability was first suggested many years ago, but has been believed unimportant provided…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Difficulty Level
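The contrast between the two conventions is easy to state in code; the difficulties below are illustrative:

```python
def starting_item(b, prior_theta=None):
    """Pick the first CAT item. The convention is an item of medium
    difficulty (b near 0); with prior information about the examinee,
    start instead at the item nearest the prior ability estimate."""
    target = 0.0 if prior_theta is None else prior_theta
    return min(range(len(b)), key=lambda j: abs(b[j] - target))

b = [-2.0, -1.2, -0.4, 0.0, 0.3, 1.1, 1.8]   # illustrative difficulties
print(starting_item(b))                      # conventional start: item 3
print(starting_item(b, prior_theta=1.5))     # prior-informed start: item 6
```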
Bishop, Dan – InCider, 1983
Following a discussion of drill/practice and tutorial programming techniques (SE 533 144), this part focuses on techniques dealing with text problems. Various listings are included to demonstrate such methods as the READ/DATA approach to presenting questions to students. (JN)
Descriptors: Computer Assisted Testing, Computer Programs, Elementary Secondary Education, Instructional Materials
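BASIC's READ/DATA pattern stores quiz items as data records that the program reads in order. A loose Python analogue of that structure (the questions are placeholders, not from the article):

```python
# Quiz items live in a data block and are "read" one record at a time,
# mirroring BASIC's READ/DATA approach to presenting questions.
DATA = [
    ("What is 7 x 8?", "56"),
    ("Capital of France?", "Paris"),
]

def run_quiz(data):
    score = 0
    for question, answer in data:        # READ the next DATA record
        reply = input(question + " ")
        if reply.strip().lower() == answer.lower():
            score += 1
            print("Correct!")
        else:
            print(f"No, the answer is {answer}.")
    print(f"Score: {score}/{len(data)}")

if __name__ == "__main__":
    run_quiz(DATA)
```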
Peer reviewed: Anderson, Jonathan – Journal of Research in Reading, 1983
Reports a number of modifications to the computer readability program STAR (Simple Tests Approach to Readability) designed to make it more useful. (FL)
Descriptors: Computer Assisted Testing, Content Analysis, Readability, Readability Formulas
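STAR's own formula is not given in the abstract; as a generic illustration of what a readability program computes, here is the well-known Flesch Reading Ease score with a crude syllable heuristic (not STAR's actual method):

```python
import re

def flesch_reading_ease(text):
    """Flesch Reading Ease: higher scores mean easier text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    def syllables(w):
        # crude heuristic: count vowel groups, minimum one per word
        return max(1, len(re.findall(r"[aeiouy]+", w.lower())))
    n_words = max(1, len(words))
    n_syll = sum(syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (n_syll / n_words)

print(round(flesch_reading_ease("The cat sat on the mat. It was happy."), 1))
```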
Jelden, D. L. – AEDS Monitor, 1982
Describes a procedure for using the computer to help evaluate student progress on pretests, unit tests, posttests, or a combination of tests. The use of computers to evaluate the cognitive objectives of a course is examined. Twenty-four references are listed. (MER)
Descriptors: Cognitive Tests, Computer Assisted Testing, Criterion Referenced Tests, Flow Charts
Proctor, Andrew J. – Journal of Physical Education and Recreation, 1980
As computers become increasingly available to public schools, physical education teachers and coaches will have access to the many services and conveniences the computer offers. Physical education majors should be kept current with technology that affects their professional development. (CJ)
Descriptors: Computer Assisted Testing, Course Content, Higher Education, Measurement Equipment
Peer reviewed: Stocking, Martha L. – Journal of Educational and Behavioral Statistics, 1996
An alternative method for scoring adaptive tests, based on number-correct scores, is explored and compared with a method that relies more directly on item response theory. Using the number-correct score, with the necessary adjustment for intentional differences in adaptive test difficulty, is a statistically viable scoring method. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Item Response Theory
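Number-correct scoring of an adaptive test typically adjusts for form difficulty by inverting the form's test characteristic curve (TCC). A sketch under a 2PL model with illustrative parameters, not necessarily the paper's exact adjustment:

```python
import numpy as np

def tcc(theta, a, b):
    """Test characteristic curve: expected number-correct at theta
    under a 2PL model."""
    return np.sum(1.0 / (1.0 + np.exp(-a * (theta - b))))

def theta_from_number_correct(x, a, b, lo=-4.0, hi=4.0, tol=1e-6):
    """Invert the TCC by bisection: find theta whose expected
    number-correct equals the observed score x. Because each adaptive
    form has its own TCC, this adjusts for intentional differences
    in test difficulty."""
    x = min(max(x, tcc(lo, a, b) + 1e-9), tcc(hi, a, b) - 1e-9)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if tcc(mid, a, b) < x else (lo, mid)
    return 0.5 * (lo + hi)

# Illustrative 10-item adaptive form
a = np.full(10, 1.2)
b = np.linspace(-1.5, 1.5, 10)
print(round(theta_from_number_correct(7, a, b), 2))
```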
Peer reviewed: Potenza, Maria T.; Stocking, Martha L. – Journal of Educational Measurement, 1997
Common strategies for dealing with flawed items in conventional testing, grounded in principles of fairness to examinees, are re-examined in the context of adaptive testing. The additional strategy of retesting from a pool cleansed of flawed items is found, through a Monte Carlo study, to bring about no practical improvement. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Monte Carlo Methods
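The flavor of such a Monte Carlo comparison can be sketched with a fixed (non-adaptive) form and a single mis-keyed item; everything here is illustrative and much simpler than the study's adaptive design:

```python
import numpy as np

rng = np.random.default_rng(4)

# Fixed 20-item 2PL form with one mis-keyed item: compare the score
# bias of (a) scoring everything as-is vs (b) dropping the flawed
# item and rescaling. All parameters are illustrative.
n_items, n_sim = 20, 20000
a, b = np.full(n_items, 1.0), np.linspace(-2, 2, n_items)
flawed = 7                                        # index of the bad item

theta = rng.normal(0, 1, n_sim)
p = 1 / (1 + np.exp(-a * (theta[:, None] - b)))   # correct-key probabilities
p_obs = p.copy()
p_obs[:, flawed] = 1 - p_obs[:, flawed]           # mis-key flips this item
u = rng.binomial(1, p_obs)                        # observed item scores

fair = p.sum(axis=1)                              # expected score, correct key
score_all = u.sum(axis=1)
keep = np.arange(n_items) != flawed
score_drop = u[:, keep].sum(axis=1) * n_items / (n_items - 1)

print(f"bias, scoring all items : {np.mean(score_all - fair):+.3f}")
print(f"bias, dropping the item : {np.mean(score_drop - fair):+.3f}")
```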
Peer reviewed: Folk, Valerie Greaud; Green, Bert F. – Applied Psychological Measurement, 1989
Some effects of using unidimensional item response theory (IRT) were examined when the assumption of unidimensionality was violated. Adaptive and nonadaptive tests were used. It appears that use of a unidimensional model can bias parameter estimation, adaptive item selection, and ability estimation for the two types of testing. (TJH)
Descriptors: Ability Identification, Adaptive Testing, Computer Assisted Testing, Computer Simulation
Peer reviewed: Dodd, Barbara G.; And Others – Applied Psychological Measurement, 1989
General guidelines are developed to assist practitioners in devising operational computerized adaptive testing systems based on the graded response model. The effects of the following major variables were examined: item pool size; the stepsize used along the trait continuum until a maximum likelihood estimate could be computed; and the stopping rule…
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Simulation, Item Banks
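The graded response model underlying such a system assigns each polytomous item ordered boundary parameters. A sketch of the category probabilities, and of why a stepsize rule is needed before maximum likelihood estimation becomes possible (all parameters illustrative):

```python
import numpy as np

def grm_probs(theta, a, bs):
    """Samejima graded response model: probability of each of the
    len(bs)+1 ordered categories, given discrimination a and ordered
    boundary parameters bs."""
    star = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(bs, float))))
    star = np.concatenate(([1.0], star, [0.0]))
    return star[:-1] - star[1:]

def theta_mle(responses, items, grid=np.linspace(-4, 4, 801)):
    """Grid-search ML trait estimate. With an all-lowest or all-highest
    response pattern the likelihood has no interior maximum, which is
    why operational CATs step theta by a fixed amount until a mixed
    pattern makes the MLE computable."""
    ll = np.zeros_like(grid)
    for (a, bs), k in zip(items, responses):
        ll += np.log([grm_probs(t, a, bs)[k] for t in grid])
    return grid[np.argmax(ll)]

items = [(1.5, [-1.0, 0.0, 1.0]), (1.2, [-0.5, 0.5, 1.5])]  # illustrative
print(np.round(grm_probs(0.3, *items[0]), 3))   # four category probabilities
print(theta_mle([2, 1], items))                 # trait estimate
```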
Peer reviewed: Kingsbury, G. Gage; Zara, Anthony R. – Applied Measurement in Education, 1991
This simulation investigated two procedures that reduce differences between paper-and-pencil testing and computerized adaptive testing (CAT) by making CAT content sensitive. Results indicate that the price in terms of additional test items of using constrained CAT for content balancing is much smaller than that of using testlets. (SLD)
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Computer Simulation
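A common form of constrained CAT picks the content area currently furthest below its target share, then the most informative item within it. A sketch with made-up pool parameters and targets (not necessarily the exact procedure simulated in the paper):

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative pool: 2PL parameters plus a content-area label per item
n = 120
a = rng.lognormal(0.0, 0.3, n)
b = rng.normal(0.0, 1.0, n)
area = rng.integers(0, 3, n)            # three content areas
target = np.array([0.5, 0.3, 0.2])      # desired content mix

def info(theta, j):
    """2PL Fisher information of item j at theta."""
    p = 1.0 / (1.0 + np.exp(-a[j] * (theta - b[j])))
    return a[j] ** 2 * p * (1 - p)

def next_item(theta, used):
    """Constrained selection: find the content area furthest below its
    target share, then take the most informative unused item in it."""
    counts = np.bincount(np.array([area[j] for j in used], dtype=int),
                         minlength=3)
    share = counts / max(1, len(used))
    k = int(np.argmax(target - share))
    cands = [j for j in range(n) if j not in used and area[j] == k]
    return max(cands, key=lambda j: info(theta, j))

theta, used = 0.0, []
for _ in range(20):
    used.append(next_item(theta, used))
    # ... administer the item and update theta ...
print(np.bincount(area[used], minlength=3) / 20)  # realized content mix
```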


