Patience, Wayne M.; Reckase, Mark D. – 1978
The feasibility of implementing self-paced computerized tailored testing evaluation methods in an undergraduate measurement and evaluation course, and possible differences in achievement levels under a paced versus self-paced testing schedule were investigated. A maximum likelihood tailored testing procedure based on the simple logistic model had…
Descriptors: Academic Achievement, Achievement Tests, Adaptive Testing, Computer Assisted Testing
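The "simple logistic model" named in the abstract is the one-parameter (Rasch) model, under which maximum likelihood scoring reduces to a one-dimensional Newton-Raphson search. A minimal sketch with made-up responses and item difficulties (not the authors' implementation):

```python
import math

def rasch_ml_theta(responses, difficulties, iters=20):
    """Newton-Raphson maximum-likelihood ability estimate under the Rasch
    ("simple logistic") model: P(correct) = 1 / (1 + exp(-(theta - b)))."""
    theta = 0.0  # start at the center of the ability scale
    for _ in range(iters):
        p = [1.0 / (1.0 + math.exp(-(theta - b))) for b in difficulties]
        grad = sum(x - pi for x, pi in zip(responses, p))  # d logL / d theta
        info = sum(pi * (1.0 - pi) for pi in p)            # test information
        if info == 0.0:
            break
        theta += grad / info
    return theta

# Hypothetical data: 1 = correct, 0 = incorrect; difficulties on the logit scale
print(rasch_ml_theta([1, 1, 0, 1, 0], [-1.0, -0.5, 0.0, 0.5, 1.0]))
```

In a tailored test this estimate would be updated after each response and used to select the next item; note that the ML estimate diverges for all-correct or all-incorrect patterns.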
Surkan, Alvin J.; Evans, Richard M. – 1979
Since fall 1976, an undergraduate measurement class has utilized a 32K microcomputer programmed in APL to present replicate sets of questions randomly selected from 80-question item pools representing each of three domains of knowledge based on Sax's Principles of Educational Measurement and Evaluation. Multiple choice questions are presented on a…
Descriptors: Computer Assisted Testing, Computer Programs, Higher Education, Item Analysis
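The selection step described above is ordinary sampling without replacement from a fixed pool; a minimal sketch with a hypothetical 80-question pool (the original ran in APL on a 32K microcomputer):

```python
import random

def replicate_question_set(pool, n_items, seed):
    """Draw one replicate set of questions from a domain's item pool.
    Seeding the generator makes a given replicate reproducible."""
    return random.Random(seed).sample(pool, n_items)

pool = [f"Q{i:02d}" for i in range(1, 81)]  # stand-in for an 80-question pool
print(replicate_question_set(pool, n_items=10, seed=7))
```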
Millman, Jason – 1977
A unique system is described for creating tests by computer. It is unique because, instead of storing items in the computer, item algorithms similar to Hively's notion of item forms are banked. Every item, and thus every test, represents a sample from domains consisting of thousands of items. The paper contains a discussion of the special…
Descriptors: Computer Assisted Testing, Computer Programs, Criterion Referenced Tests, Item Banks
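An item form in Hively's sense is a template plus rules for filling its slots, so the bank stores generators rather than finished items, and each draw yields a different but interchangeable item. A hypothetical sketch of one such algorithmic item (not Millman's actual system):

```python
import random

def arithmetic_item_form(rng):
    """One illustrative 'item form': a fixed stem template whose numeric
    slots are filled at generation time."""
    a, b = rng.randint(10, 99), rng.randint(10, 99)
    key = a + b
    distractors = {key + 1, key - 1, key + 10}  # common-error foils
    return f"What is {a} + {b}?", key, sorted({key, *distractors})

rng = random.Random(42)  # seed so a generated test can be reproduced
for _ in range(3):
    print(arithmetic_item_form(rng))
```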
Eaves, Ronald C.; Smith, Earl – Journal of Experimental Education, 1986 (peer reviewed)
The effects of examination format and previous experience with microcomputers on the test scores of 96 undergraduate students were investigated. Results indicated no significant differences in the scores obtained on the two types of test administration (microcomputer and traditional paper and pencil). Computer experience was not an important…
Descriptors: College Students, Computer Assisted Testing, Educational Media, Higher Education
Clifton, Charles, Jr.; And Others – Journal of Verbal Learning and Verbal Behavior, 1984
Describes two experiments which demonstrated that readers use specific lexical information in comprehending sentences to anticipate and prepare for the appearance of lexical noun phrases and to postulate "gaps" that are associated with "fillers." Results also indicated that lexically based expectations involve the use of information about…
Descriptors: Computer Assisted Testing, Grammar, Lexicology, Pragmatics
Bewley, William L.; Chung, Gregory K. W. K.; Kim, Jin-Ok; Lee, John J.; Saadat, Farzad – National Center for Research on Evaluation, Standards, and Student Testing (CRESST), 2006
Because of the great promise of distance learning for delivering cost-effective instruction, there is great interest in determining whether it actually is effective and, more interestingly, in determining what variables of design and implementation make it more or less effective. Unfortunately, much of the research has been based on simple…
Descriptors: Computer Assisted Testing, Teaching Methods, Psychometrics, Course Objectives
Hambleton, Ronald K.; Sireci, Stephen G.; Swaminathan, H.; Xing, Dehui; Rizavi, Saba – 2003
The purposes of this research study were to develop and field test anchor-based judgmental methods for enabling test specialists to estimate item difficulty statistics. The study consisted of three related field tests. In each, researchers worked with six Law School Admission Test (LSAT) test specialists and one or more of the LSAT subtests. The…
Descriptors: Adaptive Testing, College Entrance Examinations, Computer Assisted Testing, Difficulty Level
Pommerich, Mary; Segall, Daniel O. – 2003
Research discussed in this paper was conducted as part of an ongoing large-scale simulation study to evaluate methods of calibrating pretest items for computerized adaptive testing (CAT) pools. The simulation was designed to mimic the operational CAT Armed Services Vocational Aptitude Battery (ASVAB) testing program, in which a single pretest item…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Maximum Likelihood Statistics
van Krimpen-Stoop, Edith M. L. A.; Meijer, Rob R. – 2000
Item scores that do not fit an assumed item response theory model may cause the latent trait value to be estimated inaccurately. For computerized adaptive tests (CAT) with dichotomous items, several person-fit statistics for detecting nonfitting item score patterns have been proposed. Both for paper-and-pencil (P&P) tests and CATs, detection of…
Descriptors: Adaptive Testing, Computer Assisted Testing, Goodness of Fit, Item Response Theory
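A representative person-fit statistic of the kind the paper evaluates is the standardized log-likelihood l_z of Drasgow, Levine, and Williams (1985). A minimal sketch for dichotomous items with hypothetical model probabilities; one point of this literature is that the null distribution of such statistics departs from the standard normal when items are selected adaptively:

```python
import math

def lz_person_fit(responses, probs):
    """Standardized log-likelihood person-fit statistic l_z for dichotomous
    items; `probs` are model-implied P(correct) at the estimated ability."""
    l0 = sum(x * math.log(p) + (1 - x) * math.log(1 - p)
             for x, p in zip(responses, probs))
    expected = sum(p * math.log(p) + (1 - p) * math.log(1 - p) for p in probs)
    variance = sum(p * (1 - p) * math.log(p / (1 - p)) ** 2 for p in probs)
    return (l0 - expected) / math.sqrt(variance)

# Large negative values flag response patterns that misfit the model
print(lz_person_fit([1, 0, 1, 1, 0], [0.9, 0.8, 0.7, 0.4, 0.3]))
```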
Gallagher, Ann; Bennett, Randy Elliot; Cahalan, Cara; Rock, Donald A. – Educational Assessment, 2002 (peer reviewed)
Evaluated whether variance due to computer-based presentation was associated with performance on a new constructed-response type, Mathematical Expression, that requires students to enter expressions. No statistical evidence of construct-irrelevant variance was detected for the 178 undergraduate and graduate students, but some examinees reported…
Descriptors: College Students, Computer Assisted Testing, Constructed Response, Educational Technology
Levinson, Edward M.; Zeman, Heather L.; Ohler, Denise L. – Career Development Quarterly, 2002 (peer reviewed)
Assesses the reliability and validity of the Web-based version of the Career Key. Participants completed the Web-based Career Key and the Self-Directed Search-Form R, then completed a second Career Key administration two weeks later. Test-retest reliability ranged between .75 and .84. With the exception of the conventional scale, all…
Descriptors: Career Counseling, Computer Assisted Testing, Concurrent Validity, Test Reliability
Folk, Valerie Greaud; Green, Bert F. – Applied Psychological Measurement, 1989 (peer reviewed)
Some effects of using unidimensional item response theory (IRT) were examined when the assumption of unidimensionality was violated. Adaptive and nonadaptive tests were used. It appears that use of a unidimensional model can bias parameter estimation, adaptive item selection, and ability estimation for the two types of testing. (TJH)
Descriptors: Ability Identification, Adaptive Testing, Computer Assisted Testing, Computer Simulation
Gajar, Anna H. – Journal of Learning Disabilities, 1989 (peer reviewed)
A computer analysis of the compositions written by university students with (N=30) and without (N=60) learning disabilities (LD) found LD students were not as fluent in word production and in the number of different words used but did produce longer sentences and T-units than nondisabled peers. (DB)
Descriptors: College Students, Computer Assisted Testing, Higher Education, Learning Disabilities
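The fluency indices mentioned (words produced and number of different words) are simple token and type counts; a minimal sketch (T-unit segmentation, which requires syntactic analysis, is not shown):

```python
def fluency_measures(text):
    """Token and type counts used as rough composition fluency indices:
    total words produced and number of different words."""
    tokens = [w.strip(".,;:!?\"'()").lower() for w in text.split()]
    tokens = [t for t in tokens if t]
    return {"total_words": len(tokens), "different_words": len(set(tokens))}

print(fluency_measures("The test was easy. The test was very easy."))
# -> {'total_words': 9, 'different_words': 5}
```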
Kingsbury, G. Gage; Zara, Anthony R. – Applied Measurement in Education, 1991 (peer reviewed)
This simulation investigated two procedures that reduce differences between paper-and-pencil testing and computerized adaptive testing (CAT) by making CAT content sensitive. Results indicate that the price in terms of additional test items of using constrained CAT for content balancing is much smaller than that of using testlets. (SLD)
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Computer Simulation
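The content-balancing step in constrained CAT is simple to state: administer the next item from the content area whose running proportion lags its target the most. A minimal sketch of that selection rule (choosing the most informative item within the winning area is omitted):

```python
def next_content_area(targets, counts):
    """Constrained-CAT content balancing: return the content area whose
    observed proportion falls furthest below its target proportion.
    `targets` maps area -> desired share; `counts` maps area -> items given."""
    total = sum(counts.values())
    def deficit(area):
        observed = counts[area] / total if total else 0.0
        return targets[area] - observed
    return max(targets, key=deficit)

targets = {"algebra": 0.40, "geometry": 0.35, "arithmetic": 0.25}
counts = {"algebra": 3, "geometry": 1, "arithmetic": 2}
print(next_content_area(targets, counts))  # -> "geometry"
```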
Kapes, Jerome T.; Vansickle, Timothy R. – Measurement and Evaluation in Counseling and Development, 1992 (peer reviewed)
Examined equivalence of mode of administration of the Career Decision-Making System, comparing the paper-and-pencil and computer-based versions. Findings from 61 undergraduate students indicated that the computer-based version was significantly more reliable than the paper-and-pencil version and was generally equivalent in other respects.…
Descriptors: Comparative Testing, Computer Assisted Testing, Higher Education, Test Format
