Publication Date
| Date range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 30 |
| Since 2022 (last 5 years) | 169 |
| Since 2017 (last 10 years) | 330 |
| Since 2007 (last 20 years) | 614 |
Descriptor
| Descriptor | Records |
| --- | --- |
| Computer Assisted Testing | 1058 |
| Test Items | 1058 |
| Adaptive Testing | 449 |
| Test Construction | 386 |
| Item Response Theory | 255 |
| Item Banks | 223 |
| Foreign Countries | 194 |
| Difficulty Level | 166 |
| Test Format | 160 |
| Item Analysis | 158 |
| Simulation | 142 |
Audience
| Audience | Records |
| --- | --- |
| Researchers | 24 |
| Practitioners | 20 |
| Teachers | 13 |
| Students | 2 |
| Administrators | 1 |
Location
| Location | Records |
| --- | --- |
| Germany | 17 |
| Australia | 13 |
| Japan | 12 |
| Taiwan | 12 |
| Turkey | 12 |
| United Kingdom | 12 |
| China | 11 |
| Oregon | 10 |
| Canada | 9 |
| Netherlands | 9 |
| United States | 9 |
Laws, Policies, & Programs
| Law, policy, or program | Records |
| --- | --- |
| Individuals with Disabilities… | 8 |
| Americans with Disabilities… | 1 |
| Head Start | 1 |
van der Linden, Wim J.; Zwarts, Michel A. – 1986
The use of item response theory (IRT) is a prerequisite to successful use of computerized test systems. In item response models, as opposed to classical test theory, the abilities of the examinees and the properties of the items are parameterized separately. Therefore, when measuring the abilities of examinees, the model implicitly corrects for…
Descriptors: Ability Identification, Adaptive Testing, Aptitude Tests, Computer Assisted Testing
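The abstract above turns on the core feature of IRT: examinee ability and item properties enter the model as separate parameters. As a minimal sketch (the abstract does not name a specific model, so the three-parameter logistic form and the parameter values below are assumptions), the item response function can be written as:

```python
import math

def three_pl_probability(theta, a, b, c):
    """Probability of a correct response under the three-parameter
    logistic (3PL) IRT model. Person ability (theta) and item
    properties (a = discrimination, b = difficulty, c = pseudo-guessing)
    are parameterized separately, unlike a classical total-score model."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical values: an average-ability examinee on a moderately hard item.
print(three_pl_probability(theta=0.0, a=1.2, b=0.5, c=0.2))
```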
Choppin, Bruce H. – 1983
In the answer-until-correct mode of multiple-choice testing, respondents are directed to continue choosing among the alternatives to each item until they find the correct response. There is no consensus as to how to convert the resulting pattern of responses into a measure because of two conflicting models of item response behavior. The first…
Descriptors: Computer Assisted Testing, Difficulty Level, Guessing (Tests), Knowledge Level
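The Choppin abstract concerns how an answer-until-correct response pattern should be converted into a measure. The sketch below shows only one simple, hypothetical scoring convention (credit decreases with each additional attempt); it is not one of the two models discussed in the paper, which the truncated abstract does not spell out.

```python
def answer_until_correct_score(num_alternatives, attempts_used):
    """Hypothetical scoring rule for an answer-until-correct item:
    full credit for success on the first attempt, one point less for
    each further attempt, zero when every alternative had to be tried."""
    if not 1 <= attempts_used <= num_alternatives:
        raise ValueError("attempts_used must lie between 1 and num_alternatives")
    return num_alternatives - attempts_used

# A five-option item answered correctly on the second attempt scores 3.
print(answer_until_correct_score(num_alternatives=5, attempts_used=2))
```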
Ree, Malcolm James – 1978
The computer can assist test construction in the following four ways: (1) storage or banking of test items; (2) banking of item attributes; (3) test construction; and (4) test printing. Automated Item Banking (AIB) is a computerized item storage and test construction system which illustrates these capabilities. It was developed, implemented, and…
Descriptors: Aptitude Tests, Computer Assisted Testing, Computers, Higher Education
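The four capabilities listed in the abstract (item banking, attribute banking, test construction, and test printing) can be pictured with a toy structure like the one below; the field names and selection rule are hypothetical, not taken from the AIB system.

```python
# A toy item bank: each entry stores the item text plus banked attributes.
item_bank = [
    {"id": 1, "stem": "2 + 2 = ?", "difficulty": 0.20, "content_area": "arithmetic"},
    {"id": 2, "stem": "Solve x^2 = 9", "difficulty": 0.55, "content_area": "algebra"},
    {"id": 3, "stem": "Derivative of x^2?", "difficulty": 0.80, "content_area": "calculus"},
]

def assemble_test(bank, max_difficulty, length):
    """Simple test construction: pick the first `length` items whose
    banked difficulty does not exceed a target."""
    eligible = [item for item in bank if item["difficulty"] <= max_difficulty]
    return eligible[:length]

def print_test(items):
    """Test 'printing': render the assembled items as numbered text."""
    for number, item in enumerate(items, start=1):
        print(f"{number}. {item['stem']}")

print_test(assemble_test(item_bank, max_difficulty=0.6, length=2))
```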
Ree, Malcolm James; Jensen, Harald E. – 1980
By means of computer simulation of test responses, the reliability of item analysis data and the accuracy of equating were examined for hypothetical samples of 250, 500, 1000, and 2000 subjects for two tests with 20 equating items plus 60 additional items on the same scale. Birnbaum's three-parameter logistic model was used for the simulation. The…
Descriptors: Computer Assisted Testing, Equated Scores, Error of Measurement, Item Analysis
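The study generated test responses by computer simulation under Birnbaum's three-parameter logistic model. A minimal version of that response-generation step, with illustrative parameter values rather than those of the study, might look like this:

```python
import math
import random

def simulate_3pl_response(theta, a, b, c, rng):
    """Draw a single 0/1 item response from Birnbaum's three-parameter
    logistic model by comparing the model probability to a uniform draw."""
    p = c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))
    return 1 if rng.random() < p else 0

rng = random.Random(0)
# Hypothetical 80-item form (20 equating items plus 60 additional items).
item_params = [(1.0, rng.gauss(0.0, 1.0), 0.2) for _ in range(80)]
responses = [simulate_3pl_response(0.3, a, b, c, rng) for (a, b, c) in item_params]
print(sum(responses), "of", len(responses), "items answered correctly")
```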
Thompson, Bruce; Levitov, Justin E. – Collegiate Microcomputer, 1985
Discusses features of a microcomputer program, SCOREIT, used at New Orleans' Loyola University and several high schools to score and analyze test results. Benefits and dimensions of the program's automated test and item analysis are outlined, and several examples illustrating test and item analyses by SCOREIT are presented. (MBR)
Descriptors: Computer Assisted Testing, Computer Software, Difficulty Level, Higher Education
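SCOREIT's own output format is not reproduced here, but the classical item statistics such programs typically report (item difficulty as the proportion answering correctly, and a point-biserial discrimination against the total score) can be sketched as follows with toy data:

```python
from statistics import mean, pstdev

def item_analysis(score_matrix):
    """Classical item analysis on a 0/1 score matrix (rows = examinees,
    columns = items): per-item difficulty (proportion correct) and a
    point-biserial discrimination against the total test score."""
    totals = [sum(row) for row in score_matrix]
    sd_total = pstdev(totals)
    stats = []
    for j in range(len(score_matrix[0])):
        item = [row[j] for row in score_matrix]
        p = mean(item)
        if sd_total == 0 or p in (0.0, 1.0):
            r_pb = 0.0
        else:
            mean_correct = mean(t for t, x in zip(totals, item) if x == 1)
            r_pb = (mean_correct - mean(totals)) / sd_total * (p / (1 - p)) ** 0.5
        stats.append({"difficulty": p, "discrimination": r_pb})
    return stats

# Five examinees, three items (toy data).
print(item_analysis([[1, 0, 1], [1, 1, 1], [0, 0, 1], [1, 0, 0], [1, 1, 1]]))
```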
Pommerich, Mary; Burden, Timothy – 2000
A small-scale study was conducted to compare test-taking strategies, problem-solving strategies, and general impressions about the test across computer and paper-and-pencil administration modes. Thirty-six examinees (high school students) participated in the study. Each examinee took a test in one of the content areas of English, Mathematics,…
Descriptors: Adaptive Testing, Attitudes, Comparative Analysis, Computer Assisted Testing
Peer reviewed: Wainer, Howard; Lewis, Charles – Journal of Educational Measurement, 1990
Three different applications of the testlet concept are presented, and the psychometric models most suitable for each application are described. Difficulties that testlets can help overcome include (1) context effects; (2) item ordering; and (3) content balancing. Implications for test construction are discussed. (SLD)
Descriptors: Algorithms, Computer Assisted Testing, Elementary Secondary Education, Item Response Theory
Peer reviewed: Harasym, Peter H.; And Others – Journal of Educational Computing Research, 1993
Discussion of the use of human markers to mark responses on write-in questions focuses on a study that determined the feasibility of using a computer program to mark write-in responses for the Medical Council of Canada Qualifying Examination. The computer performance was compared with that of physician markers. (seven references) (LRW)
Descriptors: Comparative Analysis, Computer Assisted Testing, Computer Software Development, Computer Software Evaluation
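The marking procedure used for the Medical Council of Canada examination is not described in the abstract; purely as an illustration of machine marking of write-in responses, a toy marker might normalize the text and check it against a key of acceptable answers:

```python
def mark_write_in(response, acceptable_answers):
    """Toy automated marker for a write-in item: normalize whitespace and
    case, then award 1 point if the response matches any keyed answer.
    The normalization and key are hypothetical."""
    normalized = " ".join(response.lower().strip().split())
    return int(normalized in {a.lower() for a in acceptable_answers})

print(mark_write_in("  Myocardial Infarction ", ["myocardial infarction", "heart attack"]))  # 1
```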
Peer reviewed: De Ayala, R. J. – Applied Psychological Measurement, 1992
A computerized adaptive test (CAT) based on the nominal response model (NR CAT) was implemented, and the performance of the NR CAT and a CAT based on the three-parameter logistic model was compared. The NR CAT produced trait estimates comparable to those of the three-parameter test. (SLD)
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Equations (Mathematics)
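The nominal response model behind the NR CAT gives every response category of an item its own slope and intercept, with category probabilities formed by a softmax; a minimal sketch with hypothetical parameter values is:

```python
import math

def nominal_response_probs(theta, slopes, intercepts):
    """Category probabilities under Bock's nominal response model:
    P(category k | theta) is a softmax over a_k * theta + c_k."""
    logits = [a * theta + c for a, c in zip(slopes, intercepts)]
    m = max(logits)  # subtract the maximum for numerical stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

# A four-option item with hypothetical category parameters.
print(nominal_response_probs(theta=0.5,
                             slopes=[-0.8, -0.2, 0.3, 0.7],
                             intercepts=[0.1, 0.4, -0.2, -0.3]))
```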
Peer reviewed: Jones, Douglas H.; Jin, Zhiying – Psychometrika, 1994
Replenishing item pools for on-line ability testing requires innovative and efficient data collection. A method is proposed to collect test item calibration data in an on-line testing environment sequentially using locally D-optimum designs, thereby achieving high Fisher information for the item parameters. (SLD)
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Data Collection
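The abstract refers to locally D-optimum designs that accumulate Fisher information for the item parameters. As a rough illustration only (the 2PL form, the greedy selection rule, and all values below are assumptions, not the authors' procedure), the per-response information matrix for an item's (a, b) parameters can be computed and the next calibration examinee chosen to maximize its determinant:

```python
import math

def fisher_info_2pl(theta, a, b):
    """Per-response Fisher information matrix for the (a, b) parameters
    of a 2PL item, evaluated at examinee ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    w = p * (1.0 - p)
    d = theta - b
    return [[w * d * d, -w * a * d],
            [-w * a * d, w * a * a]]

def add(m1, m2):
    return [[m1[i][j] + m2[i][j] for j in range(2)] for i in range(2)]

def det(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# Greedy sequential choice: among waiting examinees, pick the ability whose
# response would most increase the determinant of the accumulated information
# for a new item with provisional parameters a = 1.0, b = 0.2.
accumulated = [[1e-6, 0.0], [0.0, 1e-6]]   # small ridge to start
candidates = [-1.5, -0.5, 0.0, 0.8, 1.6]   # hypothetical abilities
best = max(candidates, key=lambda t: det(add(accumulated, fisher_info_2pl(t, 1.0, 0.2))))
print("next calibration examinee ability:", best)
```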
Peer reviewed: Styles, Irene; Andrich, David – Educational and Psychological Measurement, 1993
This paper describes the use of the Rasch model to help implement computerized administration of the standard and advanced forms of Raven's Progressive Matrices (RPM), to compare relative item difficulties, and to convert scores between the standard and advanced forms. The sample consisted of 95 girls and 95 boys in Australia. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Elementary Education
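The Rasch model used in the study expresses the probability of success through the difference between person ability and item difficulty on a common logit scale, which is the property exploited when comparing item difficulties and converting scores between the standard and advanced forms; a minimal sketch with hypothetical values:

```python
import math

def rasch_probability(theta, delta):
    """Rasch (one-parameter logistic) model: the probability of a correct
    response depends only on the difference between person ability (theta)
    and item difficulty (delta), both on the same logit scale."""
    return 1.0 / (1.0 + math.exp(-(theta - delta)))

# Compare two RPM-style items of different difficulty for the same person.
print(rasch_probability(theta=0.4, delta=-0.5))  # easier item
print(rasch_probability(theta=0.4, delta=1.2))   # harder item
```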
Peer reviewed: Marshall, Thomas E.; And Others – Journal of Educational Technology Systems, 1996
Examines the strategies used in answering a computerized multiple-choice test where all questions on a semantic topic were grouped together or randomly distributed. Findings indicate that students grouped by performance on the test used different strategies in completing the test due to distinct cognitive processes between the groups. (AEF)
Descriptors: Academic Achievement, Cognitive Processes, Computer Assisted Testing, Higher Education
Sykes, Robert C.; Ito, Kyoko – 1995
Whether the presence of bidimensionality has any effect on the adaptive recalibration of test items was studied through live-data simulation of computer adaptive testing (CAT) forms. The source data were examinee responses to the 298 scored multiple choice items of a licensure examination in a health care profession. Three 75-item part-forms,…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Estimation (Mathematics)
Veerkamp, Wim J. J.; Berger, Martijn P. F. – 1994
Items with the highest discrimination parameter values in a logistic item response theory (IRT) model do not necessarily give maximum information. This paper shows which discrimination parameter values (as a function of the guessing parameter and the distance between person ability and item difficulty) give maximum information for the…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
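The result summarized in the abstract concerns the item information function of the three-parameter logistic model, in which information depends jointly on the discrimination a, the guessing parameter c, and the distance theta - b. The standard form of that function is sketched below; the parameter values are illustrative, chosen to show that the highest-discrimination item need not be the most informative when guessing is possible and the examinee's ability lies below the item difficulty.

```python
import math

def item_information_3pl(theta, a, b, c):
    """Item information for the 3PL model (Birnbaum/Lord form):
    I(theta) = a^2 * (P - c)^2 / (1 - c)^2 * (1 - P) / P."""
    p = c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))
    return a * a * ((p - c) ** 2 / (1.0 - c) ** 2) * ((1.0 - p) / p)

# With c = 0.25 and ability one logit below the item difficulty,
# information rises and then falls as discrimination increases.
for a in (0.5, 1.0, 2.0, 3.0, 4.0):
    print(a, round(item_information_3pl(theta=-1.0, a=a, b=0.0, c=0.25), 4))
```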
Halkitis, Perry N.; And Others – 1996
The relationship between test item characteristics and testing time was studied for a computer-administered licensing examination. One objective of the study was to develop a model to predict testing time on the basis of known item characteristics. Response latencies (i.e., the amount of time taken by examinees to read, review, and answer items)…
Descriptors: Computer Assisted Testing, Difficulty Level, Estimation (Mathematics), Licensing Examinations (Professions)
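One way to build the kind of predictive model the study aimed for is an ordinary least-squares regression of item response latency on known item characteristics. The predictors and data below are hypothetical, not those of the licensing examination studied:

```python
import numpy as np

# Hypothetical data: per-item mean response latency (seconds) and two
# known item characteristics (word count of the stem, item difficulty).
word_count = np.array([40, 55, 80, 120, 65, 95])
difficulty = np.array([0.30, 0.45, 0.60, 0.75, 0.50, 0.70])
latency = np.array([35.0, 48.0, 62.0, 90.0, 55.0, 78.0])

# Design matrix with an intercept column; fit by ordinary least squares.
X = np.column_stack([np.ones_like(word_count, dtype=float), word_count, difficulty])
coef, *_ = np.linalg.lstsq(X, latency, rcond=None)
print("intercept, per-word, per-difficulty-unit:", coef)

# Predict testing time for a new item with a 70-word stem and difficulty 0.55.
new_item = np.array([1.0, 70.0, 0.55])
print("predicted latency (s):", new_item @ coef)
```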


