| Publication Date | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 13 |
| Since 2022 (last 5 years) | 97 |
| Since 2017 (last 10 years) | 218 |
| Since 2007 (last 20 years) | 351 |
| Descriptor | Records |
| --- | --- |
| Computer Assisted Testing | 514 |
| Scoring | 514 |
| Test Items | 111 |
| Test Construction | 102 |
| Automation | 95 |
| Essays | 82 |
| Foreign Countries | 81 |
| Scores | 79 |
| Adaptive Testing | 78 |
| Evaluation Methods | 77 |
| Computer Software | 75 |
| Author | Records |
| --- | --- |
| Bennett, Randy Elliot | 11 |
| Attali, Yigal | 9 |
| Anderson, Paul S. | 7 |
| Williamson, David M. | 6 |
| Bejar, Isaac I. | 5 |
| Ramineni, Chaitanya | 5 |
| Stocking, Martha L. | 5 |
| Xi, Xiaoming | 5 |
| Zechner, Klaus | 5 |
| Bridgeman, Brent | 4 |
| Davey, Tim | 4 |
| Location | Records |
| --- | --- |
| Australia | 10 |
| China | 10 |
| New York | 9 |
| Japan | 7 |
| Netherlands | 6 |
| Canada | 5 |
| Germany | 5 |
| Iran | 4 |
| Taiwan | 4 |
| United Kingdom | 4 |
| United Kingdom (England) | 4 |
O'Neil, Harold F., Jr.; Schacter, John – 1997
This document reviews several theoretical frameworks of problem-solving, provides a definition of the construct, suggests ways of measuring the construct, focuses on issues for assessment, and provides specifications for the computer-based assessment of problem solving. As defined in the model of the Center for Research on Evaluation, Standards,…
Descriptors: Computer Assisted Testing, Computer Software, Criteria, Educational Assessment
Martinez, Michael E.; And Others – 1990
Large-scale testing is dominated by the multiple-choice question format. Widespread use of the format is due, in part, to the ease with which multiple-choice items can be scored automatically. This paper examines automatic scoring procedures for an alternative item type: figural response. Figural response items call for the completion or…
Descriptors: Automation, Computer Assisted Testing, Educational Technology, Multiple Choice Tests
Peer reviewed: Aiken, Lewis R. – Educational and Psychological Measurement, 1996
This article describes a set of 11 menu-driven procedures written in BASICA for MS-DOS based microcomputers for constructing several types of rating scales, attitude scales, and checklists, and for scoring responses to the constructed instruments. The uses of the program are described in detail. (SLD)
Descriptors: Attitude Measures, Check Lists, Computer Assisted Testing, Computer Software
Li, Yuan H.; Schafer, William D. – Applied Psychological Measurement, 2005
Under a multidimensional item response theory (MIRT) computerized adaptive testing (CAT) testing scenario, a trait estimate (theta) in one dimension will provide clues for subsequently seeking a solution in other dimensions. This feature may enhance the efficiency of MIRT CAT's item selection and its scoring algorithms compared with its…
Descriptors: Adaptive Testing, Item Banks, Computation, Psychological Studies
Riccomini, Paul J.; Stecker, Pamela M. – Journal of Special Education Technology, 2005
Two types of independent practice activities to improve accuracy of pre-service teachers' measurement of oral reading fluency (ORF) were contrasted. Forty pre-service teachers, enrolled in an introductory special education course, received instructor-delivered classroom instruction on measuring ORF. After lecture and guided practice, participants…
Descriptors: Oral Reading, Educational Technology, Reading Fluency, Preservice Teachers
James, Cindy L. – Assessing Writing, 2006
How do scores from writing samples generated by computerized essay scorers compare to those generated by "untrained" human scorers, and what combination of scores, if any, is more accurate at placing students in composition courses? This study endeavored to answer this two-part question by evaluating the correspondence between writing sample…
Descriptors: Writing (Composition), Predictive Validity, Scoring, Validity
Ben-Simon, Anat; Bennett, Randy Elliott – Journal of Technology, Learning, and Assessment, 2007
This study evaluated a "substantively driven" method for scoring NAEP writing assessments automatically. The study used variations of an existing commercial program, e-rater[R], to compare the performance of three approaches to automated essay scoring: a "brute-empirical" approach in which variables are selected and weighted solely according to…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays
Slater, Sharon C.; Schaeffer, Gary A. – 1996
The General Computer Adaptive Test (CAT) of the Graduate Record Examinations (GRE) includes three operational sections that are separately timed and scored. A "no score" is reported if the examinee answers fewer than 80% of the items or if the examinee does not answer all of the items and leaves the section before time expires. The 80%…
Descriptors: Adaptive Testing, College Students, Computer Assisted Testing, Equal Education
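The 80% completion rule described in the abstract above can be sketched as a small check. This is a hypothetical illustration; the function and parameter names are assumptions, not ETS's actual implementation:

```python
def section_is_scorable(items_total, items_answered, left_early):
    """Sketch of the GRE CAT 'no score' rule described above:
    a section receives no score if fewer than 80% of its items
    were answered, or if the examinee leaves the section before
    time expires without answering every item.
    Hypothetical illustration only."""
    completion = items_answered / items_total
    if completion < 0.80:
        return False  # under the 80% threshold: "no score"
    if left_early and items_answered < items_total:
        return False  # left early with items unanswered: "no score"
    return True

print(section_is_scorable(30, 30, left_early=False))  # True
print(section_is_scorable(30, 20, left_early=False))  # False (only 67% answered)
```

Note that under this reading of the rule, answering exactly 80% of the items and staying until time expires still yields a score; only the two stated conditions trigger "no score".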
Grobe, Robert P.; And Others – 1982
Assessment of Baseline Curriculum (ABC Tests) is a locally developed objectives-referenced assessment system that matches the Dallas Independent School System's Baseline Curriculum. Designed for instructional use and not for accountability, a major purpose of the ABC Test is to measure student attainment of objectives in the Mathematics and…
Descriptors: Communication Skills, Computer Assisted Testing, Criterion Referenced Tests, Educational Objectives
Anderson, Paul S.; Alexander, Diane – 1986
The Multi-Digit (MDT) testing procedure is a computer-scored testing innovation conceptualized in 1982. It is fully compatible with multiple-choice and true/false tests and is well suited to testing discrete terms and concepts, such as in fill-in-the-blank examinations. The student reads the question and selects the appropriate response from an…
Descriptors: Computer Assisted Testing, Computer Software, Criminal Law, Higher Education
McBride, James R. – 1979
In an adaptive test, the test administrator chooses test items sequentially during the test, in such a way as to adapt test difficulty to examinee ability as shown during testing. An effectively designed adaptive test can resolve the dilemma inherent in conventional test design. By tailoring tests to individuals, the adaptive test can…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Military Personnel
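The tailoring idea in the abstract above can be illustrated with a minimal item-selection sketch. This is a toy example, not McBride's procedure: after each response, the next item chosen is the unused one whose difficulty is closest to the current ability estimate (where, under a 1PL model, item information peaks):

```python
def pick_next_item(ability, item_bank, used):
    """Return the index of the unused item whose difficulty is closest
    to the current ability estimate -- a toy maximum-information rule
    for the 1PL model, where information peaks at difficulty = ability."""
    candidates = [i for i in range(len(item_bank)) if i not in used]
    return min(candidates, key=lambda i: abs(item_bank[i] - ability))

# Toy bank of item difficulties on a logit scale
bank = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]
theta, used = 0.0, set()
idx = pick_next_item(theta, bank, used)   # picks index 3 (difficulty 0.0)
used.add(idx)
theta += 0.5                              # correct answer: estimate rises
print(pick_next_item(theta, bank, used))  # 4 (the 0.5-difficulty item)
```

Each correct answer nudges the ability estimate up and each error nudges it down, so successive items converge on the examinee's level rather than wasting time on items that are far too easy or too hard.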
Harris, Dickie A.; Penell, Roger J. – 1977
This study used a series of simulations to answer questions about the efficacy of adaptive testing raised by empirical studies. The first study showed that, for reasonably high entry points, parameters estimated from paper-and-pencil test protocols cross-validated remarkably well to groups actually tested at a computer terminal. This suggested that…
Descriptors: Adaptive Testing, Computer Assisted Testing, Cost Effectiveness, Difficulty Level
Peer reviewed: Hanton, Samuel D.; Ryan, Julie B. – Journal of Optometric Education, 1986
A study of the reliability of a computer-assisted test of optometric clinical diagnostic skills that uses actual patient cases revealed that the test is most useful when used in conjunction with subjective clinical grading for evaluating problem-solving skills. (MSE)
Descriptors: Clinical Diagnosis, Computer Assisted Testing, Higher Education, Medical Case Histories
Peer reviewed: Roeber, Edward D. – Educational Measurement: Issues and Practice, 1984
In every instance in the process of constructing and using a test, the microcomputer can aid the classroom teacher. However, the teacher will not apply the microcomputer to classroom testing without added training both in classroom testing and in using the microcomputer. (BW)
Descriptors: Computer Assisted Testing, Educational Testing, Elementary Secondary Education, Item Analysis
Peer reviewed: Munn, R. J.; And Others – Journal of Chemical Education, 1981
Describes a "gradebook program" capable of recording, editing, recoding, weighting, and scaling multiple choice test scores. (SK)
Descriptors: Chemistry, College Science, Computer Assisted Testing, Computer Oriented Programs
