Publication Date
| Period | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 13 |
| Since 2022 (last 5 years) | 97 |
| Since 2017 (last 10 years) | 218 |
| Since 2007 (last 20 years) | 351 |
Descriptor
| Descriptor | Records |
| --- | --- |
| Computer Assisted Testing | 514 |
| Scoring | 514 |
| Test Items | 111 |
| Test Construction | 102 |
| Automation | 95 |
| Essays | 82 |
| Foreign Countries | 81 |
| Scores | 79 |
| Adaptive Testing | 78 |
| Evaluation Methods | 77 |
| Computer Software | 75 |
Author
| Author | Records |
| --- | --- |
| Bennett, Randy Elliot | 11 |
| Attali, Yigal | 9 |
| Anderson, Paul S. | 7 |
| Williamson, David M. | 6 |
| Bejar, Isaac I. | 5 |
| Ramineni, Chaitanya | 5 |
| Stocking, Martha L. | 5 |
| Xi, Xiaoming | 5 |
| Zechner, Klaus | 5 |
| Bridgeman, Brent | 4 |
| Davey, Tim | 4 |
Location
| Location | Records |
| --- | --- |
| Australia | 10 |
| China | 10 |
| New York | 9 |
| Japan | 7 |
| Netherlands | 6 |
| Canada | 5 |
| Germany | 5 |
| Iran | 4 |
| Taiwan | 4 |
| United Kingdom | 4 |
| United Kingdom (England) | 4 |
McMinn, Mark R.; Ellens, Brent M.; Soref, Erez – Assessment, 1999 (peer reviewed)
Surveyed 364 members of the Society for Personality Assessment to determine how they use computer-based test interpretation software (CBTI) in their work and to gather their perspectives on the ethics of using CBTI. Psychologists commonly use CBTI for test scoring, but not to formulate a case or as an alternative to a written report. (SLD)
Descriptors: Behavior Patterns, Computer Assisted Testing, Computer Software, Ethics
Wang, LihShing; Li, Chun-Shan – Journal of Applied Measurement, 2001 (peer reviewed)
Used Monte Carlo simulation to compare the relative measurement efficiency of polytomous modeling and dichotomous modeling under different scoring schemes and termination criteria. Results suggest that polytomous computerized adaptive testing (CAT) yields marginal gains over dichotomous CAT when termination criteria are more stringent. Discusses…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Monte Carlo Methods
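The abstract does not reproduce the simulation design, but the intuition behind polytomous efficiency gains can be illustrated with Fisher information: a multi-category item can carry more information at a given ability than a dichotomous one. The sketch below is a rough illustration only, not the study's procedure; all item parameters are invented.

```python
import math

def rasch_info(theta, b):
    """Fisher information of a dichotomous Rasch item: p(1 - p)."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

def pcm_info(theta, steps):
    """Fisher information of a partial credit item with the given step
    difficulties; for the PCM this equals the variance of the item score."""
    logits = [0.0]
    for d in steps:  # cumulative sums of (theta - step) per category
        logits.append(logits[-1] + (theta - d))
    total = sum(math.exp(l) for l in logits)
    probs = [math.exp(l) / total for l in logits]
    mean = sum(k * p for k, p in enumerate(probs))
    return sum((k - mean) ** 2 * p for k, p in enumerate(probs))

theta = 0.0
print(rasch_info(theta, b=0.0))            # 0.25, the Rasch maximum
print(pcm_info(theta, steps=[-0.5, 0.5]))  # ~0.55: more information per item
```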
Lee, Yong-Won; Gentile, Claudia; Kantor, Robert – ETS Research Report Series, 2008
The main purpose of the study was to investigate the distinctness and reliability of analytic (or multitrait) rating dimensions and their relationships to holistic scores and "e-rater"® essay feature variables in the context of the TOEFL® computer-based test (CBT) writing assessment. Data analyzed in the study were analytic and holistic…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Scoring
Davey, Tim; And Others – Journal of Educational Measurement, 1997 (peer reviewed)
The development and scoring of a recently introduced computer-based writing skills test are described. The test asks the examinee to edit a writing passage presented on a computer screen. Scoring difficulties are addressed through the combined use of option weighting and the sequential probability ratio test. (SLD)
Descriptors: Computer Assisted Testing, Educational Innovation, Probability, Scoring
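The abstract names the sequential probability ratio test but gives no parameterization. Below is a minimal sketch of Wald's SPRT for a pass/fail decision on dichotomously scored responses, with hypothetical correct-response rates for failing (p0) and passing (p1) examinees and invented error rates; the option-weighting component is not modeled here.

```python
import math

def sprt(responses, p0=0.6, p1=0.8, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test on 0/1 item scores.
    Returns 'pass', 'fail', or 'continue' (administer more items)."""
    upper = math.log((1 - beta) / alpha)  # accept the 'pass' hypothesis
    lower = math.log(beta / (1 - alpha))  # accept the 'fail' hypothesis
    llr = 0.0
    for x in responses:
        # Log-likelihood ratio contribution of one scored response.
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "pass"
        if llr <= lower:
            return "fail"
    return "continue"

print(sprt([1] * 11))  # enough consecutive correct responses to pass
```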
Hung, Pi-Hsia; Lin, Yu-Fen; Hwang, Gwo-Jen – Educational Technology & Society, 2010
Ubiquitous computing and mobile technologies provide a new perspective for designing innovative outdoor learning experiences. The purpose of this study is to propose a formative assessment design for integrating PDAs into ecology observations. Three learning activities were conducted in this study. An action research approach was applied to…
Descriptors: Foreign Countries, Feedback (Response), Action Research, Observation
Kump, Ann – 1992
Directions are given for scoring typing tests taken on a typewriter or on a computer using special software. The speed score (gross words per minute) is obtained by determining the total number of strokes typed, and dividing by 25. The accuracy score is obtained by comparing the examinee's test paper to the appropriate scoring key and counting the…
Descriptors: Computer Assisted Testing, Employment Qualifications, Guidelines, Job Applicants
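Following the quoted rule (total strokes divided by 25, consistent with the usual 5 strokes per word over a 5-minute test), a minimal scoring sketch; the abstract is truncated, so the error-penalty convention below is an assumption, not Kump's procedure.

```python
def typing_scores(total_strokes, error_count, minutes=5):
    """Score a typing test as described above.

    Speed: gross words per minute = strokes / 5 strokes-per-word / minutes,
    i.e. strokes / 25 for the standard 5-minute test.
    Accuracy: the net-speed penalty is a common convention, assumed here
    because the source abstract is cut off.
    """
    gwpm = total_strokes / (5 * minutes)
    net_wpm = max(0.0, gwpm - error_count / minutes)
    return {"gross_wpm": gwpm, "net_wpm": net_wpm, "errors": error_count}

print(typing_scores(total_strokes=1250, error_count=5))
# {'gross_wpm': 50.0, 'net_wpm': 49.0, 'errors': 5}
```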
Chung, Gregory K. W. K.; O'Neil, Harold F., Jr. – 1997
This report examines the feasibility of scoring essays using computer-based techniques. Essays have been incorporated into many standardized testing programs, so issues of validity and reliability must be addressed before automated scoring approaches can be fully deployed. Two approaches that have been used to classify documents, surface- and word-based…
Descriptors: Automation, Computer Assisted Testing, Essays, Scoring
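The sentence describing the surface- and word-based approaches is truncated, so the sketch below is only a generic illustration of word-based document classification: essays are assigned to the human-scored group whose bag-of-words centroid is most similar by cosine similarity. The training essays and score levels are invented.

```python
from collections import Counter
import math

def bow(text):
    """Bag-of-words vector as a Counter of lowercased tokens."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def centroid(vectors):
    """Average word counts across a score group's training essays."""
    total = Counter()
    for v in vectors:
        total.update(v)
    n = len(vectors)
    return Counter({t: c / n for t, c in total.items()})

# Invented training data: essays already scored by human raters.
training = {
    1: ["the dog run fast", "dog is run"],
    3: ["the experiment shows a clear causal effect on the outcome",
        "evidence supports a causal claim about the outcome"],
}
centroids = {score: centroid([bow(e) for e in essays])
             for score, essays in training.items()}

def classify(essay):
    """Assign the score whose group centroid is most similar."""
    v = bow(essay)
    return max(centroids, key=lambda s: cosine(v, centroids[s]))

print(classify("the evidence shows an effect on the outcome"))  # 3
```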
Stocking, Martha L. – Journal of Educational and Behavioral Statistics, 1996 (peer reviewed)
An alternative method for scoring adaptive tests, based on number-correct scores, is explored and compared with a method that relies more directly on item response theory. Using the number-correct score with necessary adjustment for intentional differences in adaptive test difficulty is a statistically viable scoring method. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Item Response Theory
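Stocking's adjustment is not detailed in the abstract; one standard way to adjust number-correct scores for intentional differences in adaptive test difficulty is to invert each examinee's own test characteristic curve, finding the ability at which the expected number correct equals the observed score. A sketch under a 2PL model with invented item parameters:

```python
import math

def p_correct(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def expected_score(theta, items):
    """Test characteristic curve: expected number correct at theta."""
    return sum(p_correct(theta, a, b) for a, b in items)

def theta_from_number_correct(score, items, lo=-4.0, hi=4.0, tol=1e-6):
    """Invert the TCC by bisection (it is monotone in theta)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if expected_score(mid, items) < score:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Two adaptive tests of intentionally different difficulty: (a, b) pairs.
easy_form = [(1.2, -1.0), (1.0, -0.5), (0.9, -0.8), (1.1, -0.2)]
hard_form = [(1.2, 0.8), (1.0, 1.2), (0.9, 0.5), (1.1, 1.0)]

# The same raw score of 3 maps to different abilities on the two forms.
print(theta_from_number_correct(3, easy_form))
print(theta_from_number_correct(3, hard_form))
```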
Williamson, David M.; Bejar, Isaac I.; Hone, Anne S. – Journal of Educational Measurement, 1999 (peer reviewed)
Contrasts "mental models" used by automated scoring for the simulation division of the computerized Architect Registration Examination with those used by experienced human graders for 3,613 candidate solutions. Discusses differences in the models used and the potential of automated scoring to enhance the validity evidence of scores. (SLD)
Descriptors: Architects, Comparative Analysis, Computer Assisted Testing, Judges
Bennett, Randy Elliot; Morley, Mary; Quardt, Dennis – Applied Psychological Measurement, 2000 (peer reviewed)
Describes three open-ended response types that could broaden the conception of mathematical problem solving used in computerized admissions tests: (1) mathematical expression (ME); (2) generating examples (GE); and (3) graphical modeling (GM). Illustrates how combining ME, GE, and GM can form extended constructed response problems. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Constructed Response, Mathematics Tests
McHenry, Bill; Griffith, Leonard; McHenry, Jim – T.H.E. Journal, 2004 (peer reviewed)
Imagine administering an online standardized test to an entire class of 11th-grade students when, halfway through the exam, the server holding the test hits a snag and throws everyone offline. Imagine another scenario in which an elementary school has very few computers so teachers must bus their students to the local high school for a timed test.…
Descriptors: Computer Assisted Testing, Risk, Evaluation Methods, Federal Legislation
Xi, Xiaoming; Higgins, Derrick; Zechner, Klaus; Williamson, David M. – ETS Research Report Series, 2008
This report presents the results of a research and development effort for SpeechRater℠ Version 1.0 (v1.0), an automated scoring system for the spontaneous speech of English language learners used operationally in the Test of English as a Foreign Language™ (TOEFL®) Practice Online assessment (TPO). The report includes a summary of the validity…
Descriptors: Speech, Scoring, Scoring Rubrics, Scoring Formulas
Hu, Xiangen, Ed.; Barnes, Tiffany, Ed.; Hershkovitz, Arnon, Ed.; Paquette, Luc, Ed. – International Educational Data Mining Society, 2017
The 10th International Conference on Educational Data Mining (EDM 2017) is held under the auspices of the International Educational Data Mining Society at the Optics Valley Kingdom Plaza Hotel, Wuhan, Hubei Province, China. This year's conference features two invited talks by: Dr. Jie Tang, Associate Professor with the Department of Computer…
Descriptors: Data Analysis, Data Collection, Graphs, Data Use
Kaplan, Randy M.; Bennett, Randy Elliot – 1994
This study explores the potential for using a computer-based scoring procedure for the formulating-hypotheses (F-H) item. This item type presents a situation and asks the examinee to generate explanations for it. Each explanation is judged right or wrong, and the number of creditable explanations is summed to produce an item score. Scores were…
Descriptors: Automation, Computer Assisted Testing, Correlation, Higher Education
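Only the tallying rule is given (each explanation judged right or wrong, creditable ones summed); the keyword-matching credit rule below is entirely hypothetical, standing in for whatever the computer-based procedure actually matched on.

```python
# Hypothetical key: each creditable explanation is represented by
# keywords that an examinee's explanation must all contain.
CREDIT_KEY = [
    {"temperature", "rose"},
    {"measurement", "error"},
    {"sample", "biased"},
]

def score_item(explanations):
    """Sum of explanations judged creditable (each scored right/wrong),
    crediting each keyed explanation at most once."""
    remaining = [set(k) for k in CREDIT_KEY]
    score = 0
    for text in explanations:
        words = set(text.lower().split())
        for key in remaining:
            if key <= words:          # all keywords present
                remaining.remove(key)
                score += 1
                break
    return score

print(score_item([
    "maybe the temperature rose during the study",
    "it could be simple measurement error",
    "no idea",
]))  # 2
```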
Ho, James K. – Collegiate Microcomputer, 1987
Explains how spreadsheet software can be used in the design and grading of academic tests and in assigning grades. Macro programs and menu-driven software are highlighted and an example using IBM PCs and Lotus 1-2-3 software is given. (Author/LRW)
Descriptors: Computer Assisted Testing, Data Processing, Grading, Menu Driven Software
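The Lotus 1-2-3 macros themselves are not reproduced in the abstract; the core grade-assignment idea, an approximate-match lookup table in the style of a spreadsheet @VLOOKUP over score cutoffs, might look like this (cutoffs invented):

```python
# Cutoff table, analogous to a spreadsheet lookup range:
# lowest score in each band -> letter grade.
GRADE_TABLE = [(90, "A"), (80, "B"), (70, "C"), (60, "D"), (0, "F")]

def letter_grade(score):
    """Return the letter for the first band whose cutoff the score meets,
    like an approximate-match lookup over a descending range."""
    for cutoff, letter in GRADE_TABLE:
        if score >= cutoff:
            return letter
    return "F"

for s in (95, 84, 70, 59):
    print(s, letter_grade(s))
# 95 A / 84 B / 70 C / 59 F
```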
