Publication Date
| Date range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 13 |
| Since 2022 (last 5 years) | 97 |
| Since 2017 (last 10 years) | 218 |
| Since 2007 (last 20 years) | 351 |
Descriptor
| Descriptor | Records |
| --- | --- |
| Computer Assisted Testing | 514 |
| Scoring | 514 |
| Test Items | 111 |
| Test Construction | 102 |
| Automation | 95 |
| Essays | 82 |
| Foreign Countries | 81 |
| Scores | 79 |
| Adaptive Testing | 78 |
| Evaluation Methods | 77 |
| Computer Software | 75 |
Author
| Author | Records |
| --- | --- |
| Bennett, Randy Elliot | 11 |
| Attali, Yigal | 9 |
| Anderson, Paul S. | 7 |
| Williamson, David M. | 6 |
| Bejar, Isaac I. | 5 |
| Ramineni, Chaitanya | 5 |
| Stocking, Martha L. | 5 |
| Xi, Xiaoming | 5 |
| Zechner, Klaus | 5 |
| Bridgeman, Brent | 4 |
| Davey, Tim | 4 |
Location
| Location | Records |
| --- | --- |
| Australia | 10 |
| China | 10 |
| New York | 9 |
| Japan | 7 |
| Netherlands | 6 |
| Canada | 5 |
| Germany | 5 |
| Iran | 4 |
| Taiwan | 4 |
| United Kingdom | 4 |
| United Kingdom (England) | 4 |
Peer reviewed: Wainer, Howard; Lewis, Charles – Journal of Educational Measurement, 1990
Three different applications of the testlet concept are presented, and the psychometric models most suitable for each application are described. Difficulties that testlets can help overcome include (1) context effects; (2) item ordering; and (3) content balancing. Implications for test construction are discussed. (SLD)
Descriptors: Algorithms, Computer Assisted Testing, Elementary Secondary Education, Item Response Theory
Peer reviewed: Patience, Wayne – Journal of Educational Measurement, 1990
The four main subsystems of the MicroCAT Testing System for developing, administering, scoring, and analyzing computerized tests using conventional or item response theory methods are described. Judgments of three users of the system are included in the evaluation of this software. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Software, Computer Software Reviews
Peer reviewed: Harasym, Peter H.; And Others – Journal of Educational Computing Research, 1993
Discussion of the use of human markers to mark responses on write-in questions focuses on a study that determined the feasibility of using a computer program to mark write-in responses for the Medical Council of Canada Qualifying Examination. The computer performance was compared with that of physician markers. (seven references) (LRW)
Descriptors: Comparative Analysis, Computer Assisted Testing, Computer Software Development, Computer Software Evaluation
Stokes, Valerie – Learning & Leading with Technology, 2005
In this article, one school district shares its experiences with computer-based testing and the immediate access to data it can provide. The Barren County School District in Glasgow, Kentucky, has partnered with the Northwest Evaluation Association to use their computer-based Measure of Academic Progress (MAP). The MAP testing platform is…
Descriptors: Data Analysis, Student Evaluation, Scoring, Academic Ability
Taricani, Ellen M.; Clariana, Roy B. – Educational Technology Research and Development, 2006
In this descriptive investigation, we seek to confirm and extend a technique for automatically scoring concept maps. Sixty unscored concept maps from a published dissertation were scored using a computer-based technique adapted from Schvaneveldt (1990) and colleagues' Pathfinder network approach. The scores were based on link lines drawn between…
Descriptors: Measurement Techniques, Scoring, Concept Mapping, Computer Assisted Testing
PDF pending restoration: Mills, Craig N.; Stocking, Martha L. – 1995
Computerized adaptive testing (CAT), while well-grounded in psychometric theory, has had few large-scale applications for high-stakes, secure tests in the past. This is now changing as the cost of computing has declined rapidly. As is always true where theory is translated into practice, many practical issues arise. This paper discusses a number…
Descriptors: Adaptive Testing, Computer Assisted Testing, High Stakes Tests, Item Banks
Peer reviewed: Mills, Craig N.; Stocking, Martha L. – Applied Measurement in Education, 1996
Issues that must be addressed in the large-scale application of computerized adaptive testing are explored, including considerations of test design, scoring, test administration, item and item bank development, and other aspects of test construction. Possible solutions and areas in which additional work is needed are identified. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Elementary Secondary Education, Higher Education
Peer reviewed: Endler, Norman S.; Parker, James D. A. – Educational and Psychological Measurement, 1990
C. Davis and M. Cowles (1989) analyzed a total trait anxiety score on the Endler Multidimensional Anxiety Scales (EMAS)--a unidimensional construct that this multidimensional measure does not assess. Data are reanalyzed using the appropriate scoring procedure for the EMAS. Subjects included 145 undergraduates in 1 of 4 testing conditions. (SLD)
Descriptors: Anxiety, Comparative Testing, Computer Assisted Testing, Construct Validity
Peer reviewed: Bosman, Fred; And Others – Computers in Human Behavior, 1994
Describes the use of interactive videodiscs in Dutch secondary vocational school departments of pharmaceutical education for testing theoretical knowledge and practical skills in a simulated real-life situation. An example is given, feedback and scoring are explained, and criteria for reliability with a classical test analysis are discussed.…
Descriptors: Computer Assisted Instruction, Computer Assisted Testing, Computer Simulation, Criteria
Burstein, Jill C.; Kaplan, Randy M. – 1995
There is a considerable interest at Educational Testing Service (ETS) to include performance-based, natural language constructed-response items on standardized tests. Such items can be developed, but the projected time and costs required to have these items scored by human graders would be prohibitive. In order for ETS to include these types of…
Descriptors: Computer Assisted Testing, Constructed Response, Cost Effectiveness, Hypothesis Testing
Adams, Raymond J.; Khoo, Siek-Toon – 1993
The Quest program offers a comprehensive test and questionnaire analysis environment by providing a data analyst (a computer program) with access to the most recent developments in Rasch measurement theory, as well as a range of traditional analysis procedures. This manual helps the user use Quest to construct and validate variables based on…
Descriptors: Computer Assisted Testing, Computer Software, Estimation (Mathematics), Foreign Countries
Chung, Gregory K. W. K.; Baker, Eva L. – 1997
This report documents the technology initiatives of the Center for Research on Evaluation, Standards, and Student Testing (CRESST) in two broad areas: (1) using technology to improve the quality, utility, and feasibility of existing measures; and (2) using technology to design and develop new assessments and measurement approaches available…
Descriptors: Computer Assisted Testing, Constructed Response, Educational Planning, Educational Technology
Peer reviewed: Clauser, Brian E.; Ross, Linette P.; Clyman, Stephen G.; Rose, Kathie M.; Margolis, Melissa J.; Nungester, Ronald J.; Piemme, Thomas E.; Chang, Lucy; El-Bayoumi, Gigi; Malakoff, Gary L.; Pincetl, Pierre S. – Applied Measurement in Education, 1997
Describes an automated scoring algorithm for a computer-based simulation examination of physicians' patient-management skills. Results with 280 medical students show that scores produced using this algorithm are highly correlated to actual clinician ratings. Scores were also effective in discriminating between case performance judged passing or…
Descriptors: Algorithms, Computer Assisted Testing, Computer Simulation, Evaluators
Peer reviewed: Segall, Daniel O. – Psychometrika, 1996
Maximum likelihood and Bayesian procedures are presented for item selection and scoring of multidimensional adaptive tests. A demonstration with simulated response data illustrates that multidimensional adaptive testing can provide equal or higher reliabilities with fewer items than are required in one-dimensional adaptive testing. (SLD)
Descriptors: Adaptive Testing, Bayesian Statistics, Computer Assisted Testing, Equations (Mathematics)
Attali, Yigal – ETS Research Report Series, 2007
This study examined the construct validity of the "e-rater"® automated essay scoring engine as an alternative to human scoring in the context of TOEFL® essay writing. Analyses were based on a sample of students who repeated the TOEFL within a short time period. Two "e-rater" scores were investigated in this study, the first…
Descriptors: Construct Validity, Computer Assisted Testing, Scoring, English (Second Language)