Publication Date
| In 2026 | 0 |
| Since 2025 | 13 |
| Since 2022 (last 5 years) | 97 |
| Since 2017 (last 10 years) | 218 |
| Since 2007 (last 20 years) | 351 |
Descriptor
| Computer Assisted Testing | 514 |
| Scoring | 514 |
| Test Items | 111 |
| Test Construction | 102 |
| Automation | 95 |
| Essays | 82 |
| Foreign Countries | 81 |
| Scores | 79 |
| Adaptive Testing | 78 |
| Evaluation Methods | 77 |
| Computer Software | 75 |
Author
| Bennett, Randy Elliot | 11 |
| Attali, Yigal | 9 |
| Anderson, Paul S. | 7 |
| Williamson, David M. | 6 |
| Bejar, Isaac I. | 5 |
| Ramineni, Chaitanya | 5 |
| Stocking, Martha L. | 5 |
| Xi, Xiaoming | 5 |
| Zechner, Klaus | 5 |
| Bridgeman, Brent | 4 |
| Davey, Tim | 4 |
Location
| Australia | 10 |
| China | 10 |
| New York | 9 |
| Japan | 7 |
| Netherlands | 6 |
| Canada | 5 |
| Germany | 5 |
| Iran | 4 |
| Taiwan | 4 |
| United Kingdom | 4 |
| United Kingdom (England) | 4 |
Gobert, Janice D.; Koedinger, Kenneth R. – Society for Research on Educational Effectiveness, 2011
The National frameworks for science emphasize inquiry skills (NRC, 1996); however, in typical classroom practice, science learning often focuses on rote learning, in part because science process skills are difficult to assess (Fadel, Honey, & Pasnick, 2007) and rote knowledge is prioritized on high-stakes tests. Short answer assessments of…
Descriptors: Performance Based Assessment, Predictive Validity, High Stakes Tests, Rote Learning
Zytowski, Donald G. – Journal of Career Assessment, 2008
The present highly developed status of psychological and educational testing in the United States is in part the result of many efforts over the past 100 years to develop economical and reliable methods of scoring. The present article traces a number of methods, ranging from hand scoring to present-day computer applications, stimulated by the need…
Descriptors: Educational Testing, Achievement Tests, Computers, Scoring
Coniam, David – ReCALL, 2009
This paper describes a study of the computer essay-scoring program BETSY. While the use of computers to rate written scripts has been criticised in some quarters for its lack of transparency or its poor fit with how human raters assess written scripts, a number of essay rating programs are available commercially, many of which claim to offer comparable…
Descriptors: Writing Tests, Scoring, Foreign Countries, Interrater Reliability
Miller, Mark J.; Cowger, Ernest, Jr.; Young, Tony; Tobacyk, Jerome; Sheets, Tillman; Loftus, Christina – College Student Journal, 2008
This study examined the degree of similarity between scores on the Self-Directed Search and an online instrument measuring Holland types. A relatively high congruency score was found between the two measures. Implications for career counselors are discussed.
Descriptors: Career Counseling, Personality Assessment, Congruence (Psychology), Personality Traits
Van Moere, Alistair; Suzuki, Masanori; Downey, Ryan; Cheng, Jian – Australian Review of Applied Linguistics, 2009
This paper discusses the development of an assessment to satisfy the International Civil Aviation Organization (ICAO) Language Proficiency Requirements. The Versant Aviation English Test utilizes speech recognition technology and a computerized testing platform, such that test administration and scoring are fully automated. Developed in…
Descriptors: Scoring, Test Construction, Language Proficiency, Standards
Russell, G. K. G.; And Others – Journal of Clinical Psychology, 1986 (peer reviewed)
A computerized version of the Minnesota Multiphasic Personality Inventory was developed that incorporated both administration and scoring. This method was compared with the original manual form. The results indicated that the test-retest reliability was high regardless of the method of administration and that similar results were obtained on the…
Descriptors: Computer Assisted Testing, Reliability, Scoring, Test Scoring Machines
Georgiadou, Elissavet; Triantafillou, Evangelos; Economides, Anastasios A. – Journal of Technology, Learning, and Assessment, 2007
Since researchers acknowledged the advantages of computerized adaptive testing (CAT) over traditional linear test administration, the issue of item exposure control has received increased attention. Because of CAT's underlying philosophy, particular items in the item pool may be presented too often and become overexposed, while other items are…
Descriptors: Adaptive Testing, Computer Assisted Testing, Scoring, Test Items
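The exposure-control problem the abstract describes can be illustrated with a minimal sketch. The helper below is hypothetical (not taken from the paper): it caps an item's observed exposure rate at `r_max` and picks at random among the remaining well-informative candidates, one of the simplest strategies discussed in the CAT literature.

```python
import random

def select_item(candidates, admin_counts, n_examinees, r_max=0.25):
    """From a shortlist of informative items, drop any whose observed
    exposure rate (administrations / examinees) already exceeds r_max,
    then choose at random among the rest (randomesque selection)."""
    eligible = [i for i in candidates
                if admin_counts.get(i, 0) / max(n_examinees, 1) < r_max]
    pool = eligible or candidates  # fall back if everything is overexposed
    return random.choice(pool)
```

The random draw among eligible items is what keeps any single item from dominating; a deterministic "pick the most informative" rule is exactly what causes overexposure.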
Wang, Hui-Yu; Chen, Shyi-Ming – Educational Technology & Society, 2007
In this paper, we present two new methods for evaluating students' answerscripts based on the similarity measure between vague sets. The vague marks awarded to the answers in the students' answerscripts are represented by vague sets, where each element u_i in the universe of discourse U belonging to a vague set is represented by a…
Descriptors: Artificial Intelligence, Student Evaluation, Evaluation Methods, Educational Technology
Wang, Jinhao; Brown, Michelle Stallone – Contemporary Issues in Technology and Teacher Education (CITE Journal), 2008
The purpose of the current study was to analyze the relationship between automated essay scoring (AES) and human scoring in order to determine the validity and usefulness of AES for large-scale placement tests. Specifically, a correlational research design was used to examine the correlations between AES performance and human raters' performance.…
Descriptors: Scoring, Essays, Computer Assisted Testing, Sentence Structure
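At its core, the correlational design described above comes down to computing an agreement statistic between machine and human scores. A minimal sketch of Pearson's r, written out by hand (the study may of course report additional or different statistics):

```python
import math

def pearson_r(machine, human):
    """Pearson correlation between automated and human essay scores.
    Values near 1 mean the AES engine ranks essays much as the human
    raters do; values near 0 mean little linear agreement."""
    n = len(machine)
    mx, my = sum(machine) / n, sum(human) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(machine, human))
    sx = math.sqrt(sum((a - mx) ** 2 for a in machine))
    sy = math.sqrt(sum((b - my) ** 2 for b in human))
    return cov / (sx * sy)
```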
Luecht, Richard M. – 2001
The Microsoft Certification Program (MCP) includes many new computer-based item types, based on complex cases involving the Windows 2000® operating system. This Innovative Item Technology (IIT) has presented challenges beyond traditional psychometric considerations, such as capturing and storing the relevant response data from…
Descriptors: Certification, Coding, Computer Assisted Testing, Data Collection
Papanastasiou, Elena C. – 2002
Due to the increased popularity of computerized adaptive testing (CAT), many admissions tests, as well as certification and licensure examinations, have been transformed from their paper-and-pencil versions to computerized adaptive versions. A major difference between paper-and-pencil tests and CAT, from an examinee's point of view, is that in many…
Descriptors: Adaptive Testing, Cheating, Computer Assisted Testing, Review (Reexamination)
Patelis, Thanos – College Entrance Examination Board, 2000
Because different types of computerized tests exist and continue to emerge, the term "computer-based testing" does not encompass all of the models in use. As a result, the term "test delivery model" (TDM) is used to describe the variety of methods for delivering tests to examinees. The criterion that is used to distinguish…
Descriptors: Computer Assisted Testing, Adaptive Testing, Models, Delivery Systems
Huba, G. J. – Educational and Psychological Measurement, 1986 (peer reviewed)
The runs test for random sequences of responding is proposed for application in long inventories with dichotomous items as an index of stereotyped responding. This index is useful for detecting whether the client shifts between response alternatives more or less frequently than would be expected by chance. (LMO)
Descriptors: Computer Assisted Testing, Personality Measures, Response Style (Tests), Scoring
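The index Huba proposes is the classical Wald-Wolfowitz runs test. A minimal sketch of the standard large-sample form (an assumption: the article may compute the statistic differently) on a dichotomous response vector:

```python
import math

def runs_test(responses):
    """Runs test on a sequence of True/False responses.
    Returns (runs, z): a large positive z means the respondent
    alternates more often than chance predicts; a large negative z
    means long stereotyped runs of the same alternative."""
    n1 = sum(1 for r in responses if r)        # count of one alternative
    n2 = len(responses) - n1                   # count of the other
    n = n1 + n2
    runs = 1 + sum(1 for a, b in zip(responses, responses[1:]) if a != b)
    mean = 2 * n1 * n2 / n + 1                 # expected runs under chance
    var = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n ** 2 * (n - 1))
    return runs, (runs - mean) / math.sqrt(var)
```

A strictly alternating TFTF… pattern and a solid block of T's followed by F's both flag as non-random, in opposite directions.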
Wise, Steven L. – 1999
Outside of large-scale testing programs, the computerized adaptive test (CAT) has thus far had only limited impact on measurement practice. In smaller-scale testing contexts, limited data are often available, which precludes the establishment of calibrated item pools for use by traditional (i.e., item response theory (IRT) based) CATs. This paper…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Response Theory, Scores
Anderson, Richard Ivan – Journal of Computer-Based Instruction, 1982
Describes confidence testing methods (confidence weighting, probabilistic marking, multiple alternative selection) as alternative to computer-based, multiple choice tests and explains potential benefits (increased reliability, improved examinee evaluation of alternatives, extended diagnostic information and remediation prescriptions, happier…
Descriptors: Computer Assisted Testing, Confidence Testing, Multiple Choice Tests, Probability
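One of the confidence-testing methods Anderson names, probabilistic marking, can be illustrated with a logarithmic scoring rule. This sketch is illustrative only; the article surveys several methods and does not prescribe this particular rule.

```python
import math

def log_score(probs, keyed):
    """Probabilistic marking: the examinee spreads probability over the
    options, and the mark is the log of the probability placed on the
    keyed answer. The log rule is 'proper': expected score is maximized
    by reporting one's true confidence rather than bluffing."""
    if abs(sum(probs) - 1.0) > 1e-9:
        raise ValueError("probabilities must sum to 1")
    p = max(probs[keyed], 1e-12)  # floor to avoid log(0) on a zero bet
    return math.log(p)
```

Full confidence in the keyed answer scores 0 (the maximum); hedging costs a little, and betting everything on a wrong option costs a great deal, which is the diagnostic signal confidence testing is after.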
