Publication Date
| In 2026 | 0 |
| Since 2025 | 1 |
| Since 2022 (last 5 years) | 6 |
| Since 2017 (last 10 years) | 14 |
| Since 2007 (last 20 years) | 34 |
Source
| Journal of Educational… | 57 |
Author
| Clariana, Roy B. | 2 |
| Frick, Theodore W. | 2 |
| Lee, Kathryn S. | 2 |
| Osborne, Randall E. | 2 |
| Pomplun, Mark | 2 |
| Stowell, Jeffrey R. | 2 |
| Allan, Wesley D. | 1 |
| Applegate, Brooks | 1 |
| Araya, Roberto | 1 |
| Bennett, Dan | 1 |
| Bergstrom, Betty | 1 |
Publication Type
| Journal Articles | 57 |
| Reports - Research | 46 |
| Reports - Evaluative | 8 |
| Reports - Descriptive | 5 |
| Numerical/Quantitative Data | 2 |
| Tests/Questionnaires | 2 |
Audience
| Researchers | 2 |
Location
| China | 2 |
| Massachusetts | 2 |
| Australia | 1 |
| Canada | 1 |
| Chile (Santiago) | 1 |
| China (Shanghai) | 1 |
| Germany | 1 |
| Hong Kong | 1 |
| Illinois | 1 |
| Louisiana | 1 |
| Portugal | 1 |
Laws, Policies, & Programs
| No Child Left Behind Act 2001 | 1 |
| Pell Grant Program | 1 |
What Works Clearinghouse Rating
| Does not meet standards | 1 |
Pomplun, Mark; Ritchie, Timothy – Journal of Educational Computing Research, 2004
This study investigated the statistical and practical significance of context effects for items randomized within testlets for administration during a series of computerized non-adaptive tests. One hundred and twenty-five items from four primary school reading tests were studied. Logistic regression analyses identified from one to four items for…
Descriptors: Psychometrics, Context Effect, Effect Size, Primary Education
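Pomplun and Ritchie's analysis used logistic regression to flag items whose difficulty shifts with testlet context. A toy version of such an item-level check, fit from scratch in plain Python, might look like the sketch below; the position coding and the correct/incorrect data are entirely hypothetical, not taken from the study.

```python
import math

def fit_logistic(xs, ys, lr=0.1, epochs=5000):
    """Tiny one-predictor logistic regression fit by gradient descent.
    Illustrative stand-in for an item context-effect analysis:
    x = a hypothetical coding of item position within the testlet,
    y = 1 if the item was answered correctly, else 0.
    Returns the slope w and intercept b; a clearly nonzero w
    suggests correctness depends on position (a context effect)."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted P(correct)
            gw += (p - y) * x                         # gradient w.r.t. w
            gb += (p - y)                             # gradient w.r.t. b
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b
```

With fabricated data where the item is answered correctly 90% of the time in one position but only 50% in another, the fitted slope comes out negative, mimicking how a real analysis would flag a context-sensitive item.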
Lemaire, Benoit; Dessus, Philippe – Journal of Educational Computing Research, 2001 (Peer reviewed)
Describes Apex (Assistant for Preparing Exams), a tool for evaluating student essays based on their content. By comparing an essay and the text of a given course on a semantic basis, the system can measure how well the essay matches the text. Various assessments are presented to the student regarding the topic, outline, and coherence of the essay.…
Descriptors: Computer Assisted Testing, Computer Oriented Programs, Computer Uses in Education, Educational Technology
Shermis, Mark D.; Mzumara, Howard R.; Bublitz, Scott T. – Journal of Educational Computing Research, 2001 (Peer reviewed)
This study of undergraduates examined differences between computer adaptive testing (CAT) and self-adaptive testing (SAT), including feedback conditions and gender differences. Results of the Test Anxiety Inventory, Computer Anxiety Rating Scale, and a Student Attitude Questionnaire showed measurement efficiency is differentially affected by test…
Descriptors: Adaptive Testing, Computer Anxiety, Computer Assisted Testing, Gender Issues
Frick, Theodore W. – Journal of Educational Computing Research, 1989 (Peer reviewed)
Demonstrates how Bayesian reasoning can be used to adjust the length of computer-guided practice exercises and computer-based tests to help make mastery or nonmastery decisions. Individualization of instruction is discussed, and the results of an empirical study that used the sequential probability ratio test (SPRT) are presented. (25 references)…
Descriptors: Adaptive Testing, Computer Assisted Instruction, Computer Assisted Testing, Higher Education
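Frick's article applies Wald's sequential probability ratio test (SPRT) to decide mastery with as few test items as possible. The sketch below shows the standard SPRT decision rule for a stream of right/wrong responses; the mastery/nonmastery proportions and error rates are illustrative defaults, not values from the study.

```python
from math import log

def sprt_mastery(responses, p_master=0.85, p_nonmaster=0.60,
                 alpha=0.05, beta=0.05):
    """Sequential probability ratio test for mastery decisions.
    responses: iterable of 1 (correct) / 0 (incorrect).
    Accumulates the log-likelihood ratio of H1 (mastery, P(correct)
    = p_master) against H0 (nonmastery, P(correct) = p_nonmaster)
    and stops as soon as a Wald boundary is crossed.
    Returns (decision, items_used)."""
    upper = log((1 - beta) / alpha)   # cross above: decide mastery
    lower = log(beta / (1 - alpha))   # cross below: decide nonmastery
    llr = 0.0
    n = 0
    for n, correct in enumerate(responses, start=1):
        if correct:
            llr += log(p_master / p_nonmaster)
        else:
            llr += log((1 - p_master) / (1 - p_nonmaster))
        if llr >= upper:
            return "master", n
        if llr <= lower:
            return "nonmaster", n
    return "undecided", n
```

The practical appeal, as in the abstract, is variable test length: a consistently strong examinee triggers the mastery boundary long before a fixed-length test would end, while a borderline one keeps receiving items.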
Frick, Theodore W. – Journal of Educational Computing Research, 1992 (Peer reviewed)
Discussion of expert systems and computerized adaptive tests describes two versions of EXSPRT, a new approach that combines uncertain inference in expert systems with sequential probability ratio test (SPRT) stopping rules. Results of two studies comparing EXSPRT to adaptive mastery testing based on item response theory and SPRT approaches are…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Expert Systems
Lalley, James P. – Journal of Educational Computing Research, 1998 (Peer reviewed)
Compares the effectiveness of textual feedback to video feedback during two computer-assisted biology lessons administered to secondary students. Lessons consisted of a brief text introduction followed by multiple-choice questions with text or video feedback. Findings indicated that video feedback resulted in superior learning and comprehension,…
Descriptors: Comparative Analysis, Computer Assisted Instruction, Computer Assisted Testing, Feedback
Harasym, Peter H.; And Others – Journal of Educational Computing Research, 1993 (Peer reviewed)
Discussion of the use of human markers to mark responses on write-in questions focuses on a study that determined the feasibility of using a computer program to mark write-in responses for the Medical Council of Canada Qualifying Examination. The computer performance was compared with that of physician markers. (seven references) (LRW)
Descriptors: Comparative Analysis, Computer Assisted Testing, Computer Software Development, Computer Software Evaluation
Moe, Kim C.; Johnson, Marilyn F. – Journal of Educational Computing Research, 1988 (Peer reviewed)
Describes study of 315 secondary school students that investigated their reactions to computerized testing and assessed the practicability of this testing method in the classroom. Two versions of a standardized aptitude test battery used are described, one computerized and one printed, and a survey assessing students' reactions is discussed. (9…
Descriptors: Adaptive Testing, Analysis of Variance, Aptitude Tests, Computer Assisted Testing
Miller, Tristan – Journal of Educational Computing Research, 2003
Latent semantic analysis (LSA) is an automated, statistical technique for comparing the semantic similarity of words or documents. In this article, I examine the application of LSA to automated essay scoring. I compare LSA methods to earlier statistical methods for assessing essay quality, and critically review contemporary essay-scoring systems…
Descriptors: Semantics, Test Scoring Machines, Essays, Semantic Differential
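Miller's review compares LSA with earlier vector-space methods for essay scoring. As a minimal, self-contained illustration of the comparison LSA builds on, here is a bag-of-words cosine similarity in plain Python; full LSA additionally applies a singular value decomposition to a term-document matrix so that documents are compared in a reduced latent-semantic space rather than on raw word overlap.

```python
from collections import Counter
from math import sqrt

def cosine_similarity(doc_a: str, doc_b: str) -> float:
    """Cosine similarity of two documents as bags of words.
    1.0 means identical word distributions, 0.0 means no shared
    words. LSA refines this by first projecting term vectors onto
    latent dimensions found by SVD."""
    va = Counter(doc_a.lower().split())
    vb = Counter(doc_b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (sqrt(sum(c * c for c in va.values()))
            * sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0
```

An essay scorer in this family compares a student essay against graded reference essays (or course text, as in the Apex system above) and maps the similarity onto a score scale.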
Riedel, Eric; Dexter, Sara L.; Scharber, Cassandra; Doering, Aaron – Journal of Educational Computing Research, 2006
Research on computer-based writing evaluation has only recently focused on the potential for providing formative feedback rather than summative assessment. This study tests the impact of an automated essay scorer (AES) that provides formative feedback on essay drafts written as part of a series of online teacher education case studies. Seventy…
Descriptors: Preservice Teacher Education, Writing Evaluation, Case Studies, Formative Evaluation
Clariana, Roy B.; Wallace, Patricia – Journal of Educational Computing Research, 2007
This proof-of-concept investigation describes a computer-based approach for deriving the knowledge structure of individuals and of groups from their written essays, and considers the convergent criterion-related validity of the computer-based scores relative to human rater essay scores and multiple-choice test scores. After completing a…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Construct Validity, Cognitive Structures
Olsen, James B.; And Others – Journal of Educational Computing Research, 1989 (Peer reviewed)
Describes study that was designed to compare student achievement scores from three different testing methods: paper-administered testing, computer-administered testing, and computerized adaptive testing. The California Assessment Program (CAP) item banks for grades three and six which this study incorporated are described, and results are…
Descriptors: Academic Achievement, Adaptive Testing, Analysis of Variance, Comparative Analysis

