Publication Date

| Period | Count |
|---|---|
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 1 |
| Since 2007 (last 20 years) | 8 |
Author

| Author | Count |
|---|---|
| Attali, Yigal | 1 |
| Budiharto, Widodo | 1 |
| Burk, John | 1 |
| Burstein, Jill | 1 |
| Chao, K.-J. | 1 |
| Chen, N.-S. | 1 |
| Clariana, Roy B. | 1 |
| Coniam, David | 1 |
| Dikli, Semire | 1 |
| Hall, Jane | 1 |
| Heryadi, Yaya | 1 |
Publication Type

| Type | Count |
|---|---|
| Journal Articles | 11 |
| Reports - Evaluative | 4 |
| Reports - Descriptive | 3 |
| Reports - Research | 3 |
| Guides - Non-Classroom | 1 |
Education Level

| Level | Count |
|---|---|
| Higher Education | 7 |
| Postsecondary Education | 7 |
| Elementary Secondary Education | 4 |
| Secondary Education | 2 |
Assessments and Surveys

| Assessment | Count |
|---|---|
| ACT Assessment | 1 |
| National Assessment of… | 1 |
| SAT (College Admission Test) | 1 |
| Test of English as a Foreign… | 1 |
Wijanarko, Bambang Dwi; Heryadi, Yaya; Toba, Hapnes; Budiharto, Widodo – Education and Information Technologies, 2021
Automated question generation is the task of generating questions from structured or unstructured data. The increasing popularity of online learning in recent years has given momentum to automated question generation in the education field for facilitating the learning process, learning material retrieval, and computer-based testing. This paper reports on the…
Descriptors: Foreign Countries, Undergraduate Students, Engineering Education, Computer Software
McCurry, Doug – Assessing Writing, 2010
This article considers the claim that machine scoring of writing test responses agrees with human readers as much as humans agree with other humans. These claims about the reliability of machine scoring of writing are usually based on specific and constrained writing tasks, and there is reason for asking whether machine scoring of writing requires…
Descriptors: Writing Tests, Scoring, Interrater Reliability, Computer Assisted Testing
Chao, K.-J.; Hung, I.-C.; Chen, N.-S. – Journal of Computer Assisted Learning, 2012
Online learning has been rapidly developing in the last decade. However, there is very little literature available about the actual adoption of online synchronous assessment approaches and any guidelines for effective assessment design and implementation. This paper aims at designing and evaluating the possibility of applying online synchronous…
Descriptors: Electronic Learning, Student Evaluation, Online Courses, Computer Software
Mogey, Nora; Paterson, Jessie; Burk, John; Purcell, Michael – ALT-J: Research in Learning Technology, 2010
Students at the University of Edinburgh do almost all their work on computers, but at the end of the semester they are examined by handwritten essays. Intuitively it would be appealing to allow students the choice of handwriting or typing, but this raises a concern that perhaps this might not be "fair"--that the choice a student makes,…
Descriptors: Handwriting, Essay Tests, Interrater Reliability, Grading
Coniam, David – ReCALL, 2009
This paper describes a study of the computer essay-scoring program BETSY. While the use of computers in rating written scripts has been criticised in some quarters for lacking transparency or for fitting poorly with how human raters rate written scripts, a number of essay rating programs are available commercially, many of which claim to offer comparable…
Descriptors: Writing Tests, Scoring, Foreign Countries, Interrater Reliability
McPherson, Douglas – Interactive Technology and Smart Education, 2009
Purpose: The purpose of this paper is to describe how and why Texas A&M University at Qatar (TAMUQ) has developed a system aiming to effectively place students in freshman and developmental English programs. The placement system includes: triangulating data from external test scores, with scores from a panel-marked hand-written essay (HWE),…
Descriptors: Student Placement, Educational Testing, English (Second Language), Second Language Instruction
Johnson, Martin; Nadas, Rita – Learning, Media and Technology, 2009
Within large scale educational assessment agencies in the UK, there has been a shift towards assessors marking digitally scanned copies rather than the original paper scripts that were traditionally used. This project uses extended essay examination scripts to consider whether the mode in which an essay is read potentially influences the…
Descriptors: Reading Comprehension, Educational Assessment, Internet, Essay Tests
Vockell, Edward L.; Hall, Jane – Clearing House, 1988
Explores how computers can assist teachers in developing tests. Describes "TESTWORKS," a computerized test generator. Lists 12 other test-generating programs available for Apple II computers, detailing data-entry features, hard copy features, and program features that permit interactive testing. Discusses using word processing or…
Descriptors: Computer Assisted Testing, Computer Software, Computer Software Reviews, Elementary Secondary Education
Dikli, Semire – Journal of Technology, Learning, and Assessment, 2006
Automated Essay Scoring (AES) is defined as the computer technology that evaluates and scores the written prose (Shermis & Barrera, 2002; Shermis & Burstein, 2003; Shermis, Raymat, & Barrera, 2003). AES systems are mainly used to overcome time, cost, reliability, and generalizability issues in writing assessment (Bereiter, 2003; Burstein,…
Descriptors: Scoring, Writing Evaluation, Writing Tests, Standardized Tests
Clariana, Roy B.; Wallace, Patricia – Journal of Educational Computing Research, 2007
This proof-of-concept investigation describes a computer-based approach for deriving the knowledge structure of individuals and of groups from their written essays, and considers the convergent criterion-related validity of the computer-based scores relative to human rater essay scores and multiple-choice test scores. After completing a…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Construct Validity, Cognitive Structures
Attali, Yigal; Burstein, Jill – Journal of Technology, Learning, and Assessment, 2006
E-rater[R] has been used by the Educational Testing Service for automated essay scoring since 1999. This paper describes a new version of e-rater (V.2) that is different from other automated essay scoring systems in several important respects. The main innovations of e-rater V.2 are a small, intuitive, and meaningful set of features used for…
Descriptors: Educational Testing, Test Scoring Machines, Scoring, Writing Evaluation