Publication Date
| In 2026 | 0 |
| Since 2025 | 1 |
| Since 2022 (last 5 years) | 2 |
| Since 2017 (last 10 years) | 3 |
| Since 2007 (last 20 years) | 5 |
Source
| ETS Research Report Series | 3 |
| International Journal of Computer-Assisted Language Learning and Teaching | 1 |
| Technology in Language Teaching & Learning | 1 |
| Turkish Online Journal of Educational Technology - TOJET | 1 |
Author
| Attali, Yigal | 1 |
| Bateson, Gordon | 1 |
| Blanchard, Daniel | 1 |
| Burstein, Jill | 1 |
| Chen Li | 1 |
| Evanini, Keelan | 1 |
| Fatemeh Mohajeri | 1 |
| Heilman, Michael | 1 |
| Ikkyu Choi | 1 |
| Jakub Novák | 1 |
| Jiangang Hao | 1 |
Publication Type
| Journal Articles | 6 |
| Reports - Research | 5 |
| Reports - Descriptive | 1 |
Education Level
| Elementary Education | 2 |
| Grade 6 | 2 |
| Higher Education | 2 |
| Junior High Schools | 2 |
| Middle Schools | 2 |
| Postsecondary Education | 2 |
| Secondary Education | 2 |
| Elementary Secondary Education | 1 |
| Grade 10 | 1 |
| Grade 11 | 1 |
| Grade 12 | 1 |
Location
| Iran (Tehran) | 1 |
| Japan | 1 |
| Taiwan | 1 |
Assessments and Surveys
| Test of English as a Foreign Language | 3 |
| International English Language Testing System | 2 |
| Graduate Management Admission Test | 1 |
| Test of English for… | 1 |
Ikkyu Choi; Jiangang Hao; Chen Li; Michael Fauss; Jakub Novák – ETS Research Report Series, 2024
A frequently encountered security issue in writing tests is nonauthentic text submission: Test takers submit texts that are not their own but rather are copies of texts prepared by someone else. In this report, we propose AutoESD, a human-in-the-loop and automated system to detect nonauthentic texts for a large-scale writing test, and report its…
Descriptors: Writing Tests, Automation, Cheating, Plagiarism
Somayeh Fathali; Fatemeh Mohajeri – Technology in Language Teaching & Learning, 2025
The International English Language Testing System (IELTS) is a high-stakes exam in which Writing Task 2 significantly influences the overall score, requiring reliable evaluation. While trained human raters perform this task, concerns about subjectivity and inconsistency have led to growing interest in artificial intelligence (AI)-based assessment…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Artificial Intelligence
Bateson, Gordon – International Journal of Computer-Assisted Language Learning and Teaching, 2021
As a result of the Japanese Ministry of Education's recent edict that students' written and spoken English should be assessed in university entrance exams, there is an urgent need for tools to help teachers and students prepare for these exams. Although some commercial tools already exist, they are generally expensive and inflexible. To address…
Descriptors: Test Construction, Computer Assisted Testing, Internet, Writing Tests
Evanini, Keelan; Heilman, Michael; Wang, Xinhao; Blanchard, Daniel – ETS Research Report Series, 2015
This report describes the initial automated scoring results that were obtained using the constructed responses from the Writing and Speaking sections of the pilot forms of the "TOEFL Junior"® Comprehensive test administered in late 2011. For all of the items except one (the edit item in the Writing section), existing automated scoring…
Descriptors: Computer Assisted Testing, Automation, Language Tests, Second Language Learning
Liao, Chen-Huei; Kuo, Bor-Chen; Pai, Kai-Chih – Turkish Online Journal of Educational Technology - TOJET, 2012
Automated scoring by means of Latent Semantic Analysis (LSA) has recently been introduced to improve the traditional human scoring system. The purposes of the present study were to develop an LSA-based assessment system to evaluate children's Chinese sentence construction skills and to examine the effectiveness of LSA-based automated scoring function…
Descriptors: Foreign Countries, Program Effectiveness, Scoring, Personality
Attali, Yigal; Burstein, Jill – ETS Research Report Series, 2005
The e-rater® system has been used by ETS for automated essay scoring since 1999. This paper describes a new version of e-rater (v.2.0) that differs from the previous one (v.1.3) with regard to the feature set and model building approach. The paper describes the new version, compares the new and previous versions in terms of performance, and…
Descriptors: Essay Tests, Automation, Scoring, Comparative Analysis
