Showing all 4 results
Peer reviewed
Chen, Jing; Fife, James H.; Bejar, Isaac I.; Rupp, André A. – ETS Research Report Series, 2016
The "e-rater"® automated scoring engine used at Educational Testing Service (ETS) scores the writing quality of essays. In the current practice, e-rater scores are generated via a multiple linear regression (MLR) model as a linear combination of various features evaluated for each essay and human scores as the outcome variable. This…
Descriptors: Scoring, Models, Artificial Intelligence, Automation
Peer reviewed
Chen, Jing; Zhang, Mo; Bejar, Isaac I. – ETS Research Report Series, 2017
Automated essay scoring (AES) generally computes essay scores as a function of macrofeatures derived from a set of microfeatures extracted from the text using natural language processing (NLP). In the "e-rater"® automated scoring engine, developed at Educational Testing Service (ETS) for the automated scoring of essays, each…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essay Tests
Peer reviewed
Liu, Ou Lydia – ETS Research Report Series, 2014
This study investigates the relationship between test preparation and test performance on the "TOEFL iBT"® exam. Information on background variables and test preparation strategies was gathered from 14,593 respondents in China through an online survey. A Chinese standardized English test was used as a control for prior English ability. …
Descriptors: Research Reports, Test Preparation, Language Tests, College Entrance Examinations
Peer reviewed
Deane, Paul; Gurevich, Olga – ETS Research Report Series, 2008
For many purposes, it is useful to collect a corpus of texts all produced to the same stimulus, whether to measure performance (as on a test) or to test hypotheses about population differences. This paper examines several methods for measuring similarities in phrasing and content and demonstrates that these methods can be used to identify…
Descriptors: Test Content, Computational Linguistics, Native Speakers, Writing Tests