Ramineni, Chaitanya; Williamson, David – ETS Research Report Series, 2018
Notable mean score differences between the "e-rater"® automated scoring engine and human raters were observed for essays from certain demographic groups on the "GRE"® General Test in use before the major revision of 2012 (the revised test is known as the rGRE). The use of e-rater as a check-score model with discrepancy thresholds prevented an adverse impact…
Descriptors: Scores, Computer Assisted Testing, Test Scoring Machines, Automation

Peer reviewed
