Publication Date
In 2025 | 0 |
Since 2024 | 1 |
Since 2021 (last 5 years) | 1 |
Since 2016 (last 10 years) | 1 |
Since 2006 (last 20 years) | 5 |
Descriptor
Bias | 5 |
English (Second Language) | 5 |
Language Tests | 5 |
Scoring | 3 |
Second Language Learning | 3 |
Chinese | 2 |
Dialects | 2 |
Evaluators | 2 |
Familiarity | 2 |
Item Response Theory | 2 |
Korean | 2 |
Author
Gass, Susan | 2 |
Myford, Carol | 2 |
Winke, Paula | 2 |
Attali, Yigal | 1 |
Carey, Michael D. | 1 |
Mollaun, Pam | 1 |
Szocs, Stefan | 1 |
Xi, Xiaoming | 1 |
Publication Type
Journal Articles | 4 |
Reports - Research | 4 |
Tests/Questionnaires | 2 |
Reports - Evaluative | 1 |
Education Level
Higher Education | 1 |
Postsecondary Education | 1 |
Location
India | 1 |
Michigan | 1 |
North America | 1 |
Assessments and Surveys
Test of English as a Foreign Language | 5 |
International English Language Testing System | 1 |
Carey, Michael D.; Szocs, Stefan – Language Testing, 2024
This controlled experimental study investigated the interaction of variables associated with rating the pronunciation component of high-stakes English language speaking tests such as IELTS and TOEFL iBT. One hundred experienced raters who were either familiar or unfamiliar with Brazilian-accented English or Papua New Guinean Tok Pisin-accented…
Descriptors: Dialects, Pronunciation, Suprasegmentals, Familiarity
Winke, Paula; Gass, Susan; Myford, Carol – Language Testing, 2013
Based on evidence that listeners may favor certain foreign accents over others (Gass & Varonis, 1984; Major, Fitzmaurice, Bunta, & Balasubramanian, 2002; Tauroza & Luk, 1997) and that language-test raters may better comprehend and/or rate the speech of test takers whose native languages (L1s) are more familiar on some level (Carey,…
Descriptors: Native Language, Bias, Dialects, Pronunciation
Winke, Paula; Gass, Susan; Myford, Carol – ETS Research Report Series, 2011
This study investigated whether raters' second language (L2) background and the first language (L1) of test takers taking the TOEFL iBT® Speaking test were related through scoring. After an initial 4-hour training period, a group of 107 raters (mostly learners of Chinese, Korean, and Spanish) listened to a selection of 432 speech samples that…
Descriptors: Second Language Learning, Evaluators, Speech Tests, English (Second Language)
Xi, Xiaoming; Mollaun, Pam – Educational Testing Service, 2009
This study investigated the scoring of the Test of English as a Foreign Language™ Internet-based Test (TOEFL iBT™) Speaking section by bilingual or multilingual speakers of English and one or more Indian languages. We explored the extent to which raters from India, after being trained and certified, were able to score the Speaking section for…
Descriptors: Foreign Countries, English (Second Language), Internet, Language Tests
Attali, Yigal – ETS Research Report Series, 2007
This study examined the construct validity of the "e-rater"® automated essay scoring engine as an alternative to human scoring in the context of TOEFL® essay writing. Analyses were based on a sample of students who repeated the TOEFL within a short time period. Two "e-rater" scores were investigated in this study, the first…
Descriptors: Construct Validity, Computer Assisted Testing, Scoring, English (Second Language)