Showing 1 to 15 of 47 results
Peer reviewed
Yasuyo Sawaki; Yutaka Ishii; Hiroaki Yamada; Takenobu Tokunaga – Language Testing, 2025
This study examined the consistency between instructor ratings of learner-generated summaries and those estimated by a large language model (LLM) on summary content checklist items designed for undergraduate second language (L2) writing instruction in Japan. The effects of the LLM prompt design on the consistency between the two were also explored…
Descriptors: Interrater Reliability, Writing Teachers, College Faculty, Artificial Intelligence
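The study above reports on the consistency between instructor ratings and LLM-estimated ratings on checklist items; the abstract does not name the agreement statistic used, so the sketch below is only a hypothetical illustration of one common choice (raw agreement and Cohen's kappa over binary checklist decisions), with invented data.

```python
# Hypothetical sketch: agreement between instructor and LLM checklist decisions.
# The 0/1 decisions below are invented for illustration and are not taken from
# Sawaki et al. (2025).
from sklearn.metrics import cohen_kappa_score

# 1 = checklist item judged present in the learner's summary, 0 = absent
instructor = [1, 0, 1, 1, 0, 1, 1, 0, 0, 1]
llm        = [1, 0, 1, 0, 0, 1, 1, 1, 0, 1]

raw_agreement = sum(a == b for a, b in zip(instructor, llm)) / len(instructor)
kappa = cohen_kappa_score(instructor, llm)

print(f"Raw agreement: {raw_agreement:.2f}")  # proportion of matching decisions
print(f"Cohen's kappa: {kappa:.2f}")          # chance-corrected agreement
```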
Peer reviewed
Suh Keong Kwon; Guoxing Yu – Language Testing, 2024
In this study, we examined the effect of visual cues in a second language listening test on test takers' viewing behaviours and their test performance. Fifty-seven learners of English in Korea took a video-based listening test, with their eye movements recorded, and 23 of them were interviewed individually after the test. The participants viewed…
Descriptors: Foreign Countries, English (Second Language), Second Language Learning, Eye Movements
Peer reviewed
Shungo Suzuki; Hiroaki Takatsu; Ryuki Matsuura; Miina Koyama; Mao Saeki; Yoichi Matsuyama – Language Testing, 2025
The current study proposes a new approach to weakness identification in diagnostic language assessment (DLA) for speaking skills. We also propose to design actionable and contextualised diagnostic feedback through the systematic integration of feedback and remedial learning activities. Focusing on lexical use in second language speaking, the…
Descriptors: English (Second Language), Speech Skills, Artificial Intelligence, Second Language Learning
Peer reviewed
Michael D. Carey; Stefan Szocs – Language Testing, 2024
This controlled experimental study investigated the interaction of variables associated with rating the pronunciation component of high-stakes English-language-speaking tests such as IELTS and TOEFL iBT. One hundred experienced raters who were either familiar or unfamiliar with Brazilian-accented English or Papua New Guinean Tok Pisin-accented…
Descriptors: Dialects, Pronunciation, Suprasegmentals, Familiarity
Peer reviewed
Yu-Tzu Chang; Ann Tai Choe; Daniel Holden; Daniel R. Isbell – Language Testing, 2024
In this Brief Report, we describe an evaluation of and revisions to a rubric adapted from Jacobs et al.'s (1981) ESL COMPOSITION PROFILE, with four rubric categories and 20-point rating scales, in the context of an intensive English program writing placement test. Analysis of 4 years of rating data (2016-2021, including 434 essays) using…
Descriptors: Language Tests, Rating Scales, Second Language Learning, English (Second Language)
Peer reviewed
Nishizawa, Hitoshi – Language Testing, 2023
In this study, I investigate the construct validity and fairness pertaining to the use of a variety of Englishes in listening test input. I obtained data from a post-entry English language placement test administered at a public university in the United States. In addition to expectedly familiar American English, the test features Hawai'i,…
Descriptors: Construct Validity, Listening Comprehension Tests, Language Tests, English (Second Language)
Peer reviewed
Farshad Effatpanah; Purya Baghaei; Mona Tabatabaee-Yazdi; Esmat Babaii – Language Testing, 2025
This study aimed to propose a new method for scoring C-Tests as measures of general language proficiency. In this approach, the unit of analysis is sentences rather than gaps or passages. That is, the gaps correctly reformulated in each sentence were summed to form a sentence score, and then each sentence was entered into the analysis as a polytomous…
Descriptors: Item Response Theory, Language Tests, Test Items, Test Construction
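The scoring idea described above (counting the correctly restored gaps in each sentence and treating each sentence as one polytomous item) can be sketched roughly as follows; the data structure and numbers are assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch of sentence-level polytomous scoring for a C-test.
# gap_scores maps each sentence to the 0/1 scores of the gaps it contains;
# the structure and values are invented for illustration.
gap_scores = {
    "sentence_1": [1, 0, 1, 1],   # 4 gaps, 3 restored correctly
    "sentence_2": [1, 1],         # 2 gaps, both correct
    "sentence_3": [0, 0, 1],      # 3 gaps, 1 correct
}

# Each sentence becomes one polytomous item whose score is the number of
# correctly reformulated gaps (0 .. number of gaps in that sentence).
sentence_scores = {s: sum(gaps) for s, gaps in gap_scores.items()}
max_scores = {s: len(gaps) for s, gaps in gap_scores.items()}

for s in gap_scores:
    print(f"{s}: {sentence_scores[s]} / {max_scores[s]}")
# These sentence-level scores could then be analysed with a polytomous IRT
# model (e.g., a partial credit model) rather than dichotomous gap scoring.
```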
Peer reviewed
Alpizar, David; Li, Tongyun; Norris, John M.; Gu, Lixiong – Language Testing, 2023
The C-test is a type of gap-filling test designed to efficiently measure second language proficiency. The typical C-test consists of several short paragraphs with the second half of every second word deleted. The words with deleted parts are considered as items nested within the corresponding paragraph. Given this testlet structure, it is commonly…
Descriptors: Psychometrics, Language Tests, Second Language Learning, Test Items
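The construction rule quoted above (the second half of every second word deleted) can be illustrated with a small hypothetical function; operational C-tests follow additional conventions (for example, leaving the opening sentence intact), so this is only a schematic sketch.

```python
# Schematic sketch of the C-test deletion rule described above: the second
# half of every second word is replaced by blanks. Not a full C-test builder.
def make_c_test(paragraph: str) -> str:
    words = paragraph.split()
    mutilated = []
    for i, word in enumerate(words, start=1):
        if i % 2 == 0 and len(word) > 1:      # every second word
            keep = (len(word) + 1) // 2       # keep the first half
            mutilated.append(word[:keep] + "_" * (len(word) - keep))
        else:
            mutilated.append(word)
    return " ".join(mutilated)

print(make_c_test("Language tests are designed to measure proficiency"))
# Language tes__ are desi____ to meas___ proficiency
```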
Peer reviewed
Yan, Xun; Chuang, Ping-Lin – Language Testing, 2023
This study employed a mixed-methods approach to examine how rater performance develops during a semester-long rater certification program for an English as a Second Language (ESL) writing placement test at a large US university. From 2016 to 2018, we tracked three groups of novice raters (n = 30) across four rounds in the certification program.…
Descriptors: Evaluators, Interrater Reliability, Item Response Theory, Certification
Peer reviewed
Warnby, Marcus; Malmström, Hans; Hansen, Kajsa Yang – Language Testing, 2023
The academic section of the Vocabulary Levels Test (VLT-Ac) and the Academic Vocabulary Test (AVT) both assess meaning-recognition knowledge of written receptive academic vocabulary, deemed central for engagement in academic activities. Depending on the purpose and context of the testing, either of the tests can be appropriate, but for research…
Descriptors: Foreign Countries, Scores, Written Language, Receptive Language
Peer reviewed
Min, Shangchao; Zhang, Juan; Li, Yue; He, Lianzhen – Language Testing, 2022
Local language tests are an arena where national language standards can be operationalized to create a hub for integrating assessment results and language support. Few studies, however, have examined the operationalization of national standards in local language assessment contexts. In this study, we proposed a model to present the integration of…
Descriptors: Language Tests, Listening Comprehension Tests, Second Language Learning, English (Second Language)
Peer reviewed
Min, Shangchao; He, Lianzhen – Language Testing, 2022
In this study, we present the development of individualized feedback for a large-scale listening assessment by combining standard setting and cognitive diagnostic assessment (CDA) approaches. We used the performance data from 3,358 students' item-level responses to a field test of a national EFL test primarily intended for tertiary-level EFL…
Descriptors: Feedback (Response), Second Language Learning, Second Language Instruction, English (Second Language)
Peer reviewed
May, Lyn; Nakatsuhara, Fumiyo; Lam, Daniel; Galaczi, Evelina – Language Testing, 2020
In this paper we report on a project in which we developed tools to support the classroom assessment of learners' interactional competence (IC) and provided learning-oriented feedback in the context of preparation for a high-stakes face-to-face speaking test. Six trained examiners provided stimulated verbal reports (n = 72) on 12 paired…
Descriptors: Intercultural Communication, High Stakes Tests, Feedback (Response), Evaluators
Peer reviewed
Shin, Sun-Young; Lee, Senyung; Lidster, Ryan – Language Testing, 2021
In this study we investigated the potential for a shared-first-language (shared-L1) effect on second language (L2) listening test scores using differential item functioning (DIF) analyses. We did this in order to understand how accented speech may influence performance at the item level, while controlling for key variables including listening…
Descriptors: Listening Comprehension Tests, Language Tests, Native Language, Scores
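The abstract above mentions differential item functioning (DIF) analyses without specifying the procedure; purely as an illustration, the sketch below runs a logistic-regression DIF check on simulated data (item response modelled on a total-score proxy plus an L1-group indicator), which is one standard DIF approach and not necessarily the one used in this study.

```python
# Hypothetical logistic-regression DIF sketch: after conditioning on overall
# ability (a total-score proxy), a sizeable group coefficient suggests uniform
# DIF for the item. All data below are simulated purely for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
group = rng.integers(0, 2, n)                    # 1 = shares the speaker's L1
ability = rng.normal(0, 1, n)                    # latent listening ability
total_score = ability + rng.normal(0, 0.3, n)    # observed ability proxy

# Simulate one item that is easier for the shared-L1 group (uniform DIF).
logit = 0.8 * ability + 0.7 * group - 0.2
item = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([total_score, group]))
fit = sm.Logit(item, X).fit(disp=0)
print(fit.params)    # [intercept, ability proxy, group]; group term flags DIF
print(fit.pvalues)
```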
Peer reviewed
Kaya, Elif; O'Grady, Stefan; Kalender, Ilker – Language Testing, 2022
Language proficiency testing serves an important function of classifying examinees into different categories of ability. However, misclassification is to some extent inevitable and may have important consequences for stakeholders. Recent research suggests that classification efficacy may be enhanced substantially using computerized adaptive…
Descriptors: Item Response Theory, Test Items, Language Tests, Classification
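As background to the adaptive approach mentioned above, the following is a minimal, hypothetical sketch of a Rasch-based adaptive loop: each step administers the item with maximum information at the current ability estimate, and the examinee is finally classified against a cut score. The item bank, responses, test length, and cut score are simulated assumptions, not the design studied by Kaya et al.

```python
# Minimal, hypothetical computerized adaptive testing (CAT) sketch under a
# Rasch model, ending in a pass/fail classification against a cut score.
import numpy as np

rng = np.random.default_rng(1)
item_difficulty = rng.uniform(-2, 2, 50)   # hypothetical item bank
true_theta = 0.6                           # simulated examinee ability
cut_score = 0.0                            # classification threshold (theta scale)

theta = 0.0
used = np.zeros(item_difficulty.size, dtype=bool)   # items already administered
administered, responses = [], []
for _ in range(15):                        # fixed test length for simplicity
    p_all = 1 / (1 + np.exp(-(theta - item_difficulty)))
    info = p_all * (1 - p_all)             # Rasch item information
    info[used] = -np.inf                   # do not reuse administered items
    j = int(np.argmax(info))
    used[j] = True
    administered.append(j)
    p_true = 1 / (1 + np.exp(-(true_theta - item_difficulty[j])))
    responses.append(int(rng.random() < p_true))
    # One Newton-Raphson step on the Rasch likelihood to update theta.
    p = 1 / (1 + np.exp(-(theta - item_difficulty[administered])))
    grad = np.sum(np.array(responses) - p)
    hess = -np.sum(p * (1 - p))
    theta = float(np.clip(theta - grad / hess, -4, 4))

decision = "pass" if theta >= cut_score else "fail"
print(f"Estimated theta: {theta:.2f} -> {decision}")
```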