| Publication Date | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 1 |
| Since 2022 (last 5 years) | 1 |
| Since 2017 (last 10 years) | 5 |
| Since 2007 (last 20 years) | 11 |
| Descriptor | Results |
| --- | --- |
| Cues | 13 |
| Language Tests | 13 |
| Writing Tests | 13 |
| English (Second Language) | 11 |
| Scores | 10 |
| Second Language Learning | 9 |
| Grammar | 5 |
| Second Language Instruction | 5 |
| Comparative Analysis | 4 |
| Correlation | 4 |
| Difficulty Level | 4 |
| Source | Results |
| --- | --- |
| ETS Research Report Series | 3 |
| Language Testing | 3 |
| ProQuest LLC | 2 |
| International Journal of Instruction | 1 |
| Language Teaching Research Quarterly | 1 |
| Language Testing in Asia | 1 |
| Written Communication | 1 |
| Author | Results |
| --- | --- |
| Lee, Yong-Won | 2 |
| Agawa, Toshie | 1 |
| Ahmadi, Saeed | 1 |
| Asano, Keiko | 1 |
| Bavali, Mohammad | 1 |
| Breland, Hunter | 1 |
| Cho, Yeonsuk | 1 |
| Deane, Paul | 1 |
| Gentile, Claudia | 1 |
| Gurevich, Olga | 1 |
| Hamp-Lyons, Liz | 1 |
| Publication Type | Results |
| --- | --- |
| Reports - Research | 11 |
| Journal Articles | 10 |
| Tests/Questionnaires | 4 |
| Dissertations/Theses -… | 2 |
| Speeches/Meeting Papers | 1 |
| Education Level | Results |
| --- | --- |
| Higher Education | 3 |
| Postsecondary Education | 3 |
| High Schools | 1 |
| Secondary Education | 1 |
| Location | Results |
| --- | --- |
| China | 1 |
| Iran | 1 |
| Japan | 1 |
| Massachusetts | 1 |
| United Kingdom | 1 |
| Laws, Policies, & Programs | Results |
| --- | --- |
| No Child Left Behind Act 2001 | 1 |
| Assessments and Surveys | Results |
| --- | --- |
| Test of English as a Foreign Language | 4 |
| International English Language Testing System | 2 |
Sam Salmi; Mohammad Taghi Farvardin – Language Teaching Research Quarterly, 2025
The purpose of this study was to examine the effectiveness of explicit corrective feedback (CF) strategies (i.e., metalinguistic feedback and explicit correction) versus implicit CF methods (i.e., recasts and explanation questions) in helping English language learners acquire the that-trace filter. To this end, one hundred twenty intermediate…
Descriptors: Error Correction, Feedback (Response), Language Tests, Grammar
Yan, Xun; Staples, Shelley – Language Testing, 2020
The argument-based approach to validity (Kane, 2013) focuses on two steps: (1) making claims about the proposed interpretation and use of test scores as a coherent, interpretive argument; and (2) evaluating those claims based on theoretical and empirical evidence related to test performances and scores. This paper discusses the role of…
Descriptors: Writing Tests, Language Tests, Language Proficiency, Test Validity
Ahmadi, Saeed; Riasati, Mohammad Javad; Bavali, Mohammad – International Journal of Instruction, 2019
The present study aimed to investigate whether academic IELTS candidates perform differently when writing on a chart topic versus a table topic in IELTS writing task 1, with regard to the four IELTS writing marking criteria, i.e., task achievement, coherence and cohesion, lexical resource, and grammatical range and accuracy. The study adopted a…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Writing Tests
Shi, Bibing; Huang, Liyan; Lu, Xiaofei – Language Testing, 2020
The continuation task, a new form of reading-writing integrated task in which test-takers read an incomplete story and then write the continuation and ending of the story, has been increasingly used in writing assessment, especially in China. However, language-test developers' understanding of the effects of important task-related factors on…
Descriptors: Cues, Writing Tests, Writing Evaluation, English (Second Language)
Khuder, Baraa; Harwood, Nigel – Written Communication, 2019
This mixed-methods study investigates writers' task representation and the factors affecting it in test-like and non-test-like conditions. Five advanced-level L2 writers wrote two argumentative essays each, one in test-like conditions and the other in non-test-like conditions where the participants were allowed to use all the time and online…
Descriptors: Second Language Learning, Task Analysis, Advanced Students, Essays
Koizumi, Rie; In'nami, Yo; Asano, Keiko; Agawa, Toshie – Language Testing in Asia, 2016
Background: While numerous articles on Criterion® have been published and its validity evidence has accumulated, test users need to obtain relevant validity evidence for their local context and develop their own validity argument. This paper aims to provide validity evidence for the interpretation and use of Criterion® for assessing second…
Descriptors: Writing Evaluation, College Students, Correlation, Second Language Learning
Cho, Yeonsuk; Rijmen, Frank; Novák, Jakub – Language Testing, 2013
This study examined the influence of prompt characteristics on the averages of all scores given to test taker responses on the TOEFL iBT™ integrated Read-Listen-Write (RLW) writing tasks for multiple administrations from 2005 to 2009. In the context of TOEFL iBT RLW tasks, the prompt consists of a reading passage and a lecture. To understand…
Descriptors: English (Second Language), Language Tests, Writing Tests, Cues
Thakkar, Darshan – ProQuest LLC, 2013
It is generally theorized that English Language Learner (ELL) students do not succeed on state standardized tests because they lack the cognitive academic language skills necessary to function on large-scale content assessments. The purpose of this dissertation was to test that theory. Through the use of quantitative methodology, ELL…
Descriptors: Correlation, English Language Learners, Standardized Tests, Academic Discourse
Lee, Yong-Won; Gentile, Claudia; Kantor, Robert – ETS Research Report Series, 2008
The main purpose of the study was to investigate the distinctness and reliability of analytic (or multitrait) rating dimensions and their relationships to holistic scores and "e-rater"® essay feature variables in the context of the TOEFL® computer-based test (CBT) writing assessment. Data analyzed in the study were analytic and holistic…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Scoring
Lim, Gad S. – ProQuest LLC, 2009
Performance assessments have become the norm for evaluating language learners' writing abilities in international examinations of English proficiency. Two aspects of these assessments are usually systematically varied: test takers respond to different prompts, and their responses are read by different raters. This raises the possibility of undue…
Descriptors: Performance Based Assessment, Language Tests, Performance Tests, Test Validity
Deane, Paul; Gurevich, Olga – ETS Research Report Series, 2008
For many purposes, it is useful to collect a corpus of texts all produced to the same stimulus, whether to measure performance (as on a test) or to test hypotheses about population differences. This paper examines several methods for measuring similarities in phrasing and content and demonstrates that these methods can be used to identify…
Descriptors: Test Content, Computational Linguistics, Native Speakers, Writing Tests
Hamp-Lyons, Liz; Prochnow, Sheila – 1991
This study investigated the effect of writing task topic on learner performance in a second-language writing test, in this case the Michigan English Language Assessment Battery designed to test proficiency in English as a Second Language. The 64 topics or "prompts" used in the test (offered as pairs of options) were categorized according…
Descriptors: Cues, Difficulty Level, English (Second Language), Language Tests
Lee, Yong-Won; Breland, Hunter; Muraki, Eiji – ETS Research Report Series, 2004
This study has investigated the comparability of computer-based testing (CBT) writing prompts in the Test of English as a Foreign Language™ (TOEFL®) for examinees of different native language backgrounds. A total of 81 writing prompts introduced from July 1998 through August 2000 were examined using a three-step logistic regression procedure for…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Computer Assisted Testing
