Showing all 12 results
Peer reviewed
PDF on ERIC Download full text
Ali Al-Barakat; Rommel AlAli; Omayya Al-Hassan; Khaled Al-Saud – Educational Process: International Journal, 2025
Background/purpose: The study investigates how predictive thinking can be incorporated into writing activities to help students develop their creative skills in writing learning environments. Through this study, teachers will be able to adopt a new teaching method that helps transform the way creative writing is taught in language…
Descriptors: Thinking Skills, Creative Writing, Writing Instruction, Validity
Peer reviewed
PDF on ERIC Download full text
Osama Koraishi – Language Teaching Research Quarterly, 2024
This study conducts a comprehensive quantitative evaluation of OpenAI's language model, ChatGPT 4, for grading Task 2 writing of the IELTS exam. The objective is to assess the alignment between ChatGPT's grading and that of official human raters. The analysis encompassed a multifaceted approach, including a comparison of means and reliability…
Descriptors: Second Language Learning, English (Second Language), Language Tests, Artificial Intelligence
Peer reviewed
Direct link
Steedle, Jeffrey T.; Ferrara, Steve – Applied Measurement in Education, 2016
As an alternative to rubric scoring, comparative judgment generates essay scores by aggregating decisions about the relative quality of the essays. Comparative judgment eliminates certain scorer biases and potentially reduces training requirements, thereby allowing a large number of judges, including teachers, to participate in essay evaluation.…
Descriptors: Essays, Scoring, Comparative Analysis, Evaluators
Peer reviewed
PDF on ERIC Download full text
Kural, Faruk – Journal of Language and Linguistic Studies, 2018
The present paper, a study based on the midterm exam results of 53 university English prep-school students, examines the correlation between a direct writing test, measured holistically by multiple-trait scoring, and two indirect writing tests used in a competence exam, one of which is a multiple-choice cloze test and the other a rewrite test…
Descriptors: Writing Evaluation, Cloze Procedure, Comparative Analysis, Essays
Peer reviewed
Direct link
Humphry, Stephen M.; McGrane, Joshua A. – Australian Educational Researcher, 2015
This paper presents a method for equating writing assessments using pairwise comparisons which does not depend upon conventional common-person or common-item equating designs. Pairwise comparisons have been successfully applied in the assessment of open-ended tasks in English and other areas such as visual art and philosophy. In this paper,…
Descriptors: Writing Evaluation, Evaluation Methods, Comparative Analysis, Writing Tests
Peer reviewed
PDF on ERIC Download full text
Ahmed, Tamim; Hanif, Maria – Journal of Education and Practice, 2016
This study investigates students' achievement across two family income groups, i.e., low- and high-income families, and is designed for primary-level learners. A Reading, Arithmetic and Writing (RAW) achievement test developed as part of another research study (Tamim Ahmed Khan, 2015) was adopted for this study. Both English-medium…
Descriptors: Low Income, Performance Based Assessment, Elementary School Students, Achievement Tests
Peer reviewed
PDF on ERIC Download full text
Attali, Yigal; Sinharay, Sandip – ETS Research Report Series, 2015
The "e-rater"® automated essay scoring system is used operationally in the scoring of the argument and issue tasks that form the Analytical Writing measure of the "GRE"® General Test. For each of these tasks, this study explored the value added of reporting 4 trait scores for each of these 2 tasks over the total e-rater score.…
Descriptors: Scores, Computer Assisted Testing, Computer Software, Grammar
Al-Sayed, Rania Kamal Muhammad; Abdel-Haq, Eman Muhammad; El-Deeb, Mervat Abou-Bakr; Ali, Mahsoub Abdel-Sadeq – Online Submission, 2016
The present study aimed at developing the memoir-writing skills, as a creative non-fiction genre, of second-year distinguished governmental language preparatory school pupils using a WebQuest model. Fifty second-year participants at Hassan Abu-Bakr Distinguished Governmental Language School at Al-Qanater Al-Khairia (Qalubia Governorate) were…
Descriptors: Foreign Countries, Autobiographies, Personal Narratives, Writing Skills
Peer reviewed
Direct link
Peckham, Irvin – College Composition and Communication, 2010
This article compares essays written in response to the ACT Essay prompt and a locally developed prompt used for placement. The two writing situations differ by time and genre: the ACT Essay is timed and argumentative; the locally developed prompt is untimed and explanatory. The article analyzes the differences in student performance and predictive…
Descriptors: Postsecondary Education, Writing Tests, Context Effect, Time
Peer reviewed
Direct link
Huang, Jinyan – Assessing Writing, 2008
Using generalizability theory, this study examined both the rating variability and reliability of ESL students' writing in the provincial English examinations in Canada. Three years' data were used in order to complete the analyses and examine the stability of the results. The major research question that guided this study was: Are there any…
Descriptors: Generalizability Theory, Foreign Countries, English (Second Language), Writing Tests
Crehan, Kevin D.; Hudson, Rhoton – 1998
The aim of this study was to explore a method of improving the objectivity, reliability, and efficiency of scoring performance assessments that involve constructed written responses. Millman (1997) has suggested an alternative to using model responses at each score category. The proposed strategy, hypothesized to increase scorer reliability and…
Descriptors: Comparative Analysis, Constructed Response, Cost Effectiveness, Elementary School Students
Peer reviewed
PDF on ERIC Download full text
Attali, Yigal; Burstein, Jill – ETS Research Report Series, 2005
The e-rater® system has been used by ETS for automated essay scoring since 1999. This paper describes a new version of e-rater (v.2.0) that differs from the previous one (v.1.3) with regard to the feature set and model building approach. The paper describes the new version, compares the new and previous versions in terms of performance, and…
Descriptors: Essay Tests, Automation, Scoring, Comparative Analysis