Showing all 8 results
Peer reviewed
Direct link
Neha Biju; Nasser Said Gomaa Abdelrasheed; Khilola Bakiyeva; K. D. V. Prasad; Biruk Jember – Language Testing in Asia, 2024
In recent years, language practitioners have paid increasing attention to the role of artificial intelligence (AI) in language programs. This study investigated the impact of AI-assisted language assessment on L2 learners' foreign language anxiety (FLA), attitudes, motivation, and writing skills. The study adopted a sequential exploratory mixed-methods…
Descriptors: Artificial Intelligence, Computer Software, Computer Assisted Testing, Second Language Instruction
Peer reviewed
Direct link
Shohamy, Elana; Tannenbaum, Michal; Gani, Anna – International Journal of Bilingual Education and Bilingualism, 2022
Notwithstanding the introduction of multilingual education policies worldwide, testing and assessment procedures still rely almost exclusively on the monolingual construct. This paper describes a study, part of a larger project fostering a new multilingual education policy in Israeli schools, exploring bi/multilingual assessment. It included two…
Descriptors: Scores, Comparative Analysis, Hebrew, Arabic
Peer reviewed
Direct link
Kim, Ahyoung Alicia; Lee, Shinhye; Chapman, Mark; Wilmes, Carsten – TESOL Quarterly: A Journal for Teachers of English to Speakers of Other Languages and of Standard English as a Second Dialect, 2019
This study aimed to investigate how Grade 1-2 English language learners (ELLs) differ in their performance on a writing test in two test modes: paper and online. Participants were 139 ELLs in the United States. They completed three writing tasks in the two test modes: (1) a paper mode in which students completed their writing using a…
Descriptors: Elementary School Students, English (Second Language), Second Language Learning, Second Language Instruction
Peer reviewed
PDF on ERIC
DeCarlo, Lawrence T. – ETS Research Report Series, 2008
Rater behavior in essay grading can be viewed as a signal-detection task, in that raters attempt to discriminate between latent classes of essays, with the latent classes being defined by a scoring rubric. The present report examines basic aspects of an approach to constructed-response (CR) scoring via a latent-class signal-detection model. The…
Descriptors: Scoring, Responses, Test Format, Bias
Wolfe, Edward; And Others – 1993
The two studies described here compare essays composed on word processors with those composed with pen and paper for a standardized writing assessment. The following questions guided these studies: (1) Are there differences in test administration and writing processes associated with handwritten versus word-processor writing assessments? (2) Are…
Descriptors: Adults, Comparative Analysis, Computer Uses in Education, Essays
Arizona Department of Education, 2006
Arizona's Instrument to Measure Standards (AIMS), a Standards-Based test, provides educators and the public with valuable information regarding the progress of Arizona's students toward mastering Arizona's reading, writing, and mathematics Standards. This specific test, Arizona's Instrument to Measure Standards Dual Purpose Assessment (AIMS DPA), is…
Descriptors: Grade 8, Reference Materials, Test Items, Scoring
Hendrickson, Amy; Patterson, Brian; Melican, Gerald – College Board, 2008
Presented at the annual meeting of the National Council on Measurement in Education (NCME) in New York in March 2008. This presentation explores how different item weightings can affect the effective weights, validity coefficients, and test reliability of composite scores among test takers.
Descriptors: Multiple Choice Tests, Test Format, Test Validity, Test Reliability
Peer reviewed
PDF on ERIC
Horkay, Nancy; Bennett, Randy Elliott; Allen, Nancy; Kaplan, Bruce; Yan, Fred – Journal of Technology, Learning, and Assessment, 2006
This study investigated the comparability of scores for paper and computer versions of a writing test administered to eighth grade students. Two essay prompts were given on paper to a nationally representative sample as part of the 2002 main NAEP writing assessment. The same two essay prompts were subsequently administered on computer to a second…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Program Effectiveness