Showing 166 to 180 of 956 results
Peer reviewed
PDF on ERIC
Ilhan, Mustafa; Öztürk, Nagihan Boztunç; Sahin, Melek Gülsah – Participatory Educational Research, 2020
In this research, the effect of an item's type and cognitive level on its difficulty index was investigated. The data source of the study consisted of the responses of the 12,535 students in the Turkey sample (6,079 and 6,456 students from the eighth and fourth grades, respectively) of TIMSS 2015. The responses were a total of 215 items at the eighth-grade…
Descriptors: Test Items, Difficulty Level, Cognitive Processes, Responses
Peer reviewed
Aryadoust, Vahid – Computer Assisted Language Learning, 2020
The aim of the present study is two-fold. First, it uses eye-tracking to investigate the dynamics of item reading, in both multiple-choice and matching items, before and during two hearings of listening passages in a computerized while-listening performance (WLP) test. Second, it investigates answer changing during the two hearings, which include…
Descriptors: Eye Movements, Test Items, Secondary School Students, Reading Processes
Peer reviewed
Akhavan Masoumi, Ghazal; Sadeghi, Karim – Language Testing in Asia, 2020
This study aimed to examine the effect of test format on test performance by comparing Multiple Choice (MC) and Constructed Response (CR) vocabulary tests in an EFL setting. This paper also investigated the function of gender in MC and CR vocabulary measures. To this end, five 20-item stem-equivalent vocabulary tests (CR, and 3-, 4-, 5-, and…
Descriptors: Language Tests, Test Items, English (Second Language), Second Language Learning
Peer reviewed
Eberharter, Kathrin; Kormos, Judit; Guggenbichler, Elisa; Ebner, Viktoria S.; Suzuki, Shungo; Moser-Frötscher, Doris; Konrad, Eva; Kremmel, Benjamin – Language Testing, 2023
In online environments, listening involves being able to pause or replay the recording as needed. Previous research indicates that control over the listening input could improve the measurement accuracy of listening assessment. Self-pacing also supports the second language (L2) comprehension processes of test-takers with specific learning…
Descriptors: Literacy, Native Language, Second Language Learning, Second Language Instruction
Al-Jarf, Reima – Online Submission, 2023
This article aims to give a comprehensive guide to planning and designing vocabulary tests, which includes identifying the skills to be covered by the test; outlining the course content covered; preparing a table of specifications that shows the skill, content topics, and number of questions allocated to each; and preparing the test instructions. The…
Descriptors: Vocabulary Development, Learning Processes, Test Construction, Course Content
Nebraska Department of Education, 2024
The Nebraska Student-Centered Assessment System (NSCAS) is a statewide assessment system that embodies Nebraska's holistic view of students and helps them prepare for success in postsecondary education, career, and civic life. It uses multiple measures throughout the year to provide educators and decision-makers at all levels with the insights…
Descriptors: Student Evaluation, Evaluation Methods, Elementary School Students, Middle School Students
Peer reviewed
Moon, Jung Aa; Keehner, Madeleine; Katz, Irvin R. – Educational Measurement: Issues and Practice, 2019
The current study investigated how item formats and their inherent affordances influence test-takers' cognition under uncertainty. Adult participants solved content-equivalent math items in multiple-selection multiple-choice and four alternative grid formats. The results indicated that participants' affirmative response tendency (i.e., judge the…
Descriptors: Affordances, Test Items, Test Format, Test Wiseness
Bronson Hui – ProQuest LLC, 2021
Vocabulary researchers have started expanding their assessment toolbox by incorporating timed tasks and psycholinguistic instruments (e.g., priming tasks) to gain insights into lexical development (e.g., Elgort, 2011; Godfroid, 2020b; Nakata & Elgort, 2020; Vandenberghe et al., 2021). These time-sensitive and implicit word measures differ…
Descriptors: Measures (Individuals), Construct Validity, Decision Making, Vocabulary Development
Peer reviewed
Woodcock, Stuart; Howard, Steven J.; Ehrich, John – School Psychology, 2020
Standardized testing is ubiquitous in educational assessment, but questions have been raised about the extent to which these test scores accurately reflect students' genuine knowledge and skills. To more rigorously investigate this issue, the current study employed a within-subject experimental design to examine item format effects on primary…
Descriptors: Elementary School Students, Grade 3, Test Items, Test Format
Peer reviewed
Liou, Pey-Yan; Bulut, Okan – Research in Science Education, 2020
The purpose of this study was to examine eighth-grade students' science performance in terms of two test design components: item format and cognitive domain. The Taiwanese data came from the 2011 administration of the Trends in International Mathematics and Science Study (TIMSS), one of the major international large-scale assessments…
Descriptors: Foreign Countries, Middle School Students, Grade 8, Science Achievement
Peer reviewed
Liu, Ou Lydia; Rios, Joseph A.; Heilman, Michael; Gerard, Libby; Linn, Marcia C. – Journal of Research in Science Teaching, 2016
Constructed response items can both measure the coherence of student ideas and serve as reflective experiences to strengthen instruction. We report on new automated scoring technologies that can reduce the cost and complexity of scoring constructed-response items. This study explored the accuracy of c-rater-ML, an automated scoring engine…
Descriptors: Science Tests, Scoring, Automation, Validity
Peer reviewed
Lindner, Marlit A.; Schult, Johannes; Mayer, Richard E. – Journal of Educational Psychology, 2022
This classroom experiment investigates the effects of adding representational pictures to multiple-choice and constructed-response test items to understand the role of the response format in the multimedia effect in testing. Participants were 575 fifth- and sixth-graders who answered 28 science test items, seven items in each of four experimental…
Descriptors: Elementary School Students, Grade 5, Grade 6, Multimedia Materials
Peer reviewed
PDF on ERIC
Ben Seipel; Sarah E. Carlson; Virginia Clinton-Lisell; Mark L. Davison; Patrick C. Kennedy – Grantee Submission, 2022
Originally designed for students in Grades 3 through 5, MOCCA (formerly the Multiple-choice Online Causal Comprehension Assessment) identifies students who struggle with comprehension and helps uncover why they struggle. There are many reasons why students might not comprehend what they read. They may struggle with decoding, or reading words…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Diagnostic Tests, Reading Tests
National Academies Press, 2022
The National Assessment of Educational Progress (NAEP), often called "The Nation's Report Card," is the largest nationally representative and continuing assessment of what students in public and private schools in the United States know and can do in various subjects, and it has provided policy makers and the public with invaluable…
Descriptors: Costs, Futures (of Society), National Competency Tests, Educational Trends
Peer reviewed
Chaparro, Erin A.; Stoolmiller, Mike; Park, Yonghan; Baker, Scott K.; Basaraba, Deni; Fien, Hank; Smith, Jean L. Mercier – Assessment for Effective Intervention, 2018
Progress monitoring has been adopted as an integral part of multi-tiered support systems. Oral reading fluency (ORF) is the most established assessment for progress-monitoring purposes. To generate valid trend lines or slopes, ORF passages must be of equivalent difficulty. Recently, however, evidence indicates that ORF passages are not equivalent,…
Descriptors: Reading Fluency, Reading Tests, Grade 2, Difficulty Level