Showing 1 to 15 of 19 results
Peer reviewed
Ben Backes; James Cowan – Grantee Submission, 2024
We investigate two research questions using a recent statewide transition from paper to computer-based testing: first, the extent to which test mode effects found in prior studies can be eliminated in large-scale administration; and second, the degree to which online and paper assessments offer different information about underlying student…
Descriptors: Computer Assisted Testing, Test Format, Differences, Academic Achievement
Peer reviewed
Herrmann-Abell, Cari F.; Hardcastle, Joseph; DeBoer, George E. – Grantee Submission, 2022
As implementation of the "Next Generation Science Standards" moves forward, there is a need for new assessments that can measure students' integrated three-dimensional science learning. The National Research Council has suggested that these assessments be multicomponent tasks that utilize a combination of item formats including…
Descriptors: Multiple Choice Tests, Conditioning, Test Items, Item Response Theory
Peer reviewed
Ashish Gurung; Kirk Vanacore; Andrew A. McReynolds; Korinn S. Ostrow; Eamon S. Worden; Adam C. Sales; Neil T. Heffernan – Grantee Submission, 2024
Learning experience designers consistently balance the trade-off between open-ended and close-ended activities. The growth and scalability of Computer-Based Learning Platforms (CBLPs) have only magnified the importance of these design trade-offs. CBLPs often utilize close-ended activities (e.g., Multiple-Choice Questions [MCQs]) due to feasibility…
Descriptors: Multiple Choice Tests, Testing, Test Format, Computer Assisted Testing
Megumi E. Takada; Christopher J. Lemons; Lakshmi Balasubramanian; Bonnie T. Hallman; Stephanie Al Otaiba; Cynthia S. Puranik – Grantee Submission, 2023
There have been a handful of studies on kindergarteners' motivational beliefs about writing, yet measuring these beliefs in young children continues to pose a set of challenges. The purpose of this exploratory, mixed-methods study was to examine how kindergarteners understand and respond to different assessment formats designed to capture their…
Descriptors: Kindergarten, Young Children, Student Attitudes, Student Motivation
Olney, Andrew M. – Grantee Submission, 2021
In contrast to simple feedback, which provides students with the correct answer, elaborated feedback provides an explanation of the correct answer with respect to the student's error. Elaborated feedback is thus a challenge for AI in education systems because it requires dynamic explanations, which traditionally require logical reasoning and…
Descriptors: Feedback (Response), Error Patterns, Artificial Intelligence, Test Format
Peer reviewed
Cari F. Herrmann Abell – Grantee Submission, 2021
In the last twenty-five years, the discussion surrounding validity evidence has shifted in both language and scope, from the work of Messick and Kane to the updated "Standards for Educational and Psychological Testing." However, these discussions have not necessarily focused on best practices for different types of instruments or assessments, taking…
Descriptors: Test Format, Measurement Techniques, Student Evaluation, Rating Scales
Peer reviewed
Stephen G. Sireci; Javier Suárez-Álvarez; April L. Zenisky; Maria Elena Oliveri – Grantee Submission, 2024
The goal in personalized assessment is to best fit the needs of each individual test taker, given the assessment purposes. Design-In-Real-Time (DIRTy) assessment reflects the progressive evolution in testing from a single test, to an adaptive test, to an adaptive assessment "system." In this paper, we lay the foundation for DIRTy…
Descriptors: Educational Assessment, Student Needs, Test Format, Test Construction
Peer reviewed
Herrmann-Abell, Cari F.; Hardcastle, Joseph; DeBoer, George E. – Grantee Submission, 2019
The "Next Generation Science Standards" calls for new assessments that measure students' integrated three-dimensional science learning. The National Research Council has suggested that these assessments utilize a combination of item formats including constructed-response and multiple-choice. In this study, students were randomly assigned…
Descriptors: Science Tests, Multiple Choice Tests, Test Format, Test Items
Wang, Zuowei; O'Reilly, Tenaha; Sabatini, John; McCarthy, Kathryn S.; McNamara, Danielle S. – Grantee Submission, 2021
We compared high school students' performance in a traditional comprehension assessment requiring them to identify key information and draw inferences from single texts, and a scenario-based assessment (SBA) requiring them to integrate, evaluate and apply information across multiple sources. Both assessments focused on a non-academic topic.…
Descriptors: Comparative Analysis, High School Students, Inferences, Reading Tests
Peer reviewed
Ben Seipel; Sarah E. Carlson; Virginia Clinton-Lisell; Mark L. Davison; Patrick C. Kennedy – Grantee Submission, 2022
Originally designed for students in Grades 3 through 5, MOCCA (formerly the Multiple-choice Online Causal Comprehension Assessment) identifies students who struggle with comprehension and helps uncover why they struggle. There are many reasons why students might not comprehend what they read. They may struggle with decoding, or reading words…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Diagnostic Tests, Reading Tests
Sinharay, Sandip – Grantee Submission, 2018
Tatsuoka (1984) suggested several extended caution indices and their standardized versions that have been used as person-fit statistics by researchers such as Drasgow, Levine, and McLaughlin (1987), Glas and Meijer (2003), and Molenaar and Hoijtink (1990). However, these indices are only defined for tests with dichotomous items. This paper extends…
Descriptors: Test Format, Goodness of Fit, Item Response Theory, Error Patterns
Peer reviewed
Lang, David; Stenhaug, Ben; Kizilcec, Rene – Grantee Submission, 2019
This research evaluates the psychometric properties of short-answer response items under a variety of grading rules in the context of a mobile learning platform in Africa. This work has three main findings. First, we introduce the concept of a differential device function (DDF), a type of differential item function that stems from the device a…
Descriptors: Foreign Countries, Psychometrics, Test Items, Test Format
Trina D. Spencer; Marilyn S. Thompson; Douglas B. Petersen; Yixing Liu; M. Adelaida Restrepo – Grantee Submission, 2023
For young Spanish-speaking children entering U.S. schools, it is imperative that educators foster growth in the home language and in the language of instruction to the fullest extent possible. Monitoring language development over time is crucial because it allows educators to individualize student instruction.…
Descriptors: Spanish Speaking, English (Second Language), Second Language Learning, Native Language
Hildenbrand, Lena; Wiley, Jennifer – Grantee Submission, 2021
Many studies have demonstrated that testing students on to-be-learned materials can be an effective learning activity. However, past studies have also shown that some practice test formats are more effective than others. Open-ended recall or short answer practice tests may be effective because the questions prompt deeper processing as students…
Descriptors: Test Format, Outcomes of Education, Cognitive Processes, Learning Activities
Peter Organisciak; Michele Newman; David Eby; Selcuk Acar; Denis Dumas – Grantee Submission, 2023
Purpose: Most educational assessments tend to be constructed in a close-ended format, which is easier to score consistently and more affordable. However, recent work has leveraged computational text methods from the information sciences to make open-ended measurement more effective and reliable for older students. This study asks whether such text…
Descriptors: Learning Analytics, Child Language, Semantics, Age Differences