Ben Backes; James Cowan – Grantee Submission, 2024
We investigate two research questions using a recent statewide transition from paper to computer-based testing: first, the extent to which test mode effects found in prior studies can be eliminated in large-scale administration; and second, the degree to which online and paper assessments offer different information about underlying student…
Descriptors: Computer Assisted Testing, Test Format, Differences, Academic Achievement
Herrmann-Abell, Cari F.; Hardcastle, Joseph; DeBoer, George E. – Grantee Submission, 2022
As implementation of the "Next Generation Science Standards" moves forward, there is a need for new assessments that can measure students' integrated three-dimensional science learning. The National Research Council has suggested that these assessments be multicomponent tasks that utilize a combination of item formats including…
Descriptors: Multiple Choice Tests, Conditioning, Test Items, Item Response Theory
Ashish Gurung; Kirk Vanacore; Andrew A. McReynolds; Korinn S. Ostrow; Eamon S. Worden; Adam C. Sales; Neil T. Heffernan – Grantee Submission, 2024
Learning experience designers consistently balance the trade-off between open and close-ended activities. The growth and scalability of Computer Based Learning Platforms (CBLPs) have only magnified the importance of these design trade-offs. CBLPs often utilize close-ended activities (i.e. Multiple-Choice Questions [MCQs]) due to feasibility…
Descriptors: Multiple Choice Tests, Testing, Test Format, Computer Assisted Testing
Megumi E. Takada; Christopher J. Lemons; Lakshmi Balasubramanian; Bonnie T. Hallman; Stephanie Al Otaiba; Cynthia S. Puranik – Grantee Submission, 2023
There have been a handful of studies on kindergarteners' motivational beliefs about writing, yet measuring these beliefs in young children continues to pose a set of challenges. The purpose of this exploratory, mixed-methods study was to examine how kindergarteners understand and respond to different assessment formats designed to capture their…
Descriptors: Kindergarten, Young Children, Student Attitudes, Student Motivation
Olney, Andrew M. – Grantee Submission, 2021
In contrast to simple feedback, which provides students with the correct answer, elaborated feedback provides an explanation of the correct answer with respect to the student's error. Elaborated feedback is thus a challenge for AI in education systems because it requires dynamic explanations, which traditionally require logical reasoning and…
Descriptors: Feedback (Response), Error Patterns, Artificial Intelligence, Test Format
Cari F. Herrmann Abell – Grantee Submission, 2021
In the last twenty-five years, the discussion surrounding validity evidence has shifted both in language and scope, from the work of Messick and Kane to the updated Standards for Educational and Psychological Testing. However, these discussions haven't necessarily focused on best practices for different types of instruments or assessments, taking…
Descriptors: Test Format, Measurement Techniques, Student Evaluation, Rating Scales
Stephen G. Sireci; Javier Suárez-Álvarez; April L. Zenisky; Maria Elena Oliveri – Grantee Submission, 2024
The goal in personalized assessment is to best fit the needs of each individual test taker, given the assessment purposes. Design-In-Real-Time (DIRTy) assessment reflects the progressive evolution in testing from a single test, to an adaptive test, to an adaptive assessment "system." In this paper, we lay the foundation for DIRTy…
Descriptors: Educational Assessment, Student Needs, Test Format, Test Construction
Wang, Zuowei; O'Reilly, Tenaha; Sabatini, John; McCarthy, Kathryn S.; McNamara, Danielle S. – Grantee Submission, 2021
We compared high school students' performance in a traditional comprehension assessment requiring them to identify key information and draw inferences from single texts, and a scenario-based assessment (SBA) requiring them to integrate, evaluate and apply information across multiple sources. Both assessments focused on a non-academic topic.…
Descriptors: Comparative Analysis, High School Students, Inferences, Reading Tests
Ben Seipel; Sarah E. Carlson; Virginia Clinton-Lisell; Mark L. Davison; Patrick C. Kennedy – Grantee Submission, 2022
Originally designed for students in Grades 3 through 5, MOCCA (formerly the Multiple-choice Online Causal Comprehension Assessment), identifies students who struggle with comprehension, and helps uncover why they struggle. There are many reasons why students might not comprehend what they read. They may struggle with decoding, or reading words…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Diagnostic Tests, Reading Tests
Trina D. Spencer; Marilyn S. Thompson; Douglas B. Petersen; Yixing Liu; M. Adelaida Restrepo – Grantee Submission, 2023
For young Spanish-speaking children entering U.S. schools, it is imperative that educators foster growth in the home language and in the language of instruction to the fullest extent possible. Monitoring language development over time is crucial for promoting language development because it allows educators to individualize student instruction.…
Descriptors: Spanish Speaking, English (Second Language), Second Language Learning, Native Language
Hildenbrand, Lena; Wiley, Jennifer – Grantee Submission, 2021
Many studies have demonstrated that testing students on to-be-learned materials can be an effective learning activity. However, past studies have also shown that some practice test formats are more effective than others. Open-ended recall or short answer practice tests may be effective because the questions prompt deeper processing as students…
Descriptors: Test Format, Outcomes of Education, Cognitive Processes, Learning Activities
Peter Organisciak; Michele Newman; David Eby; Selcuk Acar; Denis Dumas – Grantee Submission, 2023
Purpose: Most educational assessments tend to be constructed in a close-ended format, which is easier to score consistently and more affordable. However, recent work has leveraged computation text methods from the information sciences to make open-ended measurement more effective and reliable for older students. This study asks whether such text…
Descriptors: Learning Analytics, Child Language, Semantics, Age Differences