Sibic, Okan; Sesen, Burcin Acar – International Journal of Assessment Tools in Education, 2022
One of the main goals of science education is to help students develop science process skills. Thus, it is important to measure whether or not students have gained those skills. For this purpose, various tests have been produced and used in various studies. This study aims to examine science process skills tests which have been used in the theses produced…
Descriptors: Foreign Countries, Science Education, Science Process Skills, Masters Theses
Vida, Leonardo J.; Bolsinova, Maria; Brinkhuis, Matthieu J. S. – International Educational Data Mining Society, 2021
The quality of exams drives the test-taking behavior of examinees and is a proxy for the quality of teaching. As most university exams have strict time limits, and speededness is an important measure of the cognitive state of examinees, speededness might be used to assess the connection between exams' quality and examinees' performance. The practice of…
Descriptors: Accuracy, Test Items, Tests, Student Behavior
Herrmann-Abell, Cari F.; Hardcastle, Joseph; DeBoer, George E. – Grantee Submission, 2022
As implementation of the "Next Generation Science Standards" moves forward, there is a need for new assessments that can measure students' integrated three-dimensional science learning. The National Research Council has suggested that these assessments be multicomponent tasks that utilize a combination of item formats including…
Descriptors: Multiple Choice Tests, Conditioning, Test Items, Item Response Theory
Ashish Gurung; Kirk Vanacore; Andrew A. McReynolds; Korinn S. Ostrow; Eamon S. Worden; Adam C. Sales; Neil T. Heffernan – Grantee Submission, 2024
Learning experience designers consistently balance the trade-off between open-ended and closed-ended activities. The growth and scalability of Computer Based Learning Platforms (CBLPs) have only magnified the importance of these design trade-offs. CBLPs often utilize closed-ended activities (i.e., multiple-choice questions [MCQs]) due to feasibility…
Descriptors: Multiple Choice Tests, Testing, Test Format, Computer Assisted Testing
Baral, Sami; Botelho, Anthony; Santhanam, Abhishek; Gurung, Ashish; Cheng, Li; Heffernan, Neil – International Educational Data Mining Society, 2023
Teachers often rely on a range of open-ended problems to assess students' understanding of mathematical concepts. Beyond traditional conceptions of open-ended student work, commonly in the form of textual short-answer or essay responses, figures, tables, number lines, graphs, and pictographs are other examples of open-ended…
Descriptors: Mathematics Instruction, Mathematical Concepts, Problem Solving, Test Format
Kim, Dong-In; Julian, Marc; Hermann, Pam – Online Submission, 2022
In test equating, one critical property is group invariance, which indicates that the equating function used to convert performance on each alternate form to the reporting scale should be the same across subgroups. To mitigate the impact of disrupted learning on the item parameters during the COVID-19 pandemic, a…
Descriptors: COVID-19, Pandemics, Test Format, Equated Scores
Olney, Andrew M. – Grantee Submission, 2021
In contrast to simple feedback, which provides students with the correct answer, elaborated feedback provides an explanation of the correct answer with respect to the student's error. Elaborated feedback is thus a challenge for AI in education systems because it requires dynamic explanations, which traditionally require logical reasoning and…
Descriptors: Feedback (Response), Error Patterns, Artificial Intelligence, Test Format
Cari F. Herrmann Abell – Grantee Submission, 2021
Over the last twenty-five years, the discussion surrounding validity evidence has shifted in both language and scope, from the work of Messick and Kane to the updated Standards for Educational and Psychological Testing. However, these discussions have not necessarily focused on best practices for different types of instruments or assessments, taking…
Descriptors: Test Format, Measurement Techniques, Student Evaluation, Rating Scales
Rogers, Angela – Mathematics Education Research Group of Australasia, 2021
Test developers are continually exploring the possibilities Computer Based Assessment (CBA) offers the Mathematics domain. This paper describes the trial of the Place Value Assessment Tool (PVAT) and its online equivalent, the PVAT-O. Both tests were administered using a counterbalanced research design to 253 Year 3-6 students across nine classes…
Descriptors: Mathematics Tests, Computer Assisted Testing, Number Concepts, Elementary School Students
Hildenbrand, Lena; Wiley, Jennifer – Grantee Submission, 2021
Many studies have demonstrated that testing students on to-be-learned materials can be an effective learning activity. However, past studies have also shown that some practice test formats are more effective than others. Open-ended recall or short answer practice tests may be effective because the questions prompt deeper processing as students…
Descriptors: Test Format, Outcomes of Education, Cognitive Processes, Learning Activities