Showing 1 to 15 of 21 results
Peer reviewed
Lazenby, Katherine; Balabanoff, Morgan E.; Becker, Nicole M.; Moon, Alena; Barbera, Jack – Journal of Chemical Education, 2021
Identifying effective methods of assessment and developing robust assessments are key areas of research in chemistry education. This research is needed to evaluate instructional innovations and curricular reform. In this primer, we advocate for the use of a type of assessment, ordered multiple-choice (OMC), across postsecondary chemistry. OMC…
Descriptors: Test Construction, Multiple Choice Tests, College Science, STEM Education
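Ordered multiple-choice items tie each response option to a level of an underlying construct map, so a response earns ordinal partial credit rather than a right/wrong mark. A minimal scoring sketch under that assumption; the items, options, and level assignments below are invented for illustration and are not taken from the article:

    # Hypothetical ordered multiple-choice (OMC) scoring sketch.
    # Each option maps to an ordinal level of understanding (0 = lowest),
    # instead of being marked simply right or wrong.
    ITEM_LEVELS = {
        # item_id -> {option: construct-map level}
        "q1": {"A": 0, "B": 1, "C": 3, "D": 2},
        "q2": {"A": 2, "B": 0, "C": 1, "D": 3},
    }

    def omc_score(responses: dict[str, str]) -> dict[str, int]:
        """Return the ordinal level credited for each answered item."""
        return {item: ITEM_LEVELS[item][choice] for item, choice in responses.items()}

    if __name__ == "__main__":
        student = {"q1": "C", "q2": "A"}
        print(omc_score(student))                 # {'q1': 3, 'q2': 2}
        print(sum(omc_score(student).values()))   # total ordinal credit: 5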
Peer reviewed
Yi-Chun Chen; Hsin-Kai Wu; Ching-Ting Hsin – Research in Science & Technological Education, 2024
Background and Purpose: As a growing number of instructional units have been developed to promote young children's scientific and engineering practices (SEPs), understanding how to evaluate and assess children's SEPs is imperative. However, paper-and-pencil assessments would not be suitable for young children because of their limited reading and…
Descriptors: Science Education, Engineering Education, Elementary School Students, Middle School Students
Peer reviewed
Smith, Trevor I.; Bendjilali, Nasrine – Physical Review Physics Education Research, 2022
Several recent studies have employed item response theory (IRT) to rank incorrect responses to commonly used research-based multiple-choice assessments. These studies use Bock's nominal response model (NRM) for applying IRT to categorical (nondichotomous) data, but the response rankings only utilize half of the parameters estimated by the model.…
Descriptors: Item Response Theory, Test Items, Multiple Choice Tests, Science Tests
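For context, Bock's nominal response model referenced above gives each response category of an item its own slope and intercept, and models the probability that an examinee of ability \theta selects category k of an m-category item as (standard form of the model; the notation is not taken from the paper):

    P(X = k \mid \theta) = \frac{\exp(a_k \theta + c_k)}{\sum_{h=1}^{m} \exp(a_h \theta + c_h)}

Rankings built from the slopes a_k alone would draw on only half of the 2m parameters the model estimates, consistent with the gap the abstract highlights.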
Peer reviewed
Kanli, Uygar; Ilican, Ömer – Journal of Turkish Science Education, 2020
This study examined students' achievement on the concepts of light and shadow, as measured in different assessment formats, according to learning style and gender. A correlational survey model was used. The sample consisted of 10th grade (16 years) high school students (n=815) from six different types of high…
Descriptors: Foreign Countries, Scientific Concepts, Cognitive Style, Gender Differences
Peer reviewed
Briggs, Derek C.; Circi, Ruhan – International Journal of Testing, 2017
Artificial Neural Networks (ANNs) have been proposed as a promising approach for the classification of students into different levels of a psychological attribute hierarchy. Unfortunately, because such classifications typically rely upon internally produced item response patterns that have not been externally validated, the instability of ANN…
Descriptors: Artificial Intelligence, Classification, Student Evaluation, Tests
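As a concrete picture of the approach under critique, the sketch below fits a small neural network that maps dichotomous item-response patterns to levels of a hypothetical attribute hierarchy; the data, labels, and architecture are simulated for illustration only and do not come from the study:

    # Illustrative only: classify item-response patterns into levels of a
    # hypothetical attribute hierarchy with a small neural network.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(200, 10))   # 200 simulated 10-item 0/1 response patterns
    y = X.sum(axis=1) // 4                   # fake "levels" 0-2 derived from raw score

    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    clf.fit(X, y)
    print(clf.predict(X[:5]))                # levels assigned to the first five patterns

Refitting with a different random seed can reassign individual respondents to different levels, which is the instability the abstract warns about when classifications are not externally validated.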
Peer reviewed
Campbell, Mark L. – Journal of Chemical Education, 2015
Multiple-choice exams, while widely used, are necessarily imprecise because guessing contributes to the final student score. This past year at the United States Naval Academy, the construction and grading scheme for the department-wide general chemistry multiple-choice exams were revised with the goal of decreasing the contribution of…
Descriptors: Multiple Choice Tests, Chemistry, Science Tests, Guessing (Tests)
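The standard correction for guessing ("formula scoring") behind such revisions is worth stating; it is a textbook result, not necessarily the Academy's exact scheme. With k options per item, R right answers, and W wrong answers, the corrected score

    S = R - \frac{W}{k - 1}

has an expected value of zero under blind guessing: a guess is right with probability 1/k and wrong with probability (k-1)/k, so the expected contribution per guessed item is 1/k - \frac{k-1}{k} \cdot \frac{1}{k-1} = 0.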
Peer reviewed
Kalkan, Ömür Kaya; Kelecioglu, Hülya – Educational Sciences: Theory and Practice, 2016
Linear factor analysis models used to examine the constructs underlying responses are not very suitable for dichotomous or polytomous response formats. The associated problems cannot be eliminated by substituting polychoric or tetrachoric correlations for the Pearson correlation. Therefore, we considered parameters obtained from the NOHARM and FACTOR…
Descriptors: Sample Size, Nonparametric Statistics, Factor Analysis, Correlation
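For reference, the tetrachoric correlation mentioned above models two dichotomous item scores as thresholded bivariate-normal variables: with marginal success proportions p_1 and p_2, thresholds \tau_i = \Phi^{-1}(1 - p_i), and joint proportion p_{11} of respondents answering both items correctly, the tetrachoric \rho is the value solving (standard definition)

    \Pr(Z_1 > \tau_1,\ Z_2 > \tau_2;\ \rho) = p_{11}

A Pearson correlation computed directly on the 0/1 scores (the phi coefficient) is attenuated relative to this latent \rho; the abstract's point is that even substituting tetrachoric or polychoric estimates does not remove all of the problems with linear factor models.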
Peer reviewed
Wilcox, Bethany R.; Pollock, Steven J. – Physical Review Special Topics - Physics Education Research, 2015
Standardized conceptual assessment represents a widely used tool for educational researchers interested in student learning within the standard undergraduate physics curriculum. For example, these assessments are often used to measure student learning across educational contexts and instructional strategies. However, to support the large-scale…
Descriptors: Science Instruction, Scientific Concepts, College Science, Physics
Peer reviewed
Kachchaf, Rachel; Noble, Tracy; Rosebery, Ann; Wang, Yang; Warren, Beth; O'Connor, Mary Catherine – Grantee Submission, 2014
Most research on linguistic features of test items negatively impacting English language learners' (ELLs') performance has focused on lexical and syntactic features, rather than discourse features that operate at the level of the whole item. This mixed-methods study identified two discourse features in 162 multiple-choice items on a standardized…
Descriptors: English Language Learners, Science Tests, Test Items, Discourse Analysis
Peer reviewed
deBraga, Michael; Boyd, Cleo; Abdulnour, Shahad – Teaching & Learning Inquiry, 2015
A primary goal of university instruction is students' demonstration of improved, highly developed critical thinking (CT) skills. However, how do faculty encourage CT, with its potential concomitant increase in student workload, without negatively affecting student perceptions of the course? In this investigation, an advanced biology course is…
Descriptors: Scholarship, Instruction, Learning, Critical Thinking
Peer reviewed
Barniol, Pablo; Zavala, Genaro – Physical Review Special Topics - Physics Education Research, 2014
In this article we discuss the findings of our research on students' understanding of vector concepts in problems without physical context. First, we develop a complete taxonomy of the most frequent errors made by university students when learning vector concepts. This study is based on the results of several test administrations of open-ended…
Descriptors: Multiple Choice Tests, Geometric Concepts, Algebra, Psychometrics
Peer reviewed
Park, Jooyong – British Journal of Educational Technology, 2010
The newly developed computerized Constructive Multiple-choice Testing system is introduced. The system combines short answer (SA) and multiple-choice (MC) formats by asking examinees to respond to the same question twice, first in the SA format, and then in the MC format. This manipulation was employed to collect information about the two…
Descriptors: Grade 5, Evaluation Methods, Multiple Choice Tests, Scores
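A minimal sketch of the two-pass response logic such a system might use: each item is answered first in short-answer form and then in multiple-choice form, and the pair of outcomes distinguishes unaided recall from mere recognition. The function and category labels below are illustrative assumptions, not the system's actual design:

    # Hypothetical two-pass (short-answer then multiple-choice) classification.
    def classify(sa_correct: bool, mc_correct: bool) -> str:
        """Label what the SA/MC outcome pair suggests about the examinee."""
        if sa_correct and mc_correct:
            return "recall"        # produced the answer unaided
        if mc_correct:
            return "recognition"   # only recognized it among the options
        if sa_correct:
            return "inconsistent"  # rare: right unaided, wrong when cued
        return "no-knowledge"

    print(classify(sa_correct=False, mc_correct=True))  # recognition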
Way, Walter D.; Murphy, Daniel; Powers, Sonya; Keng, Leslie – Pearson, 2012
Significant momentum exists for next-generation assessments to increasingly utilize technology to develop and deliver performance-based assessments. Many traditional challenges with this assessment approach still apply, including psychometric concerns related to performance-based tasks (PBTs), which include low reliability, efficiency of…
Descriptors: Task Analysis, Performance Based Assessment, Technology Uses in Education, Models
Peer reviewed
Alexander, Cara J.; Crescini, Weronika M.; Juskewitch, Justin E.; Lachman, Nirusha; Pawlina, Wojciech – Anatomical Sciences Education, 2009
The goals of our study were to determine the predictive value and usability of an audience response system (ARS) as a knowledge assessment tool in an undergraduate medical curriculum. Over a three year period (2006-2008), data were collected from first year didactic blocks in Genetics/Histology and Anatomy/Radiology (n = 42-50 per class). During…
Descriptors: Feedback (Response), Medical Education, Audience Response, Genetics
Peer reviewed
Huffman, Douglas; Heller, Patricia – Physics Teacher, 1995
The Force Concept Inventory (FCI) is a 29-question, multiple-choice test designed to assess students' Newtonian and non-Newtonian conceptions of force. Presents an analysis of FCI results as one way to determine what the inventory actually measures. (LZ)
Descriptors: Evaluation Methods, Force, Multiple Choice Tests, Physics