Publication Date
| Date Range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 6 |
| Since 2007 (last 20 years) | 15 |
Descriptor
| Descriptor | Records |
| --- | --- |
| Evaluation Methods | 16 |
| Science Tests | 16 |
| Statistical Analysis | 16 |
| Science Achievement | 6 |
| College Science | 5 |
| Comparative Analysis | 5 |
| Scores | 5 |
| Foreign Countries | 4 |
| Item Response Theory | 4 |
| Multiple Choice Tests | 4 |
| Pretests Posttests | 4 |
Author
| Author | Records |
| --- | --- |
| Kelecioglu, Hülya | 2 |
| Abdulnour, Shahad | 1 |
| Bodner, George M. | 1 |
| Boyd, Cleo | 1 |
| Briggs, Derek C. | 1 |
| Campbell, Mark L. | 1 |
| Conoyer, Sarah J. | 1 |
| Cook Whitt, Katahdin | 1 |
| Davis, Ralph K. | 1 |
| DeBoer, George E. | 1 |
| Ford, Jeremy W. | 1 |
Publication Type
| Publication Type | Records |
| --- | --- |
| Journal Articles | 13 |
| Reports - Research | 13 |
| Dissertations/Theses -… | 2 |
| Guides - Non-Classroom | 1 |
| Information Analyses | 1 |
| Speeches/Meeting Papers | 1 |
| Tests/Questionnaires | 1 |
Education Level
| Education Level | Records |
| --- | --- |
| Elementary Education | 5 |
| Middle Schools | 5 |
| Grade 8 | 4 |
| Higher Education | 4 |
| Postsecondary Education | 4 |
| Secondary Education | 4 |
| Elementary Secondary Education | 3 |
| Grade 5 | 3 |
| Grade 6 | 3 |
| Junior High Schools | 3 |
| Grade 10 | 2 |
Audience
| Audience | Records |
| --- | --- |
| Practitioners | 1 |
| Researchers | 1 |
Assessments and Surveys
| Assessment | Records |
| --- | --- |
| Trends in International Mathematics and Science Study | 2 |
| Program for International Student Assessment | 1 |
Herrmann-Abell, Cari F.; Hardcastle, Joseph; DeBoer, George E. – Grantee Submission, 2018
We compared students' performance on a paper-based test (PBT) and three computer-based tests (CBTs). The three computer-based tests used different test navigation and answer selection features, allowing us to examine how these features affect student performance. The study sample consisted of 9,698 fourth through twelfth grade students from across…
Descriptors: Evaluation Methods, Tests, Computer Assisted Testing, Scores
Ford, Jeremy W.; Conoyer, Sarah J.; Lembke, Erica S.; Smith, R. Alex; Hosp, John L. – Assessment for Effective Intervention, 2018
In the present study, two types of curriculum-based measurement (CBM) tools in science, Vocabulary Matching (VM) and Statement Verification for Science (SV-S), a modified Sentence Verification Technique, were compared. Specifically, this study aimed to determine whether the format of information presented (i.e., SV-S vs. VM) produces differences…
Descriptors: Curriculum Based Assessment, Evaluation Methods, Measurement Techniques, Comparative Analysis
Undersander, Molly A.; Lund, Travis J.; Langdon, Laurie S.; Stains, Marilyne – Chemistry Education Research and Practice, 2017
The design of assessment tools is critical to accurately evaluate students' understanding of chemistry. Although extensive research has been conducted on various aspects of assessment tool design, few studies in chemistry have focused on the impact of the order in which questions are presented to students on the measurement of students'…
Descriptors: Test Construction, Scientific Concepts, Concept Formation, Science Education
Todd, Amber; Romine, William L.; Cook Whitt, Katahdin – Science Education, 2017
We describe the development, validation, and use of the "Learning Progression-Based Assessment of Modern Genetics" (LPA-MG) in a high school biology context. Items were constructed based on a current learning progression framework for genetics (Shea & Duncan, 2013; Todd & Kenyon, 2015). The 34-item instrument, which was tied to…
Descriptors: Genetics, Science Instruction, High School Students, Evaluation Methods
Liou, Pey-Yan; Hung, Yi-Chen – International Journal of Science and Mathematics Education, 2015
We conducted a methodological review of articles using the Programme for International Student Assessment (PISA) or Trends in International Mathematics and Science Study (TIMSS) data published by the SSCI-indexed science education journals, such as the "International Journal of Science and Mathematics Education," the "International…
Descriptors: Literature Reviews, International Assessment, Science Achievement, Elementary Secondary Education
Campbell, Mark L. – Journal of Chemical Education, 2015
Multiple-choice exams, while widely used, are necessarily imprecise because guessing contributes to the final student score. This past year at the United States Naval Academy, the construction and grading scheme for the department-wide general chemistry multiple-choice exams were revised with the goal of decreasing the contribution of…
Descriptors: Multiple Choice Tests, Chemistry, Science Tests, Guessing (Tests)
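As background for the guessing contribution described in this abstract, the classical correction-for-guessing (formula-scoring) rule is sketched below; this is a standard textbook formula, not necessarily the revised scheme the study adopted. Here R is the number of correct answers, W the number of wrong answers, and k the number of options per item:

S = R - \frac{W}{k - 1}

Under purely random guessing a student expects to answer 1 of every k guessed items correctly and k - 1 incorrectly, so the expected value of W/(k-1) cancels the expected number of lucky hits and the expected score from guessing alone is zero.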
Kalkan, Ömür Kaya; Kelecioglu, Hülya – Educational Sciences: Theory and Practice, 2016
Linear factor analysis models used to examine constructs underlying the responses are not very suitable for dichotomous or polytomous response formats. The associated problems cannot be eliminated by substituting polychoric or tetrachoric correlations for the Pearson correlation. Therefore, we considered parameters obtained from the NOHARM and FACTOR…
Descriptors: Sample Size, Nonparametric Statistics, Factor Analysis, Correlation
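For reference, the tetrachoric correlation mentioned in this abstract treats each dichotomous response as a normally distributed latent variable cut at a threshold; a standard statement of that model (not specific to the NOHARM or FACTOR programs) is:

X_i = 1 \iff X_i^{*} > \tau_i, \qquad (X_i^{*}, X_j^{*}) \sim N\!\left(\mathbf{0}, \begin{pmatrix} 1 & \rho_{ij} \\ \rho_{ij} & 1 \end{pmatrix}\right)

The tetrachoric correlation estimates the latent \rho_{ij}; applying linear factor analysis directly to Pearson correlations of the 0/1 responses ignores this thresholding, one source of the unsuitability noted above.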
Thummaphan, Phonraphee – ProQuest LLC, 2017
The present study aimed to represent the innovative assessments that support students' learning in STEM education using the integrative framework for Cognitive Diagnostic Modeling (CDM). This framework is based on three components: cognition, observation, and interpretation (National Research Council, 2001). Specifically, this dissertation…
Descriptors: STEM Education, Cognitive Processes, Observation, Psychometrics
Knierim, Katherine; Turner, Henry; Davis, Ralph K. – Journal of Geoscience Education, 2015
Two-stage exams--where students complete part one of an exam closed book and independently, and part two open book, either independently (two-stage independent, or TS-I) or collaboratively (two-stage collaborative, or TS-C)--provide a means to include collaborative learning in summative assessments. Collaborative learning has been shown to…
Descriptors: Earth Science, Science Tests, Cooperative Learning, Summative Evaluation
Öztürk-Gübes, Nese; Kelecioglu, Hülya – Educational Sciences: Theory and Practice, 2016
The purpose of this study was to examine the impact of dimensionality, common-item set format, and different scale linking methods on preserving equity property with mixed-format test equating. Item response theory (IRT) true-score equating (TSE) and IRT observed-score equating (OSE) methods were used under common-item nonequivalent groups design.…
Descriptors: Test Format, Item Response Theory, True Scores, Equated Scores
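As background for the equating methods named in this abstract, IRT true-score equating links two forms through their test characteristic curves; a standard formulation (written for a generic IRT model, not the specific design in the study) is:

\tau_X(\theta) = \sum_{i \in X} P_i(\theta), \qquad \tau_Y(\theta) = \sum_{j \in Y} P_j(\theta)

A true score t on form X is equated to form Y by solving \tau_X(\theta^{*}) = t for \theta^{*} and reporting \tau_Y(\theta^{*}); observed-score equating instead equates the model-implied observed-score distributions of the two forms.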
Lin, Chen-Yu; Wang, Tzu-Hua – EURASIA Journal of Mathematics, Science & Technology Education, 2017
This research explored how different models of Web-based dynamic assessment in remedial teaching improved junior high school student learning achievement and their misconceptions about the topic of "Weather and Climate." This research adopted a quasi-experimental design. A total of 58 7th graders participated in this research.…
Descriptors: Program Implementation, Computer Assisted Testing, Student Evaluation, Evaluation Methods
deBraga, Michael; Boyd, Cleo; Abdulnour, Shahad – Teaching & Learning Inquiry, 2015
A primary goal of university instruction is the students' demonstration of improved, highly developed critical thinking (CT) skills. However, how do faculty encourage CT, with its potential concomitant increase in student workload, without negatively impacting student perceptions of the course? In this investigation, an advanced biology course is…
Descriptors: Scholarship, Instruction, Learning, Critical Thinking
Strader, Douglas A. – ProQuest LLC, 2012
There are many advantages supporting the use of computers as an alternate mode of delivery for high stakes testing: cost savings, increased test security, flexibility in test administrations, innovations in items, and reduced scoring time. The purpose of this study was to determine if the use of computers as the mode of delivery had any…
Descriptors: Computer Assisted Testing, Evaluation Methods, Educational Technology, Scores
Morell, Linda; Tan, Rachael Jin Bee – Journal of Mixed Methods Research, 2009
Researchers in the areas of psychology and education strive to understand the intersections among validity, educational measurement, and cognitive theory. Guided by a mixed model conceptual framework, this study investigates how respondents' opinions inform the validation argument. Validity evidence for a science assessment was collected through…
Descriptors: Validity, Educational Psychology, Measurement, Epistemology
Briggs, Derek C. – Applied Measurement in Education, 2008
This article illustrates the use of an explanatory item response modeling (EIRM) approach in the context of measuring group differences in science achievement. The distinction between item response models and EIRMs, recently elaborated by De Boeck and Wilson (2004), is presented within the statistical framework of generalized linear mixed models.…
Descriptors: Science Achievement, Science Tests, Measurement, Error of Measurement
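To illustrate casting an item response model as a generalized linear mixed model (a generic sketch, not necessarily the EIRM specification used in the article), a Rasch model with a latent regression of ability on group membership can be written as:

\operatorname{logit} P(Y_{pi} = 1) = \theta_p - \beta_i, \qquad \theta_p = \gamma z_p + \varepsilon_p, \quad \varepsilon_p \sim N(0, \sigma^2)

Here \beta_i is a fixed item difficulty, \varepsilon_p is a person random effect, z_p codes group membership, and \gamma is the group difference in achievement, estimated jointly with the measurement model.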