Showing all 6 results
Peer reviewed
Katrin Klingbeil; Fabian Rösken; Bärbel Barzel; Florian Schacht; Kaye Stacey; Vicki Steinle; Daniel Thurm – ZDM: Mathematics Education, 2024
Assessing students' (mis)conceptions is a challenging task for teachers as well as for researchers. While individual assessment, for example through interviews, can provide deep insights into students' thinking, this is very time-consuming and therefore not feasible for whole classes or even larger settings. For those settings, automatically…
Descriptors: Multiple Choice Tests, Formative Evaluation, Mathematics Tests, Misconceptions
Peer reviewed
Deribo, Tobias; Goldhammer, Frank; Kroehne, Ulf – Educational and Psychological Measurement, 2023
As researchers in the social sciences, we are often interested in studying constructs that are not directly observable through assessments and questionnaires. But even in a well-designed and well-implemented study, rapid-guessing behavior may occur. Under rapid-guessing behavior, a task is briefly skimmed but not read or engaged with in depth. Hence, a…
Descriptors: Reaction Time, Guessing (Tests), Behavior Patterns, Bias
Peer reviewed
Krell, Moritz; Mathesius, Sabrina; van Driel, Jan; Vergara, Claudia; Krüger, Dirk – International Journal of Science Education, 2020
Scientific reasoning competencies are relevant science competencies, and the development of assessment instruments for them has therefore become an integral part of science education research. However, some authors have questioned the validity of the instruments available so far, since their psychometric quality has not…
Descriptors: Preservice Teachers, Science Teachers, Science Instruction, Psychometrics
Peer reviewed
Bauer, Daniel; Holzer, Matthias; Kopp, Veronika; Fischer, Martin R. – Advances in Health Sciences Education, 2011
To compare different scoring algorithms for Pick-N multiple correct answer multiple-choice (MC) exams with regard to test reliability, student performance, total item discrimination, and item difficulty. Data from six end-of-term exams of third-year medical students in internal medicine, administered from 2005 to 2008 at Munich University, were analysed (1,255 students,…
Descriptors: Medical Students, Test Reliability, Internal Medicine, Scoring
Peer reviewed
Sparfeldt, Jörn R.; Kimmel, Rumena; Löwenkamp, Lena; Steingräber, Antje; Rost, Detlef H. – Educational Assessment, 2012
Multiple-choice (MC) reading comprehension test items comprise three components: text passage, questions about the text, and MC answers. The construct validity of this format has been repeatedly criticized. In three between-subjects experiments, fourth graders (N₁ = 230, N₂ = 340, N₃ = 194) worked on three…
Descriptors: Test Items, Reading Comprehension, Construct Validity, Grade 4
Shafer, Mary Morello – 1992
Examining testing practices in other countries can help formulate ideas and clarify issues in establishing a national assessment system in the United States. Based on recent reports by the Office of Technology Assessment and the National Endowment for the Humanities, this digest provides an overview of the national examinations in several European…
Descriptors: British National Curriculum, College Admission, College Entrance Examinations, Comparative Analysis