Showing 1 to 15 of 35 results
Peer reviewed
PDF on ERIC
Anatri Desstya; Ika Candra Sayekti; Muhammad Abduh; Sukartono – Journal of Turkish Science Education, 2025
This study aimed to develop a standardised instrument for diagnosing science misconceptions in primary school children. Following a developmental research approach using the 4-D model (Define, Design, Develop, Disseminate), 100 four-tier multiple choice items were constructed. Content validity was established through expert evaluation by six…
Descriptors: Test Construction, Science Tests, Science Instruction, Diagnostic Tests
Peer reviewed
PDF on ERIC
Laura S. Kabiri; Catherine R. Barber; Thomas M. McCabe; Augusto X. Rodriguez – HAPS Educator, 2024
Multiple-choice questions (MCQs) are commonly used in undergraduate introductory science, technology, engineering, and mathematics (STEM) courses, and substantial evidence supports the use of student-created questions to promote learning. However, research on student-created MCQ exams as an assessment method is more limited, and no studies have…
Descriptors: Physiology, Science Tests, Student Developed Materials, Test Construction
Peer reviewed
Direct link
Little, Jeri L.; Frickey, Elise A.; Fung, Alexandra K. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2019
Taking a test improves memory for that tested information, a finding referred to as the testing effect. Multiple-choice tests tend to produce smaller testing effects than do cued-recall tests, and this result is largely attributed to the different processing that the two formats are assumed to induce. Specifically, it is generally assumed that the…
Descriptors: Multiple Choice Tests, Memory, Cognitive Processes, Recall (Psychology)
Peer reviewed
Direct link
O'Grady, Stefan – Language Teaching Research, 2023
The current study explores the impact of varying multiple-choice question preview and presentation formats in a test of second language listening proficiency targeting different levels of text comprehension. In a between-participant design, participants completed a 30-item test of listening comprehension featuring implicit and explicit information…
Descriptors: Language Tests, Multiple Choice Tests, Scores, Second Language Learning
Steven Moore; Huy Anh Nguyen; John Stamper – Grantee Submission, 2021
While generating multiple-choice questions has been shown to promote deep learning, students often fail to realize this benefit and do not willingly participate in this activity. Additionally, the quality of the student-generated questions may be influenced by both their level of engagement and familiarity with the learning materials. Towards…
Descriptors: Multiple Choice Tests, Learning Processes, Learner Engagement, Familiarity
Peer reviewed
Direct link
Doyle, Elaine; Buckley, Patrick – Interactive Learning Environments, 2022
While research and practice centred around students and academics working together to co-create in the higher education sector has increased, co-creation in assessment remains relatively rare. It is acknowledged in the literature that deeper comprehension of content can be realised when students author their own questions…
Descriptors: Multiple Choice Tests, Student Participation, Test Construction, Academic Achievement
Cromley, Jennifer G.; Dai, Ting; Fechter, Tia; Nelson, Frank E.; Van Boekel, Martin; Du, Yang – Grantee Submission, 2021
Making inferences and reasoning with new scientific information is critical for successful performance in biology coursework. Thus, identifying students who are weak in these skills could allow the early provision of additional support and course placement recommendations to help students develop their reasoning abilities, leading to better…
Descriptors: Science Tests, Multiple Choice Tests, Logical Thinking, Inferences
Peer reviewed
Direct link
Kalinowski, Steven T.; Willoughby, Shannon – Journal of Research in Science Teaching, 2019
We present a multiple-choice test, the Montana State University Formal Reasoning Test (FORT), to assess college students' scientific reasoning ability. The test defines scientific reasoning to be equivalent to formal operational reasoning. It contains 20 questions divided evenly among five types of problems: control of variables, hypothesis…
Descriptors: Science Tests, Test Construction, Science Instruction, Introductory Courses
Peer reviewed
PDF on ERIC
Bendulo, Hermabeth O.; Tibus, Erlinda D.; Bande, Rhodora A.; Oyzon, Voltaire Q.; Milla, Norberto E.; Macalinao, Myrna L. – International Journal of Evaluation and Research in Education, 2017
Testing or evaluation in an educational context is primarily used to measure or evaluate and authenticate the academic readiness, learning advancement, acquisition of skills, or instructional needs of learners. This study tried to determine whether the varied combinations of arrangements of options and letter cases in a Multiple-Choice Test (MCT)…
Descriptors: Test Format, Multiple Choice Tests, Test Construction, Eye Movements
Peer reviewed
Direct link
Zaidi, Nikki B.; Hwang, Charles; Scott, Sara; Stallard, Stefanie; Purkiss, Joel; Hortsch, Michael – Anatomical Sciences Education, 2017
Bloom's taxonomy was adopted to create a subject-specific scoring tool for histology multiple-choice questions (MCQs). This Bloom's Taxonomy Histology Tool (BTHT) was used to analyze teacher- and student-generated quiz and examination questions from a graduate level histology course. Multiple-choice questions using histological images were…
Descriptors: Taxonomy, Anatomy, Graduate Students, Scoring Formulas
Peer reviewed
Direct link
Malau-Aduli, Bunmi S.; Alele, Faith O.; Heggarty, Paula; Teague, Peta-Ann; Gupta, Tarun Sen; Hays, Richard – Advances in Physiology Education, 2019
Medical programs are under pressure to maintain currency with scientific and technical advances, as well as prepare graduates for clinical work and a wide range of postgraduate careers. The value of the basic sciences in primary medical education was assessed by exploring the perceived clinical relevance and test performance trends among medical…
Descriptors: Medical Education, Clinical Experience, Multiple Choice Tests, Science Education
Peer reviewed
Direct link
Hauser, Peter C.; Paludneviciene, Raylene; Riddle, Wanda; Kurz, Kim B.; Emmorey, Karen; Contreras, Jessica – Journal of Deaf Studies and Deaf Education, 2016
The American Sign Language Comprehension Test (ASL-CT) is a 30-item multiple-choice test that measures ASL receptive skills and is administered through a website. This article describes the development and psychometric properties of the test based on a sample of 80 college students including deaf native signers, hearing native signers, deaf…
Descriptors: American Sign Language, Comprehension, Multiple Choice Tests, Receptive Language
Peer reviewed
Direct link
Gierl, Mark J.; Lai, Hollis; Pugh, Debra; Touchie, Claire; Boulais, André-Philippe; De Champlain, André – Applied Measurement in Education, 2016
Item development is a time- and resource-intensive process. Automatic item generation integrates cognitive modeling with computer technology to systematically generate test items. To date, however, items generated using cognitive modeling procedures have received limited use in operational testing situations. As a result, the psychometric…
Descriptors: Psychometrics, Multiple Choice Tests, Test Items, Item Analysis
Peer reviewed
PDF on ERIC
Kalkan, Ömür Kaya; Kelecioglu, Hülya – Educational Sciences: Theory and Practice, 2016
Linear factor analysis models used to examine constructs underlying the responses are not very suitable for dichotomous or polytomous response formats. The associated problems cannot be eliminated by polychoric or tetrachoric correlations in place of the Pearson correlation. Therefore, we considered parameters obtained from the NOHARM and FACTOR…
Descriptors: Sample Size, Nonparametric Statistics, Factor Analysis, Correlation
Peer reviewed
Direct link
Attali, Yigal; Laitusis, Cara; Stone, Elizabeth – Educational and Psychological Measurement, 2016
There are many reasons to believe that open-ended (OE) and multiple-choice (MC) items elicit different cognitive demands of students. However, empirical evidence that supports this view is lacking. In this study, we investigated the reactions of test takers to an interactive assessment with immediate feedback and answer-revision opportunities for…
Descriptors: Test Items, Questioning Techniques, Differences, Student Reaction