Showing 61 to 75 of 1,154 results
Peer reviewed
PDF on ERIC (download full text)
Anatri Desstya; Ika Candra Sayekti; Muhammad Abduh; Sukartono – Journal of Turkish Science Education, 2025
This study aimed to develop a standardised instrument for diagnosing science misconceptions in primary school children. Following a developmental research approach using the 4-D model (Define, Design, Develop, Disseminate), 100 four-tier multiple-choice items were constructed. Content validity was established through expert evaluation by six…
Descriptors: Test Construction, Science Tests, Science Instruction, Diagnostic Tests
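As context for the four-tier format above: each item pairs an answer tier and a reasoning tier with a confidence rating for each, and a response is classified by combining all four tiers. A minimal sketch of one common decision rule (the function name and category labels are illustrative assumptions, not this instrument's actual rubric):

```python
# Sketch of a typical four-tier classification rule (hypothetical rubric).
def classify_four_tier(answer_correct: bool, answer_confident: bool,
                       reason_correct: bool, reason_confident: bool) -> str:
    """Classify a single four-tier response."""
    confident = answer_confident and reason_confident
    if answer_correct and reason_correct:
        return "scientific conception" if confident else "lacks confidence"
    if not answer_correct and not reason_correct and confident:
        return "misconception"
    if not confident:
        return "lack of knowledge"
    return "mixed (false positive/negative)"

# Wrong answer, wrong reason, confident on both tiers -> misconception
print(classify_four_tier(False, True, False, True))
```

Under rules of this kind, confident errors on both tiers are what separate genuine misconceptions from mere lack of knowledge.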
Peer reviewed
Direct link
Liu, Chunyan; Jurich, Daniel; Morrison, Carol; Grabovsky, Irina – Applied Measurement in Education, 2021
The existence of outliers in the anchor items can be detrimental to the estimation of examinee ability and undermine the validity of score interpretation across forms. In practice, however, anchor item performance can become distorted for various reasons. This study compares the performance of modified "INFIT" and "OUTFIT"…
Descriptors: Equated Scores, Test Items, Item Response Theory, Difficulty Level
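For readers unfamiliar with the fit statistics named above: under the Rasch model, OUTFIT is the unweighted mean of squared standardized residuals for an item, and INFIT is its information-weighted counterpart, which makes OUTFIT the more sensitive of the two to outlying responses. A minimal NumPy sketch for a single item, using the standard formulas rather than the modified variants the study compares:

```python
import numpy as np

def rasch_fit_stats(x, theta, b):
    """INFIT and OUTFIT mean-square statistics for one Rasch item.

    x: 0/1 responses (n_persons,); theta: person abilities; b: item difficulty.
    """
    x = np.asarray(x, dtype=float)
    p = 1.0 / (1.0 + np.exp(-(np.asarray(theta) - b)))  # Rasch P(correct)
    w = p * (1.0 - p)                                    # response variance
    z2 = (x - p) ** 2 / w                                # squared std. residuals
    outfit = z2.mean()                   # unweighted: sensitive to outliers
    infit = np.sum(w * z2) / np.sum(w)   # weighted toward well-matched persons
    return infit, outfit
```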
Peer reviewed
Direct link
Lawrence T. DeCarlo – Educational and Psychological Measurement, 2024
A psychological framework for different types of items commonly used with mixed-format exams is proposed. A choice model based on signal detection theory (SDT) is used for multiple-choice (MC) items, whereas an item response theory (IRT) model is used for open-ended (OE) items. The SDT and IRT models are shown to share a common conceptualization…
Descriptors: Test Format, Multiple Choice Tests, Item Response Theory, Models
Peer reviewed
Direct link
Ato Kwamina Arhin – Acta Educationis Generalis, 2024
Introduction: This article aimed to examine in depth the distractors used in mathematics multiple-choice items. The quality of distractors may be more important than their number or the stem in a multiple-choice question. Little attention is given to this aspect of item writing, especially for mathematics multiple-choice questions. This article…
Descriptors: Testing, Multiple Choice Tests, Test Items, Mathematics Tests
Peer reviewed
PDF on ERIC (download full text)
Kevser Arslan; Asli Görgülü Ari – Shanlax International Journal of Education, 2024
This study aimed to develop a valid and reliable multiple-choice achievement test for the subject area of ecology. The study was conducted within the framework of exploratory sequential design based on mixed research methods, and the study group consisted of a total of 250 middle school students studying at the sixth and seventh grade level. In…
Descriptors: Ecology, Science Tests, Test Construction, Multiple Choice Tests
Peer reviewed
PDF on ERIC (download full text)
Kyeng Gea Lee; Mark J. Lee; Soo Jung Lee – International Journal of Technology in Education and Science, 2024
Online assessment is an essential part of online education and, if conducted properly, has been found to effectively gauge student learning. Generally, text-based questions have been the cornerstone of online assessment. Recently, however, the emergence of generative artificial intelligence has added a significant challenge to the integrity of…
Descriptors: Artificial Intelligence, Computer Software, Biology, Science Instruction
Peer reviewed
Direct link
DeCarlo, Lawrence T. – Journal of Educational Measurement, 2023
A conceptualization of multiple-choice exams in terms of signal detection theory (SDT) leads to simple measures of item difficulty and item discrimination that are closely related to, but also distinct from, those used in classical item analysis (CIA). The theory defines a "true split," depending on whether or not examinees know an item,…
Descriptors: Multiple Choice Tests, Test Items, Item Analysis, Test Wiseness
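The classical quantities that DeCarlo's SDT measures are "closely related to, but also distinct from" are easy to state concretely: difficulty as the proportion of examinees answering correctly, and discrimination as the corrected item-rest correlation. A short sketch of that classical side only (the SDT-based measures themselves require fitting the signal-detection model):

```python
import numpy as np

def classical_item_stats(responses):
    """Classical item analysis for a 0/1 response matrix (persons x items):
    difficulty = proportion correct; discrimination = corrected item-rest
    (point-biserial) correlation."""
    r = np.asarray(responses, dtype=float)
    difficulty = r.mean(axis=0)
    total = r.sum(axis=1)
    discrimination = np.array([
        np.corrcoef(r[:, j], total - r[:, j])[0, 1]  # rest score excludes item j
        for j in range(r.shape[1])
    ])
    return difficulty, discrimination
```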
Peer reviewed
Direct link
Zhiqiang Yang; Chengyuan Yu – Asia Pacific Education Review, 2025
This study investigated the test fairness of the translation section of a large-scale English test in China by examining its Differential Test Functioning (DTF) and Differential Item Functioning (DIF) across gender and major. Regarding DTF, the entire translation section exhibits partial strong measurement invariance across female and male…
Descriptors: Multiple Choice Tests, Test Items, Scoring, Translation
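The study above works within a measurement-invariance framework; as a simpler point of reference, the Mantel-Haenszel statistic is a standard screen for DIF in dichotomous items. A hedged sketch, illustrative of DIF screening in general rather than of the method this paper uses:

```python
import numpy as np

def mantel_haenszel_alpha(correct, group, total_score):
    """Mantel-Haenszel common odds ratio for one item.

    correct: 0/1 NumPy array; group: 0 = reference, 1 = focal;
    total_score: matching variable (e.g., test total score).
    alpha near 1 suggests no DIF; ETS delta = -2.35 * ln(alpha).
    """
    num = den = 0.0
    for s in np.unique(total_score):
        m = total_score == s
        a = np.sum((correct == 1) & (group == 0) & m)  # reference, correct
        b = np.sum((correct == 0) & (group == 0) & m)  # reference, incorrect
        c = np.sum((correct == 1) & (group == 1) & m)  # focal, correct
        d = np.sum((correct == 0) & (group == 1) & m)  # focal, incorrect
        n = a + b + c + d
        if n:
            num += a * d / n
            den += b * c / n
    return num / den
```

An alpha above 1 indicates the item favors the reference group at matched ability; below 1, the focal group.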
Peer reviewed
PDF on ERIC (download full text)
Ismail, Yilmaz – Educational Research and Reviews, 2020
This study draws on the understanding that non-linear measurement tools can be used when the correlation between variables is unknown but a non-linear relationship between them is expected. In education, possibility measurement tools can be used for non-linear measurement. Multiple-choice possibility measurement…
Descriptors: Multiple Choice Tests, Measurement Techniques, Student Evaluation, Test Items
Peer reviewed
Direct link
Yi-Chun Chen; Hsin-Kai Wu; Ching-Ting Hsin – Research in Science & Technological Education, 2024
Background and Purpose: As a growing number of instructional units have been developed to promote young children's scientific and engineering practices (SEPs), understanding how to evaluate and assess children's SEPs is imperative. However, paper-and-pencil assessments would not be suitable for young children because of their limited reading and…
Descriptors: Science Education, Engineering Education, Elementary School Students, Middle School Students
Peer reviewed
Direct link
Katrin Klingbeil; Fabian Rösken; Bärbel Barzel; Florian Schacht; Kaye Stacey; Vicki Steinle; Daniel Thurm – ZDM: Mathematics Education, 2024
Assessing students' (mis)conceptions is a challenging task for teachers as well as for researchers. While individual assessment, for example through interviews, can provide deep insights into students' thinking, this is very time-consuming and therefore not feasible for whole classes or even larger settings. For those settings, automatically…
Descriptors: Multiple Choice Tests, Formative Evaluation, Mathematics Tests, Misconceptions
Alicia A. Stoltenberg – ProQuest LLC, 2024
Multiple-select multiple-choice items, or multiple-choice items with more than one correct answer, are used to quickly assess content on standardized assessments. Because these item types have multiple keys, there are also multiple ways to score student responses. The purpose of this study was to investigate how changing the…
Descriptors: Scoring, Evaluation Methods, Multiple Choice Tests, Standardized Tests
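To make the scoring question above concrete, here is a minimal sketch of two common rules for multiple-select items, all-or-nothing versus per-option partial credit (the rule names and function are illustrative; the dissertation compares its own set of scoring methods):

```python
def score_multiple_select(selected, key, n_options, rule="all_or_nothing"):
    """Score one multiple-select response.

    selected, key: sets of chosen / correct option indices.
    """
    selected, key = set(selected), set(key)
    if rule == "all_or_nothing":       # full credit only for an exact match
        return 1.0 if selected == key else 0.0
    if rule == "partial_credit":       # credit per correctly classified option
        errors = len(selected ^ key)   # symmetric difference = misclassified
        return (n_options - errors) / n_options
    raise ValueError(rule)

# Key = {0, 2}; examinee picks {0, 2, 3} on a 4-option item
print(score_multiple_select({0, 2, 3}, {0, 2}, 4, "all_or_nothing"))  # 0.0
print(score_multiple_select({0, 2, 3}, {0, 2}, 4, "partial_credit"))  # 0.75
```

The gap between the two scores for the same response is exactly the kind of difference such a study can quantify at scale.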
Peer reviewed
PDF on ERIC (download full text)
Al-zboon, Habis Saad; Alrekebat, Amjad Farhan – International Journal of Higher Education, 2021
This study aims to identify the effect of multiple-choice test items' difficulty degree on the reliability coefficient and the standard error of measurement, based on item response theory (IRT). To achieve the objectives of the study, the WinGen3 software was used to generate the IRT parameters (difficulty, discrimination, guessing) for four…
Descriptors: Multiple Choice Tests, Test Items, Difficulty Level, Error of Measurement
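The relationship this study investigates can be sketched directly: under the 3PL model, each item's information depends on its difficulty relative to ability, and the standard error of measurement is the inverse square root of total test information. A minimal NumPy illustration (the parameter values are made up, and the study's simulation design is far richer):

```python
import numpy as np

def p_3pl(theta, a, b, c):
    """3PL probability of a correct response."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def sem_3pl(theta, a, b, c):
    """SEM at theta = 1 / sqrt(test information)."""
    p = p_3pl(theta, a, b, c)
    q = 1.0 - p
    # Birnbaum's 3PL item information: a^2 * (q/p) * ((p - c)/(1 - c))^2
    info = (a ** 2) * (q / p) * ((p - c) / (1.0 - c)) ** 2
    return 1.0 / np.sqrt(info.sum())

a = np.array([1.2, 0.8, 1.5])        # discrimination
b = np.array([-0.5, 0.0, 1.0])       # difficulty
c = np.full(3, 0.2)                  # guessing
print(sem_3pl(0.0, a, b, c))         # SEM at theta = 0
```

Shifting the b values away from the ability region lowers information there and inflates the SEM, which is the mechanism the study manipulates.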
Peer reviewed
Direct link
Waterbury, Glenn Thomas; DeMars, Christine E. – Educational Assessment, 2021
Vertical scaling is used to put tests of different difficulty onto a common metric. The Rasch model is often used to perform vertical scaling, despite its strict functional form. Few, if any, studies have examined anchor item choice when using the Rasch model to vertically scale data that do not fit the model. The purpose of this study was to…
Descriptors: Test Items, Equated Scores, Item Response Theory, Scaling
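For context on the anchor-item question: because the Rasch model fixes a common slope, two forms' scales can differ only by a translation, so a mean shift on the anchor set suffices to place a new form on the base scale. A minimal sketch of that linking step (the study's interest is precisely how this breaks down when the data misfit the model):

```python
import numpy as np

def rasch_mean_link(b_anchor_base, b_anchor_new, b_new_form):
    """Place a new form's Rasch difficulties on the base scale.

    The linking constant is the mean difficulty difference on the
    anchor items, applied as a shift to all new-form items.
    """
    shift = np.mean(b_anchor_base) - np.mean(b_anchor_new)
    return np.asarray(b_new_form) + shift
```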
Peer reviewed
Direct link
Wise, Steven L.; Soland, James; Dupray, Laurence M. – Journal of Applied Testing Technology, 2021
Technology-Enhanced Items (TEIs) have been purported to be more motivating and engaging to test takers than traditional multiple-choice items. The claim of enhanced engagement, however, has thus far received limited research attention. This study examined the rates of rapid-guessing behavior received by three types of items (multiple-choice,…
Descriptors: Test Items, Guessing (Tests), Multiple Choice Tests, Achievement Tests
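Rapid-guessing rates of the kind this study compares are typically computed by flagging responses faster than an item-level time threshold. A rough sketch using a simple fraction-of-median rule (the threshold rule here is an illustrative assumption; Wise's published threshold methods are more refined):

```python
import numpy as np

def rapid_guess_rate(rt, threshold_frac=0.10):
    """Per-item rapid-guessing rates from a persons x items matrix of
    response times: a response is 'rapid' if it falls below
    threshold_frac of that item's median response time."""
    rt = np.asarray(rt, dtype=float)
    thresholds = threshold_frac * np.median(rt, axis=0)  # one cutoff per item
    return (rt < thresholds).mean(axis=0)
```

Comparing these rates across item types (multiple-choice versus technology-enhanced) is what lets a study test the claimed engagement advantage.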