Publication Date
| Filter | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 14 |
Descriptor
| Filter | Count |
| --- | --- |
| Difficulty Level | 14 |
| Test Construction | 14 |
| Test Items | 9 |
| Item Analysis | 6 |
| Test Validity | 6 |
| Multiple Choice Tests | 5 |
| Science Tests | 5 |
| Test Reliability | 5 |
| Correlation | 3 |
| Elementary School Students | 3 |
| Factor Analysis | 3 |
Author
| Filter | Count |
| --- | --- |
| Alexander Kah | 1 |
| Amelia Pearson | 1 |
| Ananda Senel | 1 |
| Anatri Desstya | 1 |
| Apantee Poonputta | 1 |
| Benjamin W. Domingue | 1 |
| Berrin Köseoglu | 1 |
| Chandralekha Singh | 1 |
| Changiz Mohiyeddini | 1 |
| Charlotte Broadhurst | 1 |
| Dwayne Lieck | 1 |
Publication Type
| Filter | Count |
| --- | --- |
| Journal Articles | 13 |
| Reports - Research | 13 |
| Tests/Questionnaires | 2 |
| Books | 1 |
| Guides - Classroom - Teacher | 1 |
Audience
| Filter | Count |
| --- | --- |
| Teachers | 1 |
Location
| Filter | Count |
| --- | --- |
| Indonesia | 1 |
| Thailand | 1 |
| United Kingdom | 1 |
Changiz Mohiyeddini – Anatomical Sciences Education, 2025
This article presents a step-by-step guide to using R and SPSS to bootstrap exam questions. Bootstrapping, a versatile nonparametric analytical technique, can help to improve the psychometric qualities of exam questions in the process of quality assurance. Bootstrapping is particularly useful in disciplines such as medical education, where student…
Descriptors: Test Items, Sampling, Statistical Inference, Nonparametric Statistics
Onur Dönmez; Yavuz Akbulut; Gözde Zabzun; Berrin Köseoglu – Applied Cognitive Psychology, 2025
This study investigates the effect of survey order in measuring self-reported cognitive load. Understanding how survey order influences responses is crucial, but it has been largely overlooked in the context of cognitive load. Using a 2 × 2 experimental design with 319 high school students, the study manipulated intrinsic cognitive load (ICL)…
Descriptors: Surveys, Test Construction, Measurement, Cognitive Processes
Nathaniel Owen; Ananda Senel – Review of Education, 2025
Transparency in high-stakes English language assessment has become crucial for ensuring fairness and maintaining assessment validity in language testing. However, our understanding of how transparency is conceptualised and implemented remains fragmented, particularly in relation to stakeholder experiences and technological innovations. This study…
Descriptors: Accountability, High Stakes Tests, Language Tests, Computer Assisted Testing
Ruying Li; Gaofeng Li – International Journal of Science and Mathematics Education, 2025
Systems thinking (ST) is an essential competence for future life and biology learning. Appropriate assessment is critical for collecting sufficient information to develop ST in biology education. This research offers an ST framework based on a comprehensive understanding of biological systems, encompassing four skills across three complexity…
Descriptors: Test Construction, Test Validity, Science Tests, Cognitive Tests
Tino Endres; Lisa Bender; Stoo Sepp; Shirong Zhang; Louise David; Melanie Trypke; Dwayne Lieck; Juliette C. Désiron; Johanna Bohm; Sophia Weissgerber; Juan Cristobal Castro-Alonso; Fred Paas – Educational Psychology Review, 2025
Assessing cognitive demand is crucial for research on self-regulated learning; however, discrepancies in translating essential concepts across languages can hinder the comparison of research findings. Different languages often emphasize various components and interpret certain constructs differently. This paper aims to develop a translingual set…
Descriptors: Cognitive Processes, Difficulty Level, Metacognition, Translation
Lyniesha Ward; Fridah Rotich; Jeffrey R. Raker; Regis Komperda; Sachin Nedungadi; Maia Popova – Chemistry Education Research and Practice, 2025
This paper describes the design and evaluation of the Organic chemistry Representational Competence Assessment (ORCA). Grounded in Kozma and Russell's representational competence framework, the ORCA measures the learner's ability to "interpret," "translate," and "use" six commonly used representations of molecular…
Descriptors: Organic Chemistry, Science Tests, Test Construction, Student Evaluation
Karin Hess – Teachers College Press, 2025
For more than 2 decades, Karin Hess has worked with the concept of depth of knowledge (DOK) and expanded applications of cognitive rigor across content areas and grade levels, proving that every student can experience deeper learning. This interactive book offers a self-guided journey beginning with the basics: what DOK is, what it is not, and…
Descriptors: Knowledge Level, Misconceptions, Difficulty Level, Academic Standards
Thayaamol Upapong; Apantee Poonputta – Educational Process: International Journal, 2025
Background/purpose: The purposes of this research are to develop a reliable and valid assessment tool for measuring systems thinking skills in upper primary students in Thailand and to establish a normative criterion for evaluating their systems thinking abilities based on educational standards. Materials/methods: The study followed a three-phase…
Descriptors: Thinking Skills, Elementary School Students, Measures (Individuals), Foreign Countries
Sophie Langhorne; Nora Uglik-Marucha; Charlotte Broadhurst; Elena Lieven; Amelia Pearson; Silia Vitoratou; Kathy Leadbitter – Journal of Autism and Developmental Disorders, 2025
Tools to measure autism knowledge are needed to assess levels of understanding within particular groups of people and to evaluate whether awareness-raising campaigns or interventions lead to improvements in understanding. Several such measures are in circulation, but, to our knowledge, there are no psychometrically-validated questionnaires that…
Descriptors: Foreign Countries, Autism Spectrum Disorders, Questionnaires, Psychometrics
Roger Young; Emily Courtney; Alexander Kah; Mariah Wilkerson; Yi-Hsin Chen – Teaching of Psychology, 2025
Background: Multiple-choice item (MCI) assessments are burdensome for instructors to develop. Artificial intelligence (AI, e.g., ChatGPT) can streamline the process without sacrificing quality. The quality of AI-generated MCIs and human experts is comparable. However, whether the quality of AI-generated MCIs is equally good across various domain-…
Descriptors: Item Response Theory, Multiple Choice Tests, Psychology, Textbooks
Anatri Desstya; Ika Candra Sayekti; Muhammad Abduh; Sukartono – Journal of Turkish Science Education, 2025
This study aimed to develop a standardised instrument for diagnosing science misconceptions in primary school children. Following a developmental research approach using the 4-D model (Define, Design, Develop, Disseminate), 100 four-tier multiple choice items were constructed. Content validity was established through expert evaluation by six…
Descriptors: Test Construction, Science Tests, Science Instruction, Diagnostic Tests
Joshua B. Gilbert; Luke W. Miratrix; Mridul Joshi; Benjamin W. Domingue – Journal of Educational and Behavioral Statistics, 2025
Analyzing heterogeneous treatment effects (HTEs) plays a crucial role in understanding the impacts of educational interventions. A standard practice for HTE analysis is to examine interactions between treatment status and preintervention participant characteristics, such as pretest scores, to identify how different groups respond to treatment.…
Descriptors: Causal Models, Item Response Theory, Statistical Inference, Psychometrics
Test of Understanding of Electric Field, Force, and Flux: A Reliable Multiple-Choice Assessment Tool
Eder Hernandez; Esmeralda Campos; Pablo Barniol; Genaro Zavala – Physical Review Physics Education Research, 2025
This study presents the development and validation of a novel multiple-choice test designed to assess university students' conceptual understanding of electric field, force, and flux. The test of understanding of electric field, force, and flux was constructed based on the results of previous studies using a phenomenographic approach to classify…
Descriptors: Physics, Scientific Concepts, Science Tests, Multiple Choice Tests
Yangqiuting Li; Chandralekha Singh – Physical Review Physics Education Research, 2025
Research-based multiple-choice questions implemented in class with peer instruction have been shown to be an effective tool for improving students' engagement and learning outcomes. Moreover, multiple-choice questions that are carefully sequenced to build on each other can be particularly helpful for students to develop a systematic understanding…
Descriptors: Physics, Science Instruction, Science Tests, Multiple Choice Tests