Showing 1 to 15 of 16 results
Peer reviewed
Yi-Ling Wu; Yao-Hsuan Huang; Chia-Wen Chen; Po-Hsi Chen – Journal of Educational Measurement, 2025
Multistage testing (MST), a variant of computerized adaptive testing (CAT), differs from conventional CAT in that it is adapted at the module level rather than at the individual item level. Typically, all examinees begin the MST with a linear test form in the first stage, commonly known as the routing stage. In 2020, Han introduced an innovative…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Format, Measurement
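The abstract's distinction between item-level CAT and module-level MST routing can be made concrete with a small sketch. The cut scores, module names, and routing-stage length below are hypothetical illustrations, not Han's design or anything taken from the study:

# Hypothetical sketch of module-level routing in multistage testing (MST).
# Cut scores and module labels are invented for illustration only.
def route_to_stage2(routing_stage_score: int) -> str:
    """Pick a second-stage module from the raw score on the linear routing stage."""
    if routing_stage_score <= 4:
        return "easy_module"
    if routing_stage_score <= 8:
        return "medium_module"
    return "hard_module"

# An examinee answering 9 of 12 routing items correctly is sent to the hard module.
print(route_to_stage2(9))  # -> hard_module

Unlike item-level CAT, which re-estimates ability after every response, routing here happens only once per stage, on a preassembled module.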
Peer reviewed
Sohee Kim; Ki Lynn Cole – International Journal of Testing, 2025
This study conducted a comprehensive comparison of Item Response Theory (IRT) linking methods applied to a bifactor model, examining their performance on both multiple choice (MC) and mixed format tests within the common item nonequivalent group design framework. Four distinct multidimensional IRT linking approaches were explored, consisting of…
Descriptors: Item Response Theory, Comparative Analysis, Models, Item Analysis
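For readers unfamiliar with the bifactor structure named in this abstract, a common two-parameter logistic bifactor IRT model (a generic form, not necessarily the exact parameterization the authors used) gives the probability of a correct response to item j from person i as

P(X_{ij} = 1 \mid \theta_{iG}, \theta_{is}) = \frac{1}{1 + \exp\left[-\left(a_{jG}\,\theta_{iG} + a_{js}\,\theta_{is} + d_j\right)\right]}

where \theta_{iG} is the general ability, \theta_{is} is the specific factor for the item's group, a_{jG} and a_{js} are the discrimination parameters, and d_j is the intercept. Linking methods in the common-item nonequivalent group design place these parameters from different forms onto one scale via the shared items.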
Peer reviewed
Jiawei Xiong; George Engelhard; Allan S. Cohen – Measurement: Interdisciplinary Research and Perspectives, 2025
Mixed-format data commonly result from the use of both multiple-choice (MC) and constructed-response (CR) questions on assessments. Dealing with these mixed response types involves understanding what the assessment is measuring and using suitable measurement models to estimate latent abilities. Past research in educational…
Descriptors: Responses, Test Items, Test Format, Grade 8
Peer reviewed
Rebecca Napolitano; Ryan Solnosky; Wesley Reinhart – Journal of Civil Engineering Education, 2025
This study examines the impact of changes in exam modalities on the performance and experiences of architectural engineering students in a domain-specific data science class. Specifically, the number and duration of exams (and thereby the amount of content on each) and the setting in which students took the exams changed among the three years…
Descriptors: Data Science, Engineering Education, Architectural Education, Student Evaluation
Peer reviewed
Xueliang Chen; Vahid Aryadoust; Wenxin Zhang – Language Testing, 2025
The growing diversity among test takers in second or foreign language (L2) assessments brings fairness front and center. This systematic review aimed to examine how fairness in L2 assessments was evaluated through differential item functioning (DIF) analysis. A total of 83 articles from 27 journals were included in a systematic…
Descriptors: Second Language Learning, Language Tests, Test Items, Item Analysis
Joanna Williamson – Research Matters, 2025
Teachers, examiners and assessment experts know from experience that some candidates annotate exam questions. "Annotation" includes anything the candidate writes or draws outside of the designated response space, such as underlining, jotting, circling, sketching and calculating. Annotations are of interest because they may evidence…
Descriptors: Mathematics, Tests, Documentation, Secondary Education
Peer reviewed
Zeynep Uzun; Tuncay Ögretmen – Large-scale Assessments in Education, 2025
This study aimed to evaluate item-model fit by equating forms of the PISA 2018 mathematics subtest through concurrent common-item equating, using samples from Türkiye, the UK, and Italy. Responses to mathematics subtest Forms 2, 8, and 12 were used in this context. Analyses were performed using the dichotomous Rasch model in the WINSTEPS…
Descriptors: Item Response Theory, Test Items, Foreign Countries, Mathematics Tests
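As a reference for the model named here, the dichotomous Rasch model expresses the probability that person i answers item j correctly using only an ability parameter \theta_i and an item difficulty parameter b_j:

P(X_{ij} = 1 \mid \theta_i, b_j) = \frac{\exp(\theta_i - b_j)}{1 + \exp(\theta_i - b_j)}

In concurrent common-item equating, responses from all forms are calibrated in a single run, with the shared items anchoring every form's difficulties to one common scale.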
Peer reviewed
Yang Du; Susu Zhang – Journal of Educational and Behavioral Statistics, 2025
Item compromise has long posed challenges in educational measurement, jeopardizing both test validity and test security of continuous tests. Detecting compromised items is therefore crucial to address this concern. The present literature on compromised item detection reveals two notable gaps: First, the majority of existing methods are based upon…
Descriptors: Item Response Theory, Item Analysis, Bayesian Statistics, Educational Assessment
Peer reviewed
Christian Berggren; Bengt Gerdin; Solmaz Filiz Karabag – Journal of Academic Ethics, 2025
The exposure of scientific scandals and the increase of dubious research practices have generated a stream of studies on Questionable Research Practices (QRPs), such as failure to acknowledge co-authors, selective presentation of findings, or removal of data not supporting desired outcomes. In contrast to high-profile fraud cases, QRPs can be…
Descriptors: Test Construction, Test Bias, Test Format, Response Style (Tests)
Peer reviewed
Okan Bulut; Guher Gorgun; Hacer Karamese – Journal of Educational Measurement, 2025
The use of multistage adaptive testing (MST) has gradually increased in large-scale testing programs as MST achieves a balanced compromise between linear test design and item-level adaptive testing. MST works on the premise that each examinee gives their best effort when attempting the items, and their responses truly reflect what they know or can…
Descriptors: Response Style (Tests), Testing Problems, Testing Accommodations, Measurement
Peer reviewed
Andrew S. Cale; Elizabeth R. Agosto; Brenda Kucha Anak Ganeng; Megan E. Kruskie; Margaret A. McNulty; Kyle A. Robertson; Cecelia J. Vetter; Sabrina C. Woods; Md. Nazmul Karim; Adam B. Wilson – Anatomical Sciences Education, 2025
To keep pace with medicine's unpredictable changes, medical trainees must learn to accurately monitor and evaluate themselves via metacognition (i.e., thinking about thinking). The Metacognitive Awareness Inventory (MAI) can assess and guide the metacognitive development of trainees. This study summarizes existing psychometric evidence and…
Descriptors: Meta Analysis, Psychometrics, Metacognition, Measures (Individuals)
Peer reviewed
PDF on ERIC
Judy R. Wilkerson; W. Steve Lang; LaSonya Moore – Journal of Research in Education, 2025
The DAATS (Dispositions Assessments Aligned with Teacher Standards) battery is a series of five instruments of different item types that measure teachers' consistency with the critical dispositions embedded in the InTASC Standards. The purpose of this study was to continue a 20-year research project on the development and implementation of…
Descriptors: Educational Assessment, National Standards, Teacher Evaluation, Teacher Competencies
Peer reviewed
Goran Trajkovski; Heather Hayes – Digital Education and Learning, 2025
This book explores the transformative role of artificial intelligence in educational assessment, catering to researchers, educators, administrators, policymakers, and technologists involved in shaping the future of education. It delves into the foundations of AI-assisted assessment, innovative question types and formats, data analysis techniques,…
Descriptors: Artificial Intelligence, Educational Assessment, Computer Uses in Education, Test Format
Peer reviewed
PDF on ERIC
Mimi Ismail; Ahmed Al-Badri; Said Al-Senaidi – Journal of Education and e-Learning Research, 2025
This study aimed to reveal the differences in individuals' abilities, their standard errors, and the psychometric properties of the test according to the two methods of applying the test (electronic and paper). The descriptive approach was used to achieve the study's objectives. The study sample consisted of 74 male and female students at the…
Descriptors: Achievement Tests, Computer Assisted Testing, Psychometrics, Item Response Theory
Peer reviewed
Selcuk Acar; Yuyang Shen – Journal of Creative Behavior, 2025
Creativity tests, like creativity itself, vary widely in their structure and use. These differences include instructions, test duration, environments, prompt and response modalities, and the structure of test items. A key factor is task structure, referring to the specificity of the number of responses requested for a given prompt. Classic…
Descriptors: Creativity, Creative Thinking, Creativity Tests, Task Analysis