Showing 1 to 15 of 18 results
Peer reviewed
PDF on ERIC | Download full text
Tom Benton – Practical Assessment, Research & Evaluation, 2025
This paper proposes an extension of linear equating that may be useful in one of two fairly common assessment scenarios. One is where different students have taken different combinations of test forms. This might occur, for example, where students have some free choice over the exam papers they take within a particular qualification. In this…
Descriptors: Equated Scores, Test Format, Test Items, Computation
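The snippet above names linear equating without spelling it out. For context, a minimal sketch of the standard linear equating function, which maps form-X scores onto the form-Y scale by matching the reference form's mean and standard deviation (the data and numbers below are simulated illustrations, not Benton's):

```python
import numpy as np

def linear_equate(x, mean_x, sd_x, mean_y, sd_y):
    """Map a form-X score x onto the form-Y scale so that equated
    scores match form Y's mean and standard deviation."""
    return mean_y + (sd_y / sd_x) * (x - mean_x)

# Simulated single-group example (illustrative only).
rng = np.random.default_rng(0)
form_x = rng.normal(50, 10, size=1000)  # observed form-X scores
form_y = rng.normal(53, 12, size=1000)  # observed form-Y scores
print(linear_equate(60.0, form_x.mean(), form_x.std(),
                    form_y.mean(), form_y.std()))
```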
Peer reviewed
PDF on ERIC | Download full text
Bin Tan; Nour Armoush; Elisabetta Mazzullo; Okan Bulut; Mark J. Gierl – International Journal of Assessment Tools in Education, 2025
This study reviews existing research on the use of large language models (LLMs) for automatic item generation (AIG). We performed a comprehensive literature search across seven research databases, selected studies based on predefined criteria, and summarized 60 relevant studies that employed LLMs in the AIG process. We identified the most commonly…
Descriptors: Artificial Intelligence, Test Items, Automation, Test Format
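For readers new to AIG, the underlying idea predates LLMs: an "item model" (a stem template plus constrained variables) yields many item clones, and the studies this review covers substitute LLM prompting for hand-built templates. A toy template-based sketch, with a hypothetical template and value ranges:

```python
import random

# Hypothetical item model: a stem template with constrained variables.
TEMPLATE = "A train travels {speed} km/h for {hours} hours. How far does it travel?"

def generate_items(n, seed=0):
    rng = random.Random(seed)
    items = []
    for _ in range(n):
        speed = rng.randrange(40, 130, 10)   # km/h
        hours = rng.randrange(2, 6)          # whole hours
        items.append({"stem": TEMPLATE.format(speed=speed, hours=hours),
                      "key": speed * hours})  # distance in km
    return items

for item in generate_items(3):
    print(item["stem"], "->", item["key"], "km")
```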
Peer reviewed
Direct link
Sohee Kim; Ki Lynn Cole – International Journal of Testing, 2025
This study conducted a comprehensive comparison of Item Response Theory (IRT) linking methods applied to a bifactor model, examining their performance on both multiple choice (MC) and mixed format tests within the common item nonequivalent group design framework. Four distinct multidimensional IRT linking approaches were explored, consisting of…
Descriptors: Item Response Theory, Comparative Analysis, Models, Item Analysis
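As background on IRT linking: in the common-item nonequivalent groups design, the new form's parameters are placed on the reference scale through a linear transformation estimated from the common items. A minimal unidimensional mean-sigma sketch (the multidimensional bifactor methods the study compares are more involved; the difficulties below are invented):

```python
import numpy as np

def mean_sigma(b_ref, b_new):
    """Mean-sigma linking constants (A, B) from common-item
    difficulties: b* = A*b + B, a* = a/A, theta* = A*theta + B."""
    b_ref, b_new = np.asarray(b_ref), np.asarray(b_new)
    A = b_ref.std(ddof=1) / b_new.std(ddof=1)
    B = b_ref.mean() - A * b_new.mean()
    return A, B

A, B = mean_sigma([-1.2, -0.3, 0.4, 1.1], [-1.0, -0.1, 0.5, 1.3])
print(f"A = {A:.3f}, B = {B:.3f}")
```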
Peer reviewed
Direct link
Jiawei Xiong; George Engelhard; Allan S. Cohen – Measurement: Interdisciplinary Research and Perspectives, 2025
It is common to find mixed-format data resulting from the use of both multiple-choice (MC) and constructed-response (CR) questions on assessments. Dealing with these mixed response types involves understanding what the assessment is measuring and using suitable measurement models to estimate latent abilities. Past research in educational…
Descriptors: Responses, Test Items, Test Format, Grade 8
Peer reviewed
PDF on ERIC | Download full text
Hasibe Yahsi Sari; Hulya Kelecioglu – International Journal of Assessment Tools in Education, 2025
The aim of this study is to examine the effect of the polytomous item ratio on ability estimation under different conditions in multistage tests (MST) that use mixed-format tests. The study is simulation-based. In the PISA 2018 application, the ability parameters of the individuals and the item pool were created by using the item parameters estimated from…
Descriptors: Test Items, Test Format, Accuracy, Test Length
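For orientation, the defining feature of an MST is adaptive routing between stages. A toy two-stage number-correct routing rule (the cut points are illustrative, not the study's design):

```python
def route(stage1_correct, n_stage1, low_cut=0.4, high_cut=0.7):
    """Route an examinee to a second-stage module based on the
    proportion correct on the stage-1 routing module."""
    p = stage1_correct / n_stage1
    if p < low_cut:
        return "easy module"
    if p < high_cut:
        return "medium module"
    return "hard module"

print(route(5, 10))  # -> "medium module"
```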
Peer reviewed
Direct link
Hung Tan Ha; Duyen Thi Bich Nguyen; Tim Stoeckel – Language Assessment Quarterly, 2025
This article compares two methods for detecting local item dependence (LID): residual correlation examination and Rasch testlet modeling (RTM), in a commonly used 3:6 matching format and an extended matching test (EMT) format. The two formats are hypothesized to facilitate different levels of item dependency due to differences in the number of…
Descriptors: Comparative Analysis, Language Tests, Test Items, Item Analysis
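One of the two methods compared, residual correlation examination, is commonly operationalized as Yen's Q3: fit the Rasch model, compute item residuals, and flag item pairs whose residuals correlate strongly. A minimal sketch with simulated data (not the article's data or thresholds):

```python
import numpy as np

def q3_matrix(responses, theta, b):
    """Yen's Q3: correlations between item residuals (observed score
    minus Rasch-expected score); large |Q3| values suggest local
    item dependence."""
    theta = np.asarray(theta)[:, None]             # persons x 1
    b = np.asarray(b)[None, :]                     # 1 x items
    expected = 1.0 / (1.0 + np.exp(-(theta - b)))  # Rasch P(correct)
    residuals = responses - expected
    return np.corrcoef(residuals, rowvar=False)

# Simulated locally independent data for illustration.
rng = np.random.default_rng(1)
theta = rng.normal(size=500)
b = np.array([-1.0, 0.0, 1.0])
prob = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
responses = (rng.random((500, 3)) < prob).astype(float)
print(q3_matrix(responses, theta, b).round(2))
```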
Peer reviewed
Direct link
Xueliang Chen; Vahid Aryadoust; Wenxin Zhang – Language Testing, 2025
The growing diversity among test takers in second or foreign language (L2) assessments puts fairness front and center. This systematic review aimed to examine how fairness in L2 assessments was evaluated through differential item functioning (DIF) analysis. A total of 83 articles from 27 journals were included in a systematic…
Descriptors: Second Language Learning, Language Tests, Test Items, Item Analysis
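DIF analysis, the method this review tracks, is often illustrated with the Mantel-Haenszel procedure: stratify examinees by total score and compare the odds of answering the studied item correctly across groups. A compact sketch (the group labels and data layout are assumptions for illustration):

```python
import math

def mantel_haenszel(correct, group, total):
    """Mantel-Haenszel DIF for one item. `correct` is 0/1 per examinee,
    `group` is "ref" or "foc", `total` is the stratifying total score.
    Returns the common odds ratio alpha and the ETS delta scale value
    (-2.35 * ln(alpha)); larger |delta| indicates more DIF."""
    num = den = 0.0
    for k in set(total):
        idx = [i for i, t in enumerate(total) if t == k]
        ref = [i for i in idx if group[i] == "ref"]
        foc = [i for i in idx if group[i] == "foc"]
        if not ref or not foc:
            continue
        a = sum(correct[i] for i in ref)   # reference group correct
        b = len(ref) - a                   # reference group incorrect
        c = sum(correct[i] for i in foc)   # focal group correct
        d = len(foc) - c                   # focal group incorrect
        n = len(idx)
        num += a * d / n
        den += b * c / n
    alpha = num / den
    return alpha, -2.35 * math.log(alpha)

# Tiny made-up example with two score strata.
correct = [1, 0, 1, 1, 0, 1, 1, 0]
group   = ["ref", "ref", "foc", "foc", "ref", "ref", "foc", "foc"]
total   = [3, 3, 3, 3, 5, 5, 5, 5]
print(mantel_haenszel(correct, group, total))
```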
Peer reviewed
PDF on ERIC | Download full text
Necati Taskin – International Journal of Technology in Education, 2025
This study examines the effect of item order (random, increasingly difficult, and decreasingly difficult) on student performance, test parameters, and student perceptions in multiple-choice tests administered in a paper-and-pencil format after online learning. In the research conducted using an explanatory sequential mixed methods design,…
Descriptors: Test Items, Difficulty Level, Online Courses, College Freshmen
Peer reviewed
Direct link
Zeynep Uzun; Tuncay Ögretmen – Large-scale Assessments in Education, 2025
This study aimed to evaluate item-model fit by equating forms of the PISA 2018 mathematics subtest with concurrent common-item equating in samples from Türkiye, the UK, and Italy. The answers given on mathematics subtest Forms 2, 8, and 12 were used in this context. Analyses were performed using the dichotomous Rasch model in the WINSTEPS…
Descriptors: Item Response Theory, Test Items, Foreign Countries, Mathematics Tests
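Concurrent common-item equating, as used here, calibrates all forms in a single run by stacking their response matrices over the union of items, with missing entries for items a group never saw; the common items tie every form to one scale. A data-layout sketch (toy responses, not PISA data):

```python
import numpy as np

# Form A items: i1, i2, i3; Form B items: i3, i4, i5 (i3 is common).
form_a = np.array([[1, 0, 1],
                   [0, 1, 1]])
form_b = np.array([[1, 0, 1],
                   [1, 1, 0]])

stacked = np.full((4, 5), np.nan)   # persons x union of items
stacked[:2, 0:3] = form_a           # group A answered i1-i3
stacked[2:, 2:5] = form_b           # group B answered i3-i5
print(stacked)                      # NaN marks not-administered items
```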
Jeff Allen; Jay Thomas; Stacy Dreyer; Scott Johanningmeier; Dana Murano; Ty Cruce; Xin Li; Edgar Sanchez – ACT Education Corp., 2025
This report describes the process of developing and validating the enhanced ACT. The report describes the changes made to the test content and the processes by which these design decisions were implemented. The authors describe how they shared the overall scope of the enhancements, including the initial blueprints, with external expert panels,…
Descriptors: College Entrance Examinations, Testing, Change, Test Construction
Peer reviewed
PDF on ERIC | Download full text
Nese Öztürk Gübes – International Journal of Assessment Tools in Education, 2025
The Trends in International Mathematics and Science Study (TIMSS) was administered via computer (eTIMSS) for the first time in 2019. The purpose of this study was to investigate item-block position and item-format effects on eighth-grade mathematics item easiness in low- and high-achieving countries of eTIMSS 2019. Item responses from Chile, Qatar,…
Descriptors: Foreign Countries, International Assessment, Achievement Tests, Mathematics Achievement
Peer reviewed
Direct link
Peter A. Edelsbrunner; Bianca A. Simonsmeier; Michael Schneider – Educational Psychology Review, 2025
Knowledge is an important predictor and outcome of learning and development. Its measurement is challenged by the fact that knowledge can be integrated and homogeneous, or fragmented and heterogeneous, which can change through learning. These characteristics of knowledge are at odds with current standards for test development, demanding a high…
Descriptors: Meta Analysis, Predictor Variables, Learning Processes, Knowledge Level
Dongmei Li; Shalini Kapoor; Ann Arthur; Chi-Yu Huang; YoungWoo Cho; Chen Qiu; Hongling Wang – ACT Education Corp., 2025
Starting in April 2025, ACT will introduce enhanced forms of the ACT® test for national online testing, with a full rollout to all paper and online test takers in national, state and district, and international test administrations by Spring 2026. ACT introduced major updates by changing the test lengths and testing times, providing more time per…
Descriptors: College Entrance Examinations, Testing, Change, Scoring
Peer reviewed
Direct link
Pasquale Anselmi; Jürgen Heller; Luca Stefanutti; Egidio Robusto; Giulia Barillari – Education and Information Technologies, 2025
Competence-based test development (CbTD) is a novel method for constructing tests that are as informative as possible about the competence state (the set of skills an individual masters) underlying the item responses. If desired, the tests can also be minimal, meaning that no item can be eliminated without reducing their informativeness. To…
Descriptors: Competency Based Education, Test Construction, Test Length, Usability
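The abstract sketches the goal (a test as informative as possible about the competence state, possibly minimal). A hedged toy of the general idea, greedily selecting items that separate the most still-confusable competence states; this is an illustration, not the authors' CbTD algorithm:

```python
from itertools import combinations

states = ["{}", "{a}", "{b}", "{a,b}"]      # toy competence states
# solvable[item]: states whose skills suffice to solve the item
solvable = {"item1": {"{a}", "{a,b}"},
            "item2": {"{b}", "{a,b}"},
            "item3": {"{a,b}"}}

def separated(item):
    """State pairs an item tells apart (one solves it, one does not)."""
    yes = solvable[item]
    return {frozenset(p) for p in combinations(states, 2)
            if (p[0] in yes) != (p[1] in yes)}

todo = {frozenset(p) for p in combinations(states, 2)}
test = []
while todo:
    best = max(solvable, key=lambda it: len(separated(it) & todo))
    gained = separated(best) & todo
    if not gained:
        break                        # remaining pairs are inseparable
    test.append(best)
    todo -= gained
print(test)                          # -> ['item1', 'item2']
```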
Peer reviewed
Direct link
Goran Trajkovski; Heather Hayes – Digital Education and Learning, 2025
This book explores the transformative role of artificial intelligence in educational assessment, catering to researchers, educators, administrators, policymakers, and technologists involved in shaping the future of education. It delves into the foundations of AI-assisted assessment, innovative question types and formats, data analysis techniques,…
Descriptors: Artificial Intelligence, Educational Assessment, Computer Uses in Education, Test Format