Showing 1 to 15 of 3,081 results
Peer reviewed
PDF on ERIC
Tom Benton – Practical Assessment, Research & Evaluation, 2025
This paper proposes an extension of linear equating that may be useful in one of two fairly common assessment scenarios. One is where different students have taken different combinations of test forms. This might occur, for example, where students have some free choice over the exam papers they take within a particular qualification. In this…
Descriptors: Equated Scores, Test Format, Test Items, Computation
Peer reviewed
Sohee Kim; Ki Lynn Cole – International Journal of Testing, 2025
This study conducted a comprehensive comparison of Item Response Theory (IRT) linking methods applied to a bifactor model, examining their performance on both multiple choice (MC) and mixed format tests within the common item nonequivalent group design framework. Four distinct multidimensional IRT linking approaches were explored, consisting of…
Descriptors: Item Response Theory, Comparative Analysis, Models, Item Analysis
Peer reviewed
Stefanie A. Wind; Yuan Ge – Measurement: Interdisciplinary Research and Perspectives, 2024
Mixed-format assessments made up of multiple-choice (MC) items and constructed response (CR) items that are scored using rater judgments include unique psychometric considerations. When these item types are combined to estimate examinee achievement, information about the psychometric quality of each component can depend on that of the other. For…
Descriptors: Interrater Reliability, Test Bias, Multiple Choice Tests, Responses
Peer reviewed
Jianbin Fu; Xuan Tan; Patrick C. Kyllonen – Journal of Educational Measurement, 2024
This paper presents the item and test information functions of the Rank two-parameter logistic models (Rank-2PLM) for items with two (pair) and three (triplet) statements in forced-choice questionnaires. The Rank-2PLM model for pairs is the MUPP-2PLM (Multi-Unidimensional Pairwise Preference) and, for triplets, is the Triplet-2PLM. Fisher's…
Descriptors: Questionnaires, Test Items, Item Response Theory, Models
Uk Hyun Cho – ProQuest LLC, 2024
The present study investigates the influence of multidimensionality on linking and equating under a unidimensional IRT framework. Two hypothetical multidimensional scenarios are explored under a nonequivalent group common-item equating design. The first scenario examines test forms designed to measure multiple constructs, while the second scenario examines a…
Descriptors: Item Response Theory, Classification, Correlation, Test Format
Peer reviewed
Janet Mee; Ravi Pandian; Justin Wolczynski; Amy Morales; Miguel Paniagua; Polina Harik; Peter Baldwin; Brian E. Clauser – Advances in Health Sciences Education, 2024
Recent advances in automated scoring technology have made it practical to replace multiple-choice questions (MCQs) with short-answer questions (SAQs) in large-scale, high-stakes assessments. However, most previous research comparing these formats has used small examinee samples testing under low-stakes conditions. Additionally, previous studies…
Descriptors: Multiple Choice Tests, High Stakes Tests, Test Format, Test Items
Peer reviewed
Monica Casella; Pasquale Dolce; Michela Ponticorvo; Nicola Milano; Davide Marocco – Educational and Psychological Measurement, 2024
Short-form development is an important topic in psychometric research, which requires researchers to face methodological choices at different steps. The statistical techniques traditionally used for shortening tests, which belong to the so-called exploratory model, make assumptions that are not always verified in psychological data. This article proposes a…
Descriptors: Artificial Intelligence, Test Construction, Test Format, Psychometrics
Peer reviewed
Li Zhao; Junjie Peng; Shiqi Ke; Kang Lee – Educational Psychology Review, 2024
Unproctored and teacher-proctored exams have been widely used to prevent cheating at many universities worldwide. However, no empirical studies have directly compared their effectiveness in promoting academic integrity in actual exams. To address this significant gap, in four preregistered field studies, we examined the effectiveness of…
Descriptors: Supervision, Tests, Testing, Integrity
Peer reviewed
Brian E. Clauser; Victoria Yaneva; Peter Baldwin; Le An Ha; Janet Mee – Applied Measurement in Education, 2024
Multiple-choice questions have become ubiquitous in educational measurement because the format allows for efficient and accurate scoring. Nonetheless, there remains continued interest in constructed-response formats. This interest has driven efforts to develop computer-based scoring procedures that can accurately and efficiently score these items.…
Descriptors: Computer Uses in Education, Artificial Intelligence, Scoring, Responses
Peer reviewed
Zesch, Torsten; Horbach, Andrea; Zehner, Fabian – Educational Measurement: Issues and Practice, 2023
In this article, we systematize the factors influencing performance and feasibility of automatic content scoring methods for short text responses. We argue that performance (i.e., how well an automatic system agrees with human judgments) mainly depends on the linguistic variance seen in the responses and that this variance is indirectly influenced…
Descriptors: Influences, Academic Achievement, Feasibility Studies, Automation
Peer reviewed
Jiawei Xiong; George Engelhard; Allan S. Cohen – Measurement: Interdisciplinary Research and Perspectives, 2025
It is common to find mixed-format data resulting from the use of both multiple-choice (MC) and constructed-response (CR) questions on assessments. Dealing with these mixed response types involves understanding what the assessment is measuring, and the use of suitable measurement models to estimate latent abilities. Past research in educational…
Descriptors: Responses, Test Items, Test Format, Grade 8
Peer reviewed
Selcuk Acar; Peter Organisciak; Denis Dumas – Journal of Creative Behavior, 2025
In this three-study investigation, we applied various approaches to score drawings created in response to both Form A and Form B of the Torrance Tests of Creative Thinking-Figural (broadly TTCT-F) as well as the Multi-Trial Creative Ideation task (MTCI). We focused on TTCT-F in Study 1, and utilizing a random forest classifier, we achieved 79% and…
Descriptors: Scoring, Computer Assisted Testing, Models, Correlation
Peer reviewed
Jessie Leigh Nielsen; Rikke Vang Christensen; Mads Poulsen – Journal of Research in Reading, 2024
Background: Studies of syntactic comprehension and reading comprehension use a wide range of syntactic comprehension tests that vary considerably in format. The goal of this study was to examine to what extent different formats of syntactic comprehension tests measure the same construct. Methods: Sixty-nine Grade 4 students completed multiple…
Descriptors: Syntax, Reading Comprehension, Comparative Analysis, Reading Tests
Peer reviewed
Ting Sun; Stella Yun Kim – Educational and Psychological Measurement, 2024
Equating is a statistical procedure used to adjust for differences in difficulty across test forms so that scores on those forms can be used and interpreted comparably. In practice, however, equating methods are often implemented without considering the extent to which two forms differ in difficulty. The study aims to examine the effect of the magnitude…
Descriptors: Difficulty Level, Data Interpretation, Equated Scores, High School Students
Peer reviewed
Skylar J. Laursen; Dorina Sluka; Chris M. Fiacconi – Metacognition and Learning, 2024
Previous literature suggests learners can adjust their encoding strategies to match the demands of the expected test format. However, it is unclear whether other forms of metacognitive control, namely, study time allocation and restudy selection, are also sensitive to expected test format. Across four experiments we examined whether learners…
Descriptors: Test Format, Test Wiseness, Metacognition, Study Habits