Showing 1 to 15 of 71 results
Peer reviewed
Direct link
Sohee Kim; Ki Lynn Cole – International Journal of Testing, 2025
This study conducted a comprehensive comparison of Item Response Theory (IRT) linking methods applied to a bifactor model, examining their performance on both multiple-choice (MC) and mixed-format tests within the common-item nonequivalent groups design framework. Four distinct multidimensional IRT linking approaches were explored, consisting of…
Descriptors: Item Response Theory, Comparative Analysis, Models, Item Analysis
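The linking methods compared above all operate on bifactor IRT item response functions. As a minimal sketch of that building block, assuming a two-parameter logistic form with one general and one group-specific dimension (the parameter names are illustrative, not the authors' notation):

```python
import numpy as np

def bifactor_2pl_prob(theta_g, theta_s, a_g, a_s, d):
    """Probability of a correct response under a bifactor 2PL model:
    one general dimension (theta_g) plus one group-specific dimension
    (theta_s), with discriminations a_g, a_s and intercept d."""
    z = a_g * theta_g + a_s * theta_s + d
    return 1.0 / (1.0 + np.exp(-z))

# Example: a moderately discriminating item for an average examinee
print(bifactor_2pl_prob(theta_g=0.0, theta_s=0.5, a_g=1.2, a_s=0.6, d=-0.3))
```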
Peer reviewed
Direct link
Jianbin Fu; Xuan Tan; Patrick C. Kyllonen – Journal of Educational Measurement, 2024
This paper presents the item and test information functions of the Rank two-parameter logistic models (Rank-2PLM) for items with two (pair) and three (triplet) statements in forced-choice questionnaires. The Rank-2PLM model for pairs is the MUPP-2PLM (Multi-Unidimensional Pairwise Preference) and, for triplets, is the Triplet-2PLM. Fisher's…
Descriptors: Questionnaires, Test Items, Item Response Theory, Models
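For context on the information functions the paper derives, the unidimensional 2PL case has the well-known closed form I(theta) = a^2 * P * (1 - P); the Rank-2PLM extends this idea to ranked pairs and triplets. A minimal numerical sketch of the 2PL case only (not the Rank-2PLM derivation itself):

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL response probability."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def info_2pl(theta, a, b):
    """Fisher information of a single 2PL item: I(theta) = a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a**2 * p * (1.0 - p)

thetas = np.linspace(-3, 3, 7)
print(info_2pl(thetas, a=1.5, b=0.0))  # information peaks at theta = b
```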
Peer reviewed
Direct link
Selcuk Acar; Peter Organisciak; Denis Dumas – Journal of Creative Behavior, 2025
In this three-study investigation, we applied various approaches to score drawings created in response to both Form A and Form B of the Torrance Tests of Creative Thinking-Figural (broadly TTCT-F) as well as the Multi-Trial Creative Ideation task (MTCI). We focused on TTCT-F in Study 1, and utilizing a random forest classifier, we achieved 79% and…
Descriptors: Scoring, Computer Assisted Testing, Models, Correlation
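A random forest classifier of the kind used in Study 1 could, under stated assumptions, be trained on fixed-length feature vectors extracted from each drawing. The sketch below uses synthetic stand-in features and labels; it is not the authors' pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical setup: each drawing is represented by a fixed-length
# feature vector (e.g., from an image embedding), labeled with a
# human-assigned rating category.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))          # stand-in image features
y = rng.integers(0, 2, size=500)        # stand-in binary ratings

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("holdout agreement:", accuracy_score(y_te, clf.predict(X_te)))
```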
Peer reviewed
Direct link
Jiawei Xiong; George Engelhard; Allan S. Cohen – Measurement: Interdisciplinary Research and Perspectives, 2025
Mixed-format data commonly result from the use of both multiple-choice (MC) and constructed-response (CR) questions on assessments. Dealing with these mixed response types involves understanding what the assessment is measuring and using suitable measurement models to estimate latent abilities. Past research in educational…
Descriptors: Responses, Test Items, Test Format, Grade 8
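Mixed-format calibration of this kind typically pairs a dichotomous model for MC items with a polytomous model for CR items. As one hedged illustration (not necessarily the model used in the paper), the generalized partial credit model gives CR category probabilities as a softmax over cumulative step terms:

```python
import numpy as np

def gpcm_probs(theta, a, b):
    """Generalized partial credit model: category probabilities for one
    CR item with step parameters b and discrimination a. Category 0
    contributes a cumulative sum of 0 by convention."""
    steps = np.concatenate(([0.0], np.cumsum(a * (theta - np.asarray(b)))))
    expz = np.exp(steps - steps.max())   # numerically stabilized softmax
    return expz / expz.sum()

# A 4-category CR item (3 steps) for an examinee at theta = 0.5
print(gpcm_probs(theta=0.5, a=1.0, b=[-1.0, 0.0, 1.2]))
```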
Peer reviewed
Direct link
Ulrike Padó; Yunus Eryilmaz; Larissa Kirschner – International Journal of Artificial Intelligence in Education, 2024
Short-Answer Grading (SAG) is a time-consuming task for teachers that automated SAG models have long promised to make easier. However, there are three challenges to their broad-scale adoption: a technical challenge regarding the need for high-quality models, which is exacerbated for languages with fewer resources than English; a usability…
Descriptors: Grading, Automation, Test Format, Computer Assisted Testing
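One common SAG baseline, shown here only as a sketch and not as the authors' system, scores a student answer by embedding similarity to a reference answer. A multilingual encoder is relevant because the paper targets languages with fewer resources than English; the model name and threshold below are assumptions:

```python
from sentence_transformers import SentenceTransformer, util

# Illustrative multilingual embedding model (an assumption, not the
# authors' system); grading is cosine similarity to a reference answer.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

reference = "Die Zelle ist die kleinste Einheit des Lebens."
student = "Zellen sind die kleinsten lebenden Bausteine."

sim = util.cos_sim(model.encode(reference), model.encode(student)).item()
grade = "correct" if sim > 0.7 else "review manually"   # ad hoc threshold
print(round(sim, 3), grade)
```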
Peer reviewed
Direct link
Chang, Minyu; Brainerd, C. J. – Metacognition and Learning, 2023
Making judgments of learning (JOLs) can sometimes modify subsequent memory performance, which is referred to as JOL reactivity. We evaluated two major theoretical explanations of JOL reactivity and used the dual-retrieval model to pinpoint the retrieval processes that are modified by JOLs. The changed-goal hypothesis assumes that JOLs highlight…
Descriptors: Cues, Evaluative Thinking, Models, Recall (Psychology)
Peer reviewed
Direct link
Anna Filighera; Sebastian Ochs; Tim Steuer; Thomas Tregel – International Journal of Artificial Intelligence in Education, 2024
Automatic grading models are valued for the time and effort saved during the instruction of large student bodies. Especially with the increasing digitization of education and interest in large-scale standardized testing, the popularity of automatic grading has risen to the point where commercial solutions are widely available and used. However,…
Descriptors: Cheating, Grading, Form Classes (Languages), Computer Software
Peer reviewed
Direct link
Yan Jin; Jason Fan – Language Assessment Quarterly, 2023
In language assessment, AI technology has been incorporated in task design, assessment delivery, automated scoring of performance-based tasks, score reporting, and provision of feedback. AI technology is also used for collecting and analyzing performance data in language assessment validation. Research has been conducted to investigate the…
Descriptors: Language Tests, Artificial Intelligence, Computer Assisted Testing, Test Format
Peer reviewed
Direct link
Cerullo, Enzo; Jones, Hayley E.; Carter, Olivia; Quinn, Terry J.; Cooper, Nicola J.; Sutton, Alex J. – Research Synthesis Methods, 2022
Standard methods for the meta-analysis of medical tests, without assuming a gold standard, are limited to dichotomous data. Multivariate probit models are used to analyse correlated dichotomous data, and can be extended to model ordinal data. Within the context of an imperfect gold standard, they have previously been used for the analysis of…
Descriptors: Meta Analysis, Test Format, Medicine, Standards
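The ordinal extension rests on a standard probit building block: category probabilities are differences of normal CDFs at threshold cutpoints, with the latent mean shifted by disease class. A minimal sketch, assuming unit variance and illustrative parameter values:

```python
import numpy as np
from scipy.stats import norm

def ordinal_probit_probs(mu, cutpoints):
    """Probability of each ordered category for a latent normal score
    with mean mu and unit variance, given interior cutpoints:
    P(Y = k) = Phi(c_k - mu) - Phi(c_{k-1} - mu)."""
    c = np.concatenate(([-np.inf], cutpoints, [np.inf]))
    return np.diff(norm.cdf(c - mu))

# Diseased vs non-diseased classes shift the latent mean; the test
# reports 4 ordered result categories (3 cutpoints).
print(ordinal_probit_probs(mu=1.0, cutpoints=[-0.5, 0.5, 1.5]))  # diseased
print(ordinal_probit_probs(mu=0.0, cutpoints=[-0.5, 0.5, 1.5]))  # non-diseased
```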
Peer reviewed
Direct link
Wilson, Joseph; Pollard, Benjamin; Aiken, John M.; Lewandowski, H. J. – Physical Review Physics Education Research, 2022
Surveys have long been used in physics education research to understand student reasoning and inform course improvements. However, to make analysis of large sets of responses practical, most surveys use a closed-response format with a small set of potential responses. Open-ended formats, such as written free response, can provide deeper insights…
Descriptors: Natural Language Processing, Science Education, Physics, Artificial Intelligence
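A common baseline for coding free responses at scale, offered here as a hedged sketch rather than the authors' pipeline, is a bag-of-words text classifier; the coding scheme and example responses below are hypothetical:

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy labeled responses (hypothetical coding scheme: does the answer
# mention measurement uncertainty?)
texts = ["I averaged repeated trials to reduce error",
         "the voltmeter reading fluctuated so I took the mean",
         "light behaves as a wave in this experiment",
         "the photon model explains the photoelectric effect"]
labels = [1, 1, 0, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["I repeated the measurement several times"]))
```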
Peer reviewed
Direct link
Sahu, Archana; Bhowmick, Plaban Kumar – IEEE Transactions on Learning Technologies, 2020
In this paper, we studied different automatic short answer grading (ASAG) systems to provide a comprehensive view of the feature spaces explored by previous works. While the performance reported in previous works has been encouraging, systematic study of the features is lacking. Apart from providing systematic feature space exploration, we also…
Descriptors: Automation, Grading, Test Format, Artificial Intelligence
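The hand-crafted feature families such surveys catalog can be illustrated with a few classic similarity features; the exact feature set below is illustrative, not the paper's taxonomy:

```python
def overlap_features(student: str, reference: str) -> dict:
    """A few classic hand-crafted ASAG features (illustrative only):
    token overlap against the reference answer and a length ratio."""
    s, r = set(student.lower().split()), set(reference.lower().split())
    return {
        "jaccard": len(s & r) / len(s | r) if s | r else 0.0,
        "recall": len(s & r) / len(r) if r else 0.0,
        "len_ratio": len(student.split()) / max(len(reference.split()), 1),
    }

print(overlap_features("plants make food by photosynthesis",
                       "photosynthesis lets plants make their own food"))
```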
Peer reviewed
Direct link
Lawrence T. DeCarlo – Educational and Psychological Measurement, 2024
A psychological framework for different types of items commonly used with mixed-format exams is proposed. A choice model based on signal detection theory (SDT) is used for multiple-choice (MC) items, whereas an item response theory (IRT) model is used for open-ended (OE) items. The SDT and IRT models are shown to share a common conceptualization…
Descriptors: Test Format, Multiple Choice Tests, Item Response Theory, Models
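Under an equal-variance Gaussian SDT account of an m-alternative MC item, the probability of a correct choice is the integral of phi(t - d') * Phi(t)^(m-1) over t, which reduces to guessing (1/m) when d' = 0. A numerical sketch of that standard framing (not the paper's full model):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def mc_correct_prob(d_prime, m):
    """P(correct) for an m-alternative MC item under an equal-variance
    Gaussian signal detection model: the correct option's familiarity
    (mean d') must exceed all m-1 distractors (mean 0)."""
    integrand = lambda t: norm.pdf(t - d_prime) * norm.cdf(t) ** (m - 1)
    return quad(integrand, -np.inf, np.inf)[0]

print(mc_correct_prob(0.0, 4))   # no discriminability -> 0.25 (guessing)
print(mc_correct_prob(1.5, 4))   # stronger memory signal -> higher accuracy
```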
Peer reviewed
Direct link
Huang, Hung-Yu – Educational and Psychological Measurement, 2023
Forced-choice (FC) item formats used for noncognitive tests typically present a set of response options that measure different traits and instruct respondents to judge among these options according to their preference, in order to control the response biases commonly observed in normative tests. Diagnostic classification models (DCMs)…
Descriptors: Test Items, Classification, Bayesian Statistics, Decision Making
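DCMs classify respondents into discrete attribute-mastery profiles rather than locating them on continuous traits. As a hedged illustration of the model family (the DINA model, a standard DCM, not necessarily the one developed in the paper), the item response probability is g^(1-eta) * (1-s)^eta:

```python
import numpy as np

def dina_prob(alpha, q_row, guess, slip):
    """DINA model: P(X=1 | alpha) = g^(1-eta) * (1-s)^eta, where eta = 1
    iff the respondent masters every attribute the item requires."""
    eta = int(np.all(alpha[q_row == 1] == 1))
    return guess ** (1 - eta) * (1 - slip) ** eta

alpha = np.array([1, 0, 1])      # mastery profile over 3 attributes
q_row = np.array([1, 0, 1])      # attributes required by this item
print(dina_prob(alpha, q_row, guess=0.2, slip=0.1))   # master -> 0.9
```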
Peer reviewed
Direct link
Crowther, Gregory J.; Knight, Thomas A. – Advances in Physiology Education, 2023
The past ~15 years have seen increasing interest in defining disciplinary core concepts. Within the field of physiology, Michael, McFarland, Modell, and colleagues have published studies that defined physiology core concepts and have elaborated many of these as detailed conceptual frameworks. With such helpful definitions now in…
Descriptors: Test Format, Physiology, Higher Education, Concept Teaching
Peer reviewed
Direct link
Stephane E. Collignon; Josey Chacko; Salman Nazir – Journal of Information Systems Education, 2024
Most business schools require students to take at least one technical Management Information System (MIS) course. Due to the technical nature of the material, the course and the assessments tend to be anxiety inducing. With over three out of every five students in US colleges suffering from "overwhelming anxiety" in some form, we study…
Descriptors: Multiple Choice Tests, Test Format, Business Schools, Information Systems