Showing 16 to 30 of 3,650 results
Peer reviewed
Qi Huang; Daniel M. Bolt; Xiangyi Liao – Journal of Educational Measurement, 2025
Item response theory (IRT) encompasses a broader class of measurement models than is commonly appreciated by practitioners in educational measurement. For measures of vocabulary and its development, we show how psychological theory might in certain instances support unipolar IRT modeling as a superior alternative to the more traditional bipolar…
Descriptors: Educational Theories, Item Response Theory, Vocabulary Development, Models
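The contrast between bipolar and unipolar IRT in the entry above can be made concrete with item response functions. The sketch below compares a standard 2PL (trait on the whole real line) with one log-logistic unipolar form discussed in the psychometric literature (trait restricted to non-negative values); the parameter values and the specific unipolar form are illustrative assumptions, not necessarily the model used in this article.

```python
import numpy as np

def bipolar_2pl(theta, a, b):
    """Standard (bipolar) 2PL item response function; theta ranges over (-inf, inf)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def unipolar_log_logistic(theta, lam, eta):
    """One log-logistic unipolar form: theta >= 0, and P(correct) -> 0 as theta -> 0.
    Illustrative only; the article may use a different unipolar specification."""
    return (lam * theta**eta) / (1.0 + lam * theta**eta)

print(bipolar_2pl(np.linspace(-3, 3, 7), a=1.2, b=0.0))
print(unipolar_log_logistic(np.linspace(0, 3, 7), lam=1.0, eta=1.5))
```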
Peer reviewed
Yueran Yang; Janice L. Burke; Justice Healy – Cognitive Research: Principles and Implications, 2025
"How do witnesses make identification decisions when viewing a lineup?" Understanding the witness decision-making process is essential for researchers to develop methods that can reduce mistaken identifications and improve lineup practices. Yet, the inclusion of fillers has posed a pivotal challenge to this task because the traditional…
Descriptors: Audiences, Audience Response, Identification, Decision Making
Peer reviewed
Henninger, Mirka – Journal of Educational Measurement, 2021
Item Response Theory models with varying thresholds are essential tools to account for unknown types of response tendencies in rating data. However, in order to separate constructs to be measured and response tendencies, specific constraints have to be imposed on varying thresholds and their interrelations. In this article, a multidimensional…
Descriptors: Response Style (Tests), Item Response Theory, Models, Computation
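To make the role of varying thresholds concrete, here is a minimal partial credit model sketch in which a response tendency is mimicked by shifting the outer thresholds; the threshold values and the shift are hypothetical and are not the multidimensional constraint scheme proposed in the article.

```python
import numpy as np

def pcm_probs(theta, thresholds):
    """Partial credit model category probabilities for one item.
    thresholds: step parameters delta_1..delta_K; returns P(category 0..K)."""
    cum = np.concatenate(([0.0], np.cumsum(theta - np.asarray(thresholds))))
    num = np.exp(cum - cum.max())          # subtract max for numerical stability
    return num / num.sum()

# Same trait level, but the second call widens the outer thresholds, which
# pushes probability mass toward the middle categories (a midpoint tendency).
print(pcm_probs(0.5, [-1.0, 0.0, 1.0]))
print(pcm_probs(0.5, [-2.0, 0.0, 2.0]))
```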
Peer reviewed
Jesús Pérez; Eladio Dapena; Jose Aguilar – Education and Information Technologies, 2024
In tutoring systems, a pedagogical policy, which decides the next action for the tutor to take, is important because it determines how well students will learn. An effective pedagogical policy must adapt its actions according to the student's features, such as knowledge, error patterns, and emotions. For adapting difficulty, it is common to…
Descriptors: Feedback (Response), Intelligent Tutoring Systems, Reinforcement, Difficulty Level
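As a point of reference for how a difficulty-adapting rule can work, the sketch below implements a simple staircase policy (raise difficulty after a correct answer, lower it after an error). This is a common baseline shown only for illustration; it is not the pedagogical policy learned or evaluated in the article.

```python
def staircase_policy(difficulty, correct, step=1, d_min=1, d_max=10):
    """Baseline adaptive-difficulty rule: step up after a correct response,
    step down after an incorrect one, clipped to the allowed range."""
    difficulty += step if correct else -step
    return max(d_min, min(d_max, difficulty))

d = 5
for outcome in [True, True, False, True]:
    d = staircase_policy(d, outcome)
    print(d)   # 6, 7, 6, 7
```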
Peer reviewed
Pere J. Ferrando; Fabia Morales-Vives; Ana Hernández-Dorado – Educational and Psychological Measurement, 2024
In recent years, some models for binary and graded format responses have been proposed to assess unipolar variables or "quasi-traits." These studies have mainly focused on clinical variables that have traditionally been treated as bipolar traits. In the present study, we have made a proposal for unipolar traits measured with continuous…
Descriptors: Item Analysis, Goodness of Fit, Accuracy, Test Validity
Peer reviewed
Ken A. Fujimoto; Carl F. Falk – Educational and Psychological Measurement, 2024
Item response theory (IRT) models are often compared with respect to predictive performance to determine the dimensionality of rating scale data. However, such model comparisons could be biased toward nested-dimensionality IRT models (e.g., the bifactor model) when comparing those models with non-nested-dimensionality IRT models (e.g., a…
Descriptors: Item Response Theory, Rating Scales, Predictive Measurement, Bayesian Statistics
Peer reviewed
Junhuan Wei; Qin Wang; Buyun Dai; Yan Cai; Dongbo Tu – Journal of Educational Measurement, 2024
Traditional IRT and IRTree models are not appropriate for analyzing items that combine a multiple-choice (MC) task and a constructed-response (CR) task within a single item. To address this issue, this study proposed an item response tree model (called IRTree-MR) to accommodate items that contain different response types at different…
Descriptors: Item Response Theory, Models, Multiple Choice Tests, Cognitive Processes
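The generic IRTree idea behind this entry is to decompose an observed response into a sequence of node outcomes (pseudo-items), each of which can receive its own measurement model. The mapping below, with an MC node followed by a CR node that is only reached when the MC node succeeds, is a hypothetical illustration of that decomposition, not the IRTree-MR specification.

```python
def irtree_pseudo_items(mc_correct, cr_score):
    """Decompose one mixed-format item into two tree nodes (pseudo-items).
    Node 1: was the MC task solved?  Node 2: was the CR task credited,
    coded only when node 1 succeeds; None marks a structurally missing node."""
    node1 = int(mc_correct)
    node2 = (int(cr_score > 0) if mc_correct else None)
    return [node1, node2]

print(irtree_pseudo_items(True, 2))    # [1, 1]
print(irtree_pseudo_items(True, 0))    # [1, 0]
print(irtree_pseudo_items(False, 0))   # [0, None]
```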
Peer reviewed
PDF on ERIC
Vikki Pollard; Christine Armatas – Online Learning, 2025
The Interactive, Constructive, Active, Passive (ICAP) Framework (Chi & Wylie, 2014) is used to review and develop active learning in higher education. It is a hierarchical model based on overt behaviours seen by the teacher in the classroom. This principle is acknowledged as a limitation, especially in the case of online modes of study. In…
Descriptors: Active Learning, Online Courses, Asynchronous Communication, Feedback (Response)
Peer reviewed
PDF on ERIC
Bogdan Yamkovenko; Charlie A. R. Hogg; Maya Miller-Vedam; Phillip Grimaldi; Walt Wells – International Educational Data Mining Society, 2025
Knowledge tracing (KT) models predict how students will perform on future interactions, given a sequence of prior responses. Modern approaches to KT leverage "deep learning" techniques to produce more accurate predictions, potentially making personalized learning paths more efficacious for learners. Many papers on the topic of KT focus…
Descriptors: Algorithms, Artificial Intelligence, Models, Prediction
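For readers new to knowledge tracing, the classical Bayesian Knowledge Tracing (BKT) update below shows the basic condition-then-transition loop that deep KT models replace with learned sequence models; the parameter values are illustrative, and this is the standard BKT recursion rather than any model from the paper.

```python
def bkt_update(p_learned, correct, slip=0.1, guess=0.2, transit=0.15):
    """One Bayesian Knowledge Tracing step: condition the mastery probability
    on the observed response, then apply the learning transition."""
    if correct:
        post = p_learned * (1 - slip) / (p_learned * (1 - slip) + (1 - p_learned) * guess)
    else:
        post = p_learned * slip / (p_learned * slip + (1 - p_learned) * (1 - guess))
    return post + (1 - post) * transit

p = 0.3
for obs in [1, 1, 0, 1]:
    p = bkt_update(p, obs)
    print(round(p, 3))
```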
Ge, Yuan – ProQuest LLC, 2022
My dissertation research explored responder behaviors (e.g., response styles, carelessness, and misconceptions) that compromise psychometric quality and impact the interpretation and use of assessment results. Identifying these behaviors can help researchers understand and minimize their potentially construct-irrelevant…
Descriptors: Test Wiseness, Response Style (Tests), Item Response Theory, Psychometrics
Peer reviewed
Leventhal, Brian C.; Zigler, Christina K. – Measurement: Interdisciplinary Research and Perspectives, 2023
Survey score interpretations are often plagued by sources of construct-irrelevant variation, such as response styles. In this study, we propose the use of an IRTree Model to account for response styles by making use of self-report items and anchoring vignettes. Specifically, we investigate how the IRTree approach with anchoring vignettes compares…
Descriptors: Scores, Vignettes, Response Style (Tests), Item Response Theory
Peer reviewed
Wind, Stefanie A. – Educational and Psychological Measurement, 2023
Rating scale analysis techniques provide researchers with practical tools for examining the degree to which ordinal rating scales (e.g., Likert-type scales or performance assessment rating scales) function in psychometrically useful ways. When rating scales function as expected, researchers can interpret ratings in the intended direction (i.e.,…
Descriptors: Rating Scales, Testing Problems, Item Response Theory, Models
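One routine check of whether a rating scale functions in the intended direction is whether the average trait estimate of respondents choosing each category increases with the category. The sketch below performs that check on made-up data; it illustrates the general idea rather than the specific diagnostics discussed in the article.

```python
import numpy as np

def category_means_increase(theta, ratings):
    """Return the mean trait estimate per observed category and whether
    those means are strictly increasing across categories."""
    theta, ratings = np.asarray(theta, float), np.asarray(ratings)
    cats = np.unique(ratings)
    means = np.array([theta[ratings == c].mean() for c in cats])
    return dict(zip(cats.tolist(), means.round(2))), bool(np.all(np.diff(means) > 0))

theta = [-1.2, -0.8, -0.1, 0.3, 0.9, 1.5]   # hypothetical person measures
ratings = [0, 0, 1, 1, 2, 2]                # hypothetical Likert responses
print(category_means_increase(theta, ratings))
```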
Peer reviewed
Engelhard, George – Educational and Psychological Measurement, 2023
The purpose of this study is to introduce a functional approach for modeling unfolding response data. Functional data analysis (FDA) has been used for examining cumulative item response data, but a functional approach has not been systematically used with unfolding response processes. A brief overview of FDA is presented and illustrated within the…
Descriptors: Data Analysis, Models, Responses, Test Items
Peer reviewed
Youn Seon Lim; Catherine Bangeranye – International Journal of Testing, 2024
Feedback is a powerful instructional tool for motivating learning. But effective feedback requires that instructors have accurate information about their students' current knowledge status and their learning progress. In modern educational measurement, two major theoretical perspectives on student ability and proficiency can be distinguished.…
Descriptors: Cognitive Measurement, Diagnostic Tests, Item Response Theory, Case Studies
Peer reviewed
Jean-Paul Fox – Journal of Educational and Behavioral Statistics, 2025
Popular item response theory (IRT) models are considered complex, mainly due to the inclusion of a random factor variable (latent variable). The random factor variable gives rise to the incidental parameter problem, since the number of parameters increases as data from new persons are included. Therefore, IRT models require a specific estimation method…
Descriptors: Sample Size, Item Response Theory, Accuracy, Bayesian Statistics
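The "specific estimation method" alluded to above is typically marginal maximum likelihood, which integrates the latent variable out so that the number of estimated parameters stays fixed as persons are added. A minimal Gauss-Hermite sketch for one person's 2PL responses follows; the item parameters and quadrature size are assumptions for illustration.

```python
import numpy as np

def marginal_loglik_2pl(responses, a, b, n_quad=21):
    """Marginal log-likelihood of one person's 0/1 response vector under a 2PL,
    integrating the latent trait over a standard-normal prior with
    Gauss-Hermite quadrature (nodes rescaled by sqrt(2))."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_quad)
    theta = np.sqrt(2.0) * nodes
    w = weights / np.sqrt(np.pi)
    p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))   # quad points x items
    lik = np.prod(np.where(responses, p, 1.0 - p), axis=1)
    return float(np.log(w @ lik))

a = np.array([1.0, 1.5, 0.8])    # hypothetical discriminations
b = np.array([-0.5, 0.0, 1.0])   # hypothetical difficulties
print(marginal_loglik_2pl(np.array([1, 0, 1], dtype=bool), a, b))
```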