Publication Date
| Date Range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 16 |
| Since 2022 (last 5 years) | 132 |
| Since 2017 (last 10 years) | 339 |
| Since 2007 (last 20 years) | 930 |
Descriptor
| Descriptor | Records |
| --- | --- |
| Item Response Theory | 1169 |
| Models | 1169 |
| Test Items | 341 |
| Simulation | 246 |
| Psychometrics | 211 |
| Computation | 193 |
| Comparative Analysis | 192 |
| Foreign Countries | 175 |
| Goodness of Fit | 169 |
| Statistical Analysis | 152 |
| Evaluation Methods | 148 |
Audience
| Audience | Records |
| --- | --- |
| Researchers | 8 |
| Practitioners | 2 |
| Students | 1 |
Location
| Location | Records |
| --- | --- |
| Germany | 16 |
| Netherlands | 10 |
| Taiwan | 10 |
| China | 9 |
| Turkey | 9 |
| Canada | 7 |
| Iran | 7 |
| Singapore | 7 |
| California | 6 |
| Hong Kong | 6 |
| Spain | 6 |
Laws, Policies, & Programs
| Law / Program | Records |
| --- | --- |
| Education Consolidation… | 1 |
| Education for All Handicapped… | 1 |
| Individuals with Disabilities… | 1 |
| Race to the Top | 1 |
Hanke Vermeiren; Abe D. Hofman; Maria Bolsinova – International Educational Data Mining Society, 2025
The traditional Elo rating system (ERS), widely used as a student model in adaptive learning systems, assumes unidimensionality (i.e., all items measure a single ability or skill), limiting its ability to handle multidimensional data common in educational contexts. In response, several multidimensional extensions of the Elo rating system have been…
Descriptors: Item Response Theory, Models, Comparative Analysis, Algorithms
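A minimal sketch of the unidimensional Elo update this abstract describes, assuming the standard logistic expected-score function and a constant K factor (both common choices; the function and variable names are illustrative):

```python
import math

def elo_update(theta, beta, correct, k=0.4):
    """One Elo rating step for a student-item interaction.

    theta   -- current student ability estimate
    beta    -- current item difficulty estimate
    correct -- 1 if the response was correct, else 0
    k       -- step size (a constant K factor; adaptive variants exist)
    """
    expected = 1.0 / (1.0 + math.exp(-(theta - beta)))  # P(correct | theta, beta)
    theta_new = theta + k * (correct - expected)  # ability moves toward the outcome
    beta_new = beta - k * (correct - expected)    # difficulty moves the opposite way
    return theta_new, beta_new

# Example: an average student answers a slightly harder item correctly.
print(elo_update(theta=0.0, beta=0.5, correct=1))
```

Multidimensional extensions of this scheme replace the scalar theta with a vector of per-skill ratings and update only the dimensions an item is assumed to measure.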
Paul A. Jewsbury; J. R. Lockwood; Matthew S. Johnson – Large-scale Assessments in Education, 2025
Many large-scale assessments model proficiency with a latent regression on contextual variables. Item-response data are used to estimate the parameters of the latent variable model and, in conjunction with the contextual data, to generate plausible values of individuals' proficiency attributes. These models typically incorporate numerous…
Descriptors: Item Response Theory, Data Use, Models, Evaluation Methods
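In generic notation (not necessarily the authors'), the latent regression and the plausible-value step the abstract describes can be written as:

```latex
% Latent regression of proficiency on contextual covariates x_p
\theta_p \mid \mathbf{x}_p \sim N\!\left(\boldsymbol{\Gamma}^{\top}\mathbf{x}_p,\; \sigma^2\right)

% Plausible values: draws from the posterior given responses y_p and context x_p
\tilde{\theta}_p \sim p(\theta_p \mid \mathbf{y}_p, \mathbf{x}_p)
  \propto P(\mathbf{y}_p \mid \theta_p)\, p(\theta_p \mid \mathbf{x}_p)
```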
Qi Huang; Daniel M. Bolt; Xiangyi Liao – Journal of Educational Measurement, 2025
Item response theory (IRT) encompasses a broader class of measurement models than is commonly appreciated by practitioners in educational measurement. For measures of vocabulary and its development, we show how psychological theory might in certain instances support unipolar IRT modeling as a superior alternative to the more traditional bipolar…
Descriptors: Educational Theories, Item Response Theory, Vocabulary Development, Models
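To illustrate the contrast in generic notation (the specific unipolar form below is an assumption, one log-logistic variant from this literature): the traditional bipolar 2PL places ability on the whole real line, while a unipolar model restricts it to positive values:

```latex
% Bipolar 2PL: \theta \in (-\infty, \infty)
P(X_i = 1 \mid \theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}}

% A unipolar log-logistic alternative: \theta \in (0, \infty)
P(X_i = 1 \mid \theta) = \frac{\lambda_i \theta^{\alpha_i}}{1 + \lambda_i \theta^{\alpha_i}}
```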
Ken A. Fujimoto; Carl F. Falk – Educational and Psychological Measurement, 2024
Item response theory (IRT) models are often compared with respect to predictive performance to determine the dimensionality of rating scale data. However, such model comparisons could be biased toward nested-dimensionality IRT models (e.g., the bifactor model) when comparing those models with non-nested-dimensionality IRT models (e.g., a…
Descriptors: Item Response Theory, Rating Scales, Predictive Measurement, Bayesian Statistics
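For context, a bifactor model has "nested" dimensionality in the sense that every item loads on a general factor plus one specific factor (generic notation, not the authors'):

```latex
% Bifactor decomposition: general factor \theta_G plus specific factor \theta_s
X_i = \lambda_{iG}\,\theta_G + \lambda_{is}\,\theta_s + \varepsilon_i,
\qquad \theta_G \perp \theta_s
```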
Junhuan Wei; Qin Wang; Buyun Dai; Yan Cai; Dongbo Tu – Journal of Educational Measurement, 2024
Traditional IRT and IRTree models are not appropriate for analyzing items that simultaneously contain a multiple-choice (MC) task and a constructed-response (CR) task within a single item. To address this issue, this study proposed an item response tree model (called IRTree-MR) to accommodate items that contain different response types at different…
Descriptors: Item Response Theory, Models, Multiple Choice Tests, Cognitive Processes
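A generic two-node IRTree factorization of the kind such a model builds on (illustrative, not the paper's exact specification): the probability of a full response pattern is a product of conditional node probabilities along the tree path:

```latex
% Node 1: MC task; node 2: CR task, reached conditional on the MC outcome
P\big(Y = (y_1, y_2) \mid \boldsymbol{\theta}\big)
  = P_1(y_1 \mid \theta_1)\; P_2(y_2 \mid \theta_2, y_1)
```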
Bogdan Yamkovenko; Charlie A. R. Hogg; Maya Miller-Vedam; Phillip Grimaldi; Walt Wells – International Educational Data Mining Society, 2025
Knowledge tracing (KT) models predict how students will perform on future interactions, given a sequence of prior responses. Modern approaches to KT leverage "deep learning" techniques to produce more accurate predictions, potentially making personalized learning paths more efficacious for learners. Many papers on the topic of KT focus…
Descriptors: Algorithms, Artificial Intelligence, Models, Prediction
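As a point of reference for the simpler baselines such papers compare against, here is a minimal Bayesian Knowledge Tracing update, a classical (non-deep) KT model; the parameter values below are illustrative:

```python
def bkt_step(p_know, correct, guess=0.2, slip=0.1, learn=0.15):
    """One Bayesian Knowledge Tracing update.

    p_know  -- prior P(skill is known) before this observation
    correct -- 1 if the response was correct, else 0
    guess/slip/learn -- standard BKT parameters
    """
    if correct:
        num = p_know * (1 - slip)
        den = num + (1 - p_know) * guess
    else:
        num = p_know * slip
        den = num + (1 - p_know) * (1 - guess)
    posterior = num / den                       # P(known | this response)
    return posterior + (1 - posterior) * learn  # chance of learning after practice

# Example: posterior knowledge estimate over a short response sequence.
p = 0.3
for obs in [1, 1, 0, 1]:
    p = bkt_step(p, obs)
    print(round(p, 3))
```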
Wind, Stefanie A. – Educational and Psychological Measurement, 2023
Rating scale analysis techniques provide researchers with practical tools for examining the degree to which ordinal rating scales (e.g., Likert-type scales or performance assessment rating scales) function in psychometrically useful ways. When rating scales function as expected, researchers can interpret ratings in the intended direction (i.e.,…
Descriptors: Rating Scales, Testing Problems, Item Response Theory, Models
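One standard tool in such analyses, shown here in generic form, is Andrich's rating scale model; ordered threshold estimates (the tau parameters) are one indication that categories function in the intended direction:

```latex
% Andrich rating scale model: probability of category k on item i
P(X_i = k \mid \theta) =
  \frac{\exp\!\big(\sum_{j=0}^{k} (\theta - \delta_i - \tau_j)\big)}
       {\sum_{m=0}^{M} \exp\!\big(\sum_{j=0}^{m} (\theta - \delta_i - \tau_j)\big)},
\qquad \tau_0 \equiv 0
```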
Jean-Paul Fox – Journal of Educational and Behavioral Statistics, 2025
Popular item response theory (IRT) models are considered complex, mainly due to the inclusion of a random factor variable (latent variable). The random factor variable gives rise to the incidental parameter problem, since the number of parameters grows as data from new persons are included. Therefore, IRT models require a specific estimation method…
Descriptors: Sample Size, Item Response Theory, Accuracy, Bayesian Statistics
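The estimation issue the abstract refers to is commonly addressed with marginal maximum likelihood, which removes the incidental person parameters by integrating over the latent distribution (generic notation):

```latex
% Marginal likelihood: person parameter \theta integrated out for each person p
L(\boldsymbol{\beta}) = \prod_{p=1}^{P} \int \prod_{i=1}^{I}
  P(x_{pi} \mid \theta, \boldsymbol{\beta})\; \phi(\theta)\, d\theta
```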
Gerhard Tutz; Pascal Jordan – Journal of Educational and Behavioral Statistics, 2024
A general framework of latent trait item response models for continuous responses is given. In contrast to classical test theory (CTT) models, which traditionally distinguish between true scores and error scores, the responses are clearly linked to latent traits. It is shown that CTT models can be derived as special cases, but the model class is…
Descriptors: Item Response Theory, Responses, Scores, Models
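The link drawn in the abstract can be sketched in generic notation: CTT splits an observed score into true and error parts, while the latent trait model ties the response to a trait through a loading, so CTT appears as the special case where the true score is a linear function of the trait:

```latex
% Classical test theory
X_i = T_i + E_i
% Latent trait model for continuous responses
X_i = \mu_i + \lambda_i \theta + \varepsilon_i
\quad\Longrightarrow\quad T_i = \mu_i + \lambda_i \theta
```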
Sohee Kim; Ki Lynn Cole – International Journal of Testing, 2025
This study conducted a comprehensive comparison of Item Response Theory (IRT) linking methods applied to a bifactor model, examining their performance on both multiple choice (MC) and mixed format tests within the common item nonequivalent group design framework. Four distinct multidimensional IRT linking approaches were explored, consisting of…
Descriptors: Item Response Theory, Comparative Analysis, Models, Item Analysis
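In the unidimensional case, such linking reduces to the familiar linear transformation of the ability metric, with 2PL item parameters adjusted to match; the multidimensional methods compared in studies like this one estimate matrix analogues of the constants A and B (generic notation):

```latex
% Linear linking of the \theta metric and 2PL item parameters
\theta^{*} = A\theta + B, \qquad
a_i^{*} = \frac{a_i}{A}, \qquad
b_i^{*} = A b_i + B
```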
Jianbin Fu; Xuan Tan; Patrick C. Kyllonen – Journal of Educational Measurement, 2024
This paper presents the item and test information functions of the Rank two-parameter logistic models (Rank-2PLM) for items with two (pair) and three (triplet) statements in forced-choice questionnaires. The Rank-2PLM model for pairs is the MUPP-2PLM (Multi-Unidimensional Pairwise Preference) and, for triplets, is the Triplet-2PLM. Fisher's…
Descriptors: Questionnaires, Test Items, Item Response Theory, Models
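For reference, the item response function and Fisher information of a standard 2PL item, the building block that such forced-choice models generalize to pairwise and triplet comparisons (generic notation):

```latex
P_i(\theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}}, \qquad
I_i(\theta) = a_i^{2}\, P_i(\theta)\,\big(1 - P_i(\theta)\big)
```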
Sooyong Lee; Suhwa Han; Seung W. Choi – Journal of Educational Measurement, 2024
Research has shown that multiple-indicator multiple-cause (MIMIC) models can result in inflated Type I error rates in detecting differential item functioning (DIF) when the assumption of equal latent variance is violated. This study explains how the violation of the equal variance assumption adversely impacts the detection of nonuniform DIF and…
Descriptors: Factor Analysis, Bayesian Statistics, Test Bias, Item Response Theory
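A sketch of the MIMIC DIF setup in generic notation (not the authors' exact parameterization): the grouping covariate z affects the latent variable and, when uniform DIF is present, the item directly; the assumption at issue is that the residual variance of the latent variable is equal across groups:

```latex
% MIMIC model with a direct effect (uniform DIF) of group indicator z on item i
y_i^{*} = \lambda_i \theta + \beta_i z + \varepsilon_i, \qquad
\theta = \gamma z + \zeta
% \beta_i = 0 for a DIF-free item; nonuniform DIF adds an interaction term \omega_i\,\theta z
```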
Aiman Mohammad Freihat; Omar Saleh Bani Yassin – Educational Process: International Journal, 2025
Background/purpose: This study aimed to determine the accuracy of estimating multiple-choice test item parameters under item response theory models. Materials/methods: The researchers relied on measurement accuracy indicators, which express the absolute difference between the estimated and actual values of the…
Descriptors: Accuracy, Computation, Multiple Choice Tests, Test Items
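The accuracy indicator described, the absolute difference between estimated and true parameter values, corresponds (averaged over items) to the mean absolute error; root mean square error is the usual companion measure (generic notation):

```latex
\mathrm{MAE} = \frac{1}{n}\sum_{j=1}^{n}\big|\hat{v}_j - v_j\big|,
\qquad
\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{j=1}^{n}\big(\hat{v}_j - v_j\big)^{2}}
```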
Selena Wang – ProQuest LLC, 2022
A research question that is of interest across many disciplines is whether and how relationships in a network are related to the attributes of the nodes of the network. In this dissertation, we propose two joint frameworks for modeling the relationship between the network and attributes. In the joint latent space model in Chapter 2, shared latent…
Descriptors: Networks, Item Response Theory, Models, Statistical Analysis
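For orientation, the standard latent space network model that such joint frameworks typically extend (generic form; the dissertation's joint models additionally share latent variables between the network and the node attributes):

```latex
% Latent space model: tie probability decays with distance in latent space
\operatorname{logit} P(y_{uv} = 1) = \alpha - \lVert \mathbf{z}_u - \mathbf{z}_v \rVert
```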
Kim, Yunsung; Sreechan; Piech, Chris; Thille, Candace – International Educational Data Mining Society, 2023
Dynamic Item Response Models extend the standard Item Response Theory (IRT) to capture temporal dynamics in learner ability. While these models have the potential to allow instructional systems to actively monitor the evolution of learner proficiency in real time, existing dynamic item response models rely on expensive inference algorithms that…
Descriptors: Item Response Theory, Accuracy, Inferences, Algorithms
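A minimal simulation sketch of one common dynamic IRT specification, assuming a random-walk ability process and 2PL responses (all names and parameter values are illustrative, not the paper's):

```python
import math
import random

def simulate_dynamic_irt(n_steps=10, tau=0.1, a=1.0, b=0.0, seed=0):
    """Simulate one learner under a random-walk dynamic IRT model.

    Ability follows theta_t = theta_{t-1} + N(0, tau^2);
    each response is Bernoulli with a 2PL success probability.
    """
    rng = random.Random(seed)
    theta, history = 0.0, []
    for _ in range(n_steps):
        theta += rng.gauss(0.0, tau)                   # ability drifts over time
        p = 1.0 / (1.0 + math.exp(-a * (theta - b)))   # 2PL response probability
        history.append((theta, int(rng.random() < p)))
    return history

for theta, y in simulate_dynamic_irt():
    print(f"theta={theta:+.2f}  correct={y}")
```

Inference then amounts to recovering theta_t from the response stream in real time, which is where the expensive algorithms the abstract mentions come in.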

