Publication Date
  In 2025: 22
  Since 2024: 130
  Since 2021 (last 5 years): 442
  Since 2016 (last 10 years): 1026
  Since 2006 (last 20 years): 2780
Descriptor
  Models: 3612
  Item Response Theory: 1161
  Feedback (Response): 1108
  Foreign Countries: 618
  Responses: 539
  Emotional Response: 433
  Test Items: 383
  Teaching Methods: 382
  Comparative Analysis: 340
  Simulation: 322
  Evaluation Methods: 314
Audience
  Teachers: 40
  Researchers: 31
  Practitioners: 24
  Administrators: 14
  Counselors: 12
  Policymakers: 4
  Support Staff: 4
  Media Staff: 3
  Parents: 3
  Students: 2
Location
  Australia: 73
  United Kingdom: 40
  Canada: 39
  California: 36
  Germany: 36
  China: 32
  United States: 32
  United Kingdom (England): 29
  Taiwan: 27
  Netherlands: 25
  Florida: 23
What Works Clearinghouse Rating
  Does not meet standards: 2
Gerhard Tutz; Pascal Jordan – Journal of Educational and Behavioral Statistics, 2024
A general framework of latent trait item response models for continuous responses is given. In contrast to classical test theory (CTT) models, which traditionally distinguish between true scores and error scores, the proposed models link responses directly to latent traits. It is shown that CTT models can be derived as special cases, but the model class is…
Descriptors: Item Response Theory, Responses, Scores, Models
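For readers unfamiliar with the contrast drawn in this entry, the classical test theory decomposition and a generic latent trait formulation for a continuous response can be set side by side (a textbook illustration, not the specific framework proposed in the article):
\[
X = T + E, \qquad \mathbb{E}(X \mid \theta) = \mu(\theta),
\]
where $X$ is the observed response, $T$ the true score, $E$ the error, and $\theta$ the latent trait. Identifying $T$ with the model-implied conditional mean $\mu(\theta)$ is the sense in which a CTT model can arise as a special case of a latent trait model.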
Jochen Ranger; Christoph König; Benjamin W. Domingue; Jörg-Tobias Kuhn; Andreas Frey – Journal of Educational and Behavioral Statistics, 2024
In the existing multidimensional extensions of the log-normal response time (LNRT) model, the log response times are decomposed into a linear combination of several latent traits. These models are fully compensatory, as low levels on some traits can be counterbalanced by high levels on other traits. We propose an alternative multidimensional extension…
Descriptors: Models, Statistical Distributions, Item Response Theory, Response Rates (Questionnaires)
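The unidimensional log-normal response time model that such multidimensional extensions build on is commonly written as (a standard form from the response time literature, not the authors' proposed alternative):
\[
\log T_{ij} = \beta_i - \tau_j + \varepsilon_{ij}, \qquad \varepsilon_{ij} \sim N(0, \sigma_i^2),
\]
with time intensity $\beta_i$ for item $i$ and speed $\tau_j$ for person $j$. Replacing $\tau_j$ with a weighted sum $\sum_k a_{ik}\tau_{jk}$ gives the fully compensatory decomposition described above, since a low value on one speed dimension can be offset by a high value on another.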
Martijn Schoenmakers; Jesper Tijmstra; Jeroen Vermunt; Maria Bolsinova – Educational and Psychological Measurement, 2024
Extreme response style (ERS), the tendency of participants to select extreme item categories regardless of the item content, has frequently been found to decrease the validity of Likert-type questionnaire results. For this reason, various item response theory (IRT) models have been proposed to model ERS and correct for it. Comparisons of these…
Descriptors: Item Response Theory, Response Style (Tests), Models, Likert Scales
Mary Girgis; Josephine Paparo; Ian Kneebone – Journal of Intellectual & Developmental Disability, 2025
Background: Compared to their typically developing peers, children and adolescents with intellectual disabilities are at an increased risk of developing emotion regulation difficulties; this is especially the case for autistic individuals with intellectual disabilities. To better understand the emotion regulation experiences of children and…
Descriptors: Children, Adolescents, Intellectual Disability, Emotional Response
Matthew J. Madison; Stefanie Wind; Lientje Maas; Kazuhiro Yamaguchi; Sergio Haab – Grantee Submission, 2024
Diagnostic classification models (DCMs) are psychometric models designed to classify examinees according to their proficiency or nonproficiency on specified latent characteristics. These models are well suited for providing diagnostic and actionable feedback to support intermediate and formative assessment efforts. Several DCMs have been developed…
Descriptors: Diagnostic Tests, Classification, Models, Psychometrics
Matthew J. Madison; Stefanie A. Wind; Lientje Maas; Kazuhiro Yamaguchi; Sergio Haab – Journal of Educational Measurement, 2024
Diagnostic classification models (DCMs) are psychometric models designed to classify examinees according to their proficiency or nonproficiency on specified latent characteristics. These models are well suited for providing diagnostic and actionable feedback to support intermediate and formative assessment efforts. Several DCMs have been developed…
Descriptors: Diagnostic Tests, Classification, Models, Psychometrics
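As an illustration of the model family described in the two entries above, one widely used DCM is the DINA model (a generic example; the abstracts do not state which DCMs are compared):
\[
\eta_{jk} = \prod_{a=1}^{A} \alpha_{ja}^{\,q_{ka}}, \qquad
P(X_{jk} = 1 \mid \boldsymbol{\alpha}_j) = (1 - s_k)^{\eta_{jk}}\, g_k^{\,1-\eta_{jk}},
\]
where $\boldsymbol{\alpha}_j$ is examinee $j$'s binary attribute profile, $q_{ka}$ indicates whether item $k$ requires attribute $a$, and $s_k$ and $g_k$ are slip and guessing parameters. The estimated attribute profile, rather than a single score, is what supports the diagnostic, actionable feedback mentioned above.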
Markus T. Jansen; Ralf Schulze – Educational and Psychological Measurement, 2024
Thurstonian forced-choice modeling is considered a powerful new tool for estimating item and person parameters while simultaneously testing model fit. This assessment approach aims to reduce faking and other response tendencies that plague traditional self-report trait assessments. As a result of major recent…
Descriptors: Factor Analysis, Models, Item Analysis, Evaluation Methods
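A common starting point for this approach is Thurstone's comparative judgment applied to forced-choice blocks (a standard formulation, not the recent developments the abstract alludes to). Each item $i$ has a latent utility $t_i = \mu_i + \lambda_i \eta + \varepsilon_i$, and item $i$ is preferred to item $j$ whenever $t_i > t_j$, so that
\[
P(i \succ j \mid \eta) = \Phi\!\left(\frac{\mu_i - \mu_j + (\lambda_i - \lambda_j)\,\eta}{\sqrt{\psi_i^2 + \psi_j^2}}\right),
\]
where $\eta$ is the latent trait, $\lambda_i$ the loadings, and $\psi_i^2$ the uniquenesses. Because respondents must choose between items rather than rate each one, uniform response tendencies cancel out of the comparison, which is one rationale for the reduced faking mentioned above.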
Ö. Emre C. Alagöz; Thorsten Meiser – Educational and Psychological Measurement, 2024
To improve the validity of self-report measures, researchers should control for response style (RS) effects, which can be achieved with IRTree models. A traditional IRTree model considers a response as a combination of distinct decision-making processes, where the substantive trait affects the decision on response direction, while decisions about…
Descriptors: Item Response Theory, Validity, Self Evaluation (Individuals), Decision Making
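A minimal IRTree for a five-point Likert item illustrates the decomposition described above (a generic two-node example, not the article's full model): a direction node governed by the substantive trait $\theta$ and an extremity node governed by a response style trait $\eta$, each modeled as a logistic process,
\[
P(\text{agree}) = \frac{\exp\{a_1(\theta - b_{i1})\}}{1 + \exp\{a_1(\theta - b_{i1})\}}, \qquad
P(\text{extreme}) = \frac{\exp\{a_2(\eta - b_{i2})\}}{1 + \exp\{a_2(\eta - b_{i2})\}},
\]
so that, for example, $P(Y = 5) = P(\text{agree}) \cdot P(\text{extreme})$: the observed category probability factors into the probabilities of the sequential decisions.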
Hsieh, Shu-Hui; Perri, Pier Francesco – Sociological Methods & Research, 2022
We propose some theoretical and empirical advances by supplying the methodology for analyzing the factors that influence two sensitive variables when data are collected by randomized response (RR) survey modes. First, we provide the framework for obtaining the maximum likelihood estimates of logistic regression coefficients under the RR simple and…
Descriptors: Surveys, Models, Response Style (Tests), Marijuana
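To see how logistic regression enters such designs, consider Warner's randomized response scheme as a simple special case (an illustration only; the article treats other RR modes and two sensitive variables jointly). With design probability $p$ of answering the sensitive question directly and the sensitive trait modeled as $\pi(\mathbf{x}) = \operatorname{logit}^{-1}(\mathbf{x}^{\top}\boldsymbol{\beta})$, the probability of observing a "yes" is
\[
\lambda(\mathbf{x}) = p\,\pi(\mathbf{x}) + (1 - p)\bigl(1 - \pi(\mathbf{x})\bigr),
\]
and the maximum likelihood estimate of $\boldsymbol{\beta}$ is obtained from the Bernoulli likelihood of the observed answers with success probability $\lambda(\mathbf{x})$ rather than $\pi(\mathbf{x})$.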
Henninger, Mirka – Journal of Educational Measurement, 2021
Item Response Theory models with varying thresholds are essential tools for accounting for unknown types of response tendencies in rating data. However, in order to separate the constructs to be measured from response tendencies, specific constraints have to be imposed on the varying thresholds and their interrelations. In this article, a multidimensional…
Descriptors: Response Style (Tests), Item Response Theory, Models, Computation
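One generic way to let thresholds vary over persons, in the spirit of the models discussed above (not the article's specific multidimensional parameterization), is to shift the category thresholds of a partial credit model by a person-specific response tendency:
\[
P(Y_{ij} = k \mid \theta_j, \eta_j) \propto \exp\!\left(\sum_{l=1}^{k} \bigl(\theta_j - b_{il} - \gamma_l\,\eta_j\bigr)\right),
\]
where $\theta_j$ is the substantive trait, $\eta_j$ the response tendency, and $b_{il}$ the item thresholds. Without constraints on the weights $\gamma_l$ (for instance, centering them or fixing their pattern), the trait and the response tendency cannot be separated, which is the identification issue the abstract points to.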
Ge, Yuan – ProQuest LLC, 2022
My dissertation research explored responder behaviors (e.g., response styles, carelessness, and misconceptions) that compromise psychometric quality and affect the interpretation and use of assessment results. Identifying these behaviors can help researchers understand and minimize their potentially construct-irrelevant…
Descriptors: Test Wiseness, Response Style (Tests), Item Response Theory, Psychometrics
Jesús Pérez; Eladio Dapena; Jose Aguilar – Education and Information Technologies, 2024
In tutoring systems, a pedagogical policy, which decides the next action for the tutor to take, is important because it determines how well students will learn. An effective pedagogical policy must adapt its actions according to the student's features, such as knowledge, error patterns, and emotions. For adapting difficulty, it is common to…
Descriptors: Feedback (Response), Intelligent Tutoring Systems, Reinforcement, Difficulty Level
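As a minimal sketch of what adapting difficulty can mean in practice, the toy policy below raises or lowers exercise difficulty based on a student's recent success rate. All names, thresholds, and the update rule are illustrative assumptions, not the policy studied in the article:

class DifficultyPolicy:
    """Toy pedagogical policy: choose the next exercise difficulty
    from the student's recent success rate (illustrative only)."""

    LEVELS = ["easy", "medium", "hard"]

    def __init__(self, window=5):
        self.window = window       # how many recent answers to consider
        self.history = []          # 1 = correct, 0 = incorrect
        self.level = 1             # start at "medium"

    def update(self, correct):
        self.history.append(1 if correct else 0)
        recent = self.history[-self.window:]
        rate = sum(recent) / len(recent)
        if rate > 0.8 and self.level < len(self.LEVELS) - 1:
            self.level += 1        # student is coasting: raise difficulty
        elif rate < 0.4 and self.level > 0:
            self.level -= 1        # student is struggling: lower difficulty

    def next_action(self):
        return self.LEVELS[self.level]

policy = DifficultyPolicy()
for outcome in [True, True, True, True, True, False, False, False]:
    policy.update(outcome)
print(policy.next_action())

A reinforcement learning policy of the kind the abstract mentions would replace these fixed thresholds with a value function learned from student features such as knowledge, error patterns, and emotions.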
Pere J. Ferrando; Fabia Morales-Vives; Ana Hernández-Dorado – Educational and Psychological Measurement, 2024
In recent years, some models for binary and graded format responses have been proposed to assess unipolar variables or "quasi-traits." These studies have mainly focused on clinical variables that have traditionally been treated as bipolar traits. In the present study, we have made a proposal for unipolar traits measured with continuous…
Descriptors: Item Analysis, Goodness of Fit, Accuracy, Test Validity
Ken A. Fujimoto; Carl F. Falk – Educational and Psychological Measurement, 2024
Item response theory (IRT) models are often compared with respect to predictive performance to determine the dimensionality of rating scale data. However, such model comparisons could be biased toward nested-dimensionality IRT models (e.g., the bifactor model) when comparing those models with non-nested-dimensionality IRT models (e.g., a…
Descriptors: Item Response Theory, Rating Scales, Predictive Measurement, Bayesian Statistics
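The kind of predictive comparison at issue can be summarized as comparing candidate models by their log predictive density on held-out responses (a generic criterion, stated here only to make the abstract concrete; the article's point is that such comparisons can be biased toward nested-dimensionality models such as the bifactor model):
\[
\widehat{\text{elpd}}_M = \sum_{i \in \text{test}} \log p\bigl(y_i \mid y_{\text{train}}, M\bigr),
\]
computed for each candidate model $M$, for example a bifactor versus a correlated-factors structure, with larger values indicating better predictive performance.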
Junhuan Wei; Qin Wang; Buyun Dai; Yan Cai; Dongbo Tu – Journal of Educational Measurement, 2024
Traditional IRT and IRTree models are not appropriate for analyzing items that combine a multiple-choice (MC) task and a constructed-response (CR) task within a single item. To address this issue, this study proposed an item response tree model (called IRTree-MR) to accommodate items that contain different response types at different…
Descriptors: Item Response Theory, Models, Multiple Choice Tests, Cognitive Processes