Publication Date
| Date Range | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 4 |
| Since 2022 (last 5 years) | 29 |
| Since 2017 (last 10 years) | 74 |
| Since 2007 (last 20 years) | 177 |
Descriptor
| Descriptor | Results |
| --- | --- |
| Models | 203 |
| Scores | 203 |
| Item Response Theory | 142 |
| Test Items | 49 |
| Comparative Analysis | 47 |
| Feedback (Response) | 44 |
| Foreign Countries | 40 |
| Correlation | 34 |
| Simulation | 33 |
| Statistical Analysis | 33 |
| Psychometrics | 29 |
Author
| Author | Results |
| --- | --- |
| DeMars, Christine E. | 4 |
| Haberman, Shelby J. | 3 |
| Wilson, Mark | 3 |
| Anguiano-Carrasco, Cristina | 2 |
| Bulut, Okan | 2 |
| Cai, Li | 2 |
| Cohen, Allan S. | 2 |
| Cole, Rachel | 2 |
| Debeer, Dries | 2 |
| Ferrando, Pere J. | 2 |
| Finkelman, Matthew | 2 |
Audience
| Audience | Results |
| --- | --- |
| Researchers | 2 |
| Practitioners | 1 |
| Students | 1 |
Location
| Location | Results |
| --- | --- |
| Australia | 4 |
| California | 4 |
| Germany | 4 |
| Netherlands | 4 |
| Turkey | 4 |
| Brazil | 3 |
| Canada | 3 |
| China | 3 |
| Finland | 3 |
| Iran | 3 |
| Japan | 3 |
Gerhard Tutz; Pascal Jordan – Journal of Educational and Behavioral Statistics, 2024
A general framework of latent trait item response models for continuous responses is given. In contrast to classical test theory (CTT) models, which traditionally distinguish between true scores and error scores, the responses are clearly linked to latent traits. It is shown that CTT models can be derived as special cases, but the model class is…
Descriptors: Item Response Theory, Responses, Scores, Models
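As a hedged illustration of the CTT contrast sketched in this abstract (generic notation, not the authors'): classical test theory decomposes an observed score into a true score and an error term, $X_{pi} = \tau_p + \varepsilon_{pi}$, whereas a latent trait model for continuous responses links the expected response to a person parameter through item-specific coefficients, e.g. $E(X_{pi} \mid \theta_p) = \mu_i + \lambda_i \theta_p$, so the CTT true score reappears as a special case of the latent trait formulation.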
Matthew J. Madison; Stefanie Wind; Lientje Maas; Kazuhiro Yamaguchi; Sergio Haab – Grantee Submission, 2024
Diagnostic classification models (DCMs) are psychometric models designed to classify examinees according to their proficiency or nonproficiency on specified latent characteristics. These models are well suited for providing diagnostic and actionable feedback to support intermediate and formative assessment efforts. Several DCMs have been developed…
Descriptors: Diagnostic Tests, Classification, Models, Psychometrics
Matthew J. Madison; Stefanie A. Wind; Lientje Maas; Kazuhiro Yamaguchi; Sergio Haab – Journal of Educational Measurement, 2024
Diagnostic classification models (DCMs) are psychometric models designed to classify examinees according to their proficiency or nonproficiency on specified latent characteristics. These models are well suited for providing diagnostic and actionable feedback to support intermediate and formative assessment efforts. Several DCMs have been developed…
Descriptors: Diagnostic Tests, Classification, Models, Psychometrics
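For readers unfamiliar with DCMs, a canonical example is the DINA model (shown as background; not necessarily among the DCMs this paper develops). An examinee $i$ is described by a binary attribute vector $\alpha_i$, and the probability of a correct response to item $j$ is

$$P(X_{ij}=1 \mid \alpha_i) = (1 - s_j)^{\eta_{ij}}\, g_j^{\,1-\eta_{ij}}, \qquad \eta_{ij} = \prod_k \alpha_{ik}^{q_{jk}},$$

where $s_j$ and $g_j$ are slip and guessing parameters and $q_{jk}$ marks the attributes item $j$ requires; classification amounts to inferring $\alpha_i$.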
Markus T. Jansen; Ralf Schulze – Educational and Psychological Measurement, 2024
Thurstonian forced-choice modeling is considered a powerful new tool for estimating item and person parameters while simultaneously testing model fit. This assessment approach aims to reduce faking and other response tendencies that plague traditional self-report trait assessments. As a result of major recent…
Descriptors: Factor Analysis, Models, Item Analysis, Evaluation Methods
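A hedged sketch of the underlying Thurstonian idea (standard formulation, assumed here rather than quoted from the article): each statement elicits a latent utility with mean $\mu_i$, and the probability that statement $i$ is chosen over statement $j$ in a forced-choice pair is

$$P(i \succ j) = \Phi\!\left(\frac{\mu_i - \mu_j}{\sqrt{\sigma_i^2 + \sigma_j^2 - 2\sigma_{ij}}}\right),$$

with $\Phi$ the standard normal distribution function; item and person parameters enter through the utility means.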
Leventhal, Brian C.; Zigler, Christina K. – Measurement: Interdisciplinary Research and Perspectives, 2023
Survey score interpretations are often plagued by sources of construct-irrelevant variation, such as response styles. In this study, we propose the use of an IRTree Model to account for response styles by making use of self-report items and anchoring vignettes. Specifically, we investigate how the IRTree approach with anchoring vignettes compares…
Descriptors: Scores, Vignettes, Response Style (Tests), Item Response Theory
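As background on the IRTree idea (generic notation; the vignette-anchored variant studied here will differ in detail): a rating response is decomposed into binary pseudo-decisions at the nodes of a tree, so the probability of endorsing category $c$ is the product of node probabilities along its path,

$$P(Y = c \mid \boldsymbol{\theta}) = \prod_{n \in \mathrm{path}(c)} P_n(\theta_n)^{y_{nc}} \bigl(1 - P_n(\theta_n)\bigr)^{1 - y_{nc}},$$

where each node $n$ is governed by its own IRT model and latent trait, allowing, say, a midpoint or extremity response style to be separated from the substantive trait.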
Jianbin Fu; Xuan Tan; Patrick C. Kyllonen – Journal of Educational Measurement, 2024
This paper presents the item and test information functions of the Rank two-parameter logistic models (Rank-2PLM) for items with two (pair) and three (triplet) statements in forced-choice questionnaires. The Rank-2PLM model for pairs is the MUPP-2PLM (Multi-Unidimensional Pairwise Preference) and, for triplets, is the Triplet-2PLM. Fisher's…
Descriptors: Questionnaires, Test Items, Item Response Theory, Models
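For orientation, the familiar dichotomous 2PL case (a standard result, stated here as background to the pair and triplet extensions): the item information at ability $\theta$ is

$$I_j(\theta) = a_j^2\, P_j(\theta)\bigl(1 - P_j(\theta)\bigr), \qquad P_j(\theta) = \frac{1}{1 + e^{-a_j(\theta - b_j)}},$$

and test information is the sum of item informations; the paper derives the analogous functions for rank responses to pairs and triplets.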
Jiawei Xiong; George Engelhard; Allan S. Cohen – Measurement: Interdisciplinary Research and Perspectives, 2025
It is common to find mixed-format data resulting from the use of both multiple-choice (MC) and constructed-response (CR) questions on assessments. Dealing with these mixed response types involves understanding what the assessment is measuring and using suitable measurement models to estimate latent abilities. Past research in educational…
Descriptors: Responses, Test Items, Test Format, Grade 8
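A common way to model such mixed formats (stated as background; the paper's approach may differ) is to pair a dichotomous model for the MC items with a polytomous model such as the generalized partial credit model for the CR items, whose category probabilities are

$$P(X_{ij} = c \mid \theta_i) = \frac{\exp \sum_{v=1}^{c} a_j (\theta_i - b_{jv})}{\sum_{h=0}^{m_j} \exp \sum_{v=1}^{h} a_j (\theta_i - b_{jv})},$$

with the empty sum for $h = 0$ taken as zero, so both item types contribute to a single latent ability $\theta_i$.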
Kuan-Yu Jin; Wai-Lok Siu – Journal of Educational Measurement, 2025
Educational tests often have a cluster of items linked by a common stimulus (a "testlet"). In such a design, the dependencies induced among items are called "testlet effects." In particular, the directional testlet effect (DTE) refers to a recursive influence whereby responses to earlier items can positively or negatively affect…
Descriptors: Models, Test Items, Educational Assessment, Scores
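A hedged sketch of how testlet dependence is usually captured (the standard random-effects testlet model; the directional DTE model described here goes further by letting earlier responses feed into later ones):

$$\operatorname{logit} P(X_{pj} = 1) = a_j\bigl(\theta_p - b_j - \gamma_{p\,d(j)}\bigr),$$

where $\gamma_{p\,d(j)}$ is a person-specific random effect for the testlet $d(j)$ that contains item $j$.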
James Soland – Journal of Research on Educational Effectiveness, 2024
When randomized controlled trials are not possible, quasi-experimental methods often represent the gold standard. One quasi-experimental method is difference-in-differences (DiD), which compares changes in outcomes before and after treatment across groups to estimate a causal effect. DiD researchers often use fairly exhaustive robustness checks to…
Descriptors: Item Response Theory, Testing, Test Validity, Intervention
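For readers new to DiD, the canonical two-group, two-period estimator (textbook notation, not tied to this paper's data) is

$$\hat{\delta}_{\mathrm{DiD}} = \bigl(\bar{Y}_{T,\mathrm{post}} - \bar{Y}_{T,\mathrm{pre}}\bigr) - \bigl(\bar{Y}_{C,\mathrm{post}} - \bar{Y}_{C,\mathrm{pre}}\bigr),$$

which removes stable group differences and shared time trends, so the causal reading rests on the parallel-trends assumption.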
Sijia Huang; Seungwon Chung; Carl F. Falk – Journal of Educational Measurement, 2024
In this study, we introduced a cross-classified multidimensional nominal response model (CC-MNRM) to account for various response styles (RS) in the presence of cross-classified data. The proposed model allows slopes to vary across items and can explore impacts of observed covariates on latent constructs. We applied a recently developed variant of…
Descriptors: Response Style (Tests), Classification, Data, Models
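As background, the building block being extended here is the multidimensional nominal response model, in which the probability of picking category $c$ of item $j$ is

$$P(X_{ij} = c \mid \boldsymbol{\theta}_i) = \frac{\exp\bigl(\mathbf{a}_{jc}^{\top}\boldsymbol{\theta}_i + c_{jc}\bigr)}{\sum_{h} \exp\bigl(\mathbf{a}_{jh}^{\top}\boldsymbol{\theta}_i + c_{jh}\bigr)};$$

the CC-MNRM lets the slopes $\mathbf{a}_{jc}$ vary across items and places the person parameters in a cross-classified structure (notation assumed here, not quoted from the article).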
Gyamfi, Abraham; Acquaye, Rosemary – Acta Educationis Generalis, 2023
Introduction: Item response theory (IRT) has received much attention in the validation of assessment instruments because it allows students' ability to be estimated from any set of items. It also allows the difficulty and discrimination levels of each item on the test to be estimated. In the framework of IRT, item characteristics are…
Descriptors: Item Response Theory, Models, Test Items, Difficulty Level
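As a concrete anchor for these terms (the standard 2PL parameterization, assumed rather than quoted): $P(X_j = 1 \mid \theta) = 1/(1 + e^{-a_j(\theta - b_j)})$, where the difficulty $b_j$ is the ability at which a correct response has probability .5 and the discrimination $a_j$ governs how steeply that probability rises around $b_j$.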
Rachatasumrit, Napol; Koedinger, Kenneth R. – International Educational Data Mining Society, 2021
Student modeling is useful in educational research and technology development because of its capability to estimate latent student attributes. Widely used approaches, such as the Additive Factors Model (AFM), have shown satisfactory results, but they can only handle binary outcomes, which may lead to information loss. In this work, we propose a…
Descriptors: Models, Student Characteristics, Feedback (Response), Error Correction
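For reference, the standard AFM mentioned above is a logistic model for binary correctness (standard formulation, not quoted from the paper): the log-odds that student $i$ gets step $j$ right is

$$\operatorname{logit} P(X_{ij} = 1) = \theta_i + \sum_{k} q_{jk}\bigl(\beta_k + \gamma_k T_{ik}\bigr),$$

with student proficiency $\theta_i$, skill difficulty $\beta_k$, learning rate $\gamma_k$, prior practice opportunities $T_{ik}$, and Q-matrix entries $q_{jk}$ mapping steps to skills.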
Jose R. Palma – ProQuest LLC, 2021
Response processes are an important component of validity evidence supporting the use and interpretation of test scores. Information about response processes can provide insight into how students engage with assessment tasks and the types of errors they make when solving items, as well as allow for the study of cognitive properties of items that may be associated with…
Descriptors: Scores, Validity, Responses, Emergent Literacy
Uto, Masaki; Aomi, Itsuki; Tsutsumi, Emiko; Ueno, Maomi – IEEE Transactions on Learning Technologies, 2023
In automated essay scoring (AES), essays are graded automatically, without human raters. Many AES models, based on manually designed features or on various deep neural network (DNN) architectures, have been proposed over the past few decades. Each AES model has unique advantages and characteristics. Therefore, rather than using a single-AES…
Descriptors: Prediction, Scores, Computer Assisted Testing, Scoring
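A minimal sketch of the general ensembling idea (hypothetical function names and weights; the paper's actual combination scheme is not specified in this snippet):

```python
# Hedged illustration: combine predicted scores from several AES models
# by a weighted average. Names and weights here are hypothetical.
import numpy as np

def ensemble_essay_score(predictions, weights=None):
    """Combine per-model predicted scores for a single essay.

    predictions: one predicted score per AES model.
    weights: optional non-negative model weights (defaults to uniform).
    """
    preds = np.asarray(predictions, dtype=float)
    w = np.ones_like(preds) if weights is None else np.asarray(weights, dtype=float)
    return float(np.dot(w, preds) / w.sum())

# Example: three hypothetical AES models score the same essay.
print(ensemble_essay_score([3.0, 4.0, 3.5], weights=[1.0, 2.0, 1.0]))
# -> 3.625
```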
Xiao, Yue; Veldkamp, Bernard; Liu, Hongyun – Educational Measurement: Issues and Practice, 2022
The action sequences of respondents in problem-solving tasks reflect rich and detailed information about their performance, including differences in problem-solving ability, even if item scores are equal. It is therefore not sufficient to infer individual problem-solving skills based solely on item scores. This study is a preliminary attempt to…
Descriptors: Problem Solving, Item Response Theory, Scores, Item Analysis
