Showing all 9 results
Peer reviewed
Nagy, Gabriel; Ulitzsch, Esther – Educational and Psychological Measurement, 2022
Disengaged item responses pose a threat to the validity of the results provided by large-scale assessments. Several procedures for identifying disengaged responses on the basis of observed response times have been suggested, and item response theory (IRT) models for response engagement have been proposed. We outline that response time-based…
Descriptors: Item Response Theory, Hierarchical Linear Modeling, Predictor Variables, Classification
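The response time-based identification the abstract refers to can be illustrated with a minimal sketch: flag a response as disengaged when its time falls below a cutoff. The threshold value and function name here are assumptions for illustration, not taken from the article.

```python
# Hypothetical sketch: flag item responses as disengaged when the
# response time falls below a fixed threshold. The 3-second cutoff
# is an illustrative assumption, not a value from the article.
def flag_disengaged(response_times, threshold=3.0):
    """Return True for each response faster than `threshold` seconds."""
    return [t < threshold for t in response_times]

times = [1.2, 8.5, 0.9, 15.3, 2.7]
flags = flag_disengaged(times)  # [True, False, True, False, True]
```

Real time-based procedures estimate the threshold from the response-time distribution rather than fixing it in advance, which is part of what the article examines.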
Oluwalana, Olasumbo O. – ProQuest LLC, 2019
A primary purpose of cognitive diagnosis models (CDMs) is to classify examinees based on their attribute patterns. The Q-matrix (Tatsuoka, 1985), a common component of all CDMs, specifies the relationship between the set of required dichotomous attributes and the test items. Since a Q-matrix is often developed by content-knowledge experts and can…
Descriptors: Classification, Validity, Test Items, International Assessment
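The Q-matrix the abstract describes can be sketched concretely. Under the DINA model (one common CDM, used here as an illustrative assumption), an examinee's ideal response to an item is correct only if the examinee possesses every attribute the Q-matrix requires for that item.

```python
# Hypothetical Q-matrix: rows are items, columns are dichotomous
# attributes; a 1 means the item requires that attribute.
Q = [
    [1, 0, 0],
    [0, 1, 1],
    [1, 1, 0],
]

def ideal_response(alpha, Q):
    """DINA ideal responses (no slip/guess) for attribute pattern `alpha`."""
    return [int(all(a >= q for a, q in zip(alpha, row))) for row in Q]

print(ideal_response([1, 1, 0], Q))  # [1, 0, 1]
```

Misspecified Q-matrix entries change these ideal responses directly, which is why expert-built Q-matrices need empirical validation.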
Peer reviewed
Koziol, Natalie A. – Applied Measurement in Education, 2016
Testlets, or groups of related items, are commonly included in educational assessments due to their many logistical and conceptual advantages. Despite their advantages, testlets introduce complications into the theory and practice of educational measurement. Responses to items within a testlet tend to be correlated even after controlling for…
Descriptors: Classification, Accuracy, Comparative Analysis, Models
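The within-testlet correlation the abstract describes is often modeled with a person-by-testlet effect. A minimal sketch under a Rasch testlet formulation (parameter values are illustrative assumptions):

```python
# Sketch of a Rasch testlet model: the testlet effect `gamma` is a
# person-specific shift shared by all items in the same testlet,
# which induces the residual within-testlet correlation.
import math

def p_correct(theta, b, gamma=0.0):
    """P(correct) for ability theta, item difficulty b, testlet effect gamma."""
    return 1 / (1 + math.exp(-(theta - b + gamma)))

# Same person, two items in one testlet: both probabilities shift together.
theta, gamma = 0.5, 0.8
p1 = p_correct(theta, b=0.0, gamma=gamma)
p2 = p_correct(theta, b=1.0, gamma=gamma)
```

Ignoring `gamma` and scoring the items as locally independent is what overstates measurement precision in testlet-based tests.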
Peer reviewed
Paek, Insu; Wilson, Mark – Educational and Psychological Measurement, 2011
This study elaborates the Rasch differential item functioning (DIF) model formulation under the marginal maximum likelihood estimation context. Also, the Rasch DIF model performance was examined and compared with the Mantel-Haenszel (MH) procedure in small sample and short test length conditions through simulations. The theoretically known…
Descriptors: Test Bias, Test Length, Statistical Inference, Geometric Concepts
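The Mantel-Haenszel procedure the study compares against can be sketched briefly: each matching stratum (usually a total-score level) contributes a 2x2 table of correct/incorrect counts for the reference and focal groups, and the common odds ratio pools them. The counts below are hypothetical, not from the study.

```python
# Sketch of the Mantel-Haenszel common odds ratio for DIF screening.
# Each stratum is (ref_correct, ref_wrong, focal_correct, focal_wrong).
import math

def mh_odds_ratio(strata):
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

strata = [(30, 10, 25, 15), (40, 5, 35, 10)]  # illustrative counts
alpha = mh_odds_ratio(strata)
delta = -2.35 * math.log(alpha)  # ETS delta scale; 0 means no DIF
```

An odds ratio above 1 (negative delta) indicates the item favors the reference group at a given score level.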
Peer reviewed
Rudner, Lawrence M. – Practical Assessment, Research & Evaluation, 2009
This paper describes and evaluates the use of measurement decision theory (MDT) to classify examinees based on their item response patterns. The model has a simple framework that starts with the conditional probabilities of examinees in each category or mastery state responding correctly to each item. The presented evaluation investigates: (1) the…
Descriptors: Classification, Scoring, Item Response Theory, Measurement
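The framework the abstract outlines, starting from conditional probabilities of a correct response in each mastery state, amounts to a Bayesian classification over states. A minimal sketch with illustrative numbers (the probabilities and priors below are assumptions, not values from the paper):

```python
# Sketch of measurement decision theory classification: compute the
# posterior of each mastery state given a response pattern, then
# pick the most probable state.
def classify(responses, p_correct, priors):
    """Return the index of the most probable mastery state."""
    posts = []
    for state, prior in enumerate(priors):
        like = prior
        for item, x in enumerate(responses):
            p = p_correct[state][item]
            like *= p if x == 1 else (1 - p)
        posts.append(like)
    total = sum(posts)
    posts = [p / total for p in posts]
    return max(range(len(posts)), key=posts.__getitem__)

p_correct = [  # rows: states (nonmaster, master); cols: items
    [0.3, 0.2, 0.4],
    [0.8, 0.9, 0.7],
]
print(classify([1, 1, 0], p_correct, [0.5, 0.5]))  # 1 (master)
```

Because only these conditional probabilities are needed, the model can classify accurately with far fewer items than ability estimation requires, which is one of the properties the paper evaluates.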
Kim, Seock-Ho – 2002
Continuation ratio logits are used to model the probabilities of obtaining ordered categories in a polytomously scored item. This model is an alternative to other models for ordered category items such as the graded response model and the generalized partial credit model. The discussion includes a theoretical development of the model, a…
Descriptors: Ability, Classification, Item Response Theory, Mathematical Models
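The mapping from continuation ratio logits to category probabilities can be sketched directly: each logit gives the conditional probability of stopping at a category given the response reached at least that category, and the category probability is that stop probability times the probability of passing every earlier step. The logit values below are illustrative assumptions.

```python
# Sketch: convert continuation ratio logits into category probabilities.
import math

def category_probs(logits):
    stops = [1 / (1 + math.exp(-z)) for z in logits]  # P(X = k | X >= k)
    probs, survive = [], 1.0
    for p in stops:
        probs.append(survive * p)
        survive *= (1 - p)
    probs.append(survive)  # final category absorbs the remainder
    return probs

probs = category_probs([-1.0, 0.0, 1.0])  # 3 logits -> 4 ordered categories
```

Unlike divide-by-total models, each step probability here depends only on reaching that step, which is what makes the continuation ratio formulation natural for sequentially resolved tasks.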
Peer reviewed
Thissen, David; Steinberg, Lynne – Psychometrika, 1986
This article organizes models for categorical item response data into three distinct classes. "Difference models" are appropriate for ordered responses, "divide-by-total" models for either ordered or nominal responses, and "left-side added" models for multiple-choice responses with guessing. Details of the taxonomy…
Descriptors: Classification, Item Analysis, Latent Trait Theory, Mathematical Models
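The "divide-by-total" class the taxonomy names can be sketched in one line: category probabilities are exponentiated category scores divided by their total, the form shared by the partial credit and nominal response families. The scores below are illustrative.

```python
# Sketch of the divide-by-total form: exp(score) normalized by the sum.
import math

def divide_by_total(scores):
    exps = [math.exp(z) for z in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = divide_by_total([0.0, 1.2, -0.5])
```

Difference models instead define category probabilities as differences of cumulative probabilities, which is why they require the categories to be ordered.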
Kim, Seock-Ho; Cohen, Allan S. – 1997
Type I error rates of the likelihood ratio test for the detection of differential item functioning (DIF) were investigated using Monte Carlo simulations. The graded response model with five ordered categories was used to generate data sets of a 30-item test for samples of 300 and 1,000 simulated examinees. All DIF comparisons were simulated by…
Descriptors: Ability, Classification, Computer Simulation, Estimation (Mathematics)
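The Type I error rate such a Monte Carlo study reports is just the rejection rate over many no-DIF replications. A stripped-down sketch of that bookkeeping, with a null-distributed chi-square statistic standing in for the likelihood ratio test (all details are illustrative assumptions):

```python
# Sketch: estimate a test's Type I error rate by simulating the null,
# applying the test, and counting rejections at the nominal level.
import random

def type_i_error(n_reps=2000, seed=1):
    random.seed(seed)
    crit = 3.841  # chi-square(1) critical value at alpha = .05
    rejections = 0
    for _ in range(n_reps):
        stat = random.gauss(0, 1) ** 2  # null-distributed 1-df statistic
        rejections += stat > crit
    return rejections / n_reps

rate = type_i_error()  # should be close to the nominal .05
```

A well-calibrated test yields rates near the nominal level; inflation under small samples or short tests is what studies like this one look for.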
Peer reviewed
Sheehan, Kathleen M.; Kostin, Irene; Futagi, Yoko; Hemat, Ramin; Zuckerman, Daniel – ETS Research Report Series, 2006
This paper describes the development, implementation, and evaluation of an automated system for predicting the acceptability status of candidate reading-comprehension stimuli extracted from a database of journal and magazine articles. The system uses a combination of classification and regression techniques to predict the probability that a given…
Descriptors: Automation, Prediction, Reading Comprehension, Classification
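The kind of pipeline the report describes, mapping text features to an acceptability probability, can be sketched at a high level. This is an illustrative stand-in, not the ETS system: the features, weights, and function names are all assumptions.

```python
# Sketch: score a candidate passage with simple text features fed to a
# logistic model; real weights would be fit on expert-rated passages.
import math

def acceptability_prob(text, weights, bias):
    words = text.split()
    features = [
        len(words),                                       # passage length
        sum(len(w) for w in words) / max(len(words), 1),  # avg word length
    ]
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1 / (1 + math.exp(-z))

p = acceptability_prob("A short candidate passage.", [0.01, 0.1], -1.0)
```

Production systems use many more features (vocabulary, cohesion, topic) and combine classifiers with regression, but the output is the same kind of acceptance probability used to triage candidate stimuli.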