Showing all 12 results
Peer reviewed
Langenfeld, Thomas; Thomas, Jay; Zhu, Rongchun; Morris, Carrie A. – Journal of Educational Measurement, 2020
An assessment of graphic literacy was developed by articulating and subsequently validating a skills-based cognitive model intended to substantiate the plausibility of score interpretations. Model validation involved the use of multiple sources of evidence derived from large-scale field testing and cognitive lab studies. Data from large-scale field…
Descriptors: Evidence, Scores, Eye Movements, Psychometrics
Peer reviewed
Qiao, Xin; Jiao, Hong; He, Qiwei – Journal of Educational Measurement, 2023
Multiple group modeling is one method for addressing measurement noninvariance. Traditional studies on multiple group modeling have mainly focused on item responses. In computer-based assessments, joint modeling of response times and action counts with item responses helps estimate the latent speed and action levels in addition to…
Descriptors: Multivariate Analysis, Models, Item Response Theory, Statistical Distributions
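The abstract does not give the exact specification, but a joint hierarchical model of the kind it describes can be sketched under illustrative assumptions: a 2PL model for item responses, a lognormal model for response times, and a Poisson model for action counts.

P(X_{ij} = 1 \mid \theta_i) = \frac{\exp[a_j(\theta_i - b_j)]}{1 + \exp[a_j(\theta_i - b_j)]}, \qquad \log T_{ij} \sim N(\beta_j - \tau_i, \sigma_j^2), \qquad A_{ij} \sim \mathrm{Poisson}\big(\exp(\delta_j + \zeta_i)\big)

With the person parameters (\theta_i, \tau_i, \zeta_i) assumed multivariate normal, latent speed \tau_i and action level \zeta_i are estimated alongside proficiency \theta_i; group-specific item or structural parameters would carry the multiple group (noninvariance) part of the model.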
Peer reviewed
Köhler, Carmen; Pohl, Steffi; Carstensen, Claus H. – Journal of Educational Measurement, 2017
Competence data from low-stakes educational large-scale assessment studies allow for evaluating relationships between competencies and other variables. The impact of item-level nonresponse has not been investigated with regard to statistics that determine the size of these relationships (e.g., correlations, regression coefficients). Classical…
Descriptors: Test Items, Cognitive Measurement, Testing Problems, Regression (Statistics)
Peer reviewed
Andrews, Jessica J.; Kerr, Deirdre; Mislevy, Robert J.; von Davier, Alina; Hao, Jiangang; Liu, Lei – Journal of Educational Measurement, 2017
Simulations and games offer interactive tasks that can elicit rich data, providing evidence of complex skills that are difficult to measure with more conventional items and tests. However, one notable challenge in using such technologies is making sense of the data generated in order to make claims about individuals or groups. This article…
Descriptors: Simulation, Interaction, Research Methodology, Cooperative Learning
Peer reviewed
Herborn, Katharina; Mustafic, Maida; Greiff, Samuel – Journal of Educational Measurement, 2017
Collaborative problem solving (CPS) assessment is a new academic research field with a number of educational implications. In 2015, the Programme for International Student Assessment (PISA) assessed CPS with a computer-simulated human-agent (H-A) approach that claimed to measure 12 individual CPS skills for the first time. After reviewing the…
Descriptors: Cooperative Learning, Problem Solving, Computer Simulation, Evaluation Methods
Peer reviewed
Debeer, Dries; Janssen, Rianne; De Boeck, Paul – Journal of Educational Measurement, 2017
When dealing with missing responses, two types of omissions can be discerned: items can be skipped or not reached by the test taker. When the occurrence of these omissions is related to the proficiency process, the missingness is nonignorable. The purpose of this article is to present a tree-based IRT framework for modeling responses and omissions…
Descriptors: Item Response Theory, Test Items, Responses, Testing Problems
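As a rough sketch (the exact formulation is in the article), a tree-based IRT model for omitted responses can be written as a sequence of conditional nodes, for example a response/skip node followed by an accuracy node:

P(\text{respond to item } j \mid \eta_i) = \frac{\exp(\eta_i - \delta_j)}{1 + \exp(\eta_i - \delta_j)}, \qquad P(X_{ij} = 1 \mid \text{respond}, \theta_i) = \frac{\exp(\theta_i - b_j)}{1 + \exp(\theta_i - b_j)}

Here \eta_i is a response (or omission) propensity and \theta_i the proficiency; allowing \eta_i and \theta_i to correlate is what lets the framework handle nonignorable missingness.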
Peer reviewed
de la Torre, Jimmy; Lee, Young-Sun – Journal of Educational Measurement, 2010
Cognitive diagnosis models (CDMs), as alternative approaches to unidimensional item response models, have received increasing attention in recent years. CDMs are developed for the purpose of identifying the mastery or nonmastery of multiple fine-grained attributes or skills required for solving problems in a domain. For CDMs to receive wider use,…
Descriptors: Ability Grouping, Item Response Theory, Models, Problem Solving
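For orientation, one widely used CDM, the DINA model, illustrates the fine-grained attribute structure the abstract refers to:

P(X_{ij} = 1 \mid \alpha_i) = (1 - s_j)^{\eta_{ij}} \, g_j^{\,1 - \eta_{ij}}, \qquad \eta_{ij} = \prod_k \alpha_{ik}^{\,q_{jk}}

where \alpha_{ik} indicates examinee i's mastery of attribute k, q_{jk} is the Q-matrix entry linking item j to attribute k, and s_j and g_j are the item's slip and guessing parameters.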
Peer reviewed
Marco, Gary L. – Journal of Educational Measurement, 1977
This paper summarizes three studies that illustrate how application of the three-parameter logistic test model helped solve three relatively intractable testing problems. The three problems are: designing a multi-purpose test, evaluating a multi-level test, and equating a test on the basis of pretest statistics. (Author/JKS)
Descriptors: Latent Trait Theory, Measurement, Models, Pretests Posttests
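The three-parameter logistic (3PL) model applied in these studies has the standard form

P_i(\theta) = c_i + \frac{1 - c_i}{1 + \exp[-a_i(\theta - b_i)]}

with discrimination a_i, difficulty b_i, and lower asymptote (pseudo-guessing) c_i; a scaling constant D ≈ 1.7 is sometimes included in the exponent.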
Peer reviewed
Beland, Anne; Mislevy, Robert J. – Journal of Educational Measurement, 1996
This article addresses issues in model building and statistical inference in the context of student modeling. The use of probability-based reasoning to explicate hypothesized and empirical relationships and to structure inference in the context of proportional reasoning tasks is discussed. Ideas are illustrated with an example concerning…
Descriptors: Cognitive Psychology, Models, Networks, Probability
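The probability-based reasoning referred to here is, schematically, Bayesian updating over a student model:

P(\text{student-model variables} \mid \text{observations}) \propto P(\text{observations} \mid \text{student-model variables}) \, P(\text{student-model variables})

In practice this is typically structured as an inference network over discrete proficiency variables, here tied to proportional reasoning tasks.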
Peer reviewed
Linn, Robert L. – Journal of Educational Measurement, 1984
The common approach to studies of predictive bias is analyzed within the context of a conceptual model in which predictors and criterion measures are viewed as fallible indicators of idealized qualifications. (Author/PN)
Descriptors: Certification, Models, Predictive Measurement, Predictive Validity
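A standard background result relevant to viewing predictors and criteria as fallible indicators (not necessarily the exact model Linn develops) is the attenuation relationship:

\rho_{XY} = \rho_{T_X T_Y} \sqrt{\rho_{XX'} \, \rho_{YY'}}

where \rho_{XX'} and \rho_{YY'} are the reliabilities of the predictor and the criterion, so the observed predictor-criterion correlation understates the correlation between the underlying qualifications.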
Peer reviewed
Hughes, David C.; Keeling, Brian – Journal of Educational Measurement, 1984
Several studies have shown that essays receive higher marks when preceded by poor quality scripts than when preceded by good quality scripts. This study investigated the effectiveness of providing scorers with model essays to reduce the influence of context. Context effects persisted despite the scoring procedures used. (Author/EGS)
Descriptors: Context Effect, Essay Tests, Essays, High Schools
Peer reviewed
Embretson, Susan E. – Journal of Educational Measurement, 1995
An extension of the multidimensional Rasch model for learning and change is presented that permits theories of processes and knowledge structures to be incorporated into the item response model. The extension resolves basic problems in measuring change and permits adaptive testing. The method is illustrated in a study of mathematical problem…
Descriptors: Adaptive Testing, Change, Individual Differences, Item Response Theory
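The abstract does not reproduce the model, but Embretson's multidimensional Rasch model for learning and change takes roughly this form: for an item i administered at measurement occasion k,

P(X_{ik} = 1 \mid \theta) = \frac{\exp\big(\sum_{m=1}^{k} \theta_m - b_i\big)}{1 + \exp\big(\sum_{m=1}^{k} \theta_m - b_i\big)}

where \theta_1 is initial ability and \theta_2, \ldots, \theta_k are modifiabilities (changes introduced at later occasions), which is what allows change to be measured on a common scale and supports adaptive testing.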