Langenfeld, Thomas; Thomas, Jay; Zhu, Rongchun; Morris, Carrie A. – Journal of Educational Measurement, 2020
An assessment of graphic literacy was developed by articulating and subsequently validating a skills-based cognitive model intended to substantiate the plausibility of score interpretations. Model validation involved use of multiple sources of evidence derived from large-scale field testing and cognitive lab studies. Data from large-scale field…
Descriptors: Evidence, Scores, Eye Movements, Psychometrics
Qiao, Xin; Jiao, Hong; He, Qiwei – Journal of Educational Measurement, 2023
Multiple group modeling is one method for addressing measurement noninvariance. Traditional studies of multiple group modeling have focused mainly on item responses. In computer-based assessments, jointly modeling response times and action counts alongside item responses helps estimate latent speed and action levels in addition to…
Descriptors: Multivariate Analysis, Models, Item Response Theory, Statistical Distributions
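The joint-modeling idea in the abstract above is often built on van der Linden's lognormal response-time model; the sketch below shows that common parameterization, though the abstract does not confirm it is the one the authors use:

```latex
% Lognormal response-time model (van der Linden), a common building
% block for jointly modeling item responses and response times.
% \tau_i is person i's latent speed; \beta_j and \alpha_j are the
% time intensity and discrimination of item j.
\log T_{ij} \sim \mathcal{N}\!\left(\beta_j - \tau_i,\; \alpha_j^{-2}\right)
```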
Köhler, Carmen; Pohl, Steffi; Carstensen, Claus H. – Journal of Educational Measurement, 2017
Competence data from low-stakes educational large-scale assessment studies allow for evaluating relationships between competencies and other variables. The impact of item-level nonresponse has not been investigated with regard to statistics that determine the size of these relationships (e.g., correlations, regression coefficients). Classical…
Descriptors: Test Items, Cognitive Measurement, Testing Problems, Regression (Statistics)
Andrews, Jessica J.; Kerr, Deirdre; Mislevy, Robert J.; von Davier, Alina; Hao, Jiangang; Liu, Lei – Journal of Educational Measurement, 2017
Simulations and games offer interactive tasks that can elicit rich data, providing evidence of complex skills that are difficult to measure with more conventional items and tests. However, one notable challenge in using such technologies is making sense of the data generated in order to make claims about individuals or groups. This article…
Descriptors: Simulation, Interaction, Research Methodology, Cooperative Learning
Herborn, Katharina; Mustafic, Maida; Greiff, Samuel – Journal of Educational Measurement, 2017
Collaborative problem solving (CPS) assessment is a new academic research field with a number of educational implications. In 2015, the Programme for International Student Assessment (PISA) assessed CPS with a computer-simulated human-agent (H-A) approach that claimed to measure 12 individual CPS skills for the first time. After reviewing the…
Descriptors: Cooperative Learning, Problem Solving, Computer Simulation, Evaluation Methods
Debeer, Dries; Janssen, Rianne; De Boeck, Paul – Journal of Educational Measurement, 2017
When dealing with missing responses, two types of omissions can be discerned: items can be skipped or not reached by the test taker. When the occurrence of these omissions is related to the proficiency process, the missingness is nonignorable. The purpose of this article is to present a tree-based IRT framework for modeling responses and omissions…
Descriptors: Item Response Theory, Test Items, Responses, Testing Problems
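A tree-based (IRTree) framework works by recoding each observed outcome into sequential node outcomes, each of which can then be given its own IRT model. Below is a minimal sketch of that recoding step, assuming a two-node tree (respond vs. omit, then correct vs. incorrect); the function name and coding scheme are illustrative, not taken from the article:

```python
def tree_recode(outcome: str):
    """Recode an observed item outcome into IRTree pseudo-items.

    outcome: 'correct', 'incorrect', or 'omitted'.
    Returns (node1, node2):
      node1 = 1 if a response was given, 0 if omitted;
      node2 = 1 if correct, 0 if incorrect, None when no response exists.
    """
    if outcome == "omitted":
        return (0, None)
    return (1, 1 if outcome == "correct" else 0)
```

Each pseudo-item column can then be fit with a standard binary IRT model, which is what lets the framework treat omission propensity and proficiency as related but distinct latent variables.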
de la Torre, Jimmy; Lee, Young-Sun – Journal of Educational Measurement, 2010
Cognitive diagnosis models (CDMs), as alternative approaches to unidimensional item response models, have received increasing attention in recent years. CDMs are developed for the purpose of identifying the mastery or nonmastery of multiple fine-grained attributes or skills required for solving problems in a domain. For CDMs to receive wider use,…
Descriptors: Ability Grouping, Item Response Theory, Models, Problem Solving
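As a concrete instance of the fine-grained-attribute idea described above, the DINA model (one widely used CDM; the abstract does not single out a specific model) links a binary attribute vector to item responses:

```latex
% DINA model: \eta_{ij} indicates whether examinee i has mastered all
% attributes required by item j (q_{jk} taken from the Q-matrix);
% s_j and g_j are the item's slip and guessing parameters.
\eta_{ij} = \prod_{k=1}^{K} \alpha_{ik}^{\,q_{jk}}, \qquad
P(X_{ij} = 1 \mid \boldsymbol{\alpha}_i)
  = (1 - s_j)^{\eta_{ij}}\, g_j^{\,1-\eta_{ij}}
```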
Peer reviewed: Marco, Gary L. – Journal of Educational Measurement, 1977
This paper summarizes three studies that illustrate how application of the three-parameter logistic test model helped solve three relatively intractable testing problems: designing a multi-purpose test, evaluating a multi-level test, and equating a test on the basis of pretest statistics. (Author/JKS)
Descriptors: Latent Trait Theory, Measurement, Models, Pretests Posttests
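The three-parameter logistic (3PL) model referenced above gives the probability of a correct response as a function of ability and item parameters. A minimal sketch, with parameter names following common IRT convention rather than the paper itself:

```python
import math

def p_3pl(theta: float, a: float, b: float, c: float) -> float:
    """Three-parameter logistic (3PL) item response function.

    theta: examinee ability
    a: item discrimination; b: item difficulty
    c: lower asymptote (pseudo-guessing parameter)
    Returns the probability of a correct response.
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))
```

At theta = b the probability sits halfway between the guessing floor c and 1, which is why b is read as the item's difficulty.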
Peer reviewed: Beland, Anne; Mislevy, Robert J. – Journal of Educational Measurement, 1996
This article addresses issues in model building and statistical inference in the context of student modeling. The use of probability-based reasoning to explicate hypothesized and empirical relationships and to structure inference in the context of proportional reasoning tasks is discussed. Ideas are illustrated with an example concerning…
Descriptors: Cognitive Psychology, Models, Networks, Probability
Peer reviewed: Linn, Robert L. – Journal of Educational Measurement, 1984
The common approach to studies of predictive bias is analyzed within the context of a conceptual model in which predictors and criterion measures are viewed as fallible indicators of idealized qualifications. (Author/PN)
Descriptors: Certification, Models, Predictive Measurement, Predictive Validity
Peer reviewed: Hughes, David C.; Keeling, Brian – Journal of Educational Measurement, 1984
Several studies have shown that essays receive higher marks when preceded by poor quality scripts than when preceded by good quality scripts. This study investigated the effectiveness of providing scorers with model essays to reduce the influence of context. Context effects persisted despite the scoring procedures used. (Author/EGS)
Descriptors: Context Effect, Essay Tests, Essays, High Schools
Peer reviewed: Embretson, Susan E. – Journal of Educational Measurement, 1995
An extension of the multidimensional Rasch model for learning and change is presented that permits theories of processes and knowledge structures to be incorporated into the item response model. The extension resolves basic problems in measuring change and permits adaptive testing. The method is illustrated in a study of mathematical problem…
Descriptors: Adaptive Testing, Change, Individual Differences, Item Response Theory
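Embretson's extension, the multidimensional Rasch model for learning and change, models the response at occasion k through an initial ability plus occasion-specific "modifiabilities." A hedged sketch of one published formulation (symbols are conventional, not copied from this article):

```latex
% Multidimensional Rasch model for learning and change (MRMLC):
% \theta_{i1} is initial ability, \theta_{im} (m > 1) are the
% modifiabilities accumulated through occasion k, and \beta_j is
% the item difficulty.
P(X_{ijk} = 1) =
  \frac{\exp\!\left(\sum_{m=1}^{k} \theta_{im} - \beta_j\right)}
       {1 + \exp\!\left(\sum_{m=1}^{k} \theta_{im} - \beta_j\right)}
```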
