Showing 1 to 15 of 24 results
Peer reviewed
Kaiwen Man; Joni M. Lakin – Journal of Educational Measurement, 2024
Eye-tracking procedures generate copious process data that could be valuable in establishing the response processes component of modern validity theory. However, there is a lack of tools for assessing and visualizing response processes using process data such as eye-tracking fixation sequences, especially those suitable for young children. This…
Descriptors: Problem Solving, Spatial Ability, Task Analysis, Network Analysis
Peer reviewed
Langenfeld, Thomas; Thomas, Jay; Zhu, Rongchun; Morris, Carrie A. – Journal of Educational Measurement, 2020
An assessment of graphic literacy was developed by articulating and subsequently validating a skills-based cognitive model intended to substantiate the plausibility of score interpretations. Model validation involved use of multiple sources of evidence derived from large-scale field testing and cognitive labs studies. Data from large-scale field…
Descriptors: Evidence, Scores, Eye Movements, Psychometrics
Peer reviewed
Qiao, Xin; Jiao, Hong; He, Qiwei – Journal of Educational Measurement, 2023
Multiple group modeling is one of the methods to address the measurement noninvariance issue. Traditional studies on multiple group modeling have mainly focused on item responses. In computer-based assessments, joint modeling of response times and action counts with item responses helps estimate the latent speed and action levels in addition to…
Descriptors: Multivariate Analysis, Models, Item Response Theory, Statistical Distributions
Peer reviewed
Rosen, Yigal – Journal of Educational Measurement, 2017
In order to understand potential applications of collaborative problem-solving (CPS) assessment tasks, it is necessary to examine empirically the multifaceted student performance that may be distributed across collaboration methods and purposes of the assessment. Ideally, each student should be matched with various types of group members and must…
Descriptors: Problem Solving, Pilot Projects, Electronic Learning, Cooperation
Peer reviewed
Scoular, Claire; Care, Esther; Hesse, Friedrich W. – Journal of Educational Measurement, 2017
Collaborative problem solving is a complex skill set that draws on social and cognitive factors. The construct remains in its infancy due to lack of empirical evidence that can be drawn upon for validation. The differences and similarities between two large-scale initiatives that reflect this state of the art, in terms of underlying assumptions…
Descriptors: Problem Solving, Automation, Evaluation Methods, Cooperative Learning
Peer reviewed
Herborn, Katharina; Mustafic, Maida; Greiff, Samuel – Journal of Educational Measurement, 2017
Collaborative problem solving (CPS) assessment is a new academic research field with a number of educational implications. In 2015, the Programme for International Student Assessment (PISA) assessed CPS with a computer-simulated human-agent (H-A) approach that claimed to measure 12 individual CPS skills for the first time. After reviewing the…
Descriptors: Cooperative Learning, Problem Solving, Computer Simulation, Evaluation Methods
Peer reviewed
Zhu, Mengxiao; Shu, Zhan; von Davier, Alina A. – Journal of Educational Measurement, 2016
New technology enables interactive and adaptive scenario-based tasks (SBTs) to be adopted in educational measurement. At the same time, it is a challenging problem to build appropriate psychometric models to analyze data collected from these tasks, due to the complexity of the data. This study focuses on process data collected from SBTs. We…
Descriptors: Measurement, Data Collection, National Competency Tests, Scoring Rubrics
Peer reviewed
de la Torre, Jimmy; Lee, Young-Sun – Journal of Educational Measurement, 2010
Cognitive diagnosis models (CDMs), as alternative approaches to unidimensional item response models, have received increasing attention in recent years. CDMs are developed for the purpose of identifying the mastery or nonmastery of multiple fine-grained attributes or skills required for solving problems in a domain. For CDMs to receive wider use,…
Descriptors: Ability Grouping, Item Response Theory, Models, Problem Solving
Peer reviewed
Bennett, Randy Elliot; Sebrechts, Marc M. – Journal of Educational Measurement, 1997
A computer-delivered problem-solving task based on cognitive research literature was developed and its validity for graduate admissions assessment was studied with 107 undergraduates. Use of the test, which asked examinees to sort word-problem stems by prototypes, was supported by the findings. (SLD)
Descriptors: Admission (School), College Entrance Examinations, Computer Assisted Testing, Graduate Study
Peer reviewed
Beland, Anne; Mislevy, Robert J. – Journal of Educational Measurement, 1996
This article addresses issues in model building and statistical inference in the context of student modeling. The use of probability-based reasoning to explicate hypothesized and empirical relationships and to structure inference in the context of proportional reasoning tasks is discussed. Ideas are illustrated with an example concerning…
Descriptors: Cognitive Psychology, Models, Networks, Probability
Peer reviewed
Tamir, Pinchas – Journal of Educational Measurement, 1974
Descriptors: Evaluation Methods, Evaluation Needs, Performance Tests, Problem Solving
Peer reviewed
Tatsuoka, Kikumi K.; Tatsuoka, Maurice M. – Journal of Educational Measurement, 1983
This study introduces the individual consistency index (ICI), which measures the extent to which patterns of responses to parallel sets of items remain consistent over time. ICI is used as an error diagnostic tool to detect aberrant response patterns resulting from the consistent application of erroneous rules of operation. (Author/PN)
Descriptors: Achievement Tests, Algorithms, Error Patterns, Measurement Techniques
Peer reviewed
Wilcox, Rand R.; And Others – Journal of Educational Measurement, 1988
The second-response conditional probability model of the decision-making strategies used by examinees answering multiple-choice test items was revised. Increasing the number of distractors, or providing distractors that gave examinees (N=106) the option to follow the model, improved results and gave a good fit to the data for 29 of 30 items. (SLD)
Descriptors: Cognitive Tests, Decision Making, Mathematical Models, Multiple Choice Tests
Peer reviewed
Ariel, Adelaide; Veldkamp, Bernard P.; van der Linden, Wim J. – Journal of Educational Measurement, 2004
Preventing items in adaptive testing from being over- or underexposed is one of the main problems in computerized adaptive testing. Though the problem of overexposed items can be solved using a probabilistic item-exposure control method, such methods are unable to deal with the problem of underexposed items. Using a system of rotating item pools,…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Banks, Test Construction
Peer reviewed
Masters, Geoffrey N. – Journal of Educational Measurement, 1984
This paper develops and illustrates a latent trait approach to constructing an item bank when responses are scored in several ordered categories. This approach is an extension of the methodology developed by Choppin, Wright and Stone, and Wright and Bell for the construction and maintenance of banks of dichotomously scored items. (Author/PN)
Descriptors: Equated Scores, Item Banks, Latent Trait Theory, Mathematical Models