Showing 31 to 45 of 179 results
Sinharay, Sandip; van Rijn, Peter – Grantee Submission, 2020
Response-time models are of increasing interest in educational and psychological testing. This paper focuses on the lognormal model for response times (van der Linden, 2006), which is one of the most popular response-time models. Several existing statistics for testing normality and the fit of factor-analysis models are repurposed for testing the…
Descriptors: Educational Testing, Psychological Testing, Goodness of Fit, Factor Analysis
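For context, the lognormal response-time model of van der Linden (2006) referenced in this abstract is usually stated as follows (a standard formulation; notation may differ slightly from the paper), where T_ij is the time person j spends on item i, β_i is the item's time intensity, τ_j the person's speed, and α_i the item's time discrimination:

```latex
\log T_{ij} \sim \mathcal{N}\!\left(\beta_i - \tau_j,\; \alpha_i^{-2}\right)
```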
Peer reviewed
Direct link
Luo, Yong; Liang, Xinya – Measurement: Interdisciplinary Research and Perspectives, 2019
Current methods that simultaneously model differential testlet functioning (DTLF) and differential item functioning (DIF) constrain the variances of latent ability and testlet effects to be equal between the focal and the reference groups. Such a constraint can be stringent and unrealistic with real data. In this study, we propose a multigroup…
Descriptors: Test Items, Item Response Theory, Test Bias, Models
Peer reviewed
Direct link
Ames, Allison J. – Educational and Psychological Measurement, 2022
Individual response style behaviors, unrelated to the latent trait of interest, may influence responses to ordinal survey items. Response style can introduce bias in the total score with respect to the trait of interest, threatening valid interpretation of scores. Despite claims of response style stability across scales, there has been little…
Descriptors: Response Style (Tests), Individual Differences, Scores, Test Items
Peer reviewed
Direct link
Qiao, Xin; Jiao, Hong – Journal of Educational Measurement, 2021
This study proposes an explanatory cognitive diagnostic model (CDM) that jointly incorporates item responses and response times (RTs), with item covariates related to both. The joint modeling of item responses and RTs is intended to provide more information for cognitive diagnosis, while the item covariates can be used to predict…
Descriptors: Cognitive Measurement, Models, Reaction Time, Test Items
Peer reviewed
Direct link
Zhan, Peida; Jiao, Hong; Man, Kaiwen; Wang, Lijun – Journal of Educational and Behavioral Statistics, 2019
In this article, we systematically introduce the Just Another Gibbs Sampler (JAGS) software program to fit common Bayesian cognitive diagnosis models (CDMs), including the deterministic inputs, noisy "and" gate model; the deterministic inputs, noisy "or" gate model; the linear logistic model; the reduced reparameterized unified…
Descriptors: Bayesian Statistics, Computer Software, Models, Test Items
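A minimal sketch of the DINA ("deterministic inputs, noisy and gate") model named in this abstract, simulated in Python rather than fit in JAGS; the function name, Q-matrix, and all parameter values are illustrative, not taken from the article:

```python
import numpy as np

# DINA model: a respondent answers item j correctly (up to slip/guess
# noise) iff they master every attribute the item requires (eta = 1):
#   P(X = 1) = (1 - slip)^eta * guess^(1 - eta)

def dina_prob(alpha, q, slip, guess):
    """alpha: (N, K) mastery profiles; q: (J, K) Q-matrix;
    slip, guess: (J,) item parameters. Returns (N, J) probabilities."""
    # eta[i, j] = 1 iff person i masters all attributes item j requires
    eta = (alpha[:, None, :] >= q[None, :, :]).all(axis=2)
    return np.where(eta, 1 - slip, guess)

rng = np.random.default_rng(0)
alpha = rng.integers(0, 2, size=(500, 3))        # random mastery profiles
q = np.array([[1, 0, 0],                         # item 1 needs attribute 1
              [0, 1, 0],                         # item 2 needs attribute 2
              [1, 1, 0],                         # item 3 needs 1 and 2
              [0, 0, 1]])                        # item 4 needs attribute 3
slip = np.full(4, 0.1)
guess = np.full(4, 0.2)

p = dina_prob(alpha, q, slip, guess)             # (500, 4) probabilities
x = (rng.uniform(size=p.shape) < p).astype(int)  # simulated responses
```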
Peer reviewed
Direct link
Pavel Chernyavskiy; Traci S. Kutaka; Carson Keeter; Julie Sarama; Douglas Clements – Grantee Submission, 2024
When researchers code behavior that is undetectable or falls outside of the validated ordinal scale, the resultant outcomes often suffer from informative missingness. Incorrect analysis of such data can lead to biased arguments around efficacy and effectiveness in the context of experimental and intervention research. Here, we detail a new…
Descriptors: Bayesian Statistics, Mathematics Instruction, Learning Trajectories, Item Response Theory
Peer reviewed
Direct link
Ross, Matthew M.; Wright, A. Michelle – Journal of Education for Business, 2022
We use Markov chain Monte Carlo (MCMC) analysis to construct a three-question math quiz to assess key skills needed for introductory finance. We begin with data collected from a ten-question criterion-referenced math quiz given to 314 undergraduates on the first day of class. MCMC indicates the top three questions for predicting overall course…
Descriptors: Mathematics Tests, Markov Processes, Monte Carlo Methods, Introductory Courses
Peer reviewed
Direct link
Park, Sung Eun; Ahn, Soyeon; Zopluoglu, Cengiz – Educational and Psychological Measurement, 2021
This study presents a new approach to synthesizing differential item functioning (DIF) effect size: First, using correlation matrices from each study, we perform a multigroup confirmatory factor analysis (MGCFA) that examines measurement invariance of a test item between two subgroups (i.e., focal and reference groups). Then we synthesize, across…
Descriptors: Item Analysis, Effect Size, Difficulty Level, Monte Carlo Methods
Peer reviewed
Direct link
Trendtel, Matthias; Robitzsch, Alexander – Journal of Educational and Behavioral Statistics, 2021
A multidimensional Bayesian item response model is proposed for modeling item position effects. The first dimension corresponds to the ability that is to be measured; the second dimension represents a factor that allows for individual differences in item position effects called persistence. This model allows for nonlinear item position effects on…
Descriptors: Bayesian Statistics, Item Response Theory, Test Items, Test Format
Joshua B. Gilbert; James S. Kim; Luke W. Miratrix – Annenberg Institute for School Reform at Brown University, 2022
Analyses that reveal how treatment effects vary allow researchers, practitioners, and policymakers to better understand the efficacy of educational interventions. In practice, however, standard statistical methods for addressing Heterogeneous Treatment Effects (HTE) fail to address the HTE that may exist within outcome measures. In this study, we…
Descriptors: Item Response Theory, Models, Formative Evaluation, Statistical Inference
Peer reviewed
Direct link
Ames, Allison J.; Leventhal, Brian C.; Ezike, Nnamdi C. – Measurement: Interdisciplinary Research and Perspectives, 2020
Data simulation and Monte Carlo simulation studies are important skills for researchers and practitioners of educational and psychological measurement, but there are few resources on the topic specific to item response theory. Even fewer resources exist on the statistical software techniques to implement simulation studies. This article presents…
Descriptors: Monte Carlo Methods, Item Response Theory, Simulation, Computer Software
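A minimal sketch of the kind of IRT data simulation this article is about, assuming a two-parameter logistic (2PL) model; the sample sizes and parameter distributions are illustrative, not drawn from the article:

```python
import numpy as np

rng = np.random.default_rng(42)

n_persons, n_items = 1000, 20
theta = rng.normal(0, 1, n_persons)      # latent abilities
a = rng.lognormal(0, 0.3, n_items)       # item discriminations
b = rng.normal(0, 1, n_items)            # item difficulties

# 2PL response probability: P(X = 1) = 1 / (1 + exp(-a * (theta - b)))
p = 1 / (1 + np.exp(-a * (theta[:, None] - b)))

# Draw dichotomous responses by comparing uniforms to probabilities
responses = (rng.uniform(size=p.shape) < p).astype(int)

print(responses.shape)  # (1000, 20)
```

In a full Monte Carlo study this generation step would be wrapped in a loop over replications and conditions, with item parameters re-estimated from each simulated data set.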
Peer reviewed
Direct link
Tsaousis, Ioannis; Sideridis, Georgios D.; AlGhamdi, Hannan M. – Journal of Psychoeducational Assessment, 2021
This study evaluated the psychometric quality of a computerized adaptive testing (CAT) version of the general cognitive ability test (GCAT), using the simulation study protocol put forth by Han (2018a). For the needs of the analysis, three different sets of items were generated, providing an item pool of 165 items. Before evaluating the…
Descriptors: Computer Assisted Testing, Adaptive Testing, Cognitive Tests, Cognitive Ability
Peer reviewed
Direct link
Lee, HyeSun; Smith, Weldon Z. – Educational and Psychological Measurement, 2020
Based on the framework of testlet models, the current study suggests the Bayesian random block item response theory (BRB IRT) model to fit forced-choice formats where an item block is composed of three or more items. To account for local dependence among items within a block, the BRB IRT model incorporated a random block effect into the response…
Descriptors: Bayesian Statistics, Item Response Theory, Monte Carlo Methods, Test Format
Peer reviewed
Direct link
Wanxue Zhang; Lingling Meng; Bilan Liang – Interactive Learning Environments, 2023
As education continues to develop, personalized learning has attracted great attention, and evaluating students' learning has become increasingly important. In information technology courses, traditional academic evaluation focuses on students' learning outcomes, such as "scores" or "right/wrong,"…
Descriptors: Information Technology, Computer Science Education, High School Students, Scoring
Peer reviewed
Direct link
Finch, Holmes; French, Brian F. – Applied Measurement in Education, 2019
The usefulness of item response theory (IRT) models depends, in large part, on the accuracy of item and person parameter estimates. For the standard three-parameter logistic (3PL) model, for example, these parameters include the item parameters of difficulty, discrimination, and pseudo-chance, as well as the person ability parameter. Several factors impact…
Descriptors: Item Response Theory, Accuracy, Test Items, Difficulty Level
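The 3PL model referenced in this abstract has the standard form below, with a_i the item discrimination, b_i the difficulty, c_i the pseudo-chance (lower asymptote) parameter, and θ_j the person ability:

```latex
P(X_{ij} = 1 \mid \theta_j) = c_i + (1 - c_i)\,\frac{1}{1 + e^{-a_i(\theta_j - b_i)}}
```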