Showing all 13 results
Peer reviewed
Maeda, Hotaka; Zhang, Bo – International Journal of Testing, 2017
The omega (ω) statistic is reputed to be one of the best indices for detecting answer copying on multiple choice tests, but its performance relies on the accurate estimation of copier ability, which is challenging because responses from the copiers may have been contaminated. We propose an algorithm that aims to identify and delete the suspected…
Descriptors: Cheating, Test Items, Mathematics, Statistics
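For reference, the ω index described above is commonly defined as a standardized difference between observed and expected response matches between a suspected copier c and a source s (a sketch of the standard form; the exact variant studied here is not shown in the excerpt):

    \omega = \frac{m_{cs} - \sum_{i=1}^{n} P_i(\hat{\theta}_c)}{\sqrt{\sum_{i=1}^{n} P_i(\hat{\theta}_c)\bigl(1 - P_i(\hat{\theta}_c)\bigr)}}

where m_{cs} is the number of items on which the copier's responses match the source's and P_i(\hat{\theta}_c) is the model-implied probability that an examinee of estimated ability \hat{\theta}_c would give the source's response to item i. Contamination of the copier's responses biases \hat{\theta}_c, which is the estimation problem the proposed deletion algorithm targets.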
Peer reviewed
Chalmers, R. Philip; Counsell, Alyssa; Flora, David B. – Educational and Psychological Measurement, 2016
Differential test functioning, or DTF, occurs when one or more items in a test demonstrate differential item functioning (DIF) and the aggregate of these effects is observed at the test level. In many applications, DTF can be more important than DIF when the overall effects of DIF at the test level can be quantified. However, optimal statistical…
Descriptors: Test Bias, Sampling, Test Items, Statistical Analysis
Wu, Yi-Fang – ProQuest LLC, 2015
Item response theory (IRT) uses a family of statistical models for estimating stable characteristics of items and examinees and defining how these characteristics interact in describing item and test performance. With a focus on the three-parameter logistic IRT (Birnbaum, 1968; Lord, 1980) model, the current study examines the accuracy and…
Descriptors: Item Response Theory, Test Items, Accuracy, Computation
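The three-parameter logistic model named in this abstract gives the probability of a correct response to item i as a function of examinee ability θ, in its standard form:

    P_i(\theta) = c_i + (1 - c_i)\,\frac{1}{1 + \exp\bigl(-a_i(\theta - b_i)\bigr)}

where a_i, b_i, and c_i are the item discrimination, difficulty, and pseudo-guessing parameters whose estimation accuracy the study examines.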
Peer reviewed
Padilla, Miguel A.; Divers, Jasmin – Educational and Psychological Measurement, 2013
The performance of the normal theory bootstrap (NTB), the percentile bootstrap (PB), and the bias-corrected and accelerated (BCa) bootstrap confidence intervals (CIs) for coefficient omega was assessed through a Monte Carlo simulation under conditions not previously investigated. Of particular interest were nonnormal Likert-type and binary items.…
Descriptors: Sampling, Statistical Inference, Computation, Statistical Analysis
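Coefficient omega, the reliability estimate for which these bootstrap CIs are constructed, is conventionally defined from a single-factor model (a standard form, shown for reference):

    \omega = \frac{\left(\sum_{i=1}^{k} \lambda_i\right)^2}{\left(\sum_{i=1}^{k} \lambda_i\right)^2 + \sum_{i=1}^{k} \psi_i}

where \lambda_i are the factor loadings and \psi_i the unique variances of the k items. With nonnormal Likert-type or binary items, the sampling distribution of \hat{\omega} can depart from normality, which motivates comparing NTB, PB, and BCa intervals.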
Md Desa, Zairul Nor Deana – ProQuest LLC, 2012
In recent years, there has been increasing interest in estimating and improving subscore reliability. In this study, multidimensional item response theory (MIRT) and the bi-factor model were combined to estimate subscores and to obtain subscore reliability and subscore classification. Both the compensatory and partially compensatory MIRT…
Descriptors: Item Response Theory, Computation, Reliability, Classification
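In a bi-factor MIRT formulation of the kind described, each item loads on a general factor plus one group-specific (subscore) factor; for a 2PL-type item this is typically written as (a sketch of the standard parameterization, not necessarily the exact model used in the study):

    P(X_{ij} = 1 \mid \theta_{gj}, \theta_{sj}) = \frac{1}{1 + \exp\bigl(-(a_{ig}\theta_{gj} + a_{is}\theta_{sj} + d_i)\bigr)}

where \theta_{gj} is examinee j's general ability, \theta_{sj} is the specific dimension to which item i belongs, and a_{ig}, a_{is}, d_i are the item slopes and intercept; subscores and their reliabilities are then derived from the specific dimensions.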
Peer reviewed
Babcock, Ben – Applied Psychological Measurement, 2011
Relatively little research has been conducted with the noncompensatory class of multidimensional item response theory (MIRT) models. A Monte Carlo simulation study was conducted exploring the estimation of a two-parameter noncompensatory item response theory (IRT) model. The estimation method used was a Metropolis-Hastings within Gibbs algorithm…
Descriptors: Item Response Theory, Sampling, Computation, Statistical Analysis
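The two-parameter noncompensatory model estimated in such studies typically takes a product form (a standard sketch), so a deficit on one dimension cannot be offset by strength on another:

    P(X_{ij} = 1 \mid \boldsymbol{\theta}_j) = \prod_{d=1}^{D} \frac{1}{1 + \exp\bigl(-a_{id}(\theta_{jd} - b_{id})\bigr)}

where each dimension d contributes its own slope a_{id} and difficulty b_{id}. The product structure makes the likelihood harder to work with than in compensatory models, which motivates MCMC approaches such as the Metropolis-Hastings within Gibbs algorithm mentioned above.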
Peer reviewed
Allen, Nancy L.; Donoghue, John R. – Journal of Educational Measurement, 1996
Examined the effect of complex sampling of items on the measurement of differential item functioning (DIF) using the Mantel-Haenszel procedure through a Monte Carlo study. Suggests the superiority of the pooled booklet method when items are selected for examinees according to a balanced incomplete block design. Discusses implications for other DIF…
Descriptors: Item Bias, Monte Carlo Methods, Research Design, Sampling
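The Mantel-Haenszel DIF procedure referenced here is based on the common odds ratio pooled over matching-score strata k (standard form, shown for reference):

    \hat{\alpha}_{MH} = \frac{\sum_k A_k D_k / T_k}{\sum_k B_k C_k / T_k}

where, within stratum k, A_k and B_k are the numbers of reference-group examinees answering the studied item correctly and incorrectly, C_k and D_k are the corresponding focal-group counts, and T_k is the stratum total; values far from 1 indicate DIF. Complex sampling of items, as in a balanced incomplete block design, affects how these strata are formed, which is the issue the pooled booklet method addresses.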
Peer reviewed
Garg, Rashmi; And Others – Journal of Educational Measurement, 1986
For the purpose of obtaining data to use in test development, multiple matrix sampling plans were compared to examinee sampling plans. Data were simulated for examinees, sampled from a population with a normal distribution of ability, responding to items selected from an item universe. (Author/LMO)
Descriptors: Difficulty Level, Monte Carlo Methods, Sampling, Statistical Studies
Peer reviewed
Gressard, Risa P.; Loyd, Brenda H. – Journal of Educational Measurement, 1991
A Monte Carlo study, which simulated 10,000 examinees' responses to four tests, investigated the effect of item stratification on parameter estimation in multiple matrix sampling of achievement data. Practical multiple matrix sampling is based on item stratification by item discrimination and a sampling plan with a moderate number of subtests. (SLD)
Descriptors: Achievement Tests, Comparative Testing, Computer Simulation, Estimation (Mathematics)
Yan, Jean W. – 1997
Context-dependent items are traditionally analyzed independently, creating a situation in which the potential local item dependence effects among these items may cause a biased estimation of examinees' abilities. This study investigated the local item dependence effects on testlets in the tryout version of a statewide science assessment by a Rasch…
Descriptors: Ability, Cluster Analysis, Goodness of Fit, Item Response Theory
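For reference, and assuming the truncated abstract refers to the dichotomous Rasch model, that model posits under local independence that an item response depends only on examinee ability and item difficulty:

    P(X_{ij} = 1 \mid \theta_j, b_i) = \frac{\exp(\theta_j - b_i)}{1 + \exp(\theta_j - b_i)}

When context-dependent items within a testlet share unmodeled dependence, this assumption is violated, and treating the items as independent can bias the ability estimates \hat{\theta}_j, which is the effect the study investigates.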
Peer reviewed
Meijer, Rob R.; And Others – Applied Psychological Measurement, 1995
Three methods based on the nonparametric item response theory (IRT) of R. J. Mokken for the estimation of the reliability of single dichotomous test items are discussed. Analytical and Monte Carlo studies show that one method, designated "MS," is superior because of smaller bias and smaller sampling variance. (SLD)
Descriptors: Estimation (Mathematics), Item Response Theory, Monte Carlo Methods, Nonparametric Statistics
Meijer, Rob R.; And Others – 1994
Three methods for the estimation of the reliability of single dichotomous items are discussed. All methods are based on the assumptions of nondecreasing and nonintersecting item response functions and the Mokken model of double monotonicity. Based on analytical and Monte Carlo studies, it is concluded that one method is superior to the other two…
Descriptors: Estimation (Mathematics), Foreign Countries, Item Response Theory, Monte Carlo Methods
Allen, Nancy L.; Donoghue, John R. – 1995
This Monte Carlo study examined the effect of complex sampling of items on the measurement of differential item functioning (DIF) using the Mantel-Haenszel procedure. Data were generated using a three-parameter logistic item response theory model according to the balanced incomplete block (BIB) design used in the National Assessment of Educational…
Descriptors: Computer Assisted Testing, Difficulty Level, Elementary Secondary Education, Identification