Showing 1 to 15 of 22 results
Peer reviewed
Dai, Yunyun – Applied Psychological Measurement, 2013
Mixtures of item response theory (IRT) models have been proposed as a technique to explore response patterns in test data related to cognitive strategies, instructional sensitivity, and differential item functioning (DIF). Estimation proves challenging due to difficulties in identification and questions of effect size needed to recover underlying…
Descriptors: Item Response Theory, Test Bias, Computation, Bayesian Statistics
Peer reviewed
Padilla, Miguel A.; Divers, Jasmin; Newton, Matthew – Applied Psychological Measurement, 2012
Three different bootstrap methods for estimating confidence intervals (CIs) for coefficient alpha were investigated. In addition, the bootstrap methods were compared with the most promising coefficient alpha CI estimation methods reported in the literature. The CI methods were assessed through a Monte Carlo simulation utilizing conditions…
Descriptors: Intervals, Monte Carlo Methods, Computation, Sampling
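As context for the bootstrap comparison above, a minimal sketch (not the authors' code; the function and variable names are illustrative) of a nonparametric percentile-bootstrap confidence interval for coefficient alpha, the simplest of the bootstrap variants such a study typically includes:

```python
# Percentile bootstrap CI for Cronbach's alpha (illustrative sketch).
import numpy as np

def cronbach_alpha(X):
    """X: (n_persons, k_items) score matrix."""
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def bootstrap_alpha_ci(X, n_boot=2000, level=0.95, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    boots = np.array([
        cronbach_alpha(X[rng.integers(0, n, n)]) for _ in range(n_boot)
    ])
    lo, hi = np.percentile(boots, [100 * (1 - level) / 2,
                                   100 * (1 + level) / 2])
    return lo, hi

# Example with simulated 5-item data
rng = np.random.default_rng(1)
true_score = rng.normal(size=(500, 1))
X = true_score + rng.normal(scale=1.0, size=(500, 5))
print(cronbach_alpha(X), bootstrap_alpha_ci(X))
```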
Peer reviewed
Hung, Lai-Fa – Applied Psychological Measurement, 2012
Rasch used a Poisson model to analyze errors and speed in reading tests. An important property of the Poisson distribution is that the mean and variance are equal. However, in social science research, it is very common for the variance to be greater than the mean (i.e., the data are overdispersed). This study embeds the Rasch model within an…
Descriptors: Social Science Research, Markov Processes, Reading Tests, Social Sciences
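To make the overdispersion point concrete, a small assumed illustration (not from the article): for Poisson counts the variance-to-mean ratio is near 1, while counts with extra person-level variation push it well above 1, which is what motivates extending the Poisson/Rasch speed model.

```python
# Dispersion check: Poisson counts versus overdispersed counts.
import numpy as np

rng = np.random.default_rng(0)

poisson_counts = rng.poisson(lam=4.0, size=10_000)
# Negative binomial counts with the same mean (4.0) but extra variation
overdispersed = rng.negative_binomial(n=2, p=2 / (2 + 4.0), size=10_000)

for name, y in [("Poisson", poisson_counts), ("overdispersed", overdispersed)]:
    mean, var = y.mean(), y.var(ddof=1)
    print(f"{name:>13}: mean={mean:.2f}  var={var:.2f}  dispersion={var/mean:.2f}")
```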
Peer reviewed
Seybert, Jacob; Stark, Stephen – Applied Psychological Measurement, 2012
A Monte Carlo study was conducted to examine the accuracy of differential item functioning (DIF) detection using the differential functioning of items and tests (DFIT) method. Specifically, the performance of DFIT was compared using "testwide" critical values suggested by Flowers, Oshima, and Raju, based on simulations involving large numbers of…
Descriptors: Test Bias, Monte Carlo Methods, Form Classes (Languages), Simulation
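As a rough sketch of the quantity DFIT evaluates (with hypothetical 2PL item parameters, not values from the study), the noncompensatory DIF index NCDIF is the mean squared difference between focal- and reference-group item characteristic curves, taken over focal-group abilities:

```python
# NCDIF sketch under assumed 2PL item response functions.
import numpy as np

def icc_2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

rng = np.random.default_rng(0)
theta_focal = rng.normal(size=5000)      # focal-group abilities

# Item parameters estimated separately in each group (hypothetical values)
a_ref, b_ref = 1.2, 0.0
a_foc, b_foc = 1.2, 0.4                  # shifted difficulty => DIF

ncdif = np.mean((icc_2pl(theta_focal, a_foc, b_foc) -
                 icc_2pl(theta_focal, a_ref, b_ref)) ** 2)
print(f"NCDIF = {ncdif:.4f}")  # compared against a critical value in DFIT
```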
Peer reviewed
Kieftenbeld, Vincent; Natesan, Prathiba – Applied Psychological Measurement, 2012
Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…
Descriptors: Test Length, Markov Processes, Item Response Theory, Monte Carlo Methods
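For reference, a minimal sketch (assumed parameter values) of Samejima's graded response model whose parameters are being recovered: category probabilities are differences of adjacent cumulative 2PL curves.

```python
# Graded response model category probabilities (illustrative values).
import numpy as np

def grm_probs(theta, a, b):
    """b: ordered thresholds for an item with len(b)+1 categories."""
    cum = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(b))))  # P(X >= k), k=1..m
    cum = np.concatenate(([1.0], cum, [0.0]))
    return cum[:-1] - cum[1:]                                  # P(X = k), k=0..m

print(grm_probs(theta=0.5, a=1.5, b=[-1.0, 0.0, 1.2]))  # sums to 1
```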
Peer reviewed
Yurdugul, Halil – Applied Psychological Measurement, 2009
This article describes SIMREL, a software program designed for the simulation of alpha coefficients and the estimation of their confidence intervals. SIMREL offers two modes of operation. In the first, when SIMREL is run on a single data file, it performs descriptive statistics, principal components analysis, and variance analysis of the item scores…
Descriptors: Intervals, Monte Carlo Methods, Computer Software, Factor Analysis
Peer reviewed
de la Torre, Jimmy; Song, Hao – Applied Psychological Measurement, 2009
Assessments consisting of different domains (e.g., content areas, objectives) are typically multidimensional in nature but are commonly assumed to be unidimensional for estimation purposes. The different domains of these assessments are further treated as multi-unidimensional tests for the purpose of obtaining diagnostic information. However, when…
Descriptors: Ability, Tests, Item Response Theory, Data Analysis
Peer reviewed
Froelich, Amy G.; Habing, Brian – Applied Psychological Measurement, 2008
DIMTEST is a nonparametric hypothesis-testing procedure designed to test the assumptions of a unidimensional and locally independent item response theory model. Several previous Monte Carlo studies have found that using linear factor analysis to select the assessment subtest for DIMTEST results in a moderate to severe loss of power when the exam…
Descriptors: Test Items, Monte Carlo Methods, Form Classes (Languages), Program Effectiveness
Peer reviewed
Woods, Carol M. – Applied Psychological Measurement, 2008
In Ramsay-curve item response theory (RC-IRT), the latent variable distribution is estimated simultaneously with the item parameters. In extant Monte Carlo evaluations of RC-IRT, the item response function (IRF) used to fit the data is the same one used to generate the data. The present simulation study examines RC-IRT when the IRF is imperfectly…
Descriptors: Simulation, Item Response Theory, Monte Carlo Methods, Comparative Analysis
Peer reviewed
Cohen, Allan S.; Kane, Michael T.; Kim, Seock-Ho – Applied Psychological Measurement, 2001
Discusses why indefinitely increasing the number of replications in Monte Carlo simulation studies is not necessary to achieve satisfactory precision, and offers guidelines, in the context of error tolerance analysis, for determining how much precision is needed. (SLD)
Descriptors: Monte Carlo Methods, Simulation
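The error-tolerance idea can be illustrated with a back-of-the-envelope calculation (an assumed sketch, not the authors' analysis): the Monte Carlo standard error of a mean outcome shrinks as sd/sqrt(R), so the number of replications R needed for a target half-width follows directly.

```python
# Replications needed for a given Monte Carlo error tolerance.
import math

def replications_needed(sd, tolerance, z=1.96):
    """Replications so the half-width of a ~95% interval is <= tolerance."""
    return math.ceil((z * sd / tolerance) ** 2)

# e.g. an outcome (say, an RMSE estimate) with sd ~ 0.05 across replications,
# and we want its mean known to within +/- 0.005:
print(replications_needed(sd=0.05, tolerance=0.005))  # 385
```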
Peer reviewed
Alsawalmeh, Yousef M.; Feldt, Leonard S. – Applied Psychological Measurement, 1999
Developed an approximate statistical test for the hypothesis of equality between the Spearman-Brown extrapolations of two independent values of Cronbach's alpha reliability coefficient. Monte Carlo simulations demonstrate that the procedure effectively controls Type I error. (SLD)
Descriptors: Monte Carlo Methods, Reliability, Simulation
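For context, the Spearman-Brown extrapolation involved in the hypothesis is the standard prophecy formula applied to an observed alpha; the approximate test itself is not reproduced here.

```python
# Spearman-Brown extrapolation of a reliability coefficient to a new test length.
def spearman_brown(alpha, old_len, new_len):
    k = new_len / old_len
    return k * alpha / (1 + (k - 1) * alpha)

# alpha = .70 on a 10-item form, projected to a 20-item form:
print(spearman_brown(0.70, old_len=10, new_len=20))  # ~0.82
```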
Peer reviewed
Hoshino, Takahiro; Shigemasu, Kazuo – Applied Psychological Measurement, 2008
The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte…
Descriptors: Monte Carlo Methods, Markov Processes, Factor Analysis, Computation
Peer reviewed
Raju, Nambury S.; Brand, Paul A. – Applied Psychological Measurement, 2003
Proposed a new asymptotic formula for estimating the sampling variance of a correlation coefficient corrected for unreliability and range restriction. A Monte Carlo simulation study of the new formula results in several positive conclusions about the new approach. (SLD)
Descriptors: Correlation, Monte Carlo Methods, Reliability, Sampling
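The article's new variance formula is not reproduced here; as background, a minimal sketch of the two standard point corrections it concerns, disattenuation for unreliability and Thorndike's Case II correction for direct range restriction (one common ordering of the two steps is assumed).

```python
# Standard corrections for unreliability and range restriction (illustrative).
import math

def disattenuate(r, rxx, ryy):
    """Correct an observed correlation for measurement error in x and y."""
    return r / math.sqrt(rxx * ryy)

def correct_range_restriction(r, u):
    """Thorndike Case II correction; u = unrestricted SD / restricted SD."""
    return (u * r) / math.sqrt(1 + r ** 2 * (u ** 2 - 1))

r_obs = 0.30
r_step1 = correct_range_restriction(r_obs, u=1.5)       # undo range restriction
r_corrected = disattenuate(r_step1, rxx=0.85, ryy=0.80)  # undo unreliability
print(round(r_step1, 3), round(r_corrected, 3))
```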
Peer reviewed
de la Torre, Jimmy – Applied Psychological Measurement, 2008
Recent work has shown that multidimensionally scoring responses from different tests can provide better ability estimates. For educational assessment data, applications of this approach have been limited to binary scores. Of the different variants, the de la Torre and Patz model is considered more general because implementing the scoring procedure…
Descriptors: Markov Processes, Scoring, Data Analysis, Item Response Theory
Peer reviewed
Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun – Applied Psychological Measurement, 2002
Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)
Descriptors: Estimation (Mathematics), Markov Processes, Monte Carlo Methods, Simulation
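As a reference point, a minimal sketch (made-up parameter values) of Bock's nominal response model whose item parameters the study recovers with MML and MCMC: each category gets a slope and intercept, and category probabilities follow a multinomial-logit form.

```python
# Nominal response model category probabilities (illustrative values).
import numpy as np

def nrm_probs(theta, a, c):
    """P(category k | theta) = exp(a_k*theta + c_k) / sum_j exp(a_j*theta + c_j)."""
    z = np.asarray(a) * theta + np.asarray(c)
    z -= z.max()                      # numerical stability
    ez = np.exp(z)
    return ez / ez.sum()

# A 4-category item with made-up slopes and intercepts:
print(nrm_probs(theta=1.0, a=[0.0, 0.6, 1.1, 1.8], c=[0.0, 0.3, -0.2, -1.0]))
```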