Showing 1 to 15 of 21 results
Peer reviewed
Xi Wang; Catherine Welch – Journal of Educational Measurement, 2025
This study builds on prior research on adaptive testing by examining the performance of item calibration methods in the context of multidimensional multistage tests with within-item multidimensionality. Building on the adaptive module-level approach, where test-takers proceed through customized modules based on their initial performance, this…
Descriptors: Test Items, Adaptive Testing, Testing, Computer Simulation
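The module-level routing described above can be sketched in a few lines: the score on an initial routing module selects the next module. The thresholds and module labels below are illustrative assumptions, not details from Wang and Welch (2025).

```python
def route(routing_score: int, max_score: int) -> str:
    """Pick the next module from performance on the routing module.

    Cut points (0.4 and 0.7 of the maximum score) are hypothetical.
    """
    fraction = routing_score / max_score
    if fraction < 0.4:
        return "easy"
    if fraction < 0.7:
        return "medium"
    return "hard"

print(route(3, 10))  # low routing score -> easier second-stage module
print(route(9, 10))  # high routing score -> harder second-stage module
```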
Peer reviewed
Kubinger, Klaus D.; Rasch, Dieter; Yanagida, Takuya – Educational Research and Evaluation, 2011
Although calibration of achievement tests in psychological and educational contexts is very often carried out with the Rasch model, data sampling is rarely designed according to statistical foundations. However, Kubinger, Rasch, and Yanagida (2009) recently suggested an approach for determining sample size according to a given Type I and…
Descriptors: Sample Size, Simulation, Testing, Achievement Tests
Flowers, Claudia P.; And Others – 1997
An item response theory-based parametric procedure proposed by N. S. Raju, W. J. van der Linden, and P. F. Fleer (1995) known as differential functioning of items and tests (DFIT) can be used with unidimensional and multidimensional data with dichotomous or polytomous scoring. This study describes the polytomous DFIT framework and evaluates and…
Descriptors: Chi Square, Computer Simulation, Item Bias, Item Response Theory
Peer reviewed
Harwell, Michael R.; Janosky, Janine E. – Applied Psychological Measurement, 1991
Investigates the BILOG computer program's ability to recover known item parameters for different numbers of items, examinees, and variances of the prior distributions of discrimination parameters for the two-parameter logistic item-response theory model. For samples of at least 250 examinees and 15 items, simulation results support using BILOG.…
Descriptors: Bayesian Statistics, Computer Simulation, Estimation (Mathematics), Item Response Theory
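A parameter-recovery study of this kind starts by simulating responses under the two-parameter logistic (2PL) model. The sketch below generates Bernoulli responses at a fixed ability and checks that the empirical proportion correct approaches the model probability; the full marginal maximum likelihood estimation that BILOG performs is beyond its scope, and all parameter values are illustrative.

```python
import math
import random

def p_2pl(theta: float, a: float, b: float) -> float:
    """Two-parameter logistic probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def simulate(theta: float, a: float, b: float, n: int, rng: random.Random) -> float:
    """Proportion correct over n simulated responses at fixed theta."""
    p = p_2pl(theta, a, b)
    return sum(rng.random() < p for _ in range(n)) / n

rng = random.Random(0)
p_true = p_2pl(0.5, 1.2, 0.0)
p_hat = simulate(0.5, 1.2, 0.0, 20000, rng)
print(round(p_true, 3), round(p_hat, 3))  # nearly equal for large n
```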
Peer reviewed
Kim, Seock-Ho; And Others – Psychometrika, 1994
Hierarchical Bayes procedures for the two-parameter logistic item response model were compared for estimating item and ability parameters through two joint and two marginal Bayesian procedures. Marginal procedures yielded smaller root mean square differences for item and ability parameters, but results for larger sample sizes and test lengths were similar.…
Descriptors: Ability, Bayesian Statistics, Computer Simulation, Estimation (Mathematics)
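The root mean square difference criterion used in comparisons like this one is straightforward to compute; the true and estimated difficulty values below are illustrative, not taken from the study.

```python
import math

def rmsd(true_vals: list[float], est_vals: list[float]) -> float:
    """Root mean square difference between true and estimated parameters."""
    assert len(true_vals) == len(est_vals)
    return math.sqrt(
        sum((t - e) ** 2 for t, e in zip(true_vals, est_vals)) / len(true_vals)
    )

true_b = [-1.0, 0.0, 1.0]   # generating item difficulties (hypothetical)
est_b = [-0.9, 0.2, 1.1]    # recovered estimates (hypothetical)
print(round(rmsd(true_b, est_b), 4))  # 0.1414
```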
Peer reviewed
Schmitt, J. Eric; Mehta, Paras D.; Aggen, Steven H.; Kubarych, Thomas S.; Neale, Michael C. – Multivariate Behavioral Research, 2006
Ordered latent class analysis (OLCA) can be used to approximate unidimensional latent distributions. The main objective of this study is to evaluate the method of OLCA in detecting non-normality of an unobserved continuous variable (i.e., a common factor) used to explain the covariation between dichotomous item-level responses. Using simulation,…
Descriptors: Probability, Sample Size, Effect Size, Depression (Psychology)
Ackerman, Terry A.; Evans, John A. – 1993
A didactic example is provided, using a Monte Carlo method, of how differential item functioning (DIF) can be eliminated (and thus better understood) when the complete latent space is used. The main source of DIF is that the single matching criterion used in some DIF procedures, such as Mantel-Haenszel or the Simultaneous Item Bias Test (SIBTEST), does not account…
Descriptors: Computer Simulation, Equations (Mathematics), Item Bias, Item Response Theory
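The Mantel-Haenszel procedure mentioned above pools a common odds ratio over score-matched strata; a value near 1 indicates no DIF signal. The sketch below computes the standard pooled estimator on toy counts (the strata are invented for illustration).

```python
def mantel_haenszel_or(strata: list[tuple[int, int, int, int]]) -> float:
    """Mantel-Haenszel common odds ratio over score-matched strata.

    Each stratum is (ref_correct, ref_wrong, focal_correct, focal_wrong).
    """
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den

# Toy matched strata (illustrative counts, not from the paper):
strata = [
    (40, 10, 35, 15),  # high-score stratum
    (25, 25, 20, 30),  # middle stratum
    (10, 40, 8, 42),   # low-score stratum
]
print(round(mantel_haenszel_or(strata), 3))  # 1.513: item favors the reference group here
```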
Peer reviewed
Stark, Stephen; Drasgow, Fritz – Applied Psychological Measurement, 2002
Describes item response and information functions for the Zinnes and Griggs (1974) paired comparison item response theory (IRT) model and presents procedures for estimating stimulus and person parameters. Monte Carlo simulations show that at least 400 ratings are required to obtain reasonably accurate estimates of the stimulus parameters and their…
Descriptors: Comparative Analysis, Computer Simulation, Error of Measurement, Item Response Theory
Peer reviewed
Hambleton, Ronald K.; Jones, Russell W. – Applied Measurement in Education, 1994
The impact of capitalizing on chance in item selection on the accuracy of test information functions was studied through simulation, focusing on examinee sample size in item calibration and the ratio of item bank size to test length. (SLD)
Descriptors: Computer Simulation, Estimation (Mathematics), Item Banks, Item Response Theory
Peer reviewed
Barnes, Laura L. B.; Wise, Steven L. – Applied Measurement in Education, 1991
One-parameter and three-parameter item response theory (IRT) model estimates were compared with estimates obtained from two modified one-parameter models that incorporated a constant nonzero guessing parameter. Using small-sample simulation data (50, 100, and 200 simulated examinees), the modified one-parameter models were most effective in estimating…
Descriptors: Ability, Achievement Tests, Comparative Analysis, Computer Simulation
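A one-parameter model with a constant nonzero guessing parameter, as studied above, simply fixes the lower asymptote c instead of estimating it per item. A minimal sketch of that response function (the value c = 0.2 is an illustrative choice):

```python
import math

def p_1pl_guess(theta: float, b: float, c: float = 0.2) -> float:
    """One-parameter logistic model with a constant guessing parameter c:
    P = c + (1 - c) / (1 + exp(-(theta - b))).
    """
    return c + (1.0 - c) / (1.0 + math.exp(-(theta - b)))

# At theta == b the logistic term is 0.5, shifted up by guessing:
print(p_1pl_guess(0.0, 0.0, c=0.2))  # 0.2 + 0.8 * 0.5 = 0.6
```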
PDF pending restoration
Bush, M. Joan; Schumacker, Randall E. – 1993
The feasibility of quick norms derived by the procedure described by B. D. Wright and M. H. Stone (1979) was investigated. Norming differences between traditionally calculated means and Rasch "quick" means were examined for simulated data sets of varying sample size, test length, and type of distribution. A 5 by 5 by 2 design with a…
Descriptors: Computer Simulation, Item Response Theory, Norm Referenced Tests, Sample Size
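The logit transform underlying Rasch score-to-measure conversion gives the flavor of such "quick" norming. The sketch below is a generic simplification that ignores the spread of item difficulties, not the exact Wright and Stone (1979) procedure.

```python
import math

def score_to_logit(r: int, length: int) -> float:
    """Convert a raw score r on a test of the given length to a logit
    measure, ln(r / (length - r)). This simplification treats all items
    as equally difficult; r must satisfy 0 < r < length.
    """
    return math.log(r / (length - r))

print(round(score_to_logit(30, 40), 3))  # 1.099: above-average score -> positive logit
print(round(score_to_logit(20, 40), 3))  # 0.0: score at half the maximum
```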
De Ayala, R. J. – 1993
Previous work on the effects of dimensionality on parameter estimation was extended from dichotomous models to the polytomous graded response (GR) model. A multidimensional GR model was developed to generate data in one, two, and three dimensions, with the two- and three-dimensional conditions varying in their interdimensional associations. Test…
Descriptors: Computer Simulation, Correlation, Difficulty Level, Estimation (Mathematics)
Tang, K. Linda; And Others – 1993
This study compared the performance of the LOGIST and BILOG computer programs on item response theory (IRT) based scaling and equating for the Test of English as a Foreign Language (TOEFL) using real and simulated data and two calibration structures. Applications of IRT for the TOEFL program are based on the three-parameter logistic (3PL) model.…
Descriptors: Comparative Analysis, Computer Simulation, Equated Scores, Estimation (Mathematics)
Peer reviewed
Hambleton, Ronald K.; And Others – Journal of Educational Measurement, 1993
Item parameter estimation errors in test development are highlighted. The problem is illustrated with several simulated data sets, and a conservative solution is offered for addressing the problem in item response theory test development practice. Steps that reduce the problem of capitalizing on chance in item selections are suggested. (SLD)
Descriptors: Computer Simulation, Error of Measurement, Estimation (Mathematics), Item Banks
Hambleton, Ronald K.; Jones, Russell W. – 1993
Errors in item parameter estimates have a negative impact on the accuracy of item and test information functions. The estimation errors may be random, but because items with higher levels of discriminating power are more likely to be selected for a test, and these items are most apt to contain positive errors, the result is that item information…
Descriptors: Computer Simulation, Error of Measurement, Estimation (Mathematics), Item Banks
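The mechanism described above is easy to demonstrate by Monte Carlo: when estimated discriminations contain random error, selecting the items with the highest estimates preferentially picks items whose errors were positive, so the selected items look more informative than they are. The bank size, selection count, and error variance below are illustrative assumptions.

```python
import random

rng = random.Random(42)
n_bank, n_pick = 500, 20
true_a = [1.0] * n_bank                            # all items equally discriminating
est_a = [a + rng.gauss(0.0, 0.3) for a in true_a]  # estimates with calibration error

# Select the items with the highest *estimated* discrimination:
picked = sorted(range(n_bank), key=lambda i: est_a[i], reverse=True)[:n_pick]
mean_est = sum(est_a[i] for i in picked) / n_pick
mean_true = sum(true_a[i] for i in picked) / n_pick
print(round(mean_est, 2), mean_true)  # estimated mean exceeds the true mean of 1.0
```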