| Publication Date | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 3 |
| Since 2022 (last 5 years) | 34 |
| Since 2017 (last 10 years) | 89 |
| Since 2007 (last 20 years) | 166 |
| Descriptor | Count |
| --- | --- |
| Error of Measurement | 258 |
| Monte Carlo Methods | 258 |
| Sample Size | 76 |
| Statistical Analysis | 67 |
| Computation | 62 |
| Statistical Bias | 61 |
| Comparative Analysis | 60 |
| Correlation | 56 |
| Simulation | 50 |
| Item Response Theory | 47 |
| Structural Equation Models | 44 |
| Author | Count |
| --- | --- |
| Ferron, John M. | 5 |
| Finch, W. Holmes | 5 |
| Leite, Walter L. | 5 |
| Finch, Holmes | 4 |
| Hancock, Gregory R. | 4 |
| Van den Noortgate, Wim | 4 |
| Beretvas, S. Natasha | 3 |
| Huang, Francis L. | 3 |
| Kwok, Oi-man | 3 |
| Lee, Sik-Yum | 3 |
| Stark, Stephen | 3 |
| Publication Type | Count |
| --- | --- |
| Journal Articles | 197 |
| Reports - Research | 172 |
| Reports - Evaluative | 63 |
| Speeches/Meeting Papers | 31 |
| Reports - Descriptive | 12 |
| Dissertations/Theses -… | 9 |
| Numerical/Quantitative Data | 2 |
| Information Analyses | 1 |
| Opinion Papers | 1 |
| Education Level | Count |
| --- | --- |
| Elementary Education | 7 |
| Grade 1 | 3 |
| Grade 2 | 3 |
| Grade 3 | 3 |
| Grade 5 | 3 |
| Higher Education | 3 |
| Early Childhood Education | 2 |
| Grade 4 | 2 |
| Intermediate Grades | 2 |
| Junior High Schools | 2 |
| Middle Schools | 2 |
| Audience | Count |
| --- | --- |
| Researchers | 9 |
Finch, W. Holmes – Educational and Psychological Measurement, 2020
Exploratory factor analysis (EFA) is widely used by researchers in the social sciences to characterize the latent structure underlying a set of observed indicator variables. One of the primary issues that must be resolved when conducting an EFA is determination of the number of factors to retain. There exist a large number of statistical tools…
Descriptors: Factor Analysis, Goodness of Fit, Social Sciences, Comparative Analysis
Hong, Sanghyun; Reed, W. Robert – Research Synthesis Methods, 2021
The purpose of this study is to show how Monte Carlo analysis of meta-analytic estimators can be used to select estimators for specific research situations. Our analysis conducts 1620 individual experiments, where each experiment is defined by a unique combination of sample size, effect size, effect size heterogeneity, publication selection…
Descriptors: Monte Carlo Methods, Meta Analysis, Research Methodology, Experiments
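A fully crossed design like the one described, where each experiment is a unique combination of factor levels, is straightforward to enumerate. The factor names and levels below are invented for illustration and do not reproduce the 1620-cell design of the study:

```python
import itertools

# Hypothetical factor levels for a factorial Monte Carlo design of
# meta-analytic estimators; these levels are assumptions chosen only
# to illustrate the crossed structure, not the study's actual design.
factors = {
    "sample_size": [20, 50, 100, 250],
    "effect_size": [0.0, 0.2, 0.5],
    "heterogeneity_tau2": [0.0, 0.1, 0.3],
    "publication_selection": ["none", "moderate", "strong"],
}

# Each experiment is one unique combination of factor levels.
experiments = [
    dict(zip(factors, combo))
    for combo in itertools.product(*factors.values())
]

print(len(experiments))  # 4 * 3 * 3 * 3 = 108 cells in this sketch
```

Each dictionary in `experiments` then parameterizes one simulation condition, within which many replicate meta-analyses would be generated and each estimator evaluated.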
Fan Pan – ProQuest LLC, 2021
This dissertation informed researchers about the performance of different level-specific and target-specific model fit indices in Multilevel Latent Growth Model (MLGM) using unbalanced design and different trajectories. As the use of MLGMs is a relatively new field, this study helped further the field by informing researchers interested in using…
Descriptors: Goodness of Fit, Item Response Theory, Growth Models, Monte Carlo Methods
Shear, Benjamin R.; Nordstokke, David W.; Zumbo, Bruno D. – Practical Assessment, Research & Evaluation, 2018
This computer simulation study evaluates the robustness of the nonparametric Levene test of equal variances (Nordstokke & Zumbo, 2010) when sampling from populations with unequal (and unknown) means. Testing for population mean differences when population variances are unknown and possibly unequal is often referred to as the Behrens-Fisher…
Descriptors: Nonparametric Statistics, Computer Simulation, Monte Carlo Methods, Sampling
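The rank-based Levene procedure the abstract refers to can be sketched in a few lines: pool the groups, replace observations with midranks, then run a one-way ANOVA on the absolute deviations of each rank from its group's mean rank. This is a minimal illustration of the general idea, not the authors' implementation, and the helper names are invented:

```python
from statistics import mean

def midranks(values):
    # 1-based ranks; tied values receive the average of their positions.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def nonparametric_levene_F(*groups):
    """Levene-type F statistic computed on pooled midranks -- a sketch
    of the rank-based approach, not the authors' code."""
    pooled_ranks = midranks([x for g in groups for x in g])
    # Split the pooled ranks back into their groups.
    rank_groups, start = [], 0
    for g in groups:
        rank_groups.append(pooled_ranks[start:start + len(g)])
        start += len(g)
    # Absolute deviations of each rank from its group's mean rank.
    devs = [[abs(r - mean(g)) for r in g] for g in rank_groups]
    # One-way ANOVA F on the deviations.
    all_d = [d for g in devs for d in g]
    grand = mean(all_d)
    k, n = len(devs), len(all_d)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in devs)
    ss_within = sum((d - mean(g)) ** 2 for g in devs for d in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F suggests unequal spread; because the test operates on ranks rather than raw scores, it is insensitive to the population mean differences that motivate the Behrens-Fisher setting.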
Leite, Walter L.; Aydin, Burak; Gurel, Sungur – Journal of Experimental Education, 2019
This Monte Carlo simulation study compares methods to estimate the effects of programs with multiple versions when assignment of individuals to program version is not random. These methods use generalized propensity scores, which are predicted probabilities of receiving a particular level of the treatment conditional on covariates, to remove…
Descriptors: Probability, Weighted Scores, Monte Carlo Methods, Statistical Bias
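The weighting idea in the abstract can be shown with a toy example: once a model has produced generalized propensity scores (predicted probabilities of each program version given covariates), each unit is weighted by the inverse of the probability of the version it actually received. The data, probabilities, and function below are invented for illustration and are not the estimators compared in the article:

```python
# Each record: (program version received, outcome,
#               {version: P(version | covariates)} from some fitted model).
# All numbers are made up for the sketch.
records = [
    ("A", 10.0, {"A": 0.6, "B": 0.3, "C": 0.1}),
    ("A", 12.0, {"A": 0.5, "B": 0.3, "C": 0.2}),
    ("B", 15.0, {"A": 0.2, "B": 0.5, "C": 0.3}),
    ("B", 14.0, {"A": 0.3, "B": 0.4, "C": 0.3}),
    ("C", 20.0, {"A": 0.1, "B": 0.2, "C": 0.7}),
    ("C", 18.0, {"A": 0.2, "B": 0.3, "C": 0.5}),
]

def ipw_means(records):
    """Weighted mean outcome per version, weighting each unit by the
    inverse of its generalized propensity score for the version it
    actually received (plain inverse-probability weighting)."""
    totals = {}
    for version, outcome, gps in records:
        w = 1.0 / gps[version]          # inverse-probability weight
        num, den = totals.get(version, (0.0, 0.0))
        totals[version] = (num + w * outcome, den + w)
    return {v: num / den for v, (num, den) in totals.items()}

print(ipw_means(records))
```

Units that received a version they were unlikely to receive get larger weights, which is what removes the covariate imbalance across versions; the methods the study compares differ chiefly in how such weights are constructed and stabilized.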
Chang, Wanchen; Pituch, Keenan A. – Journal of Experimental Education, 2019
When data for multiple outcomes are collected in a multilevel design, researchers can select a univariate or multivariate analysis to examine group-mean differences. When correlated outcomes are incomplete, a multivariate multilevel model (MVMM) may provide greater power than univariate multilevel models (MLMs). For a two-group multilevel design…
Descriptors: Hierarchical Linear Modeling, Multivariate Analysis, Research Problems, Error of Measurement
Park, Sung Eun; Ahn, Soyeon; Zopluoglu, Cengiz – Educational and Psychological Measurement, 2021
This study presents a new approach to synthesizing differential item functioning (DIF) effect size: First, using correlation matrices from each study, we perform a multigroup confirmatory factor analysis (MGCFA) that examines measurement invariance of a test item between two subgroups (i.e., focal and reference groups). Then we synthesize, across…
Descriptors: Item Analysis, Effect Size, Difficulty Level, Monte Carlo Methods
Paulsen, Justin; Valdivia, Dubravka Svetina – Journal of Experimental Education, 2022
Cognitive diagnostic models (CDMs) are a family of psychometric models designed to provide categorical classifications for multiple latent attributes. CDMs provide more granular evidence than other psychometric models and have potential for guiding teaching and learning decisions in the classroom. However, CDMs have primarily been conducted using…
Descriptors: Psychometrics, Classification, Teaching Methods, Learning Processes
Robert Meyer; Sara Hu; Michael Christian – Society for Research on Educational Effectiveness, 2022
This paper develops models to measure growth in student achievement with a focus on the possibility of differential growth in achievement for low and high-achieving students. We consider a gap-closing model that evaluates the degree to which students in a target group -- students in the bottom quartile of measured achievement -- perform better…
Descriptors: Academic Achievement, Achievement Gap, Models, Measurement Techniques
Tsaousis, Ioannis; Sideridis, Georgios D.; AlGhamdi, Hannan M. – Journal of Psychoeducational Assessment, 2021
This study evaluated the psychometric quality of a computerized adaptive testing (CAT) version of the general cognitive ability test (GCAT), using a simulation study protocol put forth by Han, K. T. (2018a). For the needs of the analysis, three different sets of items were generated, providing an item pool of 165 items. Before evaluating the…
Descriptors: Computer Assisted Testing, Adaptive Testing, Cognitive Tests, Cognitive Ability
Lee, HyeSun; Smith, Weldon Z. – Educational and Psychological Measurement, 2020
Based on the framework of testlet models, the current study suggests the Bayesian random block item response theory (BRB IRT) model to fit forced-choice formats where an item block is composed of three or more items. To account for local dependence among items within a block, the BRB IRT model incorporated a random block effect into the response…
Descriptors: Bayesian Statistics, Item Response Theory, Monte Carlo Methods, Test Format
Tong, Xin; Zhang, Zhiyong – Grantee Submission, 2020
Despite broad applications of growth curve models, few studies have dealt with a practical issue -- nonnormality of data. Previous studies have used Student's "t" distributions to remedy the nonnormal problems. In this study, robust distributional growth curve models are proposed from a semiparametric Bayesian perspective, in which…
Descriptors: Robustness (Statistics), Bayesian Statistics, Models, Error of Measurement
Monroe, Scott – Journal of Educational and Behavioral Statistics, 2019
In item response theory (IRT) modeling, the Fisher information matrix is used for numerous inferential procedures such as estimating parameter standard errors, constructing test statistics, and facilitating test scoring. In principle, these procedures may be carried out using either the expected information or the observed information. However, in…

Descriptors: Item Response Theory, Error of Measurement, Scoring, Inferences
Finch, Holmes; French, Brian F. – Applied Measurement in Education, 2019
The usefulness of item response theory (IRT) models depends, in large part, on the accuracy of item and person parameter estimates. For the standard 3 parameter logistic model, for example, these parameters include the item parameters of difficulty, discrimination, and pseudo-chance, as well as the person ability parameter. Several factors impact…
Descriptors: Item Response Theory, Accuracy, Test Items, Difficulty Level
Rubio-Aparicio, María; López-López, José Antonio; Sánchez-Meca, Julio; Marín-Martínez, Fulgencio; Viechtbauer, Wolfgang; Van den Noortgate, Wim – Research Synthesis Methods, 2018
The random-effects model, applied in most meta-analyses nowadays, typically assumes normality of the distribution of the effect parameters. The purpose of this study was to examine the performance of various random-effects methods (standard method, Hartung's method, profile likelihood method, and bootstrapping) for computing an average effect size…
Descriptors: Effect Size, Meta Analysis, Intervals, Monte Carlo Methods
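The "standard method" among those compared is commonly the DerSimonian-Laird method-of-moments estimator. A minimal sketch, assuming known within-study variances (this is an illustration of the general estimator, not the authors' code):

```python
def dersimonian_laird(effects, variances):
    """Random-effects average effect size via the DerSimonian-Laird
    method-of-moments estimator of the between-study variance tau^2."""
    k = len(effects)
    w = [1.0 / v for v in variances]                  # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sw
    # Cochran's Q and the method-of-moments tau^2 (truncated at zero).
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Random-effects weights fold tau^2 into each study's variance.
    w_star = [1.0 / (v + tau2) for v in variances]
    mu = sum(wi * y for wi, y in zip(w_star, effects)) / sum(w_star)
    se = (1.0 / sum(w_star)) ** 0.5
    return mu, se, tau2
```

The alternatives the study examines (Hartung's method, profile likelihood, bootstrapping) keep the same random-effects model but change how the average effect's confidence interval is computed, which matters when the normality assumption on the effect parameters fails.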

