| Publication Date | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 3 |
| Since 2022 (last 5 years) | 34 |
| Since 2017 (last 10 years) | 89 |
| Since 2007 (last 20 years) | 166 |
| Descriptor | Count |
| --- | --- |
| Error of Measurement | 258 |
| Monte Carlo Methods | 258 |
| Sample Size | 76 |
| Statistical Analysis | 67 |
| Computation | 62 |
| Statistical Bias | 61 |
| Comparative Analysis | 60 |
| Correlation | 56 |
| Simulation | 50 |
| Item Response Theory | 47 |
| Structural Equation Models | 44 |
| Author | Count |
| --- | --- |
| Ferron, John M. | 5 |
| Finch, W. Holmes | 5 |
| Leite, Walter L. | 5 |
| Finch, Holmes | 4 |
| Hancock, Gregory R. | 4 |
| Van den Noortgate, Wim | 4 |
| Beretvas, S. Natasha | 3 |
| Huang, Francis L. | 3 |
| Kwok, Oi-man | 3 |
| Lee, Sik-Yum | 3 |
| Stark, Stephen | 3 |
| Publication Type | Count |
| --- | --- |
| Journal Articles | 197 |
| Reports - Research | 172 |
| Reports - Evaluative | 63 |
| Speeches/Meeting Papers | 31 |
| Reports - Descriptive | 12 |
| Dissertations/Theses -… | 9 |
| Numerical/Quantitative Data | 2 |
| Information Analyses | 1 |
| Opinion Papers | 1 |
| Education Level | Count |
| --- | --- |
| Elementary Education | 7 |
| Grade 1 | 3 |
| Grade 2 | 3 |
| Grade 3 | 3 |
| Grade 5 | 3 |
| Higher Education | 3 |
| Early Childhood Education | 2 |
| Grade 4 | 2 |
| Intermediate Grades | 2 |
| Junior High Schools | 2 |
| Middle Schools | 2 |
| Audience | Count |
| --- | --- |
| Researchers | 9 |
Wang, Shaojie; Zhang, Minqiang; Lee, Won-Chan; Huang, Feifei; Li, Zonglong; Li, Yixing; Yu, Sufang – Journal of Educational Measurement, 2022
Traditional IRT characteristic curve linking methods ignore parameter estimation errors, which may undermine the accuracy of estimated linking constants. Two new linking methods are proposed that take into account parameter estimation errors. The item- (IWCC) and test-information-weighted characteristic curve (TWCC) methods employ weighting…
Descriptors: Item Response Theory, Error of Measurement, Accuracy, Monte Carlo Methods
Ben-Michael, Eli; Feller, Avi; Rothstein, Jesse – Grantee Submission, 2022
Staggered adoption of policies by different units at different times creates promising opportunities for observational causal inference. Estimation remains challenging, however, and common regression methods can give misleading results. A promising alternative is the synthetic control method (SCM), which finds a weighted average of control units…
Descriptors: Causal Models, Statistical Inference, Computation, Evaluation Methods
Simsek, Ahmet Salih – International Journal of Assessment Tools in Education, 2023
Likert-type item is the most popular response format for collecting data in social, educational, and psychological studies through scales or questionnaires. However, there is no consensus on whether parametric or non-parametric tests should be preferred when analyzing Likert-type data. This study examined the statistical power of parametric and…
Descriptors: Error of Measurement, Likert Scales, Nonparametric Statistics, Statistical Analysis
Kush, Joseph M.; Konold, Timothy R.; Bradshaw, Catherine P. – Grantee Submission, 2021
Multilevel structural equation (MSEM) models allow researchers to model latent factor structures at multiple levels simultaneously by decomposing within- and between-group variation. Yet the extent to which the sampling ratio (i.e., proportion of cases sampled from each group) influences the results of MSEM models remains unknown. This paper…
Descriptors: Sampling, Structural Equation Models, Factor Structure, Monte Carlo Methods
Arel-Bundock, Vincent – Sociological Methods & Research, 2022
Qualitative comparative analysis (QCA) is an influential methodological approach motivated by set theory and Boolean logic. QCA proponents have developed algorithms to analyze quantitative data, in a bid to uncover necessary and sufficient conditions where causal relationships are complex, conditional, or asymmetric. This article uses computer…
Descriptors: Comparative Analysis, Qualitative Research, Attribution Theory, Computer Simulation
Liu, Yixing; Thompson, Marilyn S. – Journal of Experimental Education, 2022
A simulation study was conducted to explore the impact of differential item functioning (DIF) on general factor difference estimation for bifactor, ordinal data. Common analysis misspecifications in which the generated bifactor data with DIF were fitted using models with equality constraints on noninvariant item parameters were compared under data…
Descriptors: Comparative Analysis, Item Analysis, Sample Size, Error of Measurement
Jobst, Lisa J.; Auerswald, Max; Moshagen, Morten – Educational and Psychological Measurement, 2022
Prior studies investigating the effects of non-normality in structural equation modeling typically induced non-normality in the indicator variables. This procedure neglects the factor analytic structure of the data, which is defined as the sum of latent variables and errors, so it is unclear whether previous results hold if the source of…
Descriptors: Goodness of Fit, Structural Equation Models, Error of Measurement, Factor Analysis
Koçak, Duygu – Pedagogical Research, 2020
The number of iterations used in the Monte Carlo simulation method, which is common in educational research, affects Item Response Theory test and item parameters. Related studies show that the number of iterations is left to the researcher's discretion; likewise, the related literature suggests no specific number of iterations.…
Descriptors: Monte Carlo Methods, Item Response Theory, Educational Research, Test Items
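The precision concern raised in this abstract, that results depend on the chosen number of Monte Carlo iterations, can be illustrated with a generic sketch. The distribution and replication counts below are illustrative assumptions, not taken from the study; the point is only that the spread of a Monte Carlo estimate shrinks roughly as 1/√n:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative sketch (not the study's design): estimate the mean of a
# standard normal with increasing numbers of Monte Carlo iterations and
# watch the spread of the estimate shrink roughly as 1/sqrt(n).
sds = {}
for n_iter in (100, 1_000, 10_000):
    estimates = [rng.normal(0.0, 1.0, n_iter).mean() for _ in range(500)]
    sds[n_iter] = float(np.std(estimates))
    print(f"{n_iter:>6} iterations: SD of estimate = {sds[n_iter]:.4f}")
```

Under these assumptions, each tenfold increase in iterations cuts the spread of the estimate by about √10, which is why iteration counts chosen ad hoc can yield noticeably different parameter recovery.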
Nazari, Sanaz; Leite, Walter L.; Huggins-Manley, A. Corinne – Journal of Experimental Education, 2023
The piecewise latent growth models (PWLGMs) can be used to study changes in the growth trajectory of an outcome due to an event or condition, such as exposure to an intervention. When there are multiple outcomes of interest, a researcher may choose to fit a series of PWLGMs or a single parallel-process PWLGM. A comparison of these models is…
Descriptors: Growth Models, Statistical Analysis, Intervention, Comparative Analysis
Montoya, Amanda K.; Edwards, Michael C. – Educational and Psychological Measurement, 2021
Model fit indices are being increasingly recommended and used to select the number of factors in an exploratory factor analysis. Growing evidence suggests that the recommended cutoff values for common model fit indices are not appropriate for use in an exploratory factor analysis context. A particularly prominent problem in scale evaluation is the…
Descriptors: Goodness of Fit, Factor Analysis, Cutting Scores, Correlation
Kirkup, Les; Frenkel, Bob – Physics Education, 2020
When the relationship between two physical variables, such as voltage and current, can be expressed as y = bx, where b is a constant, b may be estimated by least squares or by averaging the values of b obtained for each x-y data pair. We show for data gathered in an experiment, as well as through Monte Carlo simulation and mathematical analysis,…
Descriptors: Comparative Analysis, Least Squares Statistics, Monte Carlo Methods, Physics
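The two estimators compared in this abstract can be contrasted in a short simulation. A minimal sketch, assuming homoscedastic Gaussian noise in y and illustrative values for x, b, and the noise level (none of which come from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup for illustration: y = b*x with b = 2.0 and constant-variance
# noise in y; the x values are arbitrary.
b_true, sigma, n_trials = 2.0, 0.5, 20_000
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

ls_est, avg_est = [], []
for _ in range(n_trials):
    y = b_true * x + rng.normal(0.0, sigma, size=x.size)
    # Least-squares slope through the origin: b = sum(x*y) / sum(x^2)
    ls_est.append(np.sum(x * y) / np.sum(x * x))
    # Average of the per-pair ratios b_i = y_i / x_i
    avg_est.append(np.mean(y / x))

print(f"LS  variance: {np.var(ls_est):.5f}")
print(f"avg variance: {np.var(avg_est):.5f}")
```

Under these assumptions both estimators are unbiased, but the averaged-ratio estimator is noisier because pairs with small x contribute high-variance ratios y_i/x_i.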
van der Linden, Wim J.; Ren, Hao – Journal of Educational and Behavioral Statistics, 2020
The Bayesian way of accounting for the effects of error in the ability and item parameters in adaptive testing is through the joint posterior distribution of all parameters. An optimized Markov chain Monte Carlo algorithm for adaptive testing is presented, which samples this distribution in real time to score the examinee's ability and optimally…
Descriptors: Bayesian Statistics, Adaptive Testing, Error of Measurement, Markov Processes
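The core idea, sampling the posterior of an examinee's ability in real time, can be sketched with a plain random-walk Metropolis sampler for a Rasch model. This is a hedged illustration only: the item difficulties, responses, prior, and sampler settings below are assumptions, and the paper's optimized algorithm targets the joint posterior of all parameters, not just ability.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_posterior(theta, responses, difficulties):
    """Log posterior of ability under a Rasch model with a N(0, 1) prior."""
    p = 1.0 / (1.0 + np.exp(-(theta - difficulties)))
    loglik = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    return loglik - 0.5 * theta**2  # standard-normal prior, up to a constant

# Assumed example data: five items of known difficulty, mostly-correct responses.
difficulties = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
responses = np.array([1, 1, 1, 0, 1])

# Random-walk Metropolis on theta.
theta, draws = 0.0, []
for _ in range(5_000):
    proposal = theta + rng.normal(0.0, 0.8)
    lp_new = log_posterior(proposal, responses, difficulties)
    lp_old = log_posterior(theta, responses, difficulties)
    if np.log(rng.uniform()) < lp_new - lp_old:
        theta = proposal
    draws.append(theta)

posterior_mean = float(np.mean(draws[1_000:]))  # discard burn-in
print(f"posterior mean ability: {posterior_mean:.2f}")
```

Because the examinee answers four of five items correctly, the posterior mean lands well above the prior mean of zero while the N(0, 1) prior shrinks it back toward zero.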
Joshua B. Gilbert; James S. Kim; Luke W. Miratrix – Annenberg Institute for School Reform at Brown University, 2024
Longitudinal models of individual growth typically emphasize between-person predictors of change but ignore how growth may vary "within" persons because each person contributes only one point at each time to the model. In contrast, modeling growth with multi-item assessments allows evaluation of how relative item performance may shift…
Descriptors: Vocabulary Development, Item Response Theory, Test Items, Student Development
Joshua B. Gilbert; James S. Kim; Luke W. Miratrix – Applied Measurement in Education, 2024
Longitudinal models typically emphasize between-person predictors of change but ignore how growth varies "within" persons because each person contributes only one data point at each time. In contrast, modeling growth with multi-item assessments allows evaluation of how relative item performance may shift over time. While traditionally…
Descriptors: Vocabulary Development, Item Response Theory, Test Items, Student Development
Lu, Rui; Keller, Bryan Sean – AERA Online Paper Repository, 2019
When estimating an average treatment effect with observational data, it's possible to get an unbiased estimate of the causal effect if all confounding variables are observed and reliably measured. In education, confounding variables are often latent constructs. Covariate selection methods used in causal inference applications assume that all…
Descriptors: Factor Analysis, Predictor Variables, Monte Carlo Methods, Comparative Analysis
