| Publication Date | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 5 |
| Since 2007 (last 20 years) | 18 |
| Descriptor | Records |
| --- | --- |
| Correlation | 30 |
| Effect Size | 30 |
| Monte Carlo Methods | 30 |
| Sample Size | 13 |
| Comparative Analysis | 8 |
| Statistical Analysis | 8 |
| Statistical Bias | 8 |
| Error of Measurement | 7 |
| Evaluation Methods | 7 |
| Meta Analysis | 7 |
| Simulation | 7 |
| Author | Records |
| --- | --- |
| Porter, Kristin E. | 3 |
| Ahn, Soyeon | 2 |
| Hittner, James B. | 2 |
| Kromrey, Jeffrey D. | 2 |
| May, Kim | 2 |
| Onghena, Patrick | 2 |
| Thompson, Bruce | 2 |
| Barcikowski, Robert S. | 1 |
| Batley, Prathiba Natesan | 1 |
| Brooks, Gordon P. | 1 |
| Bulte, Isis | 1 |
| Publication Type | Records |
| --- | --- |
| Journal Articles | 20 |
| Reports - Research | 15 |
| Reports - Evaluative | 9 |
| Dissertations/Theses -… | 3 |
| Guides - Non-Classroom | 3 |
| Speeches/Meeting Papers | 3 |
| Education Level | Records |
| --- | --- |
| Adult Education | 1 |
| Higher Education | 1 |
| Secondary Education | 1 |
| Audience | Records |
| --- | --- |
| Researchers | 6 |
| Location | Records |
| --- | --- |
| China (Shanghai) | 1 |
| Finland | 1 |
| Assessments and Surveys | Records |
| --- | --- |
| Program for International… | 1 |
Batley, Prathiba Natesan; Minka, Tom; Hedges, Larry Vernon – Grantee Submission, 2020
Immediacy is one of the necessary criteria for showing strong evidence of a treatment effect in single-case experimental designs (SCEDs). With the exception of Natesan and Hedges (2017), no inferential statistical tool has been used to demonstrate or quantify it until now. We investigate and quantify immediacy by treating the change-points between the…
Descriptors: Bayesian Statistics, Monte Carlo Methods, Statistical Inference, Markov Processes
Park, Sung Eun; Ahn, Soyeon; Zopluoglu, Cengiz – Educational and Psychological Measurement, 2021
This study presents a new approach to synthesizing differential item functioning (DIF) effect size: First, using correlation matrices from each study, we perform a multigroup confirmatory factor analysis (MGCFA) that examines measurement invariance of a test item between two subgroups (i.e., focal and reference groups). Then we synthesize, across…
Descriptors: Item Analysis, Effect Size, Difficulty Level, Monte Carlo Methods
Porter, Kristin E. – Journal of Research on Educational Effectiveness, 2018
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
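The multiple testing procedures (MTPs) described in this and the related Porter entries can be illustrated with two standard examples: the Bonferroni correction (familywise error control) and the Benjamini-Hochberg step-up procedure (false discovery rate control). This is a generic stdlib-only sketch, not the specific MTPs evaluated in those papers.

```python
# Two classic multiple testing procedures (MTPs): Bonferroni controls the
# familywise error rate; Benjamini-Hochberg controls the false discovery rate.

def bonferroni(p_values, alpha=0.05):
    """Reject H0 for each p-value at or below alpha / m."""
    m = len(p_values)
    return [p <= alpha / m for p in p_values]

def benjamini_hochberg(p_values, alpha=0.05):
    """Step-up procedure: find the largest rank k with p_(k) <= (k / m) * alpha
    and reject the k hypotheses with the smallest p-values."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * alpha:
            k_max = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

p = [0.001, 0.008, 0.025, 0.041, 0.20]
print(bonferroni(p))           # [True, True, False, False, False]
print(benjamini_hochberg(p))   # [True, True, True, False, False]
```

On the same p-values, Benjamini-Hochberg rejects one more hypothesis than Bonferroni, which is the usual trade-off: more power in exchange for controlling the less stringent false-discovery-rate criterion.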
Porter, Kristin E. – Grantee Submission, 2017
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
Kelly, Sean; Ye, Feifei – Journal of Experimental Education, 2017
Educational analysts studying achievement and other educational outcomes frequently encounter an association between initial status and growth, which has important implications for the analysis of covariate effects, including group differences in growth. As explicated by Allison (1990), where only two time points of data are available, identifying…
Descriptors: Regression (Statistics), Models, Error of Measurement, Scores
Porter, Kristin E. – MDRC, 2016
In education research and in many other fields, researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
Jin, Ying; Myers, Nicholas D.; Ahn, Soyeon – Educational and Psychological Measurement, 2014
Previous research has demonstrated that differential item functioning (DIF) methods that do not account for multilevel data structure could result in too frequent rejection of the null hypothesis (i.e., no DIF) when the intraclass correlation coefficient (ρ) of the studied item was the same as the ρ of the total score. The current study extended…
Descriptors: Test Bias, Correlation, Scores, Comparative Analysis
Coughlin, Kevin B. – ProQuest LLC, 2013
This study is intended to provide researchers with empirically derived guidelines for conducting factor analytic studies in research contexts that include dichotomous and continuous levels of measurement. This study is based on the hypotheses that ordinary least squares (OLS) factor analysis will yield more accurate parameter estimates than…
Descriptors: Comparative Analysis, Least Squares Statistics, Maximum Likelihood Statistics, Factor Analysis
Skidmore, Susan Troncoso; Thompson, Bruce – Journal of Experimental Education, 2011
In the present Monte Carlo simulation study, the authors compared bias and precision of 7 sampling error corrections to the Pearson r² under 6 x 3 x 6 conditions (i.e., population ρ values of 0.0, 0.1, 0.3, 0.5, 0.7, and 0.9, respectively; population shapes normal, skewness = kurtosis = 1, and skewness = -1.5 with kurtosis =…
Descriptors: Monte Carlo Methods, Sampling, Error Correction, Statistical Bias
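The upward bias that such corrections target is easy to reproduce with a small Monte Carlo run. The sketch below uses Ezekiel's adjusted r² as a stand-in correction; it illustrates the general idea only and does not reproduce the seven corrections compared in the study.

```python
# Monte Carlo sketch: the sample r^2 overestimates the population rho^2 in
# small samples, and an adjustment (here Ezekiel's familiar "adjusted R^2")
# pulls the average back toward the population value.
import random

def ezekiel_adjusted(r2, n, k=1):
    """Adjusted r^2 for k predictors: 1 - (1 - r^2) * (n - 1) / (n - k - 1)."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

def sample_r2(n, rho, rng):
    """Draw n bivariate-normal pairs with population correlation rho; return r^2."""
    xs, ys = [], []
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        e = rng.gauss(0.0, 1.0)
        ys.append(rho * x + (1.0 - rho**2) ** 0.5 * e)
        xs.append(x)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy**2 / (sxx * syy)

rng = random.Random(1)
n, rho, reps = 20, 0.3, 2000
raw = [sample_r2(n, rho, rng) for _ in range(reps)]
adj = [ezekiel_adjusted(r2, n) for r2 in raw]
print(f"population rho^2 = {rho**2:.3f}")
print(f"mean raw r^2     = {sum(raw) / reps:.3f}")  # biased upward
print(f"mean adjusted    = {sum(adj) / reps:.3f}")  # much closer to rho^2
```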
Itang'ata, Mukaria J. J. – ProQuest LLC, 2013
Often researchers face situations where comparative studies between two or more programs are necessary to make causal inferences for informed policy decision-making. Experimental designs employing randomization provide the strongest evidence for causal inferences. However, many pragmatic and ethical challenges may preclude the use of randomized…
Descriptors: Comparative Analysis, Probability, Statistical Bias, Monte Carlo Methods
Ning, Bo; Van Damme, Jan; Gielen, Sarah; Vanlaar, Gudrun; Van den Noortgate, Wim – Scandinavian Journal of Educational Research, 2016
Finland and Shanghai are strong performers in the Program for International Student Assessment (PISA). The current study explored the similarities and differences in educational effectiveness between these 2 strong performers. To this end, 14 predictors representing student background and school process characteristics were selected from the PISA…
Descriptors: Foreign Countries, Reading Achievement, Comparative Education, Instructional Effectiveness
Wilson, Celia M. – ProQuest LLC, 2010
Research pertaining to the distortion of the squared canonical correlation coefficient has traditionally been limited to the effects of sampling error and associated correction formulas. The purpose of this study was to compare the degree of attenuation of the squared canonical correlation coefficient under varying conditions of score reliability.…
Descriptors: Monte Carlo Methods, Measurement, Multivariate Analysis, Error of Measurement
Manolov, Rumen; Solanas, Antonio; Bulte, Isis; Onghena, Patrick – Journal of Experimental Education, 2010
This study deals with the statistical properties of a randomization test applied to an ABAB design in cases where the desirable random assignment of the points of change in phase is not possible. To obtain information about each possible data division, the authors carried out a conditional Monte Carlo simulation with 100,000 samples for each…
Descriptors: Monte Carlo Methods, Effect Size, Simulation, Evaluation Methods
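The logic of a randomization test on phase designs can be sketched for the simpler AB case: the test statistic at the actual change point is compared against its value at every admissible change point. This is an illustrative simplification of the ABAB conditional Monte Carlo procedure described above; the `min_phase` floor and the example series are assumptions, not details from the study.

```python
# Exact randomization test for a single-case AB phase design: under the null,
# the observed B-minus-A mean difference should not stand out among the
# differences obtained at all admissible change points.

def ab_randomization_test(data, actual_cp, min_phase=3):
    """One-sided p-value, treating every admissible change point as an
    equally likely randomization of the intervention start."""
    n = len(data)

    def diff(cp):
        phase_a, phase_b = data[:cp], data[cp:]
        return sum(phase_b) / len(phase_b) - sum(phase_a) / len(phase_a)

    admissible = list(range(min_phase, n - min_phase + 1))
    observed = diff(actual_cp)
    as_extreme = sum(1 for cp in admissible if diff(cp) >= observed)
    return observed, as_extreme / len(admissible)

# Hypothetical series: the intervention starts after the 5th observation.
scores = [2, 3, 2, 3, 2, 6, 7, 6, 7, 6]
observed, p = ab_randomization_test(scores, actual_cp=5)
print(observed, p)  # the true change point gives the largest shift: p = 0.2
```

With only five admissible change points the smallest attainable p-value is 1/5 = 0.2, which is exactly the power limitation that motivates the conditional Monte Carlo approach for longer, multi-phase (ABAB) series.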
Dong, Nianbo; Lipsey, Mark – Society for Research on Educational Effectiveness, 2010
This study uses simulation techniques to examine the statistical power of the group- randomized design and the matched-pair (MP) randomized block design under various parameter combinations. Both nearest neighbor matching and random matching are used for the MP design. The power of each design for any parameter combination was calculated from…
Descriptors: Simulation, Statistical Analysis, Cluster Grouping, Mathematical Models
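The general recipe for computing power by simulation, as in this study, is: draw data under an assumed effect size, run the test, and report the rejection rate across many replications. The sketch below applies the recipe to a plain two-group z-test; it does not model the cluster structure or matching that the study examines.

```python
# Power by simulation: the rejection rate of a two-sided z-test (alpha = .05)
# on the mean difference between two equal groups with unit variances.
import math
import random

def simulated_power(n_per_group, effect_size, reps=4000, seed=7):
    """Monte Carlo estimate of power for a two-group z-test at alpha = .05."""
    rng = random.Random(seed)
    crit = 1.959963985  # two-sided z critical value at alpha = .05
    se = math.sqrt(2.0 / n_per_group)  # SE of the mean difference
    rejections = 0
    for _ in range(reps):
        mean0 = sum(rng.gauss(0.0, 1.0) for _ in range(n_per_group)) / n_per_group
        mean1 = sum(rng.gauss(effect_size, 1.0) for _ in range(n_per_group)) / n_per_group
        if abs(mean1 - mean0) / se > crit:
            rejections += 1
    return rejections / reps

# A standardized effect of d = 0.5 with 64 per group has analytic power
# close to .80; the simulated estimate should land nearby.
print(simulated_power(64, 0.5))
```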
Gilpin, Andrew R. – Educational and Psychological Measurement, 2008
Rosenthal and Rubin introduced a general effect size index, r_equivalent, for use in meta-analyses of two-group experiments; it employs p values from reports of the original studies to determine an equivalent t test and the corresponding point-biserial correlation coefficient. The present investigation used Monte Carlo-simulated…
Descriptors: Effect Size, Correlation, Meta Analysis, Monte Carlo Methods
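The r_equivalent index converts a reported p-value into the equivalent t statistic and then into a point-biserial correlation, r = sqrt(t² / (t² + df)) with df = N - 2. Inverting p to t requires an inverse t distribution (e.g., scipy.stats.t.ppf); to stay dependency-free, this sketch starts from the equivalent t directly, and the N = 42 example is hypothetical.

```python
# r_equivalent: the point-biserial correlation implied by an equivalent
# t statistic from a two-group experiment with df = N - 2.
import math

def r_equivalent(t, df):
    """r = sqrt(t^2 / (t^2 + df))."""
    return math.sqrt(t * t / (t * t + df))

# Hypothetical example: N = 42 (df = 40), where a reported two-tailed p of
# about .05 corresponds to an equivalent t of roughly 2.02.
print(round(r_equivalent(2.02, 40), 3))  # 0.304
```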
