Publication Date
| Date Range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 5 |
| Since 2007 (last 20 years) | 10 |
Descriptor
| Descriptor | Records |
| --- | --- |
| Effect Size | 12 |
| Evaluation Methods | 12 |
| Monte Carlo Methods | 12 |
| Correlation | 7 |
| Sample Size | 7 |
| Statistical Analysis | 7 |
| Computation | 5 |
| Educational Research | 4 |
| Hypothesis Testing | 4 |
| Simulation | 4 |
| Error Patterns | 3 |
Source
| Source | Records |
| --- | --- |
| Educational and Psychological Measurement | 3 |
| Grantee Submission | 2 |
| Journal of Experimental Education | 2 |
| ProQuest LLC | 2 |
| International Journal of Behavioral Development | 1 |
| Journal of Research on Educational Effectiveness | 1 |
| MDRC | 1 |
Author
| Author | Records |
| --- | --- |
| Porter, Kristin E. | 3 |
| Kromrey, Jeffrey D. | 2 |
| Ahn, Soyeon | 1 |
| Bulte, Isis | 1 |
| Garrett, Phyllis | 1 |
| Hinnant, Ben | 1 |
| Itang'ata, Mukaria J. J. | 1 |
| Jager, Justin | 1 |
| Kelley, Ken | 1 |
| Manolov, Rumen | 1 |
| Onghena, Patrick | 1 |
Publication Type
| Publication Type | Records |
| --- | --- |
| Journal Articles | 8 |
| Reports - Research | 6 |
| Guides - Non-Classroom | 3 |
| Dissertations/Theses -… | 2 |
| Reports - Evaluative | 1 |
Audience
| Audience | Records |
| --- | --- |
| Researchers | 3 |
Hinnant, Ben; Schulenberg, John; Jager, Justin – International Journal of Behavioral Development, 2021
Multifinality, equifinality, and fanning are important developmental concepts that emphasize understanding interindividual variability in trajectories over time. However, each concept implies that there are points in a developmental window where interindividual variability is more limited. We illustrate the multifinality concept under…
Descriptors: Individual Differences, Simulation, Effect Size, Prediction
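The article's models are not reproduced in this snippet; as a loose illustration of "fanning" (interindividual variability that widens across a developmental window), the Python sketch below simulates random-slope growth trajectories with made-up parameters (`n_people`, `n_waves`, and the variance components are assumptions, not values from the study).

```python
# Minimal sketch (not the article's actual model): "fanning" simulated as
# random-slope growth, so between-person spread increases over time.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_waves = 200, 6
time = np.arange(n_waves)

intercepts = rng.normal(loc=10.0, scale=1.0, size=n_people)   # similar starting points
slopes = rng.normal(loc=0.5, scale=0.4, size=n_people)        # heterogeneous growth rates
noise = rng.normal(scale=0.5, size=(n_people, n_waves))

trajectories = intercepts[:, None] + slopes[:, None] * time + noise

# Interindividual variability is limited early and expands later (fanning).
print(np.round(trajectories.std(axis=0), 2))
```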
Park, Sung Eun; Ahn, Soyeon; Zopluoglu, Cengiz – Educational and Psychological Measurement, 2021
This study presents a new approach to synthesizing differential item functioning (DIF) effect size: First, using correlation matrices from each study, we perform a multigroup confirmatory factor analysis (MGCFA) that examines measurement invariance of a test item between two subgroups (i.e., focal and reference groups). Then we synthesize, across…
Descriptors: Item Analysis, Effect Size, Difficulty Level, Monte Carlo Methods
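The MGCFA step itself requires SEM software and is not reproduced here; as a rough, hypothetical illustration of the subsequent synthesis step only, the sketch below pools made-up per-study DIF effect sizes with generic fixed-effect inverse-variance weighting. This is a stand-in for "synthesizing across studies" in general, not the authors' MGCFA-based estimator.

```python
# Illustrative fixed-effect, inverse-variance pooling of per-study DIF effect
# sizes (a generic synthesis step, not the authors' MGCFA-based approach).
import numpy as np
from scipy import stats

dif_effects = np.array([0.12, 0.30, 0.05, 0.22])   # hypothetical per-study DIF effect sizes
variances   = np.array([0.02, 0.04, 0.01, 0.03])   # hypothetical sampling variances

weights = 1.0 / variances
pooled = np.sum(weights * dif_effects) / np.sum(weights)
se_pooled = np.sqrt(1.0 / np.sum(weights))
z = pooled / se_pooled
p = 2 * stats.norm.sf(abs(z))

print(f"pooled DIF = {pooled:.3f}, SE = {se_pooled:.3f}, p = {p:.3f}")
```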
Porter, Kristin E. – Journal of Research on Educational Effectiveness, 2018
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
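The article surveys several multiple testing procedures; as a minimal illustration of the general idea, the sketch below applies one common MTP (Holm's step-down adjustment) to a made-up set of p-values, such as might arise from tests across several outcomes or subgroups.

```python
# One common multiple testing procedure (Holm step-down) applied to a set of
# p-values from tests across several outcomes/subgroups; values are illustrative.
from statsmodels.stats.multitest import multipletests

p_values = [0.004, 0.030, 0.041, 0.120, 0.430]   # hypothetical per-test p-values

reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm")

for raw, adj, rej in zip(p_values, p_adjusted, reject):
    print(f"raw p = {raw:.3f}  adjusted p = {adj:.3f}  reject H0: {rej}")
```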
Porter, Kristin E. – Grantee Submission, 2017
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
Porter, Kristin E. – MDRC, 2016
In education research and in many other fields, researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
Yuan, Ke-Hai; Zhang, Zhiyong; Zhao, Yanyun – Grantee Submission, 2017
The normal-distribution-based likelihood ratio statistic T_ml = nF_ml is widely used for power analysis in structural equation modeling (SEM). In such an analysis, power and sample size are computed by assuming that T_ml follows a central chi-square distribution under H_0 and a noncentral chi-square…
Descriptors: Statistical Analysis, Evaluation Methods, Structural Equation Models, Reliability
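The power computation the abstract alludes to can be sketched directly from the two distributional assumptions it states. In the sketch below, n, df, and F0 are made-up values, and ncp = n * F0 is one common convention for the noncentrality parameter under the alternative.

```python
# Power for the SEM likelihood ratio test under the standard assumption that
# T_ml is central chi-square under H0 and noncentral chi-square under the
# alternative; n, df, and F0 are illustrative values.
from scipy.stats import chi2, ncx2

n, df, alpha = 200, 24, 0.05
F0 = 0.10                          # assumed population minimum of the ML fit function

crit = chi2.ppf(1 - alpha, df)     # critical value from the central chi-square
ncp = n * F0                       # one common convention for the noncentrality parameter
power = ncx2.sf(crit, df, ncp)     # P(reject H0 | alternative)

print(f"critical value = {crit:.2f}, power = {power:.3f}")
```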
Itang'ata, Mukaria J. J. – ProQuest LLC, 2013
Often researchers face situations where comparative studies between two or more programs are necessary to make causal inferences for informed policy decision-making. Experimental designs employing randomization provide the strongest evidence for causal inferences. However, many pragmatic and ethical challenges may preclude the use of randomized…
Descriptors: Comparative Analysis, Probability, Statistical Bias, Monte Carlo Methods
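The dissertation's simulation design is not described in this snippet; as a generic illustration of how Monte Carlo methods expose bias when assignment is not randomized, the sketch below simulates selection on a confounder with made-up parameters and compares the naive group difference to the true effect.

```python
# Generic Monte Carlo sketch: when assignment depends on a confounder, the
# naive treated-vs-control mean difference is biased for the true effect (0.5 here).
import numpy as np

rng = np.random.default_rng(1)
true_effect, n, reps = 0.5, 500, 2000
estimates = []

for _ in range(reps):
    confounder = rng.normal(size=n)
    # Nonrandom assignment: higher confounder values are more likely to be treated.
    treated = rng.random(n) < 1 / (1 + np.exp(-confounder))
    outcome = true_effect * treated + 0.8 * confounder + rng.normal(size=n)
    estimates.append(outcome[treated].mean() - outcome[~treated].mean())

print(f"mean estimate = {np.mean(estimates):.3f}  (true effect = {true_effect})")
```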
Romano, Jeanine L.; Kromrey, Jeffrey D.; Owens, Corina M.; Scott, Heather M. – Journal of Experimental Education, 2011
In this study, the authors aimed to examine 8 of the different methods that have been proposed for computing confidence intervals around coefficient alpha, to determine which of these, if any, is the most accurate and precise. Monte Carlo methods were used to simulate samples under known and controlled population conditions wherein the underlying item…
Descriptors: Intervals, Monte Carlo Methods, Rating Scales, Computation
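The eight interval methods compared in the article are not reproduced here; assuming "alpha" refers to coefficient (Cronbach's) alpha, the sketch below shows one candidate approach, a nonparametric bootstrap percentile interval, applied to simulated multi-item data (all values are illustrative).

```python
# One candidate approach for a confidence interval around coefficient alpha:
# a nonparametric bootstrap percentile interval (data simulated for illustration).
import numpy as np

rng = np.random.default_rng(2)

def cronbach_alpha(items):
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Simulated 5-item scale with a common factor.
n, k = 300, 5
factor = rng.normal(size=(n, 1))
items = factor + rng.normal(scale=1.0, size=(n, k))

boot = [cronbach_alpha(items[rng.integers(0, n, n)]) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"alpha = {cronbach_alpha(items):.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```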
Manolov, Rumen; Solanas, Antonio; Bulte, Isis; Onghena, Patrick – Journal of Experimental Education, 2010
This study deals with the statistical properties of a randomization test applied to an ABAB design in cases where the desirable random assignment of the points of change in phase is not possible. To obtain information about each possible data division, the authors carried out a conditional Monte Carlo simulation with 100,000 samples for each…
Descriptors: Monte Carlo Methods, Effect Size, Simulation, Evaluation Methods
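A simplified, hypothetical version of a randomization test over phase-change points in an ABAB design is sketched below; the data, minimum phase length, and test statistic are illustrative and much simpler than the article's conditional Monte Carlo procedure.

```python
# Simplified randomization test for an ABAB design: the test statistic
# (mean of B phases minus mean of A phases) is recomputed over all admissible
# placements of the three phase-change points; data are illustrative.
import numpy as np
from itertools import combinations

y = np.array([3, 4, 3, 5, 7, 8, 7, 8, 4, 3, 4, 3, 7, 8, 9, 8], dtype=float)
m = 3                       # assumed minimum observations per phase
observed_cuts = (4, 8, 12)  # hypothetical phase-change points actually used

def statistic(data, cuts):
    a1, b1, a2, b2 = np.split(data, list(cuts))
    return np.concatenate([b1, b2]).mean() - np.concatenate([a1, a2]).mean()

obs = statistic(y, observed_cuts)

# All admissible change-point triples respecting the minimum phase length.
candidates = [c for c in combinations(range(1, len(y)), 3)
              if c[0] >= m and c[1] - c[0] >= m and c[2] - c[1] >= m
              and len(y) - c[2] >= m]

null_stats = np.array([statistic(y, c) for c in candidates])
p_value = np.mean(null_stats >= obs)
print(f"observed = {obs:.2f}, randomization p = {p_value:.3f} over {len(candidates)} divisions")
```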
Garrett, Phyllis – ProQuest LLC, 2009
The use of polytomous items in assessments has increased over the years, and as a result, the validity of these assessments has been a concern. Differential item functioning (DIF) and missing data are two factors that may adversely affect assessment validity. Both factors have been studied separately, but DIF and missing data are likely to occur…
Descriptors: Sample Size, Monte Carlo Methods, Test Validity, Effect Size
Kelley, Ken – Educational and Psychological Measurement, 2005
The standardized group mean difference, Cohen's "d", is among the most commonly used and intuitively appealing effect sizes for group comparisons. However, reporting this point estimate alone does not convey the extent to which sampling error may have affected the obtained value. A confidence interval expresses the uncertainty that exists between…
Descriptors: Intervals, Sampling, Integrity, Effect Size
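One standard way to construct such an interval is the noncentral t pivot (the approach behind, e.g., Kelley's MBESS routines in R). The Python sketch below is an illustrative translation with made-up group sizes and an assumed observed d, not code from the article.

```python
# Confidence interval for Cohen's d via the noncentral t pivot: find the
# noncentrality parameters whose distributions place the observed t at the
# 97.5th and 2.5th percentiles, then rescale to the d metric. Inputs are illustrative.
from scipy.stats import nct
from scipy.optimize import brentq

n1, n2, d = 40, 40, 0.60                      # hypothetical group sizes and observed d
df = n1 + n2 - 2
scale = (n1 * n2 / (n1 + n2)) ** 0.5          # t = d * scale
t_obs = d * scale

def ncp_for(prob):
    # Noncentrality value nc such that P(T <= t_obs | df, nc) == prob.
    return brentq(lambda nc: nct.cdf(t_obs, df, nc) - prob, -50, 50)

lower = ncp_for(0.975) / scale
upper = ncp_for(0.025) / scale
print(f"d = {d:.2f}, 95% CI = ({lower:.3f}, {upper:.3f})")
```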
Kromrey, Jeffrey D.; Rendina-Gobioff, Gianna – Educational and Psychological Measurement, 2006
The performance of methods for detecting publication bias in meta-analysis was evaluated using Monte Carlo methods. Four methods of bias detection were investigated: Begg's rank correlation, Egger's regression, funnel plot regression, and trim and fill. Five factors were included in the simulation design: number of primary studies in each…
Descriptors: Comparative Analysis, Meta Analysis, Monte Carlo Methods, Correlation
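Of the four bias-detection methods listed, Egger's regression is the simplest to sketch: regress each study's standard normal deviate (effect divided by its standard error) on its precision and test whether the intercept departs from zero. The study data below are synthetic, not results from the article.

```python
# Egger's regression test for funnel plot asymmetry: regress each study's
# standard normal deviate (effect / SE) on its precision (1 / SE); a nonzero
# intercept suggests small-study (possible publication) bias. Data are synthetic.
import numpy as np
import statsmodels.api as sm

effects = np.array([0.45, 0.30, 0.60, 0.25, 0.55, 0.20])   # hypothetical study effects
se      = np.array([0.30, 0.15, 0.35, 0.10, 0.28, 0.08])   # hypothetical standard errors

snd = effects / se          # standard normal deviates
precision = 1.0 / se

model = sm.OLS(snd, sm.add_constant(precision)).fit()
intercept, intercept_p = model.params[0], model.pvalues[0]
print(f"Egger intercept = {intercept:.3f}, p = {intercept_p:.3f}")
```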
