Elliott, Mark; Buttery, Paula – Educational and Psychological Measurement, 2022
We investigate two non-iterative estimation procedures for Rasch models, the pair-wise estimation procedure (PAIR) and the Eigenvector method (EVM), and identify theoretical issues with EVM for rating scale model (RSM) threshold estimation. We develop a new procedure to resolve these issues--the conditional pairwise adjacent thresholds procedure…
Descriptors: Item Response Theory, Rating Scales, Computation, Simulation
Waterbury, Glenn Thomas; DeMars, Christine E. – Journal of Experimental Education, 2019
There is a need for effect sizes that are readily interpretable by a broad audience. One index that might fill this need is π, which represents the proportion of scores in one group that exceed the mean of another group. The robustness of estimates of π to violations of normality had not been explored. Using simulated data, three estimates…
Descriptors: Effect Size, Robustness (Statistics), Simulation, Research Methodology
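The π index summarized above has a simple empirical analogue. As a rough illustration only (not the estimators evaluated in the paper), one nonparametric and one normal-theory estimate can be sketched in Python:

```python
import statistics
from math import erf, sqrt

def pi_nonparametric(group1, group2):
    """Empirical pi: proportion of group1 scores exceeding the mean of group2."""
    m2 = statistics.fmean(group2)
    return sum(x > m2 for x in group1) / len(group1)

def pi_normal(group1, group2):
    """Normal-theory pi: Phi((m1 - m2) / s1), assuming group1 is normal."""
    m1, m2 = statistics.fmean(group1), statistics.fmean(group2)
    s1 = statistics.stdev(group1)
    z = (m1 - m2) / s1
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF at z
```

When the two groups have the same distribution, both estimates fall near 0.5, which is what makes π easy to explain to a general audience.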
Gorard, Stephen – International Journal of Research & Method in Education, 2015
This paper revisits the use of effect sizes in the analysis of experimental and similar results, and reminds readers of the relative advantages of the mean absolute deviation as a measure of variation, as opposed to the more complex standard deviation. The mean absolute deviation is easier to use and understand, and more tolerant of extreme…
Descriptors: Effect Size, Computation, Comparative Analysis, Simulation
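As a quick illustration of the contrast Gorard draws (not code from the paper), the mean absolute deviation is straightforward to compute and, because deviations are not squared, it is inflated less by a single extreme score than the standard deviation is:

```python
import statistics

def mean_absolute_deviation(xs):
    """Mean absolute deviation about the arithmetic mean."""
    m = statistics.fmean(xs)
    return sum(abs(x - m) for x in xs) / len(xs)

scores = [10, 12, 11, 9, 13]
with_outlier = scores + [40]  # one extreme score

# Squaring deviations makes the SD react more sharply to the outlier
print(mean_absolute_deviation(scores), statistics.pstdev(scores))
print(mean_absolute_deviation(with_outlier), statistics.pstdev(with_outlier))
```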
Westlund, Erik; Stuart, Elizabeth A. – American Journal of Evaluation, 2017
This article discusses the nonuse, misuse, and proper use of pilot studies in experimental evaluation research. The authors first show that there is little theoretical, practical, or empirical guidance available to researchers who seek to incorporate pilot studies into experimental evaluation research designs. The authors then discuss how pilot…
Descriptors: Use Studies, Pilot Projects, Evaluation Research, Experiments
Tipton, Elizabeth; Pustejovsky, James E. – Journal of Educational and Behavioral Statistics, 2015
Meta-analyses often include studies that report multiple effect sizes based on a common pool of subjects or that report effect sizes from several samples that were treated with very similar research protocols. The inclusion of such studies introduces dependence among the effect size estimates. When the number of studies is large, robust variance…
Descriptors: Meta Analysis, Effect Size, Computation, Robustness (Statistics)
Beasley, T. Mark – Journal of Experimental Education, 2014
Increasing the correlation between the independent variable and the mediator ("a" coefficient) increases the effect size ("ab") for mediation analysis; however, increasing a by definition increases collinearity in mediation models. As a result, the standard error of product tests increase. The variance inflation caused by…
Descriptors: Statistical Analysis, Effect Size, Nonparametric Statistics, Statistical Inference
Ugille, Maaike; Moeyaert, Mariola; Beretvas, S. Natasha; Ferron, John M.; Van den Noortgate, Wim – Journal of Experimental Education, 2014
A multilevel meta-analysis can combine the results of several single-subject experimental design studies. However, the estimated effects are biased if the effect sizes are standardized and the number of measurement occasions is small. In this study, the authors investigated 4 approaches to correct for this bias. First, the standardized effect…
Descriptors: Effect Size, Statistical Bias, Sample Size, Regression (Statistics)
Pantelis, Peter C.; Kennedy, Daniel P. – Autism: The International Journal of Research and Practice, 2016
Two-phase designs in epidemiological studies of autism prevalence introduce methodological complications that can severely limit the precision of resulting estimates. If the assumptions used to derive the prevalence estimate are invalid or if the uncertainty surrounding these assumptions is not properly accounted for in the statistical inference…
Descriptors: Foreign Countries, Pervasive Developmental Disorders, Autism, Incidence
Shieh, Gwowen; Jan, Show-Li – Journal of Experimental Education, 2013
The authors examined 2 approaches for determining the required sample size of Welch's test for detecting equality of means when the greatest difference between any 2 group means is given. It is shown that the actual power obtained with the sample size of the suggested approach is consistently at least as great as the nominal power. However, the…
Descriptors: Sampling, Statistical Analysis, Computation, Research Methodology
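For reference, the test whose sample-size planning the paper addresses is Welch's unequal-variances t test. Its statistic and Satterthwaite degrees of freedom follow the standard textbook formulas (this sketch is generic, not the authors' procedure):

```python
import statistics

def welch(x, y):
    """Welch's t statistic and Satterthwaite df for two independent groups."""
    nx, ny = len(x), len(y)
    vx, vy = statistics.variance(x), statistics.variance(y)
    se2 = vx / nx + vy / ny                      # squared standard error
    t = (statistics.fmean(x) - statistics.fmean(y)) / se2 ** 0.5
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df
```

Because the df depend on the sample variances, the required sample size for a target power cannot be read off a fixed-df table, which is what motivates the approximations compared in the paper.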
Pustejovsky, James E.; Hedges, Larry V.; Shadish, William R. – Journal of Educational and Behavioral Statistics, 2014
In single-case research, the multiple baseline design is a widely used approach for evaluating the effects of interventions on individuals. Multiple baseline designs involve repeated measurement of outcomes over time and the controlled introduction of a treatment at different times for different individuals. This article outlines a general…
Descriptors: Hierarchical Linear Modeling, Effect Size, Maximum Likelihood Statistics, Computation
Dai, Yunyun – Applied Psychological Measurement, 2013
Mixtures of item response theory (IRT) models have been proposed as a technique to explore response patterns in test data related to cognitive strategies, instructional sensitivity, and differential item functioning (DIF). Estimation proves challenging due to difficulties in identification and questions of effect size needed to recover underlying…
Descriptors: Item Response Theory, Test Bias, Computation, Bayesian Statistics
Luo, Long – ProQuest LLC, 2012
The topic of this research is inference for effect size of a curriculum intervention, which is an important research topic in education. The linear relationship between the outcomes of an intervention and teachers' fidelity, the extent to which the intervention was actually delivered by teachers as intended, is an important component for…
Descriptors: Effect Size, Intervention, Curriculum, Curriculum Implementation
Walker, Cindy M.; Zhang, Bo; Banks, Kathleen; Cappaert, Kevin – Educational and Psychological Measurement, 2012
The purpose of this simulation study was to establish general effect size guidelines for interpreting the results of differential bundle functioning (DBF) analyses using simultaneous item bias test (SIBTEST). Three factors were manipulated: number of items in a bundle, test length, and magnitude of uniform differential item functioning (DIF)…
Descriptors: Test Bias, Test Length, Simulation, Guidelines
Oah, Shezeen; Lee, Jang-Han – Journal of Organizational Behavior Management, 2011
The failures of previous studies to demonstrate productivity differences across different percentages of incentive pay may be partially due to insufficient simulation fidelity. The present study compared the effects of different percentages of incentive pay using a more advanced simulation method. Three payment methods were tested: hourly,…
Descriptors: Wages, Incentives, Productivity, Reinforcement
Romano, Jeanine L.; Kromrey, Jeffrey D.; Owens, Corina M.; Scott, Heather M. – Journal of Experimental Education, 2011
In this study, the authors aimed to examine 8 of the different methods for computing confidence intervals around alpha that have been proposed to determine which of these, if any, is the most accurate and precise. Monte Carlo methods were used to simulate samples under known and controlled population conditions wherein the underlying item…
Descriptors: Intervals, Monte Carlo Methods, Rating Scales, Computation
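The coefficient whose interval estimators are compared above is Cronbach's alpha. A minimal point estimate (not any of the eight confidence-interval methods studied) can be computed from a persons-by-items score matrix:

```python
import statistics

def cronbach_alpha(rows):
    """Cronbach's alpha from rows of per-person scores on k items."""
    k = len(rows[0])
    items = list(zip(*rows))  # transpose to item-wise score columns
    item_vars = sum(statistics.variance(col) for col in items)
    total_var = statistics.variance([sum(row) for row in rows])
    return k / (k - 1) * (1 - item_vars / total_var)
```

Perfectly parallel items yield alpha = 1; the Monte Carlo work in the paper examines how interval methods behave when the underlying item distributions depart from such ideal conditions.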