Publication Date
| Date Range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 2 |
| Since 2017 (last 10 years) | 2 |
| Since 2007 (last 20 years) | 4 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Effect Size | 7 |
| Evaluation Methods | 7 |
| Test Reliability | 7 |
| Accuracy | 2 |
| Educational Research | 2 |
| Meta Analysis | 2 |
| Outcomes of Treatment | 2 |
| Sample Size | 2 |
| Simulation | 2 |
| Statistical Significance | 2 |
| Test Validity | 2 |
Source
| Source | Count |
| --- | --- |
| Research Synthesis Methods | 2 |
| American Psychologist | 1 |
| Educational and Psychological… | 1 |
| Exceptional Children | 1 |
| Journal of Consulting and… | 1 |
Publication Type
| Publication Type | Count |
| --- | --- |
| Journal Articles | 6 |
| Reports - Descriptive | 3 |
| Information Analyses | 2 |
| Reports - Research | 2 |
| ERIC Digests in Full Text | 1 |
| ERIC Publications | 1 |
| Reports - Evaluative | 1 |
Education Level
| Education Level | Count |
| --- | --- |
| Early Childhood Education | 1 |
| Preschool Education | 1 |
Location
| Location | Count |
| --- | --- |
| Taiwan | 1 |
Guido Schwarzer; Gerta Rücker; Cristina Semaca – Research Synthesis Methods, 2024
The "LFK" index has been promoted as an improved method to detect bias in meta-analysis. Putatively, its performance does not depend on the number of studies in the meta-analysis. We conducted a simulation study, comparing the "LFK" index test to three standard tests for funnel plot asymmetry in settings with smaller or larger…
Descriptors: Bias, Meta Analysis, Simulation, Evaluation Methods
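The "standard tests for funnel plot asymmetry" this abstract refers to include Egger's regression test, whose intercept signals small-study bias. A minimal sketch of that idea, not the authors' simulation code; the effect sizes and standard errors below are invented for illustration:

```python
def egger_intercept(effects, ses):
    """Regress standardized effect (effect/SE) on precision (1/SE).
    An intercept far from zero suggests funnel plot asymmetry,
    i.e. possible small-study (publication) bias."""
    y = [e / s for e, s in zip(effects, ses)]
    x = [1.0 / s for s in ses]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return my - slope * mx  # the regression intercept

# Hypothetical meta-analysis: smaller studies (larger SEs) report
# larger effects, the classic asymmetry pattern.
bias = egger_intercept([0.42, 0.35, 0.30, 0.22, 0.10],
                       [0.30, 0.22, 0.15, 0.10, 0.05])
```

When every study estimates the same underlying effect, the intercept is zero by construction, which is the symmetric (no-bias) reference case.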
Caspar J. Van Lissa; Eli-Boaz Clapper; Rebecca Kuiper – Research Synthesis Methods, 2024
The product Bayes factor (PBF) synthesizes evidence for an informative hypothesis across heterogeneous replication studies. It can be used when fixed-effect or random-effects meta-analysis falls short, for example when effect sizes are incomparable and cannot be pooled, or when studies diverge substantially in populations, study designs, and…
Descriptors: Hypothesis Testing, Evaluation Methods, Replication (Evaluation), Sample Size
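As background for the product idea in this abstract: evidence from independent studies can be combined by multiplying their per-study Bayes factors, summed in log space for numerical stability. A minimal sketch, not the authors' PBF implementation; the per-study values are invented:

```python
import math

def product_bayes_factor(study_bfs):
    """Combine per-study Bayes factors for the same informative
    hypothesis by multiplying them (computed as a sum of logs)."""
    return math.exp(sum(math.log(bf) for bf in study_bfs))

# Three hypothetical replication studies, each mildly favouring
# the informative hypothesis over its complement:
combined = product_bayes_factor([2.5, 1.8, 3.0])
```

Multiplication is what makes heterogeneous studies combinable here: each study only needs to yield a Bayes factor for the shared hypothesis, not a poolable effect size.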
Goldstein, Howard; Lackey, Kimberly C.; Schneider, Naomi J. B. – Exceptional Children, 2014
This review presents a novel framework for evaluating evidence based on a set of parallel criteria that can be applied to both group and single-subject experimental design (SSED) studies. The authors illustrate use of this evaluation system in a systematic review of 67 articles investigating social skills interventions for preschoolers with autism…
Descriptors: Preschool Education, Preschool Children, Intervention, Autism
Erceg-Hurn, David M.; Mirosevich, Vikki M. – American Psychologist, 2008
Classic parametric statistical significance tests, such as analysis of variance and least squares regression, are widely used by researchers in many disciplines, including psychology. For classic parametric tests to produce accurate results, the assumptions underlying them (e.g., normality and homoscedasticity) must be satisfied. These assumptions…
Descriptors: Statistical Significance, Least Squares Statistics, Effect Size, Statistical Studies
Atkins, David C.; Bedics, Jamie D.; McGlinchey, Joseph B.; Beauchaine, Theodore P. – Journal of Consulting and Clinical Psychology, 2005
Measures of clinical significance are frequently used to evaluate client change during therapy. Several alternatives to the original method devised by N. S. Jacobson, W. C. Follette, & D. Revenstorf (1984) have been proposed, each purporting to increase accuracy. However, researchers have had little systematic guidance in choosing among…
Descriptors: Psychotherapy, Statistical Significance, Outcomes of Treatment, Behavior Change
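A widely used measure in the Jacobson-Follette-Revenstorf tradition is the reliable change index (RCI): the pre-post change divided by the standard error of the difference between two measurements. A hedged sketch with invented numbers, not the authors' comparison code:

```python
import math

def reliable_change_index(pre, post, sd, reliability):
    """RCI in the Jacobson-Truax style: change divided by the
    standard error of the difference score."""
    sem = sd * math.sqrt(1 - reliability)   # standard error of measurement
    s_diff = math.sqrt(2) * sem             # SE of the difference score
    return (post - pre) / s_diff

# Hypothetical client: symptom score drops from 40 to 25 on a scale
# with sd = 10 and test-retest reliability = 0.84.
rci = reliable_change_index(40, 25, sd=10, reliability=0.84)
# |rci| > 1.96 is conventionally read as reliable change
```

The choices of `sd` and `reliability` (normative vs. sample-based) are exactly the kind of decision the alternatives reviewed in this article differ on.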
Wang, Wen-Chung; Chen, Hsueh-Chu – Educational and Psychological Measurement, 2004
As item response theory (IRT) becomes popular in educational and psychological testing, there is a need for reporting IRT-based effect size measures. In this study, we show how the standardized mean difference can be generalized into such a measure. A disattenuation procedure based on the IRT test reliability is proposed to correct the attenuation…
Descriptors: Test Reliability, Rating Scales, Sample Size, Error of Measurement
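The attenuation correction described here has the same form as the classical Spearman disattenuation: an observed standardized mean difference shrinks by a factor of the square root of the reliability, so dividing by that factor recovers the corrected value. A hedged sketch of that classical form, not the authors' IRT-specific procedure:

```python
import math

def disattenuated_d(d_observed, reliability):
    """Correct an observed standardized mean difference for
    attenuation due to measurement error: d / sqrt(reliability)."""
    return d_observed / math.sqrt(reliability)

# An observed d of 0.50 on a test with reliability 0.64
disattenuated_d(0.50, 0.64)   # ≈ 0.625
```

The lower the reliability, the larger the correction, which is why reporting the reliability alongside the effect size matters.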
Rudner, Lawrence M. – 1996
In educational research and evaluation, a sample of subjects usually receives some type of programmatic treatment. Outcome scores for these students are then compared with outcome scores of a control or comparison group. M. Lewis and H. McGurk (1972) have pointed out that there are some implicit assumptions when this approach is applied to…
Descriptors: Child Development, Cognitive Development, Early Childhood Education, Educational Research
