Showing all 8 results
Peer reviewed
Oleson, Jacob J.; Brown, Grant D.; McCreery, Ryan – Journal of Speech, Language, and Hearing Research, 2019
Purpose: Clinicians depend on the accuracy of research in the speech, language, and hearing sciences to improve assessment and treatment of patients with communication disorders. Although this work has contributed to great advances in clinical care, common statistical misconceptions remain, which deserve closer inspection in the field. Challenges…
Descriptors: Statistics, Speech Language Pathology, Research, Statistical Analysis
Peer reviewed
Hoekstra, Rink; Johnson, Addie; Kiers, Henk A. L. – Educational and Psychological Measurement, 2012
The use of confidence intervals (CIs) as an addition or as an alternative to null hypothesis significance testing (NHST) has been promoted as a means to make researchers more aware of the uncertainty that is inherent in statistical inference. Little is known, however, about whether presenting results via CIs affects how readers judge the…
Descriptors: Computation, Statistical Analysis, Hypothesis Testing, Statistical Significance
Peer reviewed
Fidalgo, Angel M.; Scalon, Joao D. – Journal of Psychoeducational Assessment, 2010
In spite of the growing interest in cross-cultural research and assessment, there is little research on statistical procedures that can be used to simultaneously assess the differential item functioning (DIF) across multiple groups. The chief objective of this work is to show a unified framework for the analysis of DIF in multiple groups using one…
Descriptors: Test Bias, Statistics, Evaluation, Item Response Theory
Peer reviewed
Serlin, Ronald C. – Psychological Methods, 2010
The sense that replicability is an important aspect of empirical science led Killeen (2005a) to define "p[subscript rep]," the probability that a replication will result in an outcome in the same direction as that found in a current experiment. Since then, several authors have praised and criticized "p[subscript rep]," culminating…
Descriptors: Epistemology, Effect Size, Replication (Evaluation), Measurement Techniques
Peer reviewed
Strang, Kenneth David – Practical Assessment, Research & Evaluation, 2009
This paper discusses how a seldom-used statistical procedure, recursive regression (RR), can numerically and graphically illustrate data-driven nonlinear relationships and interactions of variables. This routine falls into the family of exploratory techniques, yet a few interesting features make it a valuable complement to factor analysis and…
Descriptors: Multicultural Education, Computer Software, Multiple Regression Analysis, Multidimensional Scaling
Peer reviewed
Woolley, Thomas W.; Dawson, George O. – Journal of Research in Science Teaching, 1983
Examines what power-related changes occurred in science education research over the past decade as a result of an earlier survey. Previous recommendations are expanded and expounded upon within the context of more recent work in the area. Proposes guidelines for reporting the minimal amount of information needed for clear, independent evaluation of research…
Descriptors: Data Analysis, Effect Size, Guidelines, Power (Statistics)
Peer reviewed
Posavac, E. J. – Evaluation and Program Planning, 1998
Misuses of null hypothesis significance testing are reviewed and alternative approaches are suggested for carrying out and reporting statistical tests that might be useful to program evaluators. Several themes, including the importance of respecting the magnitude of Type II errors and describing effect sizes in units stakeholders can understand,…
Descriptors: Effect Size, Evaluation Methods, Hypothesis Testing, Program Evaluation
Peer reviewed
Ottenbacher, Kenneth J. – Exceptional Children, 1989
This study examined the statistical conclusion validity of 49 early intervention studies, finding that 4 percent had adequate power to detect medium intervention effects and 18 percent had adequate power to detect large intervention effects. Low statistical conclusion validity has practical consequences for program evaluation and cost-effectiveness determinations.…
Descriptors: Cost Effectiveness, Disabilities, Effect Size, Intervention