Publication Date
| Date Range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 3 |
| Since 2022 (last 5 years) | 43 |
| Since 2017 (last 10 years) | 606 |
| Since 2007 (last 20 years) | 3463 |
Author
| Author | Count |
| --- | --- |
| Thompson, Bruce | 43 |
| Slate, John R. | 13 |
| Onwuegbuzie, Anthony J. | 12 |
| Goldhaber, Dan | 11 |
| Levin, Joel R. | 11 |
| Hedges, Larry V. | 9 |
| Newman, Isadore | 9 |
| Games, Paul A. | 8 |
| Aiken, Lewis R. | 7 |
| Daniel, Larry G. | 7 |
| Levy, Kenneth J. | 7 |
Audience
| Audience | Count |
| --- | --- |
| Researchers | 88 |
| Teachers | 22 |
| Practitioners | 13 |
| Policymakers | 10 |
| Administrators | 6 |
| Counselors | 2 |
| Media Staff | 2 |
| Parents | 2 |
| Students | 1 |
Location
| Location | Count |
| --- | --- |
| Turkey | 162 |
| Texas | 157 |
| Jordan | 85 |
| California | 80 |
| United States | 75 |
| Australia | 61 |
| Florida | 59 |
| Saudi Arabia | 52 |
| Tennessee | 48 |
| North Carolina | 45 |
| Canada | 44 |
What Works Clearinghouse Rating
| Rating | Count |
| --- | --- |
| Meets WWC Standards without Reservations | 14 |
| Meets WWC Standards with or without Reservations | 23 |
| Does not meet standards | 25 |
Peer reviewed | Leitner, Dennis W. – Multiple Linear Regression Viewpoints, 1979
This paper relates common statistics from contingency table analysis to the more familiar R squared terminology in order to better understand the strength of the relation implied. The method of coding contingency tables is shown, as well as how R squared relates to phi, V, and chi square. (Author/CTM)
Descriptors: Correlation, Expectancy Tables, Hypothesis Testing, Multiple Regression Analysis
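For a 2×2 table, the relation Leitner describes can be verified directly: code the two rows and two columns as 0/1 dummy variables, and the Pearson correlation of those codes equals phi, so R squared from the one-predictor regression equals chi square divided by N. A minimal sketch with made-up cell counts (the counts are illustrative, not from the paper):

```python
import math

# Hypothetical 2x2 table of counts (rows x columns); any counts work.
a, b = 10, 20   # row 0
c, d = 30, 40   # row 1
n = a + b + c + d

# Phi and chi square from the standard closed forms for a 2x2 table.
denom = (a + b) * (c + d) * (a + c) * (b + d)
phi = (a * d - b * c) / math.sqrt(denom)
chi2 = n * phi ** 2

# Pearson r of the 0/1-coded row and column variables.
xs, ys = [], []
for x, y, count in [(0, 0, a), (0, 1, b), (1, 0, c), (1, 1, d)]:
    xs += [x] * count
    ys += [y] * count
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
r = cov / (sx * sy)

# r equals phi, so r**2 (R squared) equals chi2 / n.
print(round(r, 6), round(phi, 6))
```

The same identity underlies Cramer's V, which reduces to |phi| in the 2×2 case.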
Peer reviewed | Pack, Elbert; Stander, Aaron – NASSP Bulletin, 1981
Describes how to measure whether students are making significant gains in reading. (JM)
Descriptors: Academic Achievement, Measurement Techniques, Program Evaluation, Reading Programs
Peer reviewed | Levy, Kenneth J. – Journal of Experimental Education, 1978
Monte Carlo techniques were employed to compare the familiar F-test with Welch's V-test procedure for testing hypotheses concerning a priori contrasts among K treatments. The two procedures were compared under homogeneous and heterogeneous variance conditions. (Author)
Descriptors: Analysis of Variance, Comparative Analysis, Hypothesis Testing, Monte Carlo Methods
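The kind of comparison Levy ran can be sketched with a small Monte Carlo of the two-group case: when the smaller group has the larger variance, the pooled-variance statistic rejects a true null far more often than the nominal rate, while the Welch-style statistic (separate variance estimates) stays close to it. The sample sizes, variances, and fixed critical value below are illustrative choices, not Levy's design:

```python
import math
import random

random.seed(1)

# H0 is true (equal means), but the smaller group has the larger
# variance -- the classic case where the pooled test becomes liberal.
n1, sd1 = 10, 4.0
n2, sd2 = 40, 1.0
reps = 4000
crit = 2.0  # rough two-sided 5% critical value (normal approximation)

pooled_rej = welch_rej = 0
for _ in range(reps):
    x = [random.gauss(0, sd1) for _ in range(n1)]
    y = [random.gauss(0, sd2) for _ in range(n2)]
    m1, m2 = sum(x) / n1, sum(y) / n2
    v1 = sum((v - m1) ** 2 for v in x) / (n1 - 1)
    v2 = sum((v - m2) ** 2 for v in y) / (n2 - 1)

    # Pooled-variance (classical) t statistic.
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    t_pooled = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

    # Welch statistic: separate variance estimates, no pooling.
    t_welch = (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)

    pooled_rej += abs(t_pooled) > crit
    welch_rej += abs(t_welch) > crit

pooled_rate = pooled_rej / reps
welch_rate = welch_rej / reps
print(pooled_rate, welch_rate)  # pooled runs well above the nominal 0.05
```

With equal variances the two statistics behave almost identically; the divergence appears only under heterogeneity, which is the point of the paper's comparison.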
Peer reviewed | Levy, Kenneth J. – Journal of Experimental Education, 1979
Dunnett's procedure for comparing K-1 treatments with a control is discussed within the context of three nonparametric models: those of Kruskal-Wallis, Friedman, and Cochran. (Author/MH)
Descriptors: Analysis of Variance, Comparative Analysis, Mathematical Models, Nonparametric Statistics
Peer reviewed | Betz, M. Austin; Gabriel, K. Ruben – Journal of Educational Statistics, 1978
This paper is concerned with testing hypotheses about main effects, simple effects, and interaction effects by means of analysis of variance. It presents alternative strategies for analyzing data sets for which a factorial model with two completely crossed, fixed factors is appropriate. (CTM)
Descriptors: Analysis of Variance, Aptitude Treatment Interaction, Hypothesis Testing, Research Problems
Peer reviewed | Thompson, Bruce – Educational Researcher, 1997
Argues that describing results as "significant" rather than "statistically significant" is confusing to the very people most apt to misinterpret this telegraphic wording. The importance of reporting the effect size and the value of both internal and external replicability analyses are stressed. (SLD)
Descriptors: Editing, Educational Research, Effect Size, Scholarly Journals
Peer reviewed | Raju, Nambury S. – Applied Psychological Measurement, 1990
The asymptotic sampling distributions (means and variances) are presented for the signed and unsigned estimates for the Rasch model, two-parameter model, and the three-parameter model with fixed lower asymptotes. Applications for item-bias research are discussed. (SLD)
Descriptors: Equations (Mathematics), Estimation (Mathematics), Item Bias, Item Response Theory
Peer reviewed | Umesh, U. N.; Mishra, Sanjay – Psychometrika, 1990
Major issues related to index-of-fit conjoint analysis were addressed in this simulation study. Goals were to develop goodness-of-fit criteria for conjoint analysis; develop tests to determine the significance of conjoint analysis results; and calculate the power of the test of the null hypothesis of random data distribution. (SLD)
Descriptors: Computer Simulation, Goodness of Fit, Monte Carlo Methods, Power (Statistics)
Atkinson, Leslie – American Journal on Mental Retardation, 1990
The article provides a set of tables with the differences necessary for statistical significance between the Vineland Adaptive Behavior Scales and Bayley Scales of Infant Development, McCarthy Scales of Children's Abilities, Stanford-Binet Intelligence Scale, and Wechsler scales. The tables are intended to supplement clinical decisions in…
Descriptors: Adaptive Behavior (of Disabled), Evaluation Methods, Intelligence Tests, Mental Retardation
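Tables like Atkinson's are typically built from the standard errors of measurement of the two instruments: the smallest difference reaching significance is z times the square root of the sum of the squared SEMs, with SEM = SD × sqrt(1 − reliability). A sketch of that computation, using illustrative SDs and reliabilities rather than the actual values for the scales named above:

```python
import math

def critical_difference(sd1, rel1, sd2, rel2, z=1.96):
    """Smallest score difference between two tests that reaches
    statistical significance at the level implied by z, computed
    from the standard errors of measurement:
    SEM_i = SD_i * sqrt(1 - reliability_i)."""
    sem1 = sd1 * math.sqrt(1 - rel1)
    sem2 = sd2 * math.sqrt(1 - rel2)
    return z * math.sqrt(sem1 ** 2 + sem2 ** 2)

# Illustrative values only: two scales with SD = 15 and
# reliabilities of .90 and .85.
diff = critical_difference(15, 0.90, 15, 0.85)
print(round(diff, 1))  # -> 14.7
```

Less reliable scales have larger SEMs, so the required difference grows; that is why such tables vary by scale pairing and age band.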
Peer reviewed | Hsu, Louis M. – Journal of Counseling Psychology, 1989
Discusses three topics related to interpretation of discriminant analyses (DA's): (1) partial F ratios and partial Wilks's lambdas for predictor variables in standard, step-down, and stepwise DA's; (2) relation of goals of classification to definition/evaluation of classification rules; and (3) significance tests for total hit rates in internal…
Descriptors: Data Interpretation, Discriminant Analysis, Multivariate Analysis, Predictor Variables
Peer reviewed | Schmidt, Frank; Hunter, John E. – Evaluation and the Health Professions, 1995
It is argued that point estimates of effect sizes and confidence intervals around these point estimates are more appropriate statistics for individual studies than reliance on statistical significance testing and that meta-analysis is appropriate for analysis of data from multiple studies. (SLD)
Descriptors: Effect Size, Estimation (Mathematics), Knowledge Level, Meta Analysis
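The kind of reporting Schmidt and Hunter advocate, a point estimate of effect size bracketed by a confidence interval, can be sketched for the two-group case with Cohen's d and a common large-sample approximation to its standard error (the summary statistics below are hypothetical):

```python
import math

def cohens_d_ci(mean1, mean2, sd1, sd2, n1, n2, z=1.96):
    """Cohen's d from two-group summary statistics (pooled SD),
    with an approximate large-sample 95% confidence interval."""
    sp = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                   / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp
    # Widely used large-sample approximation to SE(d).
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d, d - z * se, d + z * se

# Hypothetical study: treatment mean 105, control mean 100,
# both SDs 15, 50 subjects per group.
d, lo, hi = cohens_d_ci(105.0, 100.0, 15.0, 15.0, 50, 50)
print(round(d, 2), round(lo, 2), round(hi, 2))
```

An interval that includes zero conveys the same information as a nonsignificant test, plus the magnitude and its precision, which is exactly what a later meta-analysis needs.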
Peer reviewed | Wilcox, Rand R. – Multivariate Behavioral Research, 1995
Five methods for testing the hypothesis of independence between two sets of variates were compared through simulation. Results indicate that two new methods, based on robust measures reflecting the linear association between two random variables, provide reasonably accurate control over Type I errors. Drawbacks to rank-based methods are discussed…
Descriptors: Analysis of Covariance, Comparative Analysis, Hypothesis Testing, Robustness (Statistics)
Peer reviewed | Ottenbacher, Kenneth J. – Journal of Early Intervention, 1992
Measures of effect size were computed for 237 statistical tests from 59 early intervention studies. Data revealed that the average treatment effect across studies was medium in size. Interpretation of measures of magnitude strength is discussed in relation to statistical significance testing. Reporting of measures of effect size along with…
Descriptors: Disabilities, Early Childhood Education, Early Intervention, Effect Size
Peer reviewed | Zimmerman, Donald W.; And Others – Journal of Experimental Education, 1992
D. W. Zimmerman argues that the interpretation by J. D. Gibbons and S. Chakraborti of recent simulation results and their recommendations are misleading and suggests use of an alternate test when homogeneity of variance and normality are violated. Gibbons and Chakraborti review their differences with Zimmerman's position. (SLD)
Descriptors: Computer Simulation, Research Methodology, Research Reports, Sample Size
Peer reviewed | Carver, Ronald P. – Journal of Experimental Education, 1993
Four practices are recommended to minimize the influence or importance of statistical significance testing. Researchers should not neglect to add "statistical" before "significant" and could interpret results before examining p values. Effect sizes should be reported with measures of sampling error, and replication can be built into the design. (SLD)
Descriptors: Educational Researchers, Effect Size, Error of Measurement, Research Methodology


