Showing 3,601 to 3,615 of 4,508 results
Peer reviewed
Renner, Barbara Rochen; Ball, Donald W. – Educational and Psychological Measurement, 1983
To determine the effect of violating the assumption of homogeneity of covariance for the Tukey Wholly Significant Difference (WSD) test, Monte Carlo simulations varied the number of treatment groups, sample size, and degree of covariance heterogeneity. As covariance heterogeneity was increased, the empirical significance levels increased beyond…
Descriptors: Data Analysis, Hypothesis Testing, Monte Carlo Methods, Research Methodology
Peer reviewed
Haase, Richard F. – Educational and Psychological Measurement, 1983
This paper reviews the distinctions between classical and partial eta square and derives a formula for use in those complex analysis of variance designs in which the investigator desires a measure of classical eta square and has access only to the F-tests and relevant degrees of freedom. (Author/BW)
Descriptors: Analysis of Variance, Hypothesis Testing, Mathematical Formulas, Research Design
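A minimal sketch of the kind of conversion this abstract describes, assuming a design in which every effect shares a single error term and the listed effects plus error exhaust the total sum of squares; the helper names and example F values below are illustrative, not Haase's.

```python
# Sketch: recover partial and classical eta squared from F statistics and
# degrees of freedom alone. Because SS_effect = F * df_effect * MS_error for
# every effect sharing one error term, MS_error cancels out of both ratios.
# (Illustrative helper names and numbers; not taken from Haase, 1983.)

def partial_eta_squared(F, df_effect, df_error):
    """Partial eta^2 = SS_effect / (SS_effect + SS_error)."""
    return (F * df_effect) / (F * df_effect + df_error)

def classical_eta_squared(F_target, df_target, all_effects, df_error):
    """Classical eta^2 = SS_effect / SS_total.

    all_effects: list of (F, df_effect) pairs for every effect in the design.
    """
    total = sum(F * df for F, df in all_effects) + df_error
    return (F_target * df_target) / total

# Hypothetical two-way design: A, B, and A x B with df_error = 60.
effects = [(8.0, 2), (3.5, 1), (1.2, 2)]          # (F, df) for A, B, AxB
print(partial_eta_squared(8.0, 2, 60))            # partial eta^2 for A
print(classical_eta_squared(8.0, 2, effects, 60)) # classical eta^2 for A
```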
Peer reviewed
Ross, Donald C. – Educational and Psychological Measurement, 1983
Theta is a statistic which measures the degree to which a designated pattern successfully partitions a matrix of pre- and post-treatment ratings into regions typical of each of two treatments. In this paper, theta is extended to multivariate and multigroup cases. (Author/BW)
Descriptors: Hypothesis Testing, Matrices, Multivariate Analysis, Research Methodology
Peer reviewed
Fagley, N. S.; McKinney, I. Jean – Journal of Counseling Psychology, 1983
Reexamines the article by Atkinson, Furlong, and Wampold (1982) and questions their conclusion that reviewers were biased toward statistically significant results. A statistical power analysis shows the power of their bogus study was low. Low power in a study reporting nonsignificant findings is a valid reason for recommending not to publish.…
Descriptors: Bias, Evaluation Criteria, Opinions, Publications
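A sketch of the kind of power computation such a reanalysis rests on, using the noncentral t distribution for a two-sided, two-sample t test; the effect size and sample sizes below are placeholders, not the values from Fagley and McKinney's analysis.

```python
# Sketch: power of a two-sided, two-sample t test via the noncentral t
# distribution. Placeholder inputs; not the figures from the reanalysis.
import numpy as np
from scipy import stats

def two_sample_t_power(d, n1, n2, alpha=0.05):
    """Power to detect standardized mean difference d with group sizes n1, n2."""
    df = n1 + n2 - 2
    ncp = d * np.sqrt(n1 * n2 / (n1 + n2))      # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    # Power = mass of the noncentral t falling outside +/- t_crit.
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

print(two_sample_t_power(d=0.3, n1=20, n2=20))  # small effect, small n -> low power
```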
Peer reviewed
Keselman, H. J. – Multivariate Behavioral Research, 1982
Multiple comparison procedures for repeated measures means that employ a pooled estimate of error variance must conform to the sphericity assumption of the design in order to provide valid tests; this requirement is discussed. An alternative approach which does not require this assumption is presented. (Author/JKS)
Descriptors: Analysis of Variance, Data Analysis, Hypothesis Testing, Research Design
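A minimal sketch of the kind of alternative the abstract points toward, under the common reading that each pairwise comparison of repeated measures means uses its own paired error term rather than the pooled mean square error, so no sphericity assumption is needed; the Bonferroni adjustment is illustrative and not necessarily Keselman's procedure.

```python
# Sketch: pairwise comparisons of repeated measures means, each with its own
# (paired) error term, so sphericity is not assumed; a Bonferroni-adjusted
# alpha controls the familywise error rate. Illustrative only.
import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(size=(30, 4))          # 30 subjects x 4 repeated conditions

k = data.shape[1]
pairs = list(itertools.combinations(range(k), 2))
alpha_per_test = 0.05 / len(pairs)       # Bonferroni over all pairwise tests

for i, j in pairs:
    diff = data[:, i] - data[:, j]       # each contrast gets its own error
    t, p = stats.ttest_1samp(diff, 0.0)
    print(f"cond {i} vs {j}: t = {t:.2f}, p = {p:.3f}, "
          f"reject = {p < alpha_per_test}")
```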
Peer reviewed
Yu, Mimi C.; Dunn, Olive Jean – Educational and Psychological Measurement, 1982
Eight asymptotically robust tests of population correlation coefficient equality are proposed and are studied along with two parametric tests. Monte Carlo simulation is used to compare the small sample performance of these ten procedures. The sampled distributions consist of the normal distribution, two mixed normal distributions and four…
Descriptors: Correlation, Mathematical Formulas, Statistical Distributions, Statistical Significance
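One of the standard parametric baselines such Monte Carlo comparisons typically include is the Fisher z test for equality of two independent correlations; the sketch below shows only that baseline, with placeholder inputs, and does not reproduce the robust alternatives studied in the article.

```python
# Sketch: classical Fisher z test that two independent samples share the same
# population correlation -- a typical parametric baseline. Placeholder inputs.
import numpy as np
from scipy import stats

def fisher_z_two_correlations(r1, n1, r2, n2):
    """Two-sided test of H0: rho1 = rho2 for independent samples."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)           # Fisher z transforms
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    p = 2 * stats.norm.sf(abs(z))
    return z, p

print(fisher_z_two_correlations(r1=0.55, n1=40, r2=0.30, n2=50))
```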
Peer reviewed
Rae, Gordon – Educational and Psychological Measurement, 1982
Analyses of artificial data involving repeated, related binary measures applied to different samples suggest that Tideman's generalized chi-square statistic and conventional repeated-measures analysis of variance do not produce conflicting outcomes. Provided the appropriate assumptions are met, analysis of variance may provide a more versatile approach.…
Descriptors: Analysis of Variance, Hypothesis Testing, Research Design, Statistical Analysis
Peer reviewed
Ronis, David L. – Educational and Psychological Measurement, 1981
Many researchers draw the conclusion that one independent variable has more impact than another without testing the null hypothesis that their impact is equal. This paper presents and recommends a technique for testing the relative magnitude of effects, rather than basing conclusions solely on descriptive statistics. (Author/BW)
Descriptors: Analysis of Variance, Hypothesis Testing, Mathematical Models, Research Design
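One simple way to formalize a null hypothesis of equal impact is a Wald-type test that two regression coefficients (on comparable scales) are equal, built from the coefficient covariance matrix; this is offered only as an illustration of testing relative magnitude rather than eyeballing descriptive statistics, and is not the specific technique from Ronis (1981).

```python
# Sketch: testing H0: beta1 = beta2 (equal impact of two predictors measured on
# the same scale) with a Wald-type t test from the OLS coefficient covariance.
# Illustrative only; not the procedure presented by Ronis (1981).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.8, 0.5]) + rng.normal(size=n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
df = n - X.shape[1]
sigma2 = resid @ resid / df
cov_beta = sigma2 * np.linalg.inv(X.T @ X)

# Contrast c'beta = beta1 - beta2; H0: the two slopes have equal impact.
c = np.array([0.0, 1.0, -1.0])
t = (c @ beta) / np.sqrt(c @ cov_beta @ c)
p = 2 * stats.t.sf(abs(t), df)
print(f"t = {t:.2f}, p = {p:.3f}")
```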
Peer reviewed
Pohlmann, John T. – Multiple Linear Regression Viewpoints, 1979
The type I error rate in stepwise regression analysis deserves serious consideration by researchers. The problem-wide error rate is the probability of selecting any variable when all variables have population regression weights of zero. Appropriate significance tests are presented and a Monte Carlo experiment is described. (Author/CTM)
Descriptors: Correlation, Error Patterns, Multiple Regression Analysis, Predictor Variables
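A small Monte Carlo sketch of the problem-wide error rate the abstract defines: under a complete null (all population weights zero), how often does any variable clear the entry criterion of a simple forward-selection step? The design sizes and per-step alpha are placeholders, not Pohlmann's experiment.

```python
# Sketch: estimating the problem-wide Type I error of forward selection when
# every population regression weight is zero -- the probability that *any*
# predictor is selected. Placeholder sizes; not Pohlmann's Monte Carlo design.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, p, alpha_enter, reps = 50, 10, 0.05, 2000

def any_variable_enters(X, y, alpha):
    """First forward-selection step: does the best single predictor enter?"""
    best_p = 1.0
    for j in range(X.shape[1]):
        r = np.corrcoef(X[:, j], y)[0, 1]
        t = r * np.sqrt((len(y) - 2) / (1 - r * r))
        best_p = min(best_p, 2 * stats.t.sf(abs(t), len(y) - 2))
    return best_p < alpha

hits = 0
for _ in range(reps):
    X = rng.normal(size=(n, p))
    y = rng.normal(size=n)                  # complete null: y unrelated to X
    hits += any_variable_enters(X, y, alpha_enter)

print(f"problem-wide Type I error ~ {hits / reps:.3f}")  # well above the nominal 0.05
```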
Peer reviewed
Ramsey, Philip H. – Journal of Educational Statistics, 1980
Disagreements have arisen about the robustness of the t test in normal populations with unequal variances. Employing liberal but objective standards for assessing robustness, it is shown that the t test is not always robust to the assumption of equal population variances even when sample sizes are equal. (Author/JKS)
Descriptors: Analysis of Variance, Data Analysis, Hypothesis Testing, Mathematical Models
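A minimal Monte Carlo sketch of the quantity at issue: the empirical Type I error of the pooled-variance t test when the populations have equal means and equal sample sizes but unequal variances. The variance ratio and sample size below are placeholders, not the conditions examined by Ramsey.

```python
# Sketch: empirical Type I error of the pooled-variance t test with equal n but
# unequal population variances. Placeholder settings; not Ramsey's conditions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, var_ratio, reps, alpha = 15, 9.0, 5000, 0.05

rejections = 0
for _ in range(reps):
    x = rng.normal(0, 1.0, size=n)
    y = rng.normal(0, np.sqrt(var_ratio), size=n)   # same mean, 9x the variance
    t, p = stats.ttest_ind(x, y, equal_var=True)    # pooled-variance t test
    rejections += p < alpha

print(f"empirical alpha ~ {rejections / reps:.3f} (nominal {alpha})")
```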
Peer reviewed
Dyer, Frank J. – Educational and Psychological Measurement, 1980
Power analysis is in essence a technique for estimating the probability of obtaining a specific minimum observed effect size. Power analysis techniques are applied to research planning problems in test reliability studies. A table for use in research planning and hypothesis testing is presented. (Author)
Descriptors: Hypothesis Testing, Mathematical Formulas, Power (Statistics), Probability
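A sketch of one way such a research-planning calculation can be framed: power for testing that a reliability (correlation) coefficient exceeds a hypothesized value, using the Fisher z approximation. The coefficients and sample size are illustrative and are not reproduced from Dyer's table.

```python
# Sketch: approximate power for a one-sided test that a reliability
# (correlation) coefficient exceeds a hypothesized value, via the Fisher z
# approximation. Illustrative numbers; not taken from Dyer (1980).
import numpy as np
from scipy import stats

def reliability_power(r_alt, r_null, n, alpha=0.05):
    """Power of the one-sided test H0: rho = r_null vs H1: rho = r_alt > r_null."""
    z_alt, z_null = np.arctanh(r_alt), np.arctanh(r_null)
    se = 1.0 / np.sqrt(n - 3)
    z_crit = stats.norm.ppf(1 - alpha)
    # Reject when (z_obs - z_null)/se > z_crit; evaluate under the alternative.
    return stats.norm.sf(z_crit - (z_alt - z_null) / se)

print(reliability_power(r_alt=0.85, r_null=0.70, n=60))
```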
Peer reviewed
Williams, John D. – Journal of Experimental Education, 1979
Hollingsworth recently showed a posttest contrast for analysis of variance situations that, for equal sample sizes, had several favorable qualities. However, for unequal sample sizes, the contrast fails to achieve status as a maximized contrast; thus, separate testing of the contrast is required. (Author/GSK)
Descriptors: Analysis of Variance, Data Analysis, Hypothesis Testing, Statistical Analysis
Peer reviewed
Brown, Ric – Journal for Research in Mathematics Education, 1980
The author discusses the importance of statistical significance to researchers and suggests that researchers should consider an additional statistic, the magnitude of effect index. (MK)
Descriptors: Educational Research, Mathematics Education, Research Problems, Researchers
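Eta squared is one common magnitude-of-effect index of the kind the article recommends reporting alongside the significance test; the sketch below computes it for a one-way ANOVA on made-up data.

```python
# Sketch: reporting a magnitude-of-effect index (eta squared for a one-way
# ANOVA) alongside the significance test. Example data are made up.
import numpy as np
from scipy import stats

groups = [np.array([3.1, 2.9, 3.6, 3.3]),
          np.array([3.8, 4.1, 3.9, 4.4]),
          np.array([2.5, 2.8, 2.2, 2.6])]

F, p = stats.f_oneway(*groups)

grand_mean = np.mean(np.concatenate(groups))
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_total = sum(((g - grand_mean) ** 2).sum() for g in groups)
eta_squared = ss_between / ss_total          # proportion of variance explained

print(f"F = {F:.2f}, p = {p:.4f}, eta^2 = {eta_squared:.2f}")
```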
Peer reviewed
Swaminathan, Hariharan; DeFriesse, Frederick – Educational and Psychological Measurement, 1979
A problem in analysis of variance is that after rejection of the overall hypothesis, no contrasts of interest are found to be significant. A procedure for determining the contrast of significance is outlined, and the relationship between the "most significant" contrast and the overall test is shown. (Author/JKS)
Descriptors: Analysis of Variance, Comparative Analysis, Hypothesis Testing, Statistical Significance
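The well-known identity underlying this kind of result is that the contrast maximizing the contrast sum of squares has coefficients proportional to n_j(mean_j − grand mean), and its sum of squares equals the between-groups sum of squares, tying the "most significant" contrast directly to the omnibus test. The sketch below checks that identity numerically on made-up data; it is not necessarily the article's own derivation.

```python
# Sketch: the contrast with coefficients c_j proportional to n_j * (mean_j -
# grand mean) attains the maximum contrast sum of squares, which equals
# SS_between. Numerical check on made-up data.
import numpy as np

groups = [np.array([5.0, 6.0, 5.5]),
          np.array([7.0, 8.0, 7.5, 8.5]),
          np.array([4.0, 4.5, 5.0])]

n = np.array([len(g) for g in groups])
means = np.array([g.mean() for g in groups])
grand = np.concatenate(groups).mean()

c = n * (means - grand)                       # maximizing contrast coefficients
ss_contrast = (c @ means) ** 2 / np.sum(c ** 2 / n)
ss_between = np.sum(n * (means - grand) ** 2)

print(ss_contrast, ss_between)                # the two quantities agree
```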
Peer reviewed
James, Michael – Educational and Psychological Measurement, 1979
Details are given for the use of the mixed effects multivariate analysis of variance table provided by the BMD12V computer program to compute raw generalized variances and hence the U and F statistics for the mixed effects model. (Author/JKS)
Descriptors: Analysis of Variance, Computer Programs, Hypothesis Testing, Program Descriptions