Proper, Elizabeth C. – 1971
Texts often suggest running preliminary tests for homogeneity of variance prior to running an ANOVA. While it has been known for some time that most of the suggested tests are probably not appropriate, they are still being used. This paper reviews the literature on the implications of running such preliminary tests in general…
Descriptors: Analysis of Variance, Hypothesis Testing, Literature Reviews, Models
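(Editorial illustration, not part of the cited record: the practice under review is a preliminary homogeneity-of-variance test followed by the ANOVA itself. A minimal sketch of that sequence, assuming Python with scipy and invented group data, is given below; the cited paper's point is that such preliminary tests are often not appropriate, so this only shows the practice being reviewed.)

```python
# Illustrative "preliminary test, then ANOVA" sequence on invented data:
# Levene's test for homogeneity of variance, followed by a one-way ANOVA.
from scipy import stats

g1 = [5.1, 4.8, 5.6, 5.0, 4.9]
g2 = [6.2, 5.9, 6.5, 6.1, 6.4]
g3 = [5.5, 5.7, 5.2, 5.9, 5.4]

levene_stat, levene_p = stats.levene(g1, g2, g3)   # preliminary homogeneity test
f_stat, anova_p = stats.f_oneway(g1, g2, g3)       # the ANOVA itself
print(f"Levene p = {levene_p:.3f}; ANOVA F = {f_stat:.2f}, p = {anova_p:.4f}")
```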
Bielby, William T.; Kluegel, James R. – 1976
Neglected issues of simultaneous statistical inference and statistical power in survey research applications of the general linear model are reviewed. Classical hypothesis testing, as it is currently applied, is found to be inadequate for the purposes of social research. The intelligent use of statistical inference demands control over…
Descriptors: Comparative Analysis, Hypothesis Testing, Mathematical Models, Power (Statistics)
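(Editorial illustration, not drawn from the cited report: one elementary device for the simultaneous-inference control the abstract alludes to is a Bonferroni adjustment of the per-test significance level. The sketch below uses invented p-values and shows the idea only.)

```python
# Illustrative Bonferroni adjustment for simultaneous (familywise) inference
# across several hypothesis tests; the p-values are invented.
p_values = [0.012, 0.049, 0.030, 0.210]
alpha = 0.05
m = len(p_values)

for i, p in enumerate(p_values, start=1):
    decision = "reject" if p < alpha / m else "fail to reject"
    print(f"test {i}: p = {p:.3f} vs. adjusted threshold {alpha / m:.4f} -> {decision}")
```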
Novack, Stanley R. – Training, 1976
A statistical method, Fisher's Table of t (or t ratio), is used to determine the statistical significance of a comparison of results from pretests and posttests of knowledge and skill taken by training program participants. A statistically significant improvement can demonstrate training effectiveness. (MS)
Descriptors: Comparative Analysis, Educational Programs, Evaluation Methods, Pretesting
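(Editorial illustration, not taken from the cited article: the pretest/posttest comparison described here corresponds to a paired t-test. A minimal sketch, assuming Python with scipy and invented score data, follows.)

```python
# Illustrative paired t-test on invented pretest/posttest scores
# for the same training participants.
from scipy import stats

pretest  = [52, 61, 48, 70, 66, 58, 63, 55]
posttest = [60, 68, 55, 74, 71, 57, 70, 62]

t_stat, p_value = stats.ttest_rel(posttest, pretest)   # paired (related-samples) t-test
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
# A small p-value (e.g., below .05) would indicate a statistically significant gain.
```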
Peer reviewed
Blair, R. Clifford; Higgins, J. J. – American Educational Research Journal, 1978
Kaufman and Sweet's article on the regression analysis of unbalanced factorial designs (EJ 111 767) is reviewed. A number of errors are noted, and relevant literature is cited. (GDC)
Descriptors: Least Squares Statistics, Mathematical Models, Multiple Regression Analysis, Research Design
Peer reviewed
Daniel, Wayne W. – Science Education, 1977
Research hypotheses versus statistical hypotheses, null hypotheses and alternative hypotheses, and statistical significance versus practical significance are described and illustrated. (CP)
Descriptors: Educational Research, Research Design, Research Methodology, Research Problems
Peer reviewed
Tanner, David E. – Journal of Research and Development in Education, 1988
A multiple-choice achievement test was constructed in which both cognitive level and degree of abstractness were controlled in test items. By controlling both dimensions, researchers hoped that the variance in achievement scores of earlier research could be accounted for. Results of testing education majors are discussed. (Author/MT)
Descriptors: Abstract Reasoning, Cognitive Ability, Cognitive Tests, Education Majors
Peer reviewed
Murray, Leigh W.; Dosser, David A., Jr. – Journal of Counseling Psychology, 1987
The use of measures of magnitude of effect has been advocated as a way to go beyond statistical tests of significance and to identify effects of a practical size. They have been used in meta-analysis to combine results of different studies. Describes problems associated with measures of magnitude of effect (particularly study size) and…
Descriptors: Effect Size, Meta Analysis, Research Design, Research Methodology
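(Editorial illustration, not from the cited article: one common measure of magnitude of effect is Cohen's d, the standardized mean difference. The sketch below computes it on invented two-group data; it shows what such a measure looks like, not the problems the article raises.)

```python
# Illustrative Cohen's d (standardized mean difference) on invented data.
import numpy as np

treated = np.array([14.0, 16.5, 15.2, 17.1, 13.8, 16.0])
control = np.array([12.9, 13.5, 14.1, 12.2, 13.0, 14.4])

pooled_var = ((len(treated) - 1) * treated.var(ddof=1) +
              (len(control) - 1) * control.var(ddof=1)) / (len(treated) + len(control) - 2)
d = (treated.mean() - control.mean()) / np.sqrt(pooled_var)
print(f"Cohen's d = {d:.2f}")   # roughly: 0.2 small, 0.5 medium, 0.8 large by convention
```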
Peer reviewed
Rasmussen, Jeffrey Lee – Evaluation Review, 1985
A recent study (Blair and Higgins, 1980) indicated a power advantage for the Wilcoxon W test over Student's t-test when calculated from a common mixed-normal sample. Results of the present study indicate that the t-test corrected for outliers shows a superior power curve to the Wilcoxon W.
Descriptors: Computer Simulation, Error of Measurement, Hypothesis Testing, Power (Statistics)
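(Editorial illustration: a power comparison of this kind is a Monte Carlo simulation. The sketch below, with invented sample sizes and contamination settings, compares the ordinary two-sample t-test with the Wilcoxon/Mann-Whitney test under a mixed-normal distribution; it does not implement the outlier-corrected t-test the cited study favors.)

```python
# Rough Monte Carlo power comparison: two-sample t-test vs. Wilcoxon/Mann-Whitney
# under a contaminated ("mixed") normal distribution. All settings are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, shift, reps, alpha = 20, 0.8, 2000, 0.05

def mixed_normal(size):
    # 90% N(0, 1), 10% N(0, 9): a heavy-tailed contaminated normal
    outlier = rng.random(size) < 0.10
    return np.where(outlier, rng.normal(0.0, 3.0, size), rng.normal(0.0, 1.0, size))

t_hits = w_hits = 0
for _ in range(reps):
    x = mixed_normal(n)
    y = mixed_normal(n) + shift                      # true location difference
    t_hits += stats.ttest_ind(x, y).pvalue < alpha
    w_hits += stats.mannwhitneyu(x, y, alternative="two-sided").pvalue < alpha

print(f"t-test power              ~ {t_hits / reps:.2f}")
print(f"Wilcoxon (rank-sum) power ~ {w_hits / reps:.2f}")
```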
Peer reviewed
Ottenbacher, Kenneth – Occupational Therapy Journal of Research, 1984
Occupational therapy research has been associated with low statistical power and a high rate of Type II errors. To increase power, a procedure involving the partitioning of the decision region into three sections, based on the decision-theory approach to significance testing, is proposed. (SK)
Descriptors: Behavioral Science Research, Effect Size, Hypothesis Testing, Occupational Therapy
Peer reviewed
Games, Paul A.; Howell, John F. – Journal of Educational Statistics, 1976
Compares three methods of analyzing pairwise treatment differences in a multi-treatment experiment via computer simulation techniques. Under the equal n condition, the robustness of the conventional Tukey Wholly Significant Difference test (WSD) to heterogeneous variances was contrasted with two alternate techniques. Under unequal n conditions,…
Descriptors: Analysis of Variance, Comparative Analysis, Computer Programs, Hypothesis Testing
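(Editorial illustration: the Tukey WSD referred to here is the familiar Tukey HSD family of all-pairwise comparisons. A minimal sketch using scipy's tukey_hsd on invented, equal-n data follows; it does not reproduce the cited simulation or its unequal-variance alternatives.)

```python
# Illustrative all-pairwise comparison of three treatment groups using
# Tukey's HSD (the "Wholly Significant Difference" family); data invented.
from scipy import stats   # tukey_hsd requires a reasonably recent SciPy (>= 1.8)

group_a = [23, 25, 28, 24, 26, 27]
group_b = [30, 29, 33, 31, 28, 32]
group_c = [26, 27, 25, 29, 28, 30]

result = stats.tukey_hsd(group_a, group_b, group_c)
print(result)   # pairwise mean differences with confidence intervals and p-values
```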
Peer reviewed
Scott, William A. – Educational and Psychological Measurement, 1972
Descriptors: Item Sampling, Mathematical Applications, Scoring Formulas, Statistical Analysis
Peer reviewed
Lissitz, Robert W.; Halperin, Silas – Educational and Psychological Measurement, 1971
Descriptors: Behavioral Science Research, Computer Programs, Hypothesis Testing, Mathematical Models
Peer reviewed
Westermann, Rainer; Hager, Willi – Perceptual and Motor Skills, 1983
Two psychological experiments--Anderson and Shanteau (1970), Berkowitz and LePage (1967)--are reanalyzed to present the problem of the relative importance of low Type I error probability and high power when answering a research question by testing several statistical hypotheses. (Author/PN)
Descriptors: Error of Measurement, Hypothesis Testing, Power (Statistics), Research Design
Peer reviewed
Sechrest, Lee; Yeaton, William H. – Evaluation Review, 1982
Methods of assessing effect size and the flaws that limit their usefulness are discussed. The various statistical procedures for estimating variance accounted for are based on different statistical models and produce sharply differing results. The methods reflect too greatly the particular study characteristics and hence have limited…
Descriptors: Analysis of Variance, Evaluation Criteria, Experiments, Research Methodology
Peer reviewed
Martin, Edwin – Psychological Review, 1981
Hintzman's 1980 attack on certain analyses in memory research is based on doubtful presumptions, namely, that contingency tables are inherently suspect as evidence for or against scientific conclusions and that pressing this logical argument is in some way an acceptable substitute for empirically examining the conclusions in question. (Author)
Descriptors: Analysis of Covariance, Correlation, Expectancy Tables, Goodness of Fit