Publication Date
| Period | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 11 |
| Since 2022 (last 5 years) | 121 |
| Since 2017 (last 10 years) | 1491 |
| Since 2007 (last 20 years) | 5823 |
Author
| Author | Count |
| --- | --- |
| Lawson, Anton E. | 16 |
| Algina, James | 15 |
| Wilcox, Rand R. | 15 |
| Paas, Fred | 13 |
| Singaravelu, G. | 13 |
| Levin, Joel R. | 12 |
| Games, Paul A. | 11 |
| Newman, Isadore | 11 |
| Feldt, Leonard S. | 10 |
| Levy, Kenneth J. | 10 |
| Olejnik, Stephen F. | 10 |
Audience
| Audience | Count |
| --- | --- |
| Researchers | 247 |
| Teachers | 111 |
| Practitioners | 74 |
| Administrators | 13 |
| Policymakers | 13 |
| Students | 9 |
| Media Staff | 7 |
| Counselors | 4 |
| Community | 1 |
Location
| Location | Count |
| --- | --- |
| Nigeria | 276 |
| Germany | 163 |
| Canada | 133 |
| Australia | 118 |
| India | 117 |
| Netherlands | 111 |
| United States | 102 |
| Israel | 94 |
| United Kingdom | 90 |
| California | 87 |
| China | 86 |
What Works Clearinghouse Rating
| Rating | Count |
| --- | --- |
| Meets WWC Standards without Reservations | 11 |
| Meets WWC Standards with or without Reservations | 13 |
| Does not meet standards | 7 |
Dupont, Daniel; Stolovitch, Harold D. – Performance and Instruction, 1983
Describes the process by which Learner Verification and Revision (LVR) transforms information gathered during learner verification into revision prescriptions for the development of instructional materials. Robinson's and Gropper's procedural models for formative evaluation are also discussed and a study comparing the effectiveness of materials…
Descriptors: Evaluation Methods, Feedback, Formative Evaluation, Hypothesis Testing
Peer reviewed: Algina, James – Multivariate Behavioral Research, 1982
The use of analysis of covariance in simple repeated measures designs is considered. Conditions necessary for the analysis-of-covariance-adjusted main effects and interactions to be meaningful are presented. (Author/JKS)
Descriptors: Analysis of Covariance, Analysis of Variance, Data Analysis, Hypothesis Testing
Peer reviewed: Rosenthal, Robert; Rubin, Donald B. – Journal of Educational Psychology, 1982
The procedures for (1) assessing the heterogeneity of a set of effect sizes derived from a meta-analysis, (2) testing for trends by means of contrasts among the effect sizes obtained, and (3) evaluating the practical importance of the average effect size obtained are described. (Author/PN)
Descriptors: Cognitive Ability, Data Analysis, Evaluation Methods, Hypothesis Testing
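The heterogeneity assessment in step (1) reduces to an inverse-variance-weighted chi-square statistic; a minimal sketch in Python, where the function name and data are illustrative, not values from the study:

```python
import math

def q_heterogeneity(effects, variances):
    """Heterogeneity (Q) statistic for k independent effect sizes.

    Each effect is weighted by its inverse variance; under the null
    hypothesis of homogeneity, Q follows a chi-square distribution
    with k - 1 degrees of freedom.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    q = sum(w * (d - pooled) ** 2 for w, d in zip(weights, effects))
    return q, pooled

# Hypothetical effect sizes (standardized mean differences) and variances
effects = [0.30, 0.45, 0.10, 0.60]
variances = [0.04, 0.05, 0.03, 0.06]
q, pooled = q_heterogeneity(effects, variances)
print(round(q, 3), round(pooled, 3))
```

A large Q relative to the chi-square reference suggests the effects do not share a single population value, which motivates the trend contrasts in step (2).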
Peer reviewed: Cook, Thomas J.; Poole, W. Kenneth – Evaluation Review, 1982
The assumption of equal treatment implementation is questioned. Through the reanalysis of data from a nutrition supplementation program evaluation, the power of the analysis of treatment effects is shown to increase when data on the level of treatment implementation are included. (Author/CM)
Descriptors: Evaluation Methods, Hypothesis Testing, Power (Statistics), Program Evaluation
Peer reviewed: Howe, Holly L.; Hoff, Margaret B. – Evaluation and the Health Professions, 1981
The sensitivity and simplicity of Wald's sequential analysis test in monitoring a preventive health care program are discussed. Data exemplifying the usefulness and expedience of employing sequential methods are presented. (Author/GK)
Descriptors: Evaluation Methods, Formative Evaluation, Hypothesis Testing, Preventive Medicine
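Wald's sequential test examines outcomes one at a time rather than waiting for a fixed sample, stopping as soon as the evidence crosses a decision boundary; a minimal Bernoulli-rate sketch, with all names and data illustrative rather than drawn from the program discussed above:

```python
import math

def sprt_bernoulli(outcomes, p0, p1, alpha=0.05, beta=0.10):
    """Wald's sequential probability ratio test for a Bernoulli rate.

    Tests H0: p = p0 against H1: p = p1 (with p1 > p0), updating the
    cumulative log-likelihood ratio after each observation and stopping
    as soon as it crosses either decision boundary.
    """
    upper = math.log((1 - beta) / alpha)   # cross above: accept H1
    lower = math.log(beta / (1 - alpha))   # cross below: accept H0
    llr = 0.0
    for n, x in enumerate(outcomes, start=1):
        if x:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "continue sampling", len(outcomes)

# Hypothetical monitoring data: 1 = adverse outcome observed
data = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
print(sprt_bernoulli(data, p0=0.2, p1=0.5))  # → ('accept H1', 5)
```

The expedience noted in the abstract comes from early stopping: here a decision is reached after five observations instead of a predetermined fixed n.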
Peer reviewed: Rubin, Donald B. – Journal of Educational Statistics, 1981
The use of Bayesian and empirical Bayesian techniques to summarize results from parallel randomized experiments is illustrated using the results of eight such experiments from an SAT coaching study. Graphical techniques, simulation techniques, and methods for monitoring the adequacy of model specification are highlighted. (Author/JKS)
Descriptors: Bayesian Statistics, Data Analysis, Educational Experiments, Goodness of Fit
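The empirical Bayes summary of parallel experiments amounts to shrinking each study's estimate toward a precision-weighted grand mean. A minimal sketch, treating the between-study standard deviation tau as known for illustration (the paper addresses estimating it); the data here are hypothetical, not the SAT coaching results:

```python
def shrink(estimates, std_errors, tau):
    """Empirical-Bayes shrinkage of parallel experiment estimates.

    Each study's estimate is pulled toward the precision-weighted
    grand mean; the amount of shrinkage grows with the ratio of the
    study's sampling variance s_j^2 to s_j^2 + tau^2.
    """
    prec = [1.0 / (s * s + tau * tau) for s in std_errors]
    mu = sum(p * y for p, y in zip(prec, estimates)) / sum(prec)
    out = []
    for y, s in zip(estimates, std_errors):
        b = (s * s) / (s * s + tau * tau)   # shrinkage factor in [0, 1]
        out.append((1 - b) * y + b * mu)
    return out

# Hypothetical study estimates and standard errors
estimates = [28.0, 8.0, -3.0, 7.0]
std_errors = [15.0, 10.0, 16.0, 11.0]
print([round(v, 1) for v in shrink(estimates, std_errors, tau=8.0)])
```

Noisy, extreme studies (large standard errors) are pulled hardest toward the common mean, which is what makes the pooled summary more stable than the raw estimates.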
Peer reviewed: Howell, David C.; McConaughy, Stephanie H. – Educational and Psychological Measurement, 1982
It is argued here that the choice of the appropriate method for calculating least squares analysis of variance with unequal sample sizes depends upon the question the experimenter wants to answer about the data. The different questions reflect different null hypotheses. An example is presented using two alternative methods. (Author/BW)
Descriptors: Analysis of Variance, Hypothesis Testing, Least Squares Statistics, Mathematical Models
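The competing null hypotheses with unequal cell sizes correspond to weighted versus unweighted marginal means; a minimal sketch with hypothetical cell means and sizes (names and numbers are illustrative):

```python
def marginal_means(cell_means, cell_ns):
    """Weighted vs. unweighted marginal means for the cells of one
    factor level when cell sizes are unequal.

    The weighted mean describes the population as actually sampled;
    the unweighted mean treats every cell as equally important.
    With unequal n these imply different null hypotheses.
    """
    weighted = sum(m * n for m, n in zip(cell_means, cell_ns)) / sum(cell_ns)
    unweighted = sum(cell_means) / len(cell_means)
    return weighted, unweighted

# Hypothetical cells: means 10 and 20 with n = 30 and n = 10
print(marginal_means([10.0, 20.0], [30, 10]))  # → (12.5, 15.0)
```

With equal cell sizes the two means coincide, which is why the choice of least-squares method only becomes consequential in unbalanced designs.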
Peer reviewed: Hakstian, A. Ralph; And Others – Multivariate Behavioral Research, 1982
Issues related to the decision of the number of factors to retain in factor analyses are identified. Three widely used decision rules--the Kaiser-Guttman (eigenvalue greater than one), scree, and likelihood ratio tests--are investigated using simulated data. Recommendations for use are made. (Author/JKS)
Descriptors: Algorithms, Data Analysis, Factor Analysis, Factor Structure
Peer reviewed: Zwick, William R. – Multivariate Behavioral Research, 1982
The performance of four rules for determining the number of components (factors) to retain (Kaiser's eigenvalue greater than one, Cattell's scree, Bartlett's test, and Velicer's Map) was investigated across four systematically varied factors (sample size, number of variables, number of components, and component saturation). (Author/JKS)
Descriptors: Algorithms, Data Analysis, Factor Analysis, Factor Structure
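The Kaiser-Guttman (eigenvalue-greater-than-one) rule examined in both studies above can be illustrated without an eigensolver on an equicorrelation matrix, whose eigenvalues have a known closed form; a sketch with illustrative parameters:

```python
def kaiser_retained(p, r):
    """Components retained by the eigenvalue-greater-than-one rule for
    a p-variable equicorrelation matrix (all off-diagonal entries r).

    Such a matrix has one eigenvalue 1 + (p - 1) * r and p - 1
    eigenvalues equal to 1 - r, so the count follows directly.
    """
    eigenvalues = [1 + (p - 1) * r] + [1 - r] * (p - 1)
    return sum(1 for ev in eigenvalues if ev > 1)

# Ten moderately correlated variables: first eigenvalue 3.7, rest 0.7,
# so the rule keeps exactly one component.
print(kaiser_retained(10, 0.3))  # → 1
```

For general correlation matrices the eigenvalues must be computed numerically, and, as the simulation studies above report, the rule's behavior then depends on sample size, number of variables, and component saturation.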
Peer reviewed: Lane, David M. – Multivariate Behavioral Research, 1981
Problems in testing main effects in regression analysis when there is interaction are discussed. A method by which main effects can be tested independently of the interaction is developed and compared with the hierarchical method. The method provides control of the type I error rate, but is quite conservative. (Author/JKS)
Descriptors: Aptitude Treatment Interaction, Data Analysis, Hypothesis Testing, Mathematical Models
Peer reviewed: Revelle, William; Rocklin, Thomas – Multivariate Behavioral Research, 1979
A new procedure for determining the optimal number of interpretable factors to extract from a correlation matrix is introduced and compared to more conventional procedures. The new method evaluates the magnitude of the very simple structure index of goodness of fit for factor solutions of increasing rank. (Author/CTM)
Descriptors: Factor Analysis, Goodness of Fit, Hypothesis Testing, Research Design
Peer reviewed: Williams, John T. – Multiple Linear Regression Viewpoints, 1979
A process is described for multiple comparisons when covariates are involved in the analysis. The method can be accomplished with considerable ease whenever pairwise comparisons are involved. More complex contrasts require the use of full and restricted models of variance. (CTM)
Descriptors: Analysis of Covariance, Comparative Analysis, Hypothesis Testing, Multiple Regression Analysis
Peer reviewed: Olejnik, Stephen F.; Porter, Andrew C. – Journal of Educational Statistics, 1981
The evaluation of competing analysis strategies based on estimator bias and variance is demonstrated using gains in standard scores and analysis of covariance procedures for quasi-experiments conforming to the fan-spread hypothesis. The findings do not lead to a uniform recommendation of either approach. (Author/JKS)
Descriptors: Bias, Data Analysis, Evaluation, Hypothesis Testing
Peer reviewed: Kraemer, Helena Chmura – Psychometrika, 1981
Limitations and extensions of Feldt's approach to testing the equality of Cronbach's alpha coefficients in independent and matched samples are discussed. In particular, this approach is used to test equality of intraclass correlation coefficients. (Author)
Descriptors: Analysis of Variance, Correlation, Hypothesis Testing, Mathematical Models
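Feldt's approach tests whether two or more alpha coefficients are equal; as background, a minimal computation of Cronbach's alpha itself, using hypothetical item scores (the data and function name are illustrative):

```python
def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-score columns.

    alpha = k / (k - 1) * (1 - sum of item variances / variance of
    total scores), using the population (n-denominator) variance
    consistently throughout.
    """
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical scores: 3 items rated by 4 respondents
items = [[2, 4, 3, 5],
         [3, 5, 4, 5],
         [2, 5, 3, 4]]
print(round(cronbach_alpha(items), 3))
```

Feldt-type tests then compare alphas computed this way across independent or matched samples via an F-type statistic, the extension Kraemer applies to intraclass correlations.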
Peer reviewed: Mitchell, Christine; Ault, Ruth L. – Child Development, 1979
In terms of Kagan's theory of the problem-solving process, this study explores the relationship between reflection-impulsivity, hypothesis generation and testing, and evaluation of the quality of one's own solutions among children approximately 8 to 12 years old. (JMB)
Descriptors: Children, Cognitive Processes, Cognitive Style, Conceptual Tempo