Source
| Educational and Psychological Measurement | 15 |
Author
| Goldstein, Zvi | 2 |
| Marcoulides, George A. | 2 |
| Aggarwal, Lalit K. | 1 |
| Bajgier, Steve M. | 1 |
| Bray, James H. | 1 |
| Brown, R. L. | 1 |
| Cahan, Sorel | 1 |
| Howard, George S. | 1 |
| Howell, David C. | 1 |
| Lance, Charles E. | 1 |
| Linn, Robert L. | 1 |
Publication Type
| Journal Articles | 14 |
| Reports - Evaluative | 10 |
| Reports - Research | 4 |
| Opinion Papers | 1 |
| Speeches/Meeting Papers | 1 |
Peer reviewed: Zucker, David M. – Educational and Psychological Measurement, 1990
A major pitfall with fixed-effects analysis of variance in the nested design is illustrated. Under the null hypothesis of no intervention effect, the rejection rate for the fixed-effects F statistic (F_fixed) in the nested design is greater than the putative Type I error rate of the test, potentially to an extreme degree. (Author/TJH)
Descriptors: Analysis of Variance, Equations (Mathematics), Mathematical Models, Research Problems
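The inflation described above can be reproduced with a short simulation. The sketch below is not from the article; it simply generates cluster-structured data with no treatment effect, analyzes it with a one-way fixed-effects ANOVA that ignores the nesting, and reports how often the test rejects at the nominal .05 level. All sample sizes and variance values are invented for illustration.

```python
# Hypothetical illustration of the nested-design pitfall: groups (e.g., classrooms)
# are nested within treatment arms, but the analysis ignores the nesting.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate_rejection_rate(n_reps=2000, clusters_per_arm=5, n_per_cluster=20,
                            cluster_sd=1.0, error_sd=1.0, alpha=0.05):
    rejections = 0
    for _ in range(n_reps):
        # No treatment effect: both arms share the same data-generating process.
        y_arms = []
        for _arm in range(2):
            cluster_means = rng.normal(0.0, cluster_sd, clusters_per_arm)
            y = rng.normal(np.repeat(cluster_means, n_per_cluster), error_sd)
            y_arms.append(y)
        # "F_fixed": one-way ANOVA on condition, treating observations as independent.
        _, p = stats.f_oneway(y_arms[0], y_arms[1])
        rejections += p < alpha
    return rejections / n_reps

print("Empirical Type I error (nominal 0.05):", simulate_rejection_rate())
# With nonzero cluster variance the empirical rate is typically well above 0.05.
```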
Peer reviewed: Lance, Charles E.; And Others – Educational and Psychological Measurement, 1990
A causal model of halo error (HE) is derived. Three hypotheses are formulated to explain findings of negative HE. It is suggested that apparent negative HE may have been misinferred from existing correlational measures of HE, and that positive HE is more prevalent than had previously been thought. (SLD)
Descriptors: Causal Models, Correlation, Definitions, Equations (Mathematics)
Peer reviewed: Micceri, Theodore – Educational and Psychological Measurement, 1990
The use of proportions to transform raw scores into standardized, dimensionless units creates measurement problems. The use of fractional (percentage) uncertainties for physical or any other measurements results in other problems. Means of dealing with associated problems of standardization are discussed. (TJH)
Descriptors: Mathematical Models, Measurement Techniques, Ratios (Mathematics), Raw Scores
Peer reviewed: Howell, David C.; McConaughy, Stephanie H. – Educational and Psychological Measurement, 1982
It is argued here that the choice of the appropriate method for calculating least squares analysis of variance with unequal sample sizes depends upon the question the experimenter wants to answer about the data. The different questions reflect different null hypotheses. An example is presented using two alternative methods. (Author/BW)
Descriptors: Analysis of Variance, Hypothesis Testing, Least Squares Statistics, Mathematical Models
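A toy numerical example (not taken from the article) makes the point concrete: with unequal cell sizes, weighted and unweighted marginal means can tell different stories, so least squares analyses built on one or the other are testing different null hypotheses. The cell means and sample sizes below are invented.

```python
# Hypothetical 2 x 2 design with unequal cell sizes.
import numpy as np

cell_means = np.array([[10.0, 20.0],
                       [10.0, 20.0]])
cell_ns    = np.array([[30, 10],
                       [10, 30]])

# Unweighted marginal means of factor A: simple average of cell means,
# roughly the question addressed by unweighted-means (Type III-style) analyses.
unweighted_A = cell_means.mean(axis=1)

# Weighted marginal means of factor A: cell means weighted by cell size,
# roughly the question addressed by weighted-means (hierarchical) analyses.
weighted_A = (cell_means * cell_ns).sum(axis=1) / cell_ns.sum(axis=1)

print("Unweighted A means:", unweighted_A)   # [15. 15.]   -> no A effect
print("Weighted A means:  ", weighted_A)     # [12.5 17.5] -> apparent A effect
```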
Peer reviewed: Strauss, David – Educational and Psychological Measurement, 1981
To determine if the observed correlation between two variables can be "explained" by a third variable, a significance test on the partial correlation coefficient is often used. This can be misleading when the third variable is measured with error. This article shows how the problem can be partially overcome. (Author/BW)
Descriptors: Correlation, Error of Measurement, Mathematical Models, Predictive Validity
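The distortion is easy to see in a simulation. The sketch below illustrates only the problem, not the correction the article proposes: X and Y are related solely through a third variable Z, so the partial correlation controlling for the true Z is essentially zero, while controlling for an error-laden measurement W of Z leaves a clearly nonzero partial correlation. All variances are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

z = rng.normal(size=n)                 # true third variable
x = z + rng.normal(size=n)             # X and Y are related only through z
y = z + rng.normal(size=n)
w = z + rng.normal(size=n)             # z observed with measurement error

def partial_corr(a, b, c):
    """First-order partial correlation of a and b controlling for c."""
    r_ab = np.corrcoef(a, b)[0, 1]
    r_ac = np.corrcoef(a, c)[0, 1]
    r_bc = np.corrcoef(b, c)[0, 1]
    return (r_ab - r_ac * r_bc) / np.sqrt((1 - r_ac**2) * (1 - r_bc**2))

print("Partial r(X, Y | true Z):       ", round(partial_corr(x, y, z), 3))  # ~ 0
print("Partial r(X, Y | error-prone W):", round(partial_corr(x, y, w), 3))  # clearly > 0
```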
Peer reviewed: Marcoulides, George A.; Goldstein, Zvi – Educational and Psychological Measurement, 1990
A methodology for determining the optimal number of observations to use in a measurement design when resource constraints are imposed is presented. Two- and three-facet designs are outlined. Parallel closed form formulae can easily be determined for other designs. (TJH)
Descriptors: Equations (Mathematics), Estimation (Mathematics), Generalizability Theory, Mathematical Models
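The article derives closed-form solutions; the brute-force sketch below only illustrates the underlying optimization problem for a hypothetical person x item x rater design: choose the numbers of items and raters that minimize relative error variance subject to a budget constraint. The variance components, unit costs, and budget are invented.

```python
from itertools import product

var_pi, var_pr, var_pir = 0.30, 0.10, 0.20   # hypothetical variance components
cost_item, cost_rater = 2.0, 5.0             # hypothetical unit costs
budget = 60.0

best = None
for n_items, n_raters in product(range(1, 31), range(1, 31)):
    if n_items * cost_item + n_raters * cost_rater > budget:
        continue
    # Relative error variance for a p x i x r random-effects design.
    err = var_pi / n_items + var_pr / n_raters + var_pir / (n_items * n_raters)
    if best is None or err < best[0]:
        best = (err, n_items, n_raters)

err, n_items, n_raters = best
print(f"Best within budget: {n_items} items, {n_raters} raters, error variance {err:.4f}")
```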
Peer reviewed: Cahan, Sorel – Educational and Psychological Measurement, 1986
This paper points out the logical, theoretical, and statistical problems associated with the computation of the "pooled-within" correlation coefficient in partially segregated educational systems. (Author/LMO)
Descriptors: Correlation, Educational Research, Heterogeneous Grouping, Homogeneous Grouping
Peer reviewed: Werts, Charles E.; Linn, Robert L. – Educational and Psychological Measurement, 1971
Descriptors: Analysis of Covariance, Analysis of Variance, Comparative Analysis, Mathematical Models
Peer reviewed: Maxwell, Scott E.; Howard, George S. – Educational and Psychological Measurement, 1981
This paper delineates conditions under which the use of change scores will not produce misleading results, and may perhaps be preferable to other methods of analysis. The validity of change scores in randomized pretest-posttest designs is discussed along with situations where analysis of change scores should be used. (Author/GK)
Descriptors: Analysis of Covariance, Analysis of Variance, Mathematical Models, Pretests Posttests
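A brief simulation (not taken from the article) illustrates the randomized pretest-posttest case the abstract refers to: both the change-score analysis and ANCOVA estimate the treatment effect without bias, and their precision differs depending on the pre-post regression slope. The effect size, slope, and sample sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n_per_group, true_effect, slope = 50, 0.3, 0.5
gain_estimates, ancova_estimates = [], []

for _ in range(2000):
    group = np.repeat([0, 1], n_per_group)            # randomized assignment
    pre = rng.normal(size=2 * n_per_group)
    post = slope * pre + true_effect * group + rng.normal(size=2 * n_per_group)

    # Change-score estimate: difference in mean gains.
    gain = post - pre
    gain_estimates.append(gain[group == 1].mean() - gain[group == 0].mean())

    # ANCOVA estimate: coefficient of group in OLS of post on [1, group, pre].
    X = np.column_stack([np.ones_like(pre), group, pre])
    beta, *_ = np.linalg.lstsq(X, post, rcond=None)
    ancova_estimates.append(beta[1])

print("Change scores: mean %.3f, SD %.3f" % (np.mean(gain_estimates), np.std(gain_estimates)))
print("ANCOVA:        mean %.3f, SD %.3f" % (np.mean(ancova_estimates), np.std(ancova_estimates)))
# Both means sit near the true effect; the SDs show the precision difference.
```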
Peer reviewed: Marcoulides, George A.; Goldstein, Zvi – Educational and Psychological Measurement, 1991
A method is presented for determining the optimal number of conditions to use in measurement designs when resource constraints are imposed. The method is illustrated using a multivariate two-facet design, and extensions to other designs are discussed. (SLD)
Descriptors: Budgeting, Data Collection, Efficiency, Equations (Mathematics)
Peer reviewed: Bray, James H.; And Others – Educational and Psychological Measurement, 1984
The purpose of this study was to determine the relative loss in statistical power of traditional methods of analysis when response-shift bias is present. Five methods of analysis (posttest scores only; post minus pretest; post minus retrospective pretest; post covaried by pretest; post covaried by retrospective pretest) were compared. (Author/BW)
Descriptors: Analysis of Covariance, Analysis of Variance, Comparative Analysis, Mathematical Models
Peer reviewed: Preece, Peter F. W. – Educational and Psychological Measurement, 1982
The validity of various reliability-corrected procedures for adjusting for initial differences between groups in uncontrolled studies is established for subjects exhibiting linear fan-spread growth. The results are then extended to a nonlinear model of growth. (Author)
Descriptors: Achievement Gains, Analysis of Covariance, Error of Measurement, Hypothesis Testing
Peer reviewed: Reddy, Srinivas K. – Educational and Psychological Measurement, 1992
Implications of ignoring correlated error on parameter estimates in some simple structural equation models are examined. It is shown analytically and empirically through simulation that ignoring positive between-construct correlated error overestimates the structural parameter linking the two constructs. Effects become more pronounced with…
Descriptors: Correlation, Equations (Mathematics), Error of Measurement, Estimation (Mathematics)
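The direction of the bias can be illustrated with a deliberately simplified single-indicator setup, which is not one of the models examined in the article: when indicators of two different constructs share positively correlated measurement error and that correlation is ignored, the shared error covariance is absorbed into the structural path, inflating its estimate. The parameter values below are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n, gamma, error_r = 100_000, 0.40, 0.30   # true structural path and error correlation

f1 = rng.normal(size=n)                                   # exogenous construct
f2 = gamma * f1 + rng.normal(scale=np.sqrt(1 - gamma**2), size=n)

# Between-construct correlated measurement errors for the two indicators.
err_cov = error_r * 0.5
err = rng.multivariate_normal([0, 0], [[0.5, err_cov], [err_cov, 0.5]], size=n)
x = f1 + err[:, 0]                                        # indicator of construct 1
y = f2 + err[:, 1]                                        # indicator of construct 2

# "Naive" estimate: attribute all of cov(x, y) to the structural path,
# assuming uncorrelated errors and a known construct variance var(f1) = 1.
naive_gamma = np.cov(x, y)[0, 1] / 1.0
print(f"True gamma: {gamma:.2f}, naive estimate ignoring correlated error: {naive_gamma:.2f}")
```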
Peer reviewed: Bajgier, Steve M.; Aggarwal, Lalit K. – Educational and Psychological Measurement, 1991
Ignorance of the characteristics of a mixed population may lead to bias in a summary measure of a phenomenon. A test based on sample kurtosis is demonstrated by a simulation study to be more powerful than six other known tests in detecting a class of mixed normal distributions. (SLD)
Descriptors: Comparative Analysis, Computer Simulation, Equations (Mathematics), Goodness of Fit
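The sketch below is a rough stand-in for the idea, not the authors' test or their simulation design: it uses sample (excess) kurtosis as the test statistic, simulates its null distribution under a single normal to obtain a critical value, and then checks a scale-mixture sample against it. Sample sizes, mixing proportion, and component variances are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n, n_null_reps = 200, 5000

# Null distribution of excess kurtosis for samples from a single normal.
null_kurt = np.array([stats.kurtosis(rng.normal(size=n)) for _ in range(n_null_reps)])
upper_crit = np.quantile(null_kurt, 0.95)

# A scale mixture: 80% N(0, 1) and 20% N(0, 3^2) observations, which is leptokurtic.
component = rng.random(n) < 0.2
sample = np.where(component, rng.normal(scale=3.0, size=n), rng.normal(size=n))

obs = stats.kurtosis(sample)
print(f"Observed excess kurtosis {obs:.2f}, 95th percentile under normality {upper_crit:.2f}")
print("Reject normality (one-sided):", obs > upper_crit)
```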
Peer reviewed: Brown, R. L. – Educational and Psychological Measurement, 1992
A Monte Carlo study explores the robustness assumption in structural equation modeling of using a full information normal theory generalized least-squares estimation procedure on Type I censored data. The efficacy of the following proposed alternate estimation procedures is assessed: asymptotically distribution free estimator and a latent…
Descriptors: Computer Simulation, Equations (Mathematics), Estimation (Mathematics), Least Squares Statistics


