Showing 2,161 to 2,175 of 3,311 results
Peer reviewed
Samejima, Fumiko – Applied Psychological Measurement, 1994
The reliability coefficient is predicted from the test information function (TIF) or two modified TIF formulas and a specific trait distribution. Examples illustrate the variability of the reliability coefficient across different trait distributions, and results are compared with empirical reliability coefficients. (SLD)
Descriptors: Adaptive Testing, Error of Measurement, Estimation (Mathematics), Reliability
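As an illustrative sketch (not Samejima's exact formulas): under item response theory the conditional standard error of ability is 1/sqrt(I(theta)), so averaging the error variance over an assumed normal trait distribution yields a predicted reliability of roughly var(theta) / (var(theta) + E[1/I(theta)]). The 2PL items and parameter values below are hypothetical.

```python
import numpy as np

def two_pl_info(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

def predicted_reliability(a, b, mu=0.0, sigma=1.0, n_draws=100_000, seed=0):
    """Predict reliability from the test information function and a normal
    trait distribution: rho = var(theta) / (var(theta) + mean error variance)."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(mu, sigma, n_draws)
    info = sum(two_pl_info(theta, ai, bi) for ai, bi in zip(a, b))
    err_var = np.mean(1.0 / info)          # E[SE^2(theta)] over the trait distribution
    return sigma**2 / (sigma**2 + err_var)

# A hypothetical 10-item test with evenly spaced difficulties
a = [1.2] * 10
b = np.linspace(-2, 2, 10)
rho = predicted_reliability(a, b)
print(round(rho, 3))
```

Changing `mu` or `sigma` changes the predicted reliability even though the test (and hence the TIF) is fixed, which is the variability across trait distributions the abstract describes.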
Peer reviewed
Huck, Schuyler W. – Educational and Psychological Measurement, 1992
Three factors that increase score variability yet can be associated with an increase, a decrease, or no change in Pearson's correlation coefficient (r) are discussed (restriction of range, errors of measurement, and linear transformations of data). The connection between changes in variability and r depends on how changes occur. (SLD)
Descriptors: Correlation, Equations (Mathematics), Error of Measurement, Groups
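The interplay Huck describes can be sketched with simulated scores (all numbers below are illustrative): a linear transformation inflates variance but leaves r unchanged, whereas added random measurement error inflates variance and attenuates r.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(0, 1, 5_000)
y = x + rng.normal(0, 1, 5_000)          # scores correlated with x

r_base = np.corrcoef(x, y)[0, 1]

# Linear transformation: variance grows 9-fold, r is unchanged
y_scaled = 3 * y + 10
r_scaled = np.corrcoef(x, y_scaled)[0, 1]

# Random measurement error: variance grows, r is attenuated
y_noisy = y + rng.normal(0, 2, 5_000)
r_noisy = np.corrcoef(x, y_noisy)[0, 1]

print(round(r_base, 3), round(r_scaled, 3), round(r_noisy, 3))
```

Both manipulations increase score variability, yet one leaves r alone and the other lowers it, so a change in variability by itself says nothing about the direction of change in r.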
Peer reviewed
Behrman, Jere R.; Rosenzweig, Mark R. – Economics of Education Review, 1999
Using a new twins sample, presents new estimates of schooling returns and "ability" bias, and tests the significance of that bias. Ability bias may exist, even if ability's genetically endowed component does not affect schooling decisions directly, when that component is correlated with other family characteristics (such as income) that do affect schooling. (19…
Descriptors: Ability, Elementary Secondary Education, Error of Measurement, Estimation (Mathematics)
Peer reviewed
Milanowski, Anthony T. – Journal of Personnel Evaluation in Education, 1999
Describes the temporal consistency of school classification observed in the Kentucky school-based performance award program and, secondarily, in the Charlotte-Mecklenburg (North Carolina) program. Data from the Kentucky Department of Education show the extent to which temporal inconsistency could be due to measurement error. (SLD)
Descriptors: Academic Achievement, Achievement Gains, Classification, Error of Measurement
Peer reviewed
Steiger, James H. – Structural Equation Modeling, 2000
Discusses two criticisms raised by L. Hayduk and D. Glaser of the most commonly used point estimate of the Root Mean Square Error of Approximation (RMSEA) and points out misconceptions in their discussion. Although there are apparent flaws in their arguments, the RMSEA is open to question for several other reasons. (SLD)
Descriptors: Error of Measurement, Estimation (Mathematics), Factor Analysis, Hypothesis Testing
Peer reviewed
Yin, Ping; Fan, Xitao – Educational and Psychological Measurement, 2000
Performed a meta-analysis of reliability estimates for Beck Depression Inventory (BDI) (A. Beck and others, 1961) scores across studies. Only 7.5% of studies reviewed (n=1,200) reported meaningful reliability estimates. Analysis results suggest that standard errors of measurement should be considered in addition to reliability estimates when…
Descriptors: Depression (Psychology), Error of Measurement, Estimation (Mathematics), Generalization
Kim, Jwa K. – Research in the Schools, 1994
Effects of item parameters on ability estimation were investigated through Monte Carlo studies using expected a posteriori (EAP) estimation. Results show a significant effect of the item discrimination parameter on the standard error of ability estimation: as the discrimination parameter increases, the standard error decreases. (SLD)
Descriptors: Ability, Error of Measurement, Estimation (Mathematics), Item Response Theory
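Kim's finding follows from the asymptotic result that the standard error of an ability estimate is 1/sqrt(test information), and that a 2PL item's information scales with the square of its discrimination a. A minimal sketch with hypothetical item parameters:

```python
import math

def item_info(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def ability_se(theta, items):
    """Asymptotic standard error of the ability estimate: 1/sqrt(total information)."""
    total = sum(item_info(theta, a, b) for a, b in items)
    return 1.0 / math.sqrt(total)

# Two 20-item tests, identical except for the discrimination parameter
low  = [(0.8, b / 5.0) for b in range(-10, 10)]
high = [(1.8, b / 5.0) for b in range(-10, 10)]

se_low, se_high = ability_se(0.0, low), ability_se(0.0, high)
print(round(se_low, 3), round(se_high, 3))   # SE shrinks as a grows
```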
Peer reviewed
Parshall, Cynthia G.; Houghton, Pansy Du Bose; Kromrey, Jeffrey D. – Journal of Educational Measurement, 1995
Samples of four sizes (n=15, 25, 50, 100) were drawn with replacement (1,000 times each) from each of 5 data files from teacher subject test areas to compare statistical bias and standard error. Trivial levels of equating bias were present with small samples, and substantial increases in standard errors were found as sample size decreased. (MAK)
Descriptors: Error of Measurement, Sample Size, Statistical Analysis, Statistical Bias
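The resampling design can be sketched as follows, using a synthetic population in place of the actual data files (all values are illustrative): draw samples with replacement at each size, recompute the statistic, and compare empirical bias and standard error across sizes.

```python
import numpy as np

rng = np.random.default_rng(7)
population = rng.normal(50, 10, 2_000)     # stand-in for one data file
true_mean = population.mean()

def resample_se(n, reps=1_000):
    """Empirical bias and SE of the sample mean over `reps` draws of size n."""
    means = np.array([rng.choice(population, size=n, replace=True).mean()
                      for _ in range(reps)])
    return means.mean() - true_mean, means.std(ddof=1)

for n in (15, 25, 50, 100):
    bias, se = resample_se(n)
    print(f"n={n:3d}  bias={bias:+.3f}  SE={se:.3f}")
```

As in the study, bias stays near zero at every size while the standard error grows sharply as n shrinks.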
Peer reviewed
Sivo, Stephen A.; Willson, Victor L. – Journal of Experimental Education, 1998
Critiques H. W. Marsh and K.-T. Hau's (1996) assertion, based on a particular counterexample drawn from Marsh's previous research, that parsimony is not always desirable when assessing model fit. The counterexample is neither general nor valid enough to support such a thesis, and it signals an oversight of extant stochastic models justifying correlated…
Descriptors: Correlation, Error of Measurement, Goodness of Fit, Statistical Studies
Peer reviewed
Hoyle, Rick H. – Journal of Experimental Education, 1998
In response to H. W. Marsh and K.-T. Hau's (1996) article on the potential for inferential errors when parsimony is rewarded in the evaluation of overall fit of structural equation models, a design-sensitive adjustment to the standard parsimony ratio is proposed. This ratio renders a more reasonable upper bound than does the standard parsimony…
Descriptors: Correlation, Error of Measurement, Goodness of Fit, Statistical Studies
Peer reviewed
Mohr, L. B. – Evaluation and Program Planning, 2000
Suggests that there is a tendency in social science and program evaluation to adhere to some methodological practices by force of custom rather than because of their reasoned applicability. These ideas include regression artifacts, random measurement error, and change or gain scores. (Author/SLD)
Descriptors: Error of Measurement, Program Evaluation, Regression (Statistics), Research Methodology
Peer reviewed
Reichardt, Charles S. – Evaluation and Program Planning, 2000
Agrees with L. Mohr that researchers are too quick to assume that measurement error is random, but disagrees with the idea that regression toward the mean has been a distraction and with the notion that analysis of change scores should be avoided in favor of regression analysis. (SLD)
Descriptors: Error of Measurement, Program Evaluation, Regression (Statistics), Research Methodology
Peer reviewed
Mohr, L. B. – Evaluation and Program Planning, 2000
Responds to C. S. Reichardt's discussion of regression artifacts, random measurement error, and change scores. Emphasizes that attention to regression artifacts in program evaluation is almost bound to be problematic and proposes some arguments in support of this position. (SLD)
Descriptors: Error of Measurement, Program Evaluation, Regression (Statistics), Research Methodology
Peer reviewed
Klauer, Karl Christoph; Batchelder, William H. – Psychometrika, 1996
A general approach to the analysis of nominal-scale ratings is discussed that is based on a simple measurement error model for a rater's judgments. The basic measurement error model gives rise to an agreement model for the agreement matrix of two or more raters. (SLD)
Descriptors: Classification, Data Analysis, Equations (Mathematics), Error of Measurement
Peer reviewed
Lee, Guemin – Journal of Educational Measurement, 2000
Studied the appropriateness and implications of incorporating a testlet definition into estimation procedures for the conditional standard error of measurement (SEM) for tests composed of testlets. Simulation results for several methods show that an item-based method using a generalizability theory model provided good estimates of the…
Descriptors: Comparative Analysis, Error of Measurement, Estimation (Mathematics), Generalizability Theory