Showing 2,026 to 2,040 of 3,311 results
Peer reviewed
Yuan, Ke-Hai; Bentler, Peter M. – Psychometrika, 2000
Studied whether the standard z-statistic that evaluates whether a factor loading is statistically necessary is correctly applied in such situations and more generally when the variables being analyzed are arbitrarily rescaled. An example illustrates that neither the factor loading estimates nor the standard error estimates possess scale…
Descriptors: Error of Measurement, Estimation (Mathematics), Mathematical Models, Maximum Likelihood Statistics
Peer reviewed
Dustmann, Christian; van Soest, Arthur – Industrial and Labor Relations Review, 2002
Analysis of panel data on immigrants to Germany, 1984-94, focused on the relationship between language proficiency and productivity. Results show how time-varying measurement errors can lead to downward bias in the estimated effect of fluency on earnings. Language proficiency is thus far more important than previous studies have suggested. (Contains 30 references.) (SK)
Descriptors: Error of Measurement, Foreign Countries, Immigrants, Language Proficiency
Peer reviewed
Liou, Michelle; Cheng, Philip E.; Johnson, Eugene G. – Applied Psychological Measurement, 1997
Derived simplified equations to compute the standard error of the frequency estimation method for equating score distributions that are continuized using a uniform or Gaussian kernel function. Results from two empirical studies indicate that these equations work reasonably well for moderate size samples. (SLD)
Descriptors: Computation, Equated Scores, Error of Measurement, Estimation (Mathematics)
Peer reviewed
Raykov, Tenko; Penev, Spiridon – Multivariate Behavioral Research, 2006
Unlike a substantial part of reliability literature in the past, this article is concerned with weighted combinations of a given set of congeneric measures with uncorrelated errors. The relationship between maximal coefficient alpha and maximal reliability for such composites is initially dealt with, and it is shown that the former is a lower…
Descriptors: Error of Measurement, Intervals, Reliability, Multivariate Analysis
Peer reviewed
Ruggiero, John – Economics of Education Review, 2006
Data Envelopment Analysis has become a popular tool for evaluating the efficiency of decision making units. The nonparametric approach has been widely applied to educational production. The approach is, however, deterministic and leads to biased estimates of performance in the presence of measurement error. Numerous simulation studies confirm the…
Descriptors: Data Analysis, Decision Making, Efficiency, Productivity
Peer reviewed
Olsson, Henrik; Wennerholm, Pia; Lyxzen, Urban – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2004
J. P. Minda and J. D. Smith (2001) showed that a prototype model outperforms an exemplar model, especially with larger categories or categories containing more complex stimuli. R. M. Nosofsky and S. R. Zaki (2002) showed that an exemplar model with a response-scaling mechanism outperforms a prototype model. The authors of the current study…
Descriptors: Error of Measurement, Stimuli, Models, Classification
Peer reviewed
Kowalchuk, Rhonda K.; Keselman, H. J.; Algina, James; Wolfinger, Russell D. – Educational and Psychological Measurement, 2004
One approach to the analysis of repeated measures data allows researchers to model the covariance structure of their data rather than presume a certain structure, as is the case with conventional univariate and multivariate test statistics. This mixed-model approach, available through SAS PROC MIXED, was compared to a Welch-James type statistic.…
Descriptors: Interaction, Sample Size, Statistical Analysis, Evaluation Methods
Peer reviewed
McDonald, Roderick P. – Structural Equation Modeling, 2004
Improper structures arising from the estimation of parameters in structural equation models (SEMs) are commonly an indication that the model is incorrectly specified. The use of boundary solutions cannot in general be recommended. Partly on the basis of theory given by Van Driel, and partly by example, suggestions are made for using the data as…
Descriptors: Structural Equation Models, Evaluation Methods, Error of Measurement, Evaluation Research
Peer reviewed
Becker, Gilbert – Psychological Methods, 2000
This article introduces a procedure for estimating reliability in which equivalent halves of a given test are systematically created and then administered a few days apart, so that transient error can be included in the error calculus. The procedure not only estimates complete reliability (taking into account both specific-factor error and…
Descriptors: Reliability, Computation, Error of Measurement, College Students
Peer reviewed
Raju, Nambury S.; Lezotte, Daniel V.; Fearing, Benjamin K.; Oshima, T. C. – Applied Psychological Measurement, 2006
This note describes a procedure for estimating the range restriction component used in correcting correlations for unreliability and range restriction when an estimate of the reliability of a predictor is not readily available for the unrestricted sample. This procedure is illustrated with a few examples. (Contains 1 table.)
Descriptors: Correlation, Reliability, Predictor Variables, Error Correction
Peer reviewed
Bock, R. Darrell; Brennan, Robert L.; Muraki, Eiji – Applied Psychological Measurement, 2002
In assessment programs where scores are reported for individual examinees, it is desirable to have responses to performance exercises graded by more than one rater. If more than one item on each test form is so graded, it is also desirable that different raters grade the responses of any one examinee. This gives rise to sampling designs in which…
Descriptors: Generalizability Theory, Test Items, Item Response Theory, Error of Measurement
Peer reviewed
Hoyt, William T. – Journal of Counseling Psychology, 2002
Rater bias has long been considered a source of error in observer ratings but has been ignored by process researchers using participant ratings. In particular, rater variance, or differences in generalized favorable or unfavorable perceptions of others, represents a neglected source of error in studies using participant ratings. The author…
Descriptors: Psychotherapy, Generalizability Theory, Research Methodology, Error of Measurement
Custer, Michael; Sharairi, Sid; Yamazaki, Kenji; Signatur, Diane; Swift, David; Frey, Sharon – Online Submission, 2008
The present study compared item and ability invariance as well as model-data fit between the one-parameter (1PL) and three-parameter (3PL) Item Response Theory (IRT) models utilizing real data across five grades; second through sixth as well as simulated data at second, fourth and sixth grade. At each grade, the 1PL and 3PL IRT models were run…
Descriptors: Error of Measurement, Item Response Theory, Models, Goodness of Fit
Peer reviewed
Pustjens, Heidi; Van de gaer, Eva; Van Damme, Jan; Onghena, Patrick – School Effectiveness and School Improvement, 2008
The major aim of educational effectiveness research is to examine and explain school, class, and teacher differences with respect to relevant educational criteria. Until now, the large majority of studies have used language and mathematics scores as the criterion. In the present study, the educational track students choose at the start of…
Descriptors: Catholic Schools, Academic Achievement, Secondary Education, Instructional Effectiveness
Peer reviewed
Bollen, Kenneth A. – Psychological Methods, 2007
R. D. Howell, E. Breivik, and J. B. Wilcox (2007) have argued that causal (formative) indicators are inherently subject to interpretational confounding. That is, they have argued that using causal (formative) indicators leads the empirical meaning of a latent variable to be other than that assigned to it by a researcher. Their critique of causal…
Descriptors: Researchers, Structural Equation Models, Formative Evaluation, Transformative Learning