Showing all 12 results
Peer reviewed
Sirotnik, Kenneth; Wellington, Roger – Journal of Educational Measurement, 1977
A single conceptual and theoretical framework for sampling any configuration of data from one or more population matrices is presented; it integrates past designs and discusses implications for more general designs. The theory is based upon a generalization of the generalized symmetric mean approach for single matrix samples. (Author/CTM)
Descriptors: Analysis of Variance, Data Analysis, Item Sampling, Mathematical Models
Peer reviewed
Gressard, Risa P.; Loyd, Brenda H. – Journal of Educational Measurement, 1991
A Monte Carlo study, which simulated 10,000 examinees' responses to four tests, investigated the effect of item stratification on parameter estimation in multiple matrix sampling of achievement data. Practical multiple matrix sampling is based on item stratification by item discrimination and a sampling plan with moderate number of subtests. (SLD)
Descriptors: Achievement Tests, Comparative Testing, Computer Simulation, Estimation (Mathematics)
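The design Gressard and Loyd studied can be sketched in miniature: items are ranked by discrimination and dealt round-robin into subtests so each subtest spans the full discrimination range, and each simulated examinee answers only one subtest. This is a minimal illustration, not their study's code; the 2PL response model, item counts, and parameter distributions below are all hypothetical.

```python
import math
import random

random.seed(0)

# Hypothetical 2PL item bank: (discrimination a, difficulty b)
items = [(random.uniform(0.5, 2.0), random.gauss(0, 1)) for _ in range(40)]

# Stratify by discrimination, then deal round-robin into subtests so
# every subtest spans the full range of discriminations.
n_subtests = 4
ranked = sorted(range(len(items)), key=lambda i: items[i][0])
subtests = [ranked[k::n_subtests] for k in range(n_subtests)]

def p_correct(theta, a, b):
    # Two-parameter logistic item response function
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Each simulated examinee is assigned one subtest at random (matrix sampling)
thetas = [random.gauss(0, 1) for _ in range(10000)]
hits = [0] * len(items)
tries = [0] * len(items)
for theta in thetas:
    for i in random.choice(subtests):
        a, b = items[i]
        tries[i] += 1
        hits[i] += random.random() < p_correct(theta, a, b)

# Population mean total score estimated from the matrix sample:
# sum of per-item proportions correct across all 40 items.
est_mean_total = sum(h / t for h, t in zip(hits, tries))
print(round(est_mean_total, 2))
```

Because the subtests are balanced on discrimination, each one supports a comparable estimate of the group-level score distribution even though no examinee sees the whole test.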
Peer reviewed
Levin, Joel R. – Journal of Educational Measurement, 1975
A set procedure developed in this study is useful for determining sample size, based on the specification of linear contrasts involving certain of the treatments. (Author/DEP)
Descriptors: Analysis of Variance, Comparative Analysis, Mathematical Models, Measurement Techniques
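The contrast-based approach to sample size can be illustrated with the standard normal-approximation formula: for a contrast psi = sum(c_j * mu_j) with common within-group variance sigma^2 and per-group size n, the estimate has variance sigma^2 * sum(c_j^2) / n, which yields n = (z_alpha + z_beta)^2 * sigma^2 * sum(c_j^2) / psi^2. This is a generic sketch of that textbook formula, not Levin's specific procedure; the example contrast and effect size are hypothetical.

```python
import math

def n_per_group(psi, sigma, contrast, ):
    """Per-group n for a two-sided test of H0: sum(c_j * mu_j) = 0
    at alpha = .05 with power .80, normal approximation."""
    z_a = 1.959964  # z for alpha/2 = .025
    z_b = 0.841621  # z for power = .80
    need = (z_a + z_b) ** 2 * sigma ** 2 * sum(c * c for c in contrast) / psi ** 2
    return math.ceil(need)

# e.g. comparing one treatment with the average of two controls
print(n_per_group(psi=0.5, sigma=1.0, contrast=[1, -0.5, -0.5]))
```

Note how the contrast weights enter through sum(c_j^2): a treatment-versus-average-of-controls contrast (1, -0.5, -0.5) needs fewer subjects per group than a pairwise contrast (1, -1, 0) of the same magnitude.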
Peer reviewed
Whitely, Susan E. – Journal of Educational Measurement, 1977
A debate concerning specific issues and the general usefulness of the Rasch latent trait test model is continued. Methods of estimation, necessary sample size, and the applicability of the model are discussed. (JKS)
Descriptors: Error of Measurement, Item Analysis, Mathematical Models, Measurement
Peer reviewed
Wright, Benjamin D. – Journal of Educational Measurement, 1977
Statements made in a previous article in this journal concerning the Rasch latent trait test model are questioned. Methods of estimation, necessary sample sizes, several formulas, and the general usefulness of the Rasch model are discussed. (JKS)
Descriptors: Computers, Error of Measurement, Item Analysis, Mathematical Models
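The model at the center of the Whitely-Wright exchange is the one-parameter logistic, in which the probability of a correct response depends only on the difference between person ability and item difficulty. The sketch below simulates Rasch data and computes crude log-odds difficulty estimates; these are only a first approximation (in the spirit of Wright's PROX starting values, and shrunken toward zero relative to the true logits), not the conditional or joint maximum likelihood methods the articles debate. Item parameters and sample size are hypothetical.

```python
import math
import random

random.seed(1)

def rasch_p(theta, b):
    # Rasch model: P(correct) = logistic(ability - difficulty)
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Simulate responses to a hypothetical 5-item test
true_b = [-1.0, -0.5, 0.0, 0.5, 1.0]
thetas = [random.gauss(0, 1) for _ in range(5000)]
data = [[random.random() < rasch_p(t, b) for b in true_b] for t in thetas]

# Log-odds of incorrect response per item, centred at zero: a crude
# difficulty estimate that preserves item ordering but not scale.
p = [sum(row[i] for row in data) / len(data) for i in range(len(true_b))]
b_hat = [math.log((1 - pi) / pi) for pi in p]
mean_b = sum(b_hat) / len(b_hat)
b_hat = [b - mean_b for b in b_hat]
print([round(b, 2) for b in b_hat])
```

The estimates recover the difficulty ordering, but their spread is compressed because the log-odds are taken over a heterogeneous ability distribution; correcting that compression is exactly where the estimation methods (and sample-size requirements) under debate come in.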
Peer reviewed
Kolen, Michael J.; Whitney, Douglas R. – Journal of Educational Measurement, 1982
The adequacy of equipercentile, linear, one-parameter (Rasch), and three-parameter logistic item-response theory procedures for equating 12 forms of five tests of general educational development was compared. Results indicated that equating method adequacy depends on a variety of factors such as test characteristics, equating design, and sample…
Descriptors: Achievement Tests, Comparative Analysis, Equated Scores, Equivalency Tests
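Of the methods Kolen and Whitney compared, equipercentile equating is the most direct to sketch: a form-X score is mapped to the form-Y score that has the same percentile rank. The sketch below uses hypothetical score distributions (form Y deliberately harder) and a crude empirical-quantile lookup; operational equating would smooth the distributions and interpolate.

```python
import bisect
import random

random.seed(2)

# Hypothetical score samples on two test forms; form Y runs harder (lower scores)
x_scores = sorted(random.gauss(30, 6) for _ in range(2000))
y_scores = sorted(random.gauss(25, 5) for _ in range(2000))

def percentile_rank(scores, value):
    # Fraction of the sorted sample below the given value
    return bisect.bisect_left(scores, value) / len(scores)

def equipercentile(x):
    # Map a form-X score to the form-Y score at the same percentile rank
    pr = percentile_rank(x_scores, x)
    idx = min(int(pr * len(y_scores)), len(y_scores) - 1)
    return y_scores[idx]

# A score near form X's mean should map near form Y's mean
print(round(equipercentile(30.0), 1))
```

The dependence on sample characteristics that the abstract notes is visible here: with small or unrepresentative samples the empirical quantiles are noisy, and the equating function inherits that noise directly.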
Peer reviewed
Slinde, Jeffrey A.; Linn, Robert L. – Journal of Educational Measurement, 1979
The Rasch model was used to equate reading comprehension tests of widely different difficulty for three groups of fifth grade students of widely different ability. Under these extreme circumstances, the Rasch model equating was unsatisfactory. (Author/CTM)
Descriptors: Academic Ability, Bias, Difficulty Level, Equated Scores
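The property Slinde and Linn put to the test is that, under the Rasch model, two forms are equated by a single translation constant: the difference in the difficulty scale between two calibrations, usually estimated as the mean difficulty difference of common anchor items. A minimal sketch, with hypothetical anchor difficulties:

```python
# Anchor-item difficulties from two separate Rasch calibrations (hypothetical)
b_form_a = {"i1": -0.4, "i2": 0.1, "i3": 0.8}
b_form_b = {"i1": 0.2, "i2": 0.7, "i3": 1.3}

# Rasch equating constant: mean shift of the anchors between calibrations
shift = sum(b_form_b[i] - b_form_a[i] for i in b_form_a) / len(b_form_a)

def theta_on_a(theta_b):
    # Place a form-B ability estimate on the form-A scale
    return theta_b - shift

print(round(shift, 2))
```

If the model held exactly, this one constant would suffice for examinees of any ability; the abstract's finding is that with tests of widely different difficulty and groups of widely different ability, the required shift was not invariant, so this simple equating broke down.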
Peer reviewed
Willms, J. Douglas; Raudenbush, Stephen W. – Journal of Educational Measurement, 1989
A general longitudinal model is presented for estimating school effects and their stability. The model, capable of separating true changes from sampling and measurement error, controls statistically for effects of factors exogenous to the school system. The model is illustrated with data from large cohorts of students in Scotland. (SLD)
Descriptors: Elementary Secondary Education, Equations (Mathematics), Error of Measurement, Estimation (Mathematics)
Peer reviewed
Wainer, Howard – Journal of Educational Measurement, 1986
Describes recent research attempts to draw inferences about the relative standing of the states on the basis of mean SAT scores. This paper identifies five serious errors that call into question the validity of such inferences. Some plausible ways to avoid the errors are described. (Author/LMO)
Descriptors: College Entrance Examinations, Equated Scores, Mathematical Models, Predictor Variables
Peer reviewed
Gustafsson, Jan-Eric – Journal of Educational Measurement, 1979
Computer generated data are used to show that Slinde and Linn's criticism of the usefulness of the Rasch model for equating (EJ 189 585) may have been the result of an artifact produced by the manner in which the samples were chosen in their study. (CTM)
Descriptors: Achievement Tests, Bias, College Entrance Examinations, Equated Scores
Peer reviewed
Shavelson, Richard J.; And Others – Journal of Educational Measurement, 1993
Evidence is presented on the generalizability and convergent validity of performance assessments using data from six studies of student achievement that sampled a wide range of measurement facets and methods. Results at individual and school levels indicate that task-sampling variability is the major source of measurement error. (SLD)
Descriptors: Academic Achievement, Educational Assessment, Error of Measurement, Generalizability Theory
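The generalizability analysis behind the Shavelson finding rests on decomposing score variance in a persons-by-tasks design into person, task, and residual components via the usual random-effects ANOVA estimators. The sketch below simulates a small crossed design with hypothetical variance components chosen so that task sampling dominates, mirroring the article's conclusion; it is not the authors' analysis.

```python
import random

random.seed(3)

# Hypothetical persons x tasks design where task sampling dominates error
n_p, n_t = 50, 12
person_eff = [random.gauss(0, 1.0) for _ in range(n_p)]
task_eff = [random.gauss(0, 2.0) for _ in range(n_t)]  # large task variability
x = [[person_eff[p] + task_eff[t] + random.gauss(0, 0.5)
      for t in range(n_t)] for p in range(n_p)]

grand = sum(map(sum, x)) / (n_p * n_t)
pm = [sum(row) / n_t for row in x]                                  # person means
tm = [sum(x[p][t] for p in range(n_p)) / n_p for t in range(n_t)]   # task means

# Mean squares for the two-way crossed design without replication
ms_p = n_t * sum((m - grand) ** 2 for m in pm) / (n_p - 1)
ms_t = n_p * sum((m - grand) ** 2 for m in tm) / (n_t - 1)
ms_res = sum((x[p][t] - pm[p] - tm[t] + grand) ** 2
             for p in range(n_p) for t in range(n_t)) / ((n_p - 1) * (n_t - 1))

# ANOVA estimators of the variance components (p x t random design)
var_res = ms_res
var_p = (ms_p - ms_res) / n_t
var_t = (ms_t - ms_res) / n_p
print(round(var_p, 2), round(var_t, 2), round(var_res, 2))
```

When the task component dominates, as here, score generalizability improves far more by sampling additional tasks than by adding raters or occasions, which is the practical upshot the abstract reports.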
Peer reviewed
Mislevy, Robert J.; And Others – Journal of Educational Measurement, 1992
Concepts behind plausible values in estimating population characteristics from sparse matrix samples of item responses are discussed. The use of marginal analyses is described in the context of the National Assessment of Educational Progress, and the approach is illustrated with Scholastic Aptitude Test data for 9,075 high school seniors. (SLD)
Descriptors: College Entrance Examinations, Educational Assessment, Equations (Mathematics), Estimation (Mathematics)
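The plausible-values idea Mislevy and colleagues describe can be sketched as follows: because each examinee answers only a sparse item block, individual abilities are poorly determined, so population characteristics are estimated by drawing several values from each examinee's posterior and analyzing the draws. This is a bare grid-posterior sketch with a N(0,1) prior and hypothetical Rasch items; operational NAEP draws additionally condition on background variables through a latent regression, which is omitted here.

```python
import math
import random
import statistics

random.seed(4)

def rasch_p(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))

b_items = [-1.0, 0.0, 1.0]  # one sparse 3-item block, as in matrix sampling

def draw_plausible_values(responses, n_pv=5):
    # Posterior over a coarse theta grid: N(0,1) prior x Rasch likelihood
    grid = [g / 10 for g in range(-40, 41)]
    post = []
    for th in grid:
        w = math.exp(-0.5 * th * th)
        for r, b in zip(responses, b_items):
            p = rasch_p(th, b)
            w *= p if r else (1 - p)
        post.append(w)
    return random.choices(grid, weights=post, k=n_pv)

# Simulate a population and estimate its mean and variance from the draws
thetas = [random.gauss(0, 1) for _ in range(2000)]
data = [[random.random() < rasch_p(t, b) for b in b_items] for t in thetas]
pvs = [draw_plausible_values(r) for r in data]

est_mean = sum(sum(pv) / len(pv) for pv in pvs) / len(pvs)
pv_var = statistics.pvariance([v for pv in pvs for v in pv])
print(round(est_mean, 2), round(pv_var, 2))
```

The key property is visible in the variance: point estimates (posterior means) would be shrunken and understate the population spread, while the plausible values, which add back the posterior uncertainty, recover the marginal variance, which is why marginal analyses of the draws yield consistent population estimates.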