Showing 151 to 165 of 176 results
Peer reviewed
Jansen, Margo G. H. – Journal of Educational Statistics, 1986
In this paper a Bayesian procedure is developed for the simultaneous estimation of the reading ability and text difficulty parameters that, under the multiplicative Poisson model, are assumed to govern reading errors. According to several criteria, the Bayesian estimates are better than comparable maximum likelihood estimates. (Author/JAZ)
Descriptors: Achievement Tests, Bayesian Statistics, Comparative Analysis, Difficulty Level
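For reference, the multiplicative Poisson model named in this entry is usually written along the following lines; the notation is chosen here for illustration and is not taken from the paper:

    x_{vi} \sim \mathrm{Poisson}(\lambda_{vi}), \qquad \lambda_{vi} = \theta_v \, \delta_i ,

where x_{vi} is the number of errors subject v makes on text i, \theta_v is a subject error-rate parameter (inversely related to reading ability), and \delta_i is a text difficulty parameter. The Bayesian procedure places priors on \theta_v and \delta_i rather than maximizing the joint likelihood directly.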
Peer reviewed
Lord, Frederic M. – Journal of Educational Measurement, 1986
Advantages and disadvantages of joint maximum likelihood, marginal maximum likelihood, and Bayesian methods of parameter estimation in item response theory are discussed and compared. (Author)
Descriptors: Bayesian Statistics, Error Patterns, Estimation (Mathematics), Higher Education
Peer reviewed
Jansen, Margo G. H.; van Duijn, Marijtje A. J. – Psychometrika, 1992
A model developed by G. Rasch that assumes scores on some attainment tests can be realizations of a Poisson process is explained and expanded by assuming a prior distribution, with fixed but unknown parameters, for the subject parameters. How additional between-subject and within-subject factors can be incorporated is discussed. (SLD)
Descriptors: Achievement Tests, Bayesian Statistics, Equations (Mathematics), Estimation (Mathematics)
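A concrete form of the extension described in this entry, offered only as a hedged illustration, takes the subject parameters of the Poisson model to follow a gamma prior with fixed but unknown shape and rate; the counts are then marginally negative binomial, and between-subject factors can enter through the prior's parameters:

    \theta_v \sim \mathrm{Gamma}(\alpha, \beta) \;\Rightarrow\; x_{vi} \text{ is negative binomial with mean } \alpha \, \delta_i / \beta .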
Peer reviewed
Gifford, Janice A.; Swaminathan, Hariharan – Applied Psychological Measurement, 1990
The effects of priors and amount of bias in the Bayesian approach to the estimation problem in item response models are examined using simulation studies. Different specifications of prior information have only modest effects on Bayesian estimates, which are less biased than joint maximum likelihood estimates for small samples. (TJH)
Descriptors: Bayesian Statistics, Comparative Analysis, Computer Simulation, Estimation (Mathematics)
Veerkamp, Wim J. J.; Berger, Martijn P. F. – 1994
In this study some alternative item selection criteria for adaptive testing are proposed. These criteria take into account the uncertainty of the ability estimates. A general weighted information criterion is suggested of which the usual maximum information criterion and the suggested alternative criteria are special cases. A simulation study was…
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
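A minimal sketch of the two kinds of criteria contrasted in this entry, assuming a two-parameter logistic item pool; the function names and the likelihood weighting below are illustrative assumptions, not taken from the report. The usual maximum information criterion picks the item with the largest Fisher information at the current point estimate of ability, while a weighted criterion integrates each item's information over ability, weighted here by the current likelihood.

    import numpy as np

    def p_2pl(theta, a, b):
        """Probability of a correct response under the 2PL model."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def item_information(theta, a, b):
        """Fisher information of a 2PL item at ability theta: a^2 * P * (1 - P)."""
        p = p_2pl(theta, a, b)
        return a ** 2 * p * (1.0 - p)

    def max_information_item(theta_hat, a, b, administered):
        """Usual criterion: the unused item with the largest information at the point estimate."""
        info = item_information(theta_hat, a, b)
        info[administered] = -np.inf          # never reselect an administered item
        return int(np.argmax(info))

    def weighted_information_item(a, b, responses, administered, grid=np.linspace(-4, 4, 81)):
        """Weighted criterion: integrate each item's information over ability,
        weighted here by the likelihood of the responses observed so far."""
        like = np.ones_like(grid)
        for j, x in zip(administered, responses):
            p = p_2pl(grid, a[j], b[j])
            like *= p ** x * (1.0 - p) ** (1 - x)
        crit = np.array([np.trapz(item_information(grid, a[j], b[j]) * like, grid)
                         for j in range(len(a))])
        crit[administered] = -np.inf
        return int(np.argmax(crit))

Only the weight function distinguishes the two rules; the maximum information criterion corresponds to concentrating all of the weight at the point estimate, which is how both fall out of the general weighted criterion the report proposes.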
Yen, Wendy M. – 1982
The three-parameter logistic model discussed here was used by CTB/McGraw-Hill in the development of the Comprehensive Tests of Basic Skills, Form U (CTBS/U) and the Test of Cognitive Skills (TCS), published in the fall of 1981. The development, standardization, and scoring of the tests are described, particularly as these procedures were influenced by…
Descriptors: Achievement Tests, Bayesian Statistics, Cognitive Processes, Data Collection
Vale, C. David; And Others – 1981
A simulation study to determine appropriate linking methods for adaptive testing items was designed. Three basic data sets for responses were created. These were randomly sampled, systematically sampled, and selected data sets. The evaluative criteria used were fidelity of parameter estimation, asymptotic ability estimates, root-mean-square error…
Descriptors: Adaptive Testing, Aptitude Tests, Armed Forces, Bayesian Statistics
Rabinowitz, Stanley N.; Pruzek, Robert – 1978
Despite advances in common factor analysis, a review of 89 studies published in four selected journals between 1963 and 1976 indicated that behavioral scientists preferred principal components analysis, followed by varimax or orthogonal rotation. Resultant row sums of squares of factor matrices from principal component analyses of real data sets…
Descriptors: Bayesian Statistics, Comparative Analysis, Educational Research, Factor Analysis
Tobi, Hilde – 1990
This paper reviews the literature about item response models for the subject level and aggregated level (group level). Group-level item response models (IRMs) are used in the United States in large-scale assessment programs such as the National Assessment of Educational Progress and the California Assessment Program. In the Netherlands, these…
Descriptors: Bayesian Statistics, Comparative Analysis, Educational Assessment, Elementary Secondary Education
Jannarone, Robert J. – 1986
A variety of locally dependent models are introduced having individual difference parameters that may be interpreted as reflecting effective learning abilities. One version is a univariate extension of the Rasch model with a Markov property: the probability that a given individual will pass an item depends on previous items only through the…
Descriptors: Academic Aptitude, Bayesian Statistics, Cognitive Ability, Estimation (Mathematics)
Mislevy, Robert J. – 1987
Standard procedures for estimating item parameters in Item Response Theory models make no use of auxiliary information about test items, such as their format or content, or the skills they require for solution. This paper describes a framework for exploiting this information, thereby enhancing the precision and stability of item parameter…
Descriptors: Bayesian Statistics, Difficulty Level, Estimation (Mathematics), Intermediate Grades
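One way to picture the framework described in this entry, offered as a hedged illustration rather than as the paper's own specification, is a regression-style prior in which coded item features (format, content, required skills) predict the item parameters:

    \beta_j \sim N(\mathbf{x}_j^{\top}\boldsymbol{\gamma}, \; \sigma^2),

where \beta_j is an item parameter such as difficulty, \mathbf{x}_j codes the item's features, and \boldsymbol{\gamma} is estimated jointly with the item parameters; each item's estimate is shrunk toward what similar items suggest, which is where the added precision and stability come from.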
Peer reviewed
Segall, Daniel O. – Psychometrika, 1996
Maximum likelihood and Bayesian procedures are presented for item selection and scoring of multidimensional adaptive tests. A demonstration with simulated response data illustrates that multidimensional adaptive testing can provide equal or higher reliabilities with fewer items than are required in one-dimensional adaptive testing. (SLD)
Descriptors: Adaptive Testing, Bayesian Statistics, Computer Assisted Testing, Equations (Mathematics)
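A minimal sketch of one Bayesian item selection rule in the spirit of this entry, assuming a multidimensional two-parameter logistic pool and a multivariate normal prior; the names and the determinant rule below are illustrative assumptions, not a transcription of the article. The rule picks the candidate item whose information, added to the current posterior information matrix, gives the largest determinant.

    import numpy as np

    def p_m2pl(theta, a, d):
        """Multidimensional 2PL: P(correct) = logistic(a . theta + d)."""
        return 1.0 / (1.0 + np.exp(-(a @ theta + d)))

    def item_info_matrix(theta, a, d):
        """Information contribution of one item at theta: P * (1 - P) * a a'."""
        p = p_m2pl(theta, a, d)
        return p * (1.0 - p) * np.outer(a, a)

    def select_item(theta_hat, A, d, prior_prec, administered):
        """Pick the unused item maximizing det(posterior information + candidate info)."""
        base = prior_prec + sum(item_info_matrix(theta_hat, A[j], d[j]) for j in administered)
        best_j, best_det = None, -np.inf
        for j in range(len(d)):
            if j in administered:
                continue
            det = np.linalg.det(base + item_info_matrix(theta_hat, A[j], d[j]))
            if det > best_det:
                best_j, best_det = j, det
        return best_j

    # Example: two ability dimensions, a ten-item pool, standard normal prior.
    rng = np.random.default_rng(0)
    A = rng.uniform(0.5, 2.0, size=(10, 2))   # discrimination vectors
    d = rng.uniform(-1.0, 1.0, size=10)       # intercepts
    next_item = select_item(np.zeros(2), A, d, np.eye(2), administered=[3, 7])

The determinant acts as a single summary of posterior precision across all ability dimensions, so each selected item can be informative for several abilities at once, which is one reading of how fewer items can match the reliability of separate one-dimensional adaptive tests.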
Buhr, Dianne C.; Algina, James – 1986
The focus of this study is on the estimation procedures implemented in BILOG, a computer program. One purpose is to compare the item parameter estimates produced by various procedures available in BILOG. Four different models are used: the one, two, and three parameter model and a three parameter model with common guessing parameters. The results…
Descriptors: Ability, Bayesian Statistics, Comparative Analysis, Computer Oriented Programs
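For reference, the four models named in this entry differ only in which item parameters are free in the three-parameter logistic form (written here schematically):

    P_j(\theta) = c_j + \frac{1 - c_j}{1 + \exp[-a_j(\theta - b_j)]},

with a_j held constant across items and c_j = 0 for the one-parameter model, c_j = 0 for the two-parameter model, all three parameters free for the three-parameter model, and c_j = c (a single guessing value shared by all items) for the common-guessing variant.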
Peer reviewed
Li, Yuan H.; Lissitz, Robert W. – Journal of Educational Measurement, 2004
The analytically derived asymptotic standard errors (SEs) of maximum likelihood (ML) item estimates can be approximated by a mathematical function without examinees' responses to test items, and the empirically determined SEs of marginal maximum likelihood estimation (MMLE)/Bayesian item estimates can be obtained when the same set of items is…
Descriptors: Test Items, Computation, Item Response Theory, Error of Measurement
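In general terms, the "mathematical function without examinees' responses" referred to in this entry is the expected (Fisher) information: for N examinees drawn from an assumed ability distribution g(\theta), and ignoring covariances among parameter estimates, the asymptotic standard error of an item parameter estimate is approximately (a schematic statement, with notation chosen here)

    \mathrm{SE}(\hat{\beta}_j) \approx \left[\, N \int I_{\beta_j}(\theta)\, g(\theta)\, d\theta \,\right]^{-1/2},

so only the item parameters and the assumed ability distribution enter the calculation.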
Mislevy, Robert J. – Journal of Educational Statistics, 1986
Recent work in factor analysis of categorical variables is reviewed, emphasizing a generalized least squares solution and a maximum likelihood approach. A common factor model for dichotomous items is introduced, and the estimation of factor loadings from matrices of tetrachoric correlations is discussed. (LMO)
Descriptors: Bayesian Statistics, Estimation (Mathematics), Factor Analysis, Goodness of Fit
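In the usual underlying-variable form, the common factor model for dichotomous items reviewed in this entry can be summarized as (notation illustrative):

    x_j^* = \boldsymbol{\lambda}_j^{\top}\boldsymbol{\theta} + \varepsilon_j, \qquad x_j = 1 \text{ iff } x_j^* > \tau_j,

so the matrix of tetrachoric correlations among the items estimates \boldsymbol{\Lambda}\boldsymbol{\Lambda}^{\top} + \boldsymbol{\Psi} (with unit diagonal), and factor loadings can be fitted to it by generalized least squares or maximum likelihood.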