Showing 1 to 15 of 42 results
Peer reviewed
Liou, Michelle; Cheng, Philip E. – Psychometrika, 1995
Different data imputation techniques that are useful for equipercentile equating are discussed, and empirical data are used to evaluate the accuracy of these techniques as compared with chained equipercentile equating. The kernel estimator, the EM algorithm, the EB model, and the iterative moment estimator are considered. (SLD)
Descriptors: Equated Scores, Equations (Mathematics), Estimation (Mathematics), Test Format
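The equipercentile equating that these imputation techniques support can be illustrated with a minimal sketch. This is a generic illustration with hypothetical data, not the authors' implementation; the midpoint percentile-rank convention shown is one common choice.

```python
import numpy as np

def equipercentile_equate(scores_x, scores_y, x):
    """Map a Form X score to the Form Y scale by matching percentile ranks.

    scores_x, scores_y: observed score samples on each form.
    x: a Form X score to convert.
    """
    # Percentile rank of x in the Form X distribution (midpoint convention).
    px = (np.sum(scores_x < x) + 0.5 * np.sum(scores_x == x)) / len(scores_x)
    # Form Y score at that same percentile rank.
    return float(np.quantile(scores_y, px))
```

With identical score distributions on the two forms, the function returns approximately the input score, as an equating should.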
Peer reviewed
Feldt, Leonard S. – Applied Measurement in Education, 2002
Considers the degree of bias in testlet-based alpha (internal consistency reliability) through hypothetical examples and real test data from four tests of the Iowa Tests of Basic Skills. Presents a simple formula for computing a testlet-based congeneric coefficient. (SLD)
Descriptors: Estimation (Mathematics), Reliability, Statistical Bias, Test Format
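Feldt's congeneric coefficient formula is not reproduced in the abstract, but the testlet-based alpha whose bias it addresses can be sketched: coefficient alpha computed with testlet totals, rather than single items, as the units of analysis. A minimal sketch with hypothetical data:

```python
import numpy as np

def coefficient_alpha(parts):
    """Coefficient alpha from an (examinees x parts) matrix of part scores.

    For a testlet-based alpha, each column is a testlet total rather than
    a single item score.
    """
    k = parts.shape[1]
    part_vars = parts.var(axis=0, ddof=1)       # variance of each part
    total_var = parts.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1.0 - part_vars.sum() / total_var)
```

When the parts are perfectly correlated, the coefficient reaches its ceiling of 1.0.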
Peer reviewed
Pommerich, Mary; Nicewander, W. Alan; Hanson, Bradley A. – Journal of Educational Measurement, 1999
Studied whether a group's average percent correct in a content domain could be accurately estimated for groups taking a single test form and not the entire domain of items. Evaluated six Item Response Theory-based domain score estimation methods through simulation and concluded they performed better than observed score on the form taken. (SLD)
Descriptors: Estimation (Mathematics), Groups, Item Response Theory, Scores
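The quantity being estimated here, an IRT-based expected domain score, can be sketched for the two-parameter logistic model: the expected proportion correct over the domain's items at a given ability. Item parameters below are hypothetical, and the paper's six estimation methods are not reproduced.

```python
import math

def expected_domain_score(theta, items, D=1.7):
    """Expected proportion correct over a domain of 2PL items at ability theta.

    items: list of (a, b) discrimination/difficulty pairs.
    """
    probs = [1.0 / (1.0 + math.exp(-D * a * (theta - b))) for a, b in items]
    return sum(probs) / len(probs)
```

At an ability equal to every item's difficulty, each response probability is 0.5, so the expected domain score is 0.5.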
Li, Yuan H.; Lissitz, Robert W.; Yang, Yu Nu – 1999
Recent years have seen growing use of tests with mixed item formats, e.g., tests containing both dichotomously scored and polytomously scored items. A characteristic curve method (CCM) that matches two test characteristic curves to place these mixed-format items on the same metric is described and evaluated in this paper under a common-item…
Descriptors: Equated Scores, Estimation (Mathematics), Item Response Theory, Test Format
Wightman, Linda F.; Wightman, Lawrence E. – 1988
Section Pre-Equating (SPE) is a method used to equate test forms that consist of multiple separately timed sections. SPE does not require examinees to take two complete forms of the test. Instead, all of the old form and one or two sections of the new form are administered to each examinee, and missing data techniques are employed to estimate the…
Descriptors: Comparative Analysis, Correlation, Equated Scores, Estimation (Mathematics)
Samejima, Fumiko – 1983
A general model for the homogeneous case of the continuous response is proposed. The model expands and generalizes the one the author proposed in 1974, which dealt with the open response situation. The generalized model handles the closed response situation and includes the model for the open response…
Descriptors: Estimation (Mathematics), Latent Trait Theory, Mathematical Models, Probability
Pommerich, Mary; Nicewander, W. Alan – 1998
A simulation study was performed to determine whether a group's average percent correct in a content domain could be accurately estimated for groups taking a single test form and not the entire domain of items. Six Item Response Theory (IRT)-based domain score estimation methods were evaluated, under conditions of few items per content area per…
Descriptors: Ability, Estimation (Mathematics), Groups, Item Response Theory
Peer reviewed
Qualls, Audrey L. – Applied Measurement in Education, 1995
Classically parallel, tau-equivalently parallel, and congenerically parallel models representing various degrees of part-test parallelism and their appropriateness for tests composed of multiple item formats are discussed. An appropriate reliability estimate for a test with multiple item formats is presented and illustrated. (SLD)
Descriptors: Achievement Tests, Estimation (Mathematics), Measurement Techniques, Test Format
Peer reviewed
Berger, Martijn P. F. – Applied Psychological Measurement, 1994
This paper focuses on similarities of optimal design of fixed-form tests, adaptive tests, and testlets within the framework of the general theory of optimal designs. A sequential design procedure is proposed that uses these similarities to obtain consistent estimates for the trait level distribution. (SLD)
Descriptors: Achievement Tests, Adaptive Testing, Algorithms, Estimation (Mathematics)
Peer reviewed
Axelrod, Bradley N.; And Others – Psychological Assessment, 1996
The reliability calculations of D. Schretlen, R. H. B. Benedict, and J. H. Bobholz (1994) for a short form of the Wechsler Adult Intelligence Scale--Revised (WAIS-R) consistently overestimated the values. More accurate values are provided for the WAIS-R and a seven-subtest short form. (SLD)
Descriptors: Error Correction, Error of Measurement, Estimation (Mathematics), Intelligence Tests
Bergstrom, Betty A.; Lunz, Mary E. – 1991
The equivalence of pencil and paper Rasch item calibrations when used in a computer adaptive test administration was explored in this study. Items (n=726) were precalibrated with the pencil and paper test administrations. A computer adaptive test was administered to 321 medical technology students using the pencil and paper precalibrations in the…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
Pommerich, Mary; Nicewander, W. Alan – 1998
A simulation study was performed to determine whether a group's average percent correct in a content domain could be accurately estimated for groups taking a single test form and not the entire domain of items. Six Item Response Theory (IRT)-based domain score estimation methods were evaluated, under conditions of few items per content area per…
Descriptors: Ability, Estimation (Mathematics), Group Membership, Item Response Theory
van der Linden, Wim J. – 1999
A constrained computerized adaptive testing (CAT) algorithm is presented that automatically equates the number-correct scores on adaptive tests. The algorithm can be used to equate number-correct scores across different administrations of the same adaptive test as well as to an external reference test. The constraints are derived from a set of…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
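The item-selection core that such equating constraints modify can be sketched as maximum-information selection, the generic CAT rule: administer the unseen item with the greatest Fisher information at the current ability estimate. This is a hypothetical 2PL sketch; van der Linden's constraint set is not shown.

```python
import math

def item_information(theta, a, b, D=1.7):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-D * a * (theta - b)))
    return (D * a) ** 2 * p * (1.0 - p)

def select_next_item(theta_hat, item_bank, administered):
    """Index of the unadministered item with maximum information at theta_hat.

    item_bank: list of (a, b) pairs; administered: set of used indices.
    """
    candidates = [i for i in range(len(item_bank)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta_hat, *item_bank[i]))
```

With equal discriminations, the rule picks the item whose difficulty is closest to the current ability estimate.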
Peer reviewed
Sykes, Robert C.; Ito, Kyoko – Applied Psychological Measurement, 1997
Evaluated the equivalence of scores and one-parameter logistic model item difficulty estimates obtained from computer-based and paper-and-pencil forms of a licensure examination taken by 418 examinees. There was no effect of either order or mode of administration on the equivalences. (SLD)
Descriptors: Computer Assisted Testing, Estimation (Mathematics), Health Personnel, Item Response Theory
Wang, Yu-Chung Lawrence – 1994
The first purpose of this study was to investigate the stability of two essential dimensionality measures across 10 random samples within a particular assessment item (AT1) selection. Other purposes were to investigate the discrepancy of the essential unidimensionality estimates for a test across different AT1 selections and sample sizes and to…
Descriptors: Correlation, Educational Assessment, Estimation (Mathematics), Item Response Theory