Showing 91 to 105 of 367 results
Peer reviewed
Armstrong, Ronald D.; Jones, Douglas H. – Applied Psychological Measurement, 1992
Polynomial algorithms for solving selected problems in test theory are presented, and computational results from sample problems with several hundred decision variables demonstrate the benefits of these algorithms. The algorithms are based on optimization theory in networks (graphs). (SLD)
Descriptors: Algorithms, Decision Making, Equations (Mathematics), Mathematical Models
Peer reviewed
Swanson, Len; Stocking, Martha L. – Applied Psychological Measurement, 1993
A model for solving very large item selection problems is presented. The model builds on binary programming applied to test construction. A heuristic for selecting items that satisfy the constraints in the model is also presented, and various problems are solved using the model and heuristic. (SLD)
Descriptors: Algorithms, Equations (Mathematics), Heuristics, Item Response Theory
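A rough illustration of constraint-driven item selection (a minimal greedy sketch in Python, not the authors' binary-programming model or heuristic; the item representation and content targets below are hypothetical):

def greedy_select(items, targets, test_length):
    """Pick items one at a time, each time choosing the item that least
    worsens the total deviation from the content-area target counts.
    items: list of dicts like {"id": 1, "content": "algebra"}
    targets: dict mapping content area -> desired number of items."""
    selected, counts = [], {area: 0 for area in targets}
    pool = list(items)
    while len(selected) < test_length and pool:
        def deviation_if_added(item):
            trial = dict(counts)
            trial[item["content"]] = trial.get(item["content"], 0) + 1
            return sum(abs(targets.get(a, 0) - trial.get(a, 0))
                       for a in set(targets) | set(trial))
        best = min(pool, key=deviation_if_added)
        pool.remove(best)
        selected.append(best["id"])
        counts[best["content"]] = counts.get(best["content"], 0) + 1
    return selected

# Example: seven items in three content areas, four-item test
print(greedy_select([{"id": i, "content": c} for i, c in enumerate("AABBBCC")],
                    {"A": 1, "B": 2, "C": 1}, 4))

A full binary-programming formulation would instead introduce a 0/1 decision variable per item and enforce the constraints exactly, which is what makes very large problems hard and motivates the heuristic described in the article.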
Peer reviewed
Hambleton, Ronald K.; Jones, Russell W. – Educational Measurement: Issues and Practice, 1993
This National Council on Measurement in Education (NCME) instructional module compares classical test theory and item response theory and describes their applications in test development. Related concepts, models, and methods are explored; and advantages and disadvantages of each framework are reviewed. (SLD)
Descriptors: Comparative Analysis, Educational Assessment, Graphs, Item Response Theory
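For orientation, the two frameworks compared in the module rest on models that can be written compactly in standard notation (generic forms, not quoted from the article): the classical true-score model with its reliability coefficient, and, as a common IRT example, the three-parameter logistic model.

    X = T + E, \qquad \rho_{XX'} = \frac{\sigma_T^2}{\sigma_X^2}

    P_i(\theta) = c_i + \frac{1 - c_i}{1 + \exp[-D a_i(\theta - b_i)]}

Classical item statistics are tied to the examinee sample at hand, whereas the IRT parameters a_i (discrimination), b_i (difficulty), and c_i (pseudo-guessing) are, when the model holds, invariant across samples; that contrast drives most of the practical trade-offs the module reviews.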
van der Linden, Wim J. – 1995
Dichotomous item response theory (IRT) models can be viewed as families of stochastically ordered distributions of responses to test items. This paper explores several properties of such distributions. The focus is on the conditions under which stochastic order in families of conditional distributions is transferred to their inverse distributions,…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Foreign Countries
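In standard notation (not quoted from the paper), stochastic ordering of the conditional response distributions means that higher ability never makes a response at or above a given level less likely:

    \theta_1 \le \theta_2 \;\Longrightarrow\; P(X_i \ge x \mid \theta_1) \le P(X_i \ge x \mid \theta_2) \quad \text{for all } x.

For a dichotomous item this is simply monotonicity of the item response function in \theta; the paper's question is when such ordering transfers to the corresponding inverse distributions.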
Ackerman, Terry A.; Evans, John A. – 1993
A didactic example is provided, using a Monte Carlo method, of how differential item functioning (DIF) can be eliminated (and thus better understood) when the complete latent space is used. The main source of DIF is that the single matching criterion used in some DIF procedures, Mantel-Haenszel or Simultaneous Item Bias (SIBTEST), does not account…
Descriptors: Computer Simulation, Equations (Mathematics), Item Bias, Item Response Theory
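For context, the Mantel-Haenszel procedure named above matches examinees on a single criterion (usually total score) and estimates a common odds ratio across the K score levels; in the usual notation (not the authors'),

    \hat{\alpha}_{MH} = \frac{\sum_{k=1}^{K} A_k D_k / N_k}{\sum_{k=1}^{K} B_k C_k / N_k},

where, at matching level k, A_k and B_k are the reference group's correct and incorrect counts, C_k and D_k are the focal group's, and N_k is the total. The paper's argument is that when the latent space is multidimensional, conditioning on this single matching score leaves part of the relevant ability unaccounted for, producing apparent DIF.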
Seong, Tae-Je – 1990
The similarity of item and ability parameter estimates was investigated using two numerical analysis techniques via marginal maximum likelihood estimation (MMLE), with a large simulated data set (n=1,000 examinees) and a varying number of quadrature points. MMLE uses a numerical analysis technique to integrate examinees' abilities…
Descriptors: Comparative Analysis, Equations (Mathematics), Estimation (Mathematics), Mathematical Models
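A minimal sketch of the quadrature step in MMLE (assuming a two-parameter logistic model and a standard normal ability prior; the size of the grid corresponds to the number of quadrature points varied in the study):

import numpy as np

def marginal_loglik(responses, a, b, n_quad=21):
    """Marginal log-likelihood of a 0/1 response matrix under a 2PL model,
    with ability integrated out over a fixed quadrature grid.
    responses: (n_examinees, n_items); a, b: item parameter arrays."""
    nodes = np.linspace(-4, 4, n_quad)                    # quadrature points
    weights = np.exp(-0.5 * nodes**2)
    weights /= weights.sum()                              # normal prior weights
    p = 1.0 / (1.0 + np.exp(-a * (nodes[:, None] - b)))   # (n_quad, n_items)
    # likelihood of each response pattern at each quadrature node
    like = np.prod(np.where(responses[:, None, :] == 1, p, 1 - p), axis=2)
    return np.log(like @ weights).sum()

rng = np.random.default_rng(0)
a, b = np.array([1.0, 1.5, 0.8]), np.array([-0.5, 0.0, 0.5])
resp = rng.integers(0, 2, size=(5, 3))
print(marginal_loglik(resp, a, b))

Coarser grids (fewer quadrature points) make each likelihood evaluation cheaper but approximate the integral less precisely, which is the trade-off the study examines.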
Kulick, Edward; Dorans, Neil J. – 1984
A new approach to assessing unexpected differential item performance (item bias or item fairness) is introduced and applied to the item responses of different subpopulations of Scholastic Aptitude Test (SAT) takers. The essential features of the standardization approach are described. The primary goal of the standardization approach is to control…
Descriptors: College Entrance Examinations, Individual Differences, Mathematical Models, Performance Factors
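The central quantity in the standardization approach is a weighted difference in item performance between the focal and reference groups after matching on total score; in common notation (not taken from the report),

    \text{STD P-DIF} = \frac{\sum_m w_m\,(P_{fm} - P_{rm})}{\sum_m w_m},

where P_{fm} and P_{rm} are the proportions answering the item correctly at matching-score level m in the focal and reference groups, and the weights w_m (typically the focal group's frequencies) standardize away differences between the two groups' score distributions.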
Reckase, Mark D.; McKinley, Robert L. – 1982
A class of multidimensional latent trait models is described. The properties of the model parameters and initial results on the accuracy of a maximum likelihood procedure for estimating them are discussed. The model presented is a special case of the general model described by Rasch (1961), with close similarities to the models…
Descriptors: Correlation, Item Analysis, Latent Trait Theory, Mathematical Models
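The compensatory form usually associated with this class of models can be written, in standard notation (the paper's exact parameterization may differ), as

    P(X_{ij} = 1 \mid \boldsymbol{\theta}_j) = \frac{1}{1 + \exp[-(\mathbf{a}_i'\boldsymbol{\theta}_j + d_i)]},

where \boldsymbol{\theta}_j is examinee j's ability vector, \mathbf{a}_i is item i's vector of discrimination parameters, and d_i is an intercept; a high standing on one dimension can compensate for a low standing on another.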
Samejima, Fumiko – 1981
In defense of retaining the "latent trait theory" term, instead of replacing it with "item response theory" as some recent research would have it, the following objectives are outlined: (1) investigation of theory and method for estimating the operating characteristics of discrete item responses using a minimum number of…
Descriptors: Adaptive Testing, Computer Assisted Testing, Factor Analysis, Latent Trait Theory
Haebara, Tomokazu – 1981
When several ability scales in item response models are separately derived from different test forms administered to different samples of examinees, these scales must be equated to a common scale because their units and origins are arbitrarily determined and generally different from scale to scale. A general method for equating logistic ability…
Descriptors: Academic Ability, Equated Scores, Latent Trait Theory, Least Squares Statistics
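The characteristic-curve criterion now associated with this method minimizes, over the linear rescaling \theta^* = A\theta + B, the squared gap between each item's response curves computed from the two separate calibrations (a standard statement of the criterion; details may differ from the report):

    Q(A, B) = \sum_j \sum_i \left[ P_i\!\left(\theta_j;\, \hat{a}_i, \hat{b}_i, \hat{c}_i\right) - P_i\!\left(\theta_j;\, \frac{\hat{a}_i^*}{A},\, A\hat{b}_i^* + B,\, \hat{c}_i^*\right) \right]^2,

where the starred estimates come from the second calibration; the minimizing A and B place both ability scales, and the associated item parameters, on a common metric.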
Peer reviewed
Kane, Michael; Moloney, James – Applied Psychological Measurement, 1978
The answer-until-correct (AUC) procedure requires that examinees respond to a multiple-choice item until they answer it correctly. Using a modified version of Horst's model for examinee behavior, this paper compares the effect of guessing on item reliability for the AUC procedure and the zero-one scoring procedure. (Author/CTM)
Descriptors: Guessing (Tests), Item Analysis, Mathematical Models, Multiple Choice Tests
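The two scoring procedures being compared can be illustrated with a toy scoring function (a generic linear partial-credit rule for AUC, offered only as an illustration and not the scoring implied by Horst's model):

def zero_one_score(attempts_to_correct):
    """One point only if the first response is correct."""
    return 1.0 if attempts_to_correct == 1 else 0.0

def auc_score(attempts_to_correct, n_options):
    """Linear partial credit: full credit on the first attempt,
    less for each additional attempt, zero on the last possible one."""
    return (n_options - attempts_to_correct) / (n_options - 1)

for attempts in range(1, 5):
    print(attempts, zero_one_score(attempts), auc_score(attempts, 4))

Because the AUC record of how many attempts an examinee needed carries more information than a single right/wrong outcome, guessing affects the two procedures' item reliabilities differently, which is what the paper analyzes.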
Peer reviewed
Divgi, D. R. – Journal of Educational Measurement, 1986
This paper discusses various issues involved in using the Rasch Model with multiple-choice tests and questions the suitability of this model for multiple-choice items. Results of some past studies supporting the model are shown to be irrelevant. The effects of the model's misfit on test equating are demonstrated. (Author/JAZ)
Descriptors: Equated Scores, Goodness of Fit, Latent Trait Theory, Mathematical Models
Peer reviewed
Feldt, Leonard S. – Educational and Psychological Measurement, 1984
The binomial error model includes form-to-form difficulty differences as error variance and leads to Kuder-Richardson formula 21 (KR-21) as an estimate of reliability. If the form-to-form component is removed from the estimate of error variance, the binomial model leads to KR-20 as the reliability estimate. (Author/BW)
Descriptors: Achievement Tests, Difficulty Level, Error of Measurement, Mathematical Formulas
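For reference, the two estimates at issue can be written (k items, item difficulties p_i, mean total score \mu, total-score variance \sigma_X^2):

    \mathrm{KR\text{-}21} = \frac{k}{k-1}\left[1 - \frac{\mu\,(k-\mu)}{k\,\sigma_X^2}\right],
    \qquad
    \mathrm{KR\text{-}20} = \frac{k}{k-1}\left[1 - \frac{\sum_i p_i(1-p_i)}{\sigma_X^2}\right].

KR-21 in effect treats all items as equally difficult, which is why counting form-to-form difficulty differences as error leads to it, while removing that component recovers KR-20.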
Mellenbergh, Gideon J.; van der Linden, Wim J. – Evaluation in Education: International Progress, 1982
Three item selection methods for criterion-referenced tests are examined: the classical theory of item difficulty and item-test correlation; the latent trait theory of item characteristic curves; and a decision-theoretic approach for optimal item selection. Item contribution to the standardized expected utility of mastery testing is discussed. (CM)
Descriptors: Criterion Referenced Tests, Educational Testing, Item Analysis, Latent Trait Theory
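A minimal sketch of the first (classical) selection method, computing item difficulty and a corrected item-total correlation from a 0/1 response matrix (variable names are hypothetical):

import numpy as np

def classical_item_stats(responses):
    """responses: (n_examinees, n_items) array of 0/1 item scores.
    Returns each item's difficulty (proportion correct) and its
    correlation with the total score on the remaining items."""
    responses = np.asarray(responses, dtype=float)
    difficulty = responses.mean(axis=0)
    item_total = []
    for i in range(responses.shape[1]):
        rest = np.delete(responses, i, axis=1).sum(axis=1)
        item_total.append(np.corrcoef(responses[:, i], rest)[0, 1])
    return difficulty, np.array(item_total)

resp = np.array([[1, 0, 1], [1, 1, 1], [0, 0, 1], [1, 1, 0]])
print(classical_item_stats(resp))

The latent trait and decision-theoretic methods reviewed in the chapter replace these sample-bound statistics with item characteristic curves and with each item's contribution to the expected utility of the mastery decision, respectively.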
Peer reviewed
Huynh, Huynh; Saunders, Joseph C. – Journal of Educational Measurement, 1980
Single administration (beta-binomial) estimates for the raw agreement index p and the corrected-for-chance kappa index in mastery testing are compared with those based on two test administrations in terms of estimation bias and sampling variability. Bias is about 2.5 percent for p and 10 percent for kappa. (Author/RL)
Descriptors: Comparative Analysis, Error of Measurement, Mastery Tests, Mathematical Models
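The two indices being estimated are easiest to see in their two-administration form, which the single-administration (beta-binomial) estimates approximate; a small sketch assuming 0/1 mastery decisions from two parallel administrations:

def agreement_indices(decisions_1, decisions_2):
    """decisions_1, decisions_2: parallel lists of 0/1 mastery decisions.
    Returns the raw agreement index p and the corrected-for-chance kappa."""
    n = len(decisions_1)
    p = sum(d1 == d2 for d1, d2 in zip(decisions_1, decisions_2)) / n
    # chance agreement from the two marginal mastery rates
    m1, m2 = sum(decisions_1) / n, sum(decisions_2) / n
    p_chance = m1 * m2 + (1 - m1) * (1 - m2)
    kappa = (p - p_chance) / (1 - p_chance)
    return p, kappa

print(agreement_indices([1, 1, 0, 0, 1], [1, 0, 0, 0, 1]))

The practical appeal of the beta-binomial approach is that it produces these indices from a single administration; the bias figures quoted above (about 2.5 percent for p and 10 percent for kappa) measure what that convenience costs.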