Showing 151 to 165 of 367 results
Peer reviewed
Muthen, Bengt O. – Psychometrika, 1989
A method for detecting instructional sensitivity (item bias) in test items is proposed. This method extends item response theory by allowing for item-specific variation in measurement relations across students' varying instructional backgrounds. Item bias detection is a by-product. Traditional and new methods are compared. (SLD)
Descriptors: Achievement Tests, Educational Background, Educational Opportunities, Elementary Secondary Education
Peer reviewed
Feldt, Leonard S. – Applied Measurement in Education, 1993
The recommendation that the reliability of multiple-choice tests will be enhanced if the distribution of item difficulties is concentrated at approximately 0.50 is reinforced and extended in this article by viewing the 0/1 item scoring as a dichotomization of an underlying normally distributed ability score. (SLD)
Descriptors: Ability, Difficulty Level, Guessing (Tests), Mathematical Models
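One intuition behind Feldt's recommendation can be shown with a line of arithmetic (a sketch only, not the article's dichotomized-normal derivation): for 0/1 scoring, an item's score variance is p(1 − p), which peaks when the proportion correct p is 0.50, so items near that difficulty contribute the most score variance.

```python
def item_variance(p: float) -> float:
    """Variance of a 0/1 (Bernoulli) item score with proportion correct p."""
    return p * (1.0 - p)

# Variance peaks at p = 0.5 (value 0.25) and falls off symmetrically.
difficulties = [0.1, 0.3, 0.5, 0.7, 0.9]
variances = [item_variance(p) for p in difficulties]
```

This is only the variance piece of the story; Feldt's analysis ties the result to reliability via the underlying normally distributed ability score.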
Peer reviewed
Miller, Timothy R.; Spray, Judith A. – Journal of Educational Measurement, 1993
Presents logistic discriminant analysis as a means of detecting differential item functioning (DIF) in items that are polytomously scored. Provides examples of DIF detection using a 27-item mathematics test with 1,977 examinees. The proposed method is simpler and more practical than polytomous extensions of the logistic regression DIF procedure.…
Descriptors: Discriminant Analysis, Item Bias, Mathematical Models, Mathematics Tests
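The article's logistic discriminant analysis is not reproduced here, but the closely related logistic-regression screen for uniform DIF can be sketched: regress correctness on ability and group membership, and inspect the group coefficient after conditioning on ability. The data and the 0.8 group effect below are synthetic and purely illustrative.

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, steps=5000):
    """Plain gradient-descent logistic regression (an illustrative
    stand-in for the article's discriminant-analysis machinery)."""
    w = [0.0] * len(X[0])
    for _ in range(steps):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi))) - yi
            for j, xj in enumerate(xi):
                grad[j] += err * xj
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, grad)]
    return w

# Synthetic screen: predictors are (intercept, ability, group).
# A nonzero group coefficient, conditioning on ability, flags uniform DIF.
X, y = [], []
for g in (0, 1):
    for t in [i / 4 - 2 for i in range(17)]:   # ability grid -2..2
        X.append([1.0, t, float(g)])
        y.append(sigmoid(t + 0.8 * g))          # group 1 advantaged by 0.8
w = fit_logistic(X, y)
# w[2] should recover roughly 0.8 in this toy setup, signalling DIF.
```

Using expected (probability-valued) responses keeps the example deterministic; with real 0/1 data the same fit is run per item across examinees.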
Peer reviewed
Seong, Tae-Je – Applied Psychological Measurement, 1990
The sensitivity of marginal maximum likelihood estimation of item and ability (theta) parameters was examined when prior ability distributions were not matched to underlying ability distributions. Thirty sets of 45-item test data were generated. Conditions affecting the accuracy of estimation are discussed. (SLD)
Descriptors: Ability, Computer Simulation, Equations (Mathematics), Estimation (Mathematics)
Stocking, Martha L. – 1996
The interest in the application of large-scale computerized adaptive testing has served to focus attention on issues that arise when theoretical advances are made operational. Some of these issues stem less from changes in testing conditions and more from changes in testing paradigms. One such issue is that of the order in which questions are…
Descriptors: Adaptive Testing, Cognitive Processes, Comparative Analysis, Computer Assisted Testing
Thissen, David; Steinberg, Lynne – 1983
An extension of the Bock-Samejima model for multiple choice items is introduced. The model provides for varying probabilities of the response alternatives when the examinee guesses. A marginal maximum likelihood method is devised for estimating the item parameters, and likelihood ratio tests for comparing more and less constrained forms of the…
Descriptors: Ability, Estimation (Mathematics), Guessing (Tests), Latent Trait Theory
Samejima, Fumiko – 1982
In a preceding research report, ONR/RR-82-1 (Information Loss Caused by Noise in Models for Dichotomous Items), observations were made on the effect of noise accommodated in different types of models on the dichotomous response level. In the present paper, focus is put upon the three-parameter logistic model, which is widely used among…
Descriptors: Estimation (Mathematics), Goodness of Fit, Guessing (Tests), Mathematical Models
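For reference, the three-parameter logistic model the paper focuses on has a compact standard form (a textbook statement, not code from the report): P(θ) = c + (1 − c) / (1 + exp(−1.7·a·(θ − b))), where the c parameter is the "noise" floor contributed by guessing.

```python
import math

def three_pl(theta: float, a: float, b: float, c: float) -> float:
    """Three-parameter logistic item response function.
    a: discrimination, b: difficulty, c: lower asymptote (pseudo-guessing).
    The 1.7 scaling constant makes the logistic approximate the normal ogive."""
    return c + (1.0 - c) / (1.0 + math.exp(-1.7 * a * (theta - b)))

# At theta == b the probability is exactly halfway between c and 1.
p_at_b = three_pl(0.0, 1.0, 0.0, 0.2)   # approximately 0.6
```

As θ decreases, P(θ) approaches c rather than 0, which is the guessing noise the report examines.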
Ackerman, Terry A. – 1987
One of the important underlying assumptions of all item response theory (IRT) models is that of local independence. This assumption requires that the response to an item on a test not be influenced by the response to any other items. This assumption is often taken for granted, with little or no scrutiny of the response process required to answer…
Descriptors: Computer Software, Correlation, Estimation (Mathematics), Latent Trait Theory
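Local independence has a direct computational statement: conditional on ability θ, the probability of an entire response pattern factors into a product of item-level probabilities. The sketch below uses a 2PL item model purely as an arbitrary illustrative choice.

```python
import math

def item_prob(theta: float, a: float, b: float) -> float:
    """2PL item response probability (illustrative model choice)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def response_pattern_prob(theta, items, responses):
    """Under local independence, the probability of a whole response
    pattern given theta is the product of the item-level terms."""
    prob = 1.0
    for (a, b), u in zip(items, responses):
        p = item_prob(theta, a, b)
        prob *= p if u == 1 else (1.0 - p)
    return prob

items = [(1.0, -0.5), (1.2, 0.0), (0.8, 0.5)]   # (a, b) pairs
pattern = [1, 0, 1]
prob_of_pattern = response_pattern_prob(0.0, items, pattern)
```

When the factorization fails (e.g., one item's wording cues another), this product misstates the pattern probability, which is the violation the paper scrutinizes.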
Adema, Jos J. – 1989
Item banks, large sets of test items, can be used for the construction of achievement tests. Mathematical programming models have been proposed for the selection of items from an item bank for a test. These models make automated test construction possible. However, to find an optimal or even an approximate optimal solution to a test construction…
Descriptors: Achievement Tests, Computer Assisted Testing, Computer Software, Item Banks
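The report's mathematical-programming models are not reproduced here, but the underlying selection problem can be illustrated with an exhaustive search over a toy bank: choose the fixed-length subset of items maximizing total information. Exhaustive search is feasible only for tiny banks, which is precisely why the report pursues approximate optimization.

```python
from itertools import combinations

def best_test(item_information, test_length):
    """Exhaustive-search stand-in for 0-1 programming test assembly:
    pick the subset of items of the given length that maximizes the
    summed information values."""
    best = max(combinations(range(len(item_information)), test_length),
               key=lambda idx: sum(item_information[i] for i in idx))
    return sorted(best)

bank = [0.9, 0.4, 1.3, 0.7, 1.1]      # information per item (toy values)
selected = best_test(bank, 3)          # -> [0, 2, 4]
```

A real item bank with hundreds of items and content constraints needs integer-programming solvers or heuristics; the search space here already grows combinatorially.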
De Ayala, R. J.; And Others – 1990
Computerized adaptive testing procedures (CATPs) based on the graded response method (GRM) of F. Samejima (1969) and the partial credit model (PCM) of G. Masters (1982) were developed and compared. Both programs used maximum likelihood estimation of ability, and item selection was conducted on the basis of information. Two simulated data sets, one…
Descriptors: Ability Identification, Adaptive Testing, Comparative Analysis, Computer Assisted Testing
Kingston, Neal M.; Dorans, Neil J. – 1982
The feasibility of using item response theory (IRT) as a psychometric model for the Graduate Record Examination (GRE) Aptitude Test was addressed by assessing the reasonableness of the assumptions of item response theory for GRE item types and examinee populations. Items from four forms and four administrations of the GRE Aptitude Test were…
Descriptors: Aptitude Tests, Graduate Study, Higher Education, Latent Trait Theory
Dorans, Neil J. – 1983
A formal analysis is presented of the effects of item deletion on equating/scaling functions and reported score distributions. The phrase "item deletion" refers to the process of changing the original key of a flawed item to either all options correct, including omits, or to no options correct, i.e., not scoring the flawed item. There…
Descriptors: College Entrance Examinations, Equated Scores, Item Analysis, Mathematical Models
Reckase, Mark D.; McKinley, Robert L. – 1983
A study was undertaken to develop guidelines for the interpretation of the parameters of three multidimensional item response theory models and to determine the relationship between the parameters and traditional concepts of item difficulty and discrimination. The three models considered were multidimensional extensions of the one-, two-, and…
Descriptors: Computer Programs, Difficulty Level, Goodness of Fit, Latent Trait Theory
Bradshaw, Charles W., Jr. – 1968
A method for determining invariant item parameters is presented, along with a scheme for obtaining test scores which are interpretable in terms of a common metric. The method assumes a unidimensional latent trait and uses a three parameter normal ogive model. The assumptions of the model are explored, and the methods for calculating the proposed…
Descriptors: Equated Scores, Item Analysis, Latent Trait Theory, Mathematical Models
Samejima, Fumiko – 1981
This is a continuation of a previous study in which a new method of estimating the operating characteristics of discrete item responses based upon an Old Test, which has a non-constant test information function, was tested upon each of two subtests of the original Old Test, Subtests 1 and 2. The results turned out to be quite successful. In the…
Descriptors: Academic Ability, Computer Assisted Testing, Estimation (Mathematics), Latent Trait Theory