Jiang, Hai; Tang, K. Linda – 1998
This discussion of new methods for calibrating item response theory (IRT) models looks into new optimization procedures, such as the Genetic Algorithm (GA), to improve on the Newton-Raphson procedure. The advantage of using a global optimization procedure like GA is that it is not easily affected by local optima and…
Descriptors: Algorithms, Item Response Theory, Mathematical Models, Simulation
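As a rough illustration of the idea in the Jiang and Tang abstract, the sketch below calibrates a single two-parameter logistic item with a toy genetic algorithm (elitist selection plus Gaussian mutation). Everything here, from the population size to the mutation scale, is an assumption for illustration, not the authors' procedure:

    import numpy as np

    rng = np.random.default_rng(0)

    def loglik(a, b, theta, x):
        # 2PL log-likelihood for one item: P = 1 / (1 + exp(-a * (theta - b))).
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

    def ga_calibrate(theta, x, pop=50, gens=200, sigma=0.1):
        # Toy GA: keep the fitter half each generation, refill with mutated copies.
        cand = np.column_stack([rng.uniform(0.2, 2.5, pop), rng.uniform(-3, 3, pop)])
        for _ in range(gens):
            fit = np.array([loglik(a, b, theta, x) for a, b in cand])
            keep = cand[np.argsort(fit)[-pop // 2:]]             # survivors
            cand = np.vstack([keep, keep + rng.normal(0, sigma, keep.shape)])
        fit = np.array([loglik(a, b, theta, x) for a, b in cand])
        return cand[np.argmax(fit)]

    # Simulated responses from 1,000 examinees with true a = 1.2, b = 0.5.
    theta = rng.normal(size=1000)
    x = rng.binomial(1, 1.0 / (1.0 + np.exp(-1.2 * (theta - 0.5))))
    print(ga_calibrate(theta, x))   # estimates should land near (1.2, 0.5)

Because the mutation-and-selection loop never computes a derivative, a poor starting population costs iterations rather than convergence, which is the robustness to local optima the abstract points to.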
Kim, Seock-Ho – 2002
Continuation ratio logits are used to model the probabilities of obtaining the ordered categories of a polytomously scored item. This model is an alternative to other models for ordered-category items, such as the graded response model and the generalized partial credit model. The discussion includes a theoretical development of the model, a…
Descriptors: Ability, Classification, Item Response Theory, Mathematical Models
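A minimal sketch of how continuation ratio logits generate category probabilities, assuming a logistic form P(X > j | X >= j) = sigmoid(a * (theta - b[j])); the parameter values are invented for illustration:

    import numpy as np

    def crm_probs(theta, a, b):
        # Continuation probabilities P(X > j | X >= j) = sigmoid(a * (theta - b[j])).
        cont = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(b))))
        probs, reach = [], 1.0              # reach = probability of arriving at step j
        for c in cont:
            probs.append(reach * (1 - c))   # stop in this category
            reach *= c                      # continue past it
        probs.append(reach)                 # highest category: passed every step
        return np.array(probs)              # sums to 1 by construction

    print(crm_probs(theta=0.0, a=1.0, b=[-1.0, 0.0, 1.0]))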
Ackerman, Terry A.; Spray, Judith A. – 1986
A model of test item dependency is presented and used to illustrate the effect that violations of local independence have on the behavior of item characteristic curves. The dependency model is flexible enough to simulate the interaction of a number of factors including item difficulty and item discrimination, varying degrees of item dependence,…
Descriptors: Difficulty Level, Item Analysis, Latent Trait Theory, Mathematical Models
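The dependency mechanism below is not the Ackerman-Spray model, only a crude stand-in (the second response copies the first with some probability) that shows how violating local independence inflates the association between items:

    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_pair(theta, a, b, dep=0.0):
        # Independent 2PL responses to two items; with probability `dep` the
        # second response simply copies the first (local-independence violation).
        p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - np.asarray(b))))
        x = rng.binomial(1, p)
        copy = rng.random(len(theta)) < dep
        x[copy, 1] = x[copy, 0]
        return x

    theta = rng.normal(size=20000)
    for dep in (0.0, 0.5):
        x = simulate_pair(theta, a=1.0, b=[0.0, 0.0], dep=dep)
        print(dep, np.corrcoef(x[:, 0], x[:, 1])[0, 1])   # dependence inflates this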
Dorans, Neil J.; Holland, Paul W. – 1992
At the Educational Testing Service, the Mantel-Haenszel procedure is used for differential item functioning (DIF) detection, and the standardization procedure is used to describe DIF. This report describes these procedures. First, an important distinction is made between DIF and impact, pointing to the need to compare the comparable. Then, these…
Descriptors: Comparative Analysis, Distractors (Tests), Identification, Item Bias
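For concreteness, the sketch below computes the Mantel-Haenszel common odds ratio from hypothetical 2x2 tables, one per matched-score stratum; the counts and the three-stratum layout are assumptions:

    import numpy as np

    def mantel_haenszel(right_ref, wrong_ref, right_foc, wrong_foc):
        # MH common odds ratio pooled over score strata; each argument is an
        # array of counts with one entry per stratum.
        total = right_ref + wrong_ref + right_foc + wrong_foc
        num = np.sum(right_ref * wrong_foc / total)
        den = np.sum(wrong_ref * right_foc / total)
        return num / den   # 1.0 indicates no DIF

    alpha = mantel_haenszel(
        right_ref=np.array([40, 60, 80]), wrong_ref=np.array([60, 40, 20]),
        right_foc=np.array([30, 55, 78]), wrong_foc=np.array([70, 45, 22]))
    print(alpha, -2.35 * np.log(alpha))   # ETS reports -2.35 ln(alpha) as MH D-DIF

Matching on total score before forming the tables is what separates DIF from impact: group differences that survive the matching are attributed to the item, not to overall ability differences.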
Bart, William M.; Palvia, Rajkumari – 1983
In previous research, no relationship was found between test factor structure and test hierarchical structure. This study found some correspondence between test factor structure and test inter-item dependency structure, as measured by a log-linear model. There was an inconsistency, however, which warrants further study: more significant two-item…
Descriptors: Factor Structure, Interaction, Latent Trait Theory, Mathematical Models
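In a saturated log-linear model for the 2x2 cross-tabulation of two dichotomous items, the two-item interaction term under effect coding equals one quarter of the log odds ratio. The sketch below uses that identity; the counts are hypothetical:

    import numpy as np

    def two_item_interaction(table):
        # Saturated log-linear model for a 2x2 item cross-tab: under effect
        # coding the interaction term equals log(odds ratio) / 4.
        n00, n01, n10, n11 = table.ravel() + 0.5   # +0.5 smooths empty cells
        return (np.log(n11) + np.log(n00) - np.log(n01) - np.log(n10)) / 4

    print(two_item_interaction(np.array([[50, 20], [15, 65]])))   # > 0: dependence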
Ackerman, Terry A. – 1987
Concern has been expressed over the item response theory (IRT) assumption that a person's ability can be estimated in a unidimensional latent space. To examine whether or not the response to an item requires only a single latent ability, unidimensional ability estimates were compared for data generated from the multidimensional item response…
Descriptors: Ability, Computer Simulation, Difficulty Level, Item Analysis
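A small simulation in the spirit of the abstract above (item loadings and sample sizes are assumptions): responses come from a compensatory two-dimensional model, and a unidimensional summary such as the total score tracks a composite of the two abilities rather than either one alone:

    import numpy as np

    rng = np.random.default_rng(2)

    # Compensatory two-dimensional model: P = sigmoid(theta @ a + d).
    n_items, n_people = 40, 5000
    a = rng.uniform(0.5, 1.5, (n_items, 2))
    d = rng.normal(0, 1, n_items)
    theta = rng.normal(size=(n_people, 2))
    x = rng.binomial(1, 1.0 / (1.0 + np.exp(-(theta @ a.T + d))))

    # The total score (a stand-in for a unidimensional ability estimate)
    # correlates with both traits: it recovers a composite, not either alone.
    total = x.sum(axis=1)
    print(np.corrcoef(total, theta[:, 0])[0, 1],
          np.corrcoef(total, theta[:, 1])[0, 1])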
Eignor, Daniel R.; Douglass, James B. – 1982
This paper attempts to provide some initial information about the use of a variety of item response theory (IRT) models in the item selection process; its purpose is to compare the information curves derived from the selection of items characterized by several different IRT models and their associated parameter estimation programs. These…
Descriptors: Comparative Analysis, Latent Trait Theory, Mathematical Models, Multiple Choice Tests
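The comparison described above rests on item information functions. Below is a sketch of the standard three-parameter logistic information formula, which reduces to the 2PL and Rasch cases; the item parameters are made up:

    import numpy as np

    def info_3pl(theta, a, b, c=0.0):
        # Item information for the 3PL model; with c = 0 it reduces to the
        # 2PL value a**2 * P * (1 - P), and with a = 1 as well to the Rasch case.
        p = c + (1 - c) / (1 + np.exp(-a * (theta - b)))
        return a**2 * ((1 - p) / p) * ((p - c) / (1 - c))**2

    theta = np.linspace(-3, 3, 7)
    items = [(1.0, -0.5, 0.0), (1.4, 0.0, 0.2), (0.8, 1.0, 0.0)]   # (a, b, c)
    print(sum(info_3pl(theta, a, b, c) for a, b, c in items))      # test information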
Groome, Mary Lynn; Groome, William R. – 1979
Angoff's method for identifying possible biased test items was applied to four computer-generated hypothetical tests, two of which contained no biased items and two of which contained a few biased items. The tests were generated to match specifications of a latent trait model. Angoff's method compared item difficulty estimates for two different…
Descriptors: Difficulty Level, Identification, Item Analysis, Mathematical Models
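Angoff's approach is usually described via the delta transform of item difficulty, delta = 13 - 4 * z(p), with items flagged when they fall far from the line relating the two groups' deltas. The sketch below substitutes an ordinary least-squares line for the major-axis line often used in practice, and all proportions are invented:

    import numpy as np
    from scipy.stats import norm

    def delta_values(p):
        # Angoff's delta transform of proportion correct: delta = 13 - 4 * z(p).
        return 13.0 - 4.0 * norm.ppf(np.asarray(p))

    d1 = delta_values([0.85, 0.70, 0.55, 0.40, 0.60])   # group 1 difficulties
    d2 = delta_values([0.80, 0.66, 0.50, 0.20, 0.55])   # group 2 difficulties
    slope, intercept = np.polyfit(d1, d2, 1)
    print(d2 - (slope * d1 + intercept))   # the fourth item is the outlier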
Henard, David H. – 1998
Item response theory models arose from the inherent limitations of classical test theory methods of test analysis. A brief description of those limitations and of the corresponding enhancements offered by item response models is given. Further, an examination of the popular Rasch one-parameter latent trait model is undertaken. Specific…
Descriptors: Computer Oriented Programs, Computer Software, Heuristics, Item Response Theory
Linacre, John Michael – 1991
A rating scale can be expressed as a chain of dichotomous items. The relationship between the dichotomies depends on the manner in which the rating scale is presented to the test taker. Three models for ordered scales are discussed. In the success model, which represents growth, the lowest or easiest category is presented first. If the test taker…
Descriptors: Difficulty Level, Equations (Mathematics), Mathematical Models, Rating Scales
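A sketch of the chain-of-dichotomies idea for the success model only, assuming a Rasch dichotomy at each step; the step difficulties are invented, and the point is simply that the category probabilities depend on the order in which the dichotomies are taken:

    import numpy as np

    def chain(theta, steps):
        # Category probabilities from a chain of Rasch dichotomies taken in the
        # given order; the next step is faced only after passing the current one.
        probs, reach = [], 1.0
        for b in steps:
            p = 1.0 / (1.0 + np.exp(-(theta - b)))   # pass this dichotomy
            probs.append(reach * (1 - p))            # or stop in this category
            reach *= p
        probs.append(reach)                          # passed every step
        return np.array(probs)

    print(chain(0.5, [-1.0, 0.0, 1.5]))   # easiest first: the success model
    print(chain(0.5, [1.5, 0.0, -1.0]))   # same dichotomies, reversed order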
Reckase, Mark D.; McKinley, Robert L. – 1984
The purpose of this paper is to present a generalization of the concept of item difficulty to test items that measure more than one dimension. Three common definitions of item difficulty were considered: the proportion of correct responses for a group of individuals; the probability of a correct response to an item for a specific person; and the…
Descriptors: Difficulty Level, Item Analysis, Latent Trait Theory, Mathematical Models
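Reckase's multidimensional generalization of difficulty, as it later appeared in the MIRT literature, divides the negative intercept by the norm of the discrimination vector; in one dimension this reduces to the usual b parameter. A sketch with invented parameter values:

    import numpy as np

    def mdiff(a, d):
        # Multidimensional difficulty MDIFF = -d / ||a|| for a compensatory
        # model P = sigmoid(a . theta + d); in one dimension this is b = -d/a.
        return -d / np.linalg.norm(np.asarray(a, dtype=float))

    print(mdiff([1.2, 0.5], d=-0.8))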
Gustafsson, Jan-Eric – 1980
Some basic concepts of the one-parameter logistic latent-trait model, or the Rasch model, are presented. This model assumes that the probability of a correct answer to an item is a function of two parameters, one representing the difficulty of the item and one representing the ability of the subject. The purpose of this paper is to explain a…
Descriptors: Academic Ability, Academic Achievement, Difficulty Level, Latent Trait Theory
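In the notation commonly used for the Rasch model (the symbols below are conventional, not quoted from the paper), the probability function just described is

    P(X_{vi} = 1 \mid \theta_v, b_i) = \frac{\exp(\theta_v - b_i)}{1 + \exp(\theta_v - b_i)},

where \theta_v is the ability of subject v and b_i the difficulty of item i; the two parameters enter only through their difference.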

Stocking, Martha L.; And Others – Applied Psychological Measurement, 1993
A method of automatically selecting items for inclusion in a test with constraints on item content and statistical properties was applied to real data. Tests constructed manually from the same data and constraints were compared to tests constructed automatically. Results show areas in which automated assembly can improve test construction. (SLD)
Descriptors: Algorithms, Automation, Comparative Testing, Computer Assisted Testing
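The study's actual method is not reproduced here; the sketch below only conveys the flavor of automated assembly, greedily picking high-information items subject to content quotas. Item information values, content areas, and quotas are all invented:

    import numpy as np

    def assemble(info, content, quotas, length):
        # Greedy sketch: take the most informative item whose content area
        # still has room, until the form reaches the target length.
        chosen = []
        counts = {area: 0 for area in quotas}
        for i in np.argsort(info)[::-1]:
            area = content[i]
            if counts[area] < quotas[area] and len(chosen) < length:
                chosen.append(int(i))
                counts[area] += 1
        return chosen

    info = np.array([0.9, 0.8, 0.75, 0.7, 0.6, 0.5])
    areas = ["algebra", "algebra", "geometry", "algebra", "geometry", "algebra"]
    print(assemble(info, areas, quotas={"algebra": 2, "geometry": 2}, length=4))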
Ackerman, Terry A. – 1991
This paper examines the effect of using unidimensional item response theory (IRT) item parameter estimates of multidimensional items to create weakly parallel test forms using target information curves. To date, all computer-based algorithms that have been devised to create parallel test forms assume that the items are unidimensional. This paper…
Descriptors: Algorithms, Equations (Mathematics), Estimation (Mathematics), Item Response Theory
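Target-information matching can be made concrete with 2PL test information, the sum of a^2 * P * (1 - P) over the items on a form; the item parameters below are invented and the closeness check is a simple maximum absolute deviation:

    import numpy as np

    def form_info(theta, items):
        # Test information for a form under the 2PL: sum of a**2 * P * (1 - P).
        total = np.zeros_like(theta)
        for a, b in items:
            p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
            total += a**2 * p * (1 - p)
        return total

    theta = np.linspace(-3, 3, 13)
    target = form_info(theta, [(1.0, -1.0), (1.0, 0.0), (1.0, 1.0)])
    candidate = form_info(theta, [(1.2, -0.8), (0.9, 0.1), (1.0, 0.9)])
    print(np.max(np.abs(candidate - target)))   # small means 'weakly parallel'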

Reckase, Mark D. – Psychological Assessment, 1996
Summarizes the current state of the art in test construction and contrasts it with previous conceptual models, some of which are wrong or misleading. New methodologies for item selection and review are presented, with current thinking on the specification of technical characteristics of tests. (Author/SLD)
Descriptors: Mathematical Models, Psychological Testing, Selection, State of the Art Reviews