Showing 61 to 75 of 367 results
Berger, Martijn P. F. – 1989
The problem of obtaining designs that result in the most precise parameter estimates is encountered in at least two situations where item response theory (IRT) models are used. In so-called two-stage testing procedures, certain designs that match difficulty levels of the test items with the ability of the examinees may be located. Such designs…
Descriptors: Difficulty Level, Efficiency, Equations (Mathematics), Heuristics
Holland, Paul W.; Thayer, Dorothy T. – 1985
An alternative definition has been developed of the delta scale of item difficulty used at Educational Testing Service. The traditional delta scale uses an inverse normal transformation based on normal ogive models developed years ago. However, no use is made of this fact in typical uses of item deltas. It is simply one way to make the probability…
Descriptors: Difficulty Level, Error Patterns, Estimation (Mathematics), Item Analysis
Boekkooi-Timminga, Ellen – 1989
The construction of parallel tests from item response theory (IRT) based item banks is discussed. Tests are considered parallel whenever their information functions are identical. After the methods for constructing parallel tests are considered, the computational complexity of 0-1 linear programming and the heuristic procedure applied are…
Descriptors: Heuristics, Item Banks, Latent Trait Theory, Mathematical Models
Engelen, Ron J. H.; And Others – 1988
Fisher's information measure for the item difficulty parameter in the Rasch model and its marginal and conditional formulations are investigated. It is shown that expected item information in the unconditional model equals information in the marginal model, provided the assumption of sampling examinees from an ability distribution is made. For the…
Descriptors: Ability, Difficulty Level, Foreign Countries, Latent Trait Theory
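The entry above concerns Fisher information for the Rasch difficulty parameter. As an illustrative sketch (function names are my own, not from the paper): under the Rasch model the probability of a correct response depends only on the difference between ability and item difficulty, and the information contributed at a given ability reduces to p(1 − p), which peaks where ability matches difficulty.

```python
import math

def rasch_prob(theta, beta):
    """Probability of a correct response under the Rasch model:
    P = 1 / (1 + exp(-(theta - beta)))."""
    return 1.0 / (1.0 + math.exp(-(theta - beta)))

def item_information(theta, beta):
    """Fisher information at ability theta for a Rasch item of
    difficulty beta; for this model it takes the form p * (1 - p)."""
    p = rasch_prob(theta, beta)
    return p * (1.0 - p)

# Information is maximal when ability equals difficulty (p = 0.5),
# and falls off as the match worsens.
print(item_information(0.0, 0.0))  # 0.25
print(item_information(2.0, 0.0))  # smaller: ability/difficulty mismatch
```

This is only the pointwise information function; the marginal and conditional formulations compared in the paper involve integrating over an assumed ability distribution.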
Lord, Frederic M. – 1982
Explored are two theoretical approaches that attempt to cope with omitted responses, that is, when an examinee omits (fails to respond to) an item and therefore the item response formula cannot be used. Preliminary considerations are discussed, and it is shown that a conveniently simple application of equivalent items leads to internal…
Descriptors: Guessing (Tests), Latent Trait Theory, Mathematical Models, Maximum Likelihood Statistics
Reckase, Mark D.; McKinley, Robert L. – 1984
The purpose of this paper is to present a generalization of the concept of item difficulty to test items that measure more than one dimension. Three common definitions of item difficulty were considered: the proportion of correct responses for a group of individuals; the probability of a correct response to an item for a specific person; and the…
Descriptors: Difficulty Level, Item Analysis, Latent Trait Theory, Mathematical Models
Tsutakawa, Robert K.; Lin, Hsin Ying – 1984
Item response curves for a set of binary responses are studied from a Bayesian viewpoint of estimating the item parameters. For the two-parameter logistic model with normally distributed ability, restricted bivariate beta priors are used to illustrate the computation of the posterior mode via the EM algorithm. The procedure is illustrated by data…
Descriptors: Algorithms, Bayesian Statistics, College Entrance Examinations, Estimation (Mathematics)
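For context on the entry above: the two-parameter logistic (2PL) model it estimates gives each item a discrimination a and a difficulty b. A minimal sketch of the item characteristic curve and the binary-response log-likelihood (the Bayesian EM estimation itself is beyond a short example; these function names and signatures are illustrative, not from the paper):

```python
import math

def p2pl(theta, a, b):
    """2PL item characteristic curve:
    P(correct | theta) = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def log_likelihood(responses, theta, items):
    """Log-likelihood of one examinee's 0/1 responses at ability theta.

    `items` is a list of (a, b) pairs — a hypothetical input layout
    chosen for this sketch.
    """
    ll = 0.0
    for u, (a, b) in zip(responses, items):
        p = p2pl(theta, a, b)
        ll += u * math.log(p) + (1 - u) * math.log(1.0 - p)
    return ll
```

Posterior-mode estimation as in the paper would add log-prior terms (e.g., the restricted bivariate beta priors on item parameters) to this likelihood before maximizing.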
Gustafsson, Jan-Eric – 1980
Some basic concepts of the one-parameter logistic latent-trait model, or the Rasch model, are presented. This model assumes that the probability of a correct answer to an item is a function of two parameters, one representing the difficulty of the item and one representing the ability of the subject. The purpose of this paper is to explain a…
Descriptors: Academic Ability, Academic Achievement, Difficulty Level, Latent Trait Theory
Peer reviewed
Rosenbaum, Paul R. – Psychometrika, 1987
This paper develops and applies three nonparametric comparisons of the shapes of two item characteristic surfaces: (1) proportional latent odds; (2) uniform relative difficulty; and (3) item sensitivity. A method is presented for comparing the relative shapes of two item characteristic curves in two examinee populations who were administered an…
Descriptors: Comparative Analysis, Computer Simulation, Difficulty Level, Item Analysis
Peer reviewed
Muthen, Bengt; Lehman, James – Journal of Educational Statistics, 1985
The applicability of a new multiple-group factor analysis of dichotomous variables is shown and contrasted with the item response theory approach to item bias analysis. Situations are considered where the same set of test items has been administered to more than one group of examinees. (Author/BS).
Descriptors: Factor Analysis, Item Analysis, Latent Trait Theory, Mathematical Models
Peer reviewed
Katz, Barry M.; McSweeney, Maryellen – Journal of Experimental Education, 1984
This paper develops and illustrates a technique for analyzing categorical data in multigroup designs when subjects can appear in any number of categories. Post hoc procedures to be used in conjunction with the statistical test are also developed. The method is a large-sample technique whose small-sample properties are as yet unknown.…
Descriptors: Data Analysis, Hypothesis Testing, Mathematical Models, Research Methodology
Peer reviewed
Molenaar, Ivo W.; Hoijtink, Herbert – Psychometrika, 1990
Statistical properties of person fit indices are reviewed as indicators of the extent to which a person's score pattern is in agreement with a measurement model. Distribution of a fit index and ability-free fit evaluation are discussed. The null distribution was simulated for a test of 20 items. (SLD)
Descriptors: Item Banks, Item Response Theory, Mathematical Models, Monte Carlo Methods
Peer reviewed
Zwick, Rebecca – Journal of Educational Statistics, 1990
Use of the Mantel-Haenszel procedure as a test for differential item functioning under the Rasch model of item-response theory is examined. Results of the procedure cannot be generalized to the class of items for which item-response functions are monotonic and local independence holds. (TJH)
Descriptors: Demography, Equations (Mathematics), Error of Measurement, Item Bias
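The Mantel-Haenszel procedure examined in the entry above aggregates 2x2 tables (group by correct/incorrect) across matched score strata into a common odds-ratio estimate. A minimal sketch under an assumed input layout (the tuple ordering is my own convention, not from the paper):

```python
def mh_odds_ratio(strata):
    """Mantel-Haenszel common odds-ratio estimate across score strata.

    `strata` is a list of 2x2 tables (A, B, C, D), where A/B are the
    reference group's correct/incorrect counts and C/D are the focal
    group's — a hypothetical layout for this sketch. A value near 1.0
    is consistent with no uniform DIF on the studied item.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Two strata in which both groups perform identically: no DIF signal.
tables = [(10, 10, 10, 10), (20, 5, 20, 5)]
print(mh_odds_ratio(tables))  # 1.0
```

As the abstract notes, matching on observed score makes this a valid DIF test under the Rasch model specifically; the result does not generalize to all monotone, locally independent item response functions.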
Peer reviewed
Lautenschlager, Gary J.; Park, Dong-Gun – Applied Psychological Measurement, 1988
The consequences of using item response theory (IRT) item bias detecting procedures with multidimensional IRT item data are examined. Limitations in procedures for detecting item bias are discussed. (SLD)
Descriptors: Item Analysis, Latent Trait Theory, Mathematical Models, Multidimensional Scaling
Peer reviewed
Reuterberg, Sven-Eric; Gustafsson, Jan-Eric – Educational and Psychological Measurement, 1992
The use of confirmatory factor analysis by the LISREL program is demonstrated as an assumption-testing method when computing reliability coefficients under different model assumptions. Results indicate that reliability estimates are robust against departure from the assumption of parallelism of test items. (SLD)
Descriptors: Equations (Mathematics), Estimation (Mathematics), Mathematical Models, Robustness (Statistics)