Publication Date
| Date range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 1 |
| Since 2007 (last 20 years) | 5 |
Descriptor
| Descriptor | Records |
| --- | --- |
| Difficulty Level | 10 |
| Simulation | 8 |
| Item Response Theory | 7 |
| Test Items | 7 |
| Models | 3 |
| Adaptive Testing | 2 |
| Bayesian Statistics | 2 |
| Comparative Analysis | 2 |
| Computer Assisted Testing | 2 |
| Computer Simulation | 2 |
| Correlation | 2 |
Source
| Source | Records |
| --- | --- |
| Educational and Psychological Measurement | 10 |
Author
| Author | Records |
| --- | --- |
| Dardick, William R. | 1 |
| De Ayala, R. J. | 1 |
| Dodd, Barbara G. | 1 |
| Jin, Kuan-Yu | 1 |
| MacDonald, Paul | 1 |
| Matlock, Ki Lynn | 1 |
| Mazor, Kathleen M. | 1 |
| Mislevy, Robert J. | 1 |
| Paunonen, Sampo V. | 1 |
| Schweizer, Karl | 1 |
| Shoemaker, David M. | 1 |
Publication Type
| Publication Type | Records |
| --- | --- |
| Journal Articles | 9 |
| Reports - Research | 5 |
| Reports - Evaluative | 4 |
| Speeches/Meeting Papers | 1 |
Assessments and Surveys
| Assessment | Records |
| --- | --- |
| Raven Advanced Progressive… | 1 |
Is the Factor Observed in Investigations on the Item-Position Effect Actually the Difficulty Factor?
Schweizer, Karl; Troche, Stefan – Educational and Psychological Measurement, 2018
In confirmatory factor analysis, quite similar measurement models serve to detect the difficulty factor and the factor due to the item-position effect. The item-position effect refers to the increasing dependency among responses to successively presented test items, whereas the difficulty factor is ascribed to the wide range of…
Descriptors: Investigations, Difficulty Level, Factor Analysis, Models
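The contrast between the two measurement models can be made concrete. A minimal sketch, assuming a fixed-links parameterization in which the second factor's loadings are fixed in advance; the exact specification in the article may differ:

```latex
% Two competing measurement models with the same structure: a general
% factor \eta_g plus a second factor \eta_s whose fixed loadings encode
% the hypothesized cause (illustrative parameterization only).
\[
  X_i \;=\; \lambda_{g,i}\,\eta_g \;+\; \lambda_{s,i}\,\eta_s \;+\; \varepsilon_i ,
  \qquad i = 1,\dots,k
\]
% Item-position model: loadings grow with presentation position i.
\[
  \lambda_{s,i} \;=\; \frac{i-1}{k-1}
\]
% Difficulty-factor model: loadings ordered by item difficulty p_i instead.
\[
  \lambda_{s,i} \;=\; \frac{\operatorname{rank}(p_i)-1}{k-1}
\]
```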
Matlock, Ki Lynn; Turner, Ronna – Educational and Psychological Measurement, 2016
When multiple test forms are constructed, they are often matched on the number of items and on total test difficulty. Not all test developers, however, match the number of items and/or the average item difficulty within subcontent areas. In this simulation study, six test forms were constructed to have an equal number of items and equal average item difficulty overall…
Descriptors: Item Response Theory, Computation, Test Items, Difficulty Level
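A minimal sketch of the kind of design the abstract describes, assuming Rasch-generated responses and two subcontent areas; the difficulty values and sample sizes below are illustrative choices, not the study's:

```python
import numpy as np

rng = np.random.default_rng(7)

# Two forms with the same length and the same overall mean difficulty,
# but with difficulty distributed differently across two subcontent areas.
k = 30                                    # items per form
b_form_a = np.r_[rng.normal(-0.5, 0.3, 15), rng.normal(0.5, 0.3, 15)]
b_form_b = np.r_[rng.normal(0.5, 0.3, 15), rng.normal(-0.5, 0.3, 15)]
b_form_b += b_form_a.mean() - b_form_b.mean()   # force equal overall difficulty

def rasch_scores(theta, b):
    """Simulated number-correct scores under the Rasch model."""
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    return (rng.random(p.shape) < p).sum(axis=1)

theta = rng.normal(0, 1, 5000)            # simulated examinees
print("mean score, form A:", rasch_scores(theta, b_form_a).mean())
print("mean score, form B:", rasch_scores(theta, b_form_b).mean())
```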
Dardick, William R.; Mislevy, Robert J. – Educational and Psychological Measurement, 2016
A new variant of the iterative "data = fit + residual" data-analytical approach described by Mosteller and Tukey is proposed and implemented in the context of item response theory psychometric models. Posterior probabilities from a Bayesian mixture model of a Rasch item response theory model and an unscalable latent class are expressed…
Descriptors: Bayesian Statistics, Probability, Data Analysis, Item Response Theory
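A toy sketch of the mixture idea named in the abstract, assuming a two-component mixture of a Rasch class and a coin-flip "unscalable" class; the mixing weight, item difficulties, and quadrature scheme are illustrative assumptions, not the authors' estimation procedure:

```python
import numpy as np

b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # assumed Rasch item difficulties
pi_unscalable = 0.1                          # assumed prior weight of unscalable class

def rasch_lik(x, theta):
    """Likelihood of response pattern x under the Rasch model at ability theta."""
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    return np.prod(p**x * (1 - p)**(1 - x))

def marginal_rasch_lik(x, n_quad=41):
    """Integrate theta out over a standard normal prior (simple quadrature)."""
    nodes = np.linspace(-4, 4, n_quad)
    w = np.exp(-0.5 * nodes**2)
    w /= w.sum()
    return sum(wi * rasch_lik(x, t) for wi, t in zip(w, nodes))

def posterior_unscalable(x):
    """Posterior probability that pattern x belongs to the unscalable class."""
    lik_u = 0.5 ** len(x)                    # unscalable class: coin-flip responses
    num = pi_unscalable * lik_u
    return num / (num + (1 - pi_unscalable) * marginal_rasch_lik(x))

# A Guttman-consistent pattern vs. a reversed (aberrant) one:
print(posterior_unscalable(np.array([1, 1, 1, 0, 0])))
print(posterior_unscalable(np.array([0, 0, 0, 1, 1])))
```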
Wang, Wen-Chung; Jin, Kuan-Yu – Educational and Psychological Measurement, 2010
In this study, the authors extend the standard item response model with internal restrictions on item difficulty (MIRID) to fit polytomous items using cumulative logits and adjacent-category logits. Moreover, the new model incorporates discrimination parameters and is rooted in a multilevel framework. It is a nonlinear mixed model so that existing…
Descriptors: Difficulty Level, Test Items, Item Response Theory, Generalization
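The MIRID restriction the abstract builds on can be written compactly; the cumulative-logit extension shown second is one plausible parameterization, not necessarily the exact one in the article:

```latex
% The MIRID restriction (illustrative notation): the difficulty of a
% composite item is a weighted combination of its subtask difficulties.
\[
  \beta_{\text{composite}} \;=\; \sum_{k} \omega_k \,\beta_k \;+\; \tau
\]
% One possible cumulative-logit extension to polytomous items with a
% discrimination parameter a_i:
\[
  \log \frac{P(X_{pi} \ge j)}{P(X_{pi} < j)}
  \;=\; a_i\left(\theta_p - \beta_i + \delta_{j}\right),
  \qquad j = 1,\dots,m_i
\]
```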
Weitzman, R. A. – Educational and Psychological Measurement, 2009
Building on the Kelley and Gulliksen versions of classical test theory, this article shows that a logistic model having only a single item parameter can account for varying item discrimination, as well as difficulty, by using item-test correlations to adjust incorrect-correct (0-1) item responses prior to an initial model fit. The fit occurs…
Descriptors: Item Response Theory, Test Items, Difficulty Level, Test Bias
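One way to read the adjustment the abstract names, sketched under the assumption that "item-test correlation" means the point-biserial of each item with the rest-of-test score; the actual adjustment in the article may differ:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate 2PL data so that items genuinely differ in discrimination.
n, k = 1000, 20
theta = rng.normal(size=n)
a = rng.uniform(0.5, 2.0, k)                 # true (varying) discriminations
b = rng.normal(size=k)
x = (rng.random((n, k)) < 1 / (1 + np.exp(-a * (theta[:, None] - b)))).astype(float)

total = x.sum(axis=1)
# Item-test point-biserial correlations (item vs. rest-of-test score):
r = np.array([np.corrcoef(x[:, i], total - x[:, i])[0, 1] for i in range(k)])

x_adj = x * r                                # correlation-weighted 0-1 responses
print(np.round(r, 2))
```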
Mazor, Kathleen M.; And Others – Educational and Psychological Measurement, 1994
A variation of the Mantel-Haenszel procedure is proposed that improves detection rates of nonuniform differential item functioning (DIF) without increasing the Type I error rate. The procedure, which is illustrated with simulated examinee responses, involves splitting the sample into low- and high-performing groups. (SLD)
Descriptors: Difficulty Level, Identification, Item Analysis, Item Bias
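A compact sketch of both ingredients the abstract names: the Mantel-Haenszel chi-square computed across score strata, and the split into low- and high-performing halves so that nonuniform DIF is not averaged away. The function names and the median-split rule are assumptions for illustration:

```python
import numpy as np
from scipy.stats import chi2

def mh_chi2(correct, group, strata):
    """Mantel-Haenszel chi-square for one studied item.

    correct: 0/1 responses to the studied item
    group:   0 = reference, 1 = focal
    strata:  matching score (e.g., rest-of-test total)
    """
    obs = expect = var = 0.0
    for s in np.unique(strata):
        m = strata == s
        n_r = np.sum(group[m] == 0)
        n_f = np.sum(group[m] == 1)
        m1 = np.sum(correct[m] == 1)
        t = n_r + n_f
        if t < 2 or n_r == 0 or n_f == 0 or m1 == 0 or m1 == t:
            continue                        # stratum carries no information
        obs += np.sum((group[m] == 0) & (correct[m] == 1))
        expect += n_r * m1 / t
        var += n_r * n_f * m1 * (t - m1) / (t**2 * (t - 1))
    if var == 0:
        return float("nan"), float("nan")
    stat = (abs(obs - expect) - 0.5) ** 2 / var   # continuity-corrected MH
    return stat, chi2.sf(stat, 1)

def split_sample_mh(correct, group, total):
    """Splitting idea from the abstract: run MH separately in low- and
    high-scoring halves so nonuniform DIF that reverses direction across
    ability levels is not averaged away."""
    low = total < np.median(total)
    return (mh_chi2(correct[low], group[low], total[low]),
            mh_chi2(correct[~low], group[~low], total[~low]))
```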
Shoemaker, David M. – Educational and Psychological Measurement, 1972
Descriptors: Difficulty Level, Error of Measurement, Item Sampling, Simulation
MacDonald, Paul; Paunonen, Sampo V. – Educational and Psychological Measurement, 2002
Examined the behavior of item and person statistics from item response theory and classical test theory frameworks through Monte Carlo methods with simulated test data. Findings suggest that item difficulty and person ability estimates are highly comparable for both approaches. (SLD)
Descriptors: Ability, Comparative Analysis, Difficulty Level, Item Response Theory
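A miniature version of this comparison is easy to reproduce; the sample size, test length, and Rasch generating model below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate Rasch data, then correlate the classical difficulty index
# (proportion correct) with the generating IRT difficulties, and the
# number-correct score with the generating abilities.
n, k = 2000, 40
theta = rng.normal(size=n)
b = rng.normal(size=k)
x = (rng.random((n, k)) < 1 / (1 + np.exp(-(theta[:, None] - b)))).astype(int)

p = x.mean(axis=0)                              # CTT item difficulty (p-values)
print("corr(p, b):", np.corrcoef(p, b)[0, 1])   # strongly negative, as expected
print("corr(score, theta):", np.corrcoef(x.sum(axis=1), theta)[0, 1])
```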
Dodd, Barbara G.; And Others – Educational and Psychological Measurement, 1993
Effects of the following variables on performance of computerized adaptive testing (CAT) procedures for the partial credit model (PCM) were studied: (1) stopping rule for terminating CAT; (2) item pool size; and (3) distribution of item difficulties. Implications of findings for CAT systems based on the PCM are discussed. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Simulation, Difficulty Level
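A minimal sketch of a variable-length CAT under the partial credit model, touching two of the variables the abstract names (the stopping rule and the item pool); the `respond` callback and the SE-based stopping threshold are hypothetical interface choices, not the study's exact procedures:

```python
import numpy as np

def pcm_probs(theta, deltas):
    """Partial credit model category probabilities for one item.
    deltas: step difficulties delta_1..delta_m (category 0 has no step)."""
    cum = np.concatenate(([0.0], np.cumsum(theta - deltas)))
    e = np.exp(cum - cum.max())
    return e / e.sum()

def item_information(theta, deltas):
    """Fisher information of a PCM item at theta (variance of the score)."""
    p = pcm_probs(theta, deltas)
    cats = np.arange(len(p))
    return np.sum(cats**2 * p) - np.sum(cats * p) ** 2

def cat_pcm(respond, pool, se_stop=0.3, max_items=30):
    """Minimal CAT loop: respond(i) returns the examinee's category on
    pool item i; stop when the posterior SE falls below se_stop."""
    grid = np.linspace(-4, 4, 81)
    post = np.exp(-0.5 * grid**2)            # standard normal prior
    post /= post.sum()
    used = []
    while len(used) < min(max_items, len(pool)):
        theta_hat = np.sum(grid * post)      # EAP ability estimate
        free = [i for i in range(len(pool)) if i not in used]
        i = max(free, key=lambda j: item_information(theta_hat, pool[j]))
        x = respond(i)
        post = post * np.array([pcm_probs(t, pool[i])[x] for t in grid])
        post /= post.sum()
        used.append(i)
        se = np.sqrt(np.sum(grid**2 * post) - np.sum(grid * post) ** 2)
        if se < se_stop:                     # the stopping-rule variable above
            break
    return np.sum(grid * post), se, used
```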
De Ayala, R. J. – Educational and Psychological Measurement, 1992
Effects of dimensionality on ability estimation of an adaptive test were examined using generated data in Bayesian computerized adaptive testing (CAT) simulations. Generally, increasing interdimensional difficulty association produced a slight decrease in test length and an increase in accuracy of ability estimation as assessed by root mean square…
Descriptors: Adaptive Testing, Bayesian Statistics, Computer Assisted Testing, Computer Simulation
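A small illustration of the evaluation criterion the abstract names, assuming two correlated ability dimensions standing in for the interdimensional association that was varied; the estimator below is a deliberately crude stand-in, not the study's Bayesian CAT:

```python
import numpy as np

rng = np.random.default_rng(3)

def rmse(est, true):
    """Root mean square error between estimated and generating abilities."""
    return np.sqrt(np.mean((est - true) ** 2))

rho = 0.8                                   # assumed between-dimension correlation
cov = np.array([[1.0, rho], [rho, 1.0]])
theta = rng.multivariate_normal([0.0, 0.0], cov, size=2000)

# Pretend a unidimensional CAT recovers a noisy composite of both dimensions:
theta_hat = theta.mean(axis=1) + rng.normal(0, 0.3, 2000)
print("RMSE vs. dimension 1:", rmse(theta_hat, theta[:, 0]))
```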
