Showing 1 to 15 of 20 results
Peer reviewed
Finch, Holmes – Applied Psychological Measurement, 2011
Estimation of multidimensional item response theory (MIRT) model parameters can be carried out under the normal-ogive model with unweighted least squares estimation in the normal-ogive harmonic analysis robust method (NOHARM) software. Previous simulation research has demonstrated that this approach yields accurate and efficient estimates of item…
Descriptors: Item Response Theory, Computation, Test Items, Simulation
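The normal-ogive MIRT model that NOHARM fits can be illustrated with a short sketch. This is not NOHARM's estimation code, only the item response function the model assumes; the function name and parameter values below are hypothetical.

```python
from math import erf, sqrt

def normal_ogive_prob(theta, a, d):
    """P(correct) under the multidimensional normal-ogive model:
    P = Phi(a'theta + d), where Phi is the standard normal CDF."""
    z = sum(ak * tk for ak, tk in zip(a, theta)) + d
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Two-dimensional example with illustrative discriminations a and intercept d.
p = normal_ogive_prob(theta=[0.5, -0.2], a=[1.2, 0.8], d=0.3)
```

NOHARM estimates the a and d parameters from observed response patterns; the sketch only evaluates the model in the forward direction.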
Peer reviewed
Wang, Wen-Chung; Jin, Kuan-Yu – Applied Psychological Measurement, 2010
In this study, the advantages of slope parameters, random weights, and latent regression are all brought to bear on component and composite items: slope parameters and random weights are added to the standard item response model with internal restrictions on item difficulty, and the new model is formulated within a multilevel framework…
Descriptors: Test Items, Difficulty Level, Regression (Statistics), Generalization
Peer reviewed
Cho, Sun-Joo; Cohen, Allan S.; Kim, Seock-Ho; Bottge, Brian – Applied Psychological Measurement, 2010
A latent transition analysis (LTA) model was described with a mixture Rasch model (MRM) as the measurement model. Unlike the LTA, which was developed with a latent class measurement model, the LTA-MRM permits within-class variability on the latent variable, making it more useful for measuring treatment effects within latent classes. A simulation…
Descriptors: Item Response Theory, Measurement, Models, Statistical Analysis
Peer reviewed
Daniel, Robert C.; Embretson, Susan E. – Applied Psychological Measurement, 2010
Cognitive complexity level is important for measuring both aptitude and achievement in large-scale testing. Tests for standards-based assessment of mathematics, for example, often include cognitive complexity level in the test blueprint. However, little research exists on how mathematics items can be designed to vary in cognitive complexity level.…
Descriptors: Mathematics Tests, Problem Solving, Test Items, Difficulty Level
Peer reviewed
Finch, Holmes – Applied Psychological Measurement, 2010
The accuracy of item parameter estimates in the multidimensional item response theory (MIRT) model context has not been researched in great detail. This study examines the ability of two confirmatory factor analysis models specifically for dichotomous data to properly estimate item parameters using common formulae for converting factor…
Descriptors: Item Response Theory, Computation, Factor Analysis, Models
Peer reviewed
Ferrando, Pere J. – Applied Psychological Measurement, 2009
Spearman's factor-analytic model has been proposed as a unidimensional linear item response theory (IRT) model for continuous item responses. This article first proposes a reexpression of the model that leads to a form similar to that of standard IRT models for binary responses and discusses the item indices of difficulty, discrimination, and…
Descriptors: Factor Analysis, Item Response Theory, Discriminant Analysis, Psychometrics
Peer reviewed
Cheng, Ying; Chang, Hua-Hua; Yi, Qing – Applied Psychological Measurement, 2007
Content balancing is an important issue in the design and implementation of computerized adaptive testing (CAT). Content-balancing techniques that have been applied in fixed content balancing, where the number of items from each content area is fixed, include constrained CAT (CCAT), the modified multinomial model (MMM), modified constrained CAT…
Descriptors: Adaptive Testing, Item Analysis, Computer Assisted Testing, Item Response Theory
Peer reviewed
Seraphine, Anne E. – Applied Psychological Measurement, 2000
Examined the performance of DIMTEST, through simulation, for unidimensional and two-dimensional data that exhibited ceiling effects generated through changes in location and scale of the theta distribution. Results indicate that the power of DIMTEST is reduced as the location shifts upward and the scale shifts downward. Considers the selection…
Descriptors: Difficulty Level, Item Response Theory, Monte Carlo Methods
Peer reviewed
van Barneveld, Christina – Applied Psychological Measurement, 2007
The purpose of this study is to examine the effects of a false assumption regarding the motivation of examinees on test construction. Simulated data were generated using two models of item responses (the three-parameter logistic item response model alone and in combination with Wise's examinee persistence model) and were calibrated using a…
Descriptors: Test Construction, Item Response Theory, Models, Bayesian Statistics
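The three-parameter logistic (3PL) model named in the abstract has a standard closed form; the following is a minimal sketch, with a function name and parameter values that are illustrative rather than taken from the study.

```python
from math import exp

def p_3pl(theta, a, b, c):
    """3PL response probability: a lower asymptote c (guessing)
    plus a logistic curve with discrimination a and difficulty b."""
    return c + (1.0 - c) / (1.0 + exp(-a * (theta - b)))

# An examinee of average ability on an item with a guessing floor of 0.2:
p = p_3pl(theta=0.0, a=1.5, b=0.5, c=0.2)
```

As ability decreases, the probability approaches the guessing floor c rather than zero, which is what distinguishes the 3PL from the one- and two-parameter logistic models.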
Peer reviewed
Kim, Seock-Ho; Cohen, Allan S. – Applied Psychological Measurement, 1998
Compared three methods for developing a common metric under item response theory through simulation. For smaller numbers of common items, linking using the characteristic curve method yielded smaller root mean square differences for both item discrimination and difficulty parameters. For larger numbers of common items, the three methods were…
Descriptors: Comparative Analysis, Difficulty Level, Item Response Theory, Simulation
Peer reviewed
Meijer, Rob R. – Applied Psychological Measurement, 1995
A statistic used by R. Meijer (1994) to determine person fit was described as referring to the number of errors from the deterministic Guttman model (L. Guttman, 1950), but it was, in fact, based on the number of errors as defined by J. Loevinger (1947, 1948). (SLD)
Descriptors: Difficulty Level, Models, Responses, Scaling
Peer reviewed
Reckase, Mark D.; McKinley, Robert L. – Applied Psychological Measurement, 1991
The concept of item discrimination is generalized to the case in which more than one ability is required to determine the correct response to an item, using the conceptual framework of item response theory and the definition of multidimensional item difficulty previously developed by M. Reckase (1985). (SLD)
Descriptors: Ability, Definitions, Difficulty Level, Equations (Mathematics)
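Reckase's (1985) overall multidimensional discrimination (often written MDISC) is, as commonly presented, the Euclidean norm of an item's discrimination vector; a minimal sketch with illustrative values:

```python
from math import sqrt

def mdisc(a):
    """Overall multidimensional discrimination of an item:
    the length of its discrimination vector (a_1, ..., a_K)."""
    return sqrt(sum(ak * ak for ak in a))

# Item loading on two abilities:
m = mdisc([0.9, 1.2])  # 1.5
```

The quantity reduces to the usual discrimination parameter when only one ability is involved.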
Peer reviewed
Eggen, Theo J. H. M.; Verschoor, Angela J. – Applied Psychological Measurement, 2006
Computerized adaptive tests (CATs) are individualized tests that, from a measurement point of view, are optimal for each individual, possibly under some practical conditions. In the present study, it is shown that maximum information item selection in CATs using an item bank that is calibrated with the one- or the two-parameter logistic model…
Descriptors: Adaptive Testing, Difficulty Level, Test Items, Item Response Theory
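Maximum-information item selection under the two-parameter logistic model can be sketched as follows; the item bank, helper names, and parameter values are illustrative, not drawn from the study.

```python
from math import exp

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta:
    I(theta) = a**2 * p * (1 - p), with p the success probability."""
    p = 1.0 / (1.0 + exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def pick_item(theta, bank):
    """Choose the (a, b) item in the bank with maximum information at theta."""
    return max(bank, key=lambda item: info_2pl(theta, *item))

bank = [(1.0, -1.0), (1.5, 0.0), (0.8, 1.0)]  # illustrative (a, b) pairs
best = pick_item(0.2, bank)  # (1.5, 0.0): high a, difficulty near theta
```

Under the one-parameter (Rasch) model every a equals 1, so maximum-information selection reduces to choosing the item whose difficulty b is nearest the current ability estimate.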
Peer reviewed
Meijer, Rob R.; And Others – Applied Psychological Measurement, 1994
The power of the nonparametric person-fit statistic, U3, is investigated through simulations as a function of item characteristics, test characteristics, person characteristics, and the group to which examinees belong. Results suggest conditions under which relatively short tests can be used for person-fit analysis. (SLD)
Descriptors: Difficulty Level, Group Membership, Item Response Theory, Nonparametric Statistics
Peer reviewed
Ariel, Adelaide; Veldkamp, Bernard P.; Breithaupt, Krista – Applied Psychological Measurement, 2006
Computerized multistage testing (MST) designs require sets of test questions (testlets) to be assembled to meet strict, often competing criteria. Rules that govern testlet assembly may dictate the number of questions on a particular subject or may describe desirable statistical properties for the test, such as measurement precision. In an MST…
Descriptors: Item Response Theory, Item Banks, Psychometrics, Test Items