Publication Date
| Period | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 0 |
| Since 2007 (last 20 years) | 20 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Computer Software | 28 |
| Item Response Theory | 28 |
| Computation | 11 |
| Models | 10 |
| Simulation | 7 |
| Factor Analysis | 6 |
| Bayesian Statistics | 5 |
| Evaluation Methods | 5 |
| Test Items | 5 |
| Comparative Analysis | 4 |
| Estimation (Mathematics) | 4 |
Source
| Source | Count |
| --- | --- |
| Applied Psychological… | 28 |
Author
| Author | Count |
| --- | --- |
| Hambleton, Ronald K. | 3 |
| Han, Kyung T. | 3 |
| Wang, Wen-Chung | 3 |
| DeMars, Christine E. | 2 |
| Ferrando, Pere J. | 2 |
| Black, Ryan A. | 1 |
| Butler, Stephen F. | 1 |
| Chen, Po-Hsi | 1 |
| Chen, Wen-Hung | 1 |
| Childs, Ruth A. | 1 |
| Choi, Seung W. | 1 |
Publication Type
| Publication Type | Count |
| --- | --- |
| Journal Articles | 28 |
| Reports - Research | 10 |
| Reports - Descriptive | 9 |
| Reports - Evaluative | 7 |
| Book/Product Reviews | 2 |
Education Level
| Education Level | Count |
| --- | --- |
| Adult Education | 1 |
| Higher Education | 1 |
| Junior High Schools | 1 |
| Middle Schools | 1 |
| Postsecondary Education | 1 |
| Secondary Education | 1 |
Lorenzo-Seva, Urbano; Ferrando, Pere J. – Applied Psychological Measurement, 2013
FACTOR 9.2 was developed for three reasons. First, exploratory factor analysis (FA) is still an active field of research, although most recent developments have not been incorporated into available programs. Second, there is now renewed interest in semiconfirmatory (SC) solutions as suitable approaches to the complex structures that are commonly found…
Descriptors: Factor Analysis, Item Response Theory, Computer Software
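FACTOR's own routines are not shown in the abstract; as a minimal, hypothetical illustration of the kind of exploratory factor extraction it refers to, the following Python sketch recovers loadings from simulated data by eigendecomposition of the correlation matrix (all data and values are invented for the example; FACTOR itself offers far more refined methods):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 500 observations of 6 variables driven by 2 latent factors.
n, k = 500, 2
loadings_true = np.array([
    [0.8, 0.0], [0.7, 0.0], [0.6, 0.0],
    [0.0, 0.8], [0.0, 0.7], [0.0, 0.6],
])
factors = rng.normal(size=(n, k))
noise = rng.normal(size=(n, 6)) * 0.5
X = factors @ loadings_true.T + noise

# Principal-component-style extraction: eigendecompose the correlation
# matrix and keep the top-k components as (unrotated) factor loadings.
R = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1][:k]
loadings_est = eigvecs[:, order] * np.sqrt(eigvals[order])

print(np.round(np.abs(loadings_est), 2))
```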
Socha, Alan; DeMars, Christine E. – Applied Psychological Measurement, 2013
The software program DIMTEST can be used to assess the unidimensionality of item scores. The software allows the user to specify a guessing parameter. Using simulated data, the effects of guessing parameter specification for use with the ATFIND procedure for empirically deriving the Assessment Subtest (AT; that is, a subtest composed of items that…
Descriptors: Item Response Theory, Computer Software, Guessing (Tests), Simulation
Black, Ryan A.; Butler, Stephen F. – Applied Psychological Measurement, 2012
Although Rasch models have been shown to be a sound methodological approach to develop and validate measures of psychological constructs for more than 50 years, they remain underutilized in psychology and other social sciences. Until recently, one reason for this underutilization was the lack of syntactically simple procedures to fit Rasch and…
Descriptors: Computer Software, Item Response Theory, Statistical Analysis
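The Rasch model the abstract refers to has a simple closed form: the probability of a correct response depends only on the difference between person ability and item difficulty. A minimal Python sketch (not the procedures the article develops):

```python
import numpy as np

def rasch_prob(theta, b):
    """Rasch model: probability of a correct response for a person
    of ability theta on an item of difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# A person of average ability on an average-difficulty item: P = 0.5.
print(rasch_prob(0.0, 0.0))

# Higher ability raises the probability; harder items lower it.
print(rasch_prob(1.0, 0.0) > rasch_prob(0.0, 0.0))
print(rasch_prob(0.0, 1.0) < rasch_prob(0.0, 0.0))
```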
Ferrando, Pere J. – Applied Psychological Measurement, 2011
Models for measuring individual response precision have been proposed for binary and graded responses. However, more continuous formats are quite common in personality measurement and are usually analyzed with the linear factor analysis model. This study extends the general Gaussian person-fluctuation model to the continuous-response case and…
Descriptors: Factor Analysis, Models, Individual Differences, Responses
Nandakumar, Ratna; Hotchkiss, Lawrence – Applied Psychological Measurement, 2012
The PROC NLMIXED procedure in Statistical Analysis System can be used to estimate parameters of item response theory (IRT) models. The data for this procedure are set up in a particular format called the "long format." With the long format, the program takes a substantial amount of time to execute. This article describes a format called the "wide…
Descriptors: Item Response Theory, Models, Statistical Analysis, Computer Software
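The wide/long distinction the abstract draws is not SAS-specific; as a hedged illustration in Python with pandas (toy data invented for the example), the long format stacks one row per person-item response, which is the layout a mixed-model likelihood loop iterates over:

```python
import pandas as pd

# Wide format: one row per person, one column per item.
wide = pd.DataFrame({
    "person": [1, 2, 3],
    "item1":  [1, 0, 1],
    "item2":  [0, 0, 1],
    "item3":  [1, 1, 1],
})

# Long format: one row per person-item response.
long = wide.melt(id_vars="person", var_name="item", value_name="response")
print(long.sort_values(["person", "item"]).to_string(index=False))
```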
Paek, Insu; Han, Kyung T. – Applied Psychological Measurement, 2013
This article reviews a new item response theory (IRT) model estimation program, IRTPRO 2.1, for Windows that is capable of unidimensional and multidimensional IRT model estimation for existing and user-specified constrained IRT models for dichotomously and polytomously scored item response data. (Contains 1 figure and 2 notes.)
Descriptors: Item Response Theory, Computer Software, Computation, Patients
DeMars, Christine E.; Jurich, Daniel P. – Applied Psychological Measurement, 2012
The nonequivalent groups anchor test (NEAT) design is often used to scale item parameters from two different test forms. A subset of items, called the anchor items or common items, is administered as part of both test forms. These items are used to adjust the item calibrations for any differences in the ability distributions of the groups taking…
Descriptors: Computer Software, Item Response Theory, Scaling, Equated Scores
Svetina, Dubravka; Levy, Roy – Applied Psychological Measurement, 2012
An overview of popular software packages for conducting dimensionality assessment in multidimensional models is presented. Specifically, five popular software packages are described in terms of their capabilities to conduct dimensionality assessment with respect to the nature of analysis (exploratory or confirmatory), types of data (dichotomous,…
Descriptors: Computer Software, Item Response Theory, Models, Factor Analysis
Kieftenbeld, Vincent; Natesan, Prathiba – Applied Psychological Measurement, 2012
Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…
Descriptors: Test Length, Markov Processes, Item Response Theory, Monte Carlo Methods
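The graded response model whose parameter recovery the study compares has a standard cumulative-logit form: each threshold defines a cumulative probability, and adjacent differences give the category probabilities. A minimal Python sketch with invented parameter values (the study's estimation machinery, MML and Gibbs sampling, is not shown):

```python
import numpy as np

def grm_category_probs(theta, a, b):
    """Graded response model: probabilities of the len(b)+1 ordered
    categories, for discrimination a and increasing thresholds b."""
    b = np.asarray(b, dtype=float)
    # Cumulative probabilities P(X >= k) for k = 1..K-1.
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    # Bracket with P(X >= 0) = 1 and P(X >= K) = 0, then difference.
    bounds = np.concatenate(([1.0], p_star, [0.0]))
    return bounds[:-1] - bounds[1:]

probs = grm_category_probs(theta=0.0, a=1.5, b=[-1.0, 0.0, 1.0])
print(np.round(probs, 3))   # four category probabilities, summing to one
```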
Deng, Nina; Han, Kyung T.; Hambleton, Ronald K. – Applied Psychological Measurement, 2013
DIMPACK Version 1.0 for assessing test dimensionality based on a nonparametric conditional covariance approach is reviewed. This software was originally distributed by Assessment Systems Corporation and now can be freely accessed online. The software consists of Windows-based interfaces of three components: DIMTEST, DETECT, and CCPROX/HAC, which…
Descriptors: Item Response Theory, Nonparametric Statistics, Statistical Analysis, Computer Software
Choi, Seung W.; Podrabsky, Tracy; McKinney, Natalie – Applied Psychological Measurement, 2012
Computerized adaptive testing (CAT) enables efficient and flexible measurement of latent constructs. The majority of educational and cognitive measurement constructs are based on dichotomous item response theory (IRT) models. An integral part of developing various components of a CAT system is conducting simulations using both known and empirical…
Descriptors: Computer Assisted Testing, Adaptive Testing, Computer Software, Item Response Theory
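The CAT simulations the abstract mentions typically revolve around one core step: at each stage, administer the unused item that is most informative at the current ability estimate. A hypothetical Python sketch of greedy maximum-information selection for a 2PL item bank (bank values are invented; response simulation and ability updating are omitted):

```python
import numpy as np

rng = np.random.default_rng(1)

# A small invented 2PL item bank: discriminations a and difficulties b.
a = rng.uniform(0.8, 2.0, size=20)
b = rng.uniform(-2.0, 2.0, size=20)

def info_2pl(theta, a, b):
    """Fisher information of 2PL items at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

# Greedy CAT item selection at a fixed ability estimate.
theta_hat = 0.0
used = set()
for step in range(5):
    info = info_2pl(theta_hat, a, b)
    info[list(used)] = -np.inf       # mask already-administered items
    next_item = int(np.argmax(info))
    used.add(next_item)
    print(f"step {step}: item {next_item}, b = {b[next_item]:.2f}")
```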
Johnson, Timothy R. – Applied Psychological Measurement, 2013
One of the distinctions between classical test theory and item response theory is that the former focuses on sum scores and their relationship to true scores, whereas the latter concerns item responses and their relationship to latent scores. Although item response theory is often viewed as the richer of the two theories, sum scores are still…
Descriptors: Item Response Theory, Scores, Computation, Bayesian Statistics
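One standard bridge between item responses and sum scores is the Lord-Wingersky recursion, which builds the sum-score distribution item by item from the item response probabilities. A short Python sketch with invented probabilities (the article's own computations are not shown):

```python
import numpy as np

def sum_score_dist(p_correct):
    """Lord-Wingersky recursion: distribution of the sum score over
    dichotomous items, given each item's probability of a correct
    response at a fixed ability level."""
    dist = np.array([1.0])            # P(score = 0) with zero items
    for p in p_correct:
        new = np.zeros(len(dist) + 1)
        new[:-1] += dist * (1.0 - p)  # item answered incorrectly
        new[1:] += dist * p           # item answered correctly
        dist = new
    return dist

# Three items with correct-response probabilities at some theta.
dist = sum_score_dist([0.8, 0.6, 0.4])
print(np.round(dist, 4))   # P(score = 0), ..., P(score = 3)
```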
Wang, Wen-Chung; Liu, Chen-Wei; Wu, Shiu-Lien – Applied Psychological Measurement, 2013
The random-threshold generalized unfolding model (RTGUM) was developed by treating the thresholds in the generalized unfolding model as random effects rather than fixed effects to account for the subjective nature of the selection of categories in Likert items. The parameters of the new model can be estimated with the JAGS (Just Another Gibbs…
Descriptors: Computer Assisted Testing, Adaptive Testing, Models, Bayesian Statistics
Huang, Hung-Yu; Wang, Wen-Chung; Chen, Po-Hsi; Su, Chi-Ming – Applied Psychological Measurement, 2013
Many latent traits in the human sciences have a hierarchical structure. This study aimed to develop a new class of higher order item response theory models for hierarchical latent traits that are flexible in accommodating both dichotomous and polytomous items, to estimate both item and person parameters jointly, to allow users to specify…
Descriptors: Item Response Theory, Models, Vertical Organization, Bayesian Statistics
Kreiner, Svend – Applied Psychological Measurement, 2011
To rule out the need for a two-parameter item response theory (IRT) model during item analysis by Rasch models, it is important to check the Rasch model's assumption that all items have the same item discrimination. Biserial and polyserial correlation coefficients measuring the association between items and restscores are often used in an informal…
Descriptors: Item Analysis, Correlation, Item Response Theory, Models
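The item-restscore association the abstract describes is easy to illustrate: correlate each item with the total score of the remaining items. A minimal Python sketch on simulated Rasch-like data (point-biserial correlations via `np.corrcoef`, not the biserial/polyserial coefficients the article analyzes):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated 0/1 responses: 200 persons x 10 items, driven by a single
# latent ability so that items discriminate similarly (Rasch-like data).
theta = rng.normal(size=(200, 1))
b = np.linspace(-1.5, 1.5, 10)
responses = (rng.random((200, 10)) < 1 / (1 + np.exp(-(theta - b)))).astype(int)

# Item-restscore correlation: correlate each item with the total score
# of the remaining items, as an informal check that all items
# discriminate to a similar degree.
total = responses.sum(axis=1)
for j in range(responses.shape[1]):
    rest = total - responses[:, j]
    r = np.corrcoef(responses[:, j], rest)[0, 1]
    print(f"item {j}: r = {r:.2f}")
```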
