Publication Date
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 0 |
| Since 2007 (last 20 years) | 11 |
Descriptor
| Computer Software | 12 |
| Item Response Theory | 10 |
| Computation | 8 |
| Models | 5 |
| Bayesian Statistics | 4 |
| Simulation | 4 |
| Comparative Analysis | 3 |
| Maximum Likelihood Statistics | 3 |
| Monte Carlo Methods | 3 |
| Statistical Analysis | 3 |
| Test Bias | 3 |
Source
| Applied Psychological Measurement | 12 |
Author
| Wang, Wen-Chung | 3 |
| Black, Ryan A. | 1 |
| Butler, Stephen F. | 1 |
| Chen, Po-Hsi | 1 |
| Cho, Sun-Joo | 1 |
| De Boeck, Paul | 1 |
| DeMars, Christine E. | 1 |
| Finch, Holmes | 1 |
| Hu, Huiqin | 1 |
| Huang, Hung-Yu | 1 |
| Jin, Kuan-Yu | 1 |
Publication Type
| Journal Articles | 12 |
| Reports - Research | 12 |
Education Level
| Higher Education | 1 |
| Junior High Schools | 1 |
| Middle Schools | 1 |
| Postsecondary Education | 1 |
| Secondary Education | 1 |
Location
| Taiwan | 1 |
Socha, Alan; DeMars, Christine E. – Applied Psychological Measurement, 2013
The software program DIMTEST can be used to assess the unidimensionality of item scores. The software allows the user to specify a guessing parameter. Using simulated data, the effects of guessing parameter specification for use with the ATFIND procedure for empirically deriving the Assessment Subtest (AT; that is, a subtest composed of items that…
Descriptors: Item Response Theory, Computer Software, Guessing (Tests), Simulation
Black, Ryan A.; Butler, Stephen F. – Applied Psychological Measurement, 2012
Although Rasch models have been shown, for more than 50 years, to be a sound methodological approach for developing and validating measures of psychological constructs, they remain underutilized in psychology and other social sciences. Until recently, one reason for this underutilization was the lack of syntactically simple procedures to fit Rasch and…
Descriptors: Computer Software, Item Response Theory, Statistical Analysis
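For reference (standard textbook form, not quoted from the article): the dichotomous Rasch model that such software fits gives the probability of a correct response from person ability θ_p and item difficulty b_i alone, with no discrimination or guessing parameters:

$$P(X_{pi} = 1 \mid \theta_p, b_i) = \frac{\exp(\theta_p - b_i)}{1 + \exp(\theta_p - b_i)}$$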
De Boeck, Paul; Cho, Sun-Joo; Wilson, Mark – Applied Psychological Measurement, 2011
The models used in this article are secondary dimension mixture models with the potential to explain differential item functioning (DIF) between latent classes, called latent DIF. The focus is on models with a secondary dimension that is at the same time specific to the DIF latent class and linked to an item property. A description of the models…
Descriptors: Test Bias, Models, Statistical Analysis, Computation
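As a rough sketch of the latent-DIF idea (the authors' secondary-dimension mixture models are more elaborate than this): in a two-class mixture Rasch model, an item's difficulty shifts by δ_i for members of an unobserved latent class g_p ∈ {0, 1}:

$$\operatorname{logit} P(X_{pi} = 1 \mid \theta_p, g_p) = \theta_p - (b_i + \delta_i\, g_p)$$

The models described here additionally tie the class-specific effect to an item property through a secondary dimension.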
Johnson, Timothy R. – Applied Psychological Measurement, 2013
One of the distinctions between classical test theory and item response theory is that the former focuses on sum scores and their relationship to true scores, whereas the latter concerns item responses and their relationship to latent scores. Although item response theory is often viewed as the richer of the two theories, sum scores are still…
Descriptors: Item Response Theory, Scores, Computation, Bayesian Statistics
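One standard bridge between the two frameworks (a textbook identity, not specific to this article) is that, for dichotomous items, the expected sum score at a given latent score θ is the test characteristic curve:

$$E\left(\sum_{i} X_i \,\middle|\, \theta\right) = \sum_{i} P_i(\theta)$$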
Wang, Wen-Chung; Liu, Chen-Wei; Wu, Shiu-Lien – Applied Psychological Measurement, 2013
The random-threshold generalized unfolding model (RTGUM) was developed by treating the thresholds in the generalized unfolding model as random effects rather than fixed effects to account for the subjective nature of the selection of categories in Likert items. The parameters of the new model can be estimated with the JAGS (Just Another Gibbs…
Descriptors: Computer Assisted Testing, Adaptive Testing, Models, Bayesian Statistics
Nandakumar, Ratna; Yu, Feng; Zhang, Yanwei – Applied Psychological Measurement, 2011
DETECT is a nonparametric methodology to identify the dimensional structure underlying test data. The associated DETECT index, D_max, denotes the degree of multidimensionality in data. Conditional covariances (CCOV) are the building blocks of this index. In specifying population CCOVs, the latent test composite θ_TT…
Descriptors: Nonparametric Statistics, Statistical Analysis, Tests, Data
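For orientation, the population DETECT index is conventionally defined (standard form from the DETECT literature, given here as background rather than quoted from this article) as a signed average of conditional covariances over all item pairs, with δ_ij = 1 when items i and j fall in the same cluster of a partition P and δ_ij = -1 otherwise:

$$D(\mathcal{P}) = \frac{2}{n(n-1)} \sum_{i<j} \delta_{ij}\, E\big[\operatorname{Cov}(X_i, X_j \mid \theta_{TT})\big]$$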
Huang, Hung-Yu; Wang, Wen-Chung; Chen, Po-Hsi; Su, Chi-Ming – Applied Psychological Measurement, 2013
Many latent traits in the human sciences have a hierarchical structure. This study aimed to develop a new class of higher order item response theory models for hierarchical latent traits that are flexible in accommodating both dichotomous and polytomous items, to estimate both item and person parameters jointly, to allow users to specify…
Descriptors: Item Response Theory, Models, Vertical Organization, Bayesian Statistics
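A typical higher order structure (a sketch of the general idea, not necessarily this article's exact parameterization) writes each first-order domain trait as a linear function of the overall trait:

$$\theta_{pk}^{(1)} = \lambda_k\, \theta_p^{(2)} + \varepsilon_{pk}$$

where λ_k is the loading of domain k on the higher order trait and ε_pk is a domain-specific residual.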
Wang, Wen-Chung; Jin, Kuan-Yu – Applied Psychological Measurement, 2010
In this study, all the advantages of slope parameters, random weights, and latent regression are acknowledged when dealing with component and composite items by adding slope parameters and random weights into the standard item response model with internal restrictions on item difficulty and formulating this new model within a multilevel framework…
Descriptors: Test Items, Difficulty Level, Regression (Statistics), Generalization
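In the MIRID family this abstract refers to, the internal restriction is that a composite item's difficulty is a weighted combination of its component difficulties (standard MIRID form, hedged; the extension described here makes the weights random and adds slope parameters):

$$\beta_{\text{composite}} = \sum_{k} w_k\, \beta_k + \tau$$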
Woods, Carol M. – Applied Psychological Measurement, 2011
Differential item functioning (DIF) occurs when an item on a test, questionnaire, or interview has different measurement properties for one group of people versus another, irrespective of true group-mean differences on the constructs being measured. This article is focused on item response theory based likelihood ratio testing for DIF (IRT-LR or…
Descriptors: Simulation, Item Response Theory, Testing, Questionnaires
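The IRT-LR comparison itself is the usual nested-model likelihood ratio test (standard form): a compact model constrains an item's parameters to be equal across groups, an augmented model frees them, and

$$G^2 = -2\left[\log L_{\text{compact}} - \log L_{\text{augmented}}\right]$$

is referred to a chi-square distribution with degrees of freedom equal to the number of freed parameters.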
Finch, Holmes – Applied Psychological Measurement, 2010
The accuracy of item parameter estimates in the multidimensional item response theory (MIRT) model context is one that has not been researched in great detail. This study examines the ability of two confirmatory factor analysis models specifically for dichotomous data to properly estimate item parameters using common formulae for converting factor…
Descriptors: Item Response Theory, Computation, Factor Analysis, Models
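The common conversion formulae in the unidimensional normal-ogive case (presumably what is generalized here; standard background, not quoted from the study) map a standardized factor loading λ_i and threshold τ_i to IRT discrimination and difficulty:

$$a_i = \frac{\lambda_i}{\sqrt{1 - \lambda_i^2}}, \qquad b_i = \frac{\tau_i}{\lambda_i}$$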
Hu, Huiqin; Rogers, W. Todd; Vukmirovic, Zarko – Applied Psychological Measurement, 2008
Common items with inconsistent b-parameter estimates may have a serious impact on item response theory (IRT)--based equating results. To find a better way to deal with the outlier common items with inconsistent b-parameters, the current study investigated the comparability of 10 variations of four IRT-based equating methods (i.e., concurrent…
Descriptors: Item Response Theory, Item Analysis, Computer Simulation, Equated Scores
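For background (standard IRT linking, not a quotation from the study): separate calibrations are aligned with a linear transformation of the latent scale, θ* = Aθ + B, under which item parameters rescale as

$$a_i^* = a_i / A, \qquad b_i^* = A\, b_i + B$$

which is why common items with outlying b estimates can distort the estimated constants A and B.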
Skaggs, Gary; Stevenson, Jose – Applied Psychological Measurement, 1989
Pseudo-Bayesian and joint maximum likelihood procedures were compared for their ability to estimate item parameters for item response theory's (IRT's) three-parameter logistic model. Item responses were generated for sample sizes of 2,000 and 500; test lengths of 35 and 15; and examinees of high, medium, and low ability. (TJH)
Descriptors: Bayesian Statistics, Comparative Analysis, Computer Software, Estimation (Mathematics)
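For reference, the three-parameter logistic model whose item parameters were being estimated has the standard form

$$P_i(\theta) = c_i + (1 - c_i)\,\frac{1}{1 + \exp[-D a_i (\theta - b_i)]}$$

with discrimination a_i, difficulty b_i, pseudo-guessing lower asymptote c_i, and scaling constant D (often 1.7).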

