Showing 1 to 15 of 43 results
Peer reviewed
Magis, David – Applied Psychological Measurement, 2013
This article focuses on the four-parameter logistic (4PL) model as an extension of the usual three-parameter logistic (3PL) model with an upper asymptote possibly different from 1. For a given item with fixed item parameters, Lord derived the value of the latent ability level that maximizes the item information function under the 3PL model. The…
Descriptors: Item Response Theory, Models, Statistical Analysis, Algebra
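For reference, the 4PL item response function this entry discusses, together with the closed form commonly attributed to Lord for the 3PL maximum-information point, can be sketched as follows (notation is ours; a_j may absorb the usual scaling constant D, and setting d_j = 1 recovers the 3PL):

```latex
% 4PL item response function; d_j = 1 gives back the 3PL
P_j(\theta) = c_j + \frac{d_j - c_j}{1 + \exp\{-a_j(\theta - b_j)\}},
\qquad 0 \le c_j < d_j \le 1 .

% Lord's 3PL result referenced in the abstract: the ability level
% that maximizes item j's information function
\theta_{\max} = b_j + \frac{1}{a_j}\,
  \ln\!\left(\frac{1 + \sqrt{1 + 8\,c_j}}{2}\right)
```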
Peer reviewed
Black, Ryan A.; Butler, Stephen F. – Applied Psychological Measurement, 2012
Although Rasch models have for more than 50 years been shown to be a sound methodological approach for developing and validating measures of psychological constructs, they remain underutilized in psychology and other social sciences. Until recently, one reason for this underutilization was the lack of syntactically simple procedures to fit Rasch and…
Descriptors: Computer Software, Item Response Theory, Statistical Analysis
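As a rough illustration of what "fitting a Rasch model" involves, here is a minimal Python sketch of the Rasch response probability and the log-likelihood it implies; it is not the software routine the authors review, and the function names are ours.

```python
import numpy as np

def rasch_prob(theta, b):
    """Rasch (1PL) model: P(correct) = 1 / (1 + exp(-(theta - b)))."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def rasch_loglik(responses, theta, b):
    """Log-likelihood of a 0/1 response matrix (persons x items)
    for given person abilities theta and item difficulties b."""
    p = rasch_prob(theta[:, None], b[None, :])
    return np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
```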
Peer reviewed
Houts, Carrie R.; Edwards, Michael C. – Applied Psychological Measurement, 2013
The violation of the assumption of local independence when applying item response theory (IRT) models has been shown to have a negative impact on all estimates obtained from the given model. Numerous indices and statistics have been proposed to aid analysts in the detection of local dependence (LD). A Monte Carlo study was conducted to evaluate…
Descriptors: Item Response Theory, Psychological Evaluation, Data, Statistical Analysis
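One of the LD indices the abstract alludes to is Yen's Q3, the correlation between two items' residuals after an IRT model has been fitted. A minimal sketch of the idea, assuming model-predicted probabilities are already in hand (an illustration of the general approach, not the specific statistics evaluated in the study):

```python
import numpy as np

def q3_matrix(responses, predicted):
    """Yen's Q3: correlations, computed over persons, between item
    residuals (observed minus model-predicted probability).
    Both arguments are (persons x items) arrays."""
    residuals = responses - predicted
    return np.corrcoef(residuals, rowvar=False)  # (items x items)
```

Off-diagonal entries far from the average value flag item pairs as candidates for local dependence.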
Peer reviewed
Wang, Wei; Tay, Louis; Drasgow, Fritz – Applied Psychological Measurement, 2013
There has been growing use of ideal point models to develop scales measuring important psychological constructs. For meaningful comparisons across groups, it is important to identify items on such scales that exhibit differential item functioning (DIF). In this study, the authors examined several methods for assessing DIF on polytomous items…
Descriptors: Test Bias, Effect Size, Item Response Theory, Statistical Analysis
Peer reviewed
Nandakumar, Ratna; Hotchkiss, Lawrence – Applied Psychological Measurement, 2012
The PROC NLMIXED procedure in the Statistical Analysis System (SAS) can be used to estimate parameters of item response theory (IRT) models. The data for this procedure are set up in a particular format called the "long format." With data in the long format, the program takes a substantial amount of time to execute. This article describes a format called the "wide…
Descriptors: Item Response Theory, Models, Statistical Analysis, Computer Software
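The long/wide distinction is the usual reshaping of a person-by-item table into one row per person-item response. A hypothetical pandas sketch (not the article's SAS code) of going from wide to long:

```python
import pandas as pd

# Wide format: one row per examinee, one column per item (toy data).
wide = pd.DataFrame({"person": [1, 2],
                     "item1": [1, 0],
                     "item2": [0, 1]})

# Long format: one row per person-item response.
long = wide.melt(id_vars="person", var_name="item", value_name="response")
print(long)
```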
Peer reviewed
Chiu, Chia-Yi – Applied Psychological Measurement, 2013
Most methods for fitting cognitive diagnosis models to educational test data and assigning examinees to proficiency classes require the Q-matrix that associates each item in a test with the cognitive skills (attributes) needed to answer it correctly. In most cases, the Q-matrix is not known but is constructed from the (fallible) judgments of…
Descriptors: Cognitive Tests, Diagnostic Tests, Models, Statistical Analysis
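For readers unfamiliar with the term, a Q-matrix simply flags which attributes each item requires; under a conjunctive (DINA-type) model it determines the ideal response pattern for a given attribute profile. A small illustrative sketch with made-up entries:

```python
import numpy as np

# Q-matrix: rows = items, columns = attributes;
# Q[j, k] = 1 if item j requires attribute k.
Q = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 1, 1]])

# One examinee's attribute profile: 1 = attribute mastered.
alpha = np.array([1, 1, 0])

# Ideal response under a conjunctive (DINA-type) rule: an item is
# answered correctly only if every required attribute is mastered.
eta = np.all(alpha >= Q, axis=1).astype(int)
print(eta)  # -> [1 1 0]
```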
Peer reviewed
Liu, Yang; Thissen, David – Applied Psychological Measurement, 2012
Local dependence (LD) refers to the violation of the local independence assumption of most item response models. Statistics that indicate LD between a pair of items on a test or questionnaire that is being fitted with an item response model can play a useful diagnostic role in applications of item response theory. In this article, a new score test…
Descriptors: Item Response Theory, Statistical Analysis, Models, Identification
Peer reviewed
Deng, Nina; Han, Kyung T.; Hambleton, Ronald K. – Applied Psychological Measurement, 2013
DIMPACK Version 1.0 for assessing test dimensionality based on a nonparametric conditional covariance approach is reviewed. This software was originally distributed by Assessment Systems Corporation and can now be freely accessed online. The software consists of Windows-based interfaces for three components: DIMTEST, DETECT, and CCPROX/HAC, which…
Descriptors: Item Response Theory, Nonparametric Statistics, Statistical Analysis, Computer Software
Peer reviewed
DeMars, Christine E. – Applied Psychological Measurement, 2012
A testlet is a cluster of items that share a common passage, scenario, or other context. These items might measure something in common beyond the trait measured by the test as a whole; if so, the model for the item responses should allow for this testlet trait. But modeling testlet effects that are negligible makes the model unnecessarily…
Descriptors: Test Items, Item Response Theory, Comparative Analysis, Models
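A common way to write the testlet trait this entry refers to is a 2PL with a person-specific testlet effect added alongside ability; when the testlet variance is zero, the model collapses to an ordinary 2PL. A sketch of that standard formulation (not necessarily the exact parameterization used in the study):

```latex
% gamma_{i,d(j)} is person i's effect for the testlet d(j) that
% contains item j; Var(gamma) = 0 removes the testlet trait.
P(X_{ij} = 1 \mid \theta_i) =
  \frac{1}{1 + \exp\{-a_j(\theta_i - b_j - \gamma_{i\,d(j)})\}}
```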
Peer reviewed
Babcock, Ben; Albano, Anthony D. – Applied Psychological Measurement, 2012
Testing programs often rely on common-item equating to maintain a single measurement scale across multiple test administrations and multiple years. Changes over time in the item parameters and the latent trait underlying the scale can lead to inaccurate score comparisons and misclassifications of examinees. This study examined how instability in…
Descriptors: Test Items, Measurement, Item Response Theory, Predictor Variables
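One widely used form of common-item linking is mean/sigma scaling of the common items' difficulty estimates. A minimal Python sketch with illustrative numbers (only one of several linking methods the equating literature uses):

```python
import numpy as np

def mean_sigma_link(b_new, b_base):
    """Mean/sigma linking constants from common-item difficulties:
    place the new form on the base scale via theta_base = A*theta_new + B."""
    A = np.std(b_base, ddof=1) / np.std(b_new, ddof=1)
    B = np.mean(b_base) - A * np.mean(b_new)
    return A, B

# Hypothetical common-item difficulty estimates on the two forms.
A, B = mean_sigma_link(np.array([-0.8, 0.1, 0.9]),
                       np.array([-0.6, 0.3, 1.1]))
```

Drift in the common items' parameters distorts A and B, which is exactly the kind of instability the study examines.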
Peer reviewed
Veldkamp, Bernard P.; Matteucci, Mariagiulia; de Jong, Martijn G. – Applied Psychological Measurement, 2013
Item response theory parameters have to be estimated, and the estimation process leaves uncertainty in them. In most large-scale testing programs, the parameters are stored in item banks, and automated test assembly algorithms are applied to assemble operational test forms. These algorithms treat item parameters as fixed values,…
Descriptors: Test Construction, Test Items, Item Banks, Automation
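To make the "fixed values" point concrete, here is a deliberately simplified greedy sketch of assembling a fixed-length form by maximizing 2PL Fisher information at a target ability, treating the estimated parameters as exact; operational assembly engines typically solve an integer program, and the uncertainty-aware methods this article studies go further.

```python
import numpy as np

def fisher_info_2pl(theta, a, b):
    """2PL Fisher information of each item at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1 - p)

def greedy_assembly(a, b, theta_target, length):
    """Pick `length` items with the most information at theta_target,
    treating the estimated item parameters as if they were exact."""
    info = fisher_info_2pl(theta_target, a, b)
    return np.argsort(info)[::-1][:length]
```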
Peer reviewed
Yao, Lihua – Applied Psychological Measurement, 2011
The No Child Left Behind Act requires state assessments to report not only overall scores but also domain scores. To provide information on students' overall achievement, progress, and detailed strengths and weaknesses, and thereby identify areas for improvement in educational quality, students' performances across years or across forms need to be…
Descriptors: Scores, Item Response Theory, Achievement Tests, Test Items
Peer reviewed
Woods, Carol M. – Applied Psychological Measurement, 2011
Differential item functioning (DIF) occurs when an item on a test, questionnaire, or interview has different measurement properties for one group of people versus another. One way to test items with ordinal response scales for DIF is likelihood ratio (LR) testing using item response theory (IRT), or IRT-LR-DIF. Despite the various advantages of…
Descriptors: Test Bias, Test Items, Item Response Theory, Nonparametric Statistics
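The core of IRT-LR-DIF is a likelihood ratio comparison between a model that constrains the studied item's parameters to be equal across groups and one that frees them. A sketch assuming the two maximized log-likelihoods are already available (the function name is ours):

```python
from scipy.stats import chi2

def lr_dif_test(loglik_constrained, loglik_free, df):
    """Likelihood ratio DIF test: G2 = 2*(lnL_free - lnL_constrained),
    referred to a chi-square with df = number of freed item parameters."""
    g2 = 2.0 * (loglik_free - loglik_constrained)
    return g2, chi2.sf(g2, df)
```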
Peer reviewed
Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G. – Applied Psychological Measurement, 2012
When tests consist of multiple-choice and constructed-response items, researchers are confronted with the question of which item response theory (IRT) model combination will appropriately represent the data collected from these mixed-format tests. This simulation study examined the performance of six model selection criteria, including the…
Descriptors: Item Response Theory, Models, Selection, Criteria
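Among the criteria such model comparisons typically draw on are AIC and BIC (the study's full set of six is not reproduced here); their standard forms are easy to state:

```python
import numpy as np

def aic(loglik, n_params):
    """Akaike information criterion: smaller is better."""
    return -2.0 * loglik + 2.0 * n_params

def bic(loglik, n_params, n_obs):
    """Bayesian information criterion: penalizes extra parameters
    more heavily as the number of observations grows."""
    return -2.0 * loglik + n_params * np.log(n_obs)
```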
Peer reviewed
Liu, Ying; Douglas, Jeffrey A.; Henson, Robert A. – Applied Psychological Measurement, 2009
In cognitive diagnosis, the test-taking behavior of some examinees may be idiosyncratic, so that their test scores may not reflect their true cognitive abilities as well as those of more typical examinees do. Statistical tests are developed to recognize the following: (a) nonmasters of the required attributes who correctly answer the item (spuriously…
Descriptors: Personality Theories, Response Style (Tests), Scores, Cognitive Tests