Showing 1 to 15 of 20 results
Peer reviewed
Lei, Pui-Wa; Li, Hongli – Applied Psychological Measurement, 2013
Minimum sample sizes of about 200 to 250 per group are often recommended for differential item functioning (DIF) analyses. However, there are times when sample sizes for one or both groups of interest are smaller than 200 because of practical constraints. This study examines the performance of the Simultaneous Item Bias Test (SIBTEST),…
Descriptors: Sample Size, Test Bias, Computation, Accuracy
Peer reviewed
Jingchen Liu; Gongjun Xu; Zhiliang Ying – Applied Psychological Measurement, 2012
The recent surge of interest in cognitive assessment has led to the development of novel statistical models for diagnostic classification. Central to many such models is the well-known "Q"-matrix, which specifies the item-attribute relationships. This article proposes a data-driven approach to identification of the "Q"-matrix and…
Descriptors: Matrices, Computation, Statistical Analysis, Models
Peer reviewed
Chen, Jinsong; de la Torre, Jimmy – Applied Psychological Measurement, 2013
Polytomous attributes, particularly those defined as part of the test development process, can provide additional diagnostic information. The present research proposes the polytomous generalized deterministic inputs, noisy, "and" gate (pG-DINA) model to accommodate such attributes. The pG-DINA model allows input from substantive experts…
Descriptors: Models, Cognitive Tests, Diagnostic Tests, Computation
Peer reviewed
Nandakumar, Ratna; Hotchkiss, Lawrence – Applied Psychological Measurement, 2012
The PROC NLMIXED procedure in the Statistical Analysis System (SAS) can be used to estimate parameters of item response theory (IRT) models. The data for this procedure are set up in a particular format called the "long format." Programs using the long format take a substantial amount of time to execute. This article describes a format called the "wide…
Descriptors: Item Response Theory, Models, Statistical Analysis, Computer Software
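For readers unfamiliar with the distinction, the two data layouts can be sketched outside SAS. The following is a hypothetical reshape in plain Python (PROC NLMIXED itself is not involved, and the field names are invented for illustration):

```python
# Illustrative sketch of "wide" vs. "long" data layouts. Wide format keeps
# one record per examinee with one field per item; long format keeps one
# record per examinee-item pair, so an n-person, k-item data set grows
# from n rows to n * k rows.

wide = [
    {"person": 1, "item1": 1, "item2": 0},
    {"person": 2, "item1": 0, "item2": 1},
]

# Reshape wide -> long: one (person, item, response) record per cell
long_rows = [
    {"person": rec["person"], "item": item, "response": rec[item]}
    for rec in wide
    for item in ("item1", "item2")
]
print(len(long_rows))  # 2 persons x 2 items = 4 rows
```

The long layout is what likelihood-based routines typically iterate over, which is why its size drives execution time.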
Peer reviewed
Kalender, Ilker – Applied Psychological Measurement, 2012
catcher is a software program designed to compute the ω (omega) index, a common statistical index for the identification of collusion (cheating) among examinees taking an educational or psychological test. It requires (a) individuals' responses, (b) their ability estimates, and (c) item parameters to make computations, and outputs the results of…
Descriptors: Computer Software, Computation, Statistical Analysis, Cheating
Peer reviewed
Liu, Yang; Thissen, David – Applied Psychological Measurement, 2012
Local dependence (LD) refers to the violation of the local independence assumption of most item response models. Statistics that indicate LD between a pair of items on a test or questionnaire that is being fitted with an item response model can play a useful diagnostic role in applications of item response theory. In this article, a new score test…
Descriptors: Item Response Theory, Statistical Analysis, Models, Identification
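By way of context, the best-known diagnostic of this kind is Yen's Q3: the correlation, across examinees, of an item pair's model residuals. The sketch below is illustrative only and is not the score test the article proposes; the data values are hypothetical.

```python
import math

# Illustrative local-dependence diagnostic (Yen's Q3, not the article's
# score test): correlate two items' residuals (observed 0/1 score minus
# model-implied probability of a correct response) across examinees.
# Large |Q3| flags a possibly locally dependent item pair.

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (math.sqrt(sum((a - mx) ** 2 for a in x))
           * math.sqrt(sum((b - my) ** 2 for b in y)))
    return num / den

def q3(scores_i, scores_j, p_i, p_j):
    """Q3 for items i and j: correlation of their residuals over examinees."""
    res_i = [x - p for x, p in zip(scores_i, p_i)]
    res_j = [x - p for x, p in zip(scores_j, p_j)]
    return pearson_r(res_i, res_j)

# Hypothetical data: identical residual patterns give Q3 close to 1
scores = [1, 0, 1, 0]
probs = [0.8, 0.3, 0.7, 0.4]
print(q3(scores, scores, probs, probs))
```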
Peer reviewed
De Boeck, Paul; Cho, Sun-Joo; Wilson, Mark – Applied Psychological Measurement, 2011
The models used in this article are secondary dimension mixture models with the potential to explain differential item functioning (DIF) between latent classes, called latent DIF. The focus is on models with a secondary dimension that is at the same time specific to the DIF latent class and linked to an item property. A description of the models…
Descriptors: Test Bias, Models, Statistical Analysis, Computation
Peer reviewed
Veldkamp, Bernard P.; Matteucci, Mariagiulia; de Jong, Martijn G. – Applied Psychological Measurement, 2013
Item response theory parameters have to be estimated, and the estimation process introduces uncertainty into them. In most large-scale testing programs, the parameters are stored in item banks, and automated test assembly algorithms are applied to assemble operational test forms. These algorithms treat item parameters as fixed values,…
Descriptors: Test Construction, Test Items, Item Banks, Automation
Peer reviewed
Beauducel, Andre – Applied Psychological Measurement, 2013
The problem of factor score indeterminacy implies that the factor and the error scores cannot be completely disentangled in the factor model. It is therefore proposed to compute Harman's factor score predictor that contains an additive combination of factor and error variance. This additive combination is discussed in the framework of classical…
Descriptors: Factor Analysis, Predictor Variables, Reliability, Error of Measurement
Peer reviewed
Zhang, Jinming – Applied Psychological Measurement, 2012
It is common to assume during a statistical analysis of a multiscale assessment that the assessment is composed of several unidimensional subtests or that it has simple structure. Under this assumption, the unidimensional and multidimensional approaches can be used to estimate item parameters. These two approaches are equivalent in parameter…
Descriptors: Simulation, Computation, Models, Statistical Analysis
Peer reviewed
Babcock, Ben – Applied Psychological Measurement, 2011
Relatively little research has been conducted with the noncompensatory class of multidimensional item response theory (MIRT) models. A Monte Carlo simulation study was conducted exploring the estimation of a two-parameter noncompensatory item response theory (IRT) model. The estimation method used was a Metropolis-Hastings within Gibbs algorithm…
Descriptors: Item Response Theory, Sampling, Computation, Statistical Analysis
Peer reviewed
Nandakumar, Ratna; Yu, Feng; Zhang, Yanwei – Applied Psychological Measurement, 2011
DETECT is a nonparametric methodology to identify the dimensional structure underlying test data. The associated DETECT index, D_max, denotes the degree of multidimensionality in data. Conditional covariances (CCOV) are the building blocks of this index. In specifying population CCOVs, the latent test composite θ_TT…
Descriptors: Nonparametric Statistics, Statistical Analysis, Tests, Data
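To make the "building blocks" concrete, a conditional covariance for an item pair can be estimated by grouping examinees on the rest score (total score excluding the pair) and averaging the within-group covariances. This is a minimal sketch under that textbook definition, not the authors' estimator:

```python
from collections import defaultdict

# Minimal sketch of an item-pair conditional covariance (CCOV): group
# examinees by rest score (total score minus the two items of interest),
# compute the pair's covariance within each group, and average the group
# covariances weighted by group size.

def conditional_cov(data, i, j):
    """Weighted mean within-rest-score-group covariance of items i and j.

    data: list of 0/1 response vectors, one per examinee.
    """
    groups = defaultdict(list)
    for resp in data:
        rest = sum(resp) - resp[i] - resp[j]
        groups[rest].append((resp[i], resp[j]))
    total, n = 0.0, 0
    for pairs in groups.values():
        k = len(pairs)
        mi = sum(x for x, _ in pairs) / k
        mj = sum(y for _, y in pairs) / k
        cov = sum((x - mi) * (y - mj) for x, y in pairs) / k
        total += cov * k
        n += k
    return total / n

# Hypothetical data: items 0 and 1 always agree, so their CCOV is positive
data = [[1, 1, 0], [0, 0, 1], [1, 1, 1], [0, 0, 0]]
print(conditional_cov(data, 0, 1))  # 0.25
```

Positive CCOVs concentrated within clusters of items are what DETECT aggregates into its index.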
Peer reviewed
Roberts, James S.; Thompson, Vanessa M. – Applied Psychological Measurement, 2011
A marginal maximum a posteriori (MMAP) procedure was implemented to estimate item parameters in the generalized graded unfolding model (GGUM). Estimates from the MMAP method were compared with those derived from marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) procedures in a recovery simulation that varied sample size,…
Descriptors: Statistical Analysis, Markov Processes, Computation, Monte Carlo Methods
Peer reviewed
DeCarlo, Lawrence T. – Applied Psychological Measurement, 2011
Cognitive diagnostic models (CDMs) attempt to uncover latent skills or attributes that examinees must possess in order to answer test items correctly. The DINA (deterministic input, noisy "and") model is a popular CDM that has been widely used. It is shown here that a logistic version of the model can easily be fit with standard software for…
Descriptors: Bayesian Statistics, Computation, Cognitive Tests, Diagnostic Tests
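As an illustration of the DINA model's conjunctive response rule (a minimal sketch with invented guessing and slip values, not the logistic reparameterization the article fits):

```python
# Minimal sketch of the DINA response rule. An examinee answers item j
# correctly with probability 1 - s_j (slip) if they master every attribute
# the Q-matrix requires for j, and with guessing probability g_j otherwise.

def dina_prob(alpha, q_row, guess, slip):
    """P(correct) for one examinee on one item under the DINA model.

    alpha: examinee's 0/1 attribute-mastery vector
    q_row: the item's row of the Q-matrix (required attributes)
    """
    # eta = 1 iff every required attribute is mastered (conjunctive rule)
    eta = all(a >= q for a, q in zip(alpha, q_row))
    return (1.0 - slip) if eta else guess

# Item requiring attributes 1 and 2 (hypothetical g = 0.2, s = 0.1)
q_row = [1, 1, 0]
print(dina_prob([1, 1, 0], q_row, guess=0.2, slip=0.1))  # master: 0.9
print(dina_prob([1, 0, 0], q_row, guess=0.2, slip=0.1))  # non-master: 0.2
```

The all-or-nothing eta is what makes the model "deterministic input, noisy and": attributes combine deterministically, and only guessing and slipping add noise.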
Peer reviewed
Wang, Tianyou; Brennan, Robert L. – Applied Psychological Measurement, 2009
Frequency estimation, also called poststratification, is an equating method used under the common-item nonequivalent groups design. A modified frequency estimation method is proposed here, based on altering one of the traditional assumptions in frequency estimation in order to correct for equating bias. A simulation study was carried out to…
Descriptors: Computation, Bias, Comparative Analysis, Statistical Analysis