Showing all 15 results
Peer reviewed
Direct link
Liu, Yang; Thissen, David – Applied Psychological Measurement, 2012
Local dependence (LD) refers to the violation of the local independence assumption of most item response models. Statistics that indicate LD between a pair of items on a test or questionnaire that is being fitted with an item response model can play a useful diagnostic role in applications of item response theory. In this article, a new score test…
Descriptors: Item Response Theory, Statistical Analysis, Models, Identification
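The score test proposed in the article is not reproduced here, but a classic pairwise LD diagnostic of the kind the abstract describes, Yen's Q3 (the correlation of item residuals after conditioning on the latent trait), can be sketched in a few lines. The Rasch model, sample sizes, and seed below are illustrative assumptions, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items = 1000, 5
theta = rng.normal(size=n_persons)          # latent ability
b = np.linspace(-1, 1, n_items)             # item difficulties

# Rasch-model response probabilities and simulated responses
p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
x = (rng.random((n_persons, n_items)) < p).astype(float)

# Yen's Q3: correlate item residuals (observed minus expected) across
# persons; off-diagonal values far from 0 flag locally dependent pairs.
resid = x - p
q3 = np.corrcoef(resid, rowvar=False)
print(np.round(q3, 2))
```

Because the data here are generated under local independence, the off-diagonal Q3 values hover near zero; injecting a shared nuisance factor into a pair of items would push their Q3 entry away from zero.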
Peer reviewed
Direct link
Beauducel, Andre – Applied Psychological Measurement, 2013
The problem of factor score indeterminacy implies that the factor and the error scores cannot be completely disentangled in the factor model. It is therefore proposed to compute Harman's factor score predictor that contains an additive combination of factor and error variance. This additive combination is discussed in the framework of classical…
Descriptors: Factor Analysis, Predictor Variables, Reliability, Error of Measurement
Peer reviewed
Direct link
Finch, Holmes – Applied Psychological Measurement, 2011
Estimation of multidimensional item response theory (MIRT) model parameters can be carried out using the normal ogive with unweighted least squares estimation with the normal-ogive harmonic analysis robust method (NOHARM) software. Previous simulation research has demonstrated that this approach does yield accurate and efficient estimates of item…
Descriptors: Item Response Theory, Computation, Test Items, Simulation
Peer reviewed
Direct link
Shih, Ching-Lin; Wang, Wen-Chung – Applied Psychological Measurement, 2009
The multiple indicators, multiple causes (MIMIC) method with a pure short anchor was proposed to detect differential item functioning (DIF). A simulation study showed that the MIMIC method with an anchor of 1, 2, 4, or 10 DIF-free items yielded a well-controlled Type I error rate even when such tests contained as many as 40% DIF items. In general,…
Descriptors: Test Bias, Simulation, Methods, Factor Analysis
Peer reviewed
Direct link
Ferrando, Pere Joan – Applied Psychological Measurement, 2010
This article proposes several statistics for assessing individual fit based on two unidimensional models for continuous responses: linear factor analysis and Samejima's continuous response model. Both models are approached using a common framework based on underlying response variables and are formulated at the individual level as fixed regression…
Descriptors: Factor Analysis, Statistics, Psychological Studies, Simulation
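Ferrando's specific statistics are not reproduced here, but the general idea of individual (person) fit under a linear factor model can be sketched as a sum of squared standardized residuals per person, which is approximately chi-square distributed when the model holds. Treating the loadings, error variance, and trait scores as known is a simplifying assumption for the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 300, 6
theta = rng.normal(size=(n, 1))             # latent trait scores
lam = np.full((1, k), 0.8)                  # factor loadings (known here)
x = theta @ lam + rng.normal(scale=0.6, size=(n, k))

# Person fit as the sum of squared standardized residuals per person;
# under the model this is approximately chi-square with k degrees of freedom.
resid = (x - theta @ lam) / 0.6
fit = (resid ** 2).sum(axis=1)
flagged = np.where(fit > 16.81)[0]          # chi-square(6) critical value, alpha = .01
print(round(fit.mean(), 2), len(flagged))
```

In practice the parameters would be estimated rather than known, which inflates the statistic's variability; that complication is exactly what careful person-fit proposals address.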
Peer reviewed
Direct link
Yurdugul, Halil – Applied Psychological Measurement, 2009
This article describes SIMREL, a software program designed for the simulation of alpha coefficients and the estimation of their confidence intervals. SIMREL offers two modes. In the first, when SIMREL is run on a single data file, it performs descriptive statistics, principal components analysis, and variance analysis of the item scores…
Descriptors: Intervals, Monte Carlo Methods, Computer Software, Factor Analysis
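SIMREL itself is not shown here, but the core quantities it simulates, coefficient alpha and an interval around it, can be sketched with a percentile bootstrap. The data-generating model and the bootstrap interval below are illustrative assumptions, not SIMREL's actual procedure.

```python
import numpy as np

def cronbach_alpha(x):
    """Coefficient alpha for an n_persons x n_items score matrix."""
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(1)
n, k = 500, 10
common = rng.normal(size=(n, 1))              # shared true score
scores = common + rng.normal(size=(n, k))     # item scores with unit-variance error

alpha = cronbach_alpha(scores)

# Percentile bootstrap interval for alpha (one simulation-based option)
boots = [cronbach_alpha(scores[rng.integers(0, n, n)]) for _ in range(1000)]
lo, hi = np.percentile(boots, [2.5, 97.5])
print(round(alpha, 3), round(lo, 3), round(hi, 3))
```

For this generating model the population alpha is about 0.91, and the bootstrap interval brackets the sample estimate.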
Peer reviewed
Direct link
Froelich, Amy G.; Habing, Brian – Applied Psychological Measurement, 2008
DIMTEST is a nonparametric hypothesis-testing procedure designed to test the assumptions of a unidimensional and locally independent item response theory model. Several previous Monte Carlo studies have found that using linear factor analysis to select the assessment subtest for DIMTEST results in a moderate to severe loss of power when the exam…
Descriptors: Test Items, Monte Carlo Methods, Form Classes (Languages), Program Effectiveness
Peer reviewed
Direct link
van Abswoude, Alexandra A. H.; van der Ark, L. Andries; Sijtsma, Klaas – Applied Psychological Measurement, 2004
In this article, an overview of nonparametric item response theory methods for determining the dimensionality of item response data is provided. Four methods were considered: MSP, DETECT, HCA/CCPROX, and DIMTEST. First, the methods were compared theoretically. Second, a simulation study was done to compare the effectiveness of MSP, DETECT, and…
Descriptors: Comparative Analysis, Computer Software, Simulation, Nonparametric Statistics
Peer reviewed
Direct link
Hoshino, Takahiro; Shigemasu, Kazuo – Applied Psychological Measurement, 2008
The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte…
Descriptors: Monte Carlo Methods, Markov Processes, Factor Analysis, Computation
Peer reviewed
Hattie, John; And Others – Applied Psychological Measurement, 1996
A simulation study was conducted to evaluate the dependability of the "T" index of unidimensionality developed by W. F. Stout and used in his DIMTEST procedure. DIMTEST was found to provide dependable indications of unidimensionality, to be reasonably robust, and to allow for practical demarcation between one and many dimensions. (SLD)
Descriptors: Factor Analysis, Item Response Theory, Robustness (Statistics), Simulation
Peer reviewed
Direct link
Balazs, Katalin; Hidegkuti, Istvan; De Boeck, Paul – Applied Psychological Measurement, 2006
In the context of item response theory, it is not uncommon that person-by-item data are correlated beyond the correlation that is captured by the model--in other words, there is extra-binomial variation. Heterogeneity of the parameters can explain this variation. There is a need for proper statistical methods to indicate possible extra…
Descriptors: Models, Regression (Statistics), Item Response Theory, Correlation
Peer reviewed
Liou, Michelle – Applied Psychological Measurement, 1988
In applying I. I. Bejar's method for detecting the dimensionality of achievement tests, researchers should be cautious in interpreting the slope of the principal axis. Other information from the data is needed in conjunction with Bejar's method of addressing item dimensionality. (SLD)
Descriptors: Achievement Tests, Computer Simulation, Difficulty Level, Equated Scores
Peer reviewed
Roznowski, Mary; And Others – Applied Psychological Measurement, 1991
Three heuristic methods of assessing the dimensionality of binary item pools were evaluated in a Monte Carlo investigation. The indices were based on (1) the local independence of unidimensional tests; (2) patterns of second-factor loadings derived from simplex theory; and (3) the shape of the curve of successive eigenvalues. (SLD)
Descriptors: Comparative Analysis, Computer Simulation, Correlation, Evaluation Methods
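The third index above, the shape of the curve of successive eigenvalues, is easy to illustrate: for a unidimensional item pool, the first eigenvalue of the inter-item correlation matrix dominates and the remaining eigenvalues form a flat tail. The one-factor generating model and the eigenvalue-ratio reading below are illustrative assumptions, not the article's exact indices.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 1000, 8
f = rng.normal(size=(n, 1))                               # single common factor
x = f @ np.full((1, k), 0.7) + rng.normal(size=(n, k))    # unidimensional item pool

# Successive eigenvalues of the inter-item correlation matrix:
# one dominant eigenvalue followed by a flat tail suggests one factor.
eigvals = np.linalg.eigvalsh(np.corrcoef(x, rowvar=False))[::-1]
ratio = eigvals[0] / eigvals[1]
print(np.round(eigvals, 2), round(ratio, 2))
```

A large first-to-second eigenvalue ratio is the scree-style signal of unidimensionality; with truly binary items one would work from tetrachoric rather than Pearson correlations, a refinement omitted here.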
Peer reviewed
Camilli, Gregory – Applied Psychological Measurement, 1992
A mathematical model is proposed to describe how group differences in distributions of abilities, which are distinct from the target ability, influence the probability of a correct item response. In the multidimensional approach, differential item functioning is considered a function of the educational histories of the examinees. (SLD)
Descriptors: Ability, Comparative Analysis, Equations (Mathematics), Factor Analysis
Peer reviewed
Donoghue, John R.; Cliff, Norman – Applied Psychological Measurement, 1991
The validity of the assumptions under which the ordinal true score test theory was derived was examined using (1) simulation based on classical test theory; (2) a long empirical test with data from 321 sixth graders; and (3) an extensive simulation with 480 datasets based on the 3-parameter model. (SLD)
Descriptors: Computer Simulation, Elementary Education, Elementary School Students, Equations (Mathematics)