Showing 1 to 15 of 23 results
Peer reviewed
Culpepper, Steven Andrew – Applied Psychological Measurement, 2012
Measurement error significantly biases interaction effects and distorts researchers' inferences regarding interactive hypotheses. This article focuses on the single-indicator case and shows how to accurately estimate group slope differences by disattenuating interaction effects with errors-in-variables (EIV) regression. New analytic findings were…
Descriptors: Evidence, Test Length, Interaction, Regression (Statistics)
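The disattenuation idea in this abstract can be illustrated with a short simulation. The numbers below (true slope 2.0, reliability 0.8) are hypothetical, and the one-line correction shown is the textbook single-indicator errors-in-variables adjustment, not the article's full treatment of group slope differences.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
rho = 0.8                        # assumed reliability of the observed predictor

true_x = rng.normal(size=n)      # latent predictor, variance 1
err_var = (1 - rho) / rho        # error variance that yields reliability rho
x_obs = true_x + rng.normal(scale=np.sqrt(err_var), size=n)
y = 2.0 * true_x + rng.normal(size=n)

# Naive OLS slope on the error-laden predictor is attenuated toward zero.
b_naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)

# EIV correction: divide by the reliability of the observed predictor.
b_eiv = b_naive / rho
```

Dividing the naive slope by the predictor's reliability undoes the attenuation; the article's contribution concerns applying this logic to interaction (group slope difference) terms.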
Peer reviewed
Morse, Brendan J.; Johanson, George A.; Griffeth, Rodger W. – Applied Psychological Measurement, 2012
Recent simulation research has demonstrated that using a simple raw score to operationalize a latent construct can result in inflated Type I error rates for the interaction term of a moderated statistical model when the interaction (or lack thereof) is proposed at the latent variable level. Rescaling the scores using an appropriate item response…
Descriptors: Item Response Theory, Multiple Regression Analysis, Error of Measurement, Models
Peer reviewed
Finch, W. Holmes – Applied Psychological Measurement, 2012
Increasingly, researchers interested in identifying potentially biased test items are encouraged to use a confirmatory, rather than exploratory, approach. One such method for confirmatory testing is rooted in differential bundle functioning (DBF), where hypotheses regarding potential differential item functioning (DIF) for sets of items (bundles)…
Descriptors: Test Bias, Test Items, Statistical Analysis, Models
Peer reviewed
Kim, Doyoung; De Ayala, R. J.; Ferdous, Abdullah A.; Nering, Michael L. – Applied Psychological Measurement, 2011
To realize the benefits of item response theory (IRT), one must have model-data fit. One facet of a model-data fit investigation involves assessing the tenability of the conditional item independence (CII) assumption. In this Monte Carlo study, the comparative performance of 10 indices for identifying conditional item dependence is assessed. The…
Descriptors: Item Response Theory, Monte Carlo Methods, Error of Measurement, Statistical Analysis
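The abstract does not name the ten indices compared; as a hedged illustration, one widely used index of conditional (local) item dependence is Yen's Q3, the correlation of item residuals after conditioning on ability. The sketch below simulates Rasch data and, for simplicity, conditions on the true abilities rather than estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
n_persons, n_items = 2000, 5
theta = rng.normal(size=n_persons)          # person abilities
b = np.linspace(-1, 1, n_items)             # item difficulties

# Rasch model probabilities and simulated dichotomous responses
p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
x = (rng.random((n_persons, n_items)) < p).astype(float)

# Yen's Q3: correlate item residuals; values near 0 suggest
# conditional item independence holds for that pair.
resid = x - p                               # residuals given (true) theta
q3 = np.corrcoef(resid, rowvar=False)       # off-diagonals are pairwise Q3
```

Under conditional independence the off-diagonal Q3 values hover near zero; markedly positive pairs flag locally dependent items.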
Peer reviewed
Kim, Seonghoon – Applied Psychological Measurement, 2010
The three types (generalized, unweighted, and weighted) of least squares methods, proposed by Ogasawara, for estimating item response theory (IRT) linking coefficients under dichotomous models are extended to the graded response model. A simulation study was conducted to confirm the accuracy of the extended formulas, and a real data study was…
Descriptors: Least Squares Statistics, Computation, Item Response Theory, Models
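Ogasawara's generalized, unweighted, and weighted formulas are not reproduced here; as a minimal sketch of the underlying idea, the unweighted case reduces to an ordinary least-squares fit of the linking coefficients A and B to the common items' difficulty estimates on the two scales (the values below are hypothetical).

```python
import numpy as np

# Difficulties of common items on the old scale, and the same items
# on a new scale related by old = A * new + B with A = 1.2, B = -0.5.
b_old = np.array([-1.5, -0.5, 0.0, 0.8, 1.6])
b_new = (b_old + 0.5) / 1.2

# Unweighted least squares: choose A, B minimizing sum (A*b_new + B - b_old)^2
X = np.column_stack([b_new, np.ones_like(b_new)])
(A, B), *_ = np.linalg.lstsq(X, b_old, rcond=None)
```

With error-free difficulties the fit is exact; in practice the parameter estimates are noisy, which is what the weighted and generalized variants are designed to account for.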
Peer reviewed
Laenen, Annouschka; Alonso, Ariel; Molenberghs, Geert; Vangeneugden, Tony; Mallinckrodt, Craig H. – Applied Psychological Measurement, 2010
Longitudinal studies are permeating clinical trials in psychiatry. Therefore, it is of utmost importance to study the psychometric properties of rating scales, frequently used in these trials, within a longitudinal framework. However, intrasubject serial correlation and memory effects are problematic issues often encountered in longitudinal data.…
Descriptors: Psychiatry, Rating Scales, Memory, Psychometrics
Peer reviewed
Finch, Holmes – Applied Psychological Measurement, 2010
The accuracy of item parameter estimates in the multidimensional item response theory (MIRT) model context is one that has not been researched in great detail. This study examines the ability of two confirmatory factor analysis models specifically for dichotomous data to properly estimate item parameters using common formulae for converting factor…
Descriptors: Item Response Theory, Computation, Factor Analysis, Models
Peer reviewed
Lee, Won-Chan – Applied Psychological Measurement, 2007
This article introduces a multinomial error model, which models an examinee's test scores obtained over repeated measurements of an assessment that consists of polytomously scored items. A compound multinomial error model is also introduced for situations in which items are stratified according to content categories and/or prespecified numbers of…
Descriptors: Simulation, Error of Measurement, Scoring, Test Items
Peer reviewed
Hoshino, Takahiro; Shigemasu, Kazuo – Applied Psychological Measurement, 2008
The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte…
Descriptors: Monte Carlo Methods, Markov Processes, Factor Analysis, Computation
Peer reviewed
Raykov, Tenko – Applied Psychological Measurement, 1998
Proposes a method for obtaining standard errors and confidence intervals of composite reliability coefficients based on bootstrap methods and using a structural-equation-modeling framework for estimating the composite reliability of congeneric measures (T. Raykov, 1997). Demonstrates the approach with simulated data. (SLD)
Descriptors: Error of Measurement, Estimation (Mathematics), Reliability, Simulation
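A simplified sketch of the bootstrap step, substituting coefficient alpha for Raykov's SEM-based composite reliability (the resampling logic is the same): resample persons with replacement, recompute the coefficient on each resample, and take the standard deviation and percentiles of the bootstrap distribution as the standard error and confidence interval.

```python
import numpy as np

def alpha(scores):
    """Cronbach's alpha for an (n_persons, k_items) score matrix."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
n, k = 300, 6
common = rng.normal(size=(n, 1))                       # shared true score
scores = common + rng.normal(size=(n, k))              # simulated item scores

# Bootstrap: resample persons (rows) with replacement, recompute alpha
boot = np.array([
    alpha(scores[rng.integers(0, n, size=n)])
    for _ in range(2000)
])
se = boot.std(ddof=1)                                  # bootstrap SE
ci = np.percentile(boot, [2.5, 97.5])                  # percentile 95% CI
```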
Peer reviewed
Zeng, Lingjia – Applied Psychological Measurement, 1993
A numerical approach for computing standard errors (SEs) of a linear equating is described in which first partial derivatives of equating functions needed to compute SEs are derived numerically. Numerical and analytical approaches are compared using the Tucker equating method. SEs derived numerically are found indistinguishable from SEs derived…
Descriptors: Comparative Analysis, Computer Simulation, Equated Scores, Equations (Mathematics)
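The numerical approach described can be sketched with the delta method: differentiate the equating function with respect to its parameters by central differences, then combine the gradient with a covariance matrix of the parameter estimates. The mean-sigma linear equating function below is standard; the parameter values and covariance matrix are made up for illustration.

```python
import numpy as np

def linear_equate(x, params):
    """Linear equating of score x using the two forms' means and SDs."""
    mu_x, sig_x, mu_y, sig_y = params
    return sig_y / sig_x * (x - mu_x) + mu_y

def numerical_gradient(f, params, h=1e-5):
    """Central-difference partial derivatives of f w.r.t. each parameter."""
    params = np.asarray(params, dtype=float)
    grad = np.empty_like(params)
    for i in range(len(params)):
        up, dn = params.copy(), params.copy()
        up[i] += h
        dn[i] -= h
        grad[i] = (f(up) - f(dn)) / (2 * h)
    return grad

params = np.array([0.0, 1.0, 10.0, 2.0])       # hypothetical moments
g = numerical_gradient(lambda p: linear_equate(1.5, p), params)

# Delta method: var(l) ~ g' Sigma g, with Sigma the covariance of the
# moment estimates (assumed diagonal here, purely for illustration).
Sigma = np.diag([0.01, 0.005, 0.01, 0.005])
se = np.sqrt(g @ Sigma @ g)
```

The point of the article is that the numerical gradient above can replace hand-derived analytical partial derivatives with no practical loss of accuracy.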
Peer reviewed
Thompson, Paul – Applied Psychological Measurement, 1989
Monte Carlo techniques were used to examine regression approaches to external unfolding. The present analysis examined the technique to determine if various characteristics of the points are recovered (such as ideal points). Generally, monotonic analyses resulted in good recovery. (TJH)
Descriptors: Error of Measurement, Estimation (Mathematics), Mathematical Models, Monte Carlo Methods
Peer reviewed
Whitely, Susan E. – Applied Psychological Measurement, 1979
A model which gives maximum likelihood estimates of measurement error within the context of a simplex model for practice effects is presented. The appropriateness of the model is tested for five traits, and error estimates are compared to the classical formula estimates. (Author/JKS)
Descriptors: Error of Measurement, Error Patterns, Higher Education, Mathematical Models
Peer reviewed
Zegers, Frits E. – Applied Psychological Measurement, 1991
The degree of agreement between two raters rating several objects for a single characteristic can be expressed through an association coefficient, such as the Pearson product-moment correlation. How to select an appropriate association coefficient, and the desirable properties and uses of a class of such coefficients--the Euclidean…
Descriptors: Classification, Correlation, Data Interpretation, Equations (Mathematics)
Peer reviewed
Zimmerman, Donald W.; And Others – Applied Psychological Measurement, 1993
Some of the methods originally used to find relationships between reliability and power associated with a single measurement are extended to difference scores. Results, based on explicit power calculations, show that augmenting the reliability of measurement by reducing error score variance can make significance tests of difference more powerful.…
Descriptors: Equations (Mathematics), Error of Measurement, Individual Differences, Mathematical Models
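The classical reliability of a difference score D = X − Y follows from the component reliabilities, their correlation, and their standard deviations. A small sketch (standard textbook formula, hypothetical values) shows how reducing error variance in the components raises the reliability of D, consistent with the power result described.

```python
def diff_score_reliability(rho_xx, rho_yy, rho_xy, sd_x=1.0, sd_y=1.0):
    """Classical reliability of the difference score D = X - Y."""
    num = rho_xx * sd_x**2 + rho_yy * sd_y**2 - 2 * rho_xy * sd_x * sd_y
    den = sd_x**2 + sd_y**2 - 2 * rho_xy * sd_x * sd_y
    return num / den

# Raising the component reliabilities raises the reliability of the
# difference score, and with it the power of significance tests on D.
low = diff_score_reliability(0.7, 0.7, 0.5)    # -> 0.4
high = diff_score_reliability(0.9, 0.9, 0.5)   # -> 0.8
```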