Showing all 8 results
Peer reviewed
Jones, Andrew T. – Applied Psychological Measurement, 2011
Practitioners often depend on item analysis to select items for exam forms and have a variety of options available to them. These include the point-biserial correlation, the agreement statistic, the B index, and the phi coefficient. Although research has demonstrated that these statistics can be useful for item selection, no research as of yet has…
Descriptors: Test Items, Item Analysis, Cutting Scores, Statistics
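Of the item-selection statistics this abstract names, the point-biserial correlation is the most common: the Pearson correlation between a dichotomous (0/1) item score and examinees' total test scores. A minimal sketch (the function name and sample data are illustrative, not from the article):

```python
import numpy as np

def point_biserial(item, total):
    """Point-biserial correlation between a 0/1 item score vector
    and a vector of total test scores (one entry per examinee)."""
    item = np.asarray(item, dtype=float)
    total = np.asarray(total, dtype=float)
    # For a dichotomous variable, the Pearson correlation with the
    # total score is exactly the point-biserial correlation.
    return np.corrcoef(item, total)[0, 1]

# Four examinees: the item discriminates well, since the two who
# answered it correctly also have the higher total scores.
r = point_biserial([1, 1, 0, 0], [10, 8, 6, 4])
```

In practice the item's own score is often removed from the total (a "corrected" item-total correlation) so the item is not correlated with itself.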
Peer reviewed
Almehrizi, Rashid S. – Applied Psychological Measurement, 2013
The majority of large-scale assessments develop various score scales that are either linear or nonlinear transformations of raw scores for better interpretations and uses of assessment results. The current formula for coefficient alpha (α; the commonly used reliability coefficient) only provides internal consistency reliability estimates of raw…
Descriptors: Raw Scores, Scaling, Reliability, Computation
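The "current formula" this abstract refers to is the standard coefficient alpha for raw scores, α = k/(k−1) · (1 − Σσ²ᵢ/σ²ₜ), where k is the number of items, σ²ᵢ the variance of item i, and σ²ₜ the variance of total scores. A minimal raw-score sketch (the function name and data are illustrative; the article's extension to scale scores is not shown):

```python
import numpy as np

def coefficient_alpha(scores):
    """Coefficient alpha from a (n_examinees, k_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)          # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)      # variance of raw totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Four examinees, three dichotomous items.
alpha = coefficient_alpha([[1, 1, 1],
                           [0, 0, 0],
                           [1, 0, 1],
                           [0, 1, 0]])
```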
Peer reviewed
Ip, Edward H. – Applied Psychological Measurement, 2010
The testlet response model is designed for handling items that are clustered, such as those embedded within the same reading passage. Although the testlet is a powerful tool for handling item clusters in educational and psychological testing, the interpretations of its item parameters, the conditional correlation between item pairs, and the…
Descriptors: Item Response Theory, Models, Test Items, Correlation
Peer reviewed
Andrich, David; Kreiner, Svend – Applied Psychological Measurement, 2010
Models of modern test theory imply statistical independence among responses, generally referred to as "local independence." One violation of local independence occurs when the response to one item governs the response to a subsequent item. Expanding on a formulation of this kind of violation as a process in the dichotomous Rasch model,…
Descriptors: Test Theory, Item Response Theory, Test Items, Correlation
Peer reviewed
Finch, Holmes – Applied Psychological Measurement, 2010
The accuracy of item parameter estimates in the multidimensional item response theory (MIRT) model context has not been researched in great detail. This study examines the ability of two confirmatory factor analysis models specifically for dichotomous data to properly estimate item parameters using common formulae for converting factor…
Descriptors: Item Response Theory, Computation, Factor Analysis, Models
Peer reviewed
Kahraman, Nilufer; Kamata, Akihito – Applied Psychological Measurement, 2004
In this study, the precision of subscale score estimates was evaluated when out-of-scale information was incorporated. Procedures that incorporated out-of-scale information and only information within a subscale were compared through a series of simulations. It was revealed that more information (i.e., more precision) was always provided for…
Descriptors: Scores, Computation, Evaluation Methods, Simulation
Peer reviewed
Roznowski, Mary; And Others – Applied Psychological Measurement, 1991
Three heuristic methods of assessing the dimensionality of binary item pools were evaluated in a Monte Carlo investigation. The indices were based on (1) the local independence of unidimensional tests; (2) patterns of second-factor loadings derived from simplex theory; and (3) the shape of the curve of successive eigenvalues. (SLD)
Descriptors: Comparative Analysis, Computer Simulation, Correlation, Evaluation Methods
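The third heuristic in this abstract, inspecting the shape of the curve of successive eigenvalues (a scree-type index), starts from the eigenvalues of the inter-item correlation matrix. A minimal sketch (the function name and response data are illustrative, not from the article):

```python
import numpy as np

def eigenvalue_curve(responses):
    """Eigenvalues of the inter-item correlation matrix, largest first.
    A steep drop after the first eigenvalue suggests one dominant
    dimension in the item pool."""
    R = np.corrcoef(np.asarray(responses, dtype=float), rowvar=False)
    return np.linalg.eigvalsh(R)[::-1]   # eigvalsh returns ascending order

# Five examinees, three dichotomous items.
vals = eigenvalue_curve([[1, 1, 0],
                         [0, 1, 1],
                         [1, 0, 0],
                         [0, 0, 1],
                         [1, 1, 1]])
```

Because the diagonal of a correlation matrix is all ones, the eigenvalues always sum to the number of items, which is a convenient sanity check on the computation.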
Peer reviewed
Gorin, Joanna S.; Embretson, Susan E. – Applied Psychological Measurement, 2006
Recent assessment research joining cognitive psychology and psychometric theory has introduced a new technology, item generation. In algorithmic item generation, items are systematically created based on specific combinations of features that underlie the processing required to correctly solve a problem. Reading comprehension items have been more…
Descriptors: Difficulty Level, Test Items, Modeling (Psychology), Paragraph Composition