Showing 1 to 15 of 16 results
Peer reviewed
Brossman, Bradley G.; Lee, Won-Chan – Applied Psychological Measurement, 2013
The purpose of this research was to develop observed score and true score equating procedures to be used in conjunction with the multidimensional item response theory (MIRT) framework. Three equating procedures--two observed score procedures and one true score procedure--were created and described in detail. One observed score procedure was…
Descriptors: Equated Scores, True Scores, Item Response Theory, Mathematics Tests
Peer reviewed
He, Yong; Cui, Zhongmin; Fang, Yu; Chen, Hanwei – Applied Psychological Measurement, 2013
Common test items play an important role in equating alternate test forms under the common item nonequivalent groups design. When the item response theory (IRT) method is applied in equating, inconsistent item parameter estimates among common items can lead to large bias in equated scores. It is prudent to evaluate inconsistency in parameter…
Descriptors: Regression (Statistics), Item Response Theory, Test Items, Equated Scores
Peer reviewed
Padilla, Miguel A.; Divers, Jasmin; Newton, Matthew – Applied Psychological Measurement, 2012
Three different bootstrap methods for estimating confidence intervals (CIs) for coefficient alpha were investigated. In addition, the bootstrap methods were compared with the most promising coefficient alpha CI estimation methods reported in the literature. The CI methods were assessed through a Monte Carlo simulation utilizing conditions…
Descriptors: Intervals, Monte Carlo Methods, Computation, Sampling
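One of the bootstrap variants commonly compared in this literature, the percentile bootstrap over persons, can be sketched as follows. This is a minimal illustration, not the study's implementation; the resampling count and confidence level shown are illustrative assumptions.

```python
import numpy as np

def cronbach_alpha(data):
    """Coefficient alpha for an items matrix (rows = persons, cols = items)."""
    k = data.shape[1]
    item_vars = data.var(axis=0, ddof=1).sum()
    total_var = data.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def bootstrap_alpha_ci(data, n_boot=2000, level=0.95, seed=0):
    """Percentile bootstrap CI for coefficient alpha: resample persons
    with replacement, recompute alpha, take the central percentiles."""
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)  # resample rows (persons) with replacement
        stats[b] = cronbach_alpha(data[idx])
    lower, upper = np.percentile(stats, [100 * (1 - level) / 2,
                                         100 * (1 + level) / 2])
    return lower, upper
```

Other variants in the literature differ mainly in how the interval endpoints are derived from the same resampled alphas (e.g., bias-corrected rather than raw percentiles).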
Peer reviewed
Seybert, Jacob; Stark, Stephen – Applied Psychological Measurement, 2012
A Monte Carlo study was conducted to examine the accuracy of differential item functioning (DIF) detection using the differential functioning of items and tests (DFIT) method. Specifically, the performance of DFIT was compared using "testwide" critical values suggested by Flowers, Oshima, and Raju, based on simulations involving large numbers of…
Descriptors: Test Bias, Monte Carlo Methods, Form Classes (Languages), Simulation
Peer reviewed
Moses, Tim; Deng, Weiling; Zhang, Yu-Li – Applied Psychological Measurement, 2011
Nonequivalent groups with anchor test (NEAT) equating functions that use a single anchor can have accuracy problems when the groups are extremely different and/or when the anchor weakly correlates with the tests being equated. Proposals have been made to address these issues by incorporating more than one anchor into NEAT equating functions. These…
Descriptors: Equated Scores, Tests, Comparative Analysis, Correlation
Peer reviewed
Belov, Dmitry I. – Applied Psychological Measurement, 2011
This article presents the Variable Match Index (VM-Index), a new statistic for detecting answer copying. The power of the VM-Index relies on two-dimensional conditioning as well as the structure of the test. The asymptotic distribution of the VM-Index is analyzed by reduction to Poisson trials. A computational study comparing the VM-Index with the…
Descriptors: Cheating, Journal Articles, Computation, Comparative Analysis
Peer reviewed
Hennig, Christian; Mullensiefen, Daniel; Bargmann, Jens – Applied Psychological Measurement, 2010
The authors propose a method to compare the influence of a treatment on different properties within subjects. The properties are measured by several Likert-type-scaled items. The results show that many existing approaches, such as repeated measurement analysis of variance on sum and mean scores, a linear partial credit model, and a graded response…
Descriptors: Simulation, Pretests Posttests, Regression (Statistics), Comparative Analysis
Peer reviewed
Finkelman, Matthew D.; Weiss, David J.; Kim-Kang, Gyenam – Applied Psychological Measurement, 2010
Assessing individual change is an important topic in both psychological and educational measurement. An adaptive measurement of change (AMC) method had previously been shown to exhibit greater efficiency in detecting change than conventional nonadaptive methods. However, little work had been done to compare different procedures within the AMC…
Descriptors: Computer Assisted Testing, Hypothesis Testing, Measurement, Item Analysis
Peer reviewed
Warrens, Matthijs J.; de Gruijter, Dato N. M.; Heiser, Willem J. – Applied Psychological Measurement, 2007
In this article, the relationship between two alternative methods for the analysis of multivariate categorical data is systematically explored. It is shown that the person score of the first dimension of classical optimal scaling correlates strongly with the latent variable for the two-parameter item response theory (IRT) model. Next, under the…
Descriptors: Scaling, Evaluation Methods, Item Response Theory, Comparative Analysis
Peer reviewed
Woods, Carol M. – Applied Psychological Measurement, 2008
In Ramsay-curve item response theory (RC-IRT), the latent variable distribution is estimated simultaneously with the item parameters. In extant Monte Carlo evaluations of RC-IRT, the item response function (IRF) used to fit the data is the same one used to generate the data. The present simulation study examines RC-IRT when the IRF is imperfectly…
Descriptors: Simulation, Item Response Theory, Monte Carlo Methods, Comparative Analysis
Peer reviewed
Hu, Huiqin; Rogers, W. Todd; Vukmirovic, Zarko – Applied Psychological Measurement, 2008
Common items with inconsistent b-parameter estimates may have a serious impact on item response theory (IRT)--based equating results. To find a better way to deal with the outlier common items with inconsistent b-parameters, the current study investigated the comparability of 10 variations of four IRT-based equating methods (i.e., concurrent…
Descriptors: Item Response Theory, Item Analysis, Computer Simulation, Equated Scores
Peer reviewed
Woods, Carol M. – Applied Psychological Measurement, 2007
Ramsay curve item response theory (RC-IRT) was recently developed to detect and correct for nonnormal latent variables when unidimensional IRT models are fitted to data using maximum marginal likelihood estimation. The purpose of this research is to evaluate the performance of RC-IRT for Likert-type item responses with varying test lengths, sample…
Descriptors: Test Length, Item Response Theory, Sample Size, Comparative Analysis
Peer reviewed
Ben-Simon, Anat; Budescu, David V.; Nevo, Baruch – Applied Psychological Measurement, 1997
Tested methods designed to extract information about examinees' partial knowledge when taking multiple-choice tests. Applied seven testing methods to multiple-choice tests and analyzed results according to six criteria. Results with 2,989 Israeli college applicants show a tendency on the part of examinees to take advantage of the special methods.…
Descriptors: College Students, Comparative Analysis, Evaluation Methods, Foreign Countries
Peer reviewed
Roznowski, Mary; And Others – Applied Psychological Measurement, 1991
Three heuristic methods of assessing the dimensionality of binary item pools were evaluated in a Monte Carlo investigation. The indices were based on (1) the local independence of unidimensional tests; (2) patterns of second-factor loadings derived from simplex theory; and (3) the shape of the curve of successive eigenvalues. (SLD)
Descriptors: Comparative Analysis, Computer Simulation, Correlation, Evaluation Methods
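Of the three heuristics, the eigenvalue-curve idea is the simplest to illustrate: for a unidimensional binary pool, the first eigenvalue of the inter-item correlation matrix dominates the rest. The sketch below uses the ratio of the first two eigenvalues as a rough signal; this is a common rule of thumb under assumed 0/1 scoring, not the exact index evaluated in the study.

```python
import numpy as np

def eigenvalue_ratio(responses):
    """Ratio of the first to second eigenvalue of the inter-item
    correlation matrix for a 0/1 response matrix (rows = persons,
    cols = items). A large ratio suggests one dominant dimension."""
    corr = np.corrcoef(responses, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]  # descending order
    return eigvals[0] / eigvals[1]
```

On data generated from a single latent trait this ratio is well above 1, while for unrelated items all eigenvalues hover near 1 and the ratio does too.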
Peer reviewed
Ackerman, Terry A.; Smith, Philip L. – Applied Psychological Measurement, 1988
The similarity of information provided by direct and indirect methods of writing assessment was investigated using 219 tenth graders. A resulting cognitive model of writing skills indicates that practitioners interested in reliably measuring all aspects of the proposed writing process continuum use both direct and indirect methods. (TJH)
Descriptors: Comparative Analysis, Essay Tests, Evaluation Methods, Factor Analysis