Showing all 10 results
Peer reviewed
Shaojie Wang; Won-Chan Lee; Minqiang Zhang; Lixin Yuan – Applied Measurement in Education, 2024
To reduce the impact of parameter estimation errors on IRT linking results, recent work introduced two information-weighted characteristic curve methods for dichotomous items. These two methods showed outstanding performance in both simulation and pseudo-form pseudo-group analysis. The current study expands upon the concept of information…
Descriptors: Item Response Theory, Test Format, Test Length, Error of Measurement
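The characteristic curve linking idea referenced in this abstract can be illustrated with a small sketch: estimate the scale transformation theta_old = A * theta_new + B that minimizes weighted squared differences between common-item characteristic curves on two forms. The sketch below uses a generic Haebara-type criterion with user-supplied per-item weights standing in for information weights; the 3PL form, parameter values, and use of numpy/scipy are illustrative assumptions, not the authors' information-weighted method.

```python
import numpy as np
from scipy.optimize import minimize

def p_3pl(theta, a, b, c):
    """3PL item characteristic curve (with the 1.7 scaling constant)."""
    return c + (1.0 - c) / (1.0 + np.exp(-1.7 * a * (theta - b)))

def weighted_icc_loss(params, theta_grid, old_items, new_items, weights):
    """Characteristic-curve linking criterion: weighted squared differences
    between old-form ICCs and new-form ICCs after the transformation
    theta_old = A * theta_new + B. The weights are placeholders for the
    information weights described in the abstract."""
    A, B = params
    loss = 0.0
    for (a_o, b_o, c_o), (a_n, b_n, c_n), w in zip(old_items, new_items, weights):
        p_old = p_3pl(theta_grid, a_o, b_o, c_o)
        p_new = p_3pl(theta_grid, a_n / A, A * b_n + B, c_n)
        loss += w * np.sum((p_old - p_new) ** 2)
    return loss

# Hypothetical common-item (a, b, c) estimates on the old and new forms
old_items = [(1.0, -0.5, 0.20), (1.2, 0.3, 0.15), (0.8, 1.0, 0.25)]
new_items = [(0.9, -0.6, 0.20), (1.3, 0.2, 0.15), (0.85, 0.9, 0.25)]
weights = [1.0, 1.0, 1.0]                # information weights would go here
theta_grid = np.linspace(-4, 4, 81)

result = minimize(weighted_icc_loss, x0=[1.0, 0.0],
                  args=(theta_grid, old_items, new_items, weights))
print("A =", round(result.x[0], 3), "B =", round(result.x[1], 3))
```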
Peer reviewed
Finch, Holmes – Applied Measurement in Education, 2022
Much research has been devoted to identification of differential item functioning (DIF), which occurs when the item responses for individuals from two groups differ after they are conditioned on the latent trait being measured by the scale. There has been less work examining differential step functioning (DSF), which is present for polytomous…
Descriptors: Comparative Analysis, Item Response Theory, Item Analysis, Simulation
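The distinction drawn in this abstract can be made concrete with a small numerical sketch: step-level differences between groups (DSF) can cancel at the item level, so an item-level DIF analysis misses them. The GPCM step parameters below are invented for illustration.

```python
import numpy as np

# Hypothetical GPCM step-difficulty estimates for one 4-category item,
# estimated separately in a reference group and a focal group.
steps_reference = np.array([-1.0, 0.0, 1.0])
steps_focal     = np.array([-1.4, 0.0, 1.4])

# DIF looks at item-level differences: the mean step difficulty
# (item location) is identical here, so no DIF is flagged ...
dif = steps_focal.mean() - steps_reference.mean()

# ... while DSF examines each step separately and reveals differences
# that cancel out at the item level.
dsf = steps_focal - steps_reference

print("Item-level difference (DIF):", dif)    # 0.0
print("Step-level differences (DSF):", dsf)   # [-0.4  0.   0.4]
```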
Peer reviewed
Song, Yoon Ah; Lee, Won-Chan – Applied Measurement in Education, 2022
This article presents the performance of item response theory (IRT) models when double ratings are used as item scores over single ratings when rater effects are present. Study 1 examined the influence of the number of ratings on the accuracy of proficiency estimation in the generalized partial credit model (GPCM). Study 2 compared the accuracy of…
Descriptors: Item Response Theory, Item Analysis, Scores, Accuracy
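For readers unfamiliar with the generalized partial credit model named in the abstract above, a minimal sketch of its category response function follows; the parameter values are hypothetical and the 1.7 scaling constant is omitted.

```python
import numpy as np

def gpcm_probs(theta, a, b):
    """Category response probabilities under the generalized partial
    credit model (GPCM) for one item.

    theta : latent trait value
    a     : item discrimination
    b     : step difficulties b_1..b_m (category 0 has no step term)
    """
    # Cumulative sums of a*(theta - b_v); category 0 contributes a zero term.
    steps = np.concatenate(([0.0], a * (theta - np.asarray(b, dtype=float))))
    numerators = np.exp(np.cumsum(steps))
    return numerators / numerators.sum()

# Example: a 4-category item (scores 0-3) at theta = 0.5
print(gpcm_probs(0.5, a=1.2, b=[-1.0, 0.0, 1.0]))
```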
Peer reviewed
Abulela, Mohammed A. A.; Rios, Joseph A. – Applied Measurement in Education, 2022
When there are no personal consequences associated with test performance for examinees, rapid guessing (RG) is a concern and can differ between subgroups. To date, the impact of differential RG on item-level measurement invariance has received minimal attention. To that end, a simulation study was conducted to examine the robustness of the…
Descriptors: Comparative Analysis, Robustness (Statistics), Nonparametric Statistics, Item Analysis
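Rapid guessing is typically identified from item response times. The sketch below flags responses that fall below a normative threshold, here a fixed fraction of each item's mean response time; the threshold rule and the data are illustrative assumptions, not necessarily the operationalization used in this study.

```python
import numpy as np

def flag_rapid_guesses(response_times, threshold_fraction=0.10):
    """Flag a response as a rapid guess when its response time falls below
    a fraction of the item's mean response time (one common normative
    threshold; other rules exist)."""
    rt = np.asarray(response_times, dtype=float)   # examinees x items
    thresholds = threshold_fraction * rt.mean(axis=0)
    return rt < thresholds

# Hypothetical response times (seconds) for 4 examinees on 3 items
rts = np.array([
    [35.0, 50.0, 42.0],
    [ 2.0, 48.0,  2.5],   # examinee with rapid guesses on items 1 and 3
    [30.0, 55.0, 40.0],
    [28.0, 47.0, 38.0],
])
print(flag_rapid_guesses(rts))
```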
Peer reviewed
Oliveri, Maria E.; Ercikan, Kadriye – Applied Measurement in Education, 2011
In this study, we examine the degree of construct comparability and possible sources of incomparability of the English and French versions of the Programme for International Student Assessment (PISA) 2003 problem-solving measure administered in Canada. Several approaches were used to examine construct comparability at the test- (examination of…
Descriptors: Foreign Countries, English, French, Tests
Peer reviewed
DeMars, Christine E. – Applied Measurement in Education, 2004
Three methods of detecting item drift were compared: the procedure in BILOG-MG for estimating linear trends in item difficulty, the CUSUM procedure that Veerkamp and Glas (2000) used to detect trends in difficulty or discrimination, and a modification of Kim, Cohen, and Park's (1995) χ² test for multiple-group differential item functioning (DIF),…
Descriptors: Comparative Analysis, Test Items, Testing, Item Analysis
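The CUSUM approach referenced above accumulates standardized departures of an item's difficulty estimates from a baseline across administrations. The sketch below is a generic two-sided CUSUM with an illustrative reference value, not the exact statistic of Veerkamp and Glas (2000); all inputs are hypothetical.

```python
import numpy as np

def cusum_drift(b_estimates, b_se, baseline, k=0.5):
    """Two-sided CUSUM of standardized departures of an item's difficulty
    estimates from a baseline value across administrations.
    k is the reference value (allowance) subtracted at each step."""
    z = (np.asarray(b_estimates, dtype=float) - baseline) / np.asarray(b_se, dtype=float)
    upper, lower = 0.0, 0.0
    path_up, path_down = [], []
    for zi in z:
        upper = max(0.0, upper + zi - k)   # accumulates upward drift
        lower = min(0.0, lower + zi + k)   # accumulates downward drift
        path_up.append(upper)
        path_down.append(lower)
    return path_up, path_down

# Hypothetical difficulty estimates drifting upward over 6 administrations
up, down = cusum_drift(
    b_estimates=[0.02, 0.05, 0.12, 0.20, 0.31, 0.45],
    b_se=[0.05] * 6,
    baseline=0.0,
)
print(up)   # a growing upper CUSUM signals that the item is drifting harder
```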
Peer reviewed
Wollack, James A. – Applied Measurement in Education, 2006
Many of the currently available statistical indexes to detect answer copying lack sufficient power at small α levels or when the amount of copying is relatively small. Furthermore, there is no one index that is uniformly best. Depending on the type or amount of copying, certain indexes are better than others. The purpose of this article was…
Descriptors: Statistical Analysis, Item Analysis, Test Length, Sample Size
Peer reviewed
Chang, Lei – Applied Measurement in Education, 1999
Compared the Nedelsky (L. Nedelsky, 1954) and Angoff (W. Angoff, 1971) standard-setting methods in three studies involving 80 graduate students as judges. Nedelsky cutscores were significantly lower than Angoff cutscores. Suggests that combining the strong features of both methods would make a better standard-setting procedure. (SLD)
Descriptors: Comparative Analysis, Cutting Scores, Graduate Students, Graduate Study
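The two standard-setting methods compared in this study can be sketched numerically: Angoff judges estimate the probability that a minimally competent examinee answers each item correctly, while Nedelsky judges eliminate options such an examinee could rule out, with the item probability taken as one over the number of remaining options. All ratings below are invented for illustration.

```python
import numpy as np

# Hypothetical ratings for a 5-item multiple-choice test (4 options each).

# Angoff: each judge's estimated probability that a minimally competent
# examinee answers the item correctly.
angoff_ratings = np.array([
    [0.60, 0.70, 0.50, 0.80, 0.65],   # judge 1
    [0.55, 0.75, 0.45, 0.85, 0.60],   # judge 2
])

# Nedelsky: number of options each judge thinks a minimally competent
# examinee can eliminate as clearly wrong.
eliminated = np.array([
    [2, 2, 1, 2, 1],                  # judge 1
    [1, 2, 1, 2, 2],                  # judge 2
])
n_options = 4

angoff_cutscore = angoff_ratings.mean(axis=0).sum()
nedelsky_cutscore = (1.0 / (n_options - eliminated)).mean(axis=0).sum()

print(f"Angoff cutscore:   {angoff_cutscore:.2f}")
print(f"Nedelsky cutscore: {nedelsky_cutscore:.2f}")
```

With these invented ratings the Nedelsky cutscore comes out below the Angoff cutscore, the same direction as the finding reported in the abstract.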
Peer reviewed
Ercikan, Kadriye; Gierl, Mark J.; McCreith, Tanya; Puhan, Gautam; Koh, Kim – Applied Measurement in Education, 2004
This research examined the degree of comparability and sources of incomparability of English and French versions of reading, mathematics, and science tests that were administered as part of a survey of achievement in Canada. The results point to substantial psychometric differences between the two language versions. Approximately 18% to 36% of the…
Descriptors: Foreign Countries, Psychometrics, Science Tests, French
Peer reviewed
Marco, Gary L. – Applied Measurement in Education, 1988
Four simulated mathematical and verbal test forms were produced by test assembly procedures proposed in legislative bills in California and New York in 1986 to minimize differences between majority and minority scores. Item response theory analyses of data for about 22,000 Black and 28,000 White high-school students were conducted. (SLD)
Descriptors: Black Students, College Entrance Examinations, Comparative Analysis, Culture Fair Tests