| Publication Date | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 1 |
| Since 2017 (last 10 years) | 2 |
| Since 2007 (last 20 years) | 4 |
| Descriptor | Count |
| --- | --- |
| Effect Size | 5 |
| Sample Size | 5 |
| Statistical Significance | 5 |
| Test Items | 5 |
| Item Response Theory | 4 |
| Simulation | 3 |
| Statistical Analysis | 3 |
| Classification | 2 |
| Computation | 1 |
| Educational Research | 1 |
| Error of Measurement | 1 |
| Author | Count |
| --- | --- |
| Carvajal, Jorge | 1 |
| Debelak, Rudolf | 1 |
| Dorans, Neil | 1 |
| Guo, Hongwen | 1 |
| Hambleton, Ronald K. | 1 |
| Henninger, Mirka | 1 |
| Robin, Frederic | 1 |
| Skorupski, William P. | 1 |
| Strobl, Carolin | 1 |
| Suh, Youngsuk | 1 |
| Publication Type | Count |
| --- | --- |
| Journal Articles | 4 |
| Reports - Research | 3 |
| Reports - Evaluative | 2 |
| Information Analyses | 1 |
Henninger, Mirka; Debelak, Rudolf; Strobl, Carolin – Educational and Psychological Measurement, 2023
To detect differential item functioning (DIF), Rasch trees search for optimal split-points in covariates and identify subgroups of respondents in a data-driven way. To determine whether and in which covariate a split should be performed, Rasch trees use statistical significance tests. Consequently, Rasch trees are more likely to label small DIF…
Descriptors: Item Response Theory, Test Items, Effect Size, Statistical Significance
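The sample-size sensitivity described in this abstract is easy to reproduce outside the Rasch-tree framework. The sketch below is a generic two-group comparison in Python with an assumed small, fixed difference in correct-response rates (the 0.60 vs. 0.57 rates, sample sizes, and chi-square test are illustrative choices, not the Rasch-tree split test itself); it shows a significance test flagging the same negligible effect more and more often as samples grow.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)

def rejection_rate(n_per_group, p_ref=0.60, p_focal=0.57, reps=2000, alpha=0.05):
    """Share of replications in which a chi-square test on a 2x2
    (group x correct/incorrect) table calls a fixed, small difference
    in correct-response rates 'significant'."""
    rejections = 0
    for _ in range(reps):
        ref_correct = rng.binomial(n_per_group, p_ref)
        foc_correct = rng.binomial(n_per_group, p_focal)
        table = np.array([[ref_correct, n_per_group - ref_correct],
                          [foc_correct, n_per_group - foc_correct]])
        _, p_value, _, _ = chi2_contingency(table)
        rejections += p_value < alpha
    return rejections / reps

for n in (200, 1000, 5000, 20000):
    # The effect stays the same; only the rejection rate climbs with n.
    print(f"n per group = {n:6d}  rejection rate = {rejection_rate(n):.2f}")
```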
Guo, Hongwen; Robin, Frederic; Dorans, Neil – Journal of Educational Measurement, 2017
The early detection of item drift is an important issue for frequently administered testing programs because items are reused over time. Unfortunately, operational data tend to be very sparse and do not lend themselves to frequent monitoring analyses, particularly for on-demand testing. Building on existing residual analyses, the authors propose…
Descriptors: Testing, Test Items, Identification, Sample Size
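As a rough companion to the residual idea mentioned here, the sketch below computes a simple observed-minus-expected residual for one item under a Rasch model in Python. This is a generic residual of the kind such analyses build on, not the specific statistic the authors propose, and the example data are hypothetical.

```python
import numpy as np

def rasch_prob(theta, b):
    """Rasch-model probability of a correct response at ability theta
    for an item with difficulty b."""
    return 1.0 / (1.0 + np.exp(-(np.asarray(theta) - b)))

def item_residual(responses, theta_hat, b):
    """Mean observed-minus-expected residual for one item in one
    administration; residuals trending away from zero across
    administrations would be the signal of item drift."""
    expected = rasch_prob(theta_hat, b)
    return float(np.mean(np.asarray(responses) - expected))

# Hypothetical usage: 0/1 responses and ability estimates from one administration.
rng = np.random.default_rng(1)
theta_hat = rng.normal(size=500)
responses = rng.binomial(1, rasch_prob(theta_hat, b=0.2))
print(item_residual(responses, theta_hat, b=0.2))  # near 0 when the item has not drifted
```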
Suh, Youngsuk – Journal of Educational Measurement, 2016
This study adapted an effect size measure used for studying differential item functioning (DIF) in unidimensional tests and extended the measure to multidimensional tests. Two effect size measures were considered in a multidimensional item response theory model: signed weighted P-difference and unsigned weighted P-difference. The performance of…
Descriptors: Effect Size, Goodness of Fit, Statistical Analysis, Statistical Significance
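One common way to write weighted P-difference measures is sketched below in Python: correct-response probabilities for the reference and focal groups are compared on a shared grid of ability points and averaged against a weight function. The exact weighting and the multidimensional extension used in the article are not reproduced here; this is only the general shape of such measures.

```python
import numpy as np

def weighted_p_difference(p_ref, p_focal, weights):
    """Signed and unsigned weighted differences between reference- and
    focal-group response probabilities evaluated on the same ability grid;
    `weights` is assumed to be a density over that grid summing to 1
    (e.g., the focal-group ability distribution)."""
    diff = np.asarray(p_ref, float) - np.asarray(p_focal, float)
    w = np.asarray(weights, float)
    signed = float(np.sum(w * diff))            # can cancel when DIF changes sign
    unsigned = float(np.sum(w * np.abs(diff)))  # does not cancel
    return signed, unsigned
```

Comparing the two values is what distinguishes uniform from crossing DIF: a large unsigned value with a near-zero signed value points to differences that change direction across the ability range.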
Carvajal, Jorge; Skorupski, William P. – Educational and Psychological Measurement, 2010
This study is an evaluation of the behavior of the Liu-Agresti estimator of the cumulative common odds ratio when identifying differential item functioning (DIF) with polytomously scored test items using small samples. The Liu-Agresti estimator has been proposed by Penfield and Algina as a promising approach for the study of polytomous DIF but no…
Descriptors: Test Bias, Sample Size, Test Items, Computation
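For orientation, the sketch below pools 2x2 tables formed at every (matching stratum, score cutpoint) pair into a single Mantel-Haenszel-style ratio, which is the general idea behind a cumulative common odds ratio for polytomous items. The exact Liu-Agresti weighting is not taken from the article, so treat this as an illustration rather than the estimator under study.

```python
import numpy as np

def cumulative_common_odds_ratio(scores, group, strata, n_categories):
    """Pool the cross-product ratios of 2x2 tables formed at every
    (matching stratum, score cutpoint) pair, Mantel-Haenszel style.
    scores : integer item scores 0 .. n_categories - 1
    group  : 1 = reference group, 0 = focal group
    strata : matching-variable level (e.g., total-score stratum) per examinee
    """
    scores, group, strata = (np.asarray(a) for a in (scores, group, strata))
    numerator = denominator = 0.0
    for s in np.unique(strata):
        in_s = strata == s
        n_s = int(in_s.sum())
        for cut in range(n_categories - 1):
            above = scores > cut
            a = np.sum(in_s & (group == 1) & above)   # reference, above cutpoint
            b = np.sum(in_s & (group == 1) & ~above)  # reference, at/below cutpoint
            c = np.sum(in_s & (group == 0) & above)   # focal, above cutpoint
            d = np.sum(in_s & (group == 0) & ~above)  # focal, at/below cutpoint
            numerator += a * d / n_s
            denominator += b * c / n_s
    return numerator / denominator if denominator > 0 else float("nan")
```

A pooled ratio near 1 indicates no evidence of DIF on this scale; with the small samples the study targets, the sparse per-stratum tables are exactly what makes such estimators unstable.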
Hambleton, Ronald K.; And Others – 1993
The development and evaluation of methods for detecting potentially biased items or differentially functioning items (DIF) represent a critical area of research for psychometricians because of the negative impact of biased items on test validity. A summary is provided of the authors' 12 years of research at the University of Massachusetts…
Descriptors: Educational Research, Effect Size, Guidelines, Item Bias
