Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 1 |
Since 2006 (last 20 years) | 9 |
Descriptor
Item Response Theory | 12 |
Test Bias | 11 |
Test Items | 11 |
Item Bias | 10 |
Test Construction | 5 |
Comparative Analysis | 3 |
Identification | 3 |
Models | 3 |
Sample Size | 3 |
Simulation | 3 |
Statistical Analysis | 3 |
Source
Applied Psychological… | 4 |
Educational and Psychological… | 4 |
Applied Measurement in… | 3 |
Journal of Educational… | 2 |
Educational Measurement:… | 1 |
International Journal of… | 1 |
Author
Oshima, T. C. | 19 |
Raju, Nambury S. | 5 |
Flowers, Claudia P. | 4 |
Miller, M. David | 2 |
Fikis, David R. J. | 1 |
Fortmann-Johnson, Kristen A. | 1 |
Kim, Jihye | 1 |
Kim, Wonsuk | 1 |
McCarty, F. A. | 1 |
Morris, S. B. | 1 |
Morris, Scott B. | 1 |
Publication Type
Journal Articles | 15 |
Reports - Evaluative | 11 |
Reports - Research | 7 |
Speeches/Meeting Papers | 5 |
Numerical/Quantitative Data | 1 |
Reports - Descriptive | 1 |
Fikis, David R. J.; Oshima, T. C. – Educational and Psychological Measurement, 2017
Purification of the test is a well-accepted procedure for enhancing the performance of tests of differential item functioning (DIF). As defined by Lord, purification requires reestimating ability parameters after removing DIF items and before conducting the final DIF analysis. IRTPRO 3 is a recently updated program for analyses in item…
Descriptors: Test Bias, Item Response Theory, Statistical Analysis, Computer Software
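To make Lord's purification loop concrete, here is a minimal Python sketch. The helpers `estimate_abilities` and `dif_test` are hypothetical stand-ins for a calibration routine and a per-item DIF test (this is not IRTPRO's API); the point is the iterate-until-stable structure.

```python
# Sketch of Lord-style purification: flag DIF items, drop them from the
# ability estimation, re-estimate, and repeat until the flagged set stabilizes.
# `estimate_abilities` and `dif_test` are assumed helpers, not a real API.

def purify(responses, estimate_abilities, dif_test, alpha=0.05, max_iter=10):
    n_items = responses.shape[1]
    flagged = set()
    for _ in range(max_iter):
        anchor = [j for j in range(n_items) if j not in flagged]
        theta = estimate_abilities(responses[:, anchor])  # purified matching variable
        p_values = dif_test(responses, theta)             # one p-value per item
        new_flagged = {j for j in range(n_items) if p_values[j] < alpha}
        if new_flagged == flagged:                        # flagged set stabilized
            return flagged
        flagged = new_flagged
    return flagged
```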
Wright, Keith D.; Oshima, T. C. – Educational and Psychological Measurement, 2015
This study established an effect size measure for noncompensatory differential item functioning (NCDIF) within the differential functioning of items and tests (DFIT) framework. The Mantel-Haenszel parameter served as the benchmark for developing NCDIF's effect size measure for reporting moderate and large differential item functioning in test items. The effect size of…
Descriptors: Effect Size, Test Bias, Test Items, Difficulty Level
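As background for the benchmark used here, a small sketch of the Mantel-Haenszel common odds ratio and the ETS delta scale on which moderate and large DIF are conventionally labeled. The significance conditions attached to the full ETS A/B/C rules are omitted for brevity.

```python
from math import log

def mh_delta(tables):
    """Mantel-Haenszel odds ratio and ETS delta for one item.
    tables: one 2x2 count table per matched score level k, laid out as
    [[A_k, B_k], [C_k, D_k]] with rows (reference, focal), cols (correct, incorrect)."""
    num = sum(A * D / (A + B + C + D) for (A, B), (C, D) in tables)
    den = sum(B * C / (A + B + C + D) for (A, B), (C, D) in tables)
    alpha_mh = num / den              # common odds ratio across score levels
    delta = -2.35 * log(alpha_mh)     # ETS delta scale
    # Conventional magnitude labels (ignoring the significance conditions):
    # |delta| < 1 negligible, 1 <= |delta| < 1.5 moderate, |delta| >= 1.5 large
    return alpha_mh, delta
```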
Oshima, T. C.; Wright, Keith; White, Nick – International Journal of Testing, 2015
Raju, van der Linden, and Fleer (1995) introduced a framework for differential functioning of items and tests (DFIT) for unidimensional dichotomous models. Since then, DFIT has been shown to be a quite versatile framework as it can handle polytomous as well as multidimensional models both at the item and test levels. However, DFIT is still limited…
Descriptors: Test Bias, Item Response Theory, Test Items, Simulation
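The core DFIT quantity for a dichotomous item is easy to state: NCDIF is the focal-group average of the squared gap between the two groups' item characteristic curves. A minimal 2PL sketch, assuming the reference-group parameters have already been linked onto the focal metric:

```python
import numpy as np

def icc_2pl(theta, a, b):
    """2PL item characteristic curve."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def ncdif(theta_focal, a_f, b_f, a_r, b_r):
    """NCDIF: mean squared ICC gap over the focal group's ability estimates
    (Raju, van der Linden, & Fleer, 1995), here for a 2PL item."""
    gap = icc_2pl(theta_focal, a_f, b_f) - icc_2pl(theta_focal, a_r, b_r)
    return np.mean(gap ** 2)

# Illustration: uniform DIF of 0.5 in difficulty, focal thetas ~ N(0, 1)
theta = np.random.default_rng(0).standard_normal(1000)
print(ncdif(theta, a_f=1.2, b_f=0.5, a_r=1.2, b_r=0.0))
```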
Kim, Jihye; Oshima, T. C. – Educational and Psychological Measurement, 2013
In a typical differential item functioning (DIF) analysis, a significance test is conducted for each item. As a test consists of multiple items, such multiple testing may increase the possibility of making a Type I error at least once. The goal of this study was to investigate how to control a Type I error rate and power using adjustment…
Descriptors: Test Bias, Test Items, Statistical Analysis, Error of Measurement
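Two standard adjustments studied in this setting can be sketched directly: Bonferroni controls the familywise Type I error rate by testing each of m items at alpha/m, while Benjamini-Hochberg controls the false discovery rate with a step-up rule. The abstract does not say which adjustments the study compared; these are the textbook versions.

```python
import numpy as np

def bonferroni(pvals, alpha=0.05):
    """Reject item j's null if p_j < alpha / m (familywise error control)."""
    p = np.asarray(pvals)
    return p < alpha / p.size

def benjamini_hochberg(pvals, alpha=0.05):
    """BH step-up rule (false discovery rate control): find the largest k with
    p_(k) <= (k / m) * alpha and reject the k smallest p-values."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    passing = np.nonzero(p[order] <= alpha * np.arange(1, m + 1) / m)[0]
    reject = np.zeros(m, dtype=bool)
    if passing.size:
        reject[order[: passing[-1] + 1]] = True
    return reject
```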
Snow, Teresa K.; Oshima, T. C. – Educational and Psychological Measurement, 2009
Oshima, Raju, and Flowers demonstrated the use of an item response theory-based technique for analyzing differential item functioning (DIF) and differential test functioning for dichotomously scored data that are intended to be multidimensional. Their study assumed that the number of intended-to-be-measured dimensions was correctly identified. In…
Descriptors: Test Bias, Item Response Theory, Psychometrics
Oshima, T. C.; Morris, S. B. – Educational Measurement: Issues and Practice, 2008
Nambury S. Raju (1937-2005) developed two model-based indices for differential item functioning (DIF) during his prolific career in psychometrics. Both methods, Raju's area measures (Raju, 1988) and Raju's DFIT (Raju, van der Linden, & Fleer, 1995), are based on quantifying the gap between item characteristic functions (ICFs). This approach…
Descriptors: Test Bias, Psychometrics, Methods, Test Items
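Raju's area measures have exact closed forms for common dichotomous models; rather than quote those, here is a numerical sketch of the two quantities being compared: the signed area, in which gaps of opposite sign cancel, and the unsigned area, the total separation of the ICFs.

```python
import numpy as np

def icc_2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-1.7 * a * (theta - b)))  # 1.7: normal-ogive scaling

def raju_areas(a_r, b_r, a_f, b_f, lo=-6.0, hi=6.0, n=2001):
    """Signed and unsigned area between reference and focal 2PL ICCs,
    approximated by quadrature on [lo, hi] (Raju, 1988, gives exact forms)."""
    theta = np.linspace(lo, hi, n)
    gap = icc_2pl(theta, a_r, b_r) - icc_2pl(theta, a_f, b_f)
    return np.trapz(gap, theta), np.trapz(np.abs(gap), theta)
```

When the two discriminations are equal, the signed area reduces to the difference in difficulties, b_f - b_r, which is a useful sanity check for the quadrature.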
Raju, Nambury S.; Fortmann-Johnson, Kristen A.; Kim, Wonsuk; Morris, Scott B.; Nering, Michael L.; Oshima, T. C. – Applied Psychological Measurement, 2009
The recent study of Oshima, Raju, and Nanda proposes the item parameter replication (IPR) method for assessing statistical significance of the noncompensatory differential item functioning (NCDIF) index within the differential functioning of items and tests (DFIT) framework. Previous Monte Carlo simulations have found that the appropriate cutoff…
Descriptors: Test Bias, Statistical Significance, Item Response Theory, Monte Carlo Methods

Flowers, Claudia P.; Oshima, T. C.; Raju, Nambury S. – Applied Psychological Measurement, 1999
Examined the polytomous differential functioning of items and tests (DFIT) framework proposed by N. Raju and others through simulation. Findings show that the DFIT framework is effective in identifying differential item functioning and differential test functioning. (SLD)
Descriptors: Identification, Item Bias, Models, Test Bias

Oshima, T. C.; Raju, Nambury S.; Flowers, Claudia P. – Journal of Educational Measurement, 1997
Defines and demonstrates a framework for studying differential item functioning and differential test functioning for tests that are intended to be multidimensional. The procedure, which is illustrated with simulated data, is an extension of the unidimensional differential functioning of items and tests approach (N. Raju, W. van der Linden, and P.…
Descriptors: Item Bias, Item Response Theory, Models, Simulation

Miller, M. David; Oshima, T. C. – Applied Psychological Measurement, 1992
A two-stage procedure for estimating item bias was examined with six indices of item bias and the Mantel-Haenszel statistic. Results suggest that the two-stage procedure is not very useful when the number of biased items is small and bias magnitude is weak. (SLD)
Descriptors: Comparative Analysis, Computer Simulation, Estimation (Mathematics), Ethnic Groups

Oshima, T. C.; Miller, M. David – Applied Psychological Measurement, 1992
How item bias indexes based on item response theory (IRT) identify bias that results from multidimensionality is demonstrated. Simulation results suggest that IRT-based bias indexes detect multidimensional items with bias but do not detect multidimensional items without bias. Nor do they confound bias with between-group differences on the primary trait…
Descriptors: Computer Simulation, Item Bias, Item Response Theory, Mathematical Models
McCarty, F. A.; Oshima, T. C.; Raju, Nambury S. – Applied Measurement in Education, 2007
Oshima, Raju, Flowers, and Slinde (1998) described procedures for identifying sources of differential functioning for dichotomous data using differential bundle functioning (DBF) derived from the differential functioning of items and test (DFIT) framework (Raju, van der Linden, & Fleer, 1995). The purpose of this study was to extend the…
Descriptors: Rating Scales, Test Bias, Scoring, Test Items
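For the dichotomous case this study builds on, differential bundle functioning can be sketched as DFIT's test-level logic restricted to a suspect bundle: compare expected bundle scores (sums of ICCs over the bundle's items) between groups, averaged over the focal ability distribution. A hypothetical 2PL illustration, not the paper's notation:

```python
import numpy as np

def dbf(theta_focal, params_f, params_r):
    """Squared gap between focal and reference expected bundle scores,
    averaged over focal thetas; params_* are lists of (a, b) per bundle item."""
    def icc(theta, a, b):
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))
    t_f = sum(icc(theta_focal, a, b) for a, b in params_f)  # focal bundle curve
    t_r = sum(icc(theta_focal, a, b) for a, b in params_r)  # reference bundle curve
    return np.mean((t_f - t_r) ** 2)
```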
Oshima, T. C.; Raju, Nambury S.; Nanda, Alice O. – Journal of Educational Measurement, 2006
A new item parameter replication method is proposed for assessing the statistical significance of the noncompensatory differential item functioning (NCDIF) index associated with the differential functioning of items and tests framework. In this new method, a cutoff score for each item is determined by obtaining a (1 - alpha) percentile rank score…
Descriptors: Evaluation Methods, Statistical Distributions, Statistical Significance, Test Bias
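The logic of the IPR cutoff can be sketched in a few lines: repeatedly draw two sets of item parameters from the same estimated sampling distribution (so that no DIF holds by construction), compute NCDIF for each pair, and take the (1 - alpha) percentile of the resulting null distribution. Everything below (the 2PL parameterization, the multivariate-normal draw, the inputs) is an assumed simplification of the paper's procedure.

```python
import numpy as np

def ipr_cutoff(mean_ab, cov_ab, theta_focal, alpha=0.05, n_rep=1000, seed=0):
    """(1 - alpha) percentile of NCDIF under no DIF for one 2PL item.
    mean_ab, cov_ab: linked (a, b) estimates and their sampling covariance."""
    rng = np.random.default_rng(seed)
    def icc(theta, a, b):
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))
    null_ncdif = np.empty(n_rep)
    for r in range(n_rep):
        a1, b1 = rng.multivariate_normal(mean_ab, cov_ab)  # "focal" replicate
        a2, b2 = rng.multivariate_normal(mean_ab, cov_ab)  # "reference" replicate
        gap = icc(theta_focal, a1, b1) - icc(theta_focal, a2, b2)
        null_ncdif[r] = np.mean(gap ** 2)
    return np.quantile(null_ncdif, 1 - alpha)
```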
Flowers, Claudia P.; Oshima, T. C. – 1994
This study was patterned after a previous study by Skaggs and Lissitz (1992) in which inconsistency of differential item functioning (DIF) was reported across test administrations. They suggested multidimensionality of test data as one possible reason for inconsistency. Therefore, in this study, DIF indices which were developed recently with a…
Descriptors: Ethnic Groups, Item Bias, Mathematics, Reliability

Oshima, T. C.; And Others – Applied Measurement in Education, 1994
A procedure to detect differential item functioning (DIF) is introduced that is suitable for tests with a cutoff score. DIF is assessed on a limited closed interval of thetas in which a cutoff score falls. How this approach affects the identification of DIF items is demonstrated with real data sets. (SLD)
Descriptors: Ability, Classification, Cutting Scores, Identification
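The idea of assessing DIF only where it matters for a cut score can be sketched by restricting the NCDIF average to a closed ability interval around the cutoff; the interval endpoints and the callable ICCs below are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def interval_ncdif(theta_focal, icc_f, icc_r, lower, upper):
    """NCDIF restricted to [lower, upper]: average the squared ICC gap only
    over focal examinees whose theta falls in the interval containing the cut."""
    theta = np.asarray(theta_focal)
    inside = (theta >= lower) & (theta <= upper)
    gap = icc_f(theta[inside]) - icc_r(theta[inside])
    return np.mean(gap ** 2)
```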