Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 0
Since 2006 (last 20 years): 10
Descriptor
Test Bias: 13
Simulation: 12
Item Response Theory: 11
Test Items: 10
Evaluation Methods: 8
Models: 8
Foreign Countries: 5
Monte Carlo Methods: 5
Computation: 4
Error of Measurement: 3
Item Bias: 3
Source
Educational and Psychological…: 6
Applied Psychological…: 4
Applied Measurement in…: 2
Journal of Educational…: 2
Journal of Applied Measurement: 1
Journal of Experimental…: 1
Author
Wang, Wen-Chung: 18
Shih, Ching-Lin: 5
Su, Ya-Hui: 3
Huang, Hung-Yu: 2
Cheng, Ying-Yao: 1
Ho, Yi-Hui: 1
Li, Xiaomin: 1
Liu, Tien-Hsiang: 1
Sun, Guo-Wei: 1
Wilson, Mark: 1
Yang, Chih-Chien: 1
Publication Type
Journal Articles: 16
Reports - Research: 10
Reports - Evaluative: 7
Speeches/Meeting Papers: 1
Education Level
Junior High Schools: 1
Middle Schools: 1
Secondary Education: 1
Location
Taiwan: 3
Assessments and Surveys
Graduate Record Examinations: 1
Wechsler Adult Intelligence…: 1
Assessment of Differential Item Functioning under Cognitive Diagnosis Models: The DINA Model Example
Li, Xiaomin; Wang, Wen-Chung – Journal of Educational Measurement, 2015
The assessment of differential item functioning (DIF) is routinely conducted to ensure test fairness and validity. Although many DIF assessment methods have been developed in the context of classical test theory and item response theory, they are not applicable to cognitive diagnosis models (CDMs), as the underlying latent attributes of CDMs are…
Descriptors: Test Bias, Models, Cognitive Measurement, Evaluation Methods
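The abstract above builds on the DINA model's item response function. As a minimal sketch of that standard function (not the paper's DIF procedure itself; parameter values are illustrative):

```python
def dina_prob(alpha, q, slip, guess):
    """P(correct response) for one examinee on one item under the DINA model.

    alpha : list of 0/1 attribute-mastery indicators for the examinee
    q     : list of 0/1 Q-matrix entries (attributes the item requires)
    slip  : probability of answering wrong despite mastering all required attributes
    guess : probability of answering right without full mastery
    """
    # eta = 1 iff the examinee has mastered every attribute the item requires
    eta = all(a >= qk for a, qk in zip(alpha, q))
    return (1.0 - slip) if eta else guess

# An examinee mastering attributes 1 and 2 but not 3, on an item requiring 1 and 2:
p = dina_prob([1, 1, 0], [1, 1, 0], slip=0.1, guess=0.2)  # -> 0.9
```

Under this model, DIF corresponds to slip or guess parameters that differ between the reference and focal groups for examinees with the same attribute profile.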
Shih, Ching-Lin; Liu, Tien-Hsiang; Wang, Wen-Chung – Educational and Psychological Measurement, 2014
In this study, the regression procedure of the simultaneous item bias test (SIBTEST) method and the differential item functioning (DIF)-free-then-DIF strategy are applied to the logistic regression (LR) method simultaneously. These procedures are used to adjust for the effect of matching on observed score rather than true score and to better control the Type I error…
Descriptors: Test Bias, Regression (Statistics), Test Items, True Scores
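The LR method the abstract refers to is the standard logistic-regression DIF procedure: three nested models are fitted and compared by likelihood-ratio tests. A minimal sketch of that comparison, assuming the log-likelihoods have already been obtained from fitted models:

```python
# Nested logistic-regression DIF models (Swaminathan-Rogers framework):
#   M0: logit(P) = b0 + b1*score                 (no DIF)
#   M1: M0 + b2*group                            (adds uniform DIF)
#   M2: M1 + b3*score*group                      (adds nonuniform DIF)
def lr_dif_stats(ll0, ll1, ll2):
    """Likelihood-ratio chi-square statistics (df = 1 each) for DIF tests.

    ll0, ll1, ll2 : maximized log-likelihoods of models M0, M1, M2
    """
    return {
        "uniform": -2.0 * (ll0 - ll1),      # test of the group main effect
        "nonuniform": -2.0 * (ll1 - ll2),   # test of the score-by-group interaction
    }

# Illustrative (hypothetical) fitted log-likelihoods:
stats = lr_dif_stats(-520.0, -515.0, -514.5)
```

Each statistic is referred to a chi-square distribution with one degree of freedom; scale purification and a DIF-free anchor change which items enter the matching score, not this comparison itself.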
Huang, Hung-Yu; Wang, Wen-Chung – Journal of Educational Measurement, 2014
The DINA (deterministic inputs, noisy "and" gate) model has been widely used in cognitive diagnosis tests and in the process of test development. Slip and guess parameters are included in the DINA model's item response function. This study aimed to extend the DINA model by using the random-effect approach to allow…
Descriptors: Models, Guessing (Tests), Probability, Ability
Wang, Wen-Chung; Shih, Ching-Lin; Sun, Guo-Wei – Educational and Psychological Measurement, 2012
The DIF-free-then-DIF (DFTD) strategy consists of two steps: (a) select a set of items that are the most likely to be DIF-free and (b) assess the other items for DIF (differential item functioning) using the designated items as anchors. The rank-based method together with the computer software IRTLRDIF can select a set of DIF-free polytomous items…
Descriptors: Test Bias, Test Items, Item Response Theory, Evaluation Methods
Wang, Wen-Chung; Shih, Ching-Lin – Applied Psychological Measurement, 2010
Three multiple indicators-multiple causes (MIMIC) methods, namely, the standard MIMIC method (M-ST), the MIMIC method with scale purification (M-SP), and the MIMIC method with a pure anchor (M-PA), were developed to assess differential item functioning (DIF) in polytomous items. In a series of simulations, it appeared that all three methods…
Descriptors: Methods, Test Bias, Test Items, Error of Measurement
Huang, Hung-Yu; Wang, Wen-Chung – Educational and Psychological Measurement, 2013
Both testlet design and hierarchical latent traits are fairly common in educational and psychological measurements. This study aimed to develop a new class of higher order testlet response models that consider both local item dependence within testlets and a hierarchy of latent traits. Due to high dimensionality, the authors adopted the Bayesian…
Descriptors: Item Response Theory, Models, Bayesian Statistics, Computation
Shih, Ching-Lin; Wang, Wen-Chung – Applied Psychological Measurement, 2009
The multiple indicators, multiple causes (MIMIC) method with a pure short anchor was proposed to detect differential item functioning (DIF). A simulation study showed that the MIMIC method with an anchor of 1, 2, 4, or 10 DIF-free items yielded a well-controlled Type I error rate even when such tests contained as many as 40% DIF items. In general,…
Descriptors: Test Bias, Simulation, Methods, Factor Analysis
Cheng, Ying-Yao; Wang, Wen-Chung; Ho, Yi-Hui – Educational and Psychological Measurement, 2009
Educational and psychological tests are often composed of multiple short subtests, each measuring a distinct latent trait. Unfortunately, short subtests suffer from low measurement precision, which makes the bandwidth-fidelity dilemma inevitable. In this study, the authors demonstrate how a multidimensional Rasch analysis can be employed to take…
Descriptors: Item Response Theory, Measurement, Correlation, Measures (Individuals)
Wang, Wen-Chung; Shih, Ching-Lin; Yang, Chih-Chien – Educational and Psychological Measurement, 2009
This study implements a scale purification procedure onto the standard MIMIC method for differential item functioning (DIF) detection and assesses its performance through a series of simulations. It is found that the MIMIC method with scale purification (denoted as M-SP) outperforms the standard MIMIC method (denoted as M-ST) in controlling…
Descriptors: Test Items, Measures (Individuals), Test Bias, Evaluation Research
Wang, Wen-Chung – Applied Psychological Measurement, 2008
Raju and Oshima (2005) proposed two prophecy formulas based on item response theory in order to predict the reliability of ability estimates for a test after a change in its length. The first prophecy formula is equivalent to the classical Spearman-Brown prophecy formula. The second prophecy formula is misleading because of an underlying false…
Descriptors: Test Reliability, Item Response Theory, Computation, Evaluation Methods
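The classical Spearman-Brown prophecy formula the abstract mentions can be sketched directly:

```python
def spearman_brown(rho, k):
    """Predicted reliability after changing test length by factor k.

    rho : reliability of the current test
    k   : new length / old length (k = 2 doubles the test)
    """
    return k * rho / (1.0 + (k - 1.0) * rho)

# Doubling a test with reliability .70:
spearman_brown(0.70, 2)  # -> 0.8235...
```

The formula assumes the added items are parallel to the existing ones; the abstract's point is that Raju and Oshima's first IRT-based formula reduces to exactly this expression.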

Wang, Wen-Chung – Journal of Applied Measurement, 2000
Extended conventional two-group differential item functioning (DIF) analysis for dichotomous items to factorial DIF analysis for polytomous items where multiple grouping factors with multiple groups in each are analyzed jointly. Simulation studies and analysis of a real data set with 1,924 subjects show that the parameters of the proposed model can be…
Descriptors: Groups, Item Bias, Models, Simulation
Wang, Wen-Chung – 1998
Conventional two-group differential item functioning (DIF) analysis for dichotomous items is extended to factorial DIF analysis for polytomous items where multiple grouping factors with multiple groups in each are jointly analyzed. By adopting the formulation of general linear models, item parameters across all possible groups are treated as a…
Descriptors: Foreign Countries, Identification, Item Bias, Models
Wang, Wen-Chung – Journal of Experimental Education, 2004
Scale indeterminacy in analysis of differential item functioning (DIF) within the framework of item response theory can be resolved by imposing one of three anchor-item methods: the equal-mean-difficulty method, the all-other anchor-item method, and the constant anchor-item method. In this article, the applicability and limitations of these three methods are…
Descriptors: Test Bias, Models, Item Response Theory, Comparative Analysis
Wang, Wen-Chung – 1998
The conventional two-group differential item functioning (DIF) analysis is extended to an analysis of variance-like (ANOVA-like) DIF analysis where multiple factors with multiple groups are compared simultaneously. Moreover, DIF is treated as a parameter to be estimated rather than simply a sign to be detected. This proposed approach allows the…
Descriptors: Analysis of Variance, Foreign Countries, Item Bias, Item Response Theory
Wang, Wen-Chung; Su, Ya-Hui – Applied Measurement in Education, 2004
In this study we investigated the effects of the average signed area (ASA) between the item characteristic curves of the reference and focal groups and of three test purification procedures on uniform differential item functioning (DIF) detection via the Mantel-Haenszel (M-H) method through Monte Carlo simulations. The results showed that ASA,…
Descriptors: Test Bias, Student Evaluation, Evaluation Methods, Test Items
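The M-H method in the abstract above rests on the Mantel-Haenszel common odds ratio pooled over matched total-score strata. A minimal sketch of that statistic and the ETS delta transformation (stratum counts here are illustrative, not from the study):

```python
import math

def mantel_haenszel_odds(strata):
    """Mantel-Haenszel common odds ratio across matched-score strata.

    strata : list of (a, b, c, d) tuples, one per total-score level, where
             a = reference correct, b = reference incorrect,
             c = focal correct,     d = focal incorrect.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

def mh_delta(alpha_mh):
    """ETS delta scale: negative values indicate DIF against the focal group."""
    return -2.35 * math.log(alpha_mh)

# Two illustrative score strata with identical odds in both groups (no DIF):
alpha = mantel_haenszel_odds([(40, 10, 40, 10), (30, 20, 30, 20)])  # -> 1.0
```

Purification procedures re-form these strata after provisionally removing DIF items from the matching score, which is why they interact with ASA in the simulations described above.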