Sooyong Lee; Suhwa Han; Seung W. Choi – Journal of Educational Measurement, 2024
Research has shown that multiple-indicator multiple-cause (MIMIC) models can result in inflated Type I error rates in detecting differential item functioning (DIF) when the assumption of equal latent variance is violated. This study explains how the violation of the equal variance assumption adversely impacts the detection of nonuniform DIF and…
Descriptors: Factor Analysis, Bayesian Statistics, Test Bias, Item Response Theory
Patarapichayatham, Chalie; Kamata, Akihito; Kanjanawasee, Sirichai – Educational and Psychological Measurement, 2012
Model specification issues on the cross-level two-way differential item functioning model were previously investigated by Patarapichayatham et al. (2009). Their study clarified that an incorrect model specification can easily lead to biased estimates of key parameters. The objective of this article is to provide further insights on the issue by…
Descriptors: Test Bias, Models, Bayesian Statistics, Statistical Analysis
Frederickx, Sofie; Tuerlinckx, Francis; De Boeck, Paul; Magis, David – Journal of Educational Measurement, 2010
In this paper we present a new methodology for detecting differential item functioning (DIF). We introduce a DIF model, called the random item mixture (RIM), that is based on a Rasch model with random item difficulties (besides the common random person abilities). In addition, a mixture model is assumed for the item difficulties such that the…
Descriptors: Test Bias, Models, Test Items, Difficulty Level
Huang, Hung-Yu; Wang, Wen-Chung – Educational and Psychological Measurement, 2013
Both testlet design and hierarchical latent traits are fairly common in educational and psychological measurements. This study aimed to develop a new class of higher order testlet response models that consider both local item dependence within testlets and a hierarchy of latent traits. Due to high dimensionality, the authors adopted the Bayesian…
Descriptors: Item Response Theory, Models, Bayesian Statistics, Computation
Soares, Tufi M.; Goncalves, Flavio B.; Gamerman, Dani – Journal of Educational and Behavioral Statistics, 2009
In this article, an integrated Bayesian model for differential item functioning (DIF) analysis is proposed. The model is integrated in the sense of modeling the responses along with the DIF analysis. This approach allows DIF detection and explanation in a simultaneous setup. Previous empirical studies and/or subjective beliefs about the item…
Descriptors: Test Bias, Bayesian Statistics, Models, Item Response Theory
Bilir, Mustafa Kuzey – ProQuest LLC, 2009
This study uses a new psychometric model (mixture item response theory-MIMIC model) that simultaneously estimates differential item functioning (DIF) across manifest groups and latent classes. Current DIF detection methods investigate DIF from only one side, either across manifest groups (e.g., gender, ethnicity, etc.), or across latent classes…
Descriptors: Test Items, Testing Programs, Markov Processes, Psychometrics
Wainer, Howard – Journal of Educational and Behavioral Statistics, 2010
In this essay, the author tries to look forward into the 21st century to divine three things: (i) What skills will researchers in the future need to solve the most pressing problems? (ii) What are some of the most likely candidates to be those problems? and (iii) What are some current areas of research that seem mined out and should not distract…
Descriptors: Research Skills, Researchers, Internet, Access to Information
van Barneveld, Christina – Applied Psychological Measurement, 2007
The purpose of this study is to examine the effects of a false assumption regarding the motivation of examinees on test construction. Simulated data were generated using two models of item responses (the three-parameter logistic item response model alone and in combination with Wise's examinee persistence model) and were calibrated using a…
Descriptors: Test Construction, Item Response Theory, Models, Bayesian Statistics
Sinharay, Sandip – ETS Research Report Series, 2004
Assessing fit of psychometric models has always been an issue of enormous interest, but there exists no unanimously agreed upon item fit diagnostic for the models. Bayesian networks, frequently used in educational assessments (see, for example, Mislevy, Almond, Yan, & Steinberg, 2001) primarily for learning about students' knowledge and…
Descriptors: Bayesian Statistics, Networks, Models, Goodness of Fit