Showing 1 to 15 of 34 results
Peer reviewed
Direct link
Sooyong Lee; Suhwa Han; Seung W. Choi – Journal of Educational Measurement, 2024
Research has shown that multiple-indicator multiple-cause (MIMIC) models can result in inflated Type I error rates in detecting differential item functioning (DIF) when the assumption of equal latent variance is violated. This study explains how the violation of the equal variance assumption adversely impacts the detection of nonuniform DIF and…
Descriptors: Factor Analysis, Bayesian Statistics, Test Bias, Item Response Theory
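The MIMIC setup discussed above can be made concrete with a short sketch. This is a minimal illustration, not the authors' code: it assumes the R package lavaan, five hypothetical items y1-y5, a binary group covariate, and uniform DIF studied on item y3.

```r
# Minimal MIMIC DIF sketch (lavaan assumed; all variable names hypothetical).
library(lavaan)

mimic_model <- '
  theta =~ y1 + y2 + y3 + y4 + y5   # one latent trait, five indicators
  theta ~ group                     # impact: group difference in the latent mean
  y3 ~ group                        # direct effect on y3 = uniform DIF
'
fit <- sem(mimic_model, data = dat)
summary(fit, standardized = TRUE)
```

The test on the y3 ~ group path presumes equal latent variance across groups; the article's argument is that violating that assumption distorts exactly this kind of test and inflates Type I error.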
Peer reviewed
PDF on ERIC: Download full text
Beheshti, Shima; Safa, Mohammad Ahmadi – Iranian Journal of Language Teaching Research, 2023
The indefinite nature of test fairness and the varied interpretations and definitions of the concept have stirred controversy over the years, necessitating its reconceptualization. On this basis, this study aimed to explore the empirical validity of Kunnan's (2008) Test Fairness Framework (TFF) and revisit the established…
Descriptors: Test Bias, Equal Education, Grounded Theory, Test Construction
Peer reviewed
PDF on ERIC: Download full text
Tim Jacobbe; Bob delMas; Brad Hartlaub; Jeff Haberstroh; Catherine Case; Steven Foti; Douglas Whitaker – Numeracy, 2023
The development of assessments as part of the funded LOCUS project is described. The assessments measure students' conceptual understanding of statistics as outlined in the GAISE PreK-12 Framework. Results are reported from a large-scale administration to 3,430 students in grades 6 through 12 in the United States. Items were designed to assess…
Descriptors: Statistics Education, Common Core State Standards, Student Evaluation, Elementary School Students
Peer reviewed
Direct link
Qi Huang; Daniel M. Bolt; Weicong Lyu – Large-scale Assessments in Education, 2024
Large scale international assessments depend on invariance of measurement across countries. An important consideration when observing cross-national differential item functioning (DIF) is whether the DIF actually reflects a source of bias, or might instead be a methodological artifact reflecting item response theory (IRT) model misspecification.…
Descriptors: Test Items, Item Response Theory, Test Bias, Test Validity
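For readers who want the mechanics, a multiple-group IRT DIF check of the kind at issue can be sketched with the R package mirt (an assumption; the article does not name software). Whether a flagged item reflects genuine bias or model misspecification is precisely the interpretive question the abstract raises.

```r
# Multiple-group 2PL DIF sketch with mirt ('resp' is a hypothetical
# item-response matrix, 'country' a hypothetical grouping factor).
library(mirt)

# Fit with all item parameters constrained equal across groups,
# freeing latent means and variances instead.
mod <- multipleGroup(resp, model = 1, group = country,
                     invariance = c("free_means", "free_var", colnames(resp)))

# Free each item's slope (a1) and intercept (d) in turn and test the change.
DIF(mod, which.par = c("a1", "d"), scheme = "drop")
```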
Peer reviewed
Direct link
El Masri, Yasmine H.; Andrich, David – Applied Measurement in Education, 2020
In large-scale educational assessments, it is generally required that tests are composed of items that function invariantly across the groups to be compared. Despite efforts to ensure invariance in the item construction phase, for a range of reasons (including the security of items) it is often necessary to account for differential item…
Descriptors: Models, Goodness of Fit, Test Validity, Achievement Tests
Peer reviewed
Direct link
Walker, Cindy M.; Gocer Sahin, Sakine – Educational and Psychological Measurement, 2017
The theoretical reason for the presence of differential item functioning (DIF) is that data are multidimensional and two groups of examinees differ in their underlying ability distribution for the secondary dimension(s). Therefore, the purpose of this study was to determine how much the secondary ability distributions must differ before DIF is…
Descriptors: Item Response Theory, Test Bias, Correlation, Statistical Significance
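The study design can be illustrated with a base-R simulation sketch: both groups share the primary ability distribution, the focal group is shifted by an illustrative 0.5 SD on the secondary dimension, and a Mantel-Haenszel test is applied to an item loading on both dimensions. All values are hypothetical.

```r
# Two-dimensional DIF simulation sketch (base R; illustrative values).
set.seed(1)
n      <- 2000
group  <- rep(0:1, each = n / 2)
theta1 <- rnorm(n)                      # primary ability: same in both groups
theta2 <- rnorm(n, mean = 0.5 * group)  # secondary ability: groups differ by 0.5 SD
p      <- plogis(1.0 * theta1 + 0.8 * theta2)  # item loads on both dimensions
resp   <- rbinom(n, 1, p)

# Stratify on the primary ability (operational analyses use observed scores).
strata <- cut(theta1, quantile(theta1, 0:5 / 5), include.lowest = TRUE)
mantelhaen.test(table(resp, group, strata))
```

Rerunning with smaller group differences on theta2 shows how gradually the DIF signal emerges, which is the question the study pursues systematically.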
Peer reviewed
PDF on ERIC: Download full text
Baghaei, Purya; Kubinger, Klaus D. – Practical Assessment, Research & Evaluation, 2015
The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Mair, 2014) functions to estimate the model and interpret its parameters. The…
Descriptors: Item Response Theory, Models, Test Validity, Hypothesis Testing
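Because the paper is explicitly a tutorial on eRm, a compressed sketch of the workflow may help. The response matrix X and the item-by-basic-parameter design matrix W below are hypothetical placeholders, and the likelihood-ratio comparison is one standard way, not necessarily the paper's exact code, to test the LLTM's linear constraints.

```r
# LLTM sketch with the eRm package (X: 0/1 response matrix with 3 items;
# W: 3 items x 2 basic parameters; both are hypothetical placeholders).
library(eRm)

W <- matrix(c(1, 0,
              1, 1,
              0, 1), nrow = 3, byrow = TRUE)

rasch_fit <- RM(X)        # unconstrained Rasch model
lltm_fit  <- LLTM(X, W)   # Rasch model with linear constraints on item parameters
summary(lltm_fit)

# The LLTM is nested in the Rasch model, so the constraints can be tested
# with a likelihood-ratio statistic (loglik/npar fields assumed from eRm fits).
lr <- -2 * (lltm_fit$loglik - rasch_fit$loglik)
pchisq(lr, df = rasch_fit$npar - lltm_fit$npar, lower.tail = FALSE)
```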
Goldhaber, Dan; Chaplin, Duncan – Center for Education Data & Research, 2012
In a provocative and influential paper, Jesse Rothstein (2010) finds that standard value added models (VAMs) suggest implausible future teacher effects on past student achievement, a finding that obviously cannot be viewed as causal. This is the basis of a falsification test (the Rothstein falsification test) that appears to indicate bias in VAM…
Descriptors: School Effectiveness, Teacher Effectiveness, Achievement Gains, Statistical Bias
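The mechanics of the falsification test are easy to state in regression form; here is a base-R sketch with hypothetical variable names (score_g3 and score_g4 for earlier outcomes, teacher_g5 for the future teacher assignment).

```r
# Rothstein-style falsification sketch (base R; 'dat' and all variable
# names are hypothetical). If grade-5 teacher assignment "predicts"
# grade-4 scores conditional on grade 3, the effect cannot be causal.
full <- lm(score_g4 ~ score_g3 + factor(teacher_g5), data = dat)
null <- lm(score_g4 ~ score_g3, data = dat)
anova(null, full)   # joint F-test of the future-teacher indicators
```

Goldhaber and Chaplin's contribution concerns what a rejection of this F-test does and does not imply about bias in the VAM estimates themselves.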
Peoples, Shelagh – ProQuest LLC, 2012
The purpose of this study was to determine which of three competing models would provide reliable, interpretable, and responsive measures of elementary students' understanding of the nature of science (NOS). The Nature of Science Instrument-Elementary (NOSI-E), a 28-item Rasch-based instrument, was used to assess students' NOS…
Descriptors: Scientific Principles, Science Tests, Elementary School Students, Item Response Theory
Hou, Likun – ProQuest LLC, 2013
Analyzing examinees' responses using cognitive diagnostic models (CDMs) has the advantage of providing richer diagnostic information. To ensure the validity of the results from these models, differential item functioning (DIF) in CDMs needs to be investigated. In this dissertation, the model-based DIF detection method, the Wald-CDM procedure, is…
Descriptors: Test Bias, Models, Cognitive Processes, Diagnostic Tests
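The Wald machinery behind such procedures is generic and can be sketched without committing to a particular CDM package: given one item's parameter estimates from separate reference- and focal-group calibrations, the statistic is a quadratic form that is asymptotically chi-square under the no-DIF null. All objects below are hypothetical placeholders.

```r
# Generic Wald DIF test sketch (base R). est_ref/est_foc: the item's
# parameter estimates per group; V_ref/V_foc: their covariance matrices.
d <- est_ref - est_foc
W <- as.numeric(t(d) %*% solve(V_ref + V_foc) %*% d)
pchisq(W, df = length(d), lower.tail = FALSE)   # small p suggests DIF
```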
Peer reviewed
Direct link
Bornstein, Robert F. – Psychological Assessment, 2011
Although definitions of validity have evolved considerably since L. J. Cronbach and P. E. Meehl's classic (1955) review, contemporary validity research continues to emphasize correlational analyses assessing predictor-criterion relationships, with most outcome criteria being self-reports. The present article describes an alternative way of…
Descriptors: Test Validity, Scores, Models, Psychological Evaluation
Peer reviewed
Direct link
Kane, Michael – Language Testing, 2010
This paper presents the author's critique of Xiaoming Xi's article, "How do we go about investigating test fairness?," which lays out a broad framework for studying fairness as comparable validity across groups within the population of interest. Xi proposes to develop a fairness argument that would identify and evaluate potential fairness-based…
Descriptors: Test Bias, Test Validity, Language Tests, Testing
Peer reviewed
Direct link
Young, John W. – Educational Assessment, 2009
In this article, I specify a conceptual framework for test validity research on content assessments taken by English language learners (ELLs) in U.S. schools in grades K-12. This framework is modeled after one previously delineated by Willingham et al. (1988), which was developed to guide research on students with disabilities. In this framework…
Descriptors: Test Validity, Evaluation Research, Achievement Tests, Elementary Secondary Education
Kim, Jihye – ProQuest LLC, 2010
In DIF studies, a Type I error refers to the mistake of identifying non-DIF items as DIF items, and a Type I error rate refers to the proportion of Type I errors in a simulation study. The possibility of making a Type I error in DIF studies is always present, and a high probability of making such an error can weaken the validity of the assessment.…
Descriptors: Test Bias, Test Length, Simulation, Testing
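In simulation terms, the Type I error rate is simply the false-positive proportion across replications of a DIF-free design. A base-R sketch, with illustrative settings throughout:

```r
# Type I error rate sketch: generate DIF-free Rasch data repeatedly,
# test one item with Mantel-Haenszel, and count false positives.
set.seed(2)
type1_rate <- function(n = 1000, k = 20, alpha = 0.05, reps = 200) {
  flags <- replicate(reps, {
    group  <- rep(0:1, each = n / 2)
    theta  <- rnorm(n)                    # same ability distribution in both groups
    b      <- seq(-2, 2, length.out = k)  # item difficulties
    resp   <- matrix(rbinom(n * k, 1, plogis(outer(theta, b, "-"))), n, k)
    strata <- cut(rowSums(resp[, -1]), breaks = 5)   # rest-score strata
    tab    <- table(resp[, 1], group, strata)
    mantelhaen.test(tab)$p.value < alpha
  })
  mean(flags)   # proportion of replications flagging the DIF-free item
}
type1_rate()    # should hover near the nominal alpha
```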
Peer reviewed
Direct link
Johnson, Emily C.; Meade, Adam W.; DuVernet, Amy M. – Structural Equation Modeling: A Multidisciplinary Journal, 2009
Confirmatory factor analytic tests of measurement invariance (MI) require a referent indicator (RI) for model identification. Although the assumption that the RI is perfectly invariant across groups is acknowledged as problematic, the literature provides relatively little guidance for researchers to identify the conditions under which the practice…
Descriptors: Measurement, Validity, Factor Analysis, Models
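The referent-indicator problem is easiest to see in code. In the lavaan sketch below (an assumption; the article is not tied to lavaan, and the item names are hypothetical), the first indicator's loading is fixed to 1 in both groups by default, so every subsequent invariance test is made relative to that item; if the referent itself functions differently across groups, those tests are distorted.

```r
# Measurement invariance sketch with lavaan; y1 is the default referent.
library(lavaan)

model  <- 'f =~ y1 + y2 + y3 + y4'
config <- cfa(model, data = dat, group = "grp")     # configural model
metric <- cfa(model, data = dat, group = "grp",
              group.equal = "loadings")             # metric: loadings equal
anova(config, metric)   # chi-square difference test of loading invariance
```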