| Publication Date | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 2 |
| Since 2007 (last 20 years) | 4 |

| Descriptor | Results |
| --- | --- |
| Difficulty Level | 7 |
| Factor Analysis | 7 |
| Sample Size | 7 |
| Test Items | 5 |
| Item Response Theory | 4 |
| Monte Carlo Methods | 4 |
| Correlation | 3 |
| Computation | 2 |
| Effect Size | 2 |
| Error of Measurement | 2 |
| Least Squares Statistics | 2 |

| Author | Results |
| --- | --- |
| Ahn, Soyeon | 2 |
| Finch, Holmes | 1 |
| Ito, Kyoko | 1 |
| Jin, Ying | 1 |
| Kogar, Hakan | 1 |
| Marsh, Herbert W. | 1 |
| Myers, Nicholas D. | 1 |
| Park, Sung Eun | 1 |
| Penfield, Randall D. | 1 |
| Sykes, Robert C. | 1 |
| Tucker, Ledyard R. | 1 |

| Publication Type | Results |
| --- | --- |
| Reports - Research | 6 |
| Journal Articles | 4 |
| Reports - Evaluative | 1 |
| Speeches/Meeting Papers | 1 |

| Audience | Results |
| --- | --- |
| Researchers | 1 |

Park, Sung Eun; Ahn, Soyeon; Zopluoglu, Cengiz – Educational and Psychological Measurement, 2021
This study presents a new approach to synthesizing differential item functioning (DIF) effect size: First, using correlation matrices from each study, we perform a multigroup confirmatory factor analysis (MGCFA) that examines measurement invariance of a test item between two subgroups (i.e., focal and reference groups). Then we synthesize, across…
Descriptors: Item Analysis, Effect Size, Difficulty Level, Monte Carlo Methods
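
The abstract above describes two steps: a per-study MGCFA that yields a DIF effect estimate for an item, and a synthesis of those estimates across studies. As a hedged sketch of the second, generic step only, the snippet below pools per-study effect estimates with fixed-effect inverse-variance weights; the function name and numbers are illustrative assumptions, not the estimator used in the article.

```python
# Generic fixed-effect inverse-variance pooling of per-study DIF effect estimates.
# This illustrates the "synthesize across studies" step in the abstract only;
# it is not the article's estimator, and the values below are made up.
import numpy as np

def pool_effects(effects, variances):
    """Return the inverse-variance weighted mean and its standard error."""
    effects = np.asarray(effects, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    pooled = np.sum(weights * effects) / np.sum(weights)
    se = np.sqrt(1.0 / np.sum(weights))
    return pooled, se

# Three hypothetical studies reporting a DIF effect for the same item
print(pool_effects([0.12, 0.05, 0.20], [0.004, 0.006, 0.010]))
```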
Kogar, Hakan – International Journal of Assessment Tools in Education, 2018
The aim of this simulation study was to determine the relationship between true latent scores and estimated latent scores by including various control variables and different statistical models. The study also aimed to compare the statistical models and determine the effects of different distribution types, response formats and sample sizes on latent…
Descriptors: Simulation, Context Effect, Computation, Statistical Analysis
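
As a minimal sketch of the kind of recovery simulation described above (assuming, purely for illustration, a unidimensional 2PL model with EAP scoring; the article's actual models, response formats, distributions, and sample sizes may differ), one can correlate the generating latent scores with their estimates:

```python
# Simulate binary responses under a 2PL model, estimate latent scores by EAP,
# and report the recovery correlation. All settings are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_persons, n_items = 1000, 20
theta = rng.normal(size=n_persons)                    # true latent scores
a = rng.uniform(0.8, 2.0, n_items)                    # discriminations
b = rng.normal(0.0, 1.0, n_items)                     # difficulties

p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))   # 2PL response probabilities
x = (rng.uniform(size=p.shape) < p).astype(int)       # simulated item responses

# EAP estimates on a quadrature grid with a standard-normal prior
grid = np.linspace(-4, 4, 81)
pg = 1.0 / (1.0 + np.exp(-a * (grid[:, None] - b)))         # grid x items
loglik = x @ np.log(pg).T + (1 - x) @ np.log(1 - pg).T      # persons x grid
post = np.exp(loglik) * np.exp(-0.5 * grid**2)              # unnormalized posterior
post /= post.sum(axis=1, keepdims=True)
theta_hat = post @ grid                                     # estimated latent scores

print(np.corrcoef(theta, theta_hat)[0, 1])            # recovery correlation
```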
Jin, Ying; Myers, Nicholas D.; Ahn, Soyeon; Penfield, Randall D. – Educational and Psychological Measurement, 2013
The Rasch model, a member of a larger group of models within item response theory, is widely used in empirical studies. Detection of uniform differential item functioning (DIF) within the Rasch model typically employs null hypothesis testing with a concomitant consideration of effect size (e.g., signed area [SA]). Parametric equivalence between…
Descriptors: Test Bias, Effect Size, Item Response Theory, Comparative Analysis
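
For context on the signed area (SA) measure mentioned above: under the Rasch model it has a simple closed form, the difference between the two groups' difficulty estimates for the item. A small numerical check with made-up values (the sign convention and numbers below are illustrative, not from the article):

```python
# Under the Rasch model, the signed area between the reference- and focal-group
# item characteristic curves equals the difference in group-specific difficulties.
import numpy as np

b_ref, b_foc = -0.25, 0.40                  # illustrative difficulty estimates
theta = np.linspace(-8, 8, 4001)
dtheta = theta[1] - theta[0]

def rasch_icc(theta, b):
    return 1.0 / (1.0 + np.exp(-(theta - b)))

signed_area = np.sum(rasch_icc(theta, b_ref) - rasch_icc(theta, b_foc)) * dtheta
print(signed_area, b_foc - b_ref)           # both close to 0.65
```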
Finch, Holmes – Applied Psychological Measurement, 2010
The accuracy of item parameter estimates in the multidimensional item response theory (MIRT) model context has not been researched in great detail. This study examines the ability of two confirmatory factor analysis models specifically for dichotomous data to properly estimate item parameters using common formulae for converting factor…
Descriptors: Item Response Theory, Computation, Factor Analysis, Models
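
The "common formulae" the abstract refers to are, in one widely cited form (normal-ogive metric with a standardized latent variable), a = λ/√(1 − λ²) and b = τ/λ for a factor loading λ and item threshold τ; whether these are exactly the formulae examined in the article is an assumption. A minimal sketch:

```python
# Convert a dichotomous-item factor loading and threshold to IRT discrimination
# and difficulty in the normal-ogive metric: a = lambda / sqrt(1 - lambda^2),
# b = tau / lambda. This is one standard version of the conversion, stated here
# as an assumption about which formulae the article means.
import math

def cfa_to_irt(loading, threshold):
    a = loading / math.sqrt(1.0 - loading**2)
    b = threshold / loading
    return a, b

print(cfa_to_irt(0.6, 0.3))   # illustrative values -> (0.75, 0.5)
```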
Marsh, Herbert W.; And Others – 1989
The purpose of the present investigation is to examine the influence of sample size (N) and model complexity on a set of 23 goodness-of-fit (GOF) indices, including those typically used in confirmatory factor analysis. The focus was on two potential problems in assessing GOF: (1) some fit indices are substantially influenced by N so that tests of…
Descriptors: Computer Simulation, Difficulty Level, Factor Analysis, Goodness of Fit
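
One mechanism behind the first problem flagged above: fit indices defined directly from the model chi-square inherit its dependence on N, because the test statistic is roughly (N − 1) times the minimized fit-function value. A toy illustration (the fit-function value is made up):

```python
# At a fixed amount of model misfit (constant minimized fit-function value),
# the chi-square statistic grows roughly linearly with sample size N, so
# indices built directly on it change with N even though the model does not.
f_min = 0.05                       # illustrative minimized fit-function value
for n in (100, 500, 1000):
    chi_square = (n - 1) * f_min
    print(n, chi_square)
```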
Sykes, Robert C.; Ito, Kyoko – 1995
Whether the presence of bidimensionality has any effect on the adaptive recalibration of test items was studied through live-data simulation of computer adaptive testing (CAT) forms. The source data were examinee responses to the 298 scored multiple choice items of a licensure examination in a health care profession. Three 75-item part-forms,…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Estimation (Mathematics)
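
As background for the CAT machinery mentioned above, here is a generic sketch of one adaptive step, maximum-information item selection under a 2PL model. The pool size echoes the 298 scored items mentioned, but the parameter values and the selection rule are illustrative assumptions, not the study's recalibration procedure.

```python
# One generic CAT step: pick the unadministered item with the largest Fisher
# information at the current provisional ability estimate (2PL: I = a^2 * p * (1 - p)).
# All values here are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
a = rng.uniform(0.8, 2.0, 298)        # discriminations for a 298-item pool
b = rng.normal(0.0, 1.0, 298)         # difficulties
administered = [5, 17, 42]            # items already given
theta_hat = 0.3                       # current provisional ability estimate

p = 1.0 / (1.0 + np.exp(-a * (theta_hat - b)))
info = a**2 * p * (1.0 - p)           # Fisher information at theta_hat
info[administered] = -np.inf          # never reselect an administered item
print(int(np.argmax(info)))           # index of the next item to administer
```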
Tucker, Ledyard R.; And Others – 1986
A Monte Carlo study of five indices of dimensionality of binary items used a computer model that allowed sampling of both items and people. Five parameters were systematically varied in a factorial design: (1) number of common factors from one to five; (2) number of items, including 20, 30, 40, and 60; (3) sample sizes of 125 and 500; (4) nearly…
Descriptors: Correlation, Difficulty Level, Educational Research, Expectancy Tables
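
In the spirit of the Monte Carlo setup described above, the sketch below generates binary item responses from a single common factor and inspects eigenvalues of the inter-item correlation matrix as one crude dimensionality index; the report's actual five indices and factorial design are not reproduced, and all settings are illustrative assumptions.

```python
# Generate binary item responses from one common factor and examine eigenvalues
# of the inter-item correlation matrix; a dominant first eigenvalue is consistent
# with a single common factor. Settings are illustrative, not the report's design.
import numpy as np

rng = np.random.default_rng(2)
n_people, n_items, loading = 500, 30, 0.6

factor = rng.normal(size=(n_people, 1))
unique = rng.normal(size=(n_people, n_items))
latent = loading * factor + np.sqrt(1 - loading**2) * unique   # one-factor model
x = (latent > 0).astype(int)                                   # dichotomized items

eigvals = np.linalg.eigvalsh(np.corrcoef(x, rowvar=False))[::-1]
print(eigvals[:5])    # leading eigenvalues of the inter-item correlation matrix
```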
