Publication Date
| Date range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 2 |
| Since 2007 (last 20 years) | 4 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Correlation | 5 |
| Difficulty Level | 5 |
| Test Items | 5 |
| Item Response Theory | 3 |
| Comparative Analysis | 2 |
| Computation | 2 |
| Hierarchical Linear Modeling | 2 |
| Psychometrics | 2 |
| Simulation | 2 |
| Test Bias | 2 |
| Accuracy | 1 |
Source
| Source | Count |
| --- | --- |
| Journal of Educational Measurement | 5 |
Author
| Author | Count |
| --- | --- |
| Albano, Anthony D. | 1 |
| Bielinski, John | 1 |
| Bolt, Daniel M. | 1 |
| Cai, Liuhan | 1 |
| Davison, Mark L. | 1 |
| He, Wei | 1 |
| Jiao, Hong | 1 |
| Lease, Erin M. | 1 |
| Liao, Xiangyi | 1 |
| McConnell, Scott R. | 1 |
| Robitzsch, Alexander | 1 |
Publication Type
| Publication Type | Count |
| --- | --- |
| Journal Articles | 5 |
| Reports - Research | 3 |
| Reports - Descriptive | 1 |
| Reports - Evaluative | 1 |
Education Level
| Education Level | Count |
| --- | --- |
| Elementary Education | 1 |
Assessments and Surveys
| Assessment | Count |
| --- | --- |
| National Assessment of Educational Progress | 1 |
| Trends in International Mathematics and Science Study | 1 |
Bolt, Daniel M.; Liao, Xiangyi – Journal of Educational Measurement, 2021
We revisit the empirically observed positive correlation between DIF and difficulty studied by Freedle and commonly seen in tests of verbal proficiency when comparing populations of different mean latent proficiency levels. It is shown that a positive correlation between DIF and difficulty estimates is actually an expected result (absent any true…
Descriptors: Test Bias, Difficulty Level, Correlation, Verbal Tests
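As an illustrative aside (not a reconstruction of Bolt and Liao's derivation), the sketch below simulates the setting the abstract describes: two groups differing only in mean latent proficiency answer items generated from a 3PL-type model with no true DIF, a standardization-type DIF index is computed by matching on rest score, and that index is correlated with item difficulty. All parameter values and the choice of DIF index are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_items, n_ref, n_foc = 40, 4000, 4000

# Hypothetical item parameters: identical for both groups, i.e., no true DIF.
a = rng.uniform(0.8, 2.0, n_items)   # discrimination
b = rng.normal(0.0, 1.0, n_items)    # difficulty
c = np.full(n_items, 0.20)           # lower asymptote (guessing)

def simulate(theta):
    """0/1 responses under a 3PL model for latent proficiencies `theta`."""
    p = c + (1.0 - c) / (1.0 + np.exp(-a * (theta[:, None] - b)))
    return (rng.uniform(size=p.shape) < p).astype(int)

# Groups differ only in mean proficiency.
x_ref = simulate(rng.normal(0.0, 1.0, n_ref))
x_foc = simulate(rng.normal(-1.0, 1.0, n_foc))

def std_p_dif(x_ref, x_foc):
    """Standardization-style DIF index: focal-weighted mean of conditional
    p-value differences, matching the groups on rest score."""
    dif = np.zeros(x_ref.shape[1])
    for i in range(x_ref.shape[1]):
        rest_ref = x_ref.sum(axis=1) - x_ref[:, i]
        rest_foc = x_foc.sum(axis=1) - x_foc[:, i]
        num = den = 0.0
        for s in np.unique(rest_foc):
            ref_mask, foc_mask = rest_ref == s, rest_foc == s
            if not ref_mask.any():
                continue  # no reference examinees at this rest-score stratum
            w = foc_mask.sum()
            num += w * (x_foc[foc_mask, i].mean() - x_ref[ref_mask, i].mean())
            den += w
        dif[i] = num / den
    return dif

dif_index = std_p_dif(x_ref, x_foc)
r = np.corrcoef(dif_index, b)[0, 1]
print(f"Correlation between DIF index and item difficulty: {r:.3f}")
```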
Albano, Anthony D.; Cai, Liuhan; Lease, Erin M.; McConnell, Scott R. – Journal of Educational Measurement, 2019
Studies have shown that item difficulty can vary significantly based on the context of an item within a test form. In particular, item position may be associated with practice and fatigue effects that influence item parameter estimation. The purpose of this research was to examine the relevance of item position specifically for assessments used in…
Descriptors: Test Items, Computer Assisted Testing, Item Analysis, Difficulty Level
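For readers unfamiliar with how item position effects are usually formalized, one common parameterization (hypothetical here, not necessarily the one used in this study) extends the Rasch model with a linear position term so that an item's effective difficulty drifts with the position at which it is administered:

$$\Pr(X_{pi}=1 \mid \theta_p) = \frac{\exp\!\left(\theta_p - \beta_i - \delta \,\mathrm{pos}_{pi}\right)}{1 + \exp\!\left(\theta_p - \beta_i - \delta \,\mathrm{pos}_{pi}\right)},$$

where $\mathrm{pos}_{pi}$ is the (possibly centered) position of item $i$ for person $p$; a positive $\delta$ corresponds to items becoming effectively harder later in the form (fatigue), a negative $\delta$ to items becoming easier (practice).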
Schroeders, Ulrich; Robitzsch, Alexander; Schipolowski, Stefan – Journal of Educational Measurement, 2014
C-tests are a specific variant of cloze tests that are considered time-efficient, valid indicators of general language proficiency. They are commonly analyzed with models of item response theory assuming local item independence. In this article we estimated local interdependencies for 12 C-tests and compared the changes in item difficulties,…
Descriptors: Comparative Analysis, Psychometrics, Cloze Procedure, Language Tests
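As context for the local-dependence analysis mentioned in this abstract, here is a minimal sketch of one standard diagnostic, Yen's Q3 (residual correlations after fitting a Rasch model). The calibration step and data below are placeholders for illustration, not the procedure used in the article.

```python
import numpy as np

def q3_matrix(responses, theta_hat, b_hat):
    """Yen's Q3: pairwise correlations of item residuals after removing the
    Rasch-model-implied probabilities. Large off-diagonal values suggest
    local item dependence (e.g., gaps from the same C-test passage).
    `theta_hat` and `b_hat` are estimates obtained from any Rasch calibration."""
    p = 1.0 / (1.0 + np.exp(-(theta_hat[:, None] - b_hat[None, :])))
    resid = responses - p
    return np.corrcoef(resid, rowvar=False)

# Toy usage with simulated Rasch data (no real C-test involved).
rng = np.random.default_rng(7)
theta = rng.normal(size=500)
b = rng.normal(size=20)
p_true = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
x = (rng.uniform(size=p_true.shape) < p_true).astype(int)

q3 = q3_matrix(x, theta, b)  # true parameters stand in for estimates here
off_diag = q3[np.triu_indices_from(q3, k=1)]
print(f"Mean off-diagonal Q3: {off_diag.mean():.3f}")
```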
Jiao, Hong; Wang, Shudong; He, Wei – Journal of Educational Measurement, 2013
This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…
Descriptors: Computation, Item Response Theory, Models, Monte Carlo Methods
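For reference, the Rasch testlet model mentioned in this abstract is commonly written with an additional person-specific testlet effect; in standard notation (which may differ in detail from the article's):

$$\Pr(X_{pi}=1 \mid \theta_p, \gamma_{p\,d(i)}) = \frac{\exp\!\left(\theta_p - b_i + \gamma_{p\,d(i)}\right)}{1 + \exp\!\left(\theta_p - b_i + \gamma_{p\,d(i)}\right)}, \qquad \gamma_{p\,d(i)} \sim N\!\left(0, \sigma^2_{d(i)}\right),$$

where $d(i)$ indexes the testlet containing item $i$ and $\gamma_{p\,d(i)}$ captures person-specific testlet effects beyond the general proficiency $\theta_p$.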
Bielinski, John; Davison, Mark L. – Journal of Educational Measurement, 2001
Used mathematics achievement data from the 1992 National Assessment of Educational Progress, the Third International Mathematics and Science Study, and the National Education Longitudinal Study of 1988 to examine the sex difference by item difficulty interaction. The predicted negative correlation was found for all eight populations and was…
Descriptors: Correlation, Difficulty Level, Interaction, Mathematics Tests

