Merhof, Viola; Böhm, Caroline M.; Meiser, Thorsten – Educational and Psychological Measurement, 2024
Item response tree (IRTree) models are a flexible framework to control self-reported trait measurements for response styles. To this end, IRTree models decompose the responses to rating items into sub-decisions, which are assumed to be made on the basis of either the trait being measured or a response style, whereby the effects of such person…
Descriptors: Item Response Theory, Test Interpretation, Test Reliability, Test Validity
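The decomposition of rating responses into sub-decisions described above can be illustrated with the common three-node tree for 5-point items (midpoint, direction, extremity). The mapping below is a standard textbook example of that tree, not the authors' exact model:

```python
def irtree_decompose(response):
    """Map a 5-point rating (1-5) to the pseudo-items of a three-node
    IRTree: midpoint (is the response the middle category?), direction
    (agree vs. disagree side), and extremity (endpoint vs. moderate).
    None marks a node that is not reached on that branch.
    Illustrative midpoint/direction/extremity tree, not taken from the
    article itself."""
    if response == 3:                         # middle category: only node 1 fires
        return (1, None, None)
    direction = 1 if response > 3 else 0      # node 2: agree side?
    extreme = 1 if response in (1, 5) else 0  # node 3: endpoint category?
    return (0, direction, extreme)
```

Each pseudo-item can then be modeled with its own IRT model, so that, for example, the extremity node absorbs an extreme response style while the direction node carries the trait of interest.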
Lee, Minji K.; Sweeney, Kevin; Melican, Gerald J. – Educational Assessment, 2017
This study investigates the relationships among factor correlations, inter-item correlations, and the reliability estimates of subscores, providing a guideline with respect to psychometric properties of useful subscores. In addition, it compares subscore estimation methods with respect to reliability and distinctness. The subscore estimation…
Descriptors: Scores, Test Construction, Test Reliability, Test Validity
Öztürk-Gübes, Nese; Kelecioglu, Hülya – Educational Sciences: Theory and Practice, 2016
The purpose of this study was to examine the impact of dimensionality, common-item set format, and different scale linking methods on preserving equity property with mixed-format test equating. Item response theory (IRT) true-score equating (TSE) and IRT observed-score equating (OSE) methods were used under common-item nonequivalent groups design.…
Descriptors: Test Format, Item Response Theory, True Scores, Equated Scores
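IRT true-score equating, named in the entry above, works through the test characteristic curve (TCC): a raw score on one form is mapped to the ability at which that form's TCC equals the score, and that ability is then passed through the other form's TCC. A minimal sketch of the TCC under the 3PL model (item parameter values in the test are made up for illustration):

```python
import math

def tcc(theta, items):
    """Test characteristic curve: expected number-correct true score at
    ability theta under the 3PL model. items is a list of
    (discrimination a, difficulty b, guessing c) tuples; 1.7 is the
    conventional scaling constant."""
    return sum(c + (1.0 - c) / (1.0 + math.exp(-1.7 * a * (theta - b)))
               for a, b, c in items)

# True-score equating inverts one form's TCC at an observed score and
# evaluates the other form's TCC at the resulting theta; observed-score
# equating instead integrates score distributions over an assumed
# ability distribution.
```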
Li, Ying; Jiao, Hong; Lissitz, Robert W. – Journal of Applied Testing Technology, 2012
This study investigated the application of multidimensional item response theory (IRT) models to validate test structure and dimensionality. Multiple content areas or domains within a single subject often exist in large-scale achievement tests. Such areas or domains may cause multidimensionality or local item dependence, which both violate the…
Descriptors: Achievement Tests, Science Tests, Item Response Theory, Measures (Individuals)
Kong, Xiaojing J.; Wise, Steven L.; Bhola, Dennison S. – Educational and Psychological Measurement, 2007
This study compared four methods for setting item response time thresholds to differentiate rapid-guessing behavior from solution behavior. Thresholds were either (a) common for all test items, (b) based on item surface features such as the amount of reading required, (c) based on visually inspecting response time frequency distributions, or (d)…
Descriptors: Test Items, Reaction Time, Timed Tests, Item Response Theory
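Method (a) above, a single threshold common to all items, is the simplest to sketch: response times below the threshold are flagged as rapid guessing, and the proportion of unflagged responses gives a response-time effort index. The 3-second default below is an assumed illustrative value, not a threshold from the study:

```python
def response_time_effort(times_sec, threshold=3.0):
    """Classify each item response time (seconds) as rapid guessing
    (below a common threshold) or solution behavior, and return the
    rapid-guess flags together with the proportion of solution
    behavior (an effort index). threshold=3.0 is illustrative."""
    rapid = [t < threshold for t in times_sec]
    effort = 1.0 - sum(rapid) / len(rapid)
    return rapid, effort
```

The item-feature and visual-inspection methods in the abstract refine this by letting the threshold vary per item rather than holding it constant.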
Ackerman, Terry A. – 1991
Many researchers have suggested that the main cause of item bias is the misspecification of the latent ability space. That is, items that measure multiple abilities are scored as though they are measuring a single ability. If two different groups of examinees have different underlying multidimensional ability distributions and the test items are…
Descriptors: Equations (Mathematics), Item Bias, Item Response Theory, Mathematical Models
DeMars, Christine E. – Online Submission, 2005
Several methods for estimating item response theory scores for multiple subtests were compared. These methods included two multidimensional item response theory models: a bi-factor model where each subtest was a composite score based on the primary trait measured by the set of tests and a secondary trait measured by the individual subtest, and a…
Descriptors: Item Response Theory, Multidimensional Scaling, Correlation, Scoring Rubrics
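The bi-factor structure described in this entry gives each item a loading on a general trait shared by all subtests plus a loading on one subtest-specific trait. A minimal 2PL-style response function under that structure (all parameter names and values are illustrative, not from the paper):

```python
import math

def bifactor_prob(theta_g, theta_s, a_g, a_s, d):
    """Probability of a correct response under a bi-factor 2PL-type
    model: a_g loads the general trait theta_g, a_s loads the item's
    subtest-specific trait theta_s, and d is an intercept.
    Illustrative sketch of the model family, not the study's exact
    parameterization."""
    z = a_g * theta_g + a_s * theta_s + d
    return 1.0 / (1.0 + math.exp(-z))
```

Subtest scores can then be based on the specific traits, while the general trait absorbs the correlation among subtests.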
Davison, Mark L.; Chang, Yu-Wen – 1992
A two-dimensional, compensatory item response model and a unidimensional model were fitted to the reading and mathematics items in the Woodcock-Johnson Psycho-Educational Battery-Revised for a sample of 1,000 adults aged 20-39 years. Multidimensional item response theory predicts that if the unidimensional abilities can be represented as vectors in…
Descriptors: Achievement Tests, Adults, Equations (Mathematics), Error of Measurement
