| Publication Date | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 1 |
| Since 2022 (last 5 years) | 3 |
| Since 2017 (last 10 years) | 9 |

| Descriptor | Count |
| --- | --- |
| Test Format | 9 |
| Test Items | 6 |
| Foreign Countries | 4 |
| Item Response Theory | 4 |
| Comparative Analysis | 3 |
| Computer Assisted Testing | 3 |
| Test Validity | 3 |
| Culture Fair Tests | 2 |
| Error of Measurement | 2 |
| High Stakes Tests | 2 |
| Models | 2 |

| Source | Count |
| --- | --- |
| International Journal of Testing | 9 |

| Publication Type | Count |
| --- | --- |
| Journal Articles | 9 |
| Reports - Research | 9 |

| Education Level | Count |
| --- | --- |
| Higher Education | 4 |
| Postsecondary Education | 3 |

| Location | Count |
| --- | --- |
| China | 1 |
| Germany | 1 |
| Greece | 1 |
| Ireland (Dublin) | 1 |
| Israel | 1 |
| South Korea | 1 |

Laws, Policies, & Programs
Assessments and Surveys
What Works Clearinghouse Rating
Kim, Sohee; Cole, Ki Lynn – International Journal of Testing, 2025
This study conducted a comprehensive comparison of Item Response Theory (IRT) linking methods applied to a bifactor model, examining their performance on both multiple-choice (MC) and mixed-format tests within the common-item nonequivalent groups design framework. Four distinct multidimensional IRT linking approaches were explored, consisting of…
Descriptors: Item Response Theory, Comparative Analysis, Models, Item Analysis
Kim, Kyung Yong; Lim, Euijin; Lee, Won-Chan – International Journal of Testing, 2019
For passage-based tests, items that belong to a common passage often violate the local independence assumption of unidimensional item response theory (UIRT). In this case, ignoring local item dependence (LID) and estimating item parameters using a UIRT model could be problematic because doing so might result in inaccurate parameter estimates,…
Descriptors: Item Response Theory, Equated Scores, Test Items, Models
Shin, Jinnie; Gierl, Mark J. – International Journal of Testing, 2022
Over the last five years, tremendous strides have been made in advancing the automated item generation (AIG) methodology required to produce items in diverse content areas. However, the one content area where enormous problems remain unsolved is language arts generally, and reading comprehension more specifically. While reading comprehension test items can be created using…
Descriptors: Reading Comprehension, Test Construction, Test Items, Natural Language Processing
Magraw-Mickelson, Zoe; Wang, Harry H.; Gollwitzer, Mario – International Journal of Testing, 2022
Much psychological research depends on participants' diligence in filling out materials such as surveys. However, not all participants are motivated to respond attentively, which leads to unintended issues with data quality, known as careless responding. Our question is: how do different modes of data collection--paper/pencil, computer/web-based,…
Descriptors: Response Style (Tests), Surveys, Data Collection, Test Format
Moon, Jung Aa; Sinharay, Sandip; Keehner, Madeleine; Katz, Irvin R. – International Journal of Testing, 2020
The current study examined the relationship between test-taker cognition and psychometric item properties in multiple-selection multiple-choice and grid items. In a study with content-equivalent mathematics items in alternative item formats, adult participants' tendency to respond to an item was affected by the presence of a grid and variations of…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Test Wiseness, Psychometrics
FIPC Linking across Multidimensional Test Forms: Effects of Confounding Difficulty within Dimensions
Kim, Sohee; Cole, Ki Lynn; Mwavita, Mwarumba – International Journal of Testing, 2018
This study investigated the effects of linking potentially multidimensional test forms using the fixed item parameter calibration. Forms had equal or unequal total test difficulty with and without confounding difficulty. The mean square errors and bias of estimated item and ability parameters were compared across the various confounding tests. The…
Descriptors: Test Items, Item Response Theory, Test Format, Difficulty Level
Karakolidis, Anastasios; O'Leary, Michael; Scully, Darina – International Journal of Testing, 2021
The linguistic complexity of many text-based tests can be a source of construct-irrelevant variance, as test-takers' performance may be affected by factors that are beyond the focus of the assessment itself, such as reading comprehension skills. This experimental study examined the extent to which the use of animated videos, as opposed to written…
Descriptors: Animation, Vignettes, Video Technology, Test Format
Martin-Raugh, Michelle P.; Anguiano-Carrasco, Cristina; Jackson, Teresa; Brenneman, Meghan W.; Carney, Lauren; Barnwell, Patrick; Kochert, Jonathan – International Journal of Testing, 2018
Single-response situational judgment tests (SRSJTs) differ from multiple-response SJTs (MRSJTs) in that they present test takers with edited critical incidents and simply ask them to read over the action described and evaluate its effectiveness. Research comparing the reliability and validity of SRSJTs and MRSJTs is thus far…
Descriptors: Test Format, Test Reliability, Test Validity, Predictive Validity
Moshinsky, Avital; Ziegler, David; Gafni, Naomi – International Journal of Testing, 2017
Many medical schools have adopted multiple mini-interviews (MMIs) as an advanced selection tool. MMIs are expensive and are used to test only a few dozen candidates per day, making it infeasible to develop a different test version for each test administration. Therefore, some items are reused both within and across years. This study investigated the…
Descriptors: Interviews, Medical Schools, Test Validity, Test Reliability

