Publication Date
In 2025: 3
Since 2024: 16
Since 2021 (last 5 years): 56
Since 2016 (last 10 years): 100
Since 2006 (last 20 years): 183
Descriptor
Item Response Theory: 271
Test Format: 271
Test Items: 148
Test Construction: 64
Comparative Analysis: 60
Foreign Countries: 60
Equated Scores: 56
Multiple Choice Tests: 56
Difficulty Level: 50
Scores: 48
Computer Assisted Testing: 46
Location
Turkey: 8
Germany: 7
Australia: 5
Canada: 5
Indonesia: 4
Netherlands: 3
United Kingdom: 3
Florida: 2
Hong Kong: 2
Illinois: 2
Iowa: 2
Sohee Kim; Ki Lynn Cole – International Journal of Testing, 2025
This study conducted a comprehensive comparison of Item Response Theory (IRT) linking methods applied to a bifactor model, examining their performance on both multiple choice (MC) and mixed format tests within the common item nonequivalent group design framework. Four distinct multidimensional IRT linking approaches were explored, consisting of…
Descriptors: Item Response Theory, Comparative Analysis, Models, Item Analysis
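As background for readers new to IRT linking: when two forms are calibrated separately, common-item parameter estimates are used to find a linear transformation of the θ scale. The sketch below shows mean/sigma linking for a unidimensional 2PL with hypothetical parameter arrays; it is meant only for orientation and is not one of the bifactor linking methods the study compares.

```python
import numpy as np

def mean_sigma_link(a_new, b_new, b_old):
    """Mean/sigma linking for the 2PL: find A, B such that
    theta_old = A * theta_new + B, using common-item difficulty
    estimates from both calibrations. Transformed parameters:
    a* = a / A, b* = A * b + B."""
    A = np.std(b_old, ddof=1) / np.std(b_new, ddof=1)  # scale constant
    B = np.mean(b_old) - A * np.mean(b_new)            # location constant
    return a_new / A, A * b_new + B, (A, B)

# Hypothetical common-item estimates from two separate calibrations
a_new = np.array([1.2, 0.8, 1.5, 1.0])
b_new = np.array([-0.5, 0.3, 1.1, -1.2])
b_old = np.array([-0.3, 0.6, 1.4, -1.0])

a_star, b_star, (A, B) = mean_sigma_link(a_new, b_new, b_old)
print(f"A = {A:.3f}, B = {B:.3f}")
```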
Jianbin Fu; Xuan Tan; Patrick C. Kyllonen – Journal of Educational Measurement, 2024
This paper presents the item and test information functions of the Rank two-parameter logistic models (Rank-2PLM) for items with two (pair) and three (triplet) statements in forced-choice questionnaires. The Rank-2PLM model for pairs is the MUPP-2PLM (Multi-Unidimensional Pairwise Preference) and, for triplets, is the Triplet-2PLM. Fisher's…
Descriptors: Questionnaires, Test Items, Item Response Theory, Models
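For context, the Fisher information the abstract refers to has a familiar closed form in the binary 2PL case; the Rank-2PLM item and test information functions generalize this idea to ranked pairs and triplets of statements. Standard 2PL expressions, for reference:

```latex
% 2PL response function, item information, and test information
% (standard results, not the Rank-2PLM derivations in the paper)
P_j(\theta) = \frac{1}{1 + \exp\{-a_j(\theta - b_j)\}}, \qquad
I_j(\theta) = a_j^2 \, P_j(\theta)\bigl(1 - P_j(\theta)\bigr), \qquad
I(\theta) = \sum_j I_j(\theta)
```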
Uk Hyun Cho – ProQuest LLC, 2024
The present study investigates the influence of multidimensionality on linking and equating under a unidimensional IRT framework. Two hypothetical multidimensional scenarios are explored under a nonequivalent group common-item equating design. The first scenario examines test forms designed to measure multiple constructs, while the second scenario examines a…
Descriptors: Item Response Theory, Classification, Correlation, Test Format
Monica Casella; Pasquale Dolce; Michela Ponticorvo; Nicola Milano; Davide Marocco – Educational and Psychological Measurement, 2024
Short-form development is an important topic in psychometric research, which requires researchers to face methodological choices at different steps. The statistical techniques traditionally used for shortening tests, which belong to the so-called exploratory model, make assumptions not always verified in psychological data. This article proposes a…
Descriptors: Artificial Intelligence, Test Construction, Test Format, Psychometrics
Nana Kim; Daniel M. Bolt – Journal of Educational and Behavioral Statistics, 2024
Some previous studies suggest that response times (RTs) on rating scale items can be informative about the content trait, but a more recent study suggests they may also be reflective of response styles. The latter result raises questions about the possible consideration of RTs for content trait estimation, as response styles are generally viewed…
Descriptors: Item Response Theory, Reaction Time, Response Style (Tests), Psychometrics
Choe, Edison M.; Han, Kyung T. – Journal of Educational Measurement, 2022
In operational testing, item response theory (IRT) models for dichotomous responses are popular for measuring a single latent construct θ, such as cognitive ability in a content domain. Estimates of θ, also called IRT scores or θ̂, can be computed using estimators based on the likelihood function, such as maximum likelihood…
Descriptors: Scores, Item Response Theory, Test Items, Test Format
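A minimal sketch of likelihood-based θ estimation for dichotomous responses under the 2PL, using hypothetical item parameters; maximum likelihood is one of the estimators the abstract mentions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical calibrated 2PL item parameters and one response pattern
a = np.array([1.0, 1.4, 0.7, 1.1, 0.9])    # discriminations
b = np.array([-1.0, -0.2, 0.4, 0.9, 1.5])  # difficulties
x = np.array([1, 1, 1, 0, 0])              # observed 0/1 responses

def neg_log_lik(theta):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return -np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

res = minimize_scalar(neg_log_lik, bounds=(-4, 4), method="bounded")
print(f"theta_hat = {res.x:.3f}")  # ML estimate of theta
```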
Chunyan Liu; Raja Subhiyah; Richard A. Feinberg – Applied Measurement in Education, 2024
Mixed-format tests that include both multiple-choice (MC) and constructed-response (CR) items have become widely used in many large-scale assessments. When an item response theory (IRT) model is used to score a mixed-format test, the unidimensionality assumption may be violated if the CR items measure a different construct from that measured by MC…
Descriptors: Test Format, Response Style (Tests), Multiple Choice Tests, Item Response Theory
Jianbin Fu; Patrick C. Kyllonen; Xuan Tan – Measurement: Interdisciplinary Research and Perspectives, 2024
Users of forced-choice questionnaires (FCQs) to measure personality commonly assume statement parameter invariance across contexts: between Likert and forced-choice (FC) items, and between different FC items that share a common statement. In this paper, an empirical study was designed to check these two assumptions for an FCQ assessment measuring…
Descriptors: Measurement Techniques, Questionnaires, Personality Measures, Interpersonal Competence
Pentecost, Thomas C.; Raker, Jeffery R.; Murphy, Kristen L. – Practical Assessment, Research & Evaluation, 2023
Using multiple versions of an assessment has the potential to introduce item environment effects. These types of effects result in version-dependent item characteristics (i.e., difficulty and discrimination). Methods to detect such effects, and their resulting implications, are important for all levels of assessment where multiple forms of an assessment…
Descriptors: Item Response Theory, Test Items, Test Format, Science Tests
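One simple screen for version-dependent item characteristics (not necessarily the authors' method) is a z-test on the difference between an item's difficulty estimates from separate calibrations of each version, using their standard errors. A sketch with hypothetical estimates:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical difficulty estimates (and standard errors) for the
# same item calibrated separately on two test versions
b_v1, se_v1 = 0.42, 0.08
b_v2, se_v2 = 0.71, 0.09

z = (b_v2 - b_v1) / np.sqrt(se_v1**2 + se_v2**2)
p = 2 * (1 - norm.cdf(abs(z)))
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p flags a version effect
```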
Xueliang Chen; Vahid Aryadoust; Wenxin Zhang – Language Testing, 2025
The growing diversity among test takers in second or foreign language (L2) assessments puts fairness front and center. This systematic review aimed to examine how fairness in L2 assessments was evaluated through differential item functioning (DIF) analysis. A total of 83 articles from 27 journals were included in a systematic…
Descriptors: Second Language Learning, Language Tests, Test Items, Item Analysis
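For readers unfamiliar with DIF analysis, a common screening technique (logistic regression DIF) tests whether group membership predicts item performance after conditioning on a matching variable such as total score. A self-contained sketch on simulated data, not tied to any study in the review:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)    # 0 = reference, 1 = focal
total = rng.normal(0, 1, n)      # matching variable (e.g., rest score)
# Simulate uniform DIF: the item is harder for the focal group
logit = 1.2 * total - 0.5 - 0.6 * group
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(np.column_stack([total, group]))
fit = sm.Logit(y, X).fit(disp=0)
print(fit.params)   # coefficient on group indicates uniform DIF
print(fit.pvalues)
```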
Cornelia Eva Neuert – Sociological Methods & Research, 2024
The quality of data in surveys is affected by response burden and questionnaire length. With an increasing number of questions, respondents can become bored, tired, and annoyed and may take shortcuts to reduce the effort needed to complete the survey. In this article, direct evidence is presented on how the position of items within a web…
Descriptors: Online Surveys, Test Items, Test Format, Test Construction
Yang Du; Susu Zhang – Journal of Educational and Behavioral Statistics, 2025
Item compromise has long posed challenges in educational measurement, jeopardizing both test validity and test security of continuous tests. Detecting compromised items is therefore crucial to address this concern. The present literature on compromised item detection reveals two notable gaps: First, the majority of existing methods are based upon…
Descriptors: Item Response Theory, Item Analysis, Bayesian Statistics, Educational Assessment
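As a toy illustration of the detection problem (deliberately simpler than the Bayesian approach the paper develops), one can monitor an item's correct-response rate across administration windows and flag a sudden upward drift with a one-sided two-proportion z-test:

```python
import numpy as np
from scipy.stats import norm

def flag_compromise(hist_correct, hist_n, recent_correct, recent_n,
                    alpha=0.01):
    """Two-proportion z-test: has the item's correct-response rate
    drifted upward in the recent window relative to history?"""
    p1 = hist_correct / hist_n
    p2 = recent_correct / recent_n
    p_pool = (hist_correct + recent_correct) / (hist_n + recent_n)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / hist_n + 1 / recent_n))
    z = (p2 - p1) / se
    return z > norm.ppf(1 - alpha), z  # one-sided: easier than before

flagged, z = flag_compromise(hist_correct=600, hist_n=1000,
                             recent_correct=170, recent_n=200)
print(flagged, round(z, 2))
```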
Kunz, Tanja; Meitinger, Katharina – Field Methods, 2022
Although list-style open-ended questions generally help us gain deeper insights into respondents' thoughts, opinions, and behaviors, the quality of responses is often compromised. We tested a dynamic and a follow-up design to motivate respondents to give higher quality responses than with a static design, but without overburdening them. Our…
Descriptors: Online Surveys, Item Response Theory, Test Items, Test Format
Practical Considerations in Choosing an Anchor Test Form for Equating under the Random Groups Design
Cui, Zhongmin; He, Yong – Measurement: Interdisciplinary Research and Perspectives, 2023
Careful considerations are necessary when there is a need to choose an anchor test form from a list of old test forms for equating under the random groups design. The choice of the anchor form potentially affects the accuracy of equated scores on new test forms. Few guidelines, however, can be found in the literature on choosing the anchor form.…
Descriptors: Test Format, Equated Scores, Best Practices, Test Construction
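For context on the random groups design: randomly equivalent groups take different forms, so scores can be equated directly from the two observed score distributions, e.g., by equipercentile equating. A minimal sketch on simulated number-correct scores (illustrative only; it ignores smoothing and continuity corrections and is not the anchor-form procedure discussed in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.binomial(60, 0.55, 5000)  # scores on new Form X
y = rng.binomial(60, 0.60, 5000)  # scores on old Form Y (easier)

def equipercentile(x_scores, y_scores, score):
    """Map a Form X score to the Form Y score with the same
    percentile rank."""
    pr = np.mean(x_scores <= score)   # percentile rank on X
    return np.quantile(y_scores, pr)  # matching Y score

for s in (25, 33, 40):
    print(s, "->", round(float(equipercentile(x, y, s)), 1))
```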
Cuhadar, Ismail; Binici, Salih – Educational Measurement: Issues and Practice, 2022
This study employs the 4-parameter logistic item response theory model to account for the unexpected incorrect responses, or slipping effects, observed in a large-scale Algebra 1 End-of-Course assessment that includes several innovative item formats. It investigates whether modeling the misfit at the upper asymptote has any practical impact on the…
Descriptors: Item Response Theory, Measurement, Student Evaluation, Algebra
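For reference, the 4-parameter logistic (4PL) model adds an upper asymptote d < 1 to the 3PL, so even high-ability examinees retain a nonzero probability of an unexpected incorrect response (slipping):

```latex
% 4PL item response function: c_j = lower asymptote (guessing),
% d_j = upper asymptote (1 - d_j reflects slipping)
P(X_{ij} = 1 \mid \theta_i) = c_j + (d_j - c_j)\,
  \frac{1}{1 + \exp\{-a_j(\theta_i - b_j)\}}
```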