Publication Date
  In 2025 | 45
  Since 2024 | 227
  Since 2021 (last 5 years) | 866
Descriptor
  Item Response Theory | 866
  Test Items | 334
  Foreign Countries | 282
  Psychometrics | 165
  Models | 158
  Test Validity | 150
  Test Reliability | 149
  Item Analysis | 145
  Scores | 129
  Test Construction | 120
  Difficulty Level | 107
Author
  Chun Wang | 12
  Gongjun Xu | 11
  A. Corinne Huggins-Manley | 6
  Joshua B. Gilbert | 6
  Lee, Won-Chan | 6
  Stefanie A. Wind | 6
  Aybek, Eren Can | 5
  Luke W. Miratrix | 5
  Sun-Joo Cho | 5
  Wind, Stefanie A. | 5
  Amanda Goodwin | 4
Audience
  Practitioners | 4
  Researchers | 4
Location
  Indonesia | 35
  Turkey | 29
  Germany | 15
  Malaysia | 13
  United States | 13
  China | 12
  South Korea | 11
  Japan | 10
  Australia | 9
  Florida | 9
  Iran | 8
Gerhard Tutz; Pascal Jordan – Journal of Educational and Behavioral Statistics, 2024
A general framework of latent trait item response models for continuous responses is given. In contrast to classical test theory (CTT) models, which traditionally decompose observed responses into true scores and error scores, the responses here are explicitly linked to latent traits. It is shown that CTT models can be derived as special cases, but the model class is…
Descriptors: Item Response Theory, Responses, Scores, Models
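As a point of reference for the framework sketched in the abstract above, here is one common form of a linear latent-trait model for continuous responses, in assumed notation (the paper's actual parameterization may differ):

\[
X_{pi} = \mu_i + \lambda_i \theta_p + \varepsilon_{pi}, \qquad \varepsilon_{pi} \sim N(0, \sigma_i^2),
\]

where \(\theta_p\) is the latent trait of person \(p\) and \(\lambda_i\) the loading of item \(i\). Setting \(\mu_i = 0\) and \(\lambda_i = 1\) for all items recovers the CTT decomposition \(X_{pi} = T_p + E_{pi}\) with true score \(T_p = \theta_p\), which illustrates how CTT models arise as special cases.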
Stefanie A. Wind; Beyza Aksu-Dunya – Applied Measurement in Education, 2024
Careless responding is a pervasive concern in research using affective surveys. Although researchers have considered various methods for identifying careless responses, few studies have considered the utility of these methods in the context of computer adaptive testing (CAT) for affective scales. Using a simulation study informed by recent…
Descriptors: Response Style (Tests), Computer Assisted Testing, Adaptive Testing, Affective Measures
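A minimal sketch of how careless responding is typically injected into simulated survey data in studies of this kind; the sample sizes, careless rate, and uniform-random response mechanism are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

n_persons, n_items, n_cats = 500, 20, 5   # hypothetical design values
careless_rate = 0.10                      # assumed share of careless respondents

# Placeholder "attentive" responses; a real study would generate these
# from a polytomous IRT model (e.g., the graded response model).
responses = rng.integers(0, n_cats, size=(n_persons, n_items))

# Flag a random subset of respondents as careless and overwrite their
# response vectors with uniform random category choices.
careless = rng.random(n_persons) < careless_rate
responses[careless] = rng.integers(0, n_cats, size=(careless.sum(), n_items))

print(f"{careless.sum()} of {n_persons} simulated respondents are careless")
```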
Embretson, Susan – Large-scale Assessments in Education, 2023
Understanding the cognitive processes, skills, and strategies that examinees use in testing is important for construct validity and score interpretability. Although response process evidence has long been included as an important aspect of validity (see the "Standards for Educational and Psychological Testing," 1999), relevant studies are…
Descriptors: Cognitive Processes, Test Validity, Item Response Theory, Test Wiseness
Yue Liu; Zhen Li; Hongyun Liu; Xiaofeng You – Applied Measurement in Education, 2024
Low test-taking effort of examinees has been considered a source of construct-irrelevant variance in item response modeling, with serious consequences for parameter estimation. This study aims to investigate how non-effortful response (NER) influences the estimation of item and person parameters in item-pool scale linking (IPSL) and whether…
Descriptors: Item Response Theory, Computation, Simulation, Responses
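One common operationalization of non-effortful responding (not necessarily the one used by these authors) is rapid-guessing detection by a response-time threshold. A sketch, with the threshold value as an assumed placeholder:

```python
import numpy as np

def flag_noneffortful(resp_times, threshold=3.0):
    """Flag responses faster than a fixed time threshold (in seconds) as
    non-effortful. The threshold here is a placeholder; applied work often
    derives item-specific thresholds from the response-time distribution."""
    return np.asarray(resp_times) < threshold

# Per-person proportion of flagged responses, a common effort summary.
times = np.array([[1.2, 8.5, 2.1, 14.0], [9.3, 7.8, 11.2, 6.4]])
print(flag_noneffortful(times).mean(axis=1))  # -> [0.5, 0.0]
```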
Nana Kim; Daniel M. Bolt – Journal of Educational and Behavioral Statistics, 2024
Some previous studies suggest that response times (RTs) on rating scale items can be informative about the content trait, but a more recent study indicates they may also reflect response styles. The latter result raises questions about the use of RTs in content trait estimation, as response styles are generally viewed…
Descriptors: Item Response Theory, Reaction Time, Response Style (Tests), Psychometrics
Jochen Ranger; Christoph König; Benjamin W. Domingue; Jörg-Tobias Kuhn; Andreas Frey – Journal of Educational and Behavioral Statistics, 2024
In the existing multidimensional extensions of the log-normal response time (LNRT) model, the log response times are decomposed into a linear combination of several latent traits. These models are fully compensatory, as low levels on some traits can be counterbalanced by high levels on others. We propose an alternative multidimensional extension…
Descriptors: Models, Statistical Distributions, Item Response Theory, Response Rates (Questionnaires)
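For reference, a sketch of the unidimensional LNRT model and the fully compensatory multidimensional decomposition the abstract refers to, in assumed notation:

\[
\log T_{pi} = \beta_i - \tau_p + \varepsilon_{pi}, \qquad \varepsilon_{pi} \sim N(0, \sigma_i^2),
\]

with time intensity \(\beta_i\) of item \(i\) and speed \(\tau_p\) of person \(p\). The multidimensional extension replaces the single speed parameter by a linear combination, \(\log T_{pi} = \beta_i - \sum_{d} a_{id}\,\tau_{pd} + \varepsilon_{pi}\), so a low \(\tau_{pd}\) on one dimension can be offset by a high value on another; this offsetting is exactly what makes the model compensatory.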
Hung-Yu Huang – Educational and Psychological Measurement, 2025
The use of discrete categorical formats to assess psychological traits has a long-standing tradition that is deeply embedded in item response theory models. The increasing prevalence and endorsement of computer- or web-based testing has led to greater focus on continuous response formats, which offer numerous advantages in both respondent…
Descriptors: Response Style (Tests), Psychological Characteristics, Item Response Theory, Test Reliability
Marko Malikovic; Marko Toncic – International Journal of Social Research Methodology, 2023
In web questionnaires created in a paging design, where each question appears on a separate page, a progress indicator informs the respondent of their current position within the questionnaire. Linear progress indicators are commonly used, and fast-to-slow progress indicators are sometimes used for research purposes.…
Descriptors: Online Surveys, Internet, Response Rates (Questionnaires), Dropout Rate
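A minimal sketch of the two indicator types the abstract above contrasts; the concave power transform and its exponent are assumptions for illustration, not taken from the paper.

```python
def linear_progress(page, n_pages):
    """Progress shown proportional to pages completed."""
    return page / n_pages

def fast_to_slow_progress(page, n_pages, exponent=0.5):
    """Progress that advances quickly at first and slows toward the end;
    any concave transform works, a power function is used here."""
    return (page / n_pages) ** exponent

for p in (5, 10, 15, 20):
    print(p, round(linear_progress(p, 20), 2),
          round(fast_to_slow_progress(p, 20), 2))
```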
Anirudhan Badrinath; Zachary Pardos – Journal of Educational Data Mining, 2025
Bayesian Knowledge Tracing (BKT) is a well-established model for formative assessment, with parameters typically optimized using expectation maximization, conjugate gradient descent, or brute-force search. However, one flaw of existing optimization techniques for BKT models is convergence to undesirable local minima that negatively impact…
Descriptors: Bayesian Statistics, Intelligent Tutoring Systems, Problem Solving, Audience Response Systems
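For context, the standard BKT posterior-and-transition update that sits inside any of the optimization loops the abstract mentions (EM, gradient descent, grid search); the parameter values below are illustrative, not estimates from the paper.

```python
def bkt_update(p_know, correct, slip, guess, learn):
    """One Bayesian Knowledge Tracing step: condition the mastery
    probability on the observed response, then apply the learning
    transition."""
    if correct:
        evidence = p_know * (1 - slip) + (1 - p_know) * guess
        posterior = p_know * (1 - slip) / evidence
    else:
        evidence = p_know * slip + (1 - p_know) * (1 - guess)
        posterior = p_know * slip / evidence
    return posterior + (1 - posterior) * learn

p = 0.3                                   # illustrative prior P(L0)
for obs in (1, 1, 0, 1):                  # observed correct/incorrect sequence
    p = bkt_update(p, obs, slip=0.1, guess=0.2, learn=0.15)
print(round(p, 3))
```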
Martijn Schoenmakers; Jesper Tijmstra; Jeroen Vermunt; Maria Bolsinova – Educational and Psychological Measurement, 2024
Extreme response style (ERS), the tendency of participants to select extreme item categories regardless of the item content, has frequently been found to decrease the validity of Likert-type questionnaire results. For this reason, various item response theory (IRT) models have been proposed to model ERS and correct for it. Comparisons of these…
Descriptors: Item Response Theory, Response Style (Tests), Models, Likert Scales
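A simple descriptive index of ERS consistent with the abstract's definition (the share of extreme categories endorsed, regardless of content); this is a screening heuristic, not one of the IRT-based corrections the study compares.

```python
import numpy as np

def ers_index(responses, n_cats=5):
    """Share of each respondent's answers in the lowest or highest
    category of an n_cats-point Likert scale (responses coded 0..n_cats-1)."""
    responses = np.asarray(responses)
    extreme = (responses == 0) | (responses == n_cats - 1)
    return extreme.mean(axis=1)
```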
Tong Wu; Stella Y. Kim; Carl Westine – Educational and Psychological Measurement, 2023
For large-scale assessments, data are often collected with missing responses. Despite the wide use of item response theory (IRT) in many testing programs, the existing literature offers little insight into the effectiveness of various approaches to handling missing responses in the context of scale linking. Scale linking is commonly used…
Descriptors: Data Analysis, Responses, Statistical Analysis, Measurement
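As background on scale linking, the mean/sigma transformation commonly used to place parameters from form X on the scale of form Y (the abstract does not state which linking method the paper examines):

\[
\theta^{Y} = A\,\theta^{X} + B, \qquad A = \frac{\sigma\!\big(b^{Y}\big)}{\sigma\!\big(b^{X}\big)}, \qquad B = \mu\!\big(b^{Y}\big) - A\,\mu\!\big(b^{X}\big),
\]

where the means and standard deviations are taken over the difficulty estimates \(b\) of the common items. Missing responses distort those estimates and hence the linking constants \(A\) and \(B\), which is why the handling method matters.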
Chunyan Liu; Raja Subhiyah; Richard A. Feinberg – Applied Measurement in Education, 2024
Mixed-format tests that include both multiple-choice (MC) and constructed-response (CR) items have become widely used in many large-scale assessments. When an item response theory (IRT) model is used to score a mixed-format test, the unidimensionality assumption may be violated if the CR items measure a different construct from that measured by MC…
Descriptors: Test Format, Response Style (Tests), Multiple Choice Tests, Item Response Theory
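A typical mixed-format calibration (one common choice, not necessarily the one used in this study) pairs a dichotomous model for MC items with a polytomous model for CR items on a single latent trait:

\[
P(X_{i}=1\mid\theta) = c_i + (1-c_i)\,\frac{\exp\!\big(a_i(\theta-b_i)\big)}{1+\exp\!\big(a_i(\theta-b_i)\big)} \quad \text{(3PL, MC items)},
\]
\[
P(X_{j}=k\mid\theta) = \frac{\exp\sum_{v=1}^{k} a_j(\theta-b_{jv})}{\sum_{h=0}^{m_j}\exp\sum_{v=1}^{h} a_j(\theta-b_{jv})} \quad \text{(generalized partial credit, CR items)},
\]

with the convention \(\sum_{v=1}^{0}(\cdot)=0\). Both models share the same \(\theta\), which is exactly where the unidimensionality assumption the abstract discusses enters.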
Gisele Magarotto Machado; Nelson Hauck-Filho; Ana Celi Pallini; João Lucas Dias-Viana; Leilane Henriette Barreto Chiappetta Santana; Cristina Aparecida Nunes Medeiros da Silva; Felipe Valentini – International Journal of Testing, 2024
Our primary objective was to examine the impact of acquiescent responding on empathy measures. We selected the Affective and Cognitive Measure of Empathy (ACME) as the measure for this case study due to its composition: the affective dissonance scale consists solely of items that are semantically reversed relative to the empathy construct, while…
Descriptors: Cognitive Measurement, Empathy, Adults, Foreign Countries
Cornelia Eva Neuert – Sociological Methods & Research, 2024
The quality of data in surveys is affected by response burden and questionnaire length. With an increasing number of questions, respondents can become bored, tired, and annoyed and may take shortcuts to reduce the effort needed to complete the survey. In this article, direct evidence is presented on how the position of items within a web…
Descriptors: Online Surveys, Test Items, Test Format, Test Construction
Esther Ulitzsch; Janine Buchholz; Hyo Jeong Shin; Jonas Bertling; Oliver Lüdtke – Large-scale Assessments in Education, 2024
Common indicator-based approaches to identifying careless and insufficient effort responding (C/IER) in survey data scan response vectors or timing data for aberrances, such as patterns signaling straight-lining, multivariate outliers, or signals that respondents rushed through the administered items. Each of these approaches is susceptible to…
Descriptors: Response Style (Tests), Attention, Achievement Tests, Foreign Countries
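A minimal example of the indicator-based screening the abstract describes: the longstring statistic (the longest run of identical consecutive responses), one common signal of straight-lining. Cutoff choices are left to the analyst; the data below are invented for illustration.

```python
import numpy as np

def longstring(row):
    """Length of the longest run of identical consecutive responses."""
    best = run = 1
    for prev, cur in zip(row, row[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

data = np.array([[3, 3, 3, 3, 2],     # suspicious straight-liner
                 [1, 4, 2, 5, 3]])    # varied responding
print([longstring(r) for r in data])  # -> [4, 1]
```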