Publication Date
In 20260
Since 202514
Since 2022 (last 5 years)76
Since 2017 (last 10 years)218
Showing 1 to 15 of 218 results
Peer reviewed
Direct link
Yue Liu; Zhen Li; Hongyun Liu; Xiaofeng You – Applied Measurement in Education, 2024
Low test-taking effort of examinees has been considered a source of construct-irrelevant variance in item response modeling, leading to serious consequences for parameter estimation. This study aims to investigate how non-effortful response (NER) influences the estimation of item and person parameters in item-pool scale linking (IPSL) and whether…
Descriptors: Item Response Theory, Computation, Simulation, Responses
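The abstract above concerns how non-effortful responding distorts item and person parameter estimates in IRT-based scale linking. As a point of reference only, the sketch below implements the standard two-parameter logistic (2PL) response function, the kind of model such simulation studies estimate; the parameter names and values are illustrative assumptions, not details from the study.

```python
# A minimal sketch (not the authors' code) of the 2PL item response function.
import numpy as np

def p_correct(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# An examinee of average ability on a moderately discriminating, easy item
print(p_correct(theta=0.0, a=1.2, b=-0.5))  # about 0.65
```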
Peer reviewed
Direct link
Henninger, Mirka – Journal of Educational Measurement, 2021
Item Response Theory models with varying thresholds are essential tools to account for unknown types of response tendencies in rating data. However, in order to separate constructs to be measured and response tendencies, specific constraints have to be imposed on varying thresholds and their interrelations. In this article, a multidimensional…
Descriptors: Response Style (Tests), Item Response Theory, Models, Computation
Peer reviewed
PDF on ERIC Download full text
Jianbin Fu; TsungHan Ho; Xuan Tan – Practical Assessment, Research & Evaluation, 2025
Item parameter estimation using an item response theory (IRT) model with fixed ability estimates is useful in equating with small samples on anchor items. The current study explores the impact of three ability estimation methods (weighted likelihood estimation [WLE], maximum a posteriori [MAP], and posterior ability distribution estimation [PST])…
Descriptors: Item Response Theory, Test Items, Computation, Equated Scores
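The study above compares WLE, MAP, and posterior-distribution ability estimation with item parameters held fixed. The sketch below is a hedged illustration of the MAP step alone for a 2PL model; it assumes a standard-normal prior and a simple grid search, neither of which is necessarily what the authors used.

```python
# Illustrative MAP ability estimation with fixed 2PL item parameters.
import numpy as np

def map_theta(responses, a, b, grid=np.linspace(-4, 4, 801)):
    """Return the grid point maximizing log-likelihood plus log N(0,1) prior."""
    p = 1.0 / (1.0 + np.exp(-a * (grid[:, None] - b)))    # shape (grid, items)
    loglik = (responses * np.log(p) + (1 - responses) * np.log(1 - p)).sum(axis=1)
    logpost = loglik - 0.5 * grid ** 2                    # add log prior (up to a constant)
    return grid[np.argmax(logpost)]

a = np.array([1.0, 1.5, 0.8])    # fixed discriminations for three anchor items
b = np.array([-0.5, 0.0, 1.0])   # fixed difficulties
print(map_theta(np.array([1, 1, 0]), a, b))
```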
Peer reviewed
Direct link
Rios, Joseph A.; Soland, James – Educational and Psychological Measurement, 2021
As low-stakes testing contexts increase, low test-taking effort may serve as a serious validity threat. One common solution to this problem is to identify noneffortful responses and treat them as missing during parameter estimation via the effort-moderated item response theory (EM-IRT) model. Although this model has been shown to outperform…
Descriptors: Computation, Accuracy, Item Response Theory, Response Style (Tests)
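The effort-moderated IRT (EM-IRT) model named above treats responses judged noneffortful as missing during estimation. The sketch below illustrates only the flagging step under one common convention (response times below a fraction of the item's typical time); the threshold rule and all values are assumptions for illustration, not the procedure used in the article.

```python
# Hedged sketch of the preprocessing step behind effort-moderated IRT.
import numpy as np

def flag_noneffortful(resp, rt, threshold_frac=0.10):
    """Set responses to NaN (missing) when response time falls below a
    per-item cutoff, here a fraction of the item's median response time."""
    cutoffs = threshold_frac * np.nanmedian(rt, axis=0)   # one cutoff per item
    resp = resp.astype(float)
    resp[rt < cutoffs] = np.nan                           # treated as missing in estimation
    return resp

rt = np.array([[12.0, 30.0],      # response times in seconds, persons x items
               [0.5, 25.0]])      # a 0.5 s response looks like rapid guessing
resp = np.array([[1, 0],
                 [1, 1]])
print(flag_noneffortful(resp, rt))
```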
Peer reviewed
Direct link
Kuan-Yu Jin; Yi-Jhen Wu; Ming Ming Chiu – Measurement: Interdisciplinary Research and Perspectives, 2025
Many education tests and psychological surveys elicit respondent views of similar constructs across scenarios (e.g., story followed by multiple choice questions) by repeating common statements across scales (one-statement-multiple-scale, OSMS). However, a respondent's earlier responses to the common statement can affect later responses to it…
Descriptors: Administrator Surveys, Teacher Surveys, Responses, Test Items
Jiaying Xiao – ProQuest LLC, 2024
Multidimensional Item Response Theory (MIRT) has been widely used in educational and psychological assessments. It estimates multiple constructs simultaneously and models the correlations among latent constructs. While MIRT provides more accurate results, the unidimensional IRT model is still dominant in real applications. One major reason is that…
Descriptors: Item Response Theory, Algorithms, Computation, Efficiency
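MIRT, as described above, models several correlated latent constructs at once. Below is a minimal sketch of a compensatory multidimensional 2PL response function with two correlated traits; the loadings, intercept, and latent correlation are invented values, and the dissertation's actual estimation algorithm is not represented here.

```python
# Illustrative compensatory multidimensional 2PL response function.
import numpy as np

def mirt_p_correct(theta, a, d):
    """P(correct) for latent trait vector theta, loading vector a, intercept d."""
    return 1.0 / (1.0 + np.exp(-(a @ theta + d)))

rng = np.random.default_rng(0)
cov = np.array([[1.0, 0.5],       # two latent traits with correlation 0.5
                [0.5, 1.0]])
theta = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov)
print(mirt_p_correct(theta, a=np.array([1.2, 0.7]), d=0.3))
```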
Peer reviewed
PDF on ERIC Download full text
Aiman Mohammad Freihat; Omar Saleh Bani Yassin – Educational Process: International Journal, 2025
Background/purpose: This study aimed to assess the accuracy of estimating multiple-choice test item parameters under item response theory models in measurement. Materials/methods: The researchers relied on measurement accuracy indicators, which express the absolute difference between the estimated and actual values of the…
Descriptors: Accuracy, Computation, Multiple Choice Tests, Test Items
Peer reviewed
Direct link
Huang, Sijia; Luo, Jinwen; Cai, Li – Educational and Psychological Measurement, 2023
Random item effects item response theory (IRT) models, which treat both person and item effects as random, have received much attention for more than a decade. The random item effects approach has several advantages in many practical settings. The present study introduced an explanatory multidimensional random item effects rating scale model. The…
Descriptors: Rating Scales, Item Response Theory, Models, Test Items
Peer reviewed
Direct link
Raykov, Tenko; Huber, Chuck; Marcoulides, George A.; Pusic, Martin; Menold, Natalja – Measurement: Interdisciplinary Research and Perspectives, 2021
A readily and widely applicable procedure is discussed that can be used to point and interval estimate the probabilities of particular responses on polytomous items at pre-specified points along underlying latent continua. The items are thereby assumed to be part of unidimensional multi-component measuring instruments that may also contain binary…
Descriptors: Probability, Computation, Test Items, Responses
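The procedure above estimates the probabilities of particular responses on polytomous items at pre-specified points on the latent continuum. As a hedged illustration, the sketch below computes such category probabilities at a fixed theta under a graded response model; the model choice and all parameter values are assumptions, not the authors' procedure.

```python
# Illustrative category probabilities for a polytomous item at a fixed theta.
import numpy as np

def grm_category_probs(theta, a, thresholds):
    """Category probabilities for one polytomous item at ability theta
    under a graded response model with ordered thresholds."""
    cum = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(thresholds))))  # P(X >= k)
    cum = np.concatenate(([1.0], cum, [0.0]))
    return cum[:-1] - cum[1:]            # adjacent differences give P(X = k)

probs = grm_category_probs(theta=0.5, a=1.3, thresholds=[-1.0, 0.0, 1.2])
print(probs, probs.sum())                # four category probabilities summing to 1
```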
Peer reviewed
PDF on ERIC Download full text
Yanxuan Qu; Sandip Sinharay – ETS Research Report Series, 2023
Though a substantial amount of research exists on imputing missing scores in educational assessments, there is little research on cases where responses or scores to an item are missing for all test takers. In this paper, we tackled the problem of imputing missing scores for tests for which the responses to an item are missing for all test takers.…
Descriptors: Scores, Test Items, Accuracy, Psychometrics
Peer reviewed
PDF on ERIC Download full text
Katherine Williams; Chenmu Xing; Kolbi Bradley; Hilary Barth; Andrea L. Patalano – Journal of Numerical Cognition, 2023
Recent work reveals a left digit effect in number line estimation such that adults' and children's estimates for three-digit numbers with different hundreds-place digits but nearly identical magnitudes are systematically different (e.g., 398 is placed too far to the left of 401 on a 0-1000 line, despite their almost indistinguishable magnitudes;…
Descriptors: Computation, Visual Aids, Feedback (Response), Undergraduate Students
Hess, Jessica – ProQuest LLC, 2023
This study was conducted to further research into the impact of student-group item parameter drift (SIPD), referred to as subpopulation item parameter drift in previous research, on ability estimates and proficiency classification accuracy when it occurs in the discrimination parameter of a 2-PL item response theory (IRT) model. Using Monte…
Descriptors: Test Items, Groups, Ability, Item Response Theory
Peer reviewed
Direct link
Finch, W. Holmes – Educational and Psychological Measurement, 2023
Psychometricians have devoted much research and attention to categorical item responses, leading to the development and widespread use of item response theory for the estimation of model parameters and identification of items that do not perform in the same way for examinees from different population subgroups (e.g., differential item functioning…
Descriptors: Test Bias, Item Response Theory, Computation, Methods
Peer reviewed
Direct link
Boris Forthmann; Benjamin Goecke; Roger E. Beaty – Creativity Research Journal, 2025
Human ratings are ubiquitous in creativity research. Yet, the process of rating responses to creativity tasks (typically several hundred or thousands of responses per rater) is often time-consuming and expensive. Planned missing data designs, where raters only rate a subset of the total number of responses, have been recently proposed as one…
Descriptors: Creativity, Research, Researchers, Research Methodology
Peer reviewed
Direct link
Smitha S. Kumar; Michael A. Lones; Manuel Maarek; Hind Zantout – ACM Transactions on Computing Education, 2025
Programming demands a variety of cognitive skills, and mastering these competencies is essential for success in computer science education. The importance of formative feedback is well acknowledged in programming education, and thus, a diverse range of techniques has been proposed to generate and enhance formative feedback for programming…
Descriptors: Automation, Computer Science Education, Programming, Feedback (Response)