Publication Date: In 2026 (0) | Since 2025 (9) | Since 2022, last 5 years (82) | Since 2017, last 10 years (199)
Showing 1 to 15 of 199 results
Peer reviewed
Hung-Yu Huang – Educational and Psychological Measurement, 2025
The use of discrete categorical formats to assess psychological traits has a long-standing tradition that is deeply embedded in item response theory models. The increasing prevalence and endorsement of computer- or web-based testing have led to greater focus on continuous response formats, which offer numerous advantages in both respondent…
Descriptors: Response Style (Tests), Psychological Characteristics, Item Response Theory, Test Reliability
Peer reviewed
Danielle R. Blazek; Jason T. Siegel – International Journal of Social Research Methodology, 2024
Social scientists have long agreed that satisficing behavior increases error and reduces the validity of survey data. There have been numerous reviews on detecting satisficing behavior, but preventing this behavior has received less attention. The current narrative review provides empirically supported guidance on preventing satisficing by…
Descriptors: Response Style (Tests), Responses, Reaction Time, Test Interpretation
Peer reviewed
Joshua B. Gilbert; Zachary Himmelsbach; James Soland; Mridul Joshi; Benjamin W. Domingue – Journal of Policy Analysis and Management, 2025
Analyses of heterogeneous treatment effects (HTE) are common in applied causal inference research. However, when outcomes are latent variables assessed via psychometric instruments such as educational tests, standard methods ignore the potential HTE that may exist among the individual items of the outcome measure. Failing to account for…
Descriptors: Item Response Theory, Test Items, Error of Measurement, Scores
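The entry above argues that treatment effects can vary across the individual items of a latent outcome measure, not just the total score. As a minimal, hypothetical illustration of why that matters (not the authors' method; all data and effect sizes below are invented), this sketch contrasts a sum-score treatment effect with per-item effects on simulated binary responses:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_items = 2000, 6

# Hypothetical setup: treatment raises the success probability on items 0-2
# only, so the aggregate effect hides a heterogeneous item-level pattern.
treat = rng.integers(0, 2, size=n)               # 0 = control, 1 = treated
base_p = np.full(n_items, 0.5)
item_effect = np.array([0.15, 0.15, 0.15, 0.0, 0.0, 0.0])
p = base_p + np.outer(treat, item_effect)        # n x n_items success probabilities
responses = rng.binomial(1, p)                   # binary item responses

# Sum-score treatment effect (what a standard analysis would report).
sum_scores = responses.sum(axis=1)
ate_sum = sum_scores[treat == 1].mean() - sum_scores[treat == 0].mean()

# Item-level effects: difference in proportion correct, item by item.
item_ate = responses[treat == 1].mean(axis=0) - responses[treat == 0].mean(axis=0)

print(f"sum-score effect: {ate_sum:.2f}")
print("per-item effects:", np.round(item_ate, 2))
```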
Peer reviewed
Viola Merhof; Caroline M. Böhm; Thorsten Meiser – Educational and Psychological Measurement, 2024
Item response tree (IRTree) models are a flexible framework to control self-reported trait measurements for response styles. To this end, IRTree models decompose the responses to rating items into sub-decisions, which are assumed to be made on the basis of either the trait being measured or a response style, whereby the effects of such person…
Descriptors: Item Response Theory, Test Interpretation, Test Reliability, Test Validity
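IRTree models work by splitting each rating into a sequence of sub-decisions that are then treated as pseudo-items. The snippet below is a hedged sketch of one common decomposition for a 5-point scale (midpoint selection, direction, extremity), not the specific models studied in the entry above:

```python
import numpy as np

def irtree_pseudo_items(response):
    """Map a 5-point rating (1-5) to three binary pseudo-items:
    midpoint selection, agreement direction, and extreme responding.
    np.nan marks nodes that are not reached, a standard IRTree convention."""
    mid = 1.0 if response == 3 else 0.0
    if response == 3:
        direction, extreme = np.nan, np.nan          # later nodes not reached
    else:
        direction = 1.0 if response > 3 else 0.0     # agree side of the scale
        extreme = 1.0 if response in (1, 5) else 0.0 # endpoint category chosen
    return mid, direction, extreme

# Each raw rating becomes a row of pseudo-item responses that separate the
# trait-related direction node from the style-related midpoint/extremity nodes.
for r in (1, 2, 3, 4, 5):
    print(r, irtree_pseudo_items(r))
```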
Peer reviewed
Stefanie A. Wind; Benjamin Lugu; Yurou Wang – International Journal of Testing, 2025
Mokken Scale Analysis (MSA) is a nonparametric approach that offers exploratory tools for understanding the nature of item responses while emphasizing invariance requirements. MSA is often discussed as it relates to Rasch measurement theory, which also emphasizes invariance, but uses parametric models. Researchers who have compared and combined…
Descriptors: Item Response Theory, Scaling, Surveys, Evaluation Methods
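MSA's core scalability check can be illustrated with Loevinger's H coefficient, which compares observed inter-item covariances with their maximum possible values given the item marginals. The sketch below is a bare-bones version for dichotomous items on simulated data; it is not the implementation used in the MSA literature, and the data-generating numbers are invented.

```python
import numpy as np

def loevinger_H(X):
    """Scale-level Loevinger's H for a binary response matrix X (persons x items):
    sum of observed item-pair covariances divided by the sum of their maxima
    given the item proportions-correct."""
    p = X.mean(axis=0)                        # item proportions-correct
    n_items = X.shape[1]
    cov = np.cov(X, rowvar=False, bias=True)
    num, den = 0.0, 0.0
    for i in range(n_items):
        for j in range(i + 1, n_items):
            num += cov[i, j]
            # Maximum covariance: the joint proportion can be at most min(p_i, p_j).
            den += min(p[i], p[j]) - p[i] * p[j]
    return num / den

# Simulated unidimensional data: higher ability -> higher success probability.
rng = np.random.default_rng(1)
theta = rng.normal(size=1000)
difficulty = np.linspace(-1.5, 1.5, 8)
prob = 1.0 / (1.0 + np.exp(-(theta[:, None] - difficulty)))
X = rng.binomial(1, prob)

print(f"H = {loevinger_H(X):.2f}")  # values above ~0.3 are conventionally called scalable
```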
Peer reviewed
Ádám Stefkovics – International Journal of Social Research Methodology, 2025
Interviewer effects in telephone surveys on political topics are likely to occur. The literature has yielded considerable evidence about the impact of basic interviewer characteristics, but research is lacking on how interviewers' beliefs may shape responses. This study is aimed at assessing the association between the interviewers' party…
Descriptors: Interviews, Political Attitudes, Telephone Surveys, Political Issues
Zebing Wu – ProQuest LLC, 2024
Response style, one common aberrancy in non-cognitive assessments in psychological fields, is problematic in terms of inaccurate estimation of item and person parameters, which leads to serious reliability, validity, and fairness issues (Baumgartner & Steenkamp, 2001; Bolt & Johnson, 2009; Bolt & Newton, 2011). Response style refers to…
Descriptors: Response Style (Tests), Accuracy, Preferences, Psychological Testing
Peer reviewed
In-Hee Choi – Asia Pacific Education Review, 2024
Longitudinal item response data often exhibit two types of measurement noninvariance: the noninvariance of item parameters between subject groups and that of item parameters across multiple time points. This study proposes a comprehensive approach to the simultaneous modeling of both types of measurement noninvariance in terms of longitudinal item…
Descriptors: Longitudinal Studies, Item Response Theory, Growth Models, Error of Measurement
Peer reviewed
William C. M. Belzak; Daniel J. Bauer – Journal of Educational and Behavioral Statistics, 2024
Testing for differential item functioning (DIF) has undergone rapid statistical developments recently. Moderated nonlinear factor analysis (MNLFA) allows for simultaneous testing of DIF among multiple categorical and continuous covariates (e.g., sex, age, ethnicity, etc.), and regularization has shown promising results for identifying DIF among…
Descriptors: Test Bias, Algorithms, Factor Analysis, Error of Measurement
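The regularization idea in the entry above can be shown in miniature. This is not MNLFA: it is a toy lasso-penalized logistic regression in which a group coefficient for one studied item is shrunk toward zero unless the DIF signal is strong enough to survive the penalty. The data, coefficients, and penalty values are all invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 3000

# Simulated single-item DIF setup: ability plus a group effect on the item.
theta = rng.normal(size=n)                  # matching variable (e.g., rest score)
group = rng.integers(0, 2, size=n)          # reference vs. focal group
logit = 1.2 * theta + 0.6 * group - 0.2     # 0.6 = uniform DIF on this item
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([theta, group])

# An L1 penalty sets small, noise-level DIF coefficients to exactly zero,
# which is the basic appeal of regularized DIF selection.
for C in (0.01, 0.1, 1.0):                  # smaller C = stronger penalty
    model = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(X, y)
    print(f"C={C:<4}  coef(theta, group) = {np.round(model.coef_[0], 2)}")
```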
Peer reviewed (full text on ERIC)
Seyma Erbay Mermer – Pegem Journal of Education and Instruction, 2024
This study aims to compare item and student parameters of dichotomously scored multidimensional constructs estimated based on unidimensional and multidimensional Item Response Theory (IRT) under different conditions of sample size, interdimensional correlation and number of dimensions. This research, conducted with simulations, is of a basic…
Descriptors: Item Response Theory, Correlation, Error of Measurement, Comparative Analysis
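The simulation conditions named above (sample size, interdimensional correlation, number of dimensions) become concrete once you see how multidimensional IRT data are generated. Below is a minimal, hypothetical data-generation sketch under a compensatory multidimensional 2PL; it is not the author's simulation design, and all parameter values are made up.

```python
import numpy as np

def simulate_m2pl(n_persons, a, d, rho, seed=0):
    """Generate binary responses under a compensatory multidimensional 2PL:
    P(X = 1) = logistic(a'theta + d), with equicorrelated latent dimensions."""
    rng = np.random.default_rng(seed)
    n_dims = a.shape[1]
    cov = np.full((n_dims, n_dims), rho)      # interdimensional correlation rho
    np.fill_diagonal(cov, 1.0)
    theta = rng.multivariate_normal(np.zeros(n_dims), cov, size=n_persons)
    logits = theta @ a.T + d                  # persons x items
    return rng.binomial(1, 1 / (1 + np.exp(-logits)))

# Example condition: 2 dimensions, 10 items (5 loading on each), rho = 0.5.
a = np.zeros((10, 2))
a[:5, 0] = a[5:, 1] = 1.0                     # simple-structure discriminations
d = np.linspace(-1, 1, 10)                    # item intercepts
X = simulate_m2pl(n_persons=500, a=a, d=d, rho=0.5)
print(X.shape, X.mean(axis=0).round(2))
```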
Peer reviewed
Xiaowen Liu – International Journal of Testing, 2024
Differential item functioning (DIF) often arises from multiple sources. Within the context of multidimensional item response theory, this study examined DIF items with varying secondary dimensions using the three DIF methods: SIBTEST, Mantel-Haenszel, and logistic regression. The effect of the number of secondary dimensions on DIF detection rates…
Descriptors: Item Analysis, Test Items, Item Response Theory, Correlation
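Of the three DIF methods named above, Mantel-Haenszel is the easiest to show from first principles: stratify examinees by a matching score and pool the group-by-correctness odds ratios across strata. The sketch below is a from-scratch, single-item illustration (none of the SIBTEST or multidimensional machinery of the study), with invented simulation settings.

```python
import numpy as np

def mantel_haenszel_dif(item, group, total):
    """Mantel-Haenszel common odds ratio for one studied item.
    item: 0/1 responses; group: 0 = reference, 1 = focal; total: matching score.
    Returns alpha_MH and the ETS delta-MH effect size, -2.35 * ln(alpha_MH)."""
    num, den = 0.0, 0.0
    for k in np.unique(total):
        s = total == k
        A = np.sum((group[s] == 0) & (item[s] == 1))   # reference correct
        B = np.sum((group[s] == 0) & (item[s] == 0))   # reference incorrect
        C = np.sum((group[s] == 1) & (item[s] == 1))   # focal correct
        D = np.sum((group[s] == 1) & (item[s] == 0))   # focal incorrect
        T = A + B + C + D
        if T > 0:
            num += A * D / T
            den += B * C / T
    alpha = num / den
    return alpha, -2.35 * np.log(alpha)

# Tiny simulated check: the focal group is disadvantaged on the studied item,
# and the matching score is the total on 20 DIF-free anchor items.
rng = np.random.default_rng(3)
n = 4000
group = rng.integers(0, 2, size=n)
theta = rng.normal(size=n)
anchors = rng.binomial(1, 1 / (1 + np.exp(-(theta[:, None] - np.linspace(-1, 1, 20)))))
item = rng.binomial(1, 1 / (1 + np.exp(-(theta - 0.5 * group))))
alpha, delta = mantel_haenszel_dif(item, group, anchors.sum(axis=1))
print(f"alpha_MH = {alpha:.2f}, delta_MH = {delta:.2f}")
```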
Peer reviewed
Hwanggyu Lim; Danqi Zhu; Edison M. Choe; Kyung T. Han – Journal of Educational Measurement, 2024
This study presents a generalized version of the residual differential item functioning (RDIF) detection framework in item response theory, named GRDIF, to analyze differential item functioning (DIF) in multiple groups. The GRDIF framework retains the advantages of the original RDIF framework, such as computational efficiency and ease of…
Descriptors: Item Response Theory, Test Bias, Test Reliability, Test Construction
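As a rough illustration of the residual idea behind RDIF-type statistics (not the actual GRDIF statistics or their multi-group generalization), one can compare the mean raw residual, observed response minus model-implied probability under a single common item curve, across examinee groups. Everything below is a simplified, invented example.

```python
import numpy as np

def mean_residual_gap(y, p_hat, group):
    """Crude residual-based DIF signal: per-group mean of the raw residual
    (observed minus model-implied probability) for one item."""
    resid = y - p_hat
    return {g: resid[group == g].mean() for g in np.unique(group)}

# Simulated example with three groups, as in multi-group DIF analysis:
# the common item curve underpredicts group 2, which gets a true advantage.
rng = np.random.default_rng(4)
n = 5000
theta = rng.normal(size=n)
group = rng.integers(0, 3, size=n)
true_logit = 1.0 * theta + 0.5 * (group == 2)
y = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

# Pretend we fitted one common item curve that ignores group membership.
p_hat = 1 / (1 + np.exp(-1.0 * theta))
print({g: round(v, 3) for g, v in mean_residual_gap(y, p_hat, group).items()})
```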
Peer reviewed
Xin Guo; Qiang Fu – Sociological Methods & Research, 2024
Grouped and right-censored (GRC) counts have been used in a wide range of attitudinal and behavioural surveys, yet they cannot be readily analyzed or assessed by conventional statistical models. This study develops a unified regression framework for the design and optimality of GRC counts in surveys. To process infinitely many grouping schemes for…
Descriptors: Attitude Measures, Surveys, Research Design, Research Methodology
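A concrete sense of what "grouped and right-censored counts" means: each interval category contributes P(a <= Y <= b) to the likelihood and the open-ended top category contributes P(Y >= c). The sketch below writes that likelihood for a simple Poisson outcome with an invented grouping scheme; it is not the authors' unified regression framework or their design-optimality results.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

# Hypothetical GRC coding: respondents report a count only as one of these bins,
# given as (lower, upper); upper = None means right-censored ("6 or more").
BINS = [(0, 0), (1, 2), (3, 5), (6, None)]

def neg_log_lik(log_mu, bin_counts):
    """Negative log-likelihood of binned Poisson data with a right-censored top bin."""
    mu = np.exp(log_mu)
    ll = 0.0
    for (lo, hi), n_k in zip(BINS, bin_counts):
        if hi is None:                        # right-censored: P(Y >= lo)
            p = 1.0 - poisson.cdf(lo - 1, mu)
        else:                                 # grouped: P(lo <= Y <= hi)
            p = poisson.cdf(hi, mu) - poisson.cdf(lo - 1, mu)
        ll += n_k * np.log(p)
    return -ll

# Observed bin frequencies from a survey item (made-up numbers).
bin_counts = np.array([210, 340, 280, 170])
res = minimize_scalar(neg_log_lik, bounds=(-3, 3), args=(bin_counts,), method="bounded")
print(f"MLE of the Poisson mean: {np.exp(res.x):.2f}")
```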
Peer reviewed
Jiaying Xiao; Chun Wang; Gongjun Xu – Grantee Submission, 2024
Accurate item parameters and standard errors (SEs) are crucial for many multidimensional item response theory (MIRT) applications. A recent study proposed the Gaussian Variational Expectation Maximization (GVEM) algorithm to improve computational efficiency and estimation accuracy (Cho et al., 2021). However, the SE estimation procedure has yet to…
Descriptors: Error of Measurement, Models, Evaluation Methods, Item Analysis
Jiangqiong Li – ProQuest LLC, 2024
When measuring latent constructs, for example, language ability, we use statistical models to specify appropriate relationships between the latent construct and observed responses to test items. These models rely on theoretical assumptions to ensure accurate parameter estimates for valid inferences based on the test results. This dissertation…
Descriptors: Goodness of Fit, Item Response Theory, Models, Measurement Techniques