Jochen Ranger; Christoph König; Benjamin W. Domingue; Jörg-Tobias Kuhn; Andreas Frey – Journal of Educational and Behavioral Statistics, 2024
In the existing multidimensional extensions of the log-normal response time (LNRT) model, the log response times are decomposed into a linear combination of several latent traits. These models are fully compensatory, as low levels on some traits can be counterbalanced by high levels on others. We propose an alternative multidimensional extension…
Descriptors: Models, Statistical Distributions, Item Response Theory, Response Rates (Questionnaires)
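A minimal sketch of the compensatory structure this abstract refers to (notation assumed here, not taken from the article): in a multidimensional LNRT model, the log response time of person $j$ on item $i$ is

```latex
\log T_{ij} = \beta_i - \mathbf{a}_i^{\top}\boldsymbol{\tau}_j + \varepsilon_{ij},
\qquad \varepsilon_{ij} \sim N(0,\sigma_i^{2}),
```

where $\beta_i$ is the item's time intensity, $\boldsymbol{\tau}_j$ a vector of latent speed traits, and $\mathbf{a}_i$ nonnegative loadings. The model is compensatory because a low component of $\boldsymbol{\tau}_j$ can be offset by a high one inside the inner product; the proposed alternative presumably relaxes exactly this trade-off.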
Markus T. Jansen; Ralf Schulze – Educational and Psychological Measurement, 2024
Thurstonian forced-choice modeling is considered to be a powerful new tool for estimating item and person parameters while simultaneously testing model fit. This assessment approach is associated with the aim of reducing faking and other response tendencies that plague traditional self-report trait assessments. As a result of major recent…
Descriptors: Factor Analysis, Models, Item Analysis, Evaluation Methods
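For orientation, the Thurstonian core of such forced-choice models (standard textbook form, not specific to this article): choosing item $A$ over item $B$ is governed by latent utilities $t_A$ and $t_B$, so that

```latex
P(A \succ B) = P(t_A - t_B > 0)
  = \Phi\!\left(\frac{\mu_A - \mu_B}{\sqrt{\sigma_A^{2} + \sigma_B^{2} - 2\sigma_{AB}}}\right),
```

and in Thurstonian IRT the utilities are linked to the traits of interest through a factor model, $\mathbf{t} = \boldsymbol{\mu} + \boldsymbol{\Lambda}\boldsymbol{\eta} + \boldsymbol{\varepsilon}$, which is what allows item and person parameters to be estimated jointly.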
Pere J. Ferrando; Fabia Morales-Vives; Ana Hernández-Dorado – Educational and Psychological Measurement, 2024
In recent years, some models for binary and graded format responses have been proposed to assess unipolar variables or "quasi-traits." These studies have mainly focused on clinical variables that have traditionally been treated as bipolar traits. In the present study, we have made a proposal for unipolar traits measured with continuous…
Descriptors: Item Analysis, Goodness of Fit, Accuracy, Test Validity
Sooyong Lee; Suhwa Han; Seung W. Choi – Journal of Educational Measurement, 2024
Research has shown that multiple-indicator multiple-cause (MIMIC) models can result in inflated Type I error rates in detecting differential item functioning (DIF) when the assumption of equal latent variance is violated. This study explains how the violation of the equal variance assumption adversely impacts the detection of nonuniform DIF and…
Descriptors: Factor Analysis, Bayesian Statistics, Test Bias, Item Response Theory
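A sketch of the MIMIC DIF setup the abstract presupposes (notation assumed): with grouping covariate $z$ and latent trait $\eta$,

```latex
y_{ij}^{*} = \lambda_j \eta_i + \beta_j z_i + \omega_j (\eta_i z_i) + \varepsilon_{ij},
\qquad \eta_i = \gamma z_i + \zeta_i,
```

where $\beta_j \neq 0$ signals uniform DIF and $\omega_j \neq 0$ (an interaction term used in MIMIC extensions for nonuniform DIF) signals nonuniform DIF. The standard MIMIC model also assumes $\operatorname{var}(\zeta)$ is equal across groups; the abstract's point is that when that equality fails, tests of these DIF effects no longer keep their nominal Type I error rate.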
Hoang V. Nguyen; Niels G. Waller – Educational and Psychological Measurement, 2024
We conducted an extensive Monte Carlo study of factor-rotation local solutions (LS) in multidimensional, two-parameter logistic (M2PL) item response models. In this study, we simulated more than 19,200 data sets that were drawn from 96 model conditions and performed more than 7.6 million rotations to examine the influence of (a) slope parameter…
Descriptors: Monte Carlo Methods, Item Response Theory, Correlation, Error of Measurement
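The random-starts strategy typically used to surface rotation local solutions can be sketched as follows (varimax is chosen here purely for brevity; the study's own rotation criteria and M2PL estimation pipeline are more involved and are not reproduced):

```python
import numpy as np

def varimax(L, tol=1e-8, max_iter=500):
    """Kaiser varimax rotation; returns rotated loadings and criterion value."""
    p, k = L.shape
    R = np.eye(k)
    crit_old = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        # SVD step of the standard varimax algorithm
        G = L.T @ (Lr**3 - Lr * (np.sum(Lr**2, axis=0) / p))
        U, s, Vt = np.linalg.svd(G)
        R = U @ Vt
        crit = s.sum()
        if crit < crit_old * (1 + tol):
            break
        crit_old = crit
    return L @ R, crit

def random_orthogonal(k, rng):
    """Random orthogonal start, drawn via QR of a Gaussian matrix."""
    Q, _ = np.linalg.qr(rng.standard_normal((k, k)))
    return Q

rng = np.random.default_rng(0)
# Hypothetical 12 x 2 loading matrix with cross-loadings
L = np.array([[0.7, 0.1], [0.6, 0.2], [0.8, 0.0], [0.5, 0.4],
              [0.1, 0.7], [0.2, 0.6], [0.0, 0.8], [0.4, 0.5],
              [0.5, 0.5], [0.6, 0.4], [0.3, 0.6], [0.4, 0.6]])

crits = set()
for _ in range(200):
    _, crit = varimax(L @ random_orthogonal(L.shape[1], rng))
    crits.add(round(crit, 6))
print(sorted(crits))  # more than one value => rotation local solutions
```

Because the criterion value is invariant to column permutation and sign flips, more than one distinct converged value indicates genuinely different local solutions rather than trivially relabeled ones.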
Guo, Wenjing; Choi, Youn-Jeng – Educational and Psychological Measurement, 2023
Determining the number of dimensions is extremely important in applying item response theory (IRT) models to data. Traditional and revised parallel analyses have been proposed within the factor analysis framework, and both have shown some promise in assessing dimensionality. However, their performance in the IRT framework has not been…
Descriptors: Item Response Theory, Evaluation Methods, Factor Analysis, Guidelines
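For reference, traditional (Horn) parallel analysis is easy to sketch; note that it operates on Pearson correlations, whereas applications to binary IRT data usually substitute tetrachoric correlations, and the "revised" variant compares against data simulated from a fitted model rather than pure noise:

```python
import numpy as np

def parallel_analysis(X, n_sims=100, quantile=0.95, seed=0):
    """Horn's parallel analysis: keep dimensions whose observed eigenvalues
    exceed the corresponding quantile of eigenvalues from random data."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
    rand = np.empty((n_sims, p))
    for s in range(n_sims):
        Z = rng.standard_normal((n, p))
        rand[s] = np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False))[::-1]
    threshold = np.quantile(rand, quantile, axis=0)
    return int((obs > threshold).sum()), obs, threshold

# Hypothetical data: pure noise, so roughly zero dimensions should be retained
X = np.random.default_rng(1).standard_normal((300, 12))
n_dims, _, _ = parallel_analysis(X)
print(n_dims)
```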
Schweizer, Karl; Wang, Tengfei; Ren, Xuezhu – Journal of Experimental Education, 2022
The essay reports two studies on confirmatory factor analysis of speeded data with an effect of selective responding. This response strategy leads test takers to choose their own working order instead of completing the items in the given order. Methods for detecting speededness despite such a deviation from the given order are proposed and…
Descriptors: Factor Analysis, Response Style (Tests), Decision Making, Test Items
Ferrando, Pere J.; Navarro-González, David – Educational and Psychological Measurement, 2021
Item response theory "dual" models (DMs) in which both items and individuals are viewed as sources of differential measurement error so far have been proposed only for unidimensional measures. This article proposes two multidimensional extensions of existing DMs: the M-DTCRM (dual Thurstonian continuous response model), intended for…
Descriptors: Item Response Theory, Error of Measurement, Models, Factor Analysis
Kim, Kyung Yong – Journal of Educational Measurement, 2020
New items are often evaluated prior to their operational use to obtain item response theory (IRT) item parameter estimates for quality control purposes. Fixed parameter calibration is one linking method that is widely used to estimate parameters for new items and place them on the desired scale. This article provides detailed descriptions of two…
Descriptors: Item Response Theory, Evaluation Methods, Test Items, Simulation
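The idea behind fixed parameter calibration can be sketched as follows: anchor items keep their bank parameters and the latent distribution is held at N(0, 1), while only the new item's parameters are estimated by marginal maximum likelihood. All names and data below are hypothetical, and the article's FPC variants (e.g., how the latent distribution is updated across EM cycles) are more refined than this:

```python
import numpy as np
from scipy import optimize
from scipy.stats import norm

def p2pl(theta, a, b):
    """2PL item response function."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Rectangular quadrature over the assumed N(0, 1) ability distribution
NODES = np.linspace(-4.0, 4.0, 41)
WTS = norm.pdf(NODES)
WTS /= WTS.sum()

def neg_mml(params, X_anchor, x_new, a_fix, b_fix):
    """Negative marginal log-likelihood for one new item, anchors fixed."""
    a_new, b_new = params
    P = p2pl(NODES[:, None], a_fix[None, :], b_fix[None, :])            # q x k
    logL = X_anchor @ np.log(P).T + (1 - X_anchor) @ np.log(1 - P).T    # n x q
    Pn = p2pl(NODES, a_new, b_new)                                      # q
    Ln = np.where(x_new[:, None] == 1, Pn[None, :], 1 - Pn[None, :])    # n x q
    marg = (np.exp(logL) * Ln) @ WTS                                    # n
    return -np.sum(np.log(marg + 1e-300))

# Hypothetical data: 500 examinees, 10 anchor items, 1 new item
rng = np.random.default_rng(7)
n, k = 500, 10
theta = rng.standard_normal(n)
a_fix = rng.uniform(0.8, 2.0, k)
b_fix = rng.uniform(-1.5, 1.5, k)
X_anchor = (rng.random((n, k)) < p2pl(theta[:, None], a_fix, b_fix)).astype(float)
x_new = (rng.random(n) < p2pl(theta, 1.2, 0.5)).astype(float)

res = optimize.minimize(neg_mml, x0=[1.0, 0.0],
                        args=(X_anchor, x_new, a_fix, b_fix),
                        bounds=[(0.05, 5.0), (-4.0, 4.0)])
print(res.x)  # estimated (a, b) for the new item, on the anchor scale
```

Because the anchor parameters are never re-estimated, the new item's estimates land on the existing bank scale without a separate linking transformation, which is the practical appeal of the method.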
Dakota W. Cintron – ProQuest LLC, 2020
Observable data in empirical social and behavioral science studies are often categorical (i.e., binary, ordinal, or nominal). When categorical data are outcomes, they do not satisfy the scale and distributional assumptions of linear regression and factor analysis. Attempting to estimate model parameters for categorical outcome data with the…
Descriptors: Factor Analysis, Computation, Statistics, Methods
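The usual way to reconcile categorical outcomes with a factor-analytic model is a threshold formulation (generic notation; whether the dissertation uses exactly this parameterization is not stated in the abstract):

```latex
y_{ij} = c \iff \tau_{j,c-1} < y_{ij}^{*} \le \tau_{j,c},
\qquad y_{ij}^{*} = \boldsymbol{\lambda}_j^{\top}\boldsymbol{\eta}_i + \varepsilon_{ij},
```

so the linear model holds for the latent response $y^{*}$, not for the observed category $y$; treating $y$ itself as continuous ignores the thresholds $\tau$ and distorts parameter estimates.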
Sarsa, Sami; Leinonen, Juho; Hellas, Arto – Journal of Educational Data Mining, 2022
New knowledge tracing models are continuously being proposed, even at a pace where state-of-the-art models cannot be compared with each other at the time of publication. This leads to a situation where ranking models is hard, and the underlying reasons for the models' performance -- be it architectural choices, hyperparameter tuning, performance…
Descriptors: Learning Processes, Artificial Intelligence, Intelligent Tutoring Systems, Memory
Mateja Ploj Virtic; Andre Du Plessis; Andrej Šorgo – Center for Educational Policy Studies Journal, 2023
In the context of improving the quality of teacher education, the focus of the present work was to adapt the Mentoring for Effective Primary Science Teaching instrument so that it is more universal and can be used beyond the elementary science mentoring context. The adapted instrument was renamed the Mentoring for Effective Teaching…
Descriptors: Test Construction, Test Validity, Test Reliability, Measures (Individuals)
Chung, Seungwon; Houts, Carrie – Measurement: Interdisciplinary Research and Perspectives, 2020
Advanced modeling of item response data through the item response theory (IRT) or item factor analysis frameworks is becoming increasingly popular. In the social and behavioral sciences, the underlying structure of tests/assessments is often multidimensional (i.e., more than 1 latent variable/construct is represented in the items). This review…
Descriptors: Item Response Theory, Evaluation Methods, Models, Factor Analysis
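The equivalence underlying this review is standard in the unidimensional normal-ogive case: with standardized loading $\lambda_j$ and threshold $\tau_j$ from item factor analysis,

```latex
a_j = \frac{\lambda_j}{\sqrt{1-\lambda_j^{2}}}, \qquad b_j = \frac{\tau_j}{\lambda_j},
```

converts between the factor-analytic and IRT parameterizations; multidimensional models generalize the same mapping, with one loading per latent dimension.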
Baris Pekmezci, Fulya; Gulleroglu, H. Deniz – Eurasian Journal of Educational Research, 2019
Purpose: This study aims to investigate, under different conditions, the orthogonality assumption that restricts the use of bifactor item response theory. Method: The data were generated by simulation in accordance with the bifactor model, under two different models (Model 1 and Model 2).…
Descriptors: Item Response Theory, Accuracy, Item Analysis, Correlation
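For context, the assumption in question: in a bifactor model every item loads on a general factor $G$ and exactly one specific factor $S_{s(j)}$,

```latex
x_{ij} = \lambda^{G}_{j} G_i + \lambda^{S}_{j} S_{s(j),i} + \varepsilon_{ij},
\qquad \operatorname{cov}(G, S_s) = 0, \quad \operatorname{cov}(S_s, S_{s'}) = 0,
```

and it is these zero covariances among the general and specific factors that constitute the orthogonality restriction the abstract refers to.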
de Vries, Jitske; Feskens, Remco; Keuning, Jos; van der Kleij, Fabienne – Education Sciences, 2022
The aim of this study was to investigate the comparability of feedback across culturally diverse countries by assessing the measurement invariance in PISA 2015 data. A multi-group confirmatory factor analysis showed that the feedback scale implemented in PISA 2015 was not invariant across countries. The intercepts and residuals of the factor model…
Descriptors: Cross Cultural Studies, Feedback (Response), Cultural Differences, Factor Analysis
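The invariance hierarchy tested in such multi-group CFAs (standard formulation): for country $g$,

```latex
\mathbf{x}_{ig} = \boldsymbol{\tau}_g + \boldsymbol{\Lambda}_g \boldsymbol{\eta}_{ig} + \boldsymbol{\varepsilon}_{ig},
```

with configural invariance requiring only the same loading pattern across groups, metric invariance adding $\boldsymbol{\Lambda}_g = \boldsymbol{\Lambda}$, and scalar invariance further adding $\boldsymbol{\tau}_g = \boldsymbol{\tau}$. The reported differences in intercepts and residuals mean scalar invariance fails, so observed feedback means cannot be compared directly across countries.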
