Showing 1 to 15 of 42 results
Peer reviewed
Direct link
Mostafa Hosseinzadeh; Ki Lynn Matlock Cole – Educational and Psychological Measurement, 2024
In real-world situations, multidimensional data may appear on large-scale tests or psychological surveys. The purpose of this study was to investigate the effects of the quantity and magnitude of cross-loadings and model specification on item parameter recovery in multidimensional Item Response Theory (MIRT) models, especially when the model was…
Descriptors: Item Response Theory, Models, Maximum Likelihood Statistics, Algorithms
Peer reviewed
PDF on ERIC Download full text
Jehanzeb Rashid Cheema – Journal of Education in Muslim Societies, 2024
This study explores the relationship between the Spiral Dynamics and the 3H (head, heart, hands) models of human growth and development, using constructs such as empathy, moral reasoning, forgiveness, and community mindedness that have been shown to have implications for education. The specific research question is, "Can a combination of…
Descriptors: Correlation, Factor Analysis, Computer Software, Moral Values
Peer reviewed
Direct link
Raykov, Tenko; DiStefano, Christine; Calvocoressi, Lisa; Volker, Martin – Educational and Psychological Measurement, 2022
A class of effect size indices is discussed that evaluates the degree to which two nested confirmatory factor analysis models differ from each other in fit to a set of observed variables. These descriptive effect measures can be used to quantify the impact of parameter restrictions imposed in an initially considered model and are free…
Descriptors: Effect Size, Models, Measurement Techniques, Factor Analysis
Peer reviewed
Direct link
Cao, Chunhua; Kim, Eun Sook; Chen, Yi-Hsin; Ferron, John – Educational and Psychological Measurement, 2021
This study examined the impact of omitting a covariate interaction effect on parameter estimates in multilevel multiple-indicator multiple-cause models, as well as the sensitivity of fit indices to model misspecification when the between-level, within-level, or cross-level interaction effect was left out of the models. The parameter estimates…
Descriptors: Goodness of Fit, Hierarchical Linear Modeling, Computation, Models
Ben Stenhaug; Ben Domingue – Grantee Submission, 2022
The fit of an item response model is typically conceptualized as whether a given model could have generated the data. We advocate for an alternative view of fit, "predictive fit", based on the model's ability to predict new data. We derive two predictive fit metrics for item response models that assess how well an estimated item response…
Descriptors: Goodness of Fit, Item Response Theory, Prediction, Models
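The "predictive fit" idea above can be made concrete as a held-out log-likelihood: estimate person and item parameters on one set of responses, then score how well they predict responses not used in estimation. A minimal sketch under a simple Rasch (1PL) model — the function names and numeric values are illustrative assumptions, not the metrics derived in the paper:

```python
import math

def rasch_prob(theta, b):
    """P(correct response) for ability theta and item difficulty b
    under the Rasch (1PL) item response model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def holdout_loglik(thetas, bs, responses):
    """Predictive fit as held-out log-likelihood: how well previously
    estimated person (thetas) and item (bs) parameters predict a
    binary response matrix that was NOT used in estimation."""
    ll = 0.0
    for theta, row in zip(thetas, responses):
        for b, x in zip(bs, row):
            p = rasch_prob(theta, b)
            ll += x * math.log(p) + (1 - x) * math.log(1.0 - p)
    return ll

# Hypothetical held-out data: 2 persons x 2 items
print(round(holdout_loglik([0.5, -0.5], [0.0, 1.0], [[1, 0], [0, 0]]), 3))  # -1.624
```

Higher (less negative) held-out log-likelihood indicates better predictive fit; competing item response models can be compared on the same held-out matrix.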
Peer reviewed
Direct link
Fu, Yanyan; Strachan, Tyler; Ip, Edward H.; Willse, John T.; Chen, Shyh-Huei; Ackerman, Terry – International Journal of Testing, 2020
This research examined correlation estimates between latent abilities when using the two-dimensional and three-dimensional compensatory and noncompensatory item response theory models. Simulation study results showed that the recovery of the latent correlation was best when the test contained 100% of simple structure items for all models and…
Descriptors: Item Response Theory, Models, Test Items, Simulation
Peer reviewed
Direct link
Hung, Su-Pin; Huang, Hung-Yu – Journal of Educational and Behavioral Statistics, 2022
To address response style or bias in rating scales, forced-choice items are often used to request that respondents rank their attitudes or preferences among a limited set of options. The rating scales used by raters to render judgments on ratees' performance also contribute to rater bias or errors; consequently, forced-choice items have recently…
Descriptors: Evaluation Methods, Rating Scales, Item Analysis, Preferences
Peer reviewed
Direct link
Zhou, Sherry; Huggins-Manley, Anne Corinne – Educational and Psychological Measurement, 2020
The semi-generalized partial credit model (Semi-GPCM) has been proposed as a unidimensional modeling method for handling not applicable scale responses and neutral scale responses, and it has been suggested that the model may be of use in handling missing data in scale items. The purpose of this study is to evaluate the ability of the…
Descriptors: Models, Statistical Analysis, Response Style (Tests), Test Items
Peer reviewed
Direct link
Nicewander, W. Alan – Educational and Psychological Measurement, 2018
Spearman's correction for attenuation (measurement error) corrects a correlation coefficient for measurement error in either or both of two variables, and follows from the assumptions of classical test theory. Spearman's equation removes all measurement error from a correlation coefficient, which translates into "increasing the reliability of…
Descriptors: Error of Measurement, Correlation, Sample Size, Computation
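The correction described above is the standard disattenuation formula from classical test theory, r_true = r_xy / sqrt(r_xx * r_yy). A minimal sketch with hypothetical numbers (not taken from the article):

```python
import math

def disattenuate(r_xy, rel_x, rel_y):
    """Spearman's correction for attenuation: estimate the correlation
    between true scores from the observed correlation r_xy and the
    reliabilities rel_x, rel_y of the two measures."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Hypothetical: observed r = .42 with reliabilities .80 and .70
print(round(disattenuate(0.42, 0.80, 0.70), 3))  # 0.561
```

Because reliabilities are at most 1, the corrected coefficient is always at least as large in magnitude as the observed one, which is the "increase" the abstract alludes to.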
Peer reviewed
PDF on ERIC Download full text
Toprak, Emre; Gelbal, Selahattin – International Journal of Assessment Tools in Education, 2020
This study aims to compare the performance of artificial neural network, decision tree, and discriminant analysis methods for classifying student achievement. The study uses a multilayer perceptron model to form the artificial neural network, the chi-square automatic interaction detection (CHAID) algorithm to apply the decision tree method, and…
Descriptors: Comparative Analysis, Classification, Artificial Intelligence, Networks
Peer reviewed
Direct link
Lee, Soo; Bulut, Okan; Suh, Youngsuk – Educational and Psychological Measurement, 2017
A number of studies have found multiple indicators multiple causes (MIMIC) models to be an effective tool in detecting uniform differential item functioning (DIF) for individual items and item bundles. A recently developed MIMIC-interaction model is capable of detecting both uniform and nonuniform DIF in the unidimensional item response theory…
Descriptors: Test Bias, Test Items, Models, Item Response Theory
Peer reviewed
PDF on ERIC Download full text
Sahin, Alper; Anil, Duygu – Educational Sciences: Theory and Practice, 2017
This study investigates the effects of sample size and test length on item-parameter estimation in test development utilizing three unidimensional dichotomous models of item response theory (IRT). For this purpose, a real language test comprised of 50 items was administered to 6,288 students. Data from this test was used to obtain data sets of…
Descriptors: Test Length, Sample Size, Item Response Theory, Test Construction
Peer reviewed
Direct link
Lee, Soo; Suh, Youngsuk – Journal of Educational Measurement, 2018
Lord's Wald test for differential item functioning (DIF) has not been studied extensively in the context of the multidimensional item response theory (MIRT) framework. In this article, Lord's Wald test was implemented using two estimation approaches, marginal maximum likelihood estimation and Bayesian Markov chain Monte Carlo estimation, to detect…
Descriptors: Item Response Theory, Sample Size, Models, Error of Measurement
Peer reviewed
Direct link
Koran, Jennifer – Measurement and Evaluation in Counseling and Development, 2016
Proactive preliminary minimum sample size determination can be useful for the early planning stages of a latent variable modeling study to set a realistic scope, long before the model and population are finalized. This study examined existing methods and proposed a new method for proactive preliminary minimum sample size determination.
Descriptors: Factor Analysis, Sample Size, Models, Sampling
Peer reviewed
PDF on ERIC Download full text
Kogar, Esin Yilmaz; Kelecioglu, Hülya – Journal of Education and Learning, 2017
The purpose of this research is to first estimate the item and ability parameters, and the standard error values related to those parameters, obtained from unidimensional item response theory (UIRT), bifactor (BIF), and testlet response theory (TRT) models in tests containing testlets, when the number of testlets, the number of independent items, and…
Descriptors: Item Response Theory, Models, Mathematics Tests, Test Items