Showing all 14 results
Peer reviewed
Mostafa Hosseinzadeh; Ki Lynn Matlock Cole – Educational and Psychological Measurement, 2024
In real-world situations, multidimensional data may appear on large-scale tests or psychological surveys. The purpose of this study was to investigate the effects of the quantity and magnitude of cross-loadings and model specification on item parameter recovery in multidimensional Item Response Theory (MIRT) models, especially when the model was…
Descriptors: Item Response Theory, Models, Maximum Likelihood Statistics, Algorithms
Peer reviewed
Zhou, Sherry; Huggins-Manley, Anne Corinne – Educational and Psychological Measurement, 2020
The semi-generalized partial credit model (Semi-GPCM) has been proposed as a unidimensional modeling method for handling not applicable scale responses and neutral scale responses, and it has been suggested that the model may be of use in handling missing data in scale items. The purpose of this study is to evaluate the ability of the…
Descriptors: Models, Statistical Analysis, Response Style (Tests), Test Items
Peer reviewed
Lee, Woo-yeol; Cho, Sun-Joo – Journal of Educational Measurement, 2017
Cross-level invariance in a multilevel item response model can be investigated by testing whether the within-level item discriminations are equal to the between-level item discriminations. Testing the cross-level invariance assumption is important to understand constructs in multilevel data. However, in most multilevel item response model…
Descriptors: Test Items, Item Response Theory, Item Analysis, Simulation
Peer reviewed
Wang, Yan; Kim, Eun Sook; Dedrick, Robert F.; Ferron, John M.; Tan, Tony – Educational and Psychological Measurement, 2018
Wording effects associated with positively and negatively worded items have been found in many scales. Such effects may threaten construct validity and introduce systematic bias in the interpretation of results. A variety of models have been applied to address wording effects, such as the correlated uniqueness model and the correlated traits and…
Descriptors: Test Items, Test Format, Correlation, Construct Validity
Peer reviewed
Dodeen, Hamzeh – Journal of Psychoeducational Assessment, 2015
The purpose of this study was to evaluate the factor structure of the University of California, Los Angeles (UCLA) Loneliness Scale and examine possible wording effects on a sample of 1,429 students from the United Arab Emirates University. Correlated traits-correlated uniqueness as well as correlated traits-correlated methods were used to examine…
Descriptors: Affective Measures, Test Items, Factor Structure, College Students
Finster, Matthew – Online Submission, 2017
This brief presents initial evidence about the reliability and validity of a novice teacher survey and a novice teacher supervisor survey. The novice teacher and novice teacher supervisor surveys assess how well prepared novice teachers are to meet the job requirements of teaching. The surveys are designed to provide educator preparation programs…
Descriptors: Test Construction, Test Validity, Teacher Surveys, Beginning Teachers
MacDonald, George T. – ProQuest LLC, 2014
A simulation study was conducted to explore the performance of the linear logistic test model (LLTM) when the relationships between items and cognitive components were misspecified. Factors manipulated included percent of misspecification (0%, 1%, 5%, 10%, and 15%), form of misspecification (under-specification, balanced misspecification, and…
Descriptors: Simulation, Item Response Theory, Models, Test Items
Peer reviewed
Jensen, Nate; Rice, Andrew; Soland, James – Educational Evaluation and Policy Analysis, 2018
While most educators assume that not all students try their best on achievement tests, no current research examines if behaviors associated with low test effort, like rapidly guessing on test items, affect teacher value-added estimates. In this article, we examined the prevalence of rapid guessing to determine if this behavior varied by grade,…
Descriptors: Item Response Theory, Value Added Models, Achievement Tests, Test Items
Peer reviewed
Jiao, Hong; Wang, Shudong; He, Wei – Journal of Educational Measurement, 2013
This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…
Descriptors: Computation, Item Response Theory, Models, Monte Carlo Methods
Reckase, Mark D.; McKinley, Robert L. – 1982
A class of multidimensional latent trait models is described. The properties of the model parameters and initial results on the accuracy of a maximum likelihood procedure for estimating them are discussed. The model presented is a special case of the general model described by Rasch (1961), with close similarities to the models…
Descriptors: Correlation, Item Analysis, Latent Trait Theory, Mathematical Models
Koch, William R. – 1981
The two-parameter graded response latent trait model was applied under various conditions to two simulation data sets and to data obtained from a Likert-type attitude scale. The purpose was to investigate the invariance property of the item and person parameter estimates for polychotomously scored data. Correlation and regression analyses, as well…
Descriptors: Attitude Measures, Correlation, Difficulty Level, Goodness of Fit
Peer reviewed
Cattell, Raymond B.; Krug, Samuel E. – Educational and Psychological Measurement, 1986
Critics have occasionally asserted that the number of factors in the 16PF tests is too large. This study discusses factor-analytic methodology and reviews more than 50 studies in the field. It concludes that the number of important primaries encapsulated in the series is no fewer than the stated number. (Author/JAZ)
Descriptors: Correlation, Cross Cultural Studies, Factor Analysis, Maximum Likelihood Statistics
Carlson, James E. – 1993
In this article some results are presented relating to the dimensionality of instruments containing polytomously scored as well as dichotomously scored items, concentrating on the 1992 National Assessment of Educational Progress' (NAEP) mathematics and reading assessment data and several simulated datasets. The maximum likelihood factor analytic…
Descriptors: Computer Simulation, Correlation, Elementary Secondary Education, Factor Analysis
Peer reviewed
Liu, Ou Lydia; Minsky, Jennifer; Ling, Guangming; Kyllonen, Patrick – ETS Research Report Series, 2007
In an effort to standardize academic application procedures, the Standardized Letter of Recommendation (SLR) was developed to capture important cognitive and noncognitive qualities of graduate school candidates. The SLR consists of seven scales ("knowledge," "analytical skills," "communication skills,"…
Descriptors: Letters (Correspondence), Graduate Students, College Applicants, Cognitive Ability