Showing 1 to 15 of 23 results
Peer reviewed
Karadavut, Tugba – Applied Measurement in Education, 2021
Mixture IRT models address the heterogeneity in a population by extracting latent classes and allowing item parameters to vary between latent classes. Once the latent classes are extracted, they need to be further examined to be characterized. Some approaches have been adopted in the literature for this purpose. These approaches examine either the…
Descriptors: Item Response Theory, Models, Test Items, Maximum Likelihood Statistics
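The characterization step described above is easy to illustrate: once class-specific item parameters exist, a respondent's posterior class membership follows from Bayes' rule. A minimal Python sketch, assuming an illustrative two-class mixture Rasch setup (the difficulties, mixing weights, and response pattern below are invented for demonstration, not taken from the article):

```python
import numpy as np

# Illustrative two-class mixture Rasch setup: item difficulties
# differ between latent classes (the defining feature of mixture IRT).
b = np.array([[-1.0, -0.5, 0.0,  0.5,  1.0],   # class 0 difficulties
              [ 1.0,  0.5, 0.0, -0.5, -1.0]])  # class 1 difficulties
pi = np.array([0.6, 0.4])                      # mixing proportions

def rasch_prob(theta, b_row):
    """P(correct) under the Rasch model for one class's difficulties."""
    return 1.0 / (1.0 + np.exp(-(theta - b_row)))

# One respondent's ability estimate and observed 0/1 response pattern.
theta = 0.3
x = np.array([1, 1, 0, 1, 0])

# Posterior class membership via Bayes' rule:
# P(class c | x) is proportional to pi_c * prod_i P(x_i | theta, class c).
lik = np.array([np.prod(rasch_prob(theta, b[c]) ** x *
                        (1 - rasch_prob(theta, b[c])) ** (1 - x))
                for c in range(2)])
post = pi * lik / np.sum(pi * lik)
print("posterior class probabilities:", post)
```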
Peer reviewed
Zhou, Sherry; Huggins-Manley, Anne Corinne – Educational and Psychological Measurement, 2020
The semi-generalized partial credit model (Semi-GPCM) has been proposed as a unidimensional modeling method for handling not applicable scale responses and neutral scale responses, and it has been suggested that the model may be of use in handling missing data in scale items. The purpose of this study is to evaluate the ability of the…
Descriptors: Models, Statistical Analysis, Response Style (Tests), Test Items
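The Semi-GPCM extends the generalized partial credit model (GPCM), so the baseline category probabilities are worth having concretely. A minimal sketch of the standard GPCM (not the semi-generalized variant itself), with invented parameter values:

```python
import numpy as np

def gpcm_probs(theta, a, b):
    """Category probabilities under the generalized partial credit model.

    P(X = k | theta) is proportional to exp(sum_{j<=k} a*(theta - b_j)),
    with the empty sum for k = 0 defined as 0.
    """
    steps = np.concatenate(([0.0], a * (theta - b)))  # per-step terms
    z = np.cumsum(steps)                              # cumulative numerators
    ez = np.exp(z - z.max())          # subtract max for numerical stability
    return ez / ez.sum()

# Illustrative item: discrimination a and three step difficulties b,
# giving four response categories.
print(gpcm_probs(theta=0.5, a=1.2, b=np.array([-1.0, 0.0, 1.0])))
```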
Peer reviewed
Schweizer, Karl; Troche, Stefan – Educational and Psychological Measurement, 2018
In confirmatory factor analysis, quite similar measurement models serve to detect the difficulty factor and the factor due to the item-position effect. The item-position effect refers to the increasing dependency among the responses to successively presented items of a test, whereas the difficulty factor is ascribed to the wide range of…
Descriptors: Investigations, Difficulty Level, Factor Analysis, Models
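The two factors can be told apart by their loading patterns: the item-position factor's loadings track serial position, while a difficulty factor's loadings track item difficulty. A small simulation sketch, assuming an illustrative linear increase in the position loadings (all values below are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n_persons, n_items = 500, 10

# One ability factor with constant loadings, plus an item-position
# factor whose loadings grow linearly with position -- the signature
# that distinguishes it from a difficulty factor.
ability = rng.standard_normal(n_persons)
position_prop = rng.standard_normal(n_persons)  # person's position effect

lam_ability = np.full(n_items, 0.7)
lam_position = np.linspace(0.0, 0.5, n_items)   # grows with position

noise = rng.standard_normal((n_persons, n_items)) * 0.5
y = (ability[:, None] * lam_ability
     + position_prop[:, None] * lam_position
     + noise)

# Later items correlate more strongly with the position propensity.
r = [np.corrcoef(y[:, i], position_prop)[0, 1] for i in range(n_items)]
print(np.round(r, 2))
```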
Peer reviewed
Andersson, Björn; Xin, Tao – Educational and Psychological Measurement, 2018
In applications of item response theory (IRT), an estimate of the reliability of the ability estimates or sum scores is often reported. However, analytical expressions for the standard errors of the estimators of the reliability coefficients are not available in the literature and therefore the variability associated with the estimated reliability…
Descriptors: Item Response Theory, Test Reliability, Test Items, Scores
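Absent analytic expressions, resampling gives a rough sense of this variability. A minimal bootstrap sketch for the empirical reliability of ability estimates, using invented stand-in estimates rather than a real calibration; this is a simulation-based stand-in, not the article's analytic derivation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented stand-ins: EAP-style ability estimates and their standard
# errors (in practice these come from an IRT calibration).
theta_hat = rng.standard_normal(1000)
se = np.full(1000, 0.45)

def empirical_reliability(th, s):
    """Empirical reliability: estimated-score variance over
    estimated-score variance plus mean squared standard error."""
    v = np.var(th, ddof=1)
    return v / (v + np.mean(s ** 2))

# Bootstrap the sampling variability of the reliability coefficient.
B = 2000
boot = np.empty(B)
for i in range(B):
    idx = rng.integers(0, theta_hat.size, theta_hat.size)
    boot[i] = empirical_reliability(theta_hat[idx], se[idx])

print("reliability estimate:", round(empirical_reliability(theta_hat, se), 3))
print("bootstrap SE:", round(float(boot.std(ddof=1)), 4))
```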
Peer reviewed
Savalei, Victoria; Rhemtulla, Mijke – Journal of Educational and Behavioral Statistics, 2017
In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately…
Descriptors: Computation, Statistical Analysis, Test Items, Maximum Likelihood Statistics
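The composites in question are simple linear combinations of raw items. A minimal sketch of how scale scores and parcels are typically formed, with invented item data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented raw item data: 300 participants, 9 Likert-type items
# assumed to measure one construct.
items = rng.integers(1, 6, size=(300, 9)).astype(float)

# A scale score is a linear composite of the items (here: the mean),
# as used in regression and path analysis models.
scale_score = items.mean(axis=1)

# Parcels: non-overlapping item subsets averaged into coarser
# indicators, as often used in structural equation models.
parcels = np.stack([items[:, i::3].mean(axis=1) for i in range(3)], axis=1)

print("scale score shape:", scale_score.shape)    # (300,)
print("parcel indicators shape:", parcels.shape)  # (300, 3)
```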
Peer reviewed
Zeller, Florian; Krampen, Dorothea; Reiß, Siegbert; Schweizer, Karl – Educational and Psychological Measurement, 2017
The item-position effect describes how an item's position within a test, that is, the number of previously completed items, affects the response to this item. Previously, this effect was represented by constraints reflecting simple courses, for example, a linear increase. Due to the inflexibility of these representations, our aim was to examine…
Descriptors: Goodness of Fit, Simulation, Factor Analysis, Intelligence Tests
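The "simple courses" mentioned above are fixed loading patterns on the item-position factor. A small sketch of a few candidate patterns that such a factor model might compare by model fit (the specific shapes below are common illustrative choices, not necessarily the article's set):

```python
import numpy as np

n_items = 12
pos = np.arange(1, n_items + 1, dtype=float)

# Fixed loading patterns ("courses") for the item-position factor.
# Each encodes one hypothesis about how the effect develops over the
# test; in a CFA they would be compared via goodness of fit.
courses = {
    "linear":      (pos - 1) / (n_items - 1),
    "logarithmic": np.log(pos) / np.log(n_items),
    "quadratic":   ((pos - 1) / (n_items - 1)) ** 2,
}
for name, lam in courses.items():
    print(f"{name:12s}", np.round(lam, 2))
```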
Peer reviewed
Wang, Yan; Kim, Eun Sook; Dedrick, Robert F.; Ferron, John M.; Tan, Tony – Educational and Psychological Measurement, 2018
Wording effects associated with positively and negatively worded items have been found in many scales. Such effects may threaten construct validity and introduce systematic bias in the interpretation of results. A variety of models have been applied to address wording effects, such as the correlated uniqueness model and the correlated traits and…
Descriptors: Test Items, Test Format, Correlation, Construct Validity
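As a concrete preliminary to any wording-effect model: negatively worded items are usually reverse-scored so all items point the same direction, and residual correlations among same-worded items are a first diagnostic. A minimal sketch with invented Likert data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented 5-point Likert data: items 0-2 positively worded,
# items 3-5 negatively worded.
X = rng.integers(1, 6, size=(200, 6)).astype(float)
neg = [3, 4, 5]

# Reverse-score negatively worded items (for a 1-5 scale: 6 - x).
X[:, neg] = 6.0 - X[:, neg]

# A correlated-uniqueness model would additionally allow residual
# correlations within the negatively worded set; their raw
# inter-correlations are a first look at a possible wording effect.
print(np.round(np.corrcoef(X[:, neg], rowvar=False), 2))
```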
Peer reviewed
Bulut, Okan; Quo, Qi; Gierl, Mark J. – Large-scale Assessments in Education, 2017
Position effects may occur in both paper-and-pencil tests and computerized assessments when examinees respond to the same test items located in different positions on the test. To examine position effects in large-scale assessments, previous studies often used multilevel item response models within the generalized linear mixed modeling framework.…
Descriptors: Structural Equation Models, Educational Assessment, Measurement, Test Items
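For a single booklet, the core idea reduces to a response model whose log-odds include a position term. A minimal simulation sketch, assuming an illustrative linear drift parameter (the values below are invented, not estimates from the article):

```python
import numpy as np

rng = np.random.default_rng(5)
n_persons, n_positions = 2000, 20

# Illustrative position-effect model: the log-odds of a correct
# response decline linearly with the item's position in the booklet.
theta = rng.standard_normal(n_persons)
b, gamma = 0.0, -0.03          # item difficulty; per-position drift

positions = np.arange(1, n_positions + 1)
logit = theta[:, None] - b + gamma * positions[None, :]
y = rng.random((n_persons, n_positions)) < 1 / (1 + np.exp(-logit))

# Proportion correct drifts downward across positions.
print(np.round(y.mean(axis=0), 2))
```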
Peer reviewed
Yang, Ji Seung; Zheng, Xiaying – Journal of Educational and Behavioral Statistics, 2018
The purpose of this article is to introduce and review the capability and performance of the Stata item response theory (IRT) package available as of Stata version 14 (2015). Using a simulated data set and a publicly available item response data set extracted from the Programme for International Student Assessment, we review the IRT package from…
Descriptors: Item Response Theory, Item Analysis, Computer Software, Statistical Analysis
Peer reviewed
Ranger, Jochen; Kuhn, Jörg-Tobias – Journal of Educational and Behavioral Statistics, 2015
In this article, a latent trait model is proposed for the response times in psychological tests. The latent trait model is based on the linear transformation model and subsumes popular models from survival analysis, like the proportional hazards model and the proportional odds model. At the core of the model is the assumption that an unspecified monotone…
Descriptors: Psychological Testing, Reaction Time, Statistical Analysis, Models
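One of the subsumed special cases is easy to simulate: under a proportional hazards model with a constant baseline hazard per item, response times are exponential with a rate scaled by the person's latent speed. A minimal sketch with invented hazards (a demonstration of the special case, not the article's estimation method):

```python
import numpy as np

rng = np.random.default_rng(6)
n_persons = 1000

# Proportional hazards response-time model: with constant baseline
# hazard h0_i per item, the hazard for person p is h0_i * exp(tau_p),
# so faster examinees (larger tau) tend to respond sooner.
tau = rng.standard_normal(n_persons)        # latent speed
h0 = np.array([0.2, 0.25, 0.3, 0.35, 0.4])  # item baseline hazards

rates = h0[None, :] * np.exp(tau[:, None])
times = rng.exponential(1.0 / rates)        # exponential with those rates

print("median RT per item:", np.round(np.median(times, axis=0), 2))
```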
Peer reviewed
Finch, Holmes; Edwards, Julianne M. – Educational and Psychological Measurement, 2016
Standard approaches for estimating item response theory (IRT) model parameters generally work under the assumption that the latent trait being measured by a set of items follows the normal distribution. Estimation of IRT parameters in the presence of nonnormal latent traits has been shown to generate biased person and item parameter estimates. A…
Descriptors: Item Response Theory, Computation, Nonparametric Statistics, Bayesian Statistics
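The problematic setup is straightforward to reproduce: draw the latent trait from a skewed distribution and generate 2PL responses. A minimal sketch of such a data-generating step (the distributional choices are illustrative, not the article's simulation design):

```python
import numpy as np

rng = np.random.default_rng(7)
n_persons, n_items = 1000, 20

# Nonnormal latent trait: a standardized chi-square(2) draw is
# strongly right-skewed -- the kind of trait distribution under which
# normal-theory IRT estimation is known to produce biased estimates.
theta = (rng.chisquare(2, n_persons) - 2) / 2.0

a = rng.uniform(0.8, 2.0, n_items)      # discriminations
b = rng.uniform(-2.0, 2.0, n_items)     # difficulties

# 2PL response generation.
p = 1 / (1 + np.exp(-a[None, :] * (theta[:, None] - b[None, :])))
responses = (rng.random((n_persons, n_items)) < p).astype(int)

skew = ((theta - theta.mean()) ** 3).mean() / theta.std() ** 3
print("skewness of theta:", round(float(skew), 2))
print("response matrix shape:", responses.shape)
```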
Finster, Matthew – Online Submission, 2017
This brief presents initial evidence about the reliability and validity of a novice teacher survey and a novice teacher supervisor survey. The novice teacher and novice teacher supervisor surveys assess how well prepared novice teachers are to meet the job requirements of teaching. The surveys are designed to provide educator preparation programs…
Descriptors: Test Construction, Test Validity, Teacher Surveys, Beginning Teachers
Peer reviewed
Cormier, Damien C.; Yeo, Seungsoo; Christ, Theodore J.; Offrey, Laura D.; Pratt, Katherine – Exceptionality, 2016
The purpose of this study is to evaluate the relationship of mathematics calculation rate (curriculum-based measurement of mathematics; CBM-M), reading rate (curriculum-based measurement of reading; CBM-R), and mathematics application and problem solving skills (mathematics screener) among students at four levels of proficiency on a statewide…
Descriptors: Computation, Problem Solving, Grade 3, Elementary School Students
MacDonald, George T. – ProQuest LLC, 2014
A simulation study was conducted to explore the performance of the linear logistic test model (LLTM) when the relationships between items and cognitive components were misspecified. Factors manipulated included percent of misspecification (0%, 1%, 5%, 10%, and 15%), form of misspecification (under-specification, balanced misspecification, and…
Descriptors: Simulation, Item Response Theory, Models, Test Items
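In the LLTM, item difficulty decomposes as b = Q·eta over a Q-matrix of cognitive components, so misspecification amounts to perturbing Q. A minimal sketch of the percent-misspecification manipulation (the entry-flipping scheme below is one plausible reading of "percent of misspecification", not necessarily the study's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(8)
n_items, n_components = 30, 4

# LLTM: item difficulty is a weighted sum of cognitive-component
# effects, b = Q @ eta, where Q marks which components each item taps.
Q = rng.integers(0, 2, size=(n_items, n_components))
eta = np.array([0.5, -0.3, 0.8, 0.2])   # illustrative component effects
b = Q @ eta

def misspecify(Q, percent, rng):
    """Flip a given percentage of Q-matrix entries (0 <-> 1)."""
    Q_bad = Q.copy()
    n_flip = int(round(percent / 100 * Q.size))
    flat = rng.choice(Q.size, size=n_flip, replace=False)
    Q_bad.flat[flat] = 1 - Q_bad.flat[flat]
    return Q_bad

Q_10 = misspecify(Q, 10, rng)           # 10% misspecification condition
print("entries changed:", int((Q_10 != Q).sum()), "of", Q.size)
```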
Peer reviewed
Jensen, Nate; Rice, Andrew; Soland, James – Educational Evaluation and Policy Analysis, 2018
While most educators assume that not all students try their best on achievement tests, no current research examines if behaviors associated with low test effort, like rapidly guessing on test items, affect teacher value-added estimates. In this article, we examined the prevalence of rapid guessing to determine if this behavior varied by grade,…
Descriptors: Item Response Theory, Value Added Models, Achievement Tests, Test Items
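Rapid guesses are commonly flagged with response-time thresholds. A minimal sketch of one common convention, a normative threshold at 10% of each item's median response time, using invented response times (illustrative of the flagging idea, not the article's exact criterion):

```python
import numpy as np

rng = np.random.default_rng(9)
n_persons, n_items = 500, 30

# Invented response times: mostly solution behavior (log-normal)
# with a small share of rapid guesses (very short times).
rt = rng.lognormal(mean=3.0, sigma=0.5, size=(n_persons, n_items))
guess_mask = rng.random((n_persons, n_items)) < 0.05
rt[guess_mask] = rng.uniform(0.5, 2.0, size=guess_mask.sum())

# Normative-threshold flagging: flag any response faster than 10% of
# that item's median response time.
threshold = 0.10 * np.median(rt, axis=0)
flagged = rt < threshold[None, :]

print("flagged share of responses:", round(float(flagged.mean()), 3))
```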