Showing 1 to 15 of 45 results
Mohammed Alqabbaa – ProQuest LLC, 2021
Psychometricians at the Education and Training Evaluation Commission (ETEC) developed a new test scoring method, the latent D-scoring method (DSM-L), which is believed to be easier and more efficient to use than the Item Response Theory (IRT) method. However, there are no studies…
Descriptors: Item Response Theory, Scoring, Item Analysis, Equated Scores
Peer reviewed
Cai, Tianji; Xia, Yiwei; Zhou, Yisu – Sociological Methods & Research, 2021
Analysts of discrete data often face the challenge of inflation, in which certain values occur far more often than a standard distribution would predict. When treated improperly, this phenomenon may lead to biased estimates and incorrect inferences. This study extends the existing literature on single-value inflated models and develops a general framework to handle variables with more than…
Descriptors: Statistical Distributions, Probability, Statistical Analysis, Statistical Bias
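The single-value inflated idea that this work generalizes can be sketched with the zero-inflated Poisson as the base case. The data, parameter values, and function name below are illustrative assumptions, not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
pi_true, lam_true = 0.3, 2.0  # inflation probability at zero, Poisson rate
inflated = rng.random(n) < pi_true
y = np.where(inflated, 0, rng.poisson(lam_true, n))

def zip_loglik(y, pi, lam):
    """Log-likelihood of a zero-inflated Poisson.

    P(Y=0)   = pi + (1-pi) * exp(-lam)
    P(Y=k>0) = (1-pi) * exp(-lam) * lam**k / k!
    """
    maxy = y.max()
    # log(k!) for k = 0..maxy, via cumulative sums of logs
    logfact = np.concatenate(([0.0], np.cumsum(np.log(np.arange(1, maxy + 1)))))
    ll = np.where(
        y == 0,
        np.log(pi + (1 - pi) * np.exp(-lam)),
        np.log(1 - pi) - lam + y * np.log(lam) - logfact[y],
    )
    return ll.sum()
```

Treating the extra zeros "improperly" here means setting `pi = 0`; with inflated data the plain Poisson likelihood is visibly worse than the inflated one at the true parameters, which is the bias the abstract warns about.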
Peer reviewed (full text available on ERIC)
Kartal, Seval Kula – International Journal of Progressive Education, 2020
One of the aims of the current study is to specify the model providing the best fit to the data among the exploratory, the bifactor exploratory, and the confirmatory structural equation models. The study compares the three models based on the model data fit statistics and item parameter estimations (factor loadings, cross-loadings, factor…
Descriptors: Learning Motivation, Measures (Individuals), Undergraduate Students, Foreign Countries
Peer reviewed
Ranger, Jochen; Kuhn, Jörg-Tobias – Educational and Psychological Measurement, 2016
In this article, a new model for test response times is proposed that combines latent class analysis and the proportional hazards model with random effects in a similar vein as the mixture factor model. The model assumes the existence of different latent classes. In each latent class, the response times are distributed according to a…
Descriptors: Reaction Time, Models, Multivariate Analysis, Goodness of Fit
Peer reviewed
McGill, Ryan J. – Psychology in the Schools, 2017
The present study examined the factor structure of the Luria interpretive model for the Kaufman Assessment Battery for Children-Second Edition (KABC-II) with normative sample participants aged 7-18 (N = 2,025) using confirmatory factor analysis with maximum-likelihood estimation. For the eight subtest Luria configuration, an alternative…
Descriptors: Children, Intelligence Tests, Models, Factor Structure
Peer reviewed
Zeller, Florian; Krampen, Dorothea; Reiß, Siegbert; Schweizer, Karl – Educational and Psychological Measurement, 2017
The item-position effect describes how an item's position within a test, that is, the number of previously completed items, affects the response to that item. Previously, this effect was represented by constraints reflecting simple courses, for example, a linear increase. Because of the inflexibility of these representations, our aim was to examine…
Descriptors: Goodness of Fit, Simulation, Factor Analysis, Intelligence Tests
Potgieter, Cornelis; Kamata, Akihito; Kara, Yusuf – Grantee Submission, 2017
This study proposes a two-part model that includes components for reading accuracy and reading speed. The speed component is a log-normal factor model, for which speed data are measured by reading time for each sentence being assessed. The accuracy component is a binomial-count factor model, where the accuracy data are measured by the number of…
Descriptors: Reading Rate, Oral Reading, Accuracy, Models
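The two components described in this abstract can be illustrated with their simplest estimators: for log-normally distributed reading times, the maximum-likelihood estimates are the mean and standard deviation of the log times, and for binomial accuracy counts the MLE is the pooled proportion correct. The data below are simulated; the factor structure of the full model is omitted, so this is only a sketch of the two measurement pieces:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sentences = 400
mu, sigma = 1.2, 0.4      # log-scale parameters of reading time per sentence
times = rng.lognormal(mu, sigma, n_sentences)       # speed data
words = np.full(n_sentences, 10)                    # words per sentence
correct = rng.binomial(words, 0.9)                  # accuracy data

# Log-normal speed component: MLE = mean and std of log reading times
mu_hat = np.log(times).mean()
sigma_hat = np.log(times).std()

# Binomial accuracy component: MLE = pooled proportion of words read correctly
p_hat = correct.sum() / words.sum()
```

In the article's actual model, both components load on latent factors rather than sharing a single rate and proportion; the point here is only how "speed" and "accuracy" enter through different likelihoods.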
Koziol, Natalie A.; Bovaird, James A. – Educational and Psychological Measurement, 2018
Evaluations of measurement invariance provide essential construct validity evidence--a prerequisite for seeking meaning in psychological and educational research and ensuring fair testing procedures in high-stakes settings. However, the quality of such evidence is partly dependent on the validity of the resulting statistical conclusions. Type I or…
Descriptors: Computation, Tests, Error of Measurement, Comparative Analysis
Peer reviewed
Yang, Ji Seung; Zheng, Xiaying – Journal of Educational and Behavioral Statistics, 2018
The purpose of this article is to introduce and review the capability and performance of the Stata item response theory (IRT) package available as of Stata v.14 (2015). Using a simulated data set and a publicly available item response data set extracted from the Programme for International Student Assessment, we review the IRT package from…
Descriptors: Item Response Theory, Item Analysis, Computer Software, Statistical Analysis
Peer reviewed
Raykov, Tenko; Marcoulides, George A.; Tong, Bing – Educational and Psychological Measurement, 2016
A latent variable modeling procedure is discussed that can be used to test if two or more homogeneous multicomponent instruments with distinct components are measuring the same underlying construct. The method is widely applicable in scale construction and development research and can also be of special interest in construct validation studies.…
Descriptors: Models, Statistical Analysis, Measurement Techniques, Factor Analysis
Peer reviewed (full text available on ERIC)
Beaujean, A. Alexander; Morgan, Grant B. – Practical Assessment, Research & Evaluation, 2016
Education researchers often study count variables, such as the number of times a student reached a goal, discipline referrals, and absences. Most researchers who study these variables use typical regression methods (i.e., ordinary least squares) either with or without transforming the count variables. In either case, using typical regression for count data can…
Descriptors: Multiple Regression Analysis, Educational Research, Least Squares Statistics, Models
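The alternative to OLS that this kind of work recommends is a Poisson regression, which models the log of the expected count as a linear function of predictors. A minimal self-contained fit by iteratively reweighted least squares (the standard GLM algorithm), on simulated data with assumed coefficients:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])        # intercept + one predictor
beta_true = np.array([0.5, 0.3])
y = rng.poisson(np.exp(X @ beta_true))      # counts with log-linear mean

# Poisson GLM via iteratively reweighted least squares (Newton-Raphson):
# repeatedly solve a weighted least-squares problem with weights mu and
# working response eta + (y - mu) / mu.
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    z = X @ beta + (y - mu) / mu            # working response
    WX = X * mu[:, None]                    # Poisson weights: Var(Y) = mu
    beta = np.linalg.solve(X.T @ WX, WX.T @ z)
```

Unlike OLS on raw or log-transformed counts, this respects the mean-variance relationship of count data, which is the bias issue the abstract alludes to.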
Peer reviewed
Ranger, Jochen; Kuhn, Jörg-Tobias – Journal of Educational and Behavioral Statistics, 2015
In this article, a latent trait model is proposed for the response times in psychological tests. The latent trait model is based on the linear transformation model and subsumes popular models from survival analysis, like the proportional hazards model and the proportional odds model. Core of the model is the assumption that an unspecified monotone…
Descriptors: Psychological Testing, Reaction Time, Statistical Analysis, Models
Peer reviewed (full text available on ERIC)
Falk, Carl F.; Cai, Li – Grantee Submission, 2014
We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest…
Descriptors: Maximum Likelihood Statistics, Item Response Theory, Computation, Simulation
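The core trick in this line of work is to guarantee a nondecreasing item response function by forcing the polynomial's derivative to be nonnegative. The parameterization below (a squared linear term plus a nonnegative constant, integrated analytically) is a simplified illustration of that idea, not Falk and Cai's exact construction, and all parameter values are made up:

```python
import numpy as np

def monotone_irf(theta, xi=0.0, t1=0.8, t2=0.5, c=0.2):
    """Item response function with a monotonic cubic replacing a*theta + b.

    The derivative m'(theta) = (t1 + t2*theta)**2 + c is nonnegative by
    construction (for c >= 0), so the cubic m(theta), obtained by
    integrating it, and hence P(theta) are nondecreasing in theta.
    """
    m = (xi
         + t1**2 * theta
         + t1 * t2 * theta**2
         + (t2**2 / 3.0) * theta**3
         + c * theta)
    return 1.0 / (1.0 + np.exp(-m))  # logistic link

theta = np.linspace(-4, 4, 401)
p = monotone_irf(theta)
```

Setting `t2 = 0` recovers an ordinary logistic (linear-predictor) item, which mirrors the abstract's point that the standard model is included as the lowest-order case.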
Peer reviewed
Chang, Mei; Paulson, Sharon E.; Finch, W. Holmes; McIntosh, David E.; Rothlisberg, Barbara A. – Psychology in the Schools, 2014
This study examined the underlying constructs measured by the Woodcock-Johnson Tests of Cognitive Abilities, Third Edition (WJ-III COG) and the Stanford-Binet Intelligence Scales, Fifth Edition (SB5), based on the Cattell-Horn-Carroll (CHC) theory of cognitive abilities. This study reports the results of the first joint confirmatory factor analysis…
Descriptors: Factor Analysis, Intelligence Tests, Preschool Children, Maximum Likelihood Statistics
Peer reviewed
France, Stephen L.; Batchelder, William H. – Educational and Psychological Measurement, 2015
Cultural consensus theory (CCT) is a data aggregation technique with many applications in the social and behavioral sciences. We describe the intuition and theory behind a set of CCT models for continuous type data using maximum likelihood inference methodology. We describe how bias parameters can be incorporated into these models. We introduce…
Descriptors: Maximum Likelihood Statistics, Test Items, Difficulty Level, Test Theory