Showing 31 to 45 of 241 results
Peer reviewed
Direct link
Finch, Holmes; French, Brian F. – Applied Measurement in Education, 2019
The usefulness of item response theory (IRT) models depends, in large part, on the accuracy of item and person parameter estimates. For the standard three-parameter logistic (3PL) model, for example, these parameters include the item parameters of difficulty, discrimination, and pseudo-chance, as well as the person ability parameter. Several factors impact…
Descriptors: Item Response Theory, Accuracy, Test Items, Difficulty Level
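The 3PL model named in the abstract above has a standard closed form; a minimal sketch of its item characteristic curve (parameter values here are illustrative, not taken from the study):

```python
import math

def p_correct_3pl(theta, a, b, c):
    """Probability of a correct response under the 3PL model:
    P(theta) = c + (1 - c) / (1 + exp(-a * (theta - b))),
    where a = discrimination, b = difficulty, c = pseudo-chance."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# When ability equals item difficulty (theta == b), the probability
# sits halfway between the pseudo-chance floor c and 1.
print(p_correct_3pl(theta=0.0, a=1.2, b=0.0, c=0.2))  # ≈ 0.6
```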
Gongjun Xu; Zhuoran Shang – Grantee Submission, 2018
This article focuses on a family of restricted latent structure models with wide applications in psychological and educational assessment, where the model parameters are restricted via a latent structure matrix to reflect prespecified assumptions on the latent attributes. Such a latent matrix is often provided by experts and assumed to be correct…
Descriptors: Psychological Evaluation, Educational Assessment, Item Response Theory, Models
Peer reviewed
Direct link
Schweizer, Karl; Troche, Stefan – Educational and Psychological Measurement, 2018
In confirmatory factor analysis, quite similar measurement models serve to detect the difficulty factor and the factor due to the item-position effect. The item-position effect refers to the increasing dependency among responses to successively presented test items, whereas the difficulty factor is ascribed to the wide range of…
Descriptors: Investigations, Difficulty Level, Factor Analysis, Models
Liu, Haiyan; Zhang, Zhiyong – Grantee Submission, 2017
Misclassification occurs when the observed category differs from the underlying one; it is a form of measurement error in categorical data. Measurement error in continuous, especially normally distributed, data is well known and widely studied in the literature, but misclassification in a binary outcome variable has not yet drawn much attention…
Descriptors: Classification, Regression (Statistics), Statistical Bias, Models
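The bias the abstract above alludes to can be made concrete: if a true binary outcome is recorded with given false-positive and false-negative rates, the expected observed proportion shifts away from the true prevalence. A minimal sketch (rates here are hypothetical, not from the study):

```python
def observed_proportion(p_true, fp, fn):
    """Expected observed proportion of 1s when a true binary outcome
    with prevalence p_true is misclassified with false-positive rate
    fp (a 0 recorded as 1) and false-negative rate fn (a 1 recorded
    as 0): p_obs = p_true * (1 - fn) + (1 - p_true) * fp."""
    return p_true * (1.0 - fn) + (1.0 - p_true) * fp

# Even modest misclassification distorts the observed rate:
# a true prevalence of 0.30 with fp = fn = 0.10 appears as about 0.34.
print(observed_proportion(0.30, 0.10, 0.10))
```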
Peer reviewed
PDF on ERIC
Kartal, Seval Kula – International Journal of Progressive Education, 2020
One of the aims of the current study is to specify the model providing the best fit to the data among the exploratory, the bifactor exploratory and the confirmatory structural equation models. The study compares the three models based on the model data fit statistics and item parameter estimations (factor loadings, cross-loadings, factor…
Descriptors: Learning Motivation, Measures (Individuals), Undergraduate Students, Foreign Countries
Peer reviewed
PDF on ERIC
Mahmud, Jumailiyah – Educational Research and Reviews, 2017
With developments in computing technology, item response theory (IRT) has developed rapidly and become a user-friendly tool in the psychometrics world. The limitations of classical test theory are one factor encouraging the use of IRT. This study discusses the basic concepts of IRT and briefly reviews the ability…
Descriptors: Item Response Theory, Fundamental Concepts, Maximum Likelihood Statistics, Psychometrics
Liu, Haiyan; Jin, Ick Hoon; Zhang, Zhiyong – Grantee Submission, 2018
Psychologists are interested in whether friends and couples share similar personalities. However, no statistical models in the literature are readily available to test the association between personality and social relations. In this study, we develop a statistical model for analyzing social network data with the latent personality traits…
Descriptors: Structural Equation Models, Social Networks, Personality Traits, Statistical Analysis
Peer reviewed
Direct link
Ranger, Jochen; Kuhn, Jörg-Tobias – Educational and Psychological Measurement, 2016
In this article, a new model for test response times is proposed that combines latent class analysis and the proportional hazards model with random effects in a similar vein as the mixture factor model. The model assumes the existence of different latent classes. In each latent class, the response times are distributed according to a…
Descriptors: Reaction Time, Models, Multivariate Analysis, Goodness of Fit
Peer reviewed
Direct link
Jackson, Dan; Veroniki, Areti Angeliki; Law, Martin; Tricco, Andrea C.; Baker, Rose – Research Synthesis Methods, 2017
Network meta-analysis is used to simultaneously compare multiple treatments in a single analysis. However, network meta-analyses may exhibit inconsistency, where direct and different forms of indirect evidence are not in agreement with each other, even after allowing for between-study heterogeneity. Models for network meta-analysis with random…
Descriptors: Meta Analysis, Network Analysis, Comparative Analysis, Outcomes of Treatment
Peer reviewed
Direct link
McGill, Ryan J. – Psychology in the Schools, 2017
The present study examined the factor structure of the Luria interpretive model for the Kaufman Assessment Battery for Children-Second Edition (KABC-II) with normative sample participants aged 7-18 (N = 2,025) using confirmatory factor analysis with maximum-likelihood estimation. For the eight subtest Luria configuration, an alternative…
Descriptors: Children, Intelligence Tests, Models, Factor Structure
Peer reviewed
Direct link
Liu, Chen-Wei; Wang, Wen-Chung – Journal of Educational Measurement, 2017
The examinee-selected-item (ESI) design, in which examinees are required to respond to a fixed number of items in a given set of items (e.g., choose one item to respond from a pair of items), always yields incomplete data (i.e., only the selected items are answered and the others have missing data) that are likely nonignorable. Therefore, using…
Descriptors: Item Response Theory, Models, Maximum Likelihood Statistics, Data Analysis
Peer reviewed
Direct link
Zeller, Florian; Krampen, Dorothea; Reiß, Siegbert; Schweizer, Karl – Educational and Psychological Measurement, 2017
The item-position effect describes how an item's position within a test, that is, the number of previously completed items, affects the response to this item. Previously, this effect was represented by constraints reflecting simple trajectories, for example, a linear increase. Due to the inflexibility of these representations, our aim was to examine…
Descriptors: Goodness of Fit, Simulation, Factor Analysis, Intelligence Tests
Potgieter, Cornelis; Kamata, Akihito; Kara, Yusuf – Grantee Submission, 2017
This study proposes a two-part model that includes components for reading accuracy and reading speed. The speed component is a log-normal factor model, for which speed data are measured by reading time for each sentence being assessed. The accuracy component is a binomial-count factor model, where the accuracy data are measured by the number of…
Descriptors: Reading Rate, Oral Reading, Accuracy, Models
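The two-part structure described in the abstract above (log-normal reading times for speed, binomial word counts for accuracy) can be sketched as a simple data-generating simulation; the parameter values and function names here are illustrative assumptions, not the study's model or estimates:

```python
import random

def simulate_passage(n_sentences, n_words_per_sentence,
                     speed_mu, speed_sigma, p_word_correct, rng):
    """Simulate one reader's data for a two-part speed/accuracy idea:
    per-sentence reading times are log-normally distributed (the
    'speed' part), and the number of words read correctly in each
    sentence is binomial (the 'accuracy' part)."""
    times = [rng.lognormvariate(speed_mu, speed_sigma)
             for _ in range(n_sentences)]
    correct = [sum(rng.random() < p_word_correct
                   for _ in range(n_words_per_sentence))
               for _ in range(n_sentences)]
    return times, correct

rng = random.Random(42)  # fixed seed for reproducibility
times, correct = simulate_passage(5, 12, speed_mu=1.5, speed_sigma=0.4,
                                  p_word_correct=0.9, rng=rng)
print(times)    # five positive per-sentence reading times
print(correct)  # five correct-word counts between 0 and 12
```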
Oluwalana, Olasumbo O. – ProQuest LLC, 2019
A primary purpose of cognitive diagnosis models (CDMs) is to classify examinees based on their attribute patterns. The Q-matrix (Tatsuoka, 1985), a common component of all CDMs, specifies the relationship between the set of required dichotomous attributes and the test items. Since a Q-matrix is often developed by content-knowledge experts and can…
Descriptors: Classification, Validity, Test Items, International Assessment
Peer reviewed
Direct link
Wang, Jue; Engelhard, George, Jr.; Wolfe, Edward W. – Educational and Psychological Measurement, 2016
The number of performance assessments continues to increase around the world, and it is important to explore new methods for evaluating the quality of ratings obtained from raters. This study describes an unfolding model for examining rater accuracy. Accuracy is defined as the difference between observed and expert ratings. Dichotomous accuracy…
Descriptors: Evaluators, Accuracy, Performance Based Assessment, Models