Publication Date
| Date Range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 7 |
| Since 2017 (last 10 years) | 12 |
| Since 2007 (last 20 years) | 22 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Accuracy | 22 |
| Maximum Likelihood Statistics | 22 |
| Models | 22 |
| Item Response Theory | 16 |
| Computation | 10 |
| Monte Carlo Methods | 9 |
| Test Items | 9 |
| Simulation | 8 |
| Comparative Analysis | 7 |
| Error of Measurement | 7 |
| Reading Rate | 5 |
Author
| Author | Count |
| --- | --- |
| Amanda Goodwin | 4 |
| Matthew Naveiras | 4 |
| Sun-Joo Cho | 4 |
| Akihito Kamata | 2 |
| Cornelis Potgieter | 2 |
| Jorge Salas | 2 |
| Scott Monroe | 2 |
| Paul De Boeck | 2 |
| Xin Qiao | 2 |
| Yusuf Kara | 2 |
| Betsy Jane Becker | 1 |
Publication Type
| Publication Type | Count |
| --- | --- |
| Reports - Research | 19 |
| Journal Articles | 16 |
| Reports - Descriptive | 2 |
| Reports - Evaluative | 1 |
| Speeches/Meeting Papers | 1 |
Education Level
| Education Level | Count |
| --- | --- |
| Secondary Education | 4 |
| Junior High Schools | 3 |
| Middle Schools | 3 |
| Elementary Education | 2 |
| Elementary Secondary Education | 1 |
Assessments and Surveys
| Assessment | Count |
| --- | --- |
| Big Five Inventory | 1 |
| Program for International… | 1 |
| Trends in International… | 1 |
Cornelis Potgieter; Xin Qiao; Akihito Kamata; Yusuf Kara – Grantee Submission, 2024
As part of the effort to develop an improved oral reading fluency (ORF) assessment system, Kara et al. (2020) estimated the ORF scores based on a latent variable psychometric model of accuracy and speed for ORF data via a fully Bayesian approach. This study further investigates likelihood-based estimators for the model-derived ORF scores,…
Descriptors: Oral Reading, Reading Fluency, Scores, Psychometrics
Cornelis Potgieter; Xin Qiao; Akihito Kamata; Yusuf Kara – Journal of Educational Measurement, 2024
As part of the effort to develop an improved oral reading fluency (ORF) assessment system, Kara et al. estimated the ORF scores based on a latent variable psychometric model of accuracy and speed for ORF data via a fully Bayesian approach. This study further investigates likelihood-based estimators for the model-derived ORF scores, including…
Descriptors: Oral Reading, Reading Fluency, Scores, Psychometrics
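The two records above are the grantee submission and journal versions of the same study. As a rough orientation only, the following Python sketch (not the authors' code; every parameter value is invented) shows what a likelihood-based score estimate looks like under a model of this general shape, with a binomial-count accuracy part and a log-normal speed part for each sentence read aloud.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom, norm

# Hypothetical sentence-level parameters (invented for illustration).
a_acc   = np.array([1.0, 1.2, 0.8])    # accuracy discriminations
b_acc   = np.array([-0.5, 0.0, 0.4])   # accuracy difficulties (logit scale)
n_words = np.array([8, 10, 7])         # words per sentence
alpha   = np.array([1.0, 1.1, 0.9])    # speed factor loadings
beta    = np.array([2.0, 2.2, 1.9])    # time intensities (log-seconds)
sigma   = np.array([0.3, 0.3, 0.3])    # residual SDs of log reading times

def neg_loglik(latent, correct, log_times):
    """Joint negative log-likelihood for one reader: binomial counts of
    correctly read words (accuracy) plus log-normal reading times (speed)."""
    theta, tau = latent
    p = 1.0 / (1.0 + np.exp(-(a_acc * theta - b_acc)))  # P(word correct)
    ll_acc = binom.logpmf(correct, n_words, p).sum()
    mu = beta - alpha * tau                              # expected log time
    ll_spd = norm.logpdf(log_times, mu, sigma).sum()
    return -(ll_acc + ll_spd)

# One simulated reader: words correct and log reading time per sentence.
correct   = np.array([7, 9, 6])
log_times = np.array([1.8, 2.0, 1.7])

fit = minimize(neg_loglik, x0=[0.0, 0.0], args=(correct, log_times))
theta_hat, tau_hat = fit.x  # ML estimates of the latent accuracy/speed pair
```

Maximizing the joint likelihood in this way yields point estimates of the latent accuracy and speed, the kind of likelihood-based, model-derived ORF score the study compares with fully Bayesian estimates.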
Sun-Joo Cho; Amanda Goodwin; Matthew Naveiras; Paul De Boeck – Grantee Submission, 2024
Explanatory item response models (EIRMs) have been applied to investigate the effects of person covariates, item covariates, and their interactions in the fields of reading education and psycholinguistics. In practice, it is often assumed that the relationships between the covariates and the logit transformation of item response probability are…
Descriptors: Item Response Theory, Test Items, Models, Maximum Likelihood Statistics
Sun-Joo Cho; Amanda Goodwin; Matthew Naveiras; Paul De Boeck – Journal of Educational Measurement, 2024
Explanatory item response models (EIRMs) have been applied to investigate the effects of person covariates, item covariates, and their interactions in the fields of reading education and psycholinguistics. In practice, it is often assumed that the relationships between the covariates and the logit transformation of item response probability are…
Descriptors: Item Response Theory, Test Items, Models, Maximum Likelihood Statistics
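The two records above are again versions of one study. The linearity assumption the abstract refers to can be made concrete in a small simulation sketch (all names and values below are hypothetical): the logit of a correct response is taken to be linear in a person covariate, an item covariate, and their interaction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items = 200, 20

# Hypothetical covariates (invented for illustration).
x_person = rng.normal(size=n_persons)  # e.g., a vocabulary measure
w_item   = rng.normal(size=n_items)    # e.g., word frequency
theta    = rng.normal(size=n_persons)  # residual person ability

# Core EIRM assumption: the logit is LINEAR in the covariates.
g_person, g_item, g_inter = 0.5, -0.8, 0.2
logit = (theta[:, None]
         + g_person * x_person[:, None]
         + g_item * w_item[None, :]
         + g_inter * x_person[:, None] * w_item[None, :])

y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))  # simulated responses
```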
Sun-Joo Cho; Amanda Goodwin; Matthew Naveiras; Jorge Salas – Journal of Educational Measurement, 2024
Despite the growing interest in incorporating response time data into item response models, there has been a lack of research investigating how the effect of speed on the probability of a correct response varies across different groups (e.g., experimental conditions) for various items (i.e., differential response time item analysis). Furthermore,…
Descriptors: Item Response Theory, Reaction Time, Models, Accuracy
Sun-Joo Cho; Amanda Goodwin; Matthew Naveiras; Jorge Salas – Grantee Submission, 2024
Despite the growing interest in incorporating response time data into item response models, there has been a lack of research investigating how the effect of speed on the probability of a correct response varies across different groups (e.g., experimental conditions) for various items (i.e., differential response time item analysis). Furthermore,…
Descriptors: Item Response Theory, Reaction Time, Models, Accuracy
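Once more, two records of a single study. A hedged sketch of the pattern under investigation, with invented values: the effect of response speed on the logit of a correct answer is allowed to vary across items and across experimental groups.

```python
import numpy as np

rng = np.random.default_rng(1)
n_persons, n_items = 300, 10
group = rng.integers(0, 2, n_persons)          # 0 = control, 1 = treatment
theta = rng.normal(size=n_persons)
speed = rng.normal(size=(n_persons, n_items))  # standardized response speed

b = rng.normal(size=n_items)                   # item difficulties
# Speed effects that differ by item AND by group: the "differential
# response time" pattern the study examines (values invented).
beta_item  = rng.normal(0.3, 0.2, n_items)
beta_group = np.where(group[:, None] == 1, 0.4, 0.0)

logit = theta[:, None] - b[None, :] + (beta_item[None, :] + beta_group) * speed
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))
```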
Ippel, Lianne; Magis, David – Educational and Psychological Measurement, 2020
In the dichotomous item response theory (IRT) framework, the asymptotic standard error (ASE) is the most common statistic for evaluating the precision of various ability estimators. Easy-to-use ASE formulas are readily available; however, the accuracy of some of these formulas was recently questioned, and new ASE formulas were derived from a general…
Descriptors: Item Response Theory, Error of Measurement, Accuracy, Standards
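For the maximum likelihood ability estimator, the ASE discussed here is the reciprocal square root of the test information evaluated at the estimate. A minimal sketch for the 2PL model, with invented item parameters:

```python
import numpy as np

def ase_2pl(theta, a, b):
    """Asymptotic standard error of the ML ability estimate under the 2PL:
    ASE = 1 / sqrt(test information at theta)."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    info = np.sum(a**2 * p * (1.0 - p))  # Fisher test information
    return 1.0 / np.sqrt(info)

# Hypothetical 10-item test.
a = np.full(10, 1.2)
b = np.linspace(-2, 2, 10)
print(ase_2pl(0.0, a, b))  # smallest near the center of the difficulty range
```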
Chengyu Cui; Chun Wang; Gongjun Xu – Grantee Submission, 2024
Multidimensional item response theory (MIRT) models have generated increasing interest in the psychometrics literature. Efficient approaches for estimating MIRT models with dichotomous responses have been developed, but constructing an equally efficient and robust algorithm for polytomous models has received limited attention. To address this gap,…
Descriptors: Item Response Theory, Accuracy, Simulation, Psychometrics
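For readers new to polytomous IRT, one widely used model in this family is the graded response model. The sketch below (unidimensional rather than multidimensional, for brevity; parameters invented) shows how its category probabilities are built from cumulative logistic curves.

```python
import numpy as np

def grm_probs(theta, a, thresholds):
    """Graded response model category probabilities:
    P(Y >= k) = logistic(a * (theta - b_k)) for increasing thresholds b_k."""
    b = np.asarray(thresholds)
    p_ge = 1.0 / (1.0 + np.exp(-a * (theta - b)))  # P(Y>=1), ..., P(Y>=K)
    p_ge = np.concatenate(([1.0], p_ge, [0.0]))
    return p_ge[:-1] - p_ge[1:]                    # P(Y = k), k = 0..K

print(grm_probs(0.5, a=1.3, thresholds=[-1.0, 0.0, 1.2]))  # sums to 1
```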
Monroe, Scott – Journal of Educational and Behavioral Statistics, 2019
In item response theory (IRT) modeling, the Fisher information matrix is used for numerous inferential procedures such as estimating parameter standard errors, constructing test statistics, and facilitating test scoring. In principle, these procedures may be carried out using either the expected information or the observed information. However, in…
Descriptors: Item Response Theory, Error of Measurement, Scoring, Inferences
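The distinction can be seen on a single 3PL item, where the observed and expected information need not agree for a given response. A finite-difference sketch with invented parameters:

```python
import numpy as np

a, b, c = 1.2, 0.0, 0.2  # hypothetical 3PL item parameters
y = 1                    # one observed (correct) response

def p3pl(theta):
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

def loglik(theta):
    p = p3pl(theta)
    return y * np.log(p) + (1 - y) * np.log(1 - p)

def observed_info(theta, h=1e-4):
    """Observed information: minus the second derivative of the
    log-likelihood, approximated by central finite differences."""
    return -(loglik(theta + h) - 2 * loglik(theta) + loglik(theta - h)) / h**2

def expected_info(theta):
    """Expected (Fisher) information: (dP/dtheta)^2 / (P * (1 - P))."""
    p = p3pl(theta)
    dp = (1 - c) * a * np.exp(-a * (theta - b)) / (1 + np.exp(-a * (theta - b)))**2
    return dp**2 / (p * (1 - p))

print(observed_info(0.5), expected_info(0.5))  # generally unequal under 3PL
```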
Finch, Holmes; French, Brian F. – Applied Measurement in Education, 2019
The usefulness of item response theory (IRT) models depends, in large part, on the accuracy of item and person parameter estimates. For the standard 3 parameter logistic model, for example, these parameters include the item parameters of difficulty, discrimination, and pseudo-chance, as well as the person ability parameter. Several factors impact…
Descriptors: Item Response Theory, Accuracy, Test Items, Difficulty Level
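The three item parameters named in the abstract enter the standard 3PL response function directly; a one-function sketch with invented values:

```python
import numpy as np

def p_3pl(theta, a, b, c):
    """3PL probability of a correct response: a = discrimination,
    b = difficulty, c = pseudo-chance (lower asymptote)."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

# Even a very low-ability examinee stays above the guessing floor c.
print(p_3pl(theta=-3.0, a=1.5, b=0.0, c=0.2))  # close to 0.2
```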
Potgieter, Cornelis; Kamata, Akihito; Kara, Yusuf – Grantee Submission, 2017
This study proposes a two-part model that includes components for reading accuracy and reading speed. The speed component is a log-normal factor model, for which speed data are measured by reading time for each sentence being assessed. The accuracy component is a binomial-count factor model, where the accuracy data are measured by the number of…
Descriptors: Reading Rate, Oral Reading, Accuracy, Models
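Complementing the estimation sketch given after the 2024 follow-up records above, this is the data-generating side of such a two-part model, again as a hedged Python simulation with invented parameters: binomial counts of correctly read words driven by a latent accuracy factor, and log-normal sentence reading times driven by a latent speed factor.

```python
import numpy as np

rng = np.random.default_rng(2)
n_readers = 100
n_words = np.array([8, 12, 10, 9, 11])  # words per sentence (5 sentences)

# Correlated latent accuracy (theta) and speed (tau) factors.
latent = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], n_readers)
theta, tau = latent[:, 0], latent[:, 1]

# Accuracy part: binomial-count factor model (invented item parameters).
a = np.array([1.0, 1.2, 0.8, 1.1, 0.9])
b = np.array([-0.3, 0.0, 0.2, -0.1, 0.4])
p = 1.0 / (1.0 + np.exp(-(a[None, :] * theta[:, None] - b[None, :])))
words_correct = rng.binomial(n_words[None, :], p)

# Speed part: log-normal factor model for sentence reading times.
alpha = np.array([1.0, 0.9, 1.1, 1.0, 0.95])  # loadings
beta  = np.array([2.0, 2.3, 2.1, 2.0, 2.2])   # time intensities (log-seconds)
log_time = rng.normal(beta[None, :] - alpha[None, :] * tau[:, None], 0.3)
seconds = np.exp(log_time)
```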
Wang, Jue; Engelhard, George, Jr.; Wolfe, Edward W. – Educational and Psychological Measurement, 2016
The number of performance assessments continues to increase around the world, and it is important to explore new methods for evaluating the quality of ratings obtained from raters. This study describes an unfolding model for examining rater accuracy. Accuracy is defined as the difference between observed and expert ratings. Dichotomous accuracy…
Descriptors: Evaluators, Accuracy, Performance Based Assessment, Models
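The sketch below illustrates only the accuracy-scoring step described in the abstract (dichotomous agreement between observed and expert ratings), not the unfolding model itself; all ratings are invented.

```python
import numpy as np

# Hypothetical ratings: rows = raters, columns = essays.
observed = np.array([[3, 2, 4, 1],
                     [3, 3, 4, 2],
                     [2, 2, 3, 1]])
expert = np.array([3, 2, 4, 2])  # criterion ratings from an expert panel

# Dichotomous accuracy: 1 when the rater matches the expert rating.
accuracy = (observed == expert[None, :]).astype(int)
print(accuracy.mean(axis=1))  # per-rater proportion of exact agreement
```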
Yang, Ji Seung; Zheng, Xiaying – Journal of Educational and Behavioral Statistics, 2018
The purpose of this article is to introduce and review the capabilities and performance of the Stata item response theory (IRT) package available as of Stata v.14 (2015). Using a simulated data set and a publicly available item response data set extracted from the Programme for International Student Assessment, we review the IRT package from…
Descriptors: Item Response Theory, Item Analysis, Computer Software, Statistical Analysis
Monroe, Scott; Cai, Li – Educational and Psychological Measurement, 2014
In Ramsay curve item response theory (RC-IRT) modeling, the shape of the latent trait distribution is estimated simultaneously with the item parameters. In its original implementation, RC-IRT is estimated via Bock and Aitkin's EM algorithm, which yields maximum marginal likelihood estimates. This method, however, does not produce the…
Descriptors: Item Response Theory, Models, Computation, Mathematics
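The backbone of Bock and Aitkin's EM approach is the marginal likelihood, with ability integrated out by quadrature. Below is a minimal sketch for a 2PL with a standard-normal prior (the piece RC-IRT replaces with an estimated Ramsay-curve density); all parameters invented:

```python
import numpy as np

def marginal_loglik(y, a, b, n_quad=21):
    """Marginal log-likelihood of one 2PL response pattern, integrating
    theta out over a standard-normal prior via Gauss-Hermite quadrature."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_quad)
    weights = weights / weights.sum()  # raw weights sum to sqrt(2*pi)
    p = 1.0 / (1.0 + np.exp(-a[None, :] * (nodes[:, None] - b[None, :])))
    lik = np.prod(np.where(y[None, :] == 1, p, 1.0 - p), axis=1)
    return np.log(np.sum(weights * lik))

a = np.array([1.0, 1.5, 0.8])
b = np.array([-0.5, 0.0, 0.7])
print(marginal_loglik(np.array([1, 0, 1]), a, b))
```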
Koziol, Natalie A. – Applied Measurement in Education, 2016
Testlets, or groups of related items, are commonly included in educational assessments due to their many logistical and conceptual advantages. Despite their advantages, testlets introduce complications into the theory and practice of educational measurement. Responses to items within a testlet tend to be correlated even after controlling for…
Descriptors: Classification, Accuracy, Comparative Analysis, Models
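The residual within-testlet dependence described here is often represented by a person-by-testlet random effect; a simulation sketch with invented values shows how it induces correlation among items in the same testlet even after conditioning on ability.

```python
import numpy as np

rng = np.random.default_rng(3)
n_persons = 500
testlet_of_item = np.array([0, 0, 0, 1, 1, 1])  # two 3-item testlets
b = np.array([-0.5, 0.0, 0.5, -0.3, 0.2, 0.6])  # invented difficulties

theta = rng.normal(size=n_persons)
# Person-by-testlet random effects: the source of the residual
# dependence among items sharing a testlet.
gamma = rng.normal(0.0, 0.8, size=(n_persons, 2))

logit = theta[:, None] - b[None, :] + gamma[:, testlet_of_item]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))
```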
