Publication Date
| Range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 9 |
| Since 2017 (last 10 years) | 31 |
| Since 2007 (last 20 years) | 86 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Maximum Likelihood Statistics | 158 |
| Monte Carlo Methods | 158 |
| Computation | 52 |
| Sample Size | 43 |
| Estimation (Mathematics) | 41 |
| Bayesian Statistics | 40 |
| Comparative Analysis | 39 |
| Item Response Theory | 38 |
| Error of Measurement | 36 |
| Statistical Analysis | 30 |
| Simulation | 29 |
Author
| Author | Count |
| --- | --- |
| Kim, Seock-Ho | 5 |
| Bentler, Peter M. | 4 |
| Cai, Li | 4 |
| Cohen, Allan S. | 4 |
| DeSarbo, Wayne S. | 4 |
| Yuan, Ke-Hai | 4 |
| Finch, Holmes | 3 |
| Lee, Sik-Yum | 3 |
| Monroe, Scott | 3 |
| Savalei, Victoria | 3 |
| Stone, Clement A. | 3 |
Publication Type
| Type | Count |
| --- | --- |
| Journal Articles | 127 |
| Reports - Research | 90 |
| Reports - Evaluative | 57 |
| Speeches/Meeting Papers | 18 |
| Dissertations/Theses -… | 5 |
| Reports - Descriptive | 5 |
| Numerical/Quantitative Data | 2 |
| Information Analyses | 1 |
| Opinion Papers | 1 |
Education Level
| Level | Count |
| --- | --- |
| Higher Education | 3 |
| Postsecondary Education | 3 |
| Elementary Education | 2 |
| Junior High Schools | 2 |
| Early Childhood Education | 1 |
| Grade 1 | 1 |
| Grade 4 | 1 |
| Grade 5 | 1 |
| Intermediate Grades | 1 |
| Middle Schools | 1 |
| Primary Education | 1 |
Audience
| Audience | Count |
| --- | --- |
| Researchers | 2 |
Location
| Location | Count |
| --- | --- |
| Austria | 2 |
| South Korea | 2 |
| Armenia | 1 |
| Australia | 1 |
| Barbados | 1 |
| Belgium | 1 |
| Canada | 1 |
| China (Shanghai) | 1 |
| Cyprus | 1 |
| Czech Republic | 1 |
| Denmark | 1 |
Monroe, Scott – Journal of Educational and Behavioral Statistics, 2019
In item response theory (IRT) modeling, the Fisher information matrix is used for numerous inferential procedures such as estimating parameter standard errors, constructing test statistics, and facilitating test scoring. In principle, these procedures may be carried out using either the expected information or the observed information. However, in…
Descriptors: Item Response Theory, Error of Measurement, Scoring, Inferences
Finch, Holmes; French, Brian F. – Applied Measurement in Education, 2019
The usefulness of item response theory (IRT) models depends, in large part, on the accuracy of item and person parameter estimates. For the standard 3 parameter logistic model, for example, these parameters include the item parameters of difficulty, discrimination, and pseudo-chance, as well as the person ability parameter. Several factors impact…
Descriptors: Item Response Theory, Accuracy, Test Items, Difficulty Level
Rubio-Aparicio, María; López-López, José Antonio; Sánchez-Meca, Julio; Marín-Martínez, Fulgencio; Viechtbauer, Wolfgang; Van den Noortgate, Wim – Research Synthesis Methods, 2018
The random-effects model, applied in most meta-analyses nowadays, typically assumes normality of the distribution of the effect parameters. The purpose of this study was to examine the performance of various random-effects methods (standard method, Hartung's method, profile likelihood method, and bootstrapping) for computing an average effect size…
Descriptors: Effect Size, Meta Analysis, Intervals, Monte Carlo Methods
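The "standard method" compared in the abstract above is commonly the DerSimonian-Laird random-effects estimator. A minimal sketch of that computation (the function name and example numbers are illustrative, not from the study):

```python
import numpy as np

def dl_random_effects(effects, variances):
    """DerSimonian-Laird random-effects average effect size.

    effects:   per-study effect estimates
    variances: per-study sampling variances
    Returns (pooled effect, its standard error).
    """
    d = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                  # fixed-effect weights
    fixed = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - fixed) ** 2)             # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)      # between-study variance
    w_star = 1.0 / (v + tau2)                    # random-effects weights
    mu = np.sum(w_star * d) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return mu, se

# Illustrative usage with three hypothetical studies.
mu, se = dl_random_effects([0.2, 0.5, 0.3], [0.01, 0.02, 0.015])
```

When the estimated between-study variance is zero, the estimator reduces to the fixed-effect (inverse-variance) average, which is why homogeneous inputs are a useful sanity check.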
Bolin, Jocelyn H.; Finch, W. Holmes; Stenger, Rachel – Educational and Psychological Measurement, 2019
Multilevel data are a reality for many disciplines. Currently, although multiple options exist for the treatment of multilevel data, most disciplines strictly adhere to one method for multilevel data regardless of the specific research design circumstances. The purpose of this Monte Carlo simulation study is to compare several methods for the…
Descriptors: Hierarchical Linear Modeling, Computation, Statistical Analysis, Maximum Likelihood Statistics
Andersson, Björn; Xin, Tao – Educational and Psychological Measurement, 2018
In applications of item response theory (IRT), an estimate of the reliability of the ability estimates or sum scores is often reported. However, analytical expressions for the standard errors of the estimators of the reliability coefficients are not available in the literature and therefore the variability associated with the estimated reliability…
Descriptors: Item Response Theory, Test Reliability, Test Items, Scores
Zheng, Xiaying; Yang, Ji Seung – AERA Online Paper Repository, 2018
Measuring change in an educational or psychological construct over time is often achieved by repeatedly administering the same items to the same examinees. When the response data are categorical, an item response theory (IRT) model can be used as the measurement model of a second-order latent growth model (referred to as LGM-IRT) to measure…
Descriptors: Statistical Analysis, Item Response Theory, Computation, Longitudinal Studies
Lockwood, J. R.; Castellano, Katherine E.; Shear, Benjamin R. – Journal of Educational and Behavioral Statistics, 2018
This article proposes a flexible extension of the Fay-Herriot model for making inferences from coarsened, group-level achievement data, for example, school-level data consisting of numbers of students falling into various ordinal performance categories. The model builds on the heteroskedastic ordered probit (HETOP) framework advocated by Reardon,…
Descriptors: Bayesian Statistics, Mathematical Models, Statistical Inference, Computation
Carpenter, Bob; Gelman, Andrew; Hoffman, Matthew D.; Lee, Daniel; Goodrich, Ben; Betancourt, Michael; Brubaker, Marcus A.; Guo, Jiqiang; Li, Peter; Riddell, Allen – Grantee Submission, 2017
Stan is a probabilistic programming language for specifying statistical models. A Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants. As of version 2.14.0, Stan provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods such as the…
Descriptors: Programming Languages, Probability, Bayesian Statistics, Monte Carlo Methods
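Stan itself performs inference with gradient-based Markov chain Monte Carlo (HMC/NUTS), as the abstract notes. As a much simpler illustration of the general MCMC idea, here is a random-walk Metropolis sampler in Python; this is a pedagogical sketch, not Stan's algorithm:

```python
import numpy as np

def metropolis(log_prob, n_draws=5000, step=1.0, seed=0):
    """Random-walk Metropolis sampler over a 1-D target density.

    log_prob: unnormalized log density of the target
    Returns an array of correlated draws from the target.
    """
    rng = np.random.default_rng(seed)
    x = 0.0
    lp = log_prob(x)
    draws = np.empty(n_draws)
    for i in range(n_draws):
        proposal = x + step * rng.standard_normal()   # symmetric proposal
        lp_prop = log_prob(proposal)
        # Accept with probability min(1, p(proposal)/p(x)).
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = proposal, lp_prop
        draws[i] = x
    return draws

# Target: standard normal log-density (up to an additive constant).
draws = metropolis(lambda x: -0.5 * x * x)
```

Gradient-based samplers like NUTS replace the blind random walk with trajectories informed by the gradient of the log density, which is what makes them practical in high dimensions.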
Liu, Yang; Yang, Ji Seung – Journal of Educational and Behavioral Statistics, 2018
The uncertainty arising from item parameter estimation is often not negligible and must be accounted for when calculating latent variable (LV) scores in item response theory (IRT). It is particularly so when the calibration sample size is limited and/or the calibration IRT model is complex. In the current work, we treat two-stage IRT scoring as a…
Descriptors: Intervals, Scores, Item Response Theory, Bayesian Statistics
Potgieter, Cornelis; Kamata, Akihito; Kara, Yusuf – Grantee Submission, 2017
This study proposes a two-part model that includes components for reading accuracy and reading speed. The speed component is a log-normal factor model, for which speed data are measured by reading time for each sentence being assessed. The accuracy component is a binomial-count factor model, where the accuracy data are measured by the number of…
Descriptors: Reading Rate, Oral Reading, Accuracy, Models
Faucon, Louis; Kidzinski, Lukasz; Dillenbourg, Pierre – International Educational Data Mining Society, 2016
Large-scale experiments are often expensive and time consuming. Although Massive Online Open Courses (MOOCs) provide a solid and consistent framework for learning analytics, MOOC practitioners are still reluctant to risk resources in experiments. In this study, we suggest a methodology for simulating MOOC students, which allows estimation of…
Descriptors: Markov Processes, Monte Carlo Methods, Bayesian Statistics, Online Courses
Koziol, Natalie A.; Bovaird, James A. – Educational and Psychological Measurement, 2018
Evaluations of measurement invariance provide essential construct validity evidence--a prerequisite for seeking meaning in psychological and educational research and ensuring fair testing procedures in high-stakes settings. However, the quality of such evidence is partly dependent on the validity of the resulting statistical conclusions. Type I or…
Descriptors: Computation, Tests, Error of Measurement, Comparative Analysis
Lee, Taehun; Cai, Li; Kuhfeld, Megan – Grantee Submission, 2016
Posterior Predictive Model Checking (PPMC) is a Bayesian model checking method that compares the observed data to (plausible) future observations from the posterior predictive distribution. We propose an alternative to PPMC in the context of structural equation modeling, which we term the Poor Person's PPMC (PP-PPMC), for the situation wherein one…
Descriptors: Structural Equation Models, Bayesian Statistics, Prediction, Monte Carlo Methods
Lee, Soo; Suh, Youngsuk – Journal of Educational Measurement, 2018
Lord's Wald test for differential item functioning (DIF) has not been studied extensively in the context of the multidimensional item response theory (MIRT) framework. In this article, Lord's Wald test was implemented using two estimation approaches, marginal maximum likelihood estimation and Bayesian Markov chain Monte Carlo estimation, to detect…
Descriptors: Item Response Theory, Sample Size, Models, Error of Measurement
Green, Samuel B.; Thompson, Marilyn S.; Levy, Roy; Lo, Wen-Juo – Educational and Psychological Measurement, 2015
Traditional parallel analysis (T-PA) estimates the number of factors by sequentially comparing sample eigenvalues with eigenvalues for randomly generated data. Revised parallel analysis (R-PA) sequentially compares the "k"th eigenvalue for sample data to the "k"th eigenvalue for generated data sets, conditioned on "k"-…
Descriptors: Factor Analysis, Error of Measurement, Accuracy, Hypothesis Testing
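The traditional parallel analysis (T-PA) procedure described in the abstract above is straightforward to sketch: compare the eigenvalues of the sample correlation matrix against mean eigenvalues from random data of the same dimensions. The following is an illustrative implementation (the simulated two-factor data set is hypothetical):

```python
import numpy as np

def parallel_analysis(data, n_sims=100, seed=0):
    """Traditional parallel analysis (T-PA) sketch.

    Counts how many sample eigenvalues of the correlation matrix exceed
    the mean eigenvalues of random normal data of the same n x p shape.
    (The revised R-PA variant conditions on the first k-1 factors and
    is not implemented here.)
    """
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Sample eigenvalues, sorted descending.
    sample_eigs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand_eigs = np.zeros((n_sims, p))
    for i in range(n_sims):
        rand = rng.standard_normal((n, p))
        rand_eigs[i] = np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False))[::-1]
    threshold = rand_eigs.mean(axis=0)   # mean random eigenvalues
    return int(np.sum(sample_eigs > threshold))

# Hypothetical data: 6 observed variables driven by 2 latent factors.
rng = np.random.default_rng(1)
factors = rng.standard_normal((500, 2))
loadings = np.zeros((2, 6))
loadings[0, :3] = 0.8   # factor 1 loads on variables 1-3
loadings[1, 3:] = 0.8   # factor 2 loads on variables 4-6
x = factors @ loadings + 0.5 * rng.standard_normal((500, 6))
n_factors = parallel_analysis(x)
```

With strong loadings and a reasonable sample size, the count of retained factors should match the two generating factors.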

