Publication Date

| Date range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 0 |
| Since 2007 (last 20 years) | 2 |
Descriptor

| Descriptor | Count |
| --- | --- |
| Bayesian Statistics | 2 |
| Computation | 2 |
| Educational Research | 2 |
| Hierarchical Linear Modeling | 2 |
| Matrices | 2 |
| Maximum Likelihood Statistics | 2 |
| Regression (Statistics) | 2 |
| Statistical Analysis | 2 |
| Statistical Inference | 2 |
Author

| Author | Count |
| --- | --- |
| Chung, Yeojin | 2 |
| Dorie, Vincent | 2 |
| Gelman, Andrew | 2 |
| Liu, Jingchen | 2 |
| Rabe-Hesketh, Sophia | 2 |
Publication Type

| Publication Type | Count |
| --- | --- |
| Journal Articles | 2 |
| Reports - Research | 2 |
Chung, Yeojin; Gelman, Andrew; Rabe-Hesketh, Sophia; Liu, Jingchen; Dorie, Vincent – Journal of Educational and Behavioral Statistics, 2015
When fitting hierarchical regression models, maximum likelihood (ML) estimation has computational (and, for some users, philosophical) advantages compared to full Bayesian inference, but when the number of groups is small, estimates of the covariance matrix (Σ) of group-level varying coefficients are often degenerate. One can do better, even from…
Descriptors: Regression (Statistics), Hierarchical Linear Modeling, Bayesian Statistics, Statistical Inference
Chung, Yeojin; Gelman, Andrew; Rabe-Hesketh, Sophia; Liu, Jingchen; Dorie, Vincent – Grantee Submission, 2015
When fitting hierarchical regression models, maximum likelihood (ML) estimation has computational (and, for some users, philosophical) advantages compared to full Bayesian inference, but when the number of groups is small, estimates of the covariance matrix (Σ) of group-level varying coefficients are often degenerate. One can do better, even…
Descriptors: Regression (Statistics), Hierarchical Linear Modeling, Bayesian Statistics, Statistical Inference
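Both records describe the same problem: with only a handful of groups, the ML estimate of the group-level covariance can collapse to the boundary (a zero variance or singular Σ). Below is a minimal sketch of that degeneracy in Python using statsmodels' MixedLM on simulated data; the simulation settings (5 groups, small true group-level variance) are illustrative assumptions, not taken from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a random-intercept model with few groups and a small
# true group-level variance (illustrative values, not from the paper).
rng = np.random.default_rng(1)
n_groups, n_per_group = 5, 10
group = np.repeat(np.arange(n_groups), n_per_group)
group_effects = rng.normal(0.0, 0.2, n_groups)   # true group-level SD = 0.2
y = 1.0 + group_effects[group] + rng.normal(0.0, 1.0, group.size)
data = pd.DataFrame({"y": y, "group": group})

# Fit by maximum likelihood (reml=False). With so few groups, the
# estimated group-level variance often lands on the boundary (zero),
# which is the degeneracy the abstract refers to.
fit = smf.mixedlm("y ~ 1", data, groups=data["group"]).fit(reml=False)
print(fit.cov_re)  # estimated group-level (co)variance matrix
```

This sketch only reproduces the ML boundary problem described in the abstract; it is not the paper's proposed estimator, which replaces plain ML with a maximized posterior estimate under a weakly informative prior.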
