Publication Date
| Date Range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 1 |
| Since 2017 (last 10 years) | 6 |
| Since 2007 (last 20 years) | 15 |
Descriptor
| Descriptor | Records |
| --- | --- |
| Maximum Likelihood Statistics | 21 |
| Probability | 21 |
| Simulation | 21 |
| Computation | 11 |
| Item Response Theory | 7 |
| Models | 7 |
| Statistical Analysis | 5 |
| Bayesian Statistics | 4 |
| Comparative Analysis | 4 |
| Inferences | 4 |
| Monte Carlo Methods | 4 |
Publication Type
| Publication Type | Records |
| --- | --- |
| Journal Articles | 16 |
| Reports - Research | 11 |
| Reports - Evaluative | 5 |
| Reports - Descriptive | 4 |
| Speeches/Meeting Papers | 1 |
Education Level
| Education Level | Records |
| --- | --- |
| Higher Education | 1 |
| Postsecondary Education | 1 |
Audience
| Audience | Records |
| --- | --- |
| Researchers | 1 |
Location
| Location | Records |
| --- | --- |
| Australia | 1 |
Assessments and Surveys
| Assessment/Survey | Records |
| --- | --- |
| National Longitudinal Survey… | 1 |
Yongyun Shin; Stephen W. Raudenbush – Grantee Submission, 2023
We consider two-level models where a continuous response R and continuous covariates C are assumed missing at random. Inferences based on maximum likelihood or Bayes are routinely made by estimating their joint normal distribution from observed data R_obs and C_obs. However, if the model for R given C includes random…
Descriptors: Maximum Likelihood Statistics, Hierarchical Linear Modeling, Error of Measurement, Statistical Distributions
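The general idea of estimating a joint normal distribution by maximum likelihood from incompletely observed data can be sketched with a basic EM algorithm. The snippet below is only an illustration of that idea for a flat (single-level) multivariate normal, not the authors' two-level estimator, and the data and variable names are invented.

```python
# Minimal sketch (not the authors' two-level estimator): ML estimation of a
# joint normal distribution from data with values missing at random, via EM.
import numpy as np

def em_mvnorm(X, n_iter=100):
    """ML mean/covariance of a multivariate normal with NaNs treated as MAR."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    mu = np.nanmean(X, axis=0)
    sigma = np.diag(np.nanvar(X, axis=0))
    for _ in range(n_iter):
        X_hat = np.where(np.isnan(X), mu, X)       # completed data (E-step)
        bias = np.zeros((p, p))                    # accumulated conditional covariances
        for i in range(n):
            m = np.isnan(X[i])
            if not m.any():
                continue
            o = ~m
            S_oo_inv = np.linalg.inv(sigma[np.ix_(o, o)])
            reg = sigma[np.ix_(m, o)] @ S_oo_inv   # regression of missing on observed
            X_hat[i, m] = mu[m] + reg @ (X[i, o] - mu[o])
            bias[np.ix_(m, m)] += sigma[np.ix_(m, m)] - reg @ sigma[np.ix_(o, m)]
        mu = X_hat.mean(axis=0)                    # M-step
        centered = X_hat - mu
        sigma = (centered.T @ centered + bias) / n
    return mu, sigma

# small demonstration with simulated incomplete data
rng = np.random.default_rng(0)
data = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 2]], size=300)
data[rng.random(300) < 0.3, 1] = np.nan            # second column missing for ~30% of rows
print(em_mvnorm(data))
```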
Monroe, Scott – Journal of Educational and Behavioral Statistics, 2019
In item response theory (IRT) modeling, the Fisher information matrix is used for numerous inferential procedures such as estimating parameter standard errors, constructing test statistics, and facilitating test scoring. In principle, these procedures may be carried out using either the expected information or the observed information. However, in…
Descriptors: Item Response Theory, Error of Measurement, Scoring, Inferences
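As a small illustration of how the Fisher information is used for scoring and standard errors, the sketch below computes the expected test information for a two-parameter logistic (2PL) model at a given ability estimate; the item parameters are made up for the example.

```python
# Illustrative sketch: expected (Fisher) test information for a 2PL IRT model
# and the resulting standard error of an ability estimate. Item parameters
# below are invented for the example.
import numpy as np

a = np.array([1.2, 0.8, 1.5, 1.0])    # discriminations (assumed)
b = np.array([-0.5, 0.0, 0.3, 1.0])   # difficulties (assumed)

def expected_information(theta):
    """I(theta) = sum_j a_j^2 P_j(theta) Q_j(theta) for the 2PL model."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return np.sum(a**2 * p * (1.0 - p))

theta_hat = 0.4                        # e.g., a maximum likelihood score
info = expected_information(theta_hat)
print("test information:", info, "SE:", 1.0 / np.sqrt(info))
```

For scoring under the 2PL the observed and expected information happen to coincide (the second derivative of the log-likelihood in theta does not depend on the responses); the distinction the article examines arises in more general settings where that derivative does depend on the data.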
Li, Zhen; Cai, Li – Grantee Submission, 2017
In standard item response theory (IRT) applications, the latent variable is typically assumed to be normally distributed. If the normality assumption is violated, the item parameter estimates can become biased. Summed score likelihood based statistics may be useful for testing latent variable distribution fit. We develop Satorra-Bentler type…
Descriptors: Scores, Goodness of Fit, Statistical Distributions, Item Response Theory
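Summed score likelihood methods build on the model-implied distribution of the summed score; the sketch below computes that distribution for a small 2PL example with the Lord-Wingersky recursion. The item parameters and quadrature are illustrative, and the Satorra-Bentler-type adjustment developed in the paper is not reproduced.

```python
# Sketch of the Lord-Wingersky recursion: model-implied summed-score
# probabilities under a 2PL model, the building block for summed score
# likelihood based fit statistics. Item parameters are invented.
import numpy as np

a = np.array([1.0, 1.3, 0.7])
b = np.array([-0.3, 0.2, 0.8])
nodes, weights = np.polynomial.hermite_e.hermegauss(21)   # N(0,1) quadrature
weights = weights / weights.sum()

def summed_score_probs(theta):
    """P(summed score = s | theta) via the Lord-Wingersky recursion."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    probs = np.array([1.0])                     # score distribution over 0 items
    for pj in p:
        new = np.zeros(len(probs) + 1)
        new[:-1] += probs * (1.0 - pj)          # item answered incorrectly
        new[1:] += probs * pj                   # item answered correctly
        probs = new
    return probs

# marginal summed-score distribution, integrating theta over N(0, 1)
marginal = sum(w * summed_score_probs(t) for t, w in zip(nodes, weights))
print(marginal)                                  # probabilities for scores 0..3
```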
Kopanidis, Foula Zografina; Shaw, Michael John – Education & Training, 2017
Purpose: Educational institutions are caught between increasing their offer rates and attracting and retaining those prospective students who are most suited to course completion. The purpose of this paper is to demonstrate the influence of demographic and psychological constructs on students' preferences when choosing to study in a particular…
Descriptors: Student Attitudes, Course Selection (Students), Preferences, Models
Blackwell, Matthew; Honaker, James; King, Gary – Sociological Methods & Research, 2017
We extend a unified and easy-to-use approach to measurement error and missing data. In our companion article, Blackwell, Honaker, and King give an intuitive overview of the new technique, along with practical suggestions and empirical applications. Here, we offer more precise technical details, more sophisticated measurement error model…
Descriptors: Error of Measurement, Correlation, Simulation, Bayesian Statistics
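As a reminder of why measurement error is worth the machinery, the short simulation below (not the authors' estimator) shows how classical measurement error in a covariate attenuates an ordinary least squares slope, the kind of bias a unified measurement error and missing data treatment targets.

```python
# Illustrative simulation (not the authors' estimator): classical measurement
# error in a covariate attenuates the OLS slope.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                 # true covariate
y = 1.0 + 2.0 * x + rng.normal(size=n) # outcome with true slope 2.0
w = x + rng.normal(scale=1.0, size=n)  # observed covariate with noise (reliability 0.5)

def ols_slope(pred, resp):
    design = np.column_stack([np.ones_like(pred), pred])
    return np.linalg.lstsq(design, resp, rcond=None)[0][1]

print("slope using true x:", ols_slope(x, y))   # ~2.0
print("slope using noisy w:", ols_slope(w, y))  # ~1.0, attenuated by the reliability
```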
Vuolo, Mike – Sociological Methods & Research, 2017
Often in sociology, researchers are confronted with nonnormal variables whose joint distribution they wish to explore. Yet, assumptions of common measures of dependence can fail or estimating such dependence is computationally intensive. This article presents the copula method for modeling the joint distribution of two random variables, including…
Descriptors: Sociology, Research Methodology, Social Science Research, Models
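A minimal Gaussian copula example of the kind of modeling described here: estimate the copula correlation from normal scores of the ranks, then simulate a joint distribution with non-normal marginals. The Gaussian family is only one of the copulas the article covers, and the data are simulated for illustration.

```python
# Minimal Gaussian-copula sketch: fit a copula correlation from ranks and
# simulate a joint distribution with non-normal marginals. Simulated data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# two dependent, non-normal variables (illustrative data)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=2000)
x = stats.expon.ppf(stats.norm.cdf(z[:, 0]))           # exponential marginal
y = stats.poisson.ppf(stats.norm.cdf(z[:, 1]), mu=3)   # Poisson marginal

# fit: pseudo-observations -> normal scores -> correlation of the Gaussian copula
u = stats.rankdata(x) / (len(x) + 1)
v = stats.rankdata(y) / (len(y) + 1)
rho = np.corrcoef(stats.norm.ppf(u), stats.norm.ppf(v))[0, 1]

# simulate from the fitted copula with chosen marginals
sim = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=2000)
x_sim = stats.expon.ppf(stats.norm.cdf(sim[:, 0]))
y_sim = stats.poisson.ppf(stats.norm.cdf(sim[:, 1]), mu=3)
print("estimated copula correlation:", round(rho, 3))
```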
Petersen, Janne; Bandeen-Roche, Karen; Budtz-Jorgensen, Esben; Larsen, Klaus Groes – Psychometrika, 2012
Latent class regression models relate covariates and latent constructs such as psychiatric disorders. Though full maximum likelihood estimation is available, estimation is often in three steps: (i) a latent class model is fitted without covariates; (ii) latent class scores are predicted; and (iii) the scores are regressed on covariates. We propose…
Descriptors: Computation, Prediction, Regression (Statistics), Maximum Likelihood Statistics
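The three-step procedure the abstract describes can be sketched as follows, with a Gaussian mixture standing in for the latent class model and scikit-learn doing the fitting; the bias correction the authors propose is not implemented here.

```python
# Sketch of the classic three-step procedure: (i) fit a latent class model
# without covariates (a Gaussian mixture stands in here), (ii) predict class
# membership, (iii) regress the predicted membership on a covariate.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 1000
covariate = rng.normal(size=n)
true_class = rng.binomial(1, 1 / (1 + np.exp(-1.5 * covariate)))   # class depends on covariate
indicators = rng.normal(loc=np.outer(true_class, [2.0, 2.0, 2.0]), scale=1.0)

gm = GaussianMixture(n_components=2, random_state=0).fit(indicators)   # step 1
assigned = gm.predict(indicators)                                      # step 2 (modal assignment)
step3 = LogisticRegression().fit(covariate.reshape(-1, 1), assigned)   # step 3
# effect is attenuated by classification error (sign depends on arbitrary class labeling)
print("three-step covariate effect:", step3.coef_[0][0])
```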
Vasdekis, Vassilis G. S.; Cagnone, Silvia; Moustaki, Irini – Psychometrika, 2012
The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model considers time-dependent latent variables and item-specific random effects to account for the interdependencies of the multivariate…
Descriptors: Geometric Concepts, Computation, Probability, Longitudinal Studies
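The pairwise (composite) likelihood principle can be illustrated on a much simpler problem than the paper's ordinal latent variable model: estimating a common correlation of an exchangeable multivariate normal by maximizing the sum of bivariate marginal log-likelihoods.

```python
# Stripped-down pairwise (composite) likelihood illustration, far simpler than
# the model in the paper but built on the same estimation principle.
import numpy as np
from itertools import combinations
from scipy import stats, optimize

rng = np.random.default_rng(3)
d, rho_true = 5, 0.4
cov = np.full((d, d), rho_true) + (1 - rho_true) * np.eye(d)
X = rng.multivariate_normal(np.zeros(d), cov, size=500)

def neg_pairwise_loglik(rho):
    cov2 = np.array([[1.0, rho], [rho, 1.0]])
    total = 0.0
    for j, k in combinations(range(d), 2):      # sum over all bivariate margins
        total += stats.multivariate_normal.logpdf(X[:, [j, k]], cov=cov2).sum()
    return -total

res = optimize.minimize_scalar(neg_pairwise_loglik, bounds=(-0.24, 0.99), method="bounded")
print("pairwise-likelihood estimate of rho:", round(res.x, 3))
```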
Schuster, Christof; Yuan, Ke-Hai – Journal of Educational and Behavioral Statistics, 2011
Because of response disturbances such as guessing, cheating, or carelessness, item response models often can only approximate the "true" individual response probabilities. As a consequence, maximum-likelihood estimates of ability will be biased. Typically, the nature and extent to which response disturbances are present is unknown, and, therefore,…
Descriptors: Computation, Item Response Theory, Probability, Maximum Likelihood Statistics
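The sketch below shows plain maximum likelihood ability estimation under the 2PL together with a simulated guessing disturbance, illustrating the bias the abstract refers to; the robust estimation approach proposed in the article is not reproduced.

```python
# ML ability estimation under the 2PL, plus a simulated guessing disturbance
# to show the resulting bias. Item parameters and disturbance rate are invented.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)
a = np.full(40, 1.0)
b = rng.normal(size=40)
theta_true = -1.0

def prob(theta):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def mle_theta(y):
    nll = lambda t: -np.sum(y * np.log(prob(t)) + (1 - y) * np.log(1 - prob(t)))
    return minimize_scalar(nll, bounds=(-4, 4), method="bounded").x

clean = rng.binomial(1, prob(theta_true))
disturbed = clean.copy()
guessed = rng.random(40) < 0.15                  # 15% of items answered by guessing
disturbed[guessed] = rng.binomial(1, 0.5, guessed.sum())

print("MLE without disturbance:", round(mle_theta(clean), 2))
print("MLE with guessing:", round(mle_theta(disturbed), 2))   # typically biased upward for low theta
```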
Paek, Insu; Wilson, Mark – Educational and Psychological Measurement, 2011
This study elaborates the Rasch differential item functioning (DIF) model formulation under the marginal maximum likelihood estimation context. Also, the Rasch DIF model performance was examined and compared with the Mantel-Haenszel (MH) procedure in small sample and short test length conditions through simulations. The theoretically known…
Descriptors: Test Bias, Test Length, Statistical Inference, Geometric Concepts
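For comparison with the Rasch DIF formulation, the Mantel-Haenszel procedure reduces to a common odds ratio for the studied item across score strata; the sketch below computes it on simulated DIF-free data (not the simulation design used in the article).

```python
# Sketch of the Mantel-Haenszel DIF statistic: a common odds ratio for one
# studied item, stratifying examinees by their score on the remaining items.
import numpy as np

rng = np.random.default_rng(5)
n, n_items = 2000, 10
group = rng.binomial(1, 0.5, n)                       # 0 = reference, 1 = focal
theta = rng.normal(size=n)
b = np.linspace(-1.5, 1.5, n_items)
resp = rng.binomial(1, 1 / (1 + np.exp(-(theta[:, None] - b))))

item = resp[:, 0]                                      # studied item
stratum = resp[:, 1:].sum(axis=1)                      # rest score as matching variable

num, den = 0.0, 0.0
for s in np.unique(stratum):
    m = stratum == s
    A = np.sum(item[m] & (group[m] == 0))              # reference correct
    B = np.sum((1 - item[m]) & (group[m] == 0))        # reference incorrect
    C = np.sum(item[m] & (group[m] == 1))              # focal correct
    D = np.sum((1 - item[m]) & (group[m] == 1))        # focal incorrect
    T = A + B + C + D
    num += A * D / T
    den += B * C / T

print("MH common odds ratio:", round(num / den, 2))    # ~1.0 here, since no DIF was simulated
```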
Cai, Li – Psychometrika, 2010
A Metropolis-Hastings Robbins-Monro (MH-RM) algorithm for high-dimensional maximum marginal likelihood exploratory item factor analysis is proposed. The sequence of estimates from the MH-RM algorithm converges with probability one to the maximum likelihood solution. Details on the computer implementation of this algorithm are provided. The…
Descriptors: Quality of Life, Factor Structure, Factor Analysis, Computation
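A full MH-RM implementation is beyond a short sketch, but the Robbins-Monro half of the algorithm, stochastic approximation with a decreasing gain sequence, can be shown on a toy root-finding problem; the Metropolis-Hastings imputation step and the item factor model are not reproduced.

```python
# Toy Robbins-Monro stochastic approximation (the "RM" half of MH-RM): find the
# root of a regression function observed only with noise.
import numpy as np

rng = np.random.default_rng(6)

def noisy_gradient(x):
    """Noisy observation of g(x) = x - 2, whose root is x* = 2."""
    return (x - 2.0) + rng.normal(scale=1.0)

x = 0.0
for k in range(1, 5001):
    gain = 1.0 / k                       # gain sequence: sum = inf, sum of squares < inf
    x = x - gain * noisy_gradient(x)

print("Robbins-Monro estimate:", round(x, 3))   # close to 2.0
```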
Verkuilen, Jay; Smithson, Michael – Journal of Educational and Behavioral Statistics, 2012
Doubly bounded continuous data are common in the social and behavioral sciences. Examples include judged probabilities, confidence ratings, derived proportions such as percent time on task, and bounded scale scores. Dependent variables of this kind are often difficult to analyze using normal theory models because their distributions may be quite…
Descriptors: Responses, Regression (Statistics), Statistical Analysis, Models
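One standard strategy for doubly bounded responses is beta regression with a logit-linked mean; the sketch below fits such a model by maximum likelihood on simulated data. It is a generic illustration, not necessarily the exact mixed or mixture models discussed in the article.

```python
# Basic beta regression sketch: a response in (0, 1) is beta distributed with a
# logit-linked mean and constant precision, fitted by ML on simulated data.
import numpy as np
from scipy import stats, optimize, special

rng = np.random.default_rng(7)
n = 800
x = rng.normal(size=n)
mu_true = special.expit(0.5 + 1.0 * x)          # logit-linked mean
phi_true = 10.0                                  # precision
y = rng.beta(mu_true * phi_true, (1 - mu_true) * phi_true)

def negloglik(params):
    b0, b1, log_phi = params
    mu = special.expit(b0 + b1 * x)
    phi = np.exp(log_phi)
    return -np.sum(stats.beta.logpdf(y, mu * phi, (1 - mu) * phi))

fit = optimize.minimize(negloglik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
b0, b1, log_phi = fit.x
print("intercept, slope, precision:", round(b0, 2), round(b1, 2), round(np.exp(log_phi), 2))
```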
Finkelman, Matthew David – Applied Psychological Measurement, 2010
In sequential mastery testing (SMT), assessment via computer is used to classify examinees into one of two mutually exclusive categories. Unlike paper-and-pencil tests, SMT has the capability to use variable-length stopping rules. One approach to shortening variable-length tests is stochastic curtailment, which halts examination if the probability…
Descriptors: Mastery Tests, Computer Assisted Testing, Adaptive Testing, Test Length
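A curtailment rule can be illustrated with a fixed-length, number-correct mastery test: stop early once the probability of ultimately reaching the cutoff, projecting the remaining items as Bernoulli trials with an assumed success rate, is close to 0 or 1. The cutoff, test length, and threshold below are invented, and the article's SMT procedures are more sophisticated than this.

```python
# Simple curtailment sketch for a fixed-length mastery test with a
# number-correct cutoff. All constants are illustrative.
import numpy as np
from scipy import stats

TEST_LENGTH, CUTOFF = 30, 20          # pass if at least 20 of 30 correct
P_REMAINING = 0.6                     # assumed success rate on remaining items
GAMMA = 0.95                          # curtailment threshold

def passing_probability(n_answered, n_correct):
    remaining = TEST_LENGTH - n_answered
    needed = CUTOFF - n_correct
    if needed <= 0:
        return 1.0                    # already passed
    if needed > remaining:
        return 0.0                    # can no longer pass
    # P(at least `needed` successes in `remaining` Bernoulli(P_REMAINING) trials)
    return stats.binom.sf(needed - 1, remaining, P_REMAINING)

# example: after 18 items with 16 correct, how certain is the final decision?
p = passing_probability(n_answered=18, n_correct=16)
decide_now = p >= GAMMA or (1 - p) >= GAMMA
print("probability of passing:", round(p, 3), "stop early:", decide_now)
```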
Rose, Roderick A.; Fraser, Mark W. – Social Work Research, 2008
Missing data are nearly always a problem in research, and missing values represent a serious threat to the validity of inferences drawn from findings. Increasingly, social science researchers are turning to multiple imputation to handle missing data. Multiple imputation, in which missing values are replaced by values repeatedly drawn from…
Descriptors: Simulation, Research Methodology, Social Sciences, Probability
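The logic of multiple imputation (impute several times, analyze each completed data set, and pool with Rubin's rules) can be sketched with a simple bootstrap-plus-stochastic-regression imputer; this is a generic illustration rather than any particular MI software the article discusses.

```python
# Minimal multiple imputation sketch with Rubin's rules pooling. Missing y
# values are imputed M times from a regression on x, with bootstrap parameter
# draws so the imputations reflect estimation uncertainty.
import numpy as np

rng = np.random.default_rng(8)
n, M = 500, 20
x = rng.normal(size=n)
y = 1.0 + 0.8 * x + rng.normal(size=n)
miss = rng.random(n) < np.where(x > 0, 0.5, 0.2)       # MAR: more missing when x > 0
y_obs = np.where(miss, np.nan, y)
obs = ~miss

estimates, variances = [], []
for m in range(M):
    boot = rng.choice(np.where(obs)[0], size=obs.sum(), replace=True)   # bootstrap draw
    X_boot = np.column_stack([np.ones(len(boot)), x[boot]])
    beta = np.linalg.lstsq(X_boot, y_obs[boot], rcond=None)[0]
    resid_sd = np.std(y_obs[boot] - X_boot @ beta)
    y_imp = y_obs.copy()
    y_imp[miss] = beta[0] + beta[1] * x[miss] + rng.normal(scale=resid_sd, size=miss.sum())
    estimates.append(y_imp.mean())                     # analysis: mean of y
    variances.append(y_imp.var(ddof=1) / n)            # its sampling variance

q_bar = np.mean(estimates)                             # Rubin's rules pooling
within = np.mean(variances)
between = np.var(estimates, ddof=1)
total_var = within + (1 + 1 / M) * between
print("pooled mean of y:", round(q_bar, 3), "pooled SE:", round(np.sqrt(total_var), 3))
```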
Ryden, Jesper – International Journal of Mathematical Education in Science and Technology, 2008
Extreme-value statistics is often used to estimate so-called return values (actually related to quantiles) for environmental quantities like wind speed or wave height. A basic method for estimation is the method of block maxima, which consists of partitioning observations into blocks, where maxima from each block could be considered independent.…
Descriptors: Simulation, Probability, Computation, Nonparametric Statistics
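The block maxima method can be sketched directly: partition the observations into blocks, take each block's maximum, fit a generalized extreme value distribution, and read the return value off a high quantile. The data, block size, and return period below are invented.

```python
# Block-maxima sketch: yearly blocks of simulated daily observations, a GEV fit
# to the block maxima, and a 100-block return value as a high quantile.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
daily = rng.gumbel(loc=10.0, scale=2.0, size=365 * 50)   # 50 "years" of data
blocks = daily.reshape(50, 365)
maxima = blocks.max(axis=1)                               # one maximum per block

shape, loc, scale = stats.genextreme.fit(maxima)          # fit GEV to block maxima
return_period = 100
return_value = stats.genextreme.ppf(1 - 1 / return_period, shape, loc=loc, scale=scale)
print("estimated 100-year return value:", round(return_value, 2))
```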
