Showing 61 to 75 of 318 results
Peer reviewed
Direct link
Li, Ming; Harring, Jeffrey R. – Educational and Psychological Measurement, 2017
Researchers continue to be interested in efficient, accurate methods of estimating coefficients of covariates in mixture modeling. Including covariates related to the latent class analysis may not only improve the mixture model's ability to clearly differentiate between subjects but also make interpretation of latent group membership more…
Descriptors: Simulation, Comparative Analysis, Monte Carlo Methods, Guidelines
Peer reviewed
Direct link
Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung – Educational and Psychological Measurement, 2015
Research increasingly emphasizes understanding differential effects. This article focuses on understanding regression mixture models, relatively new statistical methods for assessing differential effects, by comparing their results with those from using an interaction term in linear regression. The research questions that each model answers, their…
Descriptors: Regression (Statistics), Models, Statistical Analysis, Comparative Analysis
Peer reviewed
Direct link
McCoach, D. Betsy; Rifenbark, Graham G.; Newton, Sarah D.; Li, Xiaoran; Kooken, Janice; Yomtov, Dani; Gambino, Anthony J.; Bellara, Aarti – Journal of Educational and Behavioral Statistics, 2018
This study compared five common multilevel software packages via Monte Carlo simulation: HLM 7, Mplus 7.4, R (lme4 v1.1-12), Stata 14.1, and SAS 9.4, to determine how the programs differ in estimation accuracy and speed, as well as convergence, when modeling multiple randomly varying slopes of different magnitudes. Simulated data…
Descriptors: Hierarchical Linear Modeling, Computer Software, Comparative Analysis, Monte Carlo Methods
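The abstract above describes simulating two-level data with randomly varying slopes. A minimal sketch of that kind of data-generating model is below; the function name and parameter values (gamma00, tau0, tau1, etc.) are illustrative, not taken from the article.

```python
import numpy as np

def simulate_two_level(n_groups=50, group_size=20, gamma00=1.0, gamma10=0.5,
                       tau0=0.3, tau1=0.2, sigma=1.0, seed=0):
    """Two-level data with a randomly varying slope:
    y_ij = (gamma00 + u0_j) + (gamma10 + u1_j) * x_ij + e_ij."""
    rng = np.random.default_rng(seed)
    g = np.repeat(np.arange(n_groups), group_size)      # group labels
    x = rng.standard_normal(n_groups * group_size)      # level-1 predictor
    u0 = rng.normal(0.0, tau0, n_groups)                # random intercepts
    u1 = rng.normal(0.0, tau1, n_groups)                # random slopes
    e = rng.normal(0.0, sigma, n_groups * group_size)   # level-1 residuals
    y = gamma00 + u0[g] + (gamma10 + u1[g]) * x + e
    return g, x, y

g, x, y = simulate_two_level()
pooled_slope = np.polyfit(x, y, 1)[0]  # OLS recovers the fixed slope, but its SE ignores clustering
```

The pooled OLS fit illustrates why the packages being compared matter: the point estimate of the fixed slope is roughly right, but only a multilevel model gives correct standard errors and estimates of the slope variance.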
Peer reviewed
Direct link
Zaidi, Nikki L.; Swoboda, Christopher M.; Kelcey, Benjamin M.; Manuel, R. Stephen – Advances in Health Sciences Education, 2017
The extant literature has largely ignored a potentially significant source of variance in multiple mini-interview (MMI) scores by "hiding" the variance attributable to the sample of attributes used on an evaluation form. This potential source of hidden variance can be defined as rating items, which typically comprise an MMI evaluation…
Descriptors: Interviews, Scores, Generalizability Theory, Monte Carlo Methods
Peer reviewed
Direct link
Lee, Wooyeol; Cho, Sun-Joo – Applied Measurement in Education, 2017
Utilizing a longitudinal item response model, this study investigated the effect of item parameter drift (IPD) on item parameters and person scores via a Monte Carlo study. Item parameter recovery was investigated for various IPD patterns in terms of bias and root mean-square error (RMSE), and percentage of time the 95% confidence interval covered…
Descriptors: Item Response Theory, Test Items, Bias, Computation
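The recovery criteria named in this abstract — bias, root mean-square error, and 95% confidence-interval coverage — are standard and easy to compute. The sketch below is generic; the simulated numbers are illustrative, not the article's.

```python
import numpy as np

def recovery_stats(estimates, true_value, ci_halfwidth):
    """Bias, RMSE, and 95% CI coverage of a parameter across replications."""
    errors = np.asarray(estimates, dtype=float) - true_value
    bias = errors.mean()
    rmse = np.sqrt(np.mean(errors ** 2))
    covered = np.abs(errors) <= ci_halfwidth  # CI centered on each estimate
    return bias, rmse, covered.mean()

# Illustration: 1,000 replications of an unbiased estimator with SD 0.2
# around a true parameter value of 0.5
rng = np.random.default_rng(1)
estimates = rng.normal(loc=0.5, scale=0.2, size=1000)
bias, rmse, coverage = recovery_stats(estimates, 0.5, ci_halfwidth=1.96 * 0.2)
```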
Peer reviewed
Direct link
Luo, Yong; Jiao, Hong – Educational and Psychological Measurement, 2018
Stan is a new Bayesian statistical software program that implements the powerful and efficient Hamiltonian Monte Carlo (HMC) algorithm. To date, no source has systematically provided Stan code for various item response theory (IRT) models. This article provides Stan code for three representative IRT models, including the…
Descriptors: Bayesian Statistics, Item Response Theory, Probability, Computer Software
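The article's Stan listings are not reproduced in this snippet. As a language-neutral illustration of the simplest IRT model such a tutorial could cover, here is a Rasch-model sketch in Python: the item response function and a grid-search maximum-likelihood ability estimate. The grid-search approach is purely illustrative and is not the article's estimation method.

```python
import numpy as np

def rasch_prob(theta, b):
    """P(correct) under the Rasch model: logistic(theta - b)."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def theta_mle(responses, b, grid=np.linspace(-4, 4, 801)):
    """Grid-search ML estimate of ability, item difficulties b assumed known."""
    p = rasch_prob(grid[:, None], b[None, :])          # (grid points, items)
    ll = (responses * np.log(p) + (1 - responses) * np.log(1 - p)).sum(axis=1)
    return grid[np.argmax(ll)]

# Example: three items with difficulties -1, 0, 1;
# the examinee answers the two easier ones correctly
b = np.array([-1.0, 0.0, 1.0])
theta_hat = theta_mle(np.array([1, 1, 0]), b)
```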
Koziol, Natalie A.; Bovaird, James A. – Educational and Psychological Measurement, 2018
Evaluations of measurement invariance provide essential construct validity evidence--a prerequisite for seeking meaning in psychological and educational research and ensuring fair testing procedures in high-stakes settings. However, the quality of such evidence is partly dependent on the validity of the resulting statistical conclusions. Type I or…
Descriptors: Computation, Tests, Error of Measurement, Comparative Analysis
Peer reviewed
Direct link
Huang, Francis L. – Educational and Psychological Measurement, 2018
Cluster randomized trials involving participants nested within intact treatment and control groups are commonly performed in various educational, psychological, and biomedical studies. However, recruiting and retaining intact groups present various practical, financial, and logistical challenges to evaluators, and often cluster randomized trials…
Descriptors: Multivariate Analysis, Sampling, Statistical Inference, Data Analysis
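The abstract does not state Huang's specific analytic approach, but the standard design-effect arithmetic for clustered samples is the usual reference point for why clustering matters: DEFF = 1 + (m − 1) · ICC, where m is the cluster size. The numbers below are illustrative.

```python
def design_effect(cluster_size, icc):
    """Variance inflation from clustering: DEFF = 1 + (m - 1) * ICC."""
    return 1.0 + (cluster_size - 1) * icc

def effective_sample_size(n_total, cluster_size, icc):
    """Sample size after dividing out the design effect."""
    return n_total / design_effect(cluster_size, icc)

# 30 clusters of 20 participants with ICC = .10:
# 600 nominal observations carry the information of roughly 207 independent ones
deff = design_effect(20, 0.10)   # 2.9
n_eff = effective_sample_size(600, 20, 0.10)
```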
Peer reviewed
Direct link
Yang, Jie; Lee, Jonathan – Autism: The International Journal of Research and Practice, 2018
Previous studies have found that individuals with autism spectrum disorders show impairments in mentalizing processes and aberrant brain activity compared with typically developing participants. However, the findings are mainly from male participants and the aberrant effects in autism spectrum disorder females and sex differences are still…
Descriptors: Autism, Pervasive Developmental Disorders, Gender Differences, Brain Hemisphere Functions
Peer reviewed
Direct link
Del Giudice, Marco – Developmental Psychology, 2016
According to models of differential susceptibility, the same neurobiological and temperamental traits that determine increased sensitivity to stress and adversity also confer enhanced responsivity to the positive aspects of the environment. Differential susceptibility models have expanded to include complex developmental processes in which genetic…
Descriptors: Twins, Environmental Influences, Individual Development, Models
Peer reviewed
Direct link
Leroux, Audrey J.; Dodd, Barbara G. – Journal of Experimental Education, 2016
The current study compares the progressive-restricted standard error (PR-SE) exposure control method with the Sympson-Hetter, randomesque, and no exposure control (maximum information) procedures using the generalized partial credit model with fixed- and variable-length CATs and two item pools. The PR-SE method administered the entire item pool…
Descriptors: Computer Assisted Testing, Adaptive Testing, Comparative Analysis, Error of Measurement
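The PR-SE procedure itself is not specified in this snippet, but one of the comparison conditions, randomesque selection, is simple to sketch: instead of always administering the single most informative item, choose at random among the k most informative ones. Function names and k are illustrative.

```python
import numpy as np

def randomesque_select(item_info, administered, k=5, rng=None):
    """Randomesque exposure control: pick at random among the k most
    informative items not yet administered."""
    rng = rng or np.random.default_rng()
    avail = np.array([i for i in range(len(item_info)) if i not in administered])
    top_k = avail[np.argsort(item_info[avail])[::-1][:k]]  # k best remaining items
    return int(rng.choice(top_k))

# Pool of 10 items whose information happens to equal their index;
# item 9 was already given, so the choice falls among items 6, 7, 8
info = np.arange(10, dtype=float)
choice = randomesque_select(info, administered=[9], k=3,
                            rng=np.random.default_rng(0))
```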
Peer reviewed
Direct link
Kohli, Nidhi; Koran, Jennifer; Henn, Lisa – Educational and Psychological Measurement, 2015
There are well-defined theoretical differences between the classical test theory (CTT) and item response theory (IRT) frameworks. It is understood that in the CTT framework, person and item statistics are test- and sample-dependent. This is not the case in the IRT framework. For this reason, the IRT framework is considered to be theoretically superior…
Descriptors: Test Theory, Item Response Theory, Factor Analysis, Models
Peer reviewed
PDF on ERIC Download full text
Martin-Fernandez, Manuel; Revuelta, Javier – Psicologica: International Journal of Methodology and Experimental Psychology, 2017
This study compares the performance of two recently introduced estimation algorithms, the Metropolis-Hastings Robbins-Monro (MHRM) and the Hamiltonian MCMC (HMC), with two algorithms consolidated in the psychometric literature, marginal maximum likelihood via the EM algorithm (MML-EM) and Markov chain Monte Carlo (MCMC), in the estimation of multidimensional…
Descriptors: Bayesian Statistics, Item Response Theory, Models, Comparative Analysis
Peer reviewed
PDF on ERIC Download full text
Çokluk, Ömay; Koçak, Duygu – Educational Sciences: Theory and Practice, 2016
In this study, the number of factors obtained from parallel analysis, a method used for determining the number of factors in exploratory factor analysis, was compared to that of the factors obtained from eigenvalue and scree plot--two traditional methods for determining the number of factors--in terms of consistency. Parallel analysis is based on…
Descriptors: Factor Analysis, Comparative Analysis, Elementary School Teachers, Trust (Psychology)
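Horn's parallel analysis, the method under study in this entry, can be sketched briefly: retain a factor only while the observed eigenvalue exceeds the mean eigenvalue of same-sized random data. This is a generic implementation, not the authors' code, and the example data are simulated.

```python
import numpy as np

def parallel_analysis(data, n_iter=100, seed=0):
    """Number of factors whose observed correlation-matrix eigenvalue exceeds
    the mean eigenvalue of equally sized random-normal data."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand_eig = np.zeros(p)
    for _ in range(n_iter):
        r = rng.standard_normal((n, p))
        rand_eig += np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    rand_eig /= n_iter
    return int(np.sum(obs_eig > rand_eig))

# Example: six indicators driven by one common factor
rng = np.random.default_rng(1)
f = rng.standard_normal((300, 1))
x = f @ np.full((1, 6), 0.8) + 0.4 * rng.standard_normal((300, 6))
n_factors = parallel_analysis(x)
```

Unlike the eigenvalue-greater-than-one rule or visual inspection of a scree plot, this criterion adjusts for the eigenvalue inflation expected from sampling error alone.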
Peer reviewed
Direct link
Cribb, Serena J.; Olaithe, Michelle; Di Lorenzo, Renata; Dunlop, Patrick D.; Maybery, Murray T. – Journal of Autism and Developmental Disorders, 2016
People with autism show superior performance to controls on the Embedded Figures Test (EFT). However, studies examining the relationship between autistic-like traits and EFT performance in neurotypical individuals have yielded inconsistent findings. To examine the inconsistency, a meta-analysis was conducted of studies that (a) compared high and…
Descriptors: Autism, Pervasive Developmental Disorders, Meta Analysis, Symptoms (Individual Disorders)