| Publication Date | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 15 |
| Since 2022 (last 5 years) | 170 |
| Since 2017 (last 10 years) | 410 |
| Since 2007 (last 20 years) | 1010 |
| Author | Records |
| --- | --- |
| Kromrey, Jeffrey D. | 21 |
| Fan, Xitao | 18 |
| Barcikowski, Robert S. | 16 |
| DeSarbo, Wayne S. | 14 |
| Donoghue, John R. | 12 |
| Ferron, John M. | 12 |
| Finch, W. Holmes | 12 |
| Zhang, Zhiyong | 11 |
| Cohen, Allan S. | 10 |
| Finch, Holmes | 10 |
| Kim, Seock-Ho | 10 |
| Audience | Records |
| --- | --- |
| Researchers | 49 |
| Practitioners | 22 |
| Teachers | 20 |
| Students | 4 |
| Administrators | 2 |
| Location | Records |
| --- | --- |
| Germany | 10 |
| Australia | 7 |
| United Kingdom | 7 |
| Canada | 6 |
| Netherlands | 6 |
| United States | 6 |
| Belgium | 5 |
| California | 5 |
| Hong Kong | 5 |
| South Korea | 5 |
| Spain | 5 |
| Laws, Policies, & Programs | Records |
| --- | --- |
| No Child Left Behind Act 2001 | 4 |
| Pell Grant Program | 2 |
| Aid to Families with… | 1 |
| American Recovery and… | 1 |
| What Works Clearinghouse Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 1 |
| Meets WWC Standards with or without Reservations | 1 |
| Does not meet standards | 1 |
Leroux, Audrey J.; Dodd, Barbara G. – Journal of Experimental Education, 2016
The current study compares the progressive-restricted standard error (PR-SE) exposure control method with the Sympson-Hetter, randomesque, and no exposure control (maximum information) procedures using the generalized partial credit model with fixed- and variable-length CATs and two item pools. The PR-SE method administered the entire item pool…
Descriptors: Computer Assisted Testing, Adaptive Testing, Comparative Analysis, Error of Measurement
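As context for the exposure-control methods compared above, the sketch below shows the shared administration step in a simplified form: rank items by Fisher information at the current ability estimate and let a Sympson-Hetter-style exposure parameter decide whether the top-ranked item is actually given. The 2PL item pool and exposure values are hypothetical; the article's GPCM-based PR-SE procedure is not reproduced here.

```python
# Minimal sketch of maximum-information item selection with a Sympson-Hetter
# style exposure check. The 2PL pool and exposure parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
a = rng.uniform(0.8, 2.0, size=20)       # discriminations of a hypothetical pool
b = rng.uniform(-2.0, 2.0, size=20)      # difficulties
k = rng.uniform(0.3, 1.0, size=20)       # Sympson-Hetter exposure parameters

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1.0 - p)

def select_item(theta, administered):
    order = np.argsort(-info_2pl(theta, a, b))        # most informative first
    candidates = [j for j in order if j not in administered]
    for j in candidates:
        if rng.random() < k[j]:                       # exposure control check
            return j
    return candidates[0]                              # fallback: best remaining

print("item selected at theta = 0:", select_item(0.0, administered=set()))
```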
Lee, Taehun; Cai, Li; Kuhfeld, Megan – Grantee Submission, 2016
Posterior Predictive Model Checking (PPMC) is a Bayesian model checking method that compares the observed data to (plausible) future observations from the posterior predictive distribution. We propose an alternative to PPMC in the context of structural equation modeling, which we term the Poor Person's PPMC (PP-PPMC), for the situation wherein one…
Descriptors: Structural Equation Models, Bayesian Statistics, Prediction, Monte Carlo Methods
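For readers unfamiliar with the technique named above, the following is a minimal sketch of a posterior predictive check for a simple normal model; the data, conjugate posterior, and skewness discrepancy are illustrative assumptions, not the SEM setting of the article.

```python
# Minimal sketch of a posterior predictive model check (PPMC) for a normal
# model; data, priors, and discrepancy are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=1.0, scale=2.0, size=100)          # "observed" data
n = y.size

# Posterior draws for (mu, sigma^2) under a noninformative prior:
# sigma^2 ~ scaled inverse-chi-square, mu | sigma^2 ~ normal.
draws = 2000
sigma2 = (n - 1) * y.var(ddof=1) / rng.chisquare(n - 1, size=draws)
mu = rng.normal(y.mean(), np.sqrt(sigma2 / n))

def skew(x):
    """Discrepancy measure: sample skewness, which a normal model should reproduce."""
    return np.mean((x - x.mean()) ** 3) / x.std() ** 3

t_obs = skew(y)
t_rep = np.empty(draws)
for d in range(draws):
    y_rep = rng.normal(mu[d], np.sqrt(sigma2[d]), size=n)   # replicated data
    t_rep[d] = skew(y_rep)

ppp = np.mean(t_rep >= t_obs)          # posterior predictive p-value
print(f"posterior predictive p-value for skewness: {ppp:.3f}")
```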
Sanou Gozalo, Eduard; Hernández-Fernández, Antoni; Arias, Marta; Ferrer-i-Cancho, Ramon – Journal of Technology and Science Education, 2017
In a computer science degree course, the programming project has changed from individual work to teamwork, preferably in pairs (pair programming). Students have full freedom to form their own teams, with minimal intervention from teachers. An analysis of the working groups that formed indicates that students do not tend to associate with students with a…
Descriptors: Group Activities, Group Dynamics, Computer Science, Programming
Guyon, Hervé; Tensaout, Mouloud – Measurement: Interdisciplinary Research and Perspectives, 2015
This article is a commentary on the Focus Article, "Interpretational Confounding or Confounded Interpretations of Causal Indicators?", and on a commentary published in issue 12(4), 2014, of "Measurement: Interdisciplinary Research & Perspectives". The authors challenge two claims: (a) Bainter and Bollen argue that the…
Descriptors: Causal Models, Measurement, Data Interpretation, Structural Equation Models
Kohli, Nidhi; Koran, Jennifer; Henn, Lisa – Educational and Psychological Measurement, 2015
There are well-defined theoretical differences between the classical test theory (CTT) and item response theory (IRT) frameworks. It is understood that in the CTT framework, person and item statistics are test- and sample-dependent. This is not the perception with IRT. For this reason, the IRT framework is considered to be theoretically superior…
Descriptors: Test Theory, Item Response Theory, Factor Analysis, Models
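As a concrete illustration of the sample dependence noted above, the short sketch below simulates responses to one 2PL item in a lower- and a higher-ability group and shows the classical difficulty (proportion correct) shifting with the group; all parameter values are arbitrary.

```python
# Minimal sketch of the sample dependence of a CTT item statistic: the same
# item's difficulty (proportion correct) shifts with group ability. Responses
# are simulated under a 2PL IRT model with arbitrary parameters.
import numpy as np

rng = np.random.default_rng(0)
a, b = 1.0, 0.0                                    # 2PL discrimination, difficulty

def simulate_responses(theta, a, b):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))     # 2PL success probabilities
    return rng.random(theta.size) < p

low_group  = rng.normal(-1.0, 1.0, size=2000)      # lower-ability sample
high_group = rng.normal(+1.0, 1.0, size=2000)      # higher-ability sample

print("CTT p-value, low-ability sample: ", simulate_responses(low_group, a, b).mean())
print("CTT p-value, high-ability sample:", simulate_responses(high_group, a, b).mean())
```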
Assessment of Differential Item Functioning under Cognitive Diagnosis Models: The DINA Model Example
Li, Xiaomin; Wang, Wen-Chung – Journal of Educational Measurement, 2015
The assessment of differential item functioning (DIF) is routinely conducted to ensure test fairness and validity. Although many DIF assessment methods have been developed in the context of classical test theory and item response theory, they are not applicable for cognitive diagnosis models (CDMs), as the underlying latent attributes of CDMs are…
Descriptors: Test Bias, Models, Cognitive Measurement, Evaluation Methods
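Since the entry above concerns the DINA model, a minimal sketch of its item response function may help: an examinee who has mastered all attributes an item requires (per the Q-matrix) succeeds with probability 1 − s_j, and otherwise with the guessing probability g_j. The Q-matrix, slip, and guessing values below are made up and not taken from the article.

```python
# Minimal sketch of the DINA item response function with a made-up Q-matrix
# and made-up slip/guessing parameters.
import numpy as np

Q = np.array([[1, 0],        # item 1 requires attribute 1
              [0, 1],        # item 2 requires attribute 2
              [1, 1]])       # item 3 requires both attributes
slip = np.array([0.10, 0.15, 0.20])     # s_j: probability that a master slips
guess = np.array([0.20, 0.25, 0.10])    # g_j: probability that a non-master guesses

def dina_prob(alpha, Q, slip, guess):
    """P(X_j = 1 | alpha): eta_j = prod_k alpha_k^{q_jk} (all required attributes mastered)."""
    eta = np.all(alpha >= Q, axis=1).astype(float)   # equivalent to the product for binary alpha, Q
    return guess ** (1 - eta) * (1 - slip) ** eta

# Examinee mastering only attribute 1: succeeds (up to slips) only on item 1.
print(dina_prob(np.array([1, 0]), Q, slip, guess))   # -> [0.90, 0.25, 0.10]
```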
Çokluk, Ömay; Koçak, Duygu – Educational Sciences: Theory and Practice, 2016
In this study, the number of factors indicated by parallel analysis, a method used for determining the number of factors in exploratory factor analysis, was compared for consistency with the numbers indicated by two traditional methods, the eigenvalue criterion and the scree plot. Parallel analysis is based on…
Descriptors: Factor Analysis, Comparative Analysis, Elementary School Teachers, Trust (Psychology)
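Where the abstract above breaks off, the core of traditional parallel analysis can be sketched as follows: compare the eigenvalues of the sample correlation matrix with those of correlation matrices computed from random, uncorrelated data of the same size. The simulated two-factor dataset and the 95th-percentile retention rule below are illustrative choices, not the study's design.

```python
# Minimal sketch of Horn's parallel analysis on an arbitrary simulated dataset.
import numpy as np

rng = np.random.default_rng(0)
n, p = 300, 8
# Toy data with two correlated blocks, so roughly two factors are expected.
f = rng.normal(size=(n, 2))
loadings = np.zeros((2, p))
loadings[0, :4] = 0.7
loadings[1, 4:] = 0.7
X = f @ loadings + rng.normal(scale=0.7, size=(n, p))

sample_eigs = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]

reps = 500
random_eigs = np.empty((reps, p))
for r in range(reps):
    Z = rng.normal(size=(n, p))                       # uncorrelated noise data
    random_eigs[r] = np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False))[::-1]

threshold = np.percentile(random_eigs, 95, axis=0)    # 95th-percentile rule

# Retain leading factors whose sample eigenvalue exceeds the random threshold.
n_factors = 0
for s, t in zip(sample_eigs, threshold):
    if s > t:
        n_factors += 1
    else:
        break
print(f"parallel analysis retains {n_factors} factors")
```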
Cribb, Serena J.; Olaithe, Michelle; Di Lorenzo, Renata; Dunlop, Patrick D.; Maybery, Murray T. – Journal of Autism and Developmental Disorders, 2016
People with autism show superior performance to controls on the Embedded Figures Test (EFT). However, studies examining the relationship between autistic-like traits and EFT performance in neurotypical individuals have yielded inconsistent findings. To examine the inconsistency, a meta-analysis was conducted of studies that (a) compared high and…
Descriptors: Autism, Pervasive Developmental Disorders, Meta Analysis, Symptoms (Individual Disorders)
Zhao, Yu; Lei, Pui-Wa – AERA Online Paper Repository, 2016
Despite the prevalence of ordinal observed variables in applied structural equation modeling (SEM) research, limited attention has been given to model evaluation methods suitable for ordinal variables, thus providing practitioners in the field with few guidelines to follow. This study represents a first attempt to thoroughly examine the…
Descriptors: Factor Analysis, Monte Carlo Methods, Causal Models, Least Squares Statistics
Blackwell, Matthew; Honaker, James; King, Gary – Sociological Methods & Research, 2017
Although social scientists devote considerable effort to mitigating measurement error during data collection, they often ignore the issue during data analysis. And although many statistical methods have been proposed for reducing measurement error-induced biases, few have been widely used because of implausible assumptions, high levels of model…
Descriptors: Error of Measurement, Monte Carlo Methods, Data Collection, Simulation
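The bias the authors refer to can be illustrated with a short simulation: adding noise to a regression predictor attenuates the estimated slope toward zero by the predictor's reliability. The data-generating values below are arbitrary and unrelated to the article's method.

```python
# Minimal sketch of measurement-error-induced attenuation bias in OLS.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x_true = rng.normal(size=n)
y = 1.0 + 2.0 * x_true + rng.normal(scale=1.0, size=n)   # true slope = 2.0

sigma_e = 1.0                                            # measurement error SD
x_obs = x_true + rng.normal(scale=sigma_e, size=n)       # error-prone proxy

def ols_slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

reliability = 1.0 / (1.0 + sigma_e ** 2)   # var(x_true) / var(x_obs) in this setup
print("slope with true x:    ", round(ols_slope(x_true, y), 3))   # ~2.0
print("slope with noisy x:   ", round(ols_slope(x_obs, y), 3))    # ~2.0 * reliability
print("expected attenuation: ", round(2.0 * reliability, 3))
```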
Martin-Fernandez, Manuel; Revuelta, Javier – Psicologica: International Journal of Methodology and Experimental Psychology, 2017
This study compares the performance of two recently introduced estimation algorithms, the Metropolis-Hastings Robbins-Monro (MHRM) and the Hamiltonian MCMC (HMC), with two algorithms well established in the psychometric literature, marginal maximum likelihood via the EM algorithm (MML-EM) and Markov chain Monte Carlo (MCMC), in the estimation of multidimensional…
Descriptors: Bayesian Statistics, Item Response Theory, Models, Comparative Analysis
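As background on the sampling-based side of this comparison, below is a minimal random-walk Metropolis-Hastings sampler for a single parameter, the building block that MCMC-type estimators share; the standard-normal target and tuning constants are arbitrary, and none of the four compared algorithms is reproduced.

```python
# Minimal random-walk Metropolis-Hastings sampler for one parameter.
import numpy as np

rng = np.random.default_rng(0)

def log_target(theta):
    return -0.5 * theta ** 2                 # standard normal, up to a constant

theta, samples = 0.0, []
for _ in range(10000):
    proposal = theta + rng.normal(scale=0.8)           # random-walk proposal
    log_accept = log_target(proposal) - log_target(theta)
    if np.log(rng.random()) < log_accept:              # accept/reject step
        theta = proposal
    samples.append(theta)

print("posterior mean ~", np.mean(samples[2000:]))     # burn-in discarded
```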
Meyburg, Jan Philipp; Diesing, Detlef – Journal of Chemical Education, 2017
This article describes the implementation and application of a metal deposition and surface diffusion Monte Carlo simulation in a physical chemistry lab course. Here the self-diffusion of Ag atoms on a Ag(111) surface is modeled and compared to published experimental results. Both the thin-film homoepitaxial growth during adatom deposition onto a…
Descriptors: Monte Carlo Methods, Computer Simulation, Chemistry, Laboratory Experiments
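A heavily simplified sketch of the kind of deposition-plus-diffusion Monte Carlo loop described above follows; the square lattice, move rules, and attempt ratio are placeholder assumptions and do not reproduce the Ag(111) model used in the lab course.

```python
# Toy deposition + surface-diffusion Monte Carlo on a periodic square lattice.
import numpy as np

rng = np.random.default_rng(0)
L = 50
occupied = np.zeros((L, L), dtype=bool)          # adatom positions on the surface
moves = np.array([(-1, 0), (1, 0), (0, -1), (0, 1)])

for step in range(20000):
    if rng.random() < 0.1:                       # occasional deposition attempt
        i, j = rng.integers(0, L, size=2)
        occupied[i, j] = True
    else:                                        # otherwise attempt a diffusion hop
        i, j = rng.integers(0, L, size=2)
        if occupied[i, j]:
            di, dj = moves[rng.integers(4)]
            ni, nj = (i + di) % L, (j + dj) % L  # periodic boundaries
            if not occupied[ni, nj]:             # hop only onto an empty site
                occupied[i, j], occupied[ni, nj] = False, True

print("surface coverage:", occupied.mean())
```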
Lee, Soo; Suh, Youngsuk – Journal of Educational Measurement, 2018
Lord's Wald test for differential item functioning (DIF) has not been studied extensively in the context of the multidimensional item response theory (MIRT) framework. In this article, Lord's Wald test was implemented using two estimation approaches, marginal maximum likelihood estimation and Bayesian Markov chain Monte Carlo estimation, to detect…
Descriptors: Item Response Theory, Sample Size, Models, Error of Measurement
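For orientation, Lord's DIF test is a Wald test on the difference between an item's parameter estimates in the reference and focal groups, weighted by the inverse of the covariance of that difference. The estimates and covariance matrices in the sketch below are hypothetical, not values from the study.

```python
# Minimal sketch of the Wald statistic underlying Lord's DIF test.
import numpy as np
from scipy.stats import chi2

# Item parameters (a, b) estimated separately in each group (hypothetical values).
est_ref   = np.array([1.20, 0.30])
est_focal = np.array([1.05, 0.55])
cov_ref   = np.array([[0.010, 0.002], [0.002, 0.008]])
cov_focal = np.array([[0.012, 0.001], [0.001, 0.009]])

diff = est_ref - est_focal
wald = diff @ np.linalg.inv(cov_ref + cov_focal) @ diff
p_value = chi2.sf(wald, df=diff.size)       # df = number of tested parameters
print(f"Wald chi-square = {wald:.2f}, p = {p_value:.4f}")
```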
Green, Samuel B.; Thompson, Marilyn S.; Levy, Roy; Lo, Wen-Juo – Educational and Psychological Measurement, 2015
Traditional parallel analysis (T-PA) estimates the number of factors by sequentially comparing sample eigenvalues with eigenvalues for randomly generated data. Revised parallel analysis (R-PA) sequentially compares the "k"th eigenvalue for sample data to the "k"th eigenvalue for generated data sets, conditioned on "k"-…
Descriptors: Factor Analysis, Error of Measurement, Accuracy, Hypothesis Testing
Heyvaert, Mieke; Moeyaert, Mariola; Verkempynck, Paul; Van den Noortgate, Wim; Vervloet, Marlies; Ugille, Maaike; Onghena, Patrick – Journal of Experimental Education, 2017
This article reports on a Monte Carlo simulation study, evaluating two approaches for testing the intervention effect in replicated randomized AB designs: two-level hierarchical linear modeling (HLM) and using the additive method to combine randomization test "p" values (RTcombiP). Four factors were manipulated: mean intervention effect,…
Descriptors: Monte Carlo Methods, Simulation, Intervention, Replication (Evaluation)
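To make the randomization-test side of the comparison concrete, the sketch below runs a randomization test for a single AB design by reassigning the intervention start point within an allowable window; the data, window, and one-sided mean-difference statistic are illustrative choices only, and the additive p-value combination step is not reproduced.

```python
# Minimal sketch of a randomization test for a single AB single-case design.
import numpy as np

rng = np.random.default_rng(0)
scores = np.array([3, 4, 3, 5, 4, 6, 8, 9, 8, 9, 10, 9], dtype=float)
actual_start = 6                                  # first intervention (B) observation

def mean_diff(data, start):
    return data[start:].mean() - data[:start].mean()

observed = mean_diff(scores, actual_start)
possible_starts = range(3, len(scores) - 2)       # keep at least 3 A and 3 B points
null = np.array([mean_diff(scores, s) for s in possible_starts])
p_value = np.mean(null >= observed)               # one-sided randomization p-value
print(f"observed effect = {observed:.2f}, randomization p = {p_value:.3f}")
```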

