Showing 451 to 465 of 3,310 results
Peer reviewed
Direct link
Li, Xinru; Dusseldorp, Elise; Meulman, Jacqueline J. – Research Synthesis Methods, 2019
In meta-analytic studies, there are often multiple moderators available (e.g., study characteristics). In such cases, traditional meta-analysis methods often lack sufficient power to investigate interaction effects between moderators, especially higher-order interactions. To overcome this problem, meta-CART was proposed: an approach that applies…
Descriptors: Correlation, Meta Analysis, Identification, Testing
Peer reviewed
Direct link
Boisen, Olivia; Corral, Alesha; Pope, Emily; Goeltz, John C. – Journal of Chemical Education, 2019
Standard glass pH electrodes are ubiquitous instruments used in research and in classrooms to measure the hydrogen ion concentration of a solution. While many chemists and educators have communicated ways to support teaching conceptual understanding of solution pH and the function of pH probes and dyes, the community lacks a methodology that enables…
Descriptors: Measurement Equipment, Chemistry, College Science, Science Instruction
Peer reviewed
Direct link
Li, Minzi; Zhang, Xian – Language Testing, 2021
This meta-analysis explores the correlation between self-assessment (SA) and language performance. Sixty-seven studies with 97 independent samples involving more than 68,500 participants were included in our analysis. It was found that the overall correlation between SA and language performance was 0.466 (p < 0.01). Moderator analysis was…
Descriptors: Meta Analysis, Self Evaluation (Individuals), Likert Scales, Research Reports
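A pooled correlation of the kind this abstract reports is commonly obtained by averaging Fisher z-transformed correlations weighted by sample size. The sketch below is illustrative only (not the authors' code), with made-up study values; a fixed-effect weighting of n - 3 per study is assumed:

```python
import math

def pool_correlations(rs, ns):
    # Fixed-effect pooling of correlations via Fisher's z transform:
    # z = atanh(r), weighted by n - 3, then back-transformed with tanh.
    zs = [math.atanh(r) for r in rs]
    ws = [n - 3 for n in ns]
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(z_bar)

# Hypothetical studies: correlations and their sample sizes
r_pooled = pool_correlations([0.40, 0.50, 0.45], [100, 200, 150])
```

With a single study, the function simply returns that study's correlation, since the transform and its inverse cancel.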
Peer reviewed
Direct link
Montoya, Amanda K.; Edwards, Michael C. – Educational and Psychological Measurement, 2021
Model fit indices are being increasingly recommended and used to select the number of factors in an exploratory factor analysis. Growing evidence suggests that the recommended cutoff values for common model fit indices are not appropriate for use in an exploratory factor analysis context. A particularly prominent problem in scale evaluation is the…
Descriptors: Goodness of Fit, Factor Analysis, Cutting Scores, Correlation
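As a concrete illustration of the model fit indices at issue, the RMSEA can be computed from a model's chi-square statistic, degrees of freedom, and sample size. This is a minimal sketch with hypothetical values, not taken from the article; the common single-group formula with N - 1 in the denominator is assumed:

```python
import math

def rmsea(chi2, df, n):
    # Root mean square error of approximation:
    # sqrt(max(0, (chi2 - df) / (df * (n - 1))))
    # Values near zero indicate close fit; chi2 <= df gives exactly 0.
    return math.sqrt(max(0.0, (chi2 - df) / (df * (n - 1))))

fit = rmsea(chi2=85.0, df=40, n=300)  # hypothetical model results
```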
Peer reviewed
Direct link
Zhang, Zhonghua – Applied Measurement in Education, 2020
The characteristic curve methods have been applied to estimate the equating coefficients in test equating under the graded response model (GRM). However, the approaches for obtaining the standard errors for the estimates of these coefficients have not been developed and examined. In this study, the delta method was applied to derive the…
Descriptors: Error of Measurement, Computation, Equated Scores, True Scores
Peer reviewed
Direct link
Huynh, Kiet D.; Sheridan, Daniel J.; Lee, Debbiesiu L. – Measurement and Evaluation in Counseling and Development, 2020
The original and revised versions of the Internalized Homophobia Scale (IHP) were examined for gender invariance. The revised version showed better fit and passed tests of configural, metric, and scalar invariance. The revised version is recommended as a brief measure of internalized heterosexism.
Descriptors: LGBTQ People, Homosexuality, Social Bias, Gender Differences
Peer reviewed
Direct link
van Zundert, Camiel H. J.; Miocevic, Milica – Research Synthesis Methods, 2020
Synthesizing findings about the indirect (mediated) effect plays an important role in determining the mechanism through which variables affect one another. This simulation study compared six methods for synthesizing indirect effects: correlation-based MASEM, parameter-based MASEM, marginal likelihood synthesis, an adjustment to marginal likelihood…
Descriptors: Correlation, Comparative Analysis, Meta Analysis, Bayesian Statistics
Peer reviewed
Direct link
Niehaus, Elizabeth; Nyunt, Gudrun – Journal of College Student Development, 2020
In recent years, improving the quantitative methods used to assess the effect of college, and particular college experiences, on student outcomes has received increased attention (e.g., Mayhew et al., 2016). In "How College Affects Students," Mayhew et al. (2016) highlighted the importance of issues of practical vs. statistical…
Descriptors: Educational Experience, Change, College Students, Research Methodology
Peer reviewed
Direct link
Lottridge, Sue; Burkhardt, Amy; Boyer, Michelle – Educational Measurement: Issues and Practice, 2020
In this digital ITEMS module, Dr. Sue Lottridge, Amy Burkhardt, and Dr. Michelle Boyer provide an overview of automated scoring. Automated scoring is the use of computer algorithms to score unconstrained open-ended test items by mimicking human scoring. The use of automated scoring is increasing in educational assessment programs because it allows…
Descriptors: Computer Assisted Testing, Scoring, Automation, Educational Assessment
Peer reviewed
Direct link
Kirkup, Les; Frenkel, Bob – Physics Education, 2020
The relationship between two physical variables, such as voltage and current, can often be expressed as y = bx, where b is a constant. The constant b may be estimated by least squares, or by averaging the values of b obtained for each x-y data pair. We show for data gathered in an experiment, as well as through Monte Carlo simulation and mathematical analysis,…
Descriptors: Comparative Analysis, Least Squares Statistics, Monte Carlo Methods, Physics
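The two slope estimators this abstract contrasts are easy to state: for a line through the origin, least squares gives b = Σxy / Σx², while the alternative simply averages the per-point ratios y/x. The sketch below is illustrative (not the authors' code), with invented data around a true slope of 2.5:

```python
import random

def slope_least_squares(xs, ys):
    # Least-squares estimate of b in y = b*x (line through the origin):
    # b = sum(x*y) / sum(x^2)
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def slope_ratio_average(xs, ys):
    # Alternative estimate: average the per-point ratios b_i = y_i / x_i
    return sum(y / x for x, y in zip(xs, ys)) / len(xs)

# Small Monte Carlo check: noisy data around y = 2.5*x (hypothetical)
random.seed(0)
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.5 * x + random.gauss(0, 0.1) for x in xs]
b_ls = slope_least_squares(xs, ys)
b_avg = slope_ratio_average(xs, ys)
```

Note the weighting difference: least squares effectively weights points by x², so large-x points dominate, whereas ratio averaging weights every point equally and so is more sensitive to noise at small x.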
Peer reviewed
Direct link
van der Linden, Wim J.; Ren, Hao – Journal of Educational and Behavioral Statistics, 2020
The Bayesian way of accounting for the effects of error in the ability and item parameters in adaptive testing is through the joint posterior distribution of all parameters. An optimized Markov chain Monte Carlo algorithm for adaptive testing is presented, which samples this distribution in real time to score the examinee's ability and optimally…
Descriptors: Bayesian Statistics, Adaptive Testing, Error of Measurement, Markov Processes
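To make the idea of sampling an ability posterior by MCMC concrete, here is a minimal random-walk Metropolis sketch under a Rasch (1PL) model with known item difficulties and a standard-normal prior on ability. This is a generic illustration, not the optimized real-time algorithm the article presents; the response pattern and difficulties are made up:

```python
import math
import random

def rasch_loglik(theta, responses, difficulties):
    # Log-likelihood of binary responses under the Rasch (1PL) model:
    # P(correct) = 1 / (1 + exp(-(theta - b)))
    ll = 0.0
    for u, b in zip(responses, difficulties):
        p = 1.0 / (1.0 + math.exp(-(theta - b)))
        ll += math.log(p) if u == 1 else math.log(1.0 - p)
    return ll

def sample_ability(responses, difficulties, n_iter=2000, step=0.5, seed=1):
    # Random-walk Metropolis sampler for the ability posterior,
    # with a standard-normal prior contributing -theta^2/2 to the log-posterior.
    rng = random.Random(seed)
    theta = 0.0
    log_post = rasch_loglik(theta, responses, difficulties) - 0.5 * theta**2
    draws = []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, step)
        lp = rasch_loglik(prop, responses, difficulties) - 0.5 * prop**2
        if math.log(rng.random()) < lp - log_post:  # Metropolis accept step
            theta, log_post = prop, lp
        draws.append(theta)
    return draws

# Hypothetical examinee: 3 of 5 items correct, symmetric difficulties
draws = sample_ability([1, 1, 0, 1, 0], [-1.0, -0.5, 0.0, 0.5, 1.0])
posterior_mean = sum(draws[500:]) / len(draws[500:])  # discard burn-in
```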
Peer reviewed
Direct link
Seide, Svenja E.; Jensen, Katrin; Kieser, Meinhard – Research Synthesis Methods, 2020
The performance of statistical methods is often evaluated by means of simulation studies. In case of network meta-analysis of binary data, however, simulations are not currently available for many practically relevant settings. We perform a simulation study for sparse networks of trials under between-trial heterogeneity and including multi-arm…
Descriptors: Bayesian Statistics, Meta Analysis, Data Analysis, Networks
Peer reviewed
PDF on ERIC Download full text
Mansolf, Maxwell; Jorgensen, Terrence D.; Enders, Craig K. – Grantee Submission, 2020
Structural equation modeling (SEM) applications routinely employ a trilogy of significance tests that includes the likelihood ratio test, Wald test, and score test or modification index. Researchers use these tests to assess global model fit, evaluate whether individual estimates differ from zero, and identify potential sources of local misfit,…
Descriptors: Structural Equation Models, Computation, Scores, Simulation
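The first of the three tests the abstract names, the likelihood ratio test, has a simple closed form: twice the log-likelihood difference between the full and restricted models, referred to a chi-square distribution. A minimal sketch for a single restricted parameter (df = 1), with hypothetical log-likelihoods; the chi-square(1) tail probability is computed via the error function:

```python
import math

def lr_test_df1(loglik_full, loglik_restricted):
    # Likelihood ratio statistic: LR = 2 * (ll_full - ll_restricted),
    # asymptotically chi-square with 1 degree of freedom here.
    stat = 2.0 * (loglik_full - loglik_restricted)
    # chi-square(1) survival function: P(X > x) = erfc(sqrt(x / 2))
    p_value = math.erfc(math.sqrt(stat / 2.0))
    return stat, p_value

# Hypothetical fitted log-likelihoods for the two nested models
stat, p = lr_test_df1(-100.0, -102.5)
```

The Wald and score tests evaluated alongside it target the same hypothesis but require fitting only one of the two models, which is why the three can disagree in finite samples.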
Peer reviewed
Direct link
John B. Buncher; Jayson M. Nissen; Ben Van Dusen; Robert M. Talbot – Physical Review Physics Education Research, 2025
Research-based assessments (RBAs) allow researchers and practitioners to compare student performance across different contexts and institutions. In recent years, research attention has focused on the student populations these RBAs were initially developed with because much of that research was done with "samples of convenience" that were…
Descriptors: Science Tests, Physics, Comparative Analysis, Gender Differences
Peer reviewed
Direct link
Lehmann, Vicky; Hillen, Marij A.; Verdam, Mathilde G. E.; Pieterse, Arwen H.; Labrie, Nanon H. M.; Fruijtier, Agnetha D.; Oreel, Tom H.; Smets, Ellen M. A.; Visser, Leonie N. C. – International Journal of Social Research Methodology, 2023
The Video Engagement Scale (VES) is a quality indicator to assess engagement in experimental video-vignette studies, but its measurement properties warrant improvement. Data from previous studies were combined (N = 2676) and split into three subsamples for a stepped analytical approach. We tested construct validity, criterion validity,…
Descriptors: Likert Scales, Video Technology, Vignettes, Construct Validity