Showing 1 to 15 of 53 results
Peer reviewed
Liu, Yilan; Lee, Sue Ann S.; Chen, Wenjun – Journal of Speech, Language, and Hearing Research, 2022
Introduction: Assessment of resonance characteristics is essential in research and clinical practice in individuals with velopharyngeal impairment. The purpose of this study was to systematically review correlations between auditory perceptual ratings and nasalance scores obtained by a nasometer in individuals with resonance disorders and to…
Descriptors: Correlation, Auditory Perception, Meta Analysis, Guidelines
Peer reviewed
Kübra Karakaya Özyer – Journal of Educators Online, 2025
This meta-analytic study investigates the impact of online peer assessment on academic achievement in higher education. By synthesizing 20 effect sizes, we provide a comprehensive understanding of how online peer assessment influences student learning outcomes. The findings reveal a statistically significant positive effect (Hedges's g = 0.672),…
Descriptors: Electronic Learning, Peer Evaluation, Higher Education, Meta Analysis
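For readers unfamiliar with the effect-size metric cited above, the snippet below is a minimal illustration of how Hedges's g is computed from two group summaries. The means, standard deviations, and sample sizes are hypothetical and are not taken from the study.

import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference with Hedges' small-sample correction."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                   # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)      # small-sample correction factor
    return j * d

# Hypothetical treatment vs. control group summaries
print(round(hedges_g(82.0, 10.0, 30, 75.5, 9.5, 30), 3))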
Peer reviewed
Norouzian, Reza – Studies in Second Language Acquisition, 2021
There has recently been a surge of interest in improving the replicability of second language (L2) research. However, less attention is paid to replicability in the context of L2 meta-analyses. I argue that conducting interrater reliability (IRR) analyses is a key step toward improving the replicability of L2 meta-analyses. To that end, I first…
Descriptors: Interrater Reliability, Second Languages, Language Research, Meta Analysis
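As a concrete illustration of the kind of IRR analysis recommended above, the sketch below computes simple percent agreement and Cohen's kappa for two hypothetical coders classifying study features; the codes are invented for illustration and are not from the article.

from collections import Counter

def cohens_kappa(coder1, coder2):
    """Chance-corrected agreement between two raters on nominal codes."""
    n = len(coder1)
    observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    p1, p2 = Counter(coder1), Counter(coder2)
    expected = sum(p1[c] * p2[c] for c in set(coder1) | set(coder2)) / n**2
    return (observed - expected) / (1 - expected)

# Two hypothetical coders classifying 10 study features
a = ["explicit", "implicit", "explicit", "mixed", "explicit",
     "implicit", "mixed", "explicit", "implicit", "explicit"]
b = ["explicit", "implicit", "mixed", "mixed", "explicit",
     "explicit", "mixed", "explicit", "implicit", "explicit"]
print(round(cohens_kappa(a, b), 2))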
Moeyaert, Mariola; Yang, Panpan; Xu, Xinyun; Kim, Esther – Grantee Submission, 2021
Hierarchical linear modeling (HLM) has been recommended as a meta-analytic technique for the quantitative synthesis of single-case experimental design (SCED) studies. The HLM approach is flexible and can model a variety of different SCED data complexities, such as intervention heterogeneity. A major advantage of using HLM is that participant…
Descriptors: Meta Analysis, Case Studies, Research Design, Hierarchical Linear Modeling
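A minimal sketch of the kind of two-level model the HLM approach implies, fit with statsmodels' mixed-effects API on a toy long-format dataset. The variable names and data are hypothetical; the actual models in this line of work handle additional SCED complexities such as trends and intervention heterogeneity.

import pandas as pd
import statsmodels.formula.api as smf

# Toy single-case data: repeated outcome measurements nested within participants,
# with a baseline (0) vs. intervention (1) phase indicator.
data = pd.DataFrame({
    "participant": ["p1"] * 8 + ["p2"] * 8 + ["p3"] * 8,
    "phase":       [0, 0, 0, 0, 1, 1, 1, 1] * 3,
    "outcome":     [3, 4, 3, 5, 7, 8, 9, 8,
                    2, 3, 2, 4, 6, 6, 7, 8,
                    4, 4, 5, 5, 8, 9, 9, 10],
})

# Level 1: outcome ~ phase within each case; Level 2: random intercepts
# across participants (a random phase effect could be added as well).
model = smf.mixedlm("outcome ~ phase", data, groups=data["participant"])
result = model.fit()
print(result.summary())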
Peer reviewed
Research & Practice in Assessment, 2022
Meta-assessment is a useful strategy to document assessment practices and guide efforts to improve the culture of assessment at an institution. In this study, a meta-assessment of undergraduate and graduate academic program assessment reports evaluated the maturity of assessment work. Assessment reports submitted in the first year (75…
Descriptors: Program Evaluation, Educational Assessment, Meta Analysis, Undergraduate Study
Peer reviewed
Saluja, Ronak; Cheng, Sierra; delos Santos, Keemo Althea; Chan, Kelvin K. W. – Research Synthesis Methods, 2019
Objective: Various statistical methods have been developed to estimate hazard ratios (HRs) from published Kaplan-Meier (KM) curves for the purpose of performing meta-analyses. The objective of this study was to determine the reliability, accuracy, and precision of four commonly used methods by Guyot, Williamson, Parmar, and Hoyle and Henley.…
Descriptors: Meta Analysis, Reliability, Accuracy, Randomized Controlled Trials
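The study above concerns reconstructing HRs from published KM curves. The sketch below illustrates only the downstream step of pooling study-level log hazard ratios with fixed-effect inverse-variance weighting, not the Guyot- or Parmar-style curve reconstruction itself; the HRs and confidence intervals are invented.

import math

# Hypothetical study-level hazard ratios with 95% confidence intervals
studies = [(0.78, 0.62, 0.98), (0.85, 0.70, 1.03), (0.66, 0.48, 0.91)]

weights, weighted_sum = 0.0, 0.0
for hr, lo, hi in studies:
    log_hr = math.log(hr)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # SE from the CI width
    w = 1 / se**2                                     # inverse-variance weight
    weights += w
    weighted_sum += w * log_hr

pooled = math.exp(weighted_sum / weights)
pooled_se = math.sqrt(1 / weights)
ci = (math.exp(math.log(pooled) - 1.96 * pooled_se),
      math.exp(math.log(pooled) + 1.96 * pooled_se))
print(f"Pooled HR = {pooled:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")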
Peer reviewed
Cervetti, Gina N.; Fitzgerald, Miranda S.; Hiebert, Elfrieda H.; Hebert, Michael – Reading Psychology, 2023
We report on a meta-analysis designed to test the theory that instruction that involves direct teaching of academic vocabulary and teaching strategies to determine the meaning of unknown words develops students' abilities to infer new words' meanings and builds students' overall vocabulary knowledge. We meta-analyzed 39 experimental and…
Descriptors: Meta Analysis, Vocabulary Development, Reading Instruction, Direct Instruction
Jiyeo Yun – English Teaching, 2023
Studies on automatic scoring systems in writing assessment have evaluated the relationship between human and machine scores to establish the reliability of automated essay scoring systems. This study investigated the magnitudes of indices for inter-rater agreement and discrepancy, especially regarding human and machine scoring, in writing assessment.…
Descriptors: Meta Analysis, Interrater Reliability, Essays, Scoring
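For context on the agreement indices such a synthesis draws on, the snippet below computes exact agreement and quadratically weighted kappa between hypothetical human and machine essay scores using scikit-learn; the scores are invented for illustration.

from sklearn.metrics import cohen_kappa_score

# Hypothetical human and machine scores on a 1-6 essay rubric
human   = [4, 3, 5, 2, 4, 6, 3, 5, 4, 2, 5, 3]
machine = [4, 3, 4, 2, 5, 6, 3, 5, 3, 2, 5, 4]

exact = sum(h == m for h, m in zip(human, machine)) / len(human)
qwk = cohen_kappa_score(human, machine, weights="quadratic")
print(f"Exact agreement = {exact:.2f}, quadratic weighted kappa = {qwk:.2f}")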
Peer reviewed
Beck, Klaus – Frontline Learning Research, 2020
Many test developers try to ensure the content validity of their tests by having external experts review the items, e.g., in terms of relevance, difficulty, or clarity. Although this approach is widely accepted, a closer look reveals several pitfalls that need to be avoided if experts' advice is to be truly helpful. The purpose of this paper is to…
Descriptors: Content Validity, Psychological Testing, Educational Testing, Student Evaluation
Peer reviewed
Özdas, Faysal; Batdi, Veli – Journal of Education and Training Studies, 2017
This thematic-based meta-analytic study aims to examine the effect of creativity on students' academic success and learning retention scores. To this end, 18 of 225 studies on creativity conducted between 2001 and 2011 were obtained from national and international databases. The studies…
Descriptors: Meta Analysis, Creativity, Scores, Retention (Psychology)
Peer reviewed
Plonsky, Luke; Derrick, Deirdre J. – Modern Language Journal, 2016
Ensuring internal validity in quantitative research requires, among other conditions, reliable instrumentation. Unfortunately, however, second language (L2) researchers often fail to report and even more often fail to interpret reliability estimates beyond generic benchmarks for acceptability. As a means to guide interpretations of such estimates,…
Descriptors: Second Language Learning, Meta Analysis, Reliability, Sample Size
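As an illustration of the instrument-reliability estimates discussed above, here is a minimal Cronbach's alpha computation on a hypothetical item-response matrix. The values are invented; the article's point concerns interpreting such estimates against field-specific norms rather than computing them.

import numpy as np

def cronbach_alpha(items):
    """Internal-consistency estimate for an (examinees x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical responses of 6 examinees to a 4-item instrument
scores = [[4, 5, 4, 4],
          [3, 3, 2, 3],
          [5, 5, 4, 5],
          [2, 2, 3, 2],
          [4, 4, 4, 3],
          [3, 2, 3, 3]]
print(round(cronbach_alpha(scores), 2))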
Peer reviewed
Kersten, Paula; Czuba, Karol; McPherson, Kathryn; Dudley, Margaret; Elder, Hinemoa; Tauroa, Robyn; Vandal, Alain – International Journal of Behavioral Development, 2016
This article synthesized evidence for the validity and reliability of the Strengths and Difficulties Questionnaire in children aged 3-5 years. A systematic review using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement guidelines was carried out. Study quality was rated using the Consensus-based Standards for the…
Descriptors: Psychometrics, Meta Analysis, Questionnaires, Behavior Problems
Peer reviewed
Toste, Jessica R.; Didion, Lisa; Peng, Peng; Filderman, Marissa J.; McClelland, Amanda M. – Review of Educational Research, 2020
The purpose of this meta-analytic review was to investigate the relation between motivation and reading achievement among students in kindergarten through 12th grade. A comprehensive search of peer-reviewed published research resulted in 132 articles with 185 independent samples and 1,154 reported effect sizes (Pearson's r). Results of our…
Descriptors: Meta Analysis, Reading Achievement, Reading Motivation, Kindergarten
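A minimal sketch of how correlational effect sizes like those synthesized above are typically pooled: each Pearson's r is converted to Fisher's z, weighted by its inverse variance (n - 3), and the weighted mean is back-transformed. The r and n values below are hypothetical, not taken from the review.

import math

# Hypothetical study-level correlations between motivation and reading achievement
effects = [(0.32, 120), (0.25, 85), (0.41, 210), (0.18, 60)]   # (r, sample size)

num, den = 0.0, 0.0
for r, n in effects:
    z = math.atanh(r)   # Fisher's r-to-z transformation
    w = n - 3           # inverse of var(z) = 1 / (n - 3)
    num += w * z
    den += w

pooled_r = math.tanh(num / den)   # back-transform the weighted mean z
print(f"Pooled r = {pooled_r:.3f}")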
Peer reviewed
Derrick, Deirdre J. – TESOL Quarterly: A Journal for Teachers of English to Speakers of Other Languages and of Standard English as a Second Dialect, 2016
Second language (L2) researchers often have to develop or change the instruments they use to measure numerous constructs (Norris & Ortega, 2012). Given the prevalence of researcher-developed and -adapted data collection instruments, and given the profound effect instrumentation can have on results, thorough reporting of instrumentation is…
Descriptors: Second Language Learning, Language Research, Research Methodology, Interrater Reliability
Peer reviewed
Breidbord, Jonathan; Croudace, Tim J. – Journal of Autism and Developmental Disorders, 2013
The Childhood Autism Rating Scale (CARS) is a popular behavior-observation instrument that was developed more than 34 years ago and has since been adopted in a wide variety of contexts for assessing the presence and severity of autism symptomatology in both children and adolescents. This investigation of the reliability of CARS scores involves…
Descriptors: Autism, Test Reliability, Scores, Symptoms (Individual Disorders)