Publication Date
In 2025: 0
Since 2024: 1
Since 2021 (last 5 years): 3
Since 2016 (last 10 years): 9
Since 2006 (last 20 years): 44
Author
Guo, Jiin-Huarng: 2
Luh, Wei-Ming: 2
Alexander, Jonathan: 1
Allwood, Carl Martin: 1
Alrik Thiem: 1
Armstrong, Sonya: 1
Aruguete, Mara S.: 1
Ashworth, Gregory J.: 1
Asparouhov, Tihomir: 1
Athy, Jeremy: 1
Babcock, Ben: 1
Publication Type
Journal Articles: 70
Reports - Research: 40
Reports - Evaluative: 13
Reports - Descriptive: 11
Information Analyses: 8
Opinion Papers: 6
Guides - Classroom - Teacher: 1
Reports - General: 1
Tests/Questionnaires: 1
Audience
Researchers: 5
Practitioners: 2
Teachers: 2
Assessments and Surveys
Wechsler Intelligence Scale…: 1
Alrik Thiem; Lusine Mkrtchyan – Field Methods, 2024
Qualitative comparative analysis (QCA) is an empirical research method that has gained some popularity in the social sciences. At the same time, the literature has long been convinced that QCA is prone to committing causal fallacies when confronted with non-causal data. More specifically, beyond a certain case-to-factor ratio, the method is…
Descriptors: Qualitative Research, Comparative Analysis, Research Methodology, Benchmarking
Babcock, Ben; Marks, Peter E. L.; van den Berg, Yvonne H. M.; Cillessen, Antonius H. N. – International Journal of Behavioral Development, 2022
A wide variety of methodological choices and situations can affect the quality of peer nomination measurements but have not received adequate study. This article begins by focusing on systematic nominator missingness as an example of one such situation. We reanalyzed findings from a recent study by Bukowski, Dirks, Commisso, Velásquez, and Lopez…
Descriptors: Research Methodology, Peer Relationship, Statistical Analysis, Error Patterns
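The systematic nominator missingness the abstract highlights can be made concrete with a small simulation; everything below (class size, nomination rule, who goes missing) is a hypothetical sketch, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical classroom: 30 students each nominate 3 "most liked" peers,
# with choices drawn in proportion to a latent popularity score
n = 30
popularity = rng.normal(size=n)
ballots = []
for i in range(n):
    others = np.delete(np.arange(n), i)
    w = np.exp(popularity[others])
    ballots.append(rng.choice(others, size=3, replace=False, p=w / w.sum()))

def counts(nominators):
    """Tally nominations received, using ballots from the given nominators."""
    c = np.zeros(n)
    for i in nominators:
        c[ballots[i]] += 1
    return c

full = counts(range(n))
# Systematic missingness: the 10 lowest-popularity nominators skip the
# assessment, so their ballots are lost in a decidedly nonrandom way
reduced = counts(np.argsort(popularity)[10:])
print(f"rank-order agreement: {np.corrcoef(full, reduced)[0, 1]:.2f}")
```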
DeAnne Priddis; Heather L. Hundley – Journal of Communication Pedagogy, 2023
Traditional research examining student stress relies on surveys using pre-determined categories. This study diverts from that approach by adopting a Conflict in Communication class assignment over seven classes (N = 115) using photovoice to determine if results fluctuate by using a different methodology. Additionally, we sought to understand if…
Descriptors: College Students, Stress Variables, Photography, Research Methodology
Aruguete, Mara S.; Huynh, Ho; Browne, Blaine L.; Jurs, Bethany; Flint, Emilia; McCutcheon, Lynn E. – International Journal of Social Research Methodology, 2019
This study compared the quality of survey data collected from Mechanical Turk (MTurk) workers and college students. Three groups of participants completed the same survey. "MTurk" respondents completed the survey as paid workers using the Mechanical Turk crowdsourcing platform. "Student Online" respondents also completed the…
Descriptors: Data Collection, Research Methodology, Sampling, College Students
Onwuegbuzie, Anthony J.; Hwang, Eunjin – Research in the Schools, 2019
Much has been written about the importance of "writing with discipline" in order to increase the readability and, hence, the publishability of manuscripts submitted to journals for consideration for publication. More specifically, empirical evidence has been provided that links American Psychological Association (APA) errors, citation…
Descriptors: Visual Aids, Writing for Publication, Tables (Data), Grammar
Xie, Zilong; Reetzke, Rachel; Chandrasekaran, Bharath – Journal of Speech, Language, and Hearing Research, 2019
Purpose: Speech-evoked neurophysiological responses are often collected to answer clinically and theoretically driven questions concerning speech and language processing. Here, we highlight the practical application of machine learning (ML)-based approaches to analyzing speech-evoked neurophysiological responses. Method: Two categories of ML-based…
Descriptors: Speech Language Pathology, Intervention, Communication Problems, Speech Impairments
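As a generic illustration of the ML-based decoding approach the abstract points to, the sketch below trains a linear classifier to predict stimulus category from trial-level response features; the data are synthetic stand-ins, and the feature layout and effect size are assumptions rather than anything from the study.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Synthetic stand-in: 200 trials x 64 features (e.g., channel amplitudes),
# two stimulus categories differing slightly on a handful of features
X = rng.normal(size=(200, 64))
y = np.repeat([0, 1], 100)
X[y == 1, :8] += 0.4

# Cross-validated decoding accuracy above chance is the kind of evidence
# ML-based analyses of neurophysiological responses provide
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```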
Maeda, Yukiko; Harwell, Michael R. – Mid-Western Educational Researcher, 2016
The "Q" test is regularly used in meta-analysis to examine variation in effect sizes. However, the assumptions of "Q" are unlikely to be satisfied in practice prompting methodological researchers to conduct computer simulation studies examining its statistical properties. Narrative summaries of this literature are available but…
Descriptors: Meta Analysis, Q Methodology, Effect Size, Research Methodology
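For readers who have not met the "Q" test named in the abstract, here is a minimal sketch of Cochran's Q as used in meta-analysis; the effect sizes and variances are made up for illustration.

```python
import numpy as np
from scipy import stats

def cochran_q(effects, variances):
    """Cochran's Q: inverse-variance-weighted squared deviations of effect
    sizes from the fixed-effect mean, referred to a chi-square(k-1)."""
    g = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    pooled = np.sum(w * g) / np.sum(w)
    q = np.sum(w * (g - pooled) ** 2)
    df = len(g) - 1
    return q, df, stats.chi2.sf(q, df)

# Hypothetical standardized mean differences and sampling variances
q, df, p = cochran_q([0.20, 0.35, 0.10, 0.55], [0.02, 0.03, 0.015, 0.04])
print(f"Q = {q:.2f}, df = {df}, p = {p:.3f}")
```

The simulation studies the article synthesizes probe exactly this statistic's behavior, such as its Type I error rate and power, when its assumptions are not met.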
Sinharay, Sandip – Journal of Educational Measurement, 2016
De la Torre and Deng suggested a resampling-based approach for person-fit assessment (PFA). The approach involves the use of the l*z statistic, a corrected expected a posteriori estimate of the examinee ability, and the Monte Carlo (MC) resampling method. The Type I error rate of the approach was closer to the nominal level…
Descriptors: Sampling, Research Methodology, Error Patterns, Monte Carlo Methods
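The resampling idea can be sketched roughly as follows, assuming a simple Rasch model and the uncorrected l_z person-fit statistic for brevity; de la Torre and Deng's correction and the EAP ability estimate are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def rasch_p(theta, b):
    """Rasch probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def lz(resp, theta, b):
    """Standardized log-likelihood person-fit statistic l_z."""
    p = rasch_p(theta, b)
    l0 = np.sum(resp * np.log(p) + (1 - resp) * np.log(1 - p))
    e = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    v = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
    return (l0 - e) / np.sqrt(v)

# Hypothetical 20-item test, one examinee's ability estimate and responses
b = np.linspace(-2, 2, 20)
theta_hat = 0.3
resp = (rng.random(20) < rasch_p(theta_hat, b)).astype(int)

# Monte Carlo resampling: simulate responses at theta_hat to build an
# empirical null distribution instead of relying on the asymptotic N(0,1)
null = np.array([lz((rng.random(20) < rasch_p(theta_hat, b)).astype(int),
                    theta_hat, b) for _ in range(2000)])
print(f"l_z = {lz(resp, theta_hat, b):.2f}, "
      f"MC p = {np.mean(null <= lz(resp, theta_hat, b)):.3f}")
```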
Polanin, Joshua R.; Pigott, Terri D. – Research Synthesis Methods, 2015
Meta-analysis multiplicity, the concept of conducting multiple tests of statistical significance within one review, is an underdeveloped area of the literature. We address this issue by considering how Type I errors can impact meta-analytic results, suggest how statistical power may be affected through the use of multiplicity corrections, and propose how…
Descriptors: Meta Analysis, Statistical Significance, Error Patterns, Research Methodology
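One standard family-wise correction that could be applied across multiple significance tests in a single review is Holm's step-down procedure; the sketch below uses hypothetical p-values and illustrates the general idea, not the specific proposal in the article.

```python
import numpy as np

def holm_bonferroni(pvals, alpha=0.05):
    """Holm's step-down correction: working from the smallest p-value up,
    reject while p_(i) <= alpha / (m - i); controls family-wise error."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(np.argsort(p)):
        if p[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break  # step-down: stop at the first failure
    return reject

# Hypothetical p-values from several tests within one meta-analytic review
print(holm_bonferroni([0.004, 0.011, 0.030, 0.047, 0.410]))
# -> [ True  True False False False]
```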
Bishara, Anthony J.; Hittner, James B. – Educational and Psychological Measurement, 2015
It is more common for educational and psychological data to be nonnormal than to be approximately normal. This tendency may lead to bias and error in point estimates of the Pearson correlation coefficient. In a series of Monte Carlo simulations, the Pearson correlation was examined under conditions of normal and nonnormal data, and it was compared…
Descriptors: Research Methodology, Monte Carlo Methods, Correlation, Simulation
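A scaled-down Monte Carlo in the spirit of those simulations, with an assumed sample size and correlation, and a lognormal margin standing in for nonnormal data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def mean_pearson(n=30, rho=0.5, reps=5000):
    """Average Pearson r under bivariate normal data versus data with
    one lognormal (strongly skewed) margin."""
    cov = [[1.0, rho], [rho, 1.0]]
    r_normal, r_skewed = [], []
    for _ in range(reps):
        xy = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        r_normal.append(stats.pearsonr(xy[:, 0], xy[:, 1])[0])
        # Exponentiating one margin is monotone but induces heavy skew
        r_skewed.append(stats.pearsonr(np.exp(xy[:, 0]), xy[:, 1])[0])
    return np.mean(r_normal), np.mean(r_skewed)

m_norm, m_skew = mean_pearson()
print(f"mean r, normal margins: {m_norm:.3f}")
print(f"mean r, skewed margin:  {m_skew:.3f}  (attenuated from rho = 0.5)")
```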
O'Shaughnessy, Molly – NAMTA Journal, 2016
Once the reasons for habitual observation in the classroom have been established, and the intent to observe has been settled, the practical details of observation must be organized. In this article, O'Shaughnessy gives us a model for the implementation of observation. She thoroughly reviews Montessori's work curves and how they can be used to show…
Descriptors: Montessori Method, Classroom Observation Techniques, Early Childhood Education, Environmental Influences
Davis, Alexander L.; Fischhoff, Baruch – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2014
Four experiments examined when laypeople attribute unexpected experimental outcomes to error, in foresight and in hindsight, along with their judgments of whether the data should be published. Participants read vignettes describing hypothetical experiments, along with the result of the initial observation, considered as either a possibility…
Descriptors: Evidence, Vignettes, Error Patterns, Error of Measurement
Gelman, Andrew; Hill, Jennifer; Yajima, Masanao – Journal of Research on Educational Effectiveness, 2012
Applied researchers often find themselves making statistical inferences in settings that would seem to require multiple comparisons adjustments. We challenge the Type I error paradigm that underlies these corrections. Moreover, we posit that the problem of multiple comparisons can disappear entirely when viewed from a hierarchical Bayesian…
Descriptors: Intervals, Comparative Analysis, Inferences, Error Patterns
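The hierarchical intuition can be conveyed with a simple empirical-Bayes approximation; this is a sketch of partial pooling with a rough method-of-moments variance estimate, not the authors' full Bayesian model. Shrinking group estimates toward the grand mean tempers extreme comparisons without any explicit Type I error correction.

```python
import numpy as np

def partial_pool(means, ses):
    """Shrink each group estimate toward the precision-weighted grand mean
    in proportion to its sampling noise (empirical-Bayes partial pooling)."""
    m = np.asarray(means, dtype=float)
    v = np.asarray(ses, dtype=float) ** 2
    grand = np.average(m, weights=1.0 / v)
    # Rough method-of-moments estimate of the between-group variance tau^2
    tau2 = max(np.var(m, ddof=1) - v.mean(), 0.0)
    shrink = tau2 / (tau2 + v)  # 1 = no pooling, 0 = complete pooling
    return grand + shrink * (m - grand)

# Hypothetical site-level effect estimates and standard errors
print(partial_pool([0.9, 0.1, -0.3, 0.5], [0.4, 0.2, 0.5, 0.3]))
```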
Bernard, Robert M.; Borokhovski, Eugene; Schmid, Richard F.; Tamim, Rana M. – Journal of Computing in Higher Education, 2014
This article contains a second-order meta-analysis and an exploration of bias in the technology integration literature in higher education. Thirteen meta-analyses, dated from 2000 to 2014, were selected for inclusion based on the questions asked and the presence of adequate statistical information to conduct a quantitative synthesis. The weighted…
Descriptors: Meta Analysis, Bias, Technology Integration, Higher Education
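The core quantitative step in such a synthesis can be sketched as an inverse-variance weighted mean of the first-order mean effects; the five inputs below are hypothetical, not the article's thirteen meta-analyses.

```python
import numpy as np

def weighted_mean_effect(effects, variances):
    """Fixed-effect inverse-variance synthesis of mean effect sizes
    reported by several first-order meta-analyses."""
    g = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    mean = np.sum(w * g) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return mean, (mean - 1.96 * se, mean + 1.96 * se)

# Hypothetical mean effects and variances from first-order meta-analyses
mean, ci = weighted_mean_effect([0.35, 0.42, 0.28, 0.51, 0.33],
                                [0.004, 0.006, 0.003, 0.010, 0.005])
print(f"weighted mean = {mean:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```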
Hauser, Carl; Thum, Yeow Meng; He, Wei; Ma, Lingling – Educational and Psychological Measurement, 2015
When conducting item reviews, analysts evaluate an array of statistical and graphical information to assess the fit of a field test (FT) item to an item response theory model. The process can be tedious, particularly when the number of human reviews (HR) to be completed is large. Furthermore, such a process leads to decisions that are susceptible…
Descriptors: Test Items, Item Response Theory, Research Methodology, Decision Making
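The statistical core of such an item review can be illustrated with a simple observed-versus-expected fit check for a two-parameter logistic item; the data are simulated, and the authors' actual review criteria are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(3)

def p_2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Hypothetical field-test item parameters and examinee abilities
a_hat, b_hat = 1.2, 0.1
theta = rng.normal(size=5000)
resp = (rng.random(5000) < p_2pl(theta, a_hat, b_hat)).astype(int)

# Bin examinees by ability and compare observed proportion correct with
# the model's expectation; large standardized residuals flag misfit
edges = np.quantile(theta, np.linspace(0, 1, 11))
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (theta >= lo) & (theta < hi)
    observed = resp[mask].mean()
    expected = p_2pl(theta[mask], a_hat, b_hat).mean()
    z = (observed - expected) / np.sqrt(expected * (1 - expected) / mask.sum())
    print(f"[{lo:+.2f}, {hi:+.2f})  obs={observed:.3f}  "
          f"exp={expected:.3f}  z={z:+.2f}")
```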