| Publication Date | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 2 |
| Since 2017 (last 10 years) | 6 |
| Since 2007 (last 20 years) | 29 |
| Publication Type | Count |
| --- | --- |
| Reports - Research | 62 |
| Journal Articles | 40 |
| Information Analyses | 6 |
| Speeches/Meeting Papers | 4 |
| Books | 1 |
| Non-Print Media | 1 |
| Audience | Count |
| --- | --- |
| Researchers | 5 |
| Laws, Policies, & Programs | Count |
| --- | --- |
| Basic Educational Opportunity… | 1 |
| Pell Grant Program | 1 |
| Assessments and Surveys | Count |
| --- | --- |
| Wechsler Intelligence Scale… | 1 |
DeAnne Priddis; Heather L. Hundley – Journal of Communication Pedagogy, 2023
Traditional research examining student stress relies on surveys with pre-determined categories. This study departs from that approach by adopting a photovoice assignment in a Conflict in Communication class across seven classes (N = 115) to determine whether results differ under a different methodology. Additionally, we sought to understand if…
Descriptors: College Students, Stress Variables, Photography, Research Methodology
Mojgan Rashtchi; SeyyedeFateme Ghazi Mir Saeed – Sage Research Methods Cases, 2023
The reason for conducting the present case study was the problems the researchers encountered during data collection for another research project (Primary Study) entitled "The effects of virtual versus traditional flipped classes on EFL learners' grammar knowledge, self-regulation, and autonomy." Two online questionnaires were…
Descriptors: Data Collection, Questionnaires, Barriers, Research Methodology
Aruguete, Mara S.; Huynh, Ho; Browne, Blaine L.; Jurs, Bethany; Flint, Emilia; McCutcheon, Lynn E. – International Journal of Social Research Methodology, 2019
This study compared the quality of survey data collected from Mechanical Turk (MTurk) workers and college students. Three groups of participants completed the same survey. "MTurk" respondents completed the survey as paid workers using the Mechanical Turk crowdsourcing platform. "Student Online" respondents also completed the…
Descriptors: Data Collection, Research Methodology, Sampling, College Students
Onwuegbuzie, Anthony J.; Hwang, Eunjin – Research in the Schools, 2019
Much has been written about the importance of "writing with discipline" in order to increase the readability and, hence, the publishability of manuscripts submitted to journals for consideration for publication. More specifically, empirical evidence has been provided that links American Psychological Association (APA) errors, citation…
Descriptors: Visual Aids, Writing for Publication, Tables (Data), Grammar
Xie, Zilong; Reetzke, Rachel; Chandrasekaran, Bharath – Journal of Speech, Language, and Hearing Research, 2019
Purpose: Speech-evoked neurophysiological responses are often collected to answer clinically and theoretically driven questions concerning speech and language processing. Here, we highlight the practical application of machine learning (ML)-based approaches to analyzing speech-evoked neurophysiological responses. Method: Two categories of ML-based…
Descriptors: Speech Language Pathology, Intervention, Communication Problems, Speech Impairments
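The ML-based analysis this abstract highlights can be illustrated with a minimal decoding sketch; the simulated data, feature dimensions, and classifier below are assumptions for illustration, not taken from the article.

```python
# Illustrative decoding sketch (assumptions mine, not the article's code):
# classify two speech conditions from simulated trial-level neural features
# with a cross-validated linear classifier.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))   # 200 trials x 64 simulated response features
y = np.repeat([0, 1], 100)       # two stimulus conditions
X[y == 1, :8] += 0.5             # small condition effect in a few features

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
print("mean decoding accuracy:", scores.mean())
```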
Maeda, Yukiko; Harwell, Michael R. – Mid-Western Educational Researcher, 2016
The "Q" test is regularly used in meta-analysis to examine variation in effect sizes. However, the assumptions of "Q" are unlikely to be satisfied in practice prompting methodological researchers to conduct computer simulation studies examining its statistical properties. Narrative summaries of this literature are available but…
Descriptors: Meta Analysis, Q Methodology, Effect Size, Research Methodology
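For readers unfamiliar with the statistic, the homogeneity test referred to here is conventionally defined as follows (a standard textbook formulation, not reproduced from the article):

```latex
% Cochran's Q for k effect-size estimates T_i with inverse-variance weights
Q = \sum_{i=1}^{k} w_i \,(T_i - \bar{T})^2,
\qquad
\bar{T} = \frac{\sum_{i=1}^{k} w_i T_i}{\sum_{i=1}^{k} w_i},
\qquad
w_i = \frac{1}{v_i},
\qquad
Q \overset{H_0}{\sim} \chi^2_{k-1}.
```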
Polanin, Joshua R.; Pigott, Terri D. – Research Synthesis Methods, 2015
Meta-analysis multiplicity, the concept of conducting multiple tests of statistical significance within one review, is an underdeveloped literature. We address this issue by considering how Type I errors can impact meta-analytic results, suggest how statistical power may be affected through the use of multiplicity corrections, and propose how…
Descriptors: Meta Analysis, Statistical Significance, Error Patterns, Research Methodology
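As a worked illustration of the Type I error inflation at issue (the numbers are illustrative, not from the article), testing m independent hypotheses each at level α gives

```latex
\text{FWER} = 1 - (1 - \alpha)^{m},
\qquad
1 - (1 - 0.05)^{10} \approx 0.40,
```

so a Bonferroni-style correction would test each hypothesis at α/m (here 0.05/10 = 0.005) to hold the familywise error rate near 0.05.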
Hubbard, Aleata – Grantee Submission, 2017
The results of educational research studies are only as accurate as the data used to produce them. Drawing on experiences conducting large-scale efficacy studies of classroom-based algebra interventions for community college and middle school students, I am developing practice-based data cleaning procedures to support scholars in conducting…
Descriptors: Educational Research, Mathematics Education, Algebra, Intervention
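A minimal sketch of the kind of practice-based data-cleaning checks the abstract describes; the file name, column names, and valid score range are hypothetical, not the author's procedures.

```python
# Hypothetical data-cleaning checks for a classroom study data file.
# File and column names are illustrative assumptions.
import pandas as pd

df = pd.read_csv("algebra_study_scores.csv")

report = {
    "duplicate_ids": int(df["student_id"].duplicated().sum()),
    "missing_scores": int(df["pretest_score"].isna().sum()),
    "out_of_range": int(((df["pretest_score"] < 0) | (df["pretest_score"] > 100)).sum()),
}
print(report)

# Keep one record per student and drop impossible scores before analysis.
clean = (df.drop_duplicates(subset="student_id")
           .query("0 <= pretest_score <= 100"))
```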
Bishara, Anthony J.; Hittner, James B. – Educational and Psychological Measurement, 2015
It is more common for educational and psychological data to be nonnormal than to be approximately normal. This tendency may lead to bias and error in point estimates of the Pearson correlation coefficient. In a series of Monte Carlo simulations, the Pearson correlation was examined under conditions of normal and nonnormal data, and it was compared…
Descriptors: Research Methodology, Monte Carlo Methods, Correlation, Simulation
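A small Monte Carlo sketch in the spirit of the simulations described; the distributions, sample size, and replication count are illustrative choices, not the authors' design.

```python
# Monte Carlo sketch: average Pearson r under normal vs. lognormal marginals.
# The latent correlation is 0.5; exponentiating the variables makes them
# skewed, which pulls the estimated Pearson correlation away from 0.5.
import numpy as np

rng = np.random.default_rng(0)
rho, n, reps = 0.5, 20, 5000
cov = [[1.0, rho], [rho, 1.0]]

def mean_r(transform):
    rs = []
    for _ in range(reps):
        x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
        rs.append(np.corrcoef(transform(x), transform(y))[0, 1])
    return np.mean(rs)

print("normal data:    mean r =", round(mean_r(lambda v: v), 3))
print("lognormal data: mean r =", round(mean_r(np.exp), 3))
```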
Davis, Alexander L.; Fischhoff, Baruch – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2014
Four experiments examined when laypeople attribute unexpected experimental outcomes to error, in foresight and in hindsight, along with their judgments of whether the data should be published. Participants read vignettes describing hypothetical experiments, along with the result of the initial observation, considered as either a possibility…
Descriptors: Evidence, Vignettes, Error Patterns, Error of Measurement
Gelman, Andrew; Hill, Jennifer; Yajima, Masanao – Journal of Research on Educational Effectiveness, 2012
Applied researchers often find themselves making statistical inferences in settings that would seem to require multiple comparisons adjustments. We challenge the Type I error paradigm that underlies these corrections. Moreover, we posit that the problem of multiple comparisons can disappear entirely when viewed from a hierarchical Bayesian…
Descriptors: Intervals, Comparative Analysis, Inferences, Error Patterns
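The partial-pooling idea behind this argument can be written, for a simple normal hierarchical model (notation mine, not the article's), as a precision-weighted compromise between each group's estimate and the group-level mean:

```latex
\hat{\theta}_j \;=\;
\frac{\dfrac{\bar{y}_j}{\sigma_j^{2}} + \dfrac{\mu}{\tau^{2}}}
     {\dfrac{1}{\sigma_j^{2}} + \dfrac{1}{\tau^{2}}} .
```

Extreme group estimates are shrunk toward μ, which is one reason multiple comparisons behave differently in hierarchical models than in the classical Type I error framework.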
Hauser, Carl; Thum, Yeow Meng; He, Wei; Ma, Lingling – Educational and Psychological Measurement, 2015
When conducting item reviews, analysts evaluate an array of statistical and graphical information to assess the fit of a field test (FT) item to an item response theory model. The process can be tedious, particularly when the number of human reviews (HR) to be completed is large. Furthermore, such a process leads to decisions that are susceptible…
Descriptors: Test Items, Item Response Theory, Research Methodology, Decision Making
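For context, a common target of such fit reviews is a logistic item response model such as the three-parameter logistic function (a standard formulation, not taken from the article):

```latex
P_i(\theta) \;=\; c_i + (1 - c_i)\,\frac{1}{1 + e^{-a_i(\theta - b_i)}},
```

where a_i, b_i, and c_i are the item's discrimination, difficulty, and pseudo-guessing parameters; fit evaluation compares observed proportions correct with these model-implied probabilities.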
Bernard, Robert M.; Borokhovski, Eugene; Schmid, Richard F.; Tamim, Rana M. – Journal of Computing in Higher Education, 2014
This article contains a second-order meta-analysis and an exploration of bias in the technology integration literature in higher education. Thirteen meta-analyses, dated from 2000 to 2014, were selected for inclusion based on the questions asked and the presence of adequate statistical information to conduct a quantitative synthesis. The weighted…
Descriptors: Meta Analysis, Bias, Technology Integration, Higher Education
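The inverse-variance weighting conventionally used in such quantitative syntheses (a standard formulation, not quoted from the article) is:

```latex
\bar{g} \;=\; \frac{\sum_{i=1}^{k} w_i\, g_i}{\sum_{i=1}^{k} w_i},
\qquad
w_i = \frac{1}{v_i},
\qquad
SE(\bar{g}) = \sqrt{\frac{1}{\sum_{i=1}^{k} w_i}} .
```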
Not Quite Normal: Consequences of Violating the Assumption of Normality in Regression Mixture Models
Van Horn, M. Lee; Smith, Jessalyn; Fagan, Abigail A.; Jaki, Thomas; Feaster, Daniel J.; Masyn, Katherine; Hawkins, J. David; Howe, George – Structural Equation Modeling: A Multidisciplinary Journal, 2012
Regression mixture models, which have only recently begun to be used in applied research, are a new approach for finding differential effects. This approach comes at the cost of the assumption that error terms are normally distributed within classes. This study uses Monte Carlo simulations to explore the effects of relatively minor violations of…
Descriptors: Structural Equation Models, Home Management, Drug Abuse, Research Methodology
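A compact sketch of the kind of data-generating step such a simulation study uses; the class proportions, slopes, and error distribution below are illustrative assumptions, not the authors' design.

```python
# Generate data from a two-class regression mixture whose within-class errors
# are right-skewed (chi-square) rather than normal; parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 500
cls = rng.integers(0, 2, size=n)          # latent class membership
x = rng.normal(size=n)
slope = np.where(cls == 0, 0.2, 0.8)      # differential effect of x by class
err = rng.chisquare(df=3, size=n) - 3.0   # mean-zero but skewed errors
y = 1.0 + slope * x + 0.5 * err
# A full replication would fit a normal-error regression mixture to (x, y)
# and record how well the two classes and their slopes are recovered.
```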
Brino, Ana Leda F.; Barros, Romariz S.; Galvao, Ol; Garotti, M.; Da Cruz, Ilara R. N.; Santos, Jose R.; Dube, William V.; McIlvane, William J. – Journal of the Experimental Analysis of Behavior, 2011
This paper reports use of sample stimulus control shaping procedures to teach arbitrary matching-to-sample to 2 capuchin monkeys ("Cebus apella"). The procedures started with identity matching-to-sample. During shaping, stimulus features of the sample were altered gradually, rendering samples and comparisons increasingly physically dissimilar. The…
Descriptors: Followup Studies, Computation, Teaching Methods, Sample Size
