| Publication Date | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 1 |
| Since 2017 (last 10 years) | 2 |
| Since 2007 (last 20 years) | 10 |
| Descriptor | Results |
| --- | --- |
| Hypothesis Testing | 17 |
| Sample Size | 17 |
| Simulation | 17 |
| Statistical Analysis | 8 |
| Robustness (Statistics) | 5 |
| Correlation | 4 |
| Sampling | 4 |
| Comparative Analysis | 3 |
| Effect Size | 3 |
| Error of Measurement | 3 |
| Monte Carlo Methods | 3 |
| Author | Results |
| --- | --- |
| Bonett, Douglas G. | 2 |
| Algina, James | 1 |
| Anderson, Richard B. | 1 |
| Broadbooks, Wendy J. | 1 |
| Cho, Sun-Joo | 1 |
| Choi, In-Hee | 1 |
| Cohen, Allan S. | 1 |
| Coombs, William T. | 1 |
| Doherty, Michael E. | 1 |
| Elmore, Patricia B. | 1 |
| Fan, Weihua | 1 |
| Publication Type | Results |
| --- | --- |
| Journal Articles | 13 |
| Reports - Research | 13 |
| Reports - Evaluative | 4 |
| Speeches/Meeting Papers | 2 |
| Education Level | Results |
| --- | --- |
| High Schools | 1 |
| Location | Results |
| --- | --- |
| Pennsylvania | 1 |
Xiao Liu; Zhiyong Zhang; Lijuan Wang – Grantee Submission, 2024
In psychology, researchers are often interested in testing hypotheses about mediation, such as testing the presence of a mediation effect of a treatment (e.g., intervention assignment) on an outcome via a mediator. An increasingly popular approach to testing hypotheses is the Bayesian testing approach with Bayes factors (BFs). Despite the growing…
Descriptors: Sample Size, Bayesian Statistics, Programming Languages, Simulation
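The exact Bayes factor computation in the entry above is model-specific, but the general flavor of a BF comparison can be sketched with the common BIC approximation, BF10 ≈ exp((BIC0 − BIC1)/2), applied to the mediator-to-outcome path on simulated data (a rough generic illustration, not the authors' method; all data and effect sizes here are assumed):

```python
import numpy as np

def ols_bic(X, y):
    """Fit OLS by least squares and return the Gaussian BIC (up to a constant)."""
    n = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = X.shape[1] + 1                 # regression coefficients + error variance
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)                 # treatment
m = 0.5 * x + rng.normal(size=n)       # mediator depends on treatment
y = 0.5 * m + rng.normal(size=n)       # outcome depends on mediator

ones = np.ones(n)
X1 = np.column_stack([ones, x, m])     # model with the m -> y path
X0 = np.column_stack([ones, x])        # model without it
bf10 = np.exp((ols_bic(X0, y) - ols_bic(X1, y)) / 2)  # evidence for the path
```

With a true mediator effect of this size, `bf10` comes out very large, favoring the model that includes the mediator path.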
Choi, In-Hee; Paek, Insu; Cho, Sun-Joo – Journal of Experimental Education, 2017
The purpose of the current study is to examine the performance of four information criteria (Akaike's information criterion [AIC], corrected AIC [AICC], Bayesian information criterion [BIC], sample-size adjusted BIC [SABIC]) for detecting the correct number of latent classes in the mixture Rasch model through simulations. The simulation study…
Descriptors: Item Response Theory, Models, Bayesian Statistics, Simulation
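The four criteria compared in the entry above have standard closed forms given a model's log-likelihood, number of free parameters k, and sample size n. As a quick reference (a generic sketch, not the study's code):

```python
import math

def info_criteria(loglik: float, k: int, n: int) -> dict:
    """Standard penalized-fit indices; lower values indicate a better model.
    SABIC replaces n in the BIC penalty with (n + 2) / 24 (Sclove's adjustment)."""
    aic = -2 * loglik + 2 * k
    return {
        "AIC": aic,
        "AICC": aic + (2 * k * (k + 1)) / (n - k - 1),   # small-sample correction
        "BIC": -2 * loglik + k * math.log(n),
        "SABIC": -2 * loglik + k * math.log((n + 2) / 24),
    }
```

In a class-enumeration study like this one, each candidate number of latent classes is fitted, the criteria are computed for each, and the class count minimizing a criterion is selected.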
Ryan, Wendy L.; St. Iago-McRae, Ezry – Bioscene: Journal of College Biology Teaching, 2016
Experimentation is the foundation of science and an important process for students to understand and experience. However, it can be difficult to teach some aspects of experimentation within the time and resource constraints of an academic semester. Interactive models can be a useful tool in bridging this gap. This freely accessible simulation…
Descriptors: Research Design, Simulation, Animals, Animal Behavior
de Winter, J. C. F. – Practical Assessment, Research & Evaluation, 2013
Researchers occasionally have to work with an extremely small sample size, defined herein as "N" less than or equal to 5. Some methodologists have cautioned against using the "t"-test when the sample size is extremely small, whereas others have suggested that using the "t"-test is feasible in such a case. The present…
Descriptors: Sample Size, Statistical Analysis, Hypothesis Testing, Simulation
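The extreme-small-sample question in the entry above lends itself to a quick Monte Carlo check: with normally distributed data under a true null, the one-sample t-test holds its nominal 5% Type I error rate even at N = 5 (a generic sketch of such a simulation, not the paper's own code):

```python
import numpy as np

def t_test_type1_rate(n=5, reps=20000, seed=1):
    """Monte Carlo Type I error rate of the two-sided one-sample t-test
    at alpha = 0.05, under a true null (normal data with mean 0)."""
    t_crit = 2.776  # t critical value for df = 4, two-sided 5% level
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        x = rng.normal(size=n)
        t = x.mean() / (x.std(ddof=1) / np.sqrt(n))
        if abs(t) > t_crit:
            rejections += 1
    return rejections / reps

rate = t_test_type1_rate()  # close to the nominal 0.05
```

Departures from normality, which the abstract alludes to, are where the debate lies; swapping the `rng.normal` draw for a skewed distribution shows how the rate drifts from 5%.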
Tipton, Elizabeth; Pustejovsky, James E. – Society for Research on Educational Effectiveness, 2015
Randomized experiments are commonly used to evaluate the effectiveness of educational interventions. The goal of the present investigation is to develop small-sample corrections for multiple contrast hypothesis tests (i.e., F-tests) such as the omnibus test of meta-regression fit or a test for equality of three or more levels of a categorical…
Descriptors: Randomized Controlled Trials, Sample Size, Effect Size, Hypothesis Testing
Schoemann, Alexander M.; Miller, Patrick; Pornprasertmanit, Sunthud; Wu, Wei – International Journal of Behavioral Development, 2014
Planned missing data designs allow researchers to increase the amount and quality of data collected in a single study. Unfortunately, the effect of planned missing data designs on power is not straightforward. Under certain conditions using a planned missing design will increase power, whereas in other situations using a planned missing design…
Descriptors: Monte Carlo Methods, Simulation, Sample Size, Research Design
Fan, Weihua; Hancock, Gregory R. – Journal of Educational and Behavioral Statistics, 2012
This study proposes robust means modeling (RMM) approaches for hypothesis testing of mean differences for between-subjects designs in order to control the biasing effects of nonnormality and variance inequality. Drawing from structural equation modeling (SEM), the RMM approaches make no assumption of variance homogeneity and employ robust…
Descriptors: Robustness (Statistics), Hypothesis Testing, Monte Carlo Methods, Simulation
Wells, Craig S.; Cohen, Allan S.; Patton, Jeffrey – International Journal of Testing, 2009
A primary concern with testing differential item functioning (DIF) using a traditional point-null hypothesis is that a statistically significant result does not imply that the magnitude of DIF is of practical interest. Similarly, for a given sample size, a non-significant result does not allow the researcher to conclude the item is free of DIF. To…
Descriptors: Test Bias, Test Items, Statistical Analysis, Hypothesis Testing
Anderson, Richard B.; Doherty, Michael E.; Friedrich, Jeff C. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2008
In 4 studies, the authors examined the hypothesis that the structure of the informational environment makes small samples more informative than large ones for drawing inferences about population correlations. The specific purpose of the studies was to test predictions arising from the signal detection simulations of R. B. Anderson, M. E. Doherty,…
Descriptors: Simulation, Statistical Analysis, Inferences, Population Trends
Bonett, Douglas G.; Seier, Edith – Journal of Educational and Behavioral Statistics, 2003 (peer reviewed)
Derived a confidence interval for a ratio of correlated mean absolute deviations. Simulation results show that it performs well in small sample sizes across realistically nonnormal distributions and that it is almost as powerful as the most powerful test examined by R. Wilcox (1990). (SLD)
Descriptors: Correlation, Equations (Mathematics), Hypothesis Testing, Sample Size
Coombs, William T.; Algina, James – Journal of Educational and Behavioral Statistics, 1996 (peer reviewed)
Type I error rates for the Johansen test were estimated using simulated data for a variety of conditions. Results indicate that Type I error rates for the Johansen test depend heavily on the number of groups and the ratio of the smallest sample size to the number of dependent variables. Sample size guidelines are presented. (SLD)
Descriptors: Group Membership, Hypothesis Testing, Multivariate Analysis, Robustness (Statistics)
Mecklin, Christopher J. – 2002
Whether one should use null hypothesis testing, confidence intervals, and/or effect sizes is a source of continuing controversy in educational research. An alternative to testing for statistical significance, known as equivalence testing, is little used in educational research. Equivalence testing is useful in situations where the researcher…
Descriptors: Educational Research, Effect Size, Hypothesis Testing, Sample Size
Wilcox, Rand R. – Multivariate Behavioral Research, 1995 (peer reviewed)
Five methods for testing the hypothesis of independence between two sets of variates were compared through simulation. Results indicate that two new methods, based on robust measures reflecting the linear association between two random variables, provide reasonably accurate control over Type I errors. Drawbacks to rank-based methods are discussed.…
Descriptors: Analysis of Covariance, Comparative Analysis, Hypothesis Testing, Robustness (Statistics)
Broadbooks, Wendy J.; Elmore, Patricia B. – 1983
This study developed and investigated an empirical sampling distribution of the congruence coefficient. The effects of sample size, number of variables, and population value of the congruence coefficient on the sampling distribution of the congruence coefficient were examined. Sample data were generated on the basis of the common factor model and…
Descriptors: Factor Analysis, Goodness of Fit, Hypothesis Testing, Research Methodology
Finch, W. Holmes; French, Brian F. – Educational and Psychological Measurement, 2007
Differential item functioning (DIF) continues to receive attention both in applied and methodological studies. Because DIF can be an indicator of irrelevant variance that can influence test scores, continuing to evaluate and improve the accuracy of detection methods is an essential step in gathering score validity evidence. Methods for detecting…
Descriptors: Item Response Theory, Factor Analysis, Test Bias, Comparative Analysis
