Publication Date

| Period | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 15 |
| Since 2022 (last 5 years) | 170 |
| Since 2017 (last 10 years) | 410 |
| Since 2007 (last 20 years) | 1010 |

Author

| Author | Records |
| --- | --- |
| Kromrey, Jeffrey D. | 21 |
| Fan, Xitao | 18 |
| Barcikowski, Robert S. | 16 |
| DeSarbo, Wayne S. | 14 |
| Donoghue, John R. | 12 |
| Ferron, John M. | 12 |
| Finch, W. Holmes | 12 |
| Zhang, Zhiyong | 11 |
| Cohen, Allan S. | 10 |
| Finch, Holmes | 10 |
| Kim, Seock-Ho | 10 |

Audience

| Audience | Records |
| --- | --- |
| Researchers | 49 |
| Practitioners | 22 |
| Teachers | 20 |
| Students | 4 |
| Administrators | 2 |

Location

| Location | Records |
| --- | --- |
| Germany | 10 |
| Australia | 7 |
| United Kingdom | 7 |
| Canada | 6 |
| Netherlands | 6 |
| United States | 6 |
| Belgium | 5 |
| California | 5 |
| Hong Kong | 5 |
| South Korea | 5 |
| Spain | 5 |

Laws, Policies, & Programs

| Law, Policy, or Program | Records |
| --- | --- |
| No Child Left Behind Act 2001 | 4 |
| Pell Grant Program | 2 |
| Aid to Families with… | 1 |
| American Recovery and… | 1 |

What Works Clearinghouse Rating

| Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 1 |
| Meets WWC Standards with or without Reservations | 1 |
| Does not meet standards | 1 |

Peer reviewed: Fleishman, Allen I. – Psychometrika, 1978
A method of introducing a controlled degree of skew and kurtosis for Monte Carlo studies was derived. The form of such a transformation on normal deviates is given. Analytic and empirical validation of the method is demonstrated. (Author/JKS)
Descriptors: Computer Programs, Monte Carlo Methods, Statistical Analysis, Technical Reports
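The transformation Fleishman describes is the power method: a cubic polynomial in a standard normal deviate, Y = a + bZ + cZ^2 + dZ^3 with a = -c, whose coefficients are chosen to hit target third and fourth moments. The sketch below is one way to apply it in Python, assuming SciPy is available; the solver start values and the target skew and excess kurtosis are illustrative choices, not values from the article.

```python
# Sketch of the Fleishman power method: Y = a + b*Z + c*Z**2 + d*Z**3, a = -c,
# with (b, c, d) solved numerically so Y has mean 0, variance 1, and the
# requested skew and excess kurtosis.
import numpy as np
from scipy.optimize import fsolve

def fleishman_coefficients(skew, excess_kurtosis):
    """Solve Fleishman's moment equations for (a, b, c, d), with a = -c."""
    def equations(params):
        b, c, d = params
        eq1 = b**2 + 6*b*d + 2*c**2 + 15*d**2 - 1              # unit variance
        eq2 = 2*c*(b**2 + 24*b*d + 105*d**2 + 2) - skew        # target skew
        eq3 = 24*(b*d + c**2*(1 + b**2 + 28*b*d)
                  + d**2*(12 + 48*b*d + 141*c**2 + 225*d**2)) - excess_kurtosis
        return [eq1, eq2, eq3]
    b, c, d = fsolve(equations, x0=[1.0, 0.0, 0.0])
    return -c, b, c, d

def fleishman_sample(n, skew, excess_kurtosis, seed=None):
    """Draw n nonnormal deviates with the requested skew and excess kurtosis."""
    rng = np.random.default_rng(seed)
    a, b, c, d = fleishman_coefficients(skew, excess_kurtosis)
    z = rng.standard_normal(n)
    return a + b*z + c*z**2 + d*z**3

# Example: a skewed, heavy-tailed condition for a Monte Carlo study.
y = fleishman_sample(10_000, skew=1.0, excess_kurtosis=1.5, seed=0)
print(y.mean(), y.std())   # should be close to 0 and 1
```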
Peer reviewed: Hummel, Thomas J.; Johnston, Charles B. – Journal of Educational Statistics, 1979
Stochastic approximation is suggested as a useful technique in areas where individuals have a goal firmly in mind, but lack sufficient knowledge to design an efficient, more traditional experiment. One potential area of application for stochastic approximation is that of formative evaluation. (CTM)
Descriptors: Monte Carlo Methods, Research Design, Statistical Analysis, Technical Reports
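The abstract does not spell out an algorithm, so the sketch below uses the classic Robbins-Monro stochastic-approximation scheme as a stand-in: adjust an input level after each noisy observation until the outcome averages a target. The response function, target value, and gain sequence are hypothetical.

```python
# Minimal Robbins-Monro sketch: find the level x* at which a noisy outcome
# averages a target value, without modeling the response function explicitly.
import numpy as np

rng = np.random.default_rng(0)

def noisy_outcome(x):
    """Hypothetical noisy response: true mean 2*x + 1, noise sd 1."""
    return 2.0 * x + 1.0 + rng.normal(scale=1.0)

target = 7.0          # desired mean outcome (true solution here is x = 3)
x = 0.0               # starting level
for n in range(1, 501):
    step = 1.0 / n    # gains satisfy sum a_n = inf, sum a_n**2 < inf
    x = x - step * (noisy_outcome(x) - target)

print(f"estimated level after 500 steps: {x:.2f}")   # should be near 3.0
```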
Peer reviewed: Bartfay, Emma – International Journal of Testing, 2003
Used Monte Carlo simulation to compare the properties of a goodness-of-fit (GOF) procedure and a test statistic developed by E. Bartfay and A. Donner (2001) to the likelihood ratio test in assessing the existence of extra variation. Results show that the GOF procedure possesses a satisfactory Type I error rate and power. (SLD)
Descriptors: Goodness of Fit, Interrater Reliability, Monte Carlo Methods, Simulation
Peer reviewed: Schneider, Pamela J.; Penfield, Douglas A. – Journal of Experimental Education, 1997
A Monte Carlo simulation was conducted to study the Type I error rate and power of the 1994 approximation developed by R. A. Alexander and D. M. Govern as an alternative to the analysis of variance "F" test. Conditions under which this test is the best approach are discussed. (SLD)
Descriptors: Analysis of Variance, Monte Carlo Methods, Power (Statistics), Simulation
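As a rough illustration of this kind of study, the sketch below estimates empirical Type I error for the ANOVA F test and the Alexander-Govern approximation under unequal group variances. It assumes SciPy's scipy.stats.alexandergovern (present in recent SciPy releases); the group sizes, standard deviations, and replication count are illustrative, not the conditions examined in the article.

```python
# Monte Carlo Type I error under heteroscedasticity: ANOVA F vs Alexander-Govern.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_reps, alpha = 5_000, 0.05
group_sizes = (10, 20, 30)
group_sds = (4.0, 2.0, 1.0)      # unequal variances, all means equal (H0 true)

rejections_f = rejections_ag = 0
for _ in range(n_reps):
    groups = [rng.normal(0.0, sd, size=n) for n, sd in zip(group_sizes, group_sds)]
    rejections_f += stats.f_oneway(*groups).pvalue < alpha
    rejections_ag += stats.alexandergovern(*groups).pvalue < alpha

print(f"F test Type I error:           {rejections_f / n_reps:.3f}")
print(f"Alexander-Govern Type I error: {rejections_ag / n_reps:.3f}")
```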
Peer reviewed: Duan, Bin; Dunlap, William P. – Educational and Psychological Measurement, 1997
A Monte Carlo study compared the accuracy of different estimates of the standard error of correlations corrected for restriction in range. The procedure suggested by P. Bobko and A. Rieck (1980) generated the most accurate estimates of the standard error. Aspects of accuracy are discussed. (SLD)
Descriptors: Correlation, Error of Measurement, Estimation (Mathematics), Monte Carlo Methods
Peer reviewed: Huitema, Bradley E.; And Others – Journal of Educational and Behavioral Statistics, 1996
Monte Carlo study results show that the runs test yields markedly asymmetrical error rates in the two tails and that neither directional nor nondirectional tests are satisfactory with respect to Type I errors. The test is not recommended for evaluating the independence of errors in time-series regression models. (SLD)
Descriptors: Correlation, Error of Measurement, Monte Carlo Methods, Regression (Statistics)
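A minimal sketch of this kind of check is given below: it applies the normal-approximation (Wald-Wolfowitz) runs test to the signs of OLS residuals from simulated series with truly independent errors and tallies the two one-tailed rejection rates separately. The series length, trend model, and critical value are illustrative assumptions, not the design of the study.

```python
# Monte Carlo look at the two one-tailed error rates of the runs test applied
# to residuals from a time-series regression with independent errors.
import numpy as np

rng = np.random.default_rng(1)

def runs_z(residuals):
    """Wald-Wolfowitz runs test z statistic on the signs of the residuals."""
    signs = residuals > 0
    n1, n2 = signs.sum(), (~signs).sum()
    n = n1 + n2
    runs = 1 + np.count_nonzero(signs[1:] != signs[:-1])
    mu = 2 * n1 * n2 / n + 1
    var = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n**2 * (n - 1))
    return (runs - mu) / np.sqrt(var)

n_obs, n_reps, z_crit = 20, 10_000, 1.645   # one-tailed 5% normal critical value
t = np.arange(n_obs)
low = high = 0
for _ in range(n_reps):
    y = 1.0 + 0.5 * t + rng.standard_normal(n_obs)   # independent errors
    resid = y - np.polyval(np.polyfit(t, y, 1), t)   # OLS residuals
    z = runs_z(resid)
    low += z < -z_crit     # "too few runs" tail
    high += z > z_crit     # "too many runs" tail

print(f"lower-tail rate: {low / n_reps:.3f}, upper-tail rate: {high / n_reps:.3f}")
```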
Peer reviewed: Bolt, Daniel M. – Applied Measurement in Education, 2002
Compared two parametric procedures for detecting differential item functioning (DIF) using the graded response model (GRM), the GRM-likelihood ratio test and the GRM-differential functioning of items and tests, with a nonparametric DIF detection procedure, Poly-SIBTEST. Monte Carlo simulation results show that Poly-SIBTEST showed the least amount…
Descriptors: Comparative Analysis, Item Bias, Monte Carlo Methods, Nonparametric Statistics
Peer reviewed: Ferron, John; Sentovich, Chris – Journal of Experimental Education, 2002
Estimated statistical power for three randomization tests used with multiple-baseline designs using Monte Carlo methods. For an effect size of 0.5, none of the tests provided an adequate level of power, and for an effect size of 1.0, power was adequate for the Koehler-Levin test and the Marascuilo-Busk test only when the series length was long and…
Descriptors: Effect Size, Monte Carlo Methods, Power (Statistics), Research Design
Peer reviewed: Muthen, Linda K.; Muthen, Bengt O. – Structural Equation Modeling, 2002
Demonstrates how substantive researchers can use a Monte Carlo study to decide on sample size and determine power. Presents confirmatory factor analysis and growth models as examples, conducting these analyses with the Mplus program (B. Muthen and L. Muthen 1998). (SLD)
Descriptors: Monte Carlo Methods, Power (Statistics), Research Methodology, Sample Size
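A generic Python analogue of that approach is sketched below (the article itself works in Mplus): simulate many data sets at a candidate sample size, analyze each, and take the proportion of significant results as the power estimate. A simple two-group mean comparison stands in for the confirmatory factor analysis and growth models used in the article.

```python
# Monte Carlo power estimation for sample-size planning: the power estimate is
# the proportion of simulated data sets in which the test rejects H0.
import numpy as np
from scipy import stats

def monte_carlo_power(n_per_group, effect_size, n_reps=5_000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_reps):
        control = rng.standard_normal(n_per_group)
        treatment = rng.standard_normal(n_per_group) + effect_size
        hits += stats.ttest_ind(control, treatment).pvalue < alpha
    return hits / n_reps

# Scan candidate sample sizes until estimated power reaches the usual 0.80 target.
for n in (20, 40, 60, 80, 100):
    print(n, monte_carlo_power(n, effect_size=0.5))
```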
Peer reviewed: Silver, N. Clayton; Dunlap, William P. – Educational and Psychological Measurement, 1989
A Monte Carlo simulation examined the Type I error rates and power of four tests of the null hypothesis that a correlation matrix equals the identity matrix. The procedure of C. J. Brien and others (1984) was found to be the most powerful test while maintaining stable empirical alpha values. (SLD)
Descriptors: Correlation, Hypothesis Testing, Monte Carlo Methods, Power (Statistics)
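For a concrete, generic member of this family of tests, the sketch below wraps Bartlett's test of sphericity (not the Brien et al. procedure the study favored) in a small Type I error Monte Carlo; the sample size, number of variables, and replication count are illustrative.

```python
# Bartlett's test of sphericity (H0: population correlation matrix = identity)
# checked for empirical Type I error with uncorrelated normal data.
import numpy as np
from scipy import stats

def bartlett_sphericity(x):
    """Chi-square test of R = I for an n-by-p data matrix x."""
    n, p = x.shape
    r = np.corrcoef(x, rowvar=False)
    chi2 = -((n - 1) - (2 * p + 5) / 6) * np.log(np.linalg.det(r))
    df = p * (p - 1) / 2
    return chi2, stats.chi2.sf(chi2, df)

rng = np.random.default_rng(7)
n_reps, alpha, rejections = 2_000, 0.05, 0
for _ in range(n_reps):
    x = rng.standard_normal((50, 5))        # uncorrelated variables: H0 is true
    rejections += bartlett_sphericity(x)[1] < alpha

print(f"empirical Type I error: {rejections / n_reps:.3f}")   # should sit near 0.05
```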
Peer reviewed: Kromrey, Jeffrey D.; La Rocca, Michela A. – Journal of Experimental Education, 1995
The Type I error rates and statistical power of nine selected multiple comparison procedures were compared in a Monte Carlo study. The Peretz, Ryan, and Fisher-Hayter tests were the most powerful, and differences among these procedures were consistently small. Choosing among them might therefore be based on their computational complexity. (SLD)
Descriptors: Comparative Analysis, Computation, Monte Carlo Methods, Power (Statistics)
Peer reviewed: Bang, Jung W.; Schumacker, Randall E.; Schlieve, Paul L. – Educational and Psychological Measurement, 1998
The normality of the number distributions generated by various random-number generators was studied, focusing on the sample size at which each generator's output approximated a normal distribution. Findings suggest the steps that should be followed when using a random-number generator in a Monte Carlo simulation. (SLD)
Descriptors: Monte Carlo Methods, Sample Size, Simulation, Statistical Distributions
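One simple version of such a check is sketched below: draw standard normal deviates from a generator at increasing sample sizes and track how far the sample skewness and excess kurtosis sit from their theoretical values of zero. The generator, sample sizes, and replication count are illustrative choices, not those of the study.

```python
# How quickly do sample moments of generated normal deviates settle toward the
# theoretical values (skew 0, excess kurtosis 0) as sample size grows?
import numpy as np
from scipy import stats

rng = np.random.default_rng(123)
n_reps = 1_000

for n in (25, 100, 400, 1_600, 6_400):
    skews, kurts = [], []
    for _ in range(n_reps):
        z = rng.standard_normal(n)
        skews.append(stats.skew(z))
        kurts.append(stats.kurtosis(z))    # Fisher definition: 0 under normality
    print(f"n = {n:>5}: mean |skew| = {np.mean(np.abs(skews)):.3f}, "
          f"mean |excess kurtosis| = {np.mean(np.abs(kurts)):.3f}")
```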
Peer reviewed: Hutchinson, Susan R. – Journal of Experimental Education, 1998
The problem of chance model modifications under varying levels of sample size, model size, and severity of misspecification in confirmatory factor analysis models was examined through Monte Carlo simulations. Findings suggest that practitioners should exercise caution when interpreting modified models unless sample size is quite large. (SLD)
Descriptors: Change, Mathematical Models, Monte Carlo Methods, Sample Size
Peer reviewed: Coenders, Germa; Saris, Willem E.; Batista-Foguet, Joan M.; Andreenkova, Anna – Structural Equation Modeling, 1999
Illustrates that sampling variance can be very large when a three-wave quasi-simplex model is used to obtain reliability estimates. Also shows that, for the reliability parameter to be identified, the model assumes a Markov process. These problems are evaluated with both real and Monte Carlo data. (SLD)
Descriptors: Estimation (Mathematics), Markov Processes, Monte Carlo Methods, Reliability
Peer reviewed: McKenzie, Dean P.; Onghena, Patrick; Hogenraad, Robert; Martindale, Colin; MacKinnon, Andrew J. – Journal of Experimental Education, 1999
Explains a situation in which the standard nonparametric one-sample runs test gives anomalous results and describes a procedure that allows the maximum run length to be determined empirically through a Monte Carlo permutation test. Illustrates the new procedure with examples from suicide research and psycholinguistics. (SLD)
Descriptors: Monte Carlo Methods, Nonparametric Statistics, Psycholinguistics, Statistical Analysis
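The sketch below illustrates the general idea of such a procedure: compare the longest run in an observed binary sequence against the distribution of longest runs across random shuffles of that same sequence, i.e., a Monte Carlo permutation test. The sequence, permutation count, and one-sided p-value convention are illustrative assumptions.

```python
# Monte Carlo permutation test on the maximum run length of a binary sequence.
import numpy as np

def longest_run(seq):
    """Length of the longest run of identical consecutive values."""
    best = current = 1
    for prev, cur in zip(seq[:-1], seq[1:]):
        current = current + 1 if cur == prev else 1
        best = max(best, current)
    return best

rng = np.random.default_rng(2024)
observed = np.array([1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0])

obs_max = longest_run(observed)
n_perms = 10_000
perm_max = np.array([longest_run(rng.permutation(observed)) for _ in range(n_perms)])

# One-sided p-value: how often a shuffled sequence has a run at least this long.
p_value = (np.sum(perm_max >= obs_max) + 1) / (n_perms + 1)
print(f"observed longest run = {obs_max}, permutation p = {p_value:.3f}")
```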


