Publication Date
| Date Range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 2 |
| Since 2017 (last 10 years) | 3 |
| Since 2007 (last 20 years) | 19 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Computation | 22 |
| Intervals | 22 |
| Simulation | 7 |
| Statistical Analysis | 5 |
| Comparative Analysis | 4 |
| Correlation | 4 |
| Sample Size | 4 |
| Factor Analysis | 3 |
| Foreign Countries | 3 |
| Monte Carlo Methods | 3 |
| Scores | 3 |
Author
| Author | Count |
| --- | --- |
| Preacher, Kristopher J. | 2 |
| Brennan, Robert L. | 1 |
| Chan, Daniel W.-L. | 1 |
| Chan, Wai | 1 |
| Decady, Yves J. | 1 |
| Essid, Hedi | 1 |
| Finkelman, Matthew David | 1 |
| Holzer, D. | 1 |
| Jator, S. N. | 1 |
| Kelley, Ken | 1 |
| Kolen, Michael J. | 1 |
Publication Type
| Publication Type | Count |
| --- | --- |
| Reports - Evaluative | 22 |
| Journal Articles | 21 |
| Speeches/Meeting Papers | 1 |
Education Level
| Education Level | Count |
| --- | --- |
| High Schools | 1 |
| Higher Education | 1 |
Audience
| Audience | Count |
| --- | --- |
| Teachers | 1 |
Assessments and Surveys
| Assessment or Survey | Count |
| --- | --- |
| Peabody Picture Vocabulary… | 1 |
Rrita Zejnullahi; Larry V. Hedges – Research Synthesis Methods, 2024
Conventional random-effects models in meta-analysis rely on large-sample approximations instead of exact small-sample results. While random-effects methods produce efficient estimates, and confidence intervals for the summary effect have correct coverage, when the number of studies is sufficiently large, we demonstrate that conventional methods…
Descriptors: Robustness (Statistics), Meta Analysis, Sample Size, Computation
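For readers unfamiliar with the conventional procedure being critiqued, here is a minimal Python sketch of a DerSimonian-Laird random-effects summary with its usual large-sample normal confidence interval; the effect estimates and variances are made up, and the small-sample corrections the article examines are not reproduced.

```python
import numpy as np
from scipy import stats

# Illustrative study effect estimates and within-study variances (hypothetical data).
y = np.array([0.32, 0.15, 0.48, 0.21, 0.05])   # effect estimates
v = np.array([0.04, 0.09, 0.06, 0.05, 0.12])   # within-study sampling variances

# DerSimonian-Laird estimate of the between-study variance tau^2.
w_fixed = 1.0 / v
y_fixed = np.sum(w_fixed * y) / np.sum(w_fixed)
Q = np.sum(w_fixed * (y - y_fixed) ** 2)
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - (len(y) - 1)) / c)

# Random-effects summary effect and its large-sample (normal) confidence interval.
w = 1.0 / (v + tau2)
mu_hat = np.sum(w * y) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
z = stats.norm.ppf(0.975)
print(f"summary effect = {mu_hat:.3f}, 95% CI = ({mu_hat - z*se:.3f}, {mu_hat + z*se:.3f})")
```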
Valentina Gliozzi – Cognitive Science, 2024
We propose a simple computational model that describes potential mechanisms underlying the organization and development of the lexical-semantic system in 18-month-old infants. We focus on two independent aspects: (i) on potential mechanisms underlying the development of taxonomic and associative priming, and (ii) on potential mechanisms underlying…
Descriptors: Infants, Computation, Models, Cognitive Development
Ramsay, James; Wiberg, Marie; Li, Juan – Journal of Educational and Behavioral Statistics, 2020
Ramsay and Wiberg used a new version of item response theory that represents test performance over nonnegative closed intervals such as [0, 100] or [0, n] and demonstrated that optimal scoring of binary test data yielded substantial improvements in point-wise root-mean-squared error and bias over number right or sum scoring. We extend these…
Descriptors: Scoring, Weighted Scores, Item Response Theory, Intervals
Wagler, Amy E. – Journal of Educational and Behavioral Statistics, 2014
Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…
Descriptors: Hierarchical Linear Modeling, Cluster Grouping, Heterogeneous Grouping, Monte Carlo Methods
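As a rough illustration of putting a clustering effect on an interpretable scale, the sketch below computes a latent-scale intraclass correlation for a random-intercept logistic model and a simple Wald/delta-method interval; the variance-component estimate and its standard error are hypothetical, and this is not the interval procedure proposed in the article.

```python
import numpy as np
from scipy import stats

# Hypothetical estimates from a random-intercept logistic GLMM (not from the article).
sigma_u2 = 0.85      # estimated between-cluster variance
se_sigma_u2 = 0.22   # its estimated standard error

# Latent-scale intraclass correlation for a logistic link: ICC = s2 / (s2 + pi^2/3).
c = np.pi ** 2 / 3
icc = sigma_u2 / (sigma_u2 + c)

# Delta-method standard error: d ICC / d s2 = c / (s2 + c)^2.
grad = c / (sigma_u2 + c) ** 2
se_icc = grad * se_sigma_u2

z = stats.norm.ppf(0.975)
print(f"ICC = {icc:.3f}, Wald 95% CI = ({icc - z*se_icc:.3f}, {icc + z*se_icc:.3f})")
```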
Kelley, Ken; Preacher, Kristopher J. – Psychological Methods, 2012
The call for researchers to report and interpret effect sizes and their corresponding confidence intervals has never been stronger. However, there is confusion in the literature on the definition of effect size, and consequently the term is used inconsistently. We propose a definition for effect size, discuss 3 facets of effect size (dimension,…
Descriptors: Intervals, Effect Size, Correlation, Questioning Techniques
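A minimal example of reporting an effect size with its confidence interval, in the spirit of the recommendation: Cohen's d for two independent groups with a common large-sample approximation to its standard error. The data are simulated; exact noncentral-t intervals are also available but not shown here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group1 = rng.normal(0.5, 1.0, size=40)   # simulated data; means and SDs are arbitrary
group2 = rng.normal(0.0, 1.0, size=45)

n1, n2 = len(group1), len(group2)
# Pooled standard deviation and Cohen's d (a standardized mean difference).
sp = np.sqrt(((n1 - 1) * group1.var(ddof=1) + (n2 - 1) * group2.var(ddof=1)) / (n1 + n2 - 2))
d = (group1.mean() - group2.mean()) / sp

# Large-sample approximation to the standard error of d (Hedges & Olkin form).
se_d = np.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
z = stats.norm.ppf(0.975)
print(f"d = {d:.3f}, approximate 95% CI = ({d - z*se_d:.3f}, {d + z*se_d:.3f})")
```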
Linting, Marielle; van Os, Bart Jan; Meulman, Jacqueline J. – Psychometrika, 2011
In this paper, the statistical significance of the contribution of variables to the principal components in principal components analysis (PCA) is assessed nonparametrically by the use of permutation tests. We compare a new strategy to a strategy used in previous research consisting of permuting the columns (variables) of a data matrix…
Descriptors: Intervals, Simulation, Statistical Significance, Factor Analysis
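A simplified sketch of the general idea, permuting each column independently to build a null distribution for variable contributions to the first principal component; the data are simulated, and the specific permutation strategies compared in the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated data: 100 observations, 6 variables (purely illustrative).
X = rng.normal(size=(100, 6))
X[:, :3] += rng.normal(size=(100, 1))   # give the first three variables a shared component

def pc1_loadings(data):
    """Correlations of each variable with the first principal component."""
    Z = (data - data.mean(0)) / data.std(0, ddof=1)
    _, _, vt = np.linalg.svd(Z, full_matrices=False)
    scores = Z @ vt[0]
    return np.array([np.corrcoef(Z[:, j], scores)[0, 1] for j in range(Z.shape[1])])

observed = np.abs(pc1_loadings(X))

# Null distribution: independently permute every column, destroying the correlation structure.
n_perm = 500
null = np.empty((n_perm, X.shape[1]))
for b in range(n_perm):
    Xp = np.column_stack([rng.permutation(X[:, j]) for j in range(X.shape[1])])
    null[b] = np.abs(pc1_loadings(Xp))

p_values = (null >= observed).mean(axis=0)
print(np.round(p_values, 3))
```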
Romano, Jeanine L.; Kromrey, Jeffrey D.; Owens, Corina M.; Scott, Heather M. – Journal of Experimental Education, 2011
In this study, the authors aimed to examine 8 of the different methods for computing confidence intervals around alpha that have been proposed to determine which of these, if any, is the most accurate and precise. Monte Carlo methods were used to simulate samples under known and controlled population conditions wherein the underlying item…
Descriptors: Intervals, Monte Carlo Methods, Rating Scales, Computation
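The study compares eight analytic interval methods under Monte Carlo simulation; none of them is reproduced here, but the sketch below shows one simple alternative, a nonparametric bootstrap percentile interval for coefficient alpha on simulated rating-scale data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_persons, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(2)
# Simulated 5-item scale data with a common factor (hypothetical).
true_score = rng.normal(size=(200, 1))
items = true_score + rng.normal(scale=1.0, size=(200, 5))

alpha_hat = cronbach_alpha(items)

# Nonparametric bootstrap percentile interval: resample persons with replacement.
boot = np.array([
    cronbach_alpha(items[rng.integers(0, len(items), len(items))])
    for _ in range(2000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"alpha = {alpha_hat:.3f}, bootstrap percentile 95% CI = ({lo:.3f}, {hi:.3f})")
```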
Jator, S. N. – International Journal of Mathematical Education in Science and Technology, 2010
A continuous representation of a hybrid method with three "off-step" points is developed via interpolation and collocation procedures, and used to obtain initial value methods (IVMs) for solving initial value problems. The IVMs are assembled into a single block matrix equation which is convergent and A-stable. We note that accuracy is improved by…
Descriptors: Intervals, Calculus, Mathematics Instruction, Matrices
Wang, Jianjun – Online Submission, 2010
The widely used Tukey's HSD index is not produced in the current version of SPSS (i.e., PASW Statistics, version 18), and a computer program named "HSD Calculator" has been chosen to address this problem. In comparison to hand calculation, the program does not require table checking, which eliminates potential concerns about the size of a…
Descriptors: Computers, Computer Software, Social Studies, Comparative Analysis
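For reference, the HSD threshold itself can be computed without table lookup, for example in Python with SciPy's studentized-range distribution; the group count, MSE, and means below are hypothetical, and this is not the "HSD Calculator" program.

```python
import numpy as np
from scipy.stats import studentized_range

# Hypothetical balanced one-way ANOVA summary: k groups, n per group, MSE from the ANOVA table.
k, n, mse, alpha = 4, 12, 2.35, 0.05
df_error = k * (n - 1)

# Critical value of the studentized range, then the HSD threshold for pairwise mean differences.
q_crit = studentized_range.ppf(1 - alpha, k, df_error)
hsd = q_crit * np.sqrt(mse / n)
print(f"q = {q_crit:.3f}, HSD = {hsd:.3f}")

# Any pair of group means differing by more than `hsd` is declared significantly different.
group_means = np.array([5.1, 6.8, 4.9, 7.4])   # illustrative means
diffs = np.abs(group_means[:, None] - group_means[None, :])
print(diffs > hsd)
```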
Tryon, Warren W.; Lewis, Charles – Journal of Educational and Behavioral Statistics, 2009
Tryon presented a graphic inferential confidence interval (ICI) approach to analyzing two independent and dependent means for statistical difference, equivalence, replication, indeterminacy, and trivial difference. Tryon and Lewis corrected the reduction factor used to adjust descriptive confidence intervals (DCIs) to create ICIs and introduced…
Descriptors: Statistical Analysis, Intervals, Differences, Computation
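A rough sketch of the inferential-confidence-interval idea for two independent means, using the original reduction factor from Tryon (2001); the corrected reduction factor introduced by Tryon and Lewis is not reproduced here, and the data are simulated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
g1 = rng.normal(10.0, 3.0, size=30)   # simulated independent groups (hypothetical data)
g2 = rng.normal(12.0, 3.0, size=35)

se1 = g1.std(ddof=1) / np.sqrt(len(g1))
se2 = g2.std(ddof=1) / np.sqrt(len(g2))

# Reduction factor from Tryon (2001): shrinks each descriptive CI so that
# non-overlap of the two ICIs corresponds to a significant mean difference.
E = np.sqrt(se1**2 + se2**2) / (se1 + se2)

t_crit = stats.t.ppf(0.975, df=len(g1) + len(g2) - 2)
ici1 = (g1.mean() - E * t_crit * se1, g1.mean() + E * t_crit * se1)
ici2 = (g2.mean() - E * t_crit * se2, g2.mean() + E * t_crit * se2)
print("ICI group 1:", np.round(ici1, 2))
print("ICI group 2:", np.round(ici2, 2))
print("statistically different:", ici1[1] < ici2[0] or ici2[1] < ici1[0])
```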
Essid, Hedi; Ouellette, Pierre; Vigeant, Stephane – Economics of Education Review, 2010
The objective of this paper is to measure the efficiency of high schools in Tunisia. We use a statistical data envelopment analysis (DEA)-bootstrap approach with quasi-fixed inputs to estimate the precision of our measure. To do so, we developed a statistical model serving as the foundation of the data generation process (DGP). The DGP is…
Descriptors: High Schools, Intervals, Statistical Data, Foreign Countries
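The article's DEA-bootstrap with quasi-fixed inputs is considerably more involved; as background, the sketch below solves a plain input-oriented CCR efficiency model by linear programming for a handful of hypothetical schools.

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0: min theta s.t. X@lam <= theta*x0, Y@lam >= y0."""
    n = X.shape[1]                       # number of decision-making units (columns)
    c = np.r_[1.0, np.zeros(n)]          # minimize theta; the lambdas carry zero cost
    # Input constraints:  X @ lam - theta * x0 <= 0
    A_in = np.hstack([-X[:, [j0]], X])
    # Output constraints: -Y @ lam <= -y0
    A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(X.shape[0]), -Y[:, j0]]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# Hypothetical schools: input rows (teachers, budget) and an output row (pass rate); columns are schools.
X = np.array([[20., 30., 25., 40.],
              [5.0, 8.0, 6.0, 9.0]])
Y = np.array([[70., 85., 80., 90.]])
scores = [dea_input_efficiency(X, Y, j) for j in range(X.shape[1])]
print(np.round(scores, 3))
```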
Zhang, Guangjian; Preacher, Kristopher J.; Luo, Shanhong – Multivariate Behavioral Research, 2010
This article is concerned with using the bootstrap to assign confidence intervals for rotated factor loadings and factor correlations in ordinary least squares exploratory factor analysis. Coverage performances of "SE"-based intervals, percentile intervals, bias-corrected percentile intervals, bias-corrected accelerated percentile…
Descriptors: Intervals, Sample Size, Factor Analysis, Least Squares Statistics
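A pared-down illustration of bootstrap percentile intervals for loadings in a one-factor model on simulated data, with no rotation; this sidesteps the rotation, factor-ordering, and acceleration issues the article actually studies.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(4)
# Simulated one-factor data for 6 observed variables (hypothetical loadings).
true_load = np.array([0.8, 0.7, 0.6, 0.5, 0.4, 0.3])
f = rng.normal(size=(300, 1))
X = f * true_load + rng.normal(scale=0.6, size=(300, 6))

fa = FactorAnalysis(n_components=1).fit(X)
ref = fa.components_.ravel()

# Nonparametric bootstrap: refit on resampled rows, align the sign with the reference solution.
boot = []
for _ in range(500):
    Xb = X[rng.integers(0, len(X), len(X))]
    lb = FactorAnalysis(n_components=1).fit(Xb).components_.ravel()
    if lb @ ref < 0:          # factor loadings are identified only up to sign
        lb = -lb
    boot.append(lb)
boot = np.array(boot)

lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
for j in range(6):
    print(f"loading {j}: {ref[j]:.2f}  percentile 95% CI ({lo[j]:.2f}, {hi[j]:.2f})")
```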
Finkelman, Matthew David – Applied Psychological Measurement, 2010
In sequential mastery testing (SMT), assessment via computer is used to classify examinees into one of two mutually exclusive categories. Unlike paper-and-pencil tests, SMT has the capability to use variable-length stopping rules. One approach to shortening variable-length tests is stochastic curtailment, which halts examination if the probability…
Descriptors: Mastery Tests, Computer Assisted Testing, Adaptive Testing, Test Length
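A toy number-correct version of the curtailment idea, stopping once the eventual pass/fail classification is nearly certain under a plug-in binomial model; the actual procedure conditions on the psychometric model and is not reproduced here, and all numbers are hypothetical.

```python
from scipy.stats import binom

def curtail(correct_so_far, items_seen, total_items, cutscore, gamma=0.95):
    """Stop early if the final pass/fail decision is already near-certain.

    Uses the running proportion correct as a plug-in estimate of the success
    probability on the remaining items (a simple number-correct illustration).
    """
    remaining = total_items - items_seen
    p_hat = correct_so_far / items_seen
    needed = cutscore - correct_so_far            # additional correct answers required to pass
    p_pass = binom.sf(needed - 1, remaining, p_hat) if needed > 0 else 1.0
    if p_pass >= gamma:
        return "stop: classify as master"
    if 1 - p_pass >= gamma:
        return "stop: classify as non-master"
    return "continue testing"

print(curtail(correct_so_far=18, items_seen=20, total_items=40, cutscore=24))
print(curtail(correct_so_far=11, items_seen=20, total_items=40, cutscore=24))
```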
Thomas, D. Roland; Zhu, PengCheng; Decady, Yves J. – Journal of Educational and Behavioral Statistics, 2007
The topic of variable importance in linear regression is reviewed, and a measure first justified theoretically by Pratt (1987) is examined in detail. Asymptotic variance estimates are used to construct individual and simultaneous confidence intervals for these importance measures. A simulation study of their coverage properties is reported, and an…
Descriptors: Intervals, Simulation, Regression (Statistics), Computation
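A short sketch of Pratt's measure itself, the product of a standardized regression coefficient and the corresponding zero-order correlation, which partitions R-squared across predictors; the data are simulated, and the asymptotic variance estimates and confidence intervals studied in the article are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)
# Simulated correlated predictors and a known linear outcome (hypothetical).
n = 500
X = rng.multivariate_normal([0, 0, 0], [[1, .4, .2], [.4, 1, .3], [.2, .3, 1]], size=n)
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(scale=1.0, size=n)

# Standardize, then get standardized regression coefficients and zero-order correlations.
Zx = (X - X.mean(0)) / X.std(0, ddof=1)
zy = (y - y.mean()) / y.std(ddof=1)
beta, *_ = np.linalg.lstsq(Zx, zy, rcond=None)
r = np.array([np.corrcoef(Zx[:, j], zy)[0, 1] for j in range(X.shape[1])])

# Pratt's measure: d_j = beta_j * r_j, which sums to R^2 across the predictors.
d = beta * r
r2 = d.sum()
print("Pratt importance:", np.round(d, 3), " relative:", np.round(d / r2, 3))
```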
Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient
Krishnamoorthy, K.; Xia, Yanping – Multivariate Behavioral Research, 2008
The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…
Descriptors: Statistical Analysis, Intervals, Sample Size, Testing
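As a point of comparison only, the sketch below finds a sample size for the overall F test of a squared multiple correlation using the common fixed-predictor noncentral-F approximation; the exact random-predictor (multivariate normal) calculations discussed in this article are different and not reproduced here.

```python
from scipy.stats import f as f_dist, ncf

def n_for_r2_test(rho2, k, alpha=0.05, power=0.80):
    """Smallest n whose overall F test of R^2 = 0 reaches the target power.

    Uses the fixed-predictor noncentral-F approximation with noncentrality
    lambda = n * rho^2 / (1 - rho^2); this is an approximation, not the
    exact multivariate-normal method of the article.
    """
    f2 = rho2 / (1 - rho2)
    n = k + 2
    while True:
        df1, df2 = k, n - k - 1
        crit = f_dist.ppf(1 - alpha, df1, df2)
        achieved = ncf.sf(crit, df1, df2, n * f2)
        if achieved >= power:
            return n, achieved
        n += 1

print(n_for_r2_test(rho2=0.10, k=3))
```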
