Showing 2,401 to 2,415 of 2,831 results
Shaver, James P. – 1992
A test of statistical significance is a procedure for determining how likely a result is, assuming the null hypothesis to be true, given randomization and a sample of size n (the size used in the study). Randomization, which refers to random sampling and random assignment, is important because it ensures the independence of observations, but it does…
Descriptors: Educational Research, Evaluation Problems, Hypothesis Testing, Probability
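Shaver's point about randomization can be illustrated with a permutation-style significance test: under a true null hypothesis and random assignment, every reshuffling of the group labels is equally likely, and the proportion of reshuffles producing a difference at least as extreme as the one observed estimates the p-value. The Python sketch below is a minimal illustration; the group sizes and scores are hypothetical, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical outcome scores for two randomly assigned groups (n = 10 each).
    treatment = rng.normal(52, 10, size=10)
    control = rng.normal(50, 10, size=10)

    observed = treatment.mean() - control.mean()
    pooled = np.concatenate([treatment, control])

    # Re-randomize group labels many times; under the null hypothesis every
    # assignment of the 20 scores to two groups of 10 is equally likely.
    n_perm = 10_000
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        diff = perm[:10].mean() - perm[10:].mean()
        if abs(diff) >= abs(observed):
            count += 1

    p_value = count / n_perm
    print(f"observed difference = {observed:.2f}, two-sided p = {p_value:.3f}")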
Neel, John H. – 1993
Induced probabilities have been largely ignored by educational researchers. Simply stated, if a new random variable is defined in terms of a first random variable, then the induced probability is the probability or density of the new random variable, which can be found by summation or integration over the appropriate domains of the original random…
Descriptors: Educational Research, Elementary Secondary Education, Equations (Mathematics), Mathematical Models
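A worked statement of the induced-probability idea in the entry above: if Y = g(X) is defined in terms of a first random variable X, its probability (discrete case) or density (continuous case, with g monotone and differentiable) follows by summing or integrating over the appropriate values of X. The notation here is generic, not taken from Neel's paper.

    P(Y = y) = \sum_{x :\, g(x) = y} P(X = x)

    f_Y(y) = f_X\!\left(g^{-1}(y)\right) \left| \frac{d}{dy}\, g^{-1}(y) \right|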
Bush, M. Joan; Schumacker, Randall E. – 1993
The feasibility of quick norms derived by the procedure described by B. D. Wright and M. H. Stone (1979) was investigated. Norming differences between traditionally calculated means and Rasch "quick" means were examined for simulated data sets of varying sample size, test length, and type of distribution. A 5 by 5 by 2 design with a…
Descriptors: Computer Simulation, Item Response Theory, Norm Referenced Tests, Sample Size
De Ayala, R. J. – 1993
Previous work on the effects of dimensionality on parameter estimation was extended from dichotomous models to the polytomous graded response (GR) model. A multidimensional GR model was developed to generate data in one, two, and three dimensions, with the two- and three-dimensional conditions varying in their interdimensional associations. Test…
Descriptors: Computer Simulation, Correlation, Difficulty Level, Estimation (Mathematics)
Chou, Tungshan; Wang, Lih-Shing – 1992
P. O. Johnson and J. Neyman (1936) proposed a general linear hypothesis testing procedure for testing the null hypothesis of no treatment difference in the presence of some covariates. This is generally known as the Johnson-Neyman (JN) technique. The need for the hypothesis testing step (often omitted) as originally presented and the…
Descriptors: Computer Simulation, Equations (Mathematics), Foreign Countries, Hypothesis Testing
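The Johnson-Neyman technique mentioned above is usually presented as a region-of-significance computation: in a model with a treatment indicator, a covariate, and their interaction, the treatment difference at covariate value x is b1 + b3*x, and the boundaries of the region are the values of x at which that difference just reaches the critical t value. The Python sketch below illustrates that computation on hypothetical data; it is not a reproduction of Chou and Wang's procedure or their simulation design.

    import numpy as np
    import pandas as pd
    from scipy import stats
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)

    # Hypothetical data: outcome y, covariate x, binary treatment g.
    n = 120
    x = rng.normal(0, 1, n)
    g = rng.integers(0, 2, n)
    y = 0.3 * g + 0.5 * x + 0.6 * g * x + rng.normal(0, 1, n)
    df = pd.DataFrame({"y": y, "x": x, "g": g})

    fit = smf.ols("y ~ g * x", data=df).fit()
    b = fit.params            # coefficients 'g', 'x', 'g:x'
    V = fit.cov_params()

    b1, b3 = b["g"], b["g:x"]
    v11, v13, v33 = V.loc["g", "g"], V.loc["g", "g:x"], V.loc["g:x", "g:x"]
    t2 = stats.t.ppf(0.975, fit.df_resid) ** 2

    # Boundaries where the group difference b1 + b3*x is just significant:
    # solve (b1 + b3*x)^2 = t2 * (v11 + 2*v13*x + v33*x^2) for x.
    # Complex roots mean the difference keeps the same status over the whole range.
    coefs = [b3**2 - t2 * v33, 2 * (b1 * b3 - t2 * v13), b1**2 - t2 * v11]
    print("Johnson-Neyman boundaries:", np.roots(coefs))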
Hsiung, Tung-Hsing; Olejnik, Stephen – 1994
This study investigated the robustness of the James second-order test (James, 1951; Wilcox, 1989) and the univariate F test under a two-factor fixed-effects analysis of variance (ANOVA) model in which cell variances were heterogeneous and/or distributions were nonnormal. With computer-simulated data, Type I error rates and statistical power for the…
Descriptors: Analysis of Variance, Computer Simulation, Estimation (Mathematics), Interaction
Brown, Mary M.; Brown, Scott W. – 1990
An issue facing researchers who study very select populations is how to obtain reliability estimates on instruments. When the populations and the resulting samples are very small and select, obtaining reliability estimates becomes very difficult. As a result, many researchers ignore reliability concerns and forge ahead with data…
Descriptors: Estimation (Mathematics), Higher Education, Likert Scales, Measurement Techniques
Sawilowsky, Shlomo S.; Hillman, Stephen B. – 1991
Psychology studies often have low statistical power. Sample size tables, as given by J. Cohen (1988), may be used to increase power, but they are based on Monte Carlo studies of relatively "tame" mathematical distributions, as compared to psychology data sets. In this study, Monte Carlo methods were used to investigate Type I and Type II…
Descriptors: Mathematical Models, Monte Carlo Methods, Power (Statistics), Psychological Studies
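A generic sketch of the kind of Monte Carlo check described above: draw repeated samples from a non-normal ("untame") population, apply the usual two-sample t test, and tabulate how often it rejects with and without a true shift. The exponential population, sample size, and shift below are illustrative assumptions, not the authors' conditions.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n, reps, alpha = 30, 5000, 0.05

    def rejection_rate(shift):
        """Proportion of two-sample t tests rejecting H0 when group 2 is shifted."""
        rejections = 0
        for _ in range(reps):
            # Skewed (exponential) populations as a stand-in for non-normal data.
            a = rng.exponential(1.0, n)
            b = rng.exponential(1.0, n) + shift
            if stats.ttest_ind(a, b).pvalue < alpha:
                rejections += 1
        return rejections / reps

    print("Type I error rate:", rejection_rate(0.0))
    print("Power at shift 0.5:", rejection_rate(0.5))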
Kim, Sooyeon; von Davier, Alina A.; Haberman, Shelby – ETS Research Report Series, 2006
This study addresses the sampling error and linking bias that occur with small and unrepresentative samples in a non-equivalent groups anchor test (NEAT) design. We propose a linking method called the "synthetic function," which is a weighted average of the identity function (the trivial equating function for forms that are known to be…
Descriptors: Equated Scores, Sample Size, Test Items, Statistical Bias
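The snippet above is cut off before naming the second component of the weighted average; assuming it is the equating function estimated from the small samples, the synthetic function would take the form

    e_{\mathrm{syn}}(x) = w\,\hat{e}(x) + (1 - w)\,x, \qquad 0 \le w \le 1,

where x itself is the identity function and w controls how much weight the small-sample estimate receives. The symbols here are illustrative, not the authors' notation.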
Carifio, James; And Others – 1990
Possible bias due to sampling problems or low response rates has been a troubling "nuisance" variable in empirical research since seminal and classical studies were done on these problems at the beginning of this century. Recent research suggests that: (1) earlier views of the alleged bias problem were misleading; (2) under a variety of fairly…
Descriptors: Data Collection, Evaluation Methods, Research Problems, Response Rates (Questionnaires)
Olejnik, Stephen F.; Algina, James – 1983
Parametric analysis of covariance was compared to analysis of covariance with data transformed using ranks. Using a computer simulation approach, the two strategies were compared in terms of the proportion of Type I errors made and statistical power when the conditional distributions of errors were: (1) normal and homoscedastic, (2) normal and…
Descriptors: Analysis of Covariance, Control Groups, Data Collection, Error of Measurement
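A minimal sketch of the rank-transformation strategy being compared above: replace the outcome and the covariate by their ranks over the whole sample and run the ordinary parametric ANCOVA on those ranks (the rank-transform approach usually associated with Conover and Iman). The data, group effect, and error distribution below are hypothetical.

    import numpy as np
    import pandas as pd
    from scipy.stats import rankdata
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)

    # Hypothetical two-group design with one covariate and heavy-tailed errors.
    n = 40
    group = np.repeat([0, 1], n)
    x = rng.normal(0, 1, 2 * n)
    y = 0.4 * group + 0.7 * x + rng.standard_t(3, 2 * n)
    df = pd.DataFrame({"y": y, "x": x, "group": group})

    # Parametric ANCOVA on the raw scores.
    raw = smf.ols("y ~ group + x", data=df).fit()

    # Rank-transform ANCOVA: replace y and x by their ranks, then refit.
    df["ry"], df["rx"] = rankdata(df["y"]), rankdata(df["x"])
    ranked = smf.ols("ry ~ group + rx", data=df).fit()

    print("group effect p (raw):  ", raw.pvalues["group"])
    print("group effect p (ranks):", ranked.pvalues["group"])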
Hwang, Chi-en; Cleary, T. Anne – 1986
The results obtained from two basic types of test pre-equating were compared: item response theory (IRT) pre-equating and section pre-equating (SPE). The simulated data were generated from a modified three-parameter logistic model with a constant guessing parameter. Responses of two replication samples of 3000 examinees on two 72-item…
Descriptors: Computer Simulation, Equated Scores, Latent Trait Theory, Mathematical Models
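For context on the data generation described above, a three-parameter logistic (3PL) model with a constant guessing parameter gives the probability of a correct response as c + (1 - c) / (1 + exp(-1.7 a_j (theta_i - b_j))). The sketch below generates such responses for 3,000 examinees on 72 items; the parameter distributions, the value c = 0.2, and the 1.7 scaling constant are assumptions, and whatever modification Hwang and Cleary made to the model is not specified in the snippet.

    import numpy as np

    rng = np.random.default_rng(4)

    n_examinees, n_items = 3000, 72
    c = 0.2                                    # constant guessing parameter (assumed value)
    a = rng.lognormal(0.0, 0.3, n_items)       # item discriminations (assumed distribution)
    b = rng.normal(0.0, 1.0, n_items)          # item difficulties (assumed distribution)
    theta = rng.normal(0.0, 1.0, n_examinees)  # examinee abilities

    # Three-parameter logistic probability of a correct response.
    z = a * (theta[:, None] - b)
    p = c + (1 - c) / (1 + np.exp(-1.7 * z))
    responses = (rng.random((n_examinees, n_items)) < p).astype(int)
    print(responses.shape, responses.mean())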
Cummings, Corenna C. – 1982
The accuracy and variability of 4 cross-validation procedures and 18 formulas were compared with respect to their ability to estimate the population multiple correlation and the validity of the sample regression equation in the population. The investigation included two types of regression, multiple and stepwise; three sample sizes, N = 30, 60, 120;…
Descriptors: Correlation, Error of Measurement, Mathematical Formulas, Multiple Regression Analysis
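One of the simplest cross-validation procedures of the kind compared above is split-sample cross-validation: fit the regression equation on a screening sample and correlate its predictions with observed scores in an independent calibration sample. The sketch below illustrates the idea at N = 60 with a hypothetical three-predictor population; it is not one of the 18 formulas examined in the paper.

    import numpy as np

    rng = np.random.default_rng(5)

    # Hypothetical population with 3 predictors and known weights.
    def draw(n):
        X = rng.normal(0, 1, (n, 3))
        y = X @ np.array([0.5, 0.3, 0.0]) + rng.normal(0, 1, n)
        return X, y

    # Simple split-sample cross-validation at N = 60.
    X_train, y_train = draw(60)
    X_holdout, y_holdout = draw(60)

    # Fit the sample regression equation by least squares (intercept included).
    A = np.column_stack([np.ones(len(y_train)), X_train])
    beta = np.linalg.lstsq(A, y_train, rcond=None)[0]

    # Cross-validity: correlation of predicted with observed scores on new data.
    pred = np.column_stack([np.ones(len(y_holdout)), X_holdout]) @ beta
    print("sample R on training data:", np.corrcoef(A @ beta, y_train)[0, 1])
    print("cross-validated r:        ", np.corrcoef(pred, y_holdout)[0, 1])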
Fuchs, Lynn; And Others – 1981
Three related studies were conducted to examine the effects of variations in procedures used for curriculum-based assessment of reading proficiency: the first addressed the question of the influence of sample duration on the concurrent validity of the measure; the second addressed the question of the influence of sample duration on the level,…
Descriptors: Elementary Education, Item Banks, Learning Disabilities, Reading Ability
Maxwell, Scott E. – 1979
Arguments have recently been put forth that standard textbook procedures for determining the sample size necessary to achieve a certain level of power in a completely randomized design are incorrect when the dependent variable is fallible because they ignore measurement error. In fact, however, there are several correct procedures, one of which is…
Descriptors: Hypothesis Testing, Mathematical Formulas, Power (Statistics), Predictor Variables
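The measurement-error issue above is often framed through attenuation: if the dependent variable has reliability \rho_{XX'}, a standardized true-score effect \delta is observed as

    \delta_{\mathrm{obs}} = \delta\,\sqrt{\rho_{XX'}},

so the sample size read from a standard power table, which varies as 1/\delta_{\mathrm{obs}}^{2}, must be inflated by roughly 1/\rho_{XX'}. This is a standard attenuation argument offered here only for illustration; the snippet does not say which of the several correct procedures Maxwell develops.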