Rasmussen, Jeffrey Lee – Evaluation Review, 1985 (peer reviewed)
A recent study (Blair and Higgins, 1980) reported a power advantage for the Wilcoxon W test over Student's t-test when both are computed on samples from a common mixed-normal distribution. Results of the present study indicate that the t-test corrected for outliers shows a power curve superior to that of the Wilcoxon W.
Descriptors: Computer Simulation, Error of Measurement, Hypothesis Testing, Power (Statistics)
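The power comparison Rasmussen describes can be illustrated with a small Monte Carlo sketch; the contamination rate, outlier scale, effect size, and sample sizes below are illustrative assumptions, not the values used in the 1985 study, and the Wilcoxon W test for two independent samples is run here as the equivalent Mann-Whitney U:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def mixed_normal(n, shift, p=0.1, scale=5.0):
    """Contaminated ('mixed') normal: N(shift, 1) with prob 1-p, N(shift, scale^2) with prob p."""
    x = rng.normal(shift, 1.0, n)
    outlier = rng.random(n) < p
    x[outlier] = rng.normal(shift, scale, outlier.sum())
    return x

def power(test, shift, n=30, reps=2000, alpha=0.05):
    """Estimate power as the rejection rate over repeated simulated two-sample draws."""
    hits = 0
    for _ in range(reps):
        a = mixed_normal(n, 0.0)
        b = mixed_normal(n, shift)
        hits += test(a, b) < alpha
    return hits / reps

t_power = power(lambda a, b: stats.ttest_ind(a, b).pvalue, shift=1.0)
w_power = power(lambda a, b: stats.mannwhitneyu(a, b).pvalue, shift=1.0)
print(f"t-test power: {t_power:.2f}, Wilcoxon/Mann-Whitney power: {w_power:.2f}")
```

Under heavier contamination the rank test typically overtakes the plain t-test, which is the pattern Blair and Higgins reported; Rasmussen's contribution is that an outlier-corrected t-test can do better still.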
Jamieson, John – Educational and Psychological Measurement, 1995 (peer reviewed)
Computer simulations indicate that the correlation between baseline and change does not, by itself, invalidate the use of gain scores to measure change; when the negative correlation is accompanied by a decrease in variance from pretest to posttest, however, covariance is a superior measure of change. (SLD)
Descriptors: Analysis of Covariance, Change, Computer Simulation, Correlation
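The baseline-change pattern Jamieson examines can be demonstrated with a minimal simulation: when posttest scores regress toward the mean with reduced variance, baseline and gain are negatively correlated by construction. All parameters below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

pre = rng.normal(50, 10, n)
# Posttest regresses toward the mean with reduced variance (illustrative coefficients).
post = 50 + 0.6 * (pre - 50) + rng.normal(0, 6, n)
gain = post - pre

print("sd(pre)  =", pre.std().round(2))
print("sd(post) =", post.std().round(2))
print("corr(pre, gain) =", np.corrcoef(pre, gain)[0, 1].round(2))
```

The negative pretest-gain correlation here arises purely from the variance shrinkage, which is the situation in which the abstract says analysis of covariance outperforms gain scores.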
Morrison, Carol A.; Fitzpatrick, Steven J. – 1992
An attempt was made to determine which item response theory (IRT) equating method results in the least amount of equating error or "scale drift" when equating scores across one or more test forms. An internal anchor test design was employed with five different test forms, each consisting of 30 items, 10 in common with the base test and 5…
Descriptors: Comparative Analysis, Computer Simulation, Equated Scores, Error of Measurement
Jarjoura, David; Kolen, Michael J. – Journal of Educational Statistics, 1985 (peer reviewed)
An equating design in which two groups of examinees from slightly different populations are each administered a different test form sharing a subset of common items is widely used. This paper presents standard errors for an equipercentile equating procedure for this design, along with a simulation that verifies the equations for large samples. (Author/BS)
Descriptors: Computer Simulation, Equated Scores, Error of Measurement, Estimation (Mathematics)
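For context, the basic equipercentile mapping underlying this design matches percentile ranks between two score distributions. The single-group sketch below omits the common-item machinery and the standard errors that are the paper's actual contribution, and the score distributions are made up:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(20, 5, 1000).round()   # form X scores (hypothetical)
y = rng.normal(22, 4, 1000).round()   # form Y scores (hypothetical)

def equipercentile(score, x, y):
    """Map a form-X score onto the form-Y scale by matching percentile ranks."""
    p = (x <= score).mean()           # percentile rank of the score in X
    return np.quantile(y, p)          # the Y score at that same percentile rank

print(equipercentile(20, x, y))       # a mid-range X score lands near the Y center
```

Because the mapping depends on sample quantiles, equated scores carry sampling error, which is exactly what the standard errors derived in the paper quantify.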
Skaggs, Gary; Lissitz, Robert W. – 1985
This study examined how four commonly used test equating procedures (linear, equipercentile, Rasch model, and three-parameter) would respond to situations in which the properties of the two tests being equated were different. Data for two tests plus an external anchor test were generated from a three-parameter model in which mean test differences…
Descriptors: Computer Simulation, Equated Scores, Error of Measurement, Goodness of Fit
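The data-generation step described above, drawing item responses from a three-parameter logistic (3PL) model, can be sketched as follows; the item parameters, sample size, and 1.7 scaling constant are illustrative assumptions, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(3)

def p3pl(theta, a, b, c):
    """Three-parameter logistic probability of a correct response."""
    return c + (1 - c) / (1 + np.exp(-1.7 * a * (theta - b)))

theta = rng.normal(0, 1, 500)      # examinee abilities
a, b, c = 1.2, 0.0, 0.2           # discrimination, difficulty, guessing (illustrative)
prob = p3pl(theta, a, b, c)
resp = rng.random(500) < prob      # simulated 0/1 item responses
print(resp.mean())                 # observed proportion correct
```

Repeating this per item for each form and the anchor test yields the simulated response matrices that the equating procedures are then applied to.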
Ackerman, Terry A. – 1986
The purpose of this paper is to present two new alternatives to current goodness-of-fit methodology. With the increased use of computerized adaptive testing (CAT), the ability to determine the accuracy of calibrated item parameter estimates is paramount. The first method applies a normalizing transformation to the logistic residuals to make…
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Simulation, Educational Research
Cornwell, John M.; Ladd, Robert T. – Educational and Psychological Measurement, 1993 (peer reviewed)
Simulated data typical of those from meta-analyses are used to evaluate the reliability, Type I and Type II errors, bias, and standard error of the meta-analytic procedures of Schmidt and Hunter (1977). Concerns about power, reliability, and Type I errors are presented. (SLD)
Descriptors: Bias, Computer Simulation, Correlation, Effect Size
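A bare-bones version of the Schmidt-Hunter procedure evaluated here weights study correlations by sample size and subtracts the expected sampling-error variance from the observed variance; the correlations and sample sizes below are hypothetical:

```python
import numpy as np

# Hypothetical study correlations and sample sizes
r = np.array([0.20, 0.35, 0.15, 0.30, 0.25])
n = np.array([50, 120, 80, 60, 200])

r_bar = np.sum(n * r) / np.sum(n)                  # sample-size-weighted mean r
var_obs = np.sum(n * (r - r_bar) ** 2) / np.sum(n) # weighted observed variance
var_err = (1 - r_bar ** 2) ** 2 / (n.mean() - 1)   # expected sampling-error variance
var_rho = max(var_obs - var_err, 0.0)              # estimated true variance of rho

print("mean r:", round(r_bar, 3), " residual variance:", round(var_rho, 4))
```

The subtraction step is the point of contention: when the sampling-error estimate is off, the residual variance (and hence conclusions about moderators) inherits the bias, which is the kind of behavior the simulated data in this study probe.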
Farley, John U.; Reddy, Srinivas K. – Multivariate Behavioral Research, 1987 (peer reviewed)
In an experiment manipulating artificial data in a factorial design, model misspecification and varying levels of error in measurement and in model structure are shown to have significant effects on LISREL parameter estimates in a modified peer influence model. (Author/LMO)
Descriptors: Analysis of Variance, Computer Simulation, Error of Measurement, Estimation (Mathematics)


