Publication Date
| Period | Records |
|---|---|
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 0 |
| Since 2007 (last 20 years) | 2 |
Descriptor
| Descriptor | Records |
|---|---|
| Simulation | 5 |
| Statistical Analysis | 5 |
| Comparative Analysis | 3 |
| Goodness of Fit | 3 |
| Item Analysis | 3 |
| Mathematical Models | 3 |
| Achievement Tests | 2 |
| Sampling | 2 |
| Teacher Effectiveness | 2 |
| Test Items | 2 |
| Academic Rank (Professional) | 1 |
Author
| Author | Records |
|---|---|
| Reckase, Mark D. | 5 |
| Guarino, Cassandra M. | 2 |
| Wooldridge, Jeffrey M. | 2 |
| Maxfield, Michelle | 1 |
| Stacy, Brian W. | 1 |
| Thompson, Paul | 1 |
Publication Type
| Publication Type | Records |
|---|---|
| Reports - Research | 4 |
| Journal Articles | 2 |
| Reports - General | 1 |
Guarino, Cassandra M.; Reckase, Mark D.; Stacy, Brian W.; Wooldridge, Jeffrey M. – Journal of Research on Educational Effectiveness, 2015
We study the properties of two specification tests that have been applied to a variety of estimators in the context of value-added measures (VAMs) of teacher and school quality: the Hausman test for choosing between student-level random and fixed effects, and a test for feedback (sometimes called a "falsification test"). We discuss…
Descriptors: Teacher Effectiveness, Educational Quality, Evaluation Methods, Tests
Guarino, Cassandra M.; Maxfield, Michelle; Reckase, Mark D.; Thompson, Paul; Wooldridge, Jeffrey M. – Education Policy Center at Michigan State University, 2014
Empirical Bayes (EB) estimation is a widely used procedure to calculate teacher value-added. It is primarily viewed as a way to make imprecise estimates more reliable. In this paper we review the theory of EB estimation and use simulated data to study its ability to properly rank teachers. We compare the performance of EB estimators with that of…
Descriptors: Teacher Evaluation, Bayesian Statistics, Comparative Analysis, Teacher Effectiveness
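The shrinkage idea behind the EB estimators discussed in the abstract above can be sketched briefly. This is a minimal illustration under simplifying assumptions (normal teacher effects, equal class sizes, method-of-moments prior variance), not the paper's actual estimator; all names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate noisy teacher-effect estimates (illustrative setup, not the paper's).
n_teachers, n_students = 50, 20
true_effects = rng.normal(0.0, 0.2, n_teachers)    # latent teacher value-added
noise_var = 0.5 ** 2 / n_students                  # sampling variance of a class mean
raw = true_effects + rng.normal(0.0, np.sqrt(noise_var), n_teachers)

# Empirical Bayes shrinkage: pull each raw estimate toward the grand mean,
# weighting by the estimated signal-to-total variance ratio.
grand_mean = raw.mean()
signal_var = max(raw.var(ddof=1) - noise_var, 0.0)  # method-of-moments prior variance
shrink = signal_var / (signal_var + noise_var)
eb = grand_mean + shrink * (raw - grand_mean)

# Shrunken estimates have a smaller spread than the raw ones.
assert eb.std() <= raw.std()
```

Because the shrinkage factor is strictly below one whenever sampling noise is present, EB estimates are compressed toward the mean, which is exactly why their reliability (and their ability to rank teachers) differs from the raw estimates the paper compares them against.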
Reckase, Mark D. – 1978
Five comparisons were made relative to the quality of estimates of ability parameters and item calibrations obtained from the one-parameter and three-parameter logistic models. The results indicate: (1) The three-parameter model fit the test data better in all cases than did the one-parameter model. For simulation data sets, multi-factor data were…
Descriptors: Comparative Analysis, Goodness of Fit, Item Analysis, Mathematical Models
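The one-parameter and three-parameter logistic models compared in the study above differ in their item response functions; a small sketch shows the key structural difference (the 3PL's guessing floor). The parameterization below is the standard IRT form with the 1.7 scaling constant; the specific values are illustrative, not from the study.

```python
import math

def p_3pl(theta, a, b, c):
    """Three-parameter logistic: discrimination a, difficulty b, guessing c."""
    return c + (1.0 - c) / (1.0 + math.exp(-1.7 * a * (theta - b)))

def p_1pl(theta, b):
    """One-parameter (Rasch-type) logistic: difficulty b only."""
    return p_3pl(theta, a=1.0, b=b, c=0.0)

# For a very low-ability examinee, the 3PL probability stays near the
# guessing level c, while the 1PL probability approaches zero.
low = -4.0
print(round(p_1pl(low, b=0.0), 3))                 # near 0
print(round(p_3pl(low, a=1.0, b=0.0, c=0.2), 3))   # near 0.2
```

The extra discrimination and guessing parameters are what let the 3PL fit multiple-choice test data more closely, at the cost of harder parameter estimation, which is the trade-off the comparisons in the report examine.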
Peer reviewed: Reckase, Mark D. – Journal of Educational Statistics, 1979
Since all commonly used latent trait models assume a unidimensional test, the applicability of the procedure to obviously multidimensional tests is questionable. This paper presents the results of the application of latent trait, traditional, and factor analyses to a series of actual and hypothetical tests that vary in factorial complexity.…
Descriptors: Achievement Tests, Factor Analysis, Goodness of Fit, Higher Education
Reckase, Mark D. – 1977
Latent trait model calibration procedures were used on data obtained from a group testing program. The one-parameter model of Wright and Panchapakesan and the three-parameter logistic model of Wingersky, Wood, and Lord were selected for comparison. These models and their corresponding estimation procedures were compared, using actual and simulated…
Descriptors: Achievement Tests, Adaptive Testing, Aptitude Tests, Comparative Analysis