Madison, Matthew J.; Bradshaw, Laine – Journal of Educational Measurement, 2018
The evaluation of intervention effects is an important objective of educational research. One way to evaluate the effectiveness of an intervention is to conduct an experiment that assigns individuals to control and treatment groups. In the context of pretest/posttest designed studies, this is referred to as a control-group pretest/posttest design.…
Descriptors: Intervention, Program Evaluation, Program Effectiveness, Control Groups
Peer reviewed
Leinhardt, Gaea; Seewald, Andrea Mar – Journal of Educational Measurement, 1981
In studying the effectiveness of different instructional programs, a criterion measure can favor one of the programs because there is greater overlap between the content covered on the test and in that program. This overlap can be measured using teacher estimates or teacher estimates combined with curriculum analysis. (Author/BW)
Descriptors: Criterion Referenced Tests, Curriculum, Elementary School Mathematics, Learning Disabilities
Peer reviewed
Tallmadge, G. Kasten – Journal of Educational Measurement, 1982
In a retrospective analysis of two databases assessing the validity of the norm-referenced model used to evaluate large-scale federal educational programs for disadvantaged children, gain estimates were shown to be approximately equal to estimates from a randomized control-group model. (Author/CM)
Descriptors: Comparative Analysis, Educational Assessment, Elementary Secondary Education, Federal Programs
Peer reviewed
Raths, James; Katz, Lilian G. – Journal of Educational Measurement, 1975
Reviews CIRCUS, a comprehensive battery of assessment devices designed for use in nursery schools and kindergartens to diagnose the instructional needs of individual children and to evaluate educational programs. (EVH)
Descriptors: Diagnostic Tests, Preschool Children, Preschool Tests, Program Evaluation
Peer reviewed
Fournier, Deborah M. – Journal of Educational Measurement, 1994
The "Program Evaluation Standards" supplies a useful framework for generating questions about any evaluation plan or report in order to assess its strengths and weaknesses. It is a valuable "how-to" for graduate students and professionals. This second edition incorporates changes in the field over the last decade. (SLD)
Descriptors: Evaluation Methods, Evaluation Research, Graduate Students, Guides
Peer reviewed
Forsyth, Robert A. – Journal of Educational Measurement, 1973
The article presents a model for school system evaluation. The usefulness of the indices from this model depends on their stability, and this study presents evidence on that stability when pupils and time-related factors are considered as sources of error. (Author/RK)
Descriptors: Correlation, Educational Quality, Models, Multiple Regression Analysis
Peer reviewed
Long, John V.; And Others – Journal of Educational Measurement, 1977
Estimates of reading ability for second through fourth grade Title I pupils were obtained through grade-level and out-of-level testing. Results show that the method of testing affects the students' scores, the identification of pupils needing remedial reading instruction, and evaluation of the reading program. (JKS)
Descriptors: Compensatory Education, Educationally Disadvantaged, Grade Equivalent Scores, Primary Education
Peer reviewed
Stallings, William M.; Smock, H. Richard – Journal of Educational Measurement, 1971
Although the pass-fail option proved to be popular with students, the faculty sample felt that it was being abused, and had somewhat negative attitudes toward the system. (AG)
Descriptors: Academic Standards, Grades (Scholastic), Longitudinal Studies, Participant Satisfaction
Peer reviewed
David, Jane L.; Pelavin, Sol H. – Journal of Educational Measurement, 1978
Gains of students in compensatory education programs are contrasted using fall-to-spring gains as opposed to fall-to-fall gains. Results indicate substantial loss for compensatory education students over the summer. Using fall-to-fall scores is recommended. (JKS)
Descriptors: Academic Achievement, Achievement Gains, Compensatory Education, Elementary Secondary Education
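The summer-loss pattern this abstract describes can be illustrated with a toy calculation; the scores below are invented for illustration and are not taken from the study.

```python
# Hypothetical scores for one compensatory-education student:
# fall of year 1, spring of year 1, and fall of year 2.
fall_y1, spring_y1, fall_y2 = 30.0, 40.0, 36.0

fall_to_spring_gain = spring_y1 - fall_y1  # looks like a 10-point gain
summer_loss = spring_y1 - fall_y2          # 4 points lost over the summer
fall_to_fall_gain = fall_y2 - fall_y1      # only 6 points actually retained

print(fall_to_spring_gain, summer_loss, fall_to_fall_gain)
```

With numbers like these, a fall-to-spring evaluation would overstate the program's effect by the amount of the summer loss, which is why the authors recommend fall-to-fall scores.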
Peer reviewed
Conklin, Jonathan E.; And Others – Journal of Educational Measurement, 1979
Three methods are presented for interpolating fall norms from data derived for tests administered in spring. One method used the midpoint between two spring administrations, the second adjusted for date of fall testing, while the third method used experimental data that showed slower growth rates during the summer months. (CTM)
Descriptors: Educational Testing, Growth Patterns, Norm Referenced Tests, Norms
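The three interpolation approaches this abstract describes can be sketched as follows; all scores, dates, and growth assumptions here are invented for illustration, not taken from the study.

```python
from datetime import date

# Mean scores from two consecutive spring administrations (hypothetical).
spring_prev = 42.0  # spring of grade g
spring_next = 50.0  # spring of grade g + 1

# Method 1: take the midpoint between the two spring administrations.
fall_midpoint = (spring_prev + spring_next) / 2

# Method 2: interpolate linearly, adjusted for the actual fall test date.
d_prev, d_next = date(2024, 5, 1), date(2025, 5, 1)
d_fall = date(2024, 10, 15)
frac = (d_fall - d_prev).days / (d_next - d_prev).days
fall_dated = spring_prev + frac * (spring_next - spring_prev)

# Method 3: assume slower (here, zero) growth over the 3 summer months,
# so the year's gain accrues only across the 9 school months.
school_months = 9
monthly_school_growth = (spring_next - spring_prev) / school_months
school_months_elapsed = 1.5  # school months between fall start and test date
fall_summer_adj = spring_prev + school_months_elapsed * monthly_school_growth

print(fall_midpoint, fall_dated, fall_summer_adj)
```

Under these invented numbers, the summer-adjusted norm (method 3) is the lowest of the three, consistent with the abstract's point that growth is slower over the summer months.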
Peer reviewed
Rakow, Ernest A.; And Others – Journal of Educational Measurement, 1978
This study estimated a component of within-school variance which has been overlooked in policy research studies and program evaluations performed to date: the effect on pupil performance of different teachers within a single school or program. Results indicated that teacher effects accounted for 30 to 40 percent of within-school variance. (Author/…
Descriptors: Academic Achievement, Analysis of Variance, Program Evaluation, Research Design
Gierl, Mark J.; Leighton, Jacqueline P.; Tan, Xuan – Journal of Educational Measurement, 2006
DETECT, the acronym for Dimensionality Evaluation To Enumerate Contributing Traits, is an innovative and relatively new nonparametric dimensionality assessment procedure used to identify mutually exclusive, dimensionally homogeneous clusters of items using a genetic algorithm ( Zhang & Stout, 1999). Because the clusters of items are mutually…
Descriptors: Program Evaluation, Cluster Grouping, Evaluation Methods, Multivariate Analysis
Peer reviewed
Lane, Suzanne; And Others – Journal of Educational Measurement, 1996
Evidence from test results of 3,604 sixth and seventh graders is provided for the generalizability and validity of the Quantitative Understanding: Amplifying Student Achievement and Reasoning (QUASAR) Cognitive Assessment Instrument, which is designed to measure program outcomes and growth in mathematics. (SLD)
Descriptors: Achievement Tests, Cognitive Processes, Elementary Education, Elementary School Students
