Showing all 13 results
Peer reviewed
Kidder, Louise H.; Fine, Michelle – New Directions for Program Evaluation, 1987
The use of qualitative measures in a quantitative framework can result in triangulation. Independently conducting qualitative and quantitative evaluations is less likely to cause triangulation and allows for greater discovery. Three uses of triangulation, research as storytelling, and problems with qualitative methods and phenomenological…
Descriptors: Evaluation Methods, Qualitative Research, Statistical Analysis
Peer reviewed
Reichardt, Charles S.; Gollob, Harry F. – New Directions for Program Evaluation, 1987
Four principles of taking statistical uncertainty into account when estimating and reporting effects are discussed. Means of implementing the principles, ways in which the principles are violated in practice, implications for the use of multiple methods, effect size, estimation techniques, and random and nonrandom uncertainty are described. (TJH)
Descriptors: Effect Size, Estimation (Mathematics), Program Evaluation, Statistical Analysis
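As a loose illustration of the first of these principles, reporting an effect together with its statistical uncertainty rather than as a bare point estimate, here is a minimal sketch in Python (scipy and the simulated groups are assumptions for illustration, not the authors' own materials):

```python
# Sketch: report an estimated effect with its uncertainty, not a point estimate alone.
# The two groups are simulated; the true mean difference is 1.0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
treatment = rng.normal(loc=5.0, scale=2.0, size=40)  # simulated program group
control = rng.normal(loc=4.0, scale=2.0, size=40)    # simulated comparison group

diff = treatment.mean() - control.mean()
se = np.sqrt(treatment.var(ddof=1) / len(treatment) + control.var(ddof=1) / len(control))
df = len(treatment) + len(control) - 2  # rough df; a Welch correction would be more careful
ci_low, ci_high = diff + np.array([-1, 1]) * stats.t.ppf(0.975, df) * se

print(f"estimated effect: {diff:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```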
Peer reviewed
Campbell, Donald T. – New Directions for Program Evaluation, 1986
Confusion about the meaning of validity in quasi-experimental research can be addressed by carefully relabeling types of validity. Internal validity can more aptly be termed "local molar causal validity." More tentatively, the "principle of proximal similarity" can be substituted for the concept of external validity. (Author)
Descriptors: Definitions, Quasiexperimental Design, Sampling, Social Science Research
Peer reviewed
Rindskopf, David – New Directions for Program Evaluation, 1986
Modeling the process by which participants are selected into groups, rather than adjusting for preexisting group differences, provides the basis for several new approaches to the analysis of data from nonrandomized studies. Econometric approaches, the propensity scores approach, and the relative assignment variable approach to the modeling of…
Descriptors: Effect Size, Experimental Groups, Intelligence Quotient, Mathematical Models
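A rough sketch of the propensity-score idea mentioned in the abstract above: model the selection process itself, then weight each case by the inverse of its estimated assignment probability. The data are simulated and statsmodels is an assumption, not the article's own analysis:

```python
# Sketch: model selection into groups (propensity scores) rather than
# adjusting for group differences after the fact. Data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.normal(size=500)                        # observed covariate driving selection
p_treat = 1 / (1 + np.exp(-0.8 * x))            # true selection process
treated = rng.binomial(1, p_treat)
y = 2.0 * treated + 1.5 * x + rng.normal(size=500)  # true treatment effect is 2.0

# Estimate the assignment model, then form inverse-probability weights.
ps_model = sm.Logit(treated, sm.add_constant(x)).fit(disp=0)
ps = ps_model.predict(sm.add_constant(x))
w = treated / ps + (1 - treated) / (1 - ps)

wls = sm.WLS(y, sm.add_constant(treated), weights=w).fit()
print(f"weighted effect estimate: {wls.params[1]:.2f}")  # close to 2.0
```

A naive unweighted comparison of group means on these data would be biased upward, because the covariate that drives selection also raises the outcome.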
Peer reviewed
Reichardt, Charles; Gollob, Harry – New Directions for Program Evaluation, 1986
Causal models often omit variables that should be included, use variables that are measured fallibly, and ignore time lags. Such practices can lead to severely biased estimates of effects. The discussion explains these biases and shows how to take them into account. (Author)
Descriptors: Effect Size, Error of Measurement, High Schools, Mathematical Models
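The first bias the abstract names, omitting a variable that should be included, is easy to reproduce in simulation. A minimal sketch (hypothetical data, not the article's models):

```python
# Sketch: omitted-variable bias in a causal model. The confounder z affects
# both x and y; leaving it out severely biases the estimated effect of x.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
z = rng.normal(size=1000)                       # confounder
x = 0.7 * z + rng.normal(size=1000)             # "cause" influenced by z
y = 1.0 * x + 2.0 * z + rng.normal(size=1000)   # true effect of x is 1.0

biased = sm.OLS(y, sm.add_constant(x)).fit()
full = sm.OLS(y, sm.add_constant(np.column_stack([x, z]))).fit()
print(f"omitting z:  {biased.params[1]:.2f}")   # inflated, roughly 1.9
print(f"including z: {full.params[1]:.2f}")     # close to the true 1.0
```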
Peer reviewed
Hedges, Larry V. – New Directions for Program Evaluation, 1984
The adequacy of traditional effect size measures for research synthesis is challenged. Analogues to analysis of variance and multiple regression analysis for effect sizes are presented. The importance of tests for the consistency of effect sizes in interpreting results, and problems in obtaining well-specified models for meta-analysis are…
Descriptors: Analysis of Variance, Effect Size, Mathematical Models, Meta Analysis
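The consistency test the abstract refers to can be sketched as a weighted homogeneity statistic over study effect sizes, in the spirit of the ANOVA analogue. The effect sizes and variances below are hypothetical:

```python
# Sketch: a homogeneity test for a set of study effect sizes. Under the
# null of a common effect, Q follows a chi-square with k-1 df.
import numpy as np
from scipy import stats

d = np.array([0.30, 0.45, 0.10, 0.60])   # standardized effect sizes from 4 studies
v = np.array([0.02, 0.03, 0.025, 0.04])  # their sampling variances

w = 1 / v
d_bar = np.sum(w * d) / np.sum(w)         # precision-weighted mean effect
Q = np.sum(w * (d - d_bar) ** 2)          # homogeneity statistic
p = stats.chi2.sf(Q, df=len(d) - 1)
print(f"pooled d = {d_bar:.2f}, Q = {Q:.2f}, p = {p:.3f}")
```

A significant Q warns against interpreting the pooled effect as a single common effect, which is the interpretive point the abstract raises.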
Peer reviewed
Sechrest, Lee, Ed. – New Directions for Program Evaluation, 1993
Two chapters of this issue consider critical multiplism as a research strategy with links to meta-analysis and generalizability theory. The unifying perspective it can provide for quantitative and qualitative evaluation is discussed. The third chapter explores meta-analysis as a way to improve causal inferences in nonexperimental data. (SLD)
Descriptors: Causal Models, Evaluation Methods, Generalizability Theory, Inferences
Peer reviewed
Fortune, Jim C.; McBee, Janice K. – New Directions for Program Evaluation, 1984
Twenty-nine steps necessary for data file preparation for secondary analysis are discussed. Database characteristics and planned use affect the complexity of the preparation. Required techniques (file verification, sample verification, file merger, data aggregation, file modification, and variable controls) and seven associated pitfalls are defined…
Descriptors: Computer Storage Devices, Data Analysis, Data Collection, Data Processing
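Three of the techniques named in the abstract, file verification, file merger, and data aggregation, can be sketched with pandas. The tiny data frames stand in for real source files and are purely hypothetical:

```python
# Sketch: verification, merger, and aggregation steps for secondary analysis.
import pandas as pd

students = pd.DataFrame({"student_id": [1, 2, 3], "school_id": ["A", "A", "B"]})
scores = pd.DataFrame({"student_id": [1, 2, 4], "score": [85, 90, 77]})

# File/sample verification: check record counts and key uniqueness first.
assert students["student_id"].is_unique, "duplicate keys in primary file"

# File merger: an outer join exposes unmatched records instead of hiding them.
merged = students.merge(scores, on="student_id", how="outer", indicator=True)
print(merged["_merge"].value_counts())  # reveals the two unmatched records

# Data aggregation: collapse to one row per school for the planned analysis.
print(merged.groupby("school_id")["score"].agg(["mean", "count"]))
```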
Peer reviewed
Myers, David E.; Rockwell, Richard C. – New Directions for Program Evaluation, 1984
Large-scale databases are an underutilized source of data for secondary analysis by researchers. Major government and private databases available to the public, the types of agencies developing them, and how to obtain and utilize them are described. A catalogue of 20 databases, with their characteristics and sources, is included. (BS)
Descriptors: Census Figures, Computers, Data Collection, Databases
Peer reviewed
McCleary, Richard; Riggs, James E. – New Directions for Program Evaluation, 1982
Time series analysis is applied to assess the temporary and permanent impact of the 1975 Australian Family Law Act on the number of divorces. The application and construct validity of the model are examined. (Author/PN)
Descriptors: Court Litigation, Demography, Divorce, Evaluation Methods
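An interrupted time-series design of this kind can be sketched as an ARIMA model with an intervention term. The monthly series below is simulated, not the Australian divorce data, and the specific model choice is an assumption:

```python
# Sketch: interrupted time-series analysis with a step intervention.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
n, break_point = 120, 60                            # ten years of monthly counts
step = (np.arange(n) >= break_point).astype(float)  # 0 before the law, 1 after
y = 100 + 25 * step + rng.normal(scale=5, size=n)   # simulated permanent shift

# AR(1) noise model plus the step function as an exogenous intervention term;
# replacing the step with a decaying pulse would model a temporary impact.
fit = ARIMA(y, exog=step, order=(1, 0, 0)).fit()
print(fit.params)  # the exog coefficient estimates the permanent impact (~25)
```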
Peer reviewed
Reichardt, Charles S., Ed.; Rallis, Sharon F., Ed. – New Directions for Program Evaluation, 1994
The eight articles of this issue examine the nature of differences that arise between qualitative and quantitative researchers in program evaluation in terms of goals and epistemologies. The origins of these differences and their consequences are explored. Authors represent both perspectives but do not defend their ideological turfs. (SLD)
Descriptors: Conflict, Epistemology, Evaluation Methods, Ideology
Peer reviewed
Conrad, Kendon J., Ed. – New Directions for Program Evaluation, 1994
The nine articles of this theme issue stem from an alcohol and drug abuse program that involved 14 projects, 10 of which began as randomized clinical trials. These papers describe implementation problems associated with experimentation in field research and the focus on ensuring internal validity. (SLD)
Descriptors: Alcohol Abuse, Drug Abuse, Evaluation Methods, Experiments
Peer reviewed
Bowering, David J. – New Directions for Program Evaluation, 1984
This case study describes how path analysis and causal modeling were used to assess the impact of federal research and development spending on Ph.D. production in science and engineering at leading research universities. The nature of the existing data, integrated into a single database from seven surveys, influenced the research methodology. (BS)
Descriptors: College Science, Databases, Doctoral Degrees, Engineering Education
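Path analysis of the kind the case study describes reduces to a set of ordinary regressions, one per endogenous variable. The funding-to-capacity-to-output structure below is a hypothetical stand-in for the study's actual model, with simulated data:

```python
# Sketch: path analysis as a pair of regressions; direct and indirect
# effects are read off the path coefficients.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
funding = rng.normal(size=300)                        # federal R&D spending (standardized)
capacity = 0.6 * funding + rng.normal(size=300)       # intervening variable
phd_output = 0.5 * capacity + 0.2 * funding + rng.normal(size=300)

# Each endogenous variable is regressed on its assumed causes.
p1 = sm.OLS(capacity, sm.add_constant(funding)).fit()
p2 = sm.OLS(phd_output, sm.add_constant(np.column_stack([funding, capacity]))).fit()

direct = p2.params[1]                      # funding -> output
indirect = p1.params[1] * p2.params[2]     # funding -> capacity -> output
print(f"direct {direct:.2f}, indirect {indirect:.2f}, total {direct + indirect:.2f}")
```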