Publication Date
| Date range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 2 |
| Since 2007 (last 20 years) | 10 |
Descriptor
| Descriptor | Records |
| --- | --- |
| Evaluation Methods | 16 |
| Hypothesis Testing | 16 |
| Program Effectiveness | 16 |
| Statistical Analysis | 7 |
| Intervention | 5 |
| Program Evaluation | 5 |
| Correlation | 4 |
| Statistical Significance | 4 |
| Computation | 3 |
| Educational Research | 3 |
| Effect Size | 3 |
Author
| Author | Records |
| --- | --- |
| Porter, Kristin E. | 3 |
| Anderson, Judith I. | 1 |
| Blair, Kwang-Sun Cho | 1 |
| Cox, Pamela L. | 1 |
| Eagles, Munroe | 1 |
| Estes, Gary D. | 1 |
| Ewan, Eric E. | 1 |
| Eyman, James R. | 1 |
| Ferro, Jolenea B. | 1 |
| Friedman, Barry A. | 1 |
| Frye, Ann W. | 1 |
Publication Type
| Publication type | Records |
| --- | --- |
| Journal Articles | 10 |
| Reports - Research | 9 |
| Guides - Non-Classroom | 3 |
| Reports - Descriptive | 3 |
| Reports - Evaluative | 2 |
| Speeches/Meeting Papers | 2 |
| Information Analyses | 1 |
Education Level
| Education level | Records |
| --- | --- |
| Higher Education | 3 |
| Elementary Education | 1 |
| Postsecondary Education | 1 |
Audience
| Audience | Records |
| --- | --- |
| Researchers | 3 |
Laws, Policies, & Programs
| Law or program | Records |
| --- | --- |
| Elementary and Secondary… | 1 |
Assessments and Surveys
| Assessment or survey | Records |
| --- | --- |
| Defining Issues Test | 1 |
Porter, Kristin E. – Journal of Research on Educational Effectiveness, 2018
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
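The three Porter records in this list describe multiple testing procedures (MTPs) for controlling spurious findings when an intervention is tested across many outcomes, subgroups, time points, or treatment arms. As a hedged illustration of the general idea only (not Porter's specific procedures, and using invented p-values), the sketch below adjusts a family of p-values with statsmodels:

```python
# Illustration of a multiple testing procedure (MTP): adjusting a family of
# p-values so the chance of any spurious "significant" finding stays controlled.
# The p-values below are invented for demonstration only.
from statsmodels.stats.multitest import multipletests

# Hypothetical p-values from tests of one intervention on five outcomes.
raw_pvalues = [0.004, 0.020, 0.035, 0.049, 0.210]

for method in ("bonferroni", "holm"):
    reject, adjusted, _, _ = multipletests(raw_pvalues, alpha=0.05, method=method)
    print(method, [round(p, 3) for p in adjusted], list(reject))
```

Holm rejects at least as many hypotheses as Bonferroni while still controlling the familywise error rate, which is one reason the choice of MTP matters in evaluations with many outcomes.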
Porter, Kristin E. – Grantee Submission, 2017
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
Porter, Kristin E. – MDRC, 2016
In education research and in many other fields, researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
Ganesh, Gopala; Paswan, Audhesh; Sun, Qin – Marketing Education Review, 2015
Using data from a unique undergraduate marketing math course offered in both traditional and online formats, this study looks at four dimensions of course evaluation: overall evaluation, perceived competence, perceived communication, and perceived challenge. Results indicate that students rate traditional classes better on all four dimensions.…
Descriptors: Teaching Methods, Program Effectiveness, Conventional Instruction, Online Courses
Popa, Nicoleta Laura; Pauc, Ramona Loredana – Acta Didactica Napocensia, 2015
Dynamic assessment is currently discussed in educational literature as one of the most promising practices in stimulating learning among various groups of students, including gifted and potentially gifted students. The present study investigates effects of dynamic assessment on mathematics achievement among elementary school students, with…
Descriptors: Academically Gifted, Alternative Assessment, Student Evaluation, Mathematics Achievement
Riegle, Sandra E.; Frye, Ann W.; Glenn, Jason; Smith, Kirk L. – Online Submission, 2012
Teachers tasked with developing moral character in future physicians face an array of pedagogic challenges, among them identifying tools to measure progress in instilling the requisite skill set. One validated instrument for assessing moral judgment is the Defining Issues Test (DIT-2). Based on the work of Lawrence Kohlberg, the test's main…
Descriptors: Medical Education, Medical Students, Physicians, Statistical Analysis
Halderman, Brent L.; Eyman, James R.; Kerner, Lisa; Schlacks, Bill – Suicide and Life-Threatening Behavior, 2009
A three-stage paradigm for telephonically assessing suicidal risk and triaging suicidal callers as practiced in an Employee Assistance Program Call Center was investigated. The first hypothesis was that the use of the procedure would increase the probability that callers would accept the clinician's recommendations, evidenced by fewer police…
Descriptors: Employee Assistance Programs, Suicide, Probability, Telecommunications
Wood, Brenna K.; Blair, Kwang-Sun Cho; Ferro, Jolenea B. – Topics in Early Childhood Special Education, 2009
Using the five intervention elements described by Dunlap et al. (2006) as a guide, the authors of this article reviewed the functional behavioral assessment (FBA) and function-based intervention research of the past 17 years (1990-2007), focusing on a component analysis of FBA and function-based intervention procedures. Thirty-five studies were…
Descriptors: Intervention, Testing, Young Children, Functional Behavioral Assessment
Friedman, Barry A.; Cox, Pamela L.; Maher, Larry E. – Journal of Management Education, 2008
Group projects are an important component of higher education, and the use of peer assessment of students' individual contributions to group projects has increased. The researchers employed an expectancy theory approach and an experimental design in a field setting to investigate conditions that influence students' motivation to rate their peers'…
Descriptors: Research Design, Peer Evaluation, Student Motivation, Program Effectiveness
Madden, Gregory J.; Smethells, John R.; Ewan, Eric E.; Hursh, Steven R. – Journal of the Experimental Analysis of Behavior, 2007
This experiment was conducted to test the predictions of two behavioral-economic approaches to quantifying relative reinforcer efficacy. The normalized demand analysis suggests that characteristics of averaged normalized demand curves may be used to predict progressive-ratio breakpoints and peak responding. By contrast, the demand analysis holds…
Descriptors: Positive Reinforcement, Correlation, Prediction, Evaluation Methods
Howe, Holly L.; Hoff, Margaret B. – Evaluation and the Health Professions, 1981 (peer reviewed)
The sensitivity and simplicity of Wald's sequential analysis test in monitoring a preventive health care program are discussed. Data exemplifying the usefulness and expedience of employing sequential methods are presented. (Author/GK)
Descriptors: Evaluation Methods, Formative Evaluation, Hypothesis Testing, Preventive Medicine
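Howe and Hoff's summary turns on Wald's sequential analysis test. A minimal sketch of the textbook sequential probability ratio test (SPRT) for binary monitoring data appears below; the event rates, error levels, and observations are assumptions for illustration, not values from the article:

```python
import math

def sprt_decision(observations, p0=0.10, p1=0.20, alpha=0.05, beta=0.10):
    """Textbook Wald SPRT for Bernoulli data: decide between event rate p0
    (program performing as intended) and elevated rate p1, or keep sampling.
    The rates and error levels here are illustrative assumptions."""
    upper = math.log((1 - beta) / alpha)   # accept H1 when the LLR crosses this
    lower = math.log(beta / (1 - alpha))   # accept H0 when the LLR falls below this
    llr = 0.0
    for i, x in enumerate(observations, start=1):
        # Add this observation's contribution to the log-likelihood ratio.
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return f"stop at observation {i}: evidence favors p1"
        if llr <= lower:
            return f"stop at observation {i}: evidence favors p0"
    return "continue monitoring"

print(sprt_decision([0, 1, 0, 0, 1, 1, 0, 1, 1, 1]))
```

The attraction for program monitoring is that the test can stop as soon as the accumulated evidence is decisive, rather than waiting for a fixed sample size.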
Hanes, John C.; Hail, Michael – 1999
Many program evaluations involve some type of statistical testing to verify that the program has succeeded in accomplishing initially established goals. In many cases, this takes the form of null hypothesis significance testing (NHST) with t-tests, analysis of variance, or some form of the general linear model. This paper contends that, at least…
Descriptors: Change, Educational Indicators, Evaluation Methods, Hypothesis Testing
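Hanes and Hail's abstract points to null hypothesis significance testing (NHST) with t-tests, ANOVA, or the general linear model as the default way evaluators verify program goals. Purely to show that baseline practice (with simulated scores, not data from the paper), a two-sample t-test might look like:

```python
# A minimal NHST sketch: comparing program and comparison group outcomes with
# an independent-samples t-test. Scores are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
program_group = rng.normal(loc=72, scale=10, size=60)     # assumed program scores
comparison_group = rng.normal(loc=68, scale=10, size=60)  # assumed comparison scores

t_stat, p_value = stats.ttest_ind(program_group, comparison_group)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

The truncated abstract suggests the paper questions relying on this default alone; the sketch shows only the conventional approach under discussion.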
Estes, Gary D.; Anderson, Judith I. – 1978
An empirical study was conducted in order to obtain treatment effect estimates with the Special Regression model for groups in which there was no treatment. General mathematics test scores were obtained from 730 ninth graders in city schools somewhat similar to Title I schools, but in which no special treatments were given. Hypothetical…
Descriptors: Achievement Gains, Arithmetic, Compensatory Education, Control Groups
Katz, Richard S.; Eagles, Munroe – PS: Political Science and Politics, 1996 (peer reviewed)
Constructs a model that explains a large fraction of the variance in political science departmental rankings. Divides the objective predictors into two sets: one reflecting faculty quality ratings of department members, the other the effects of circumstances beyond a department's control. This model works well with most social science disciplines.…
Descriptors: Achievement Rating, Analysis of Variance, Causal Models, Credentials
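Katz and Eagles summarize a model that splits predictors of departmental rankings into two sets and asks how much of the variance each explains. As a generic, hedged sketch of that kind of nested-regression comparison (the variables and data below are fabricated, not the authors'), one can compare R² across nested ordinary least squares fits:

```python
# Generic nested-regression sketch: how much additional variance a second
# block of predictors explains beyond the first. All data are fabricated.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 200
faculty_quality = rng.normal(size=n)   # assumption: block 1 predictor
dept_size = rng.normal(size=n)         # assumption: block 2 predictor
ranking = 0.8 * faculty_quality + 0.3 * dept_size + rng.normal(scale=0.5, size=n)

X1 = faculty_quality.reshape(-1, 1)
X2 = np.column_stack([faculty_quality, dept_size])

r2_block1 = LinearRegression().fit(X1, ranking).score(X1, ranking)
r2_full = LinearRegression().fit(X2, ranking).score(X2, ranking)
print(f"R^2 block 1 only: {r2_block1:.3f}, full model: {r2_full:.3f}, "
      f"increment: {r2_full - r2_block1:.3f}")
```

The increment in R² from the second block is the usual summary of how much the added set of predictors contributes.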
Jackman, Robert W.; Siverson, Randolph M. – PS: Political Science and Politics, 1996 (peer reviewed)
Analyzes the National Research Council's rating of political science departments and discovers the ratings reflect two general sets of characteristics, the size and productivity of the faculty. Reveals that the quality and impact of faculty research is more important than the overall output. Includes tables of statistical data. (MJP)
Descriptors: Achievement Rating, Analysis of Variance, Credentials, Departments
