Showing 1 to 15 of 51 results
Peer reviewed
Merchie, Emmelien; Tuytens, Melissa; Devos, Geert; Vanderlinde, Ruben – Research Papers in Education, 2018
Evaluating teachers' professional development initiatives (PDI) is one of the main challenges for the teacher professionalisation field. Although different studies have focused on the effectiveness of PDI, the obtained effects and evaluative methods have been found to be widely divergent. By means of a narrative review, this study provides an…
Descriptors: Program Evaluation, Program Effectiveness, Faculty Development, Teacher Education Programs
Peer reviewed
Levin, Henry M.; Belfield, Clive – Journal of Research on Educational Effectiveness, 2015
Cost-effectiveness analysis is rarely used in education. When it is used, it often fails to meet methodological standards, especially with regard to cost measurement. Although there are occasional criticisms of these failings, we believe that it is useful to provide a listing of the more common concerns and how they might be addressed. Based upon…
Descriptors: Cost Effectiveness, Comparative Analysis, Validity, Educational Policy
Peer reviewed
PDF on ERIC
Stoneberg, Bert D. – Practical Assessment, Research & Evaluation, 2015
Public school critics often point to rising expenditures and relatively flat test scores to justify their school reform agendas. The claims are flawed because their analyses fail to account for the difference in data types between dollars (ratio) and test scores (interval). A cost-benefit analysis using dollars as a common metric for both costs…
Descriptors: Public Education, Cost Effectiveness, Input Output Analysis, Educational Policy
Peer reviewed
Fryer, Marilyn – Creativity Research Journal, 2012
This article explores a number of key issues with regard to the measurement of creativity in the course of conducting psychological research or when applying various evaluation measures. It is argued that, although creativity is a fuzzy concept, it is no more difficult to investigate than other fuzzy concepts people tend to take for granted. At…
Descriptors: Creativity, Educational Research, Psychological Studies, Evaluation Methods
Peer reviewed
Bearman, Margaret; Smith, Calvin D.; Carbone, Angela; Slade, Susan; Baik, Chi; Hughes-Warrington, Marnie; Neumann, David L. – Higher Education Research and Development, 2012
Systematic review methodology can be distinguished from narrative reviews of the literature through its emphasis on transparent, structured and comprehensive approaches to searching the literature and its requirement for formal synthesis of research findings. There appears to be relatively little use of the systematic review methodology within the…
Descriptors: Higher Education, Literature Reviews, Research Methodology, Performance Factors
Ackerman, Matthew; Egalite, Anna J. – Program on Education Policy and Governance, 2015
When lotteries are infeasible, researchers must rely on observational methods to estimate charter effectiveness at raising student test scores. Considerable attention has been paid to observational studies by the Stanford Center for Research on Education Outcomes (CREDO), which have analyzed charter performance in 27 states. However, the…
Descriptors: Charter Schools, Observation, Special Education, Lunch Programs
Peer reviewed
Farmer, Sybil E.; Wood, Duncan; Swain, Ian D.; Pandyan, Anand D. – International Journal of Rehabilitation Research, 2012
Systematic reviews are used to inform practice, and develop guidelines and protocols. A questionnaire to quantify the risk of bias in systematic reviews, the review paper assessment (RPA) tool, was developed and tested. A search of electronic databases provided a data set of review articles that were then independently reviewed by two assessors…
Descriptors: Outcome Measures, Interrater Reliability, Questionnaires, Literature Reviews
Raudenbush, Stephen – Carnegie Foundation for the Advancement of Teaching, 2013
This brief considers the problem of using value-added scores to compare teachers who work in different schools. The author focuses on whether such comparisons can be regarded as fair, or, in statistical language, "unbiased." An unbiased measure does not systematically favor teachers because of the backgrounds of the students they are…
Descriptors: Educational Research, Achievement Gains, Teacher Effectiveness, Comparative Analysis
Peer reviewed
Mislevy, Robert J.; Haertel, Geneva; Cheng, Britte H.; Ructtinger, Liliana; DeBarger, Angela; Murray, Elizabeth; Rose, David; Gravel, Jenna; Colker, Alexis M.; Rutstein, Daisy; Vendlinski, Terry – Educational Research and Evaluation, 2013
Standardizing aspects of assessments has long been recognized as a tactic to help make evaluations of examinees fair. It reduces variation in irrelevant aspects of testing procedures that could advantage some examinees and disadvantage others. However, recent attention to making assessment accessible to a more diverse population of students…
Descriptors: Testing Accommodations, Access to Education, Testing, Psychometrics
Peer reviewed
Camilli, Gregory – Educational Research and Evaluation, 2013
In the attempt to identify or prevent unfair tests, both quantitative analyses and logical evaluation are often used. For the most part, fairness evaluation is a pragmatic attempt at determining whether procedural or substantive due process has been accorded to either a group of test takers or an individual. In both the individual and comparative…
Descriptors: Alternative Assessment, Test Bias, Test Content, Test Format
Peer reviewed
Nowakowski, Jeri Ridings – Journal of MultiDisciplinary Evaluation, 2011
This article presents an interview with Ralph Tyler, which will be of interest to those entering the field of education as well as to those who have long made their home within it. In the interview, Dr. Tyler discusses work in education and educational evaluation spanning more than half a century. He describes issues…
Descriptors: Evaluation Methods, Interviews, Educational Research, Profiles
Peer reviewed
Bartram, Dave – International Journal of Testing, 2012
Internationalization is possible, but the objectives need careful consideration. It is noted that the majority of countries do not have any form of test quality procedure and that only a small number have reviews, registration, certification, or some combination of these approaches. Internationalization could provide benefits at the least by…
Descriptors: Test Reviews, Educational Research, Evaluation Criteria, Standardized Tests
Peer reviewed
Hung, Hsin-Ling; Altschuld, James W.; Lee, Yi-Fang – Evaluation and Program Planning, 2008
Although the Delphi is widely used, research on certain methodological issues is somewhat limited. After a brief introduction to the strengths, limitations, and methodological challenges of the technique, we share our experiences (as well as problems encountered) with an electronic Delphi of educational program evaluation (EPE) in the Asia-Pacific…
Descriptors: Delphi Technique, Program Evaluation, Research Methodology, Foreign Countries
Peer reviewed
Wong, Pia Lindquist; Glass, Ronald David – Yearbook of the National Society for the Study of Education, 2011
A central commitment for professional development schools (PDSs) is to link preservice teacher preparation and in-service teacher professional development with improved learning outcomes for pupils. PDSs are expected to improve student achievement in two primary ways: (1) by enriching and intensifying the learning environment through professional…
Descriptors: Student Teachers, Professional Development Schools, Mentors, Academic Achievement
Peer reviewed
Hagermoser Sanetti, Lisa M.; Kratochwill, Thomas R. – School Psychology Review, 2009
Treatment integrity (also referred to as "treatment fidelity," "intervention integrity," and "procedural reliability") is an important methodological concern in both research and practice because treatment integrity data are essential to drawing valid conclusions about treatment outcomes. Despite its relationship to validity, treatment…
Descriptors: Intervention, Research Methodology, Models, Validity