Peer reviewed
ERIC Number: ED656951
Record Type: Non-Journal
Publication Date: 2021-Sep-27
Pages: N/A
Abstractor: As Provided
ISBN: N/A
ISSN: N/A
EISSN: N/A
Available Date: N/A
Measuring the Fidelity of Implementation of Instructional Coaching: Current Approaches and New Directions
Jane Coggshall; Debbie Davidson-Gibbs; Andrew Wayne
Society for Research on Educational Effectiveness
Background: To understand and apply the results of multisite impact studies, it is necessary to know the extent to which the components of an intervention were implemented as designed in each context (e.g., Carroll et al., 2007; Hill, Corey, & Jacob, 2018). Rigorous and cost-effective measures of implementation fidelity can help researchers achieve four goals: 1) identify essential aspects of the intervention design, 2) rule out poor implementation, rather than poor design, as a reason for null effects, 3) conduct replication studies of effective programs, and 4) provide a road map for program leaders and practitioners to implement the intervention in a variety of settings. Unfortunately, published impact studies rarely describe their fidelity measures in sufficient detail (Hill & Erikson, 2019). This gap must be addressed in studies of instructional coaching interventions in particular, for at least three reasons. First, there is accumulating evidence that coaching can address urgent student needs (Kraft, Blazar, & Hogan, 2017; Garrett, Citkowicz, & Williams, 2019; Allen et al., 2015). Second, coaching is increasingly used on its own and in tandem with other professional development (PD) to improve instructional practices (Rebora, 2019). Third, coaching is inherently difficult to scale with fidelity, as it involves individualization and situational judgment (e.g., to gain teachers' trust and ensure useful reflection). In fact, in their meta-analysis of studies of the impact of instructional coaching, Kraft et al. (2017) noted that studies providing coaching to 100 or fewer teachers yielded twice the impact on achievement of studies with more than 100 teachers.
Purpose and Overview: This paper advances methods for measuring the fidelity of instructional coaching so that future impact studies can better inform policy and practice. We purposively select four coaching models that have been studied in RCTs in the past five years yet differ in key ways (e.g., content focus, teacher experience level). For each study we explain how fidelity was assessed and what was found. We then identify gaps that limit the measures' usefulness. Finally, based on this analysis, we outline a framework for assessing the fidelity of instructional coaching at scale in ways that better meet the four goals above.
The Sample of Four Programs: The first model reviewed is the instructional coaching program MyTeachingPartner-Secondary (MTP-S). Building on two earlier trials led by the program designers at the University of Virginia (e.g., Allen et al., 2011), we launched an independent IES-funded replication trial in 2019; this paper reports original fidelity results from that trial. The other three programs are the coaching portion of the Mathematics Knowledge for Teaching program (MKT-Video) (Garet et al., 2016), the New Teacher Center's (NTC) mentoring model (Young et al., 2017), and NTC's coaching model (Laguarda et al., 2020).
The Data About Each Program: For this abstract, we use the MTP-S study as an example to illustrate the data compiled from each study.
*Setting of the MTP-S study: 109 secondary teachers in 3 districts in 3 states (MD, TX, VA) participated, with 58 teachers and 12 coaches in the treatment group. The districts varied in size, urbanicity, poverty levels, and student racial backgrounds.
*Participants in the MTP-S study: Teacher participants taught either ELA or mathematics. Coaches were district- or school-based. All had taught for 5+ years and had some supervisory or coaching experience. Nearly all had served in the district for 2+ years.
*MTP-S intervention: For this trial, coaches and teachers complete 6-10 video-enabled coaching cycles each year for two years (see Exhibit 1 for an overview of each cycle). Coaches participate in two days of pre-service training focused on the program's instructional framework and three days focused on the coaching process. Coaches are trained, monitored, and supported by a "coach specialist" from the provider, Teachstone.
*Fidelity measures for the MTP-S study: Based on the theory of action (see Exhibit 2), we measured MTP-S coaching fidelity by identifying the key components of the intervention and of Teachstone's supports for local coaches (e.g., training, monitoring), developing indicators of adequate fidelity for each component, developing measures for each indicator, and determining a priori thresholds of fidelity, as shown in the resulting fidelity matrix in Exhibit 3. A key data source was the online video-sharing platform used by coaches and teachers, which captured coaching work products--specifically, classroom videos, coaches' prompts, teachers' written responses, and coaches' summary and action plans. The MTP-S study's fidelity findings appear in Exhibit 4.
Findings: We contrast the MTP-S study's fidelity measures with those used in the other three coaching studies across five fidelity domains described by Dane and Schneider (1998) and Dusenbury et al. (2003), among others: dosage (or exposure), adherence, participant responsiveness, quality of delivery, and differentiation. For a descriptive summary of the data, see Exhibit 5. We find that each RCT employed measures of coaching dosage and adherence. Only the MTP-S study included a fidelity measure of teacher responsiveness to coaching, and only the two NTC studies gauged the quality of delivery of coaching; those measures, however, were based on coach and teacher perceptions captured in surveys. Dosage and adherence were also the primary fidelity measures for the supports provided to coaches.
Conclusion: Using the evidence gathered, the paper illustrates several takeaways for those seeking the best way to measure the fidelity of coaching interventions. The most important is that measures of coaching can be complemented with measures of the supports for coaches. Such measures are easily implemented, and the fidelity of the supports for coaches is believed to undergird the effectiveness of instructional coaching, because coaching relies on the situational judgment of individuals more than other forms of PD do. Another important takeaway is that researchers may improve their methods and gain additional insight by adding measures of participant responsiveness, quality of delivery, and differentiation--three fidelity constructs noted in the literature as important yet given little attention in the four studies. These and other takeaways provide the basis for a robust and useful conceptualization of the fidelity of coaching, as well as practical measures researchers can draw upon.
Society for Research on Educational Effectiveness. 2040 Sheridan Road, Evanston, IL 60208. Tel: 202-495-0920; e-mail: contact@sree.org; Web site: https://www.sree.org/
Publication Type: Reports - Research
Education Level: Secondary Education
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: Society for Research on Educational Effectiveness (SREE)
Identifiers - Location: Maryland; Texas; Virginia
Grant or Contract Numbers: N/A
Author Affiliations: N/A