Developing Human Capital through Instructional Leadership: Learning to Coach during Principal Preparation

Abstract

Purpose: Aspiring leaders need to be equipped to transform schools, with both the understanding and the skills to purposefully increase the implementation of a rigorous curriculum in every classroom as they learn to lead diverse schools. To provide a rigorous curriculum, principals need effective skills to grow teachers’ instructional and pedagogical competencies. The development of these necessary skills should begin during the principal preparation program with the candidate’s ability to demonstrate instructional coaching competencies. Methods: The purpose of this mixed methods research study was to determine whether there was an improvement in principal candidates’ performance on the linear combinations of the indicators across the four examined Evaluation Cycles and, in a qualitative phase, to pinpoint the significant improvements in principal candidates’ performance on the observed indicators. Findings: The quantitative data showed that as candidates received ongoing coaching specific to their needs, their ratings improved. The qualitative results further explained this finding. This article argues that transforming schools must include developing human capital through instructional leadership and that this work must begin during principal preparation. Implications: Principal preparation programs and school districts should emphasize the importance of principals’ instructional leadership skills, as principals are responsible for hiring, growing, and retaining effective teachers.

Share and Cite:

Almager, I., Cumby, S. and Almekdash, M. (2021) Developing Human Capital through Instructional Leadership: Learning to Coach during Principal Preparation. Open Journal of Leadership, 10, 169-192. doi: 10.4236/ojl.2021.102012.

1. Introduction

As diversity in schools continues to grow, campus leaders need skills that will enhance teachers’ competencies and advance the learning of all students. The definition of an effective principal has changed and requires leaders to be able to “manage staffing and buildings, serve as an instructional leader, teacher evaluator, data guru, and to be strategically positioned to daily support professional development to teachers, and to create safe, supportive learning environments,” (NYC Leadership Academy, 2016: p. 2). Northouse (2019) defines leadership as “a process whereby an individual influences a group of individuals to achieve a common goal,” (p. 5). For schools, a common goal is student outcomes.

In some areas of the United States there is a shortage of qualified principals and assistant principals; the problem is not the quantity of candidates, but the quality (Cordeiro & Cunningham, 2014). Often, quality, experienced principals retire or move to higher administrative roles. Fuller and Young’s (2009) thirteen-year study determined there was an annual average of 1504 principals who were newly employed in Texas public schools. From the 2009-10 to the 2014-15 school year, enrollment in Texas public schools increased by 384,221 students (Texas Education Agency, 2016). During this same time period, 335 new principal positions were created. The dilemma, as Eller (2008) states, is that fewer quality candidates are in line to fill the shoes of the experienced and effective principal. Additionally, school districts are seeking leaders who can demonstrate multiple competencies, including content and pedagogical knowledge and the ability to use data to drive instruction. To ensure maximum productivity and efficiency in administrative roles and duties, a plan for building capacity can provide readily available and qualified educators for vacant principal positions.

Changes in expectations for campus leaders require changes in principal preparation programs. Qualifications to become a principal at all levels involve university coursework to obtain a certification or master’s degree. Levine (2005) found that university programs are lacking in their preparation of effective school leaders. Levine’s findings also suggested that principal candidates become qualified through university coursework but are not prepared to be effective school leaders in reaching high levels of student performance. Barnett et al. (2012) emphasized the importance of a principal’s role in facilitating instructional improvements, which requires skills in building teacher capacity. However, many principal preparation programs are not addressing the teaching and learning component in their coursework (Barnett et al., 2012).

2. Statement of the Problem

Effective instructional leadership can and does occur in schools. Some schools have broken the trend in student achievement and are performing at exceptionally high levels. Specifically, there are schools with high-poverty populations, where more than 80% of students are identified as economically disadvantaged, yet they are reaching high levels of student performance (Texas Education Agency, 2015). The principals of these high-performing schools can apply leadership practices that include setting direction, developing people, and redesigning the organization (Leithwood et al., 2004).

The U.S. Department of Education (2010) identified a school as high poverty when 76 to 100 percent of students qualify for free or reduced-price lunch. There is evidence that low socioeconomic status is correlated with low achievement (Berliner, 2006; Maleck & Demaray, 2006; Milne & Plourde, 2006). Socioeconomic status (SES) is a major predictor of academic achievement (Caldwell & Ginther, 1996; Sirin, 2005). Through instructional leadership, aspiring leaders need to be equipped to transform schools with both the understanding and the skills to purposefully increase student achievement through the implementation of a rigorous curriculum in every classroom (Demos, 2009). To provide a rigorous curriculum, principals need effective skills to grow teachers’ instructional and pedagogical competencies. The development of these necessary skills should begin during the principal preparation program with the candidate’s ability to demonstrate instructional coaching competencies. Therefore, it is crucial for principal preparation programs to provide coursework that includes instructional coaching for campus leaders that is applicable in the field. If indeed the goal is to graduate aspiring leaders who can transform underperforming schools, then there needs to be a connection to instructional leadership (Cordeiro & Cunningham, 2014). This paper is focused on building instructional leaders who can coach and grow teachers during principal preparation. The purpose of this mixed methods research study was 1) to determine whether there was an improvement in principal candidates’ performance on the linear combinations of the indicators across the four examined Evaluation Cycles (EvCs) and 2) to pinpoint the significant improvements in principal candidates’ performance on each of the observed indicators using qualitative data.

3. Relevant Literature

3.1. Transformational Leadership

Transformational leadership provides leaders with the opportunity to set high expectations for student outcomes by transforming cultures and refocusing the school’s mission (Leithwood et al., 1999). However, the theory of transformational leadership lacks an explicit focus on curriculum and instruction (Cordeiro & Cunningham, 2014). Through the framework of transformational leadership, leaders can improve student outcomes by coaching teachers: using instructional leadership to facilitate growth, building human capital by effectively monitoring instructional practices, and creating a community of learners who can then distribute instructional leadership. According to Marks and Printy (2003), when transformational leadership and instructional leadership coexist in a blended form of leadership, the effect on schools, as measured by the quality of their instructional practices, improves students’ academic success.

Bennis and Nanus (1985) posited that transformational leadership is a process in which goals and abilities are advanced to achieve significant improvements through common interests and collective actions. Transformational school leaders continually pursue three fundamental goals: 1) helping staff members develop and maintain a collaborative, professional school culture; 2) fostering teacher development; and 3) helping teachers solve problems together more effectively (Leithwood et al., 1999).

Studies have found that principal leadership has a significant effect on student achievement in high-needs schools (Klar & Brewer, 2013; Knoeppel & Rinehart, 2010; Robinson et al., 2008; Davis et al., 2005). The relationship between the principal and his or her staff is important in goal orientation because higher performing schools have been shown to have faculty members who hold collective goals and more communication of expectations between teacher and principal, especially in curriculum and instruction. Principals of successful schools structure the school in a way that allows for more collaboration among stakeholders such as students, parents, teachers, and administration than lower performing schools do (Robinson et al., 2008; Klar & Brewer, 2013). Given (2008) posited that trust in leadership had the greatest correlation with self-efficacy and was the difference between organizational success and failure.

3.2. Instructional Leadership

“To improve teaching and learning successful leadership is a prerequisite,” (Cordeiro & Cunningham, 2014: p. 127). School leaders need to be able to translate the district’s curriculum into effective classroom practice (Giles et al., 2005). According to Demos (2009), principals are responsible for the implementation of the curriculum. Leaders of successful schools use student achievement data to modify, supplement or eliminate instruction and curriculum approaches that are not working for students (Picucci et al., 2004; Leithwood et al., 2004).

A meta-analysis conducted by Waters, Marzano and McNulty (2003) found a significant relationship between leadership and student achievement, stating that a quarter of the total school effects on student learning can be attributed to leadership. They also asserted that principals can negatively impact student outcomes when they “concentrate on the wrong school and/or classroom practices or miscalculate the magnitude or ‘order’ of the change they are attempting to implement,” (p. 5). Often, principals have little knowledge of the curriculum (Davis et al., 2005; Barnett et al., 2012). Marks and Printy (2003) claimed that successful principals seek out the expertise of teachers in developing their curriculum, thus not making principals the sole leaders of curriculum and instruction, but rather the “leaders of instructional leaders”. Klar and Brewer (2013) stated that principals of successful schools bring in assistant principals and instructional coaches who formally visit teachers’ classes and review lesson plans alongside the principals. Recent research has found that doing so gives teachers flexibility in their curriculum and provides them leadership opportunities (Klar & Brewer, 2013). They also pointed out that in high performing schools, principals often motivate by attending required workshops alongside teachers, not only to promote positive behaviors but to gain more knowledge of the curriculum themselves. To improve student achievement, instructional leaders must integrate their knowledge of the curriculum to effectively grow teachers (Klar & Brewer, 2013; Timperley, 2008). Maintaining student focus and identifying students’ needs, along with knowledge of pedagogical content, are vital components for instructional leaders in effectively modifying instruction and curriculum (Timperley, 2008).

Marks and Printy (2003) asserted that combining transformational and instructional leadership would significantly impact instructional practices and student achievement. They characterized effective instructional leaders as those who can improve instructional practices, facilitate teacher growth, capitalize on teachers’ knowledge and skills, set high expectations, monitor student progress, and create communities of learners who can then share instructional leadership duties.

3.3. Instructional Coaching

For the purpose of this research study, the following definition of an instructional coach was used. A coach is an individual who is well versed in content and pedagogy and who works directly with classroom teachers to improve student learning (Hull et al., 2009). Additionally, a coach takes on the role of instructional leader and must possess the ability to provide honest, timely, and factual feedback to advance teachers toward improving reflective practice and student outcomes (Almager, 2013). In a space where the goal is to improve teachers’ instructional practices, the educator delivering the coaching session should have a combination of content, pedagogical, and leadership skills in order to address adult behaviors that have not previously improved student achievement. Within this space, teachers’ knowledge of content, pedagogy, and culture is crucial as they learn to reflect on why students are not performing academically in their classrooms. This study includes the aspiring leader’s “ability to implement observation and feedback cycles and support teacher development through the evaluation process,” (Kraft & Gilmore, 2016). School principals should be involved in growing and coaching teachers and this building of capacity should not be separate from the evaluation process. It is important for instructional leaders to have the ability to provide honest and factual feedback during the lesson planning stage (Pre-Conference) and timely feedback after a lesson is delivered (Post-Conference) because this is where the coaching and guidance occurs. Knight (2006) posited that instructional coaches validate but also encourage change that focuses on quantifiable student achievement.

Waters et al. (2003) found there were certain practices associated with greater student outcomes. One example is drilling down to understand not only what instructional strategies are being used, but also what the research says about the impact of certain instructional strategies on certain groups of students, such as English Learners or those with educational gaps. Another is defining the role of the principal, depending on school size and level, from serving as the instructional leader to overseeing teacher leaders and assistant principals who may provide most of the instructional feedback to teachers. Moreover, knowledge of the instrument(s) used to provide lesson feedback is crucial to developing and sustaining effective structures for feedback. Some campus leaders believe that the coaching and growing of teachers cannot be conjoined with teacher evaluation. In some states, however, the evaluation system has expanded the role of the principal as evaluator to include observations using detailed rubrics, specific and explicit written feedback, and post-observation meetings to deliver feedback, which lends itself specifically to the purpose of growing teachers (Danielson, 2007).

Instructional coaching requires ongoing professional development along with data literacy and progress monitoring, work that is often left to teacher leaders who have minimal authority to create change. Therefore, it is crucial for campus principals to take the lead and the responsibility for teachers’ instructional growth. To guide this effort, school leaders need not only to gain the necessary skills for coaching, but also to recognize the role instructional leadership plays in transforming schools. Accordingly, it is crucial to begin this learning during principal preparation programs.

3.4. Texas Teacher Evaluation System

This study used the Texas Teacher Evaluation and Support System (T-TESS) structure to provide instructional feedback to new or struggling teachers who received coaching from aspiring leaders participating in a principal preparation program. T-TESS is the system used in Texas and was designed to support teachers through instructional leadership coaching. The instrument’s purpose is to capture teaching holistically, with constant feedback between teachers and students, while gauging the effectiveness of teachers through a consistent focus on how students respond to their teacher’s instructional practices (TEA, 2018). The system’s four domains (Planning, Instruction, Learning Environment, and Professional Practices and Responsibilities) focus on teachers and students together and include 16 different dimensions with specific descriptors for practice within five performance levels (Distinguished, Accomplished, Proficient, Developing, and Improvement Needed). T-TESS seeks to develop habits of continuous improvement leading to positive outcomes when appraisers and teachers focus on evidence-based feedback and make professional development decisions based on that feedback through ongoing dialogue and collaboration (TEA, 2018). In addition, the Texas evaluation system includes three components for teacher growth: a goal-setting and professional development plan; the evaluation cycle (including pre-conference, observation, and post-conference); and a student growth measure.

4. Methods

4.1. Research Design

Given the stated purpose of this study, a mixed methods research design was selected to integrate analysis of both statistical and naturalistic data in an explanatory sequential design predetermined to implement two distinct phases (Caruth, 2013; Creswell, 2014; Creswell & Clark, 2011). According to Creswell and Creswell (2018), an explanatory sequential design is “one in which the researcher first conducts quantitative research, analyzes the results and then builds on the results to explain them in more detail with qualitative research,” (p. 15). In the first phase of this sequential design, the quantitative data were given priority by being collected and analyzed at the onset. The second phase of qualitative data collection and analysis then followed to provide further explanation, to elaborate, and to clarify the significant results from the quantitative research phase (Creswell & Clark, 2011; Greene et al., 1989; Subedi, 2016). A connection was made through the research process of participant selection, where the data source of the quantitative phase guided the participant selection of the second, qualitative phase. A visual model of the explanatory sequential design procedures can be viewed in Figure 1. This design built on the strengths of both research methods while offsetting the weaknesses that would be visible if either were utilized as a single research design (Bryman, 2006; Caruth, 2013; Creswell & Clark, 2011).

Figure 1. Visual model for explanatory sequential design procedures.

During the first phase, the quantitative data were utilized to determine whether there was an improvement in the principal candidates’ performance on the linear combinations of the indicators across the four examined Evaluation Cycles (EvCs). The qualitative data in the second phase then pinpointed the significant improvements in principal candidates’ performance on each of the observed indicators. The qualitative results further explained the quantitative findings by offering richer insights, provided through specific evidence in greater depth and detail (Patton, 1990). Both the quantitative and qualitative results were integrated during the discussion of the outcomes.

4.2. Quantitative: First Phase

4.2.1. Sampling Procedures

The sample in the quantitative phase of this study was derived from data collected from two cohorts, totaling 54 cases of job-embedded principal candidates who were evaluated by trained raters based on observation of their post-conference coaching practices with teachers in their schools. Each candidate-teacher coaching interaction was considered a case. Each principal candidate was required to select two teachers who could benefit from coaching because the teacher was alternatively certified, a new teacher, teaching outside of certified content or in a new content area/grade level, or teaching students with previous low performance. The principal candidate and each teacher were to participate in four Evaluation Cycles (EvCs) during an internship to improve classroom instruction.

4.2.2. Data Collection

Ratings were obtained from four EvCs utilizing a rubric that measured the coaching performance of a principal candidate during the post-conference. The rubric ratings were based on a five-point scale, with a score of 1 being the lowest rating (Approaching), 3 being Proficient, and 5 being the highest rating (Exemplary). Seven indicators were measured: Organization/Pacing, Rapport, Self-Reflection, Lesson Questions, Previous Reinforcement, Previous Refinement, and Closure. The raters were trained in the rubric and then participated in inter-rater reliability checks on two occasions during the period in which the data were obtained. Each rater scored all four of the EvCs for their assigned principal candidates.

4.2.3. Data Analysis

For the quantitative phase, descriptive statistics and correlations of the rubric indicators were examined. To determine whether there was improvement in principal candidates’ performance on the linear combinations of the rubric indicators across the EvCs, a Multivariate Analysis of Variance (MANOVA) was used in which each of the four EvCs constituted a level of the independent variable and the observed rubric indicators constituted the dependent variables. The univariate F statistics were examined to detect statistically significant changes in candidates’ performance on each of the rubric indicators. Finally, the Scheffe post hoc test, a conservative approach to multiple comparisons of means, was used to infer mean changes and their significance across the EvCs for each of the rubric indicators.
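For readers who want to see this pipeline concretely, the sketch below shows one way the described sequence (descriptive statistics, indicator correlations, a one-way MANOVA across EvCs, univariate follow-ups, and Scheffe pairwise comparisons) could be run in Python. It is not the authors’ code: the file name, column names, and data layout are illustrative assumptions about how the rubric ratings might be organized.

```python
# A minimal sketch (not the study's actual code) of the analysis described above.
import itertools

import pandas as pd
from scipy import stats
from statsmodels.multivariate.manova import MANOVA

INDICATORS = ["organization_pacing", "rapport", "self_reflection",
              "questions", "reinforcement", "refinement", "closure"]

# Hypothetical layout: one row per candidate-teacher case per cycle, with an
# 'evc' column (1-4) and the seven indicator ratings on the 1-5 rubric scale.
df = pd.read_csv("post_conference_ratings.csv")

# Descriptive statistics (Table 1) and indicator correlations (Table 2).
print(df.groupby("evc")[INDICATORS].agg(["mean", "std"]))
print(df[INDICATORS].corr())

# One-way MANOVA: the four EvCs are levels of the independent variable and the
# seven indicators form the dependent-variable set.
formula = " + ".join(INDICATORS) + " ~ C(evc)"
print(MANOVA.from_formula(formula, data=df).mv_test())

def scheffe_pairs(values, groups, alpha=0.05):
    """Pairwise mean differences and whether each passes Scheffe's criterion."""
    levels = sorted(groups.unique())
    k, n_total = len(levels), len(values)
    group_vals = {g: values[groups == g] for g in levels}
    # Mean square error from the one-way ANOVA decomposition.
    sse = sum(((v - v.mean()) ** 2).sum() for v in group_vals.values())
    mse = sse / (n_total - k)
    f_crit = stats.f.ppf(1 - alpha, k - 1, n_total - k)
    results = []
    for a, b in itertools.combinations(levels, 2):
        va, vb = group_vals[a], group_vals[b]
        diff = vb.mean() - va.mean()
        se2 = mse * (1 / len(va) + 1 / len(vb))
        significant = diff ** 2 / se2 > (k - 1) * f_crit
        results.append((a, b, round(diff, 2), significant))
    return results

# Univariate ANOVAs and Scheffe post hoc comparisons per indicator.
for ind in INDICATORS:
    f_stat, p_val = stats.f_oneway(
        *[df.loc[df["evc"] == g, ind] for g in sorted(df["evc"].unique())])
    print(f"{ind}: F = {f_stat:.2f}, p = {p_val:.3f}")
    for a, b, diff, sig in scheffe_pairs(df[ind], df["evc"]):
        print(f"  EvC {a} vs EvC {b}: diff = {diff}, significant = {sig}")
```

The Scheffe criterion here is computed directly from the one-way ANOVA decomposition (a pairwise contrast is significant when its F statistic exceeds (k − 1) times the critical F), rather than relying on any package-specific post hoc routine.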

4.2.4. Quantitative Results

Table 1 shows the descriptive statistics of the examined EvCs on the observed evaluation indicators.

Table 1. Descriptive statistics of the examined evaluation indicators per EvC.

Table 2 shows the correlations among the observed evaluation indicators. The significant positive correlations among all the observed indicators suggested that an increase in candidates’ performance on one indicator was associated with an increase in candidates’ performance on the other indicators. This systematic relationship implied that as the candidates received ongoing coaching specific to their needs, their ratings improved.

The MANOVA test showed a statistically significant effect of the levels of the independent variable (EvCs) on the observed dependent variables, Wilks’ Λ = 0.77, F (21, 585) = 2.54, p < 0.05, η² = 0.08. These findings suggested that there was a statistically significant difference in candidates’ performance on the linear combinations of the indicators across the examined EvCs. The univariate F tests (ANOVAs) of between-subjects effects were all significant, as shown in Table 3, which in turn means that there was a statistically significant change in candidates’ performance on each of the indicators across the examined EvCs.
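These values are internally consistent with the multivariate partial eta squared that statistical packages typically derive from Wilks’ Lambda; assuming that is the statistic reported here, with p = 7 dependent variables and q = 3 hypothesis degrees of freedom (so s = min(p, q) = 3):

\[
\eta^{2}_{\text{partial}} \;=\; 1 - \Lambda^{1/s} \;=\; 1 - 0.77^{1/3} \;\approx\; 0.08 .
\]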

4.2.5. Scheffe Post Hoc Multiple Comparisons

Organization & Pacing. There was a mean score improvement of 0.43 in candidates’ performance on this evaluation indicator between EvC 1 and EvC 2, yet this improvement was not statistically significant. However, there was a statistically significant mean score improvement of 0.61 (Scheffe test, p < 0.05) between EvC 1 and EvC 3, and a statistically significant mean score improvement of 0.72 (Scheffe test, p < 0.05) between EvC 1 and EvC 4. These findings suggested that even though some improvement could be detected in Organization and Pacing between EvC 1 and EvC 2, noticeable improvement in candidates’ performance can be detected in EvC 3, and performance improves further in EvC 4 in comparison with the first EvC. All other comparisons on this evaluation indicator were not statistically significant (EvCs 2 and 3, 3 and 4, and 2 and 4).

Table 2. Evaluation indicators correlations.

Note. ** p < 0.01.

Table 3. Tests of between-subjects effects.

Rapport. There was a mean score improvement of 0.18 in candidates’ performance on this indicator between EvC 1 and EvC 2, yet this improvement was not statistically significant. However, there was a statistically significant mean score improvement of 0.56 (Scheffe test, p < 0.05) between EvC 1 and EvC 3, and a statistically significant improvement of 0.72 (Scheffe test, p < 0.05) in the mean score of this indicator between EvC 1 and EvC 4. These findings suggested that even though some improvement could be detected between EvC 1 and EvC 2, noticeable improvement in candidates’ performance on Rapport can be noted in EvC 3, and performance improves further in EvC 4 in comparison with EvC 1. Additionally, there was a statistically significant improvement of 0.54 (Scheffe test, p < 0.05) in the mean score of this indicator between EvC 2 and EvC 4. No statistically significant differences were detected between EvCs 2 and 3, and EvCs 3 and 4, on this indicator.

Self-reflection. There was a mean score improvement of 0.49 in candidates’ performance on this indicator between EvC 1 and EvC 2, yet this improvement was not statistically significant. However, there was a statistically significant mean score improvement of 0.98 (Scheffe test, p < 0.05) between EvC 1 and EvC 3, and a statistically significant mean score improvement of 0.94 (Scheffe test, p < 0.05) between EvC 1 and EvC 4. These findings suggested that even though some improvement could be detected between EvC 1 and EvC 2, noticeable improvement in candidates’ performance on this indicator can be detected in EvC 3 and 4 in comparison with the first EvC. It was also noted that among the examined EvCs, candidates had the highest mean score on this indicator in EvC 3. All other comparisons of this evaluation indicator were not statistically significant (EvCs 2 and 3, 3 and 4, 2 and 4).

Questions. There was a mean score improvement of 0.13 in candidates’ performance on this indicator between EvC 1 and EvC 2, yet this improvement was not statistically significant. However, there was a statistically significant mean score improvement of 0.80 (Scheffe test, p < 0.05) between EvC 1 and EvC 3. There was also a mean score improvement of 0.59 between EvC 1 and EvC 4 on this indicator, yet this improvement was not statistically significant. There was also a statistically significant mean improvement of 0.66 (Scheffe test, p < 0.05) on this indicator between EvC 2 and EvC 3. No statistically significant difference was detected between EvC 2 and 4, and EvC 3 and 4.

Reinforcement. There was a mean score improvement of 0.19 in candidates’ performance on this indicator between EvC 1 and EvC 2, yet this improvement was not statistically significant. However, there was a statistically significant mean score improvement of 0.69 (Scheffe test, p < 0.05) between EvC 1 and EvC 3, and a statistically significant mean score improvement of 0.70 (Scheffe test, p < 0.05) between EvC 1 and EvC 4. These findings suggested that even though some improvement could be detected between EvC 1 and EvC 2, noticeable improvement in candidates’ performance on “Reinforcement” can be noted in EvC 3, and performance improves further in EvC 4 in comparison with the first EvC. All other comparisons on this evaluation indicator were not statistically significant (EvCs 2 and 3, 3 and 4, and 2 and 4).

Refinement. There was a mean score improvement of 0.35 in candidates’ performance on this indicator between EvC 1 and EvC 2, yet this improvement was not statistically significant. However, there was a statistically significant mean score improvement of 0.72 (Scheffe test, p < 0.05) between EvC 1 and EvC 3, and a statistically significant mean score improvement of 1.02 (Scheffe test, p < 0.05) between EvC 1 and EvC 4. These findings suggest that even though some improvement could be detected between EvC 1 and EvC 2, noticeable improvement in candidates’ performance on “Refinement” can be detected in EvC 3, and performance improves further in EvC 4 in comparison with the first EvC. There was also a statistically significant mean score improvement of 0.66 (Scheffe test, p < 0.05) between EvC 2 and EvC 4 on this indicator. All other comparisons on this evaluation indicator were not statistically significant (EvCs 2 and 3, and 3 and 4).

Closure. There was a mean score improvement of 0.09 in candidates’ performance on this indicator between EvC 1 and EvC 2, yet this improvement was not statistically significant. However, there was a statistically significant mean score improvement of 0.96 (Scheffe test, p < 0.05) between EvC 1 and EvC 3, and a statistically significant mean score improvement of 0.89 (Scheffe test, p < 0.05) between EvC 1 and EvC 4. Additionally, there was a 1.05 mean score improvement in candidates’ performance on “Closure” between EvC 2 and EvC 3, and a statistically significant difference of 0.98 between EvC 2 and EvC 4. These findings suggested that even though some improvement could be detected between EvCs 1 and 2, noticeable improvement in candidates’ performance on Closure can be detected in EvCs 3 and 4. It was also noted that among all the examined EvCs, candidates had the highest mean score on this indicator in EvC 3. No statistically significant mean score difference was detected in candidates’ performance on this indicator between EvC 3 and EvC 4.

4.3. Qualitative: Second Phase

4.3.1. Sampling Procedures

The second phase of qualitative research utilized purposeful sampling, in which the researchers intentionally selected a sample of two case studies based on maximal variation sampling (Creswell & Clark, 2011) from the 54 cases in the quantitative phase. The selection included the two participants with the highest mean ratings. These two case studies were selected to demonstrate differentiation and varying perspectives.

4.3.2. Data Collection

A qualitative multiple case study design was utilized in the second phase of the mixed methods study (Stake, 1995; Yin, 2012). A multiple case study strengthened the validity of the findings by utilizing both within-case analysis and cross-case analysis (Creswell & Poth, 2018; Denzin & Lincoln, 2018; Merriam & Tisdell, 2016). Case studies have been described as an in-depth study bounded by time and activity (Creswell, 2014; Merriam & Tisdell, 2016). Each case was bounded by time, with each cohort completing its four EvCs within fifteen months, with two post-conferences in the fall semester and two in the spring semester. Both participants were given pseudonyms, and any physical or electronic data were given codes to facilitate the creation of an audit trail (Erlandson et al., 1993). Along with the findings of the quantitative phase, the data collected for this qualitative research consisted of verbatim transcriptions of videoed post-conferences, participants’ electronic self-reflections, and documents; analysis was conducted in each case’s natural context (Denzin & Lincoln, 2018).

Transcriptions. Each post-conference included the videotaping of the principal candidate coaching a teacher, which was then shared virtually with the assigned rater and course professors. The raters were doctoral students hired and trained in the rating process by the leadership professor who developed the coaching model. The raters then participated in inter-rater reliability checks once each year, in which each rater observed a video of a pre- and a post-conference and rated it independently to avoid bias. The raters then compared ratings and the evidence supporting each rating to reach 90% accuracy among all raters. After reliability had been established, the raters began rating by transcribing each EvC using verbatim quotes and written descriptions as evidence for the seven indicators. After the transcription, each of the seven indicators was scored from 1 (Approaching) to 5 (Exemplary). The ratings then underwent member-checking for accuracy as they were shared with the principal candidate. The scored transcriptions were then blinded, given a pseudonym, and labeled Teacher 1 or 2. Four transcriptions were obtained for each of the four cases in the qualitative phase of this study (totaling 16 transcriptions) for the purpose of elaborating on the results of the quantitative phase.
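The article reports the 90% threshold but not the formula behind it. The sketch below shows one plausible exact-match percent-agreement calculation over the seven indicators, under the assumption that “accuracy” refers to percent agreement on the 1-5 scores; the ratings shown are purely hypothetical, not data from the study.

```python
# A minimal sketch, assuming "90% accuracy" means exact-match percent agreement
# on the 1-5 rubric scores across the seven indicators (hypothetical ratings).
from typing import Dict

INDICATORS = ["organization_pacing", "rapport", "self_reflection",
              "questions", "reinforcement", "refinement", "closure"]

def percent_agreement(rater_a: Dict[str, int], rater_b: Dict[str, int]) -> float:
    """Share of indicators on which two raters assigned the identical score."""
    matches = sum(rater_a[ind] == rater_b[ind] for ind in INDICATORS)
    return matches / len(INDICATORS)

rater_a = {"organization_pacing": 3, "rapport": 4, "self_reflection": 3,
           "questions": 2, "reinforcement": 3, "refinement": 3, "closure": 4}
rater_b = {"organization_pacing": 3, "rapport": 4, "self_reflection": 3,
           "questions": 3, "reinforcement": 3, "refinement": 3, "closure": 4}

print(f"Agreement: {percent_agreement(rater_a, rater_b):.0%}")  # 86% in this example
```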

Self-Reflections. As written assignments, the principal candidates were given bi-monthly open-ended questions throughout the fifteen-month period in which self-reflection was required. These written assignments were matched with a pseudonym and collected as they related to the two case studies and the candidates’ personal experiences coaching teachers.

Documents. Another data source included documents related to the leadership professors’ coaching of the principal candidates regarding the post-conference with struggling teachers. These documents included agendas and materials from a summer institute that all principal candidates were required to attend at the beginning of their graduate work. Other documents included course meeting agendas and video recordings of the leadership professors instructing candidates in coaching techniques to improve teachers. Records of visitation agendas between principal candidates and their assigned leadership professors, which included coaching, were also utilized as documents. Finally, a researcher’s reflective journal was kept throughout the data collection and analysis.

Post-conference rubrics were developed and aligned to the Texas Teacher Evaluation and Support System (T-TESS) rubric language. Language specific to T-TESS was expected to be used during the delivered conferences as the principal candidates coached teachers to improve their instructional skills and delivery while learning to reflect on the lessons’ outcomes based on students’ performance. All candidates were provided training on how to use the T-TESS rubric to grow and coach teachers during the summer institute. They were also instructed on how to deliver the coaching post-conference based on the developed model and rubric.

4.3.3. Data Analysis

In the second phase of the explanatory sequential design, qualitative analysis followed to further explain the significant improvements in principal candidates’ performances on each of the observed indicators (Creswell & Clark, 2011; Creswell, 2014). The data analysis first utilized within-case analysis of data relating to each case; cross-case analysis was then conducted to develop themes found across all cases (Denzin & Lincoln, 2018; Miles et al., 2014). Steps conducted in the qualitative analysis included: 1) reading through the data and writing notes in the researcher’s reflective journal, 2) uploading the data sources into Dedoose, 3) coding the data according to themes, and 4) conducting cross-case thematic analysis.

To assure trustworthiness in the qualitative phase of this study, Lincoln and Guba’s (1985) four strategies were applied to evaluate rigor through credibility, transferability, dependability, and confirmability. Prolonged engagement assisted in establishing credibility as the data for the four cases were collected over a two-year period. Triangulation was achieved through multiple sources of data and member checking (Denzin & Lincoln, 2018; Erlandson et al., 1993). Transferability was established through detailed and thick descriptions and purposive sampling with maximum variation (Erlandson et al., 1993). The details of this study were written so that others might be able to repeat the results, which established dependability. Finally, an audit trail and a researcher’s reflexive journal assisted with both dependability and confirmability (Erlandson et al., 1993; Merriam & Tisdell, 2016).

5. Results

5.1. Within-Case Analysis

Nicole. Nicole was a female principal candidate who was selected for Cohort 2 not only for her master teaching abilities, but also for her two years of experience as an instructional coach. Nicole was assigned to a struggling elementary school in a rural community in Southern Texas, where she coached two teachers who needed instructional assistance. Her coaching techniques were improved through coursework and an assigned professor serving as a coach. Nicole’s rating of her Teacher 1 was the highest-scoring post-conference in Cohort 2. Nicole’s overall rating for the four EvCs declined from EvC 1 (3.7) to EvC 2 (2.5). Her overall rating for EvC 3 increased to 3.1, which was maintained in EvC 4. Nicole demonstrated a higher mean rating on six of the seven rubric indicators when compared to the mean scores of all four cases. The four areas of greatest difference were the indicators of Self-Reflection, Questions, and Reinforcement, where her ratings were higher, and the indicator of Closure, where her rating was lower.

David. David was a male principal candidate selected for Cohort 3 due to his master teaching abilities; he received his coaching experience through coursework and the coaching of an assigned professor. David was assigned to a struggling elementary school in a North Texas metropolitan area, where he coached two teachers who needed improvement in instruction. David’s rating of his Teacher 1 had the highest mean not only in Cohort 3, but in Cohort 2 as well. The mean of the overall rating of the four cases was 3.1, and David’s was 4.7. David rated higher than all other cases in all areas; however, the greatest differences were observed in the rubric indicators of Self-Reflection, Questions, and Closure. His rating in Closure was 5.0 in all four EvCs, two points greater than the four-case mean of 3.0.

5.2. Cross-Case Analysis

The qualitative cross-case analysis revealed three possible areas that could further explain the quantitative findings. One area may provide underlying effects that contributed to all principal candidates demonstrating growth in all indicators at the same time. A second area may provide tentative explanations for three common themes found regarding the seven indicators through the in-depth analysis of the four case studies. Finally, explorative measures were taken to further explain the lack of significant differences between EvC 1 and EvC 2 alongside the significant differences between EvC 1 and EvC 3 as well as EvC 4.

Continued Growth in Coaching

As demonstrated in Table 2 of the quantitative results, there was a strong correlation indicating that all seven indicators improved at the same time among all principal candidates. The qualitative results traced this finding to possible underlying effects of intensive training in coaching, continual training in coaching through coursework, and individual coaching provided by the assigned leadership professor coach.

All participants in this study began their first week as principal candidates with a summer institute, where they received intensive training on coaching. Few principal candidates had been trained in coaching teachers, and those who had experience coaching were trained in the specific model developed by the lead curriculum and instruction professor, who is also a part of the principal preparation program. Not only did the intensive training include specific training in the seven rubric indicators aligned to T-TESS, but stem questions were also provided to support successful coaching sessions. Video examples were provided as visuals of expectations, along with alumni from a previous cohort sharing their experiences of obstacles and strengths when coaching. The raters assigned to the principal candidates helped explain the specific expectations for the rating of each indicator. One candidate stated in a written self-reflection that she was “able to use the rubric as a tool to guide my feedback and be a point of reference with scripts from the lesson observed.” Another stated that he “quickly learned that many skills are essential to being an effective coach.”

Continual training in coaching throughout the school year was observed through various data sources. The coursework was job-embedded, and class meetings were conducted through the modeling of Professional Learning Communities that met bi-monthly. The agendas of these meetings and recorded videos were reviewed and revealed coaching as a recurring topic woven through all coursework. The mean scores of each EvC were shared with all principal candidates, along with trends in weak coaching areas and coaching areas of strength. Conversations were also observed among the principal candidates as they raised issues and offered possible solutions based on personal experience.

The third possible underlying effect was the individual coaching each principal candidate received from an assigned leadership professor coach. This model could be termed “the leader coaching the coach.” Each leadership professor coach had been trained in the coaching model and provided personal assistance for improvement through emails, phone calls, and personal visits. These professor coaches also participated in all bi-monthly class meetings and provided expertise. One candidate stated the importance of being coached in this manner:

I love to be coached. I don’t know which one I like better yet: to be the coach or to be coached …. I think to be an instructional coach, you have to have that balance: you have to want to coach and you have to want to be coached.

It is believed that all three of these areas could have had underlying effects, as quotes from principal candidates also support this. One of the participants stated, “I do believe that this fellowship really prepared me for instructional coaching and provided through conferencing.” Another participant who had previous coaching experience shared, “I feel comfortable providing instructional coaching to teachers because of my previous position … however, my instructional coaching this year has changed and adapted.” This same participant attributed her growth in coaching to the components of the principal preparation program.

5.3. Common Themes among Coaching Indicators

The common themes found in the cross-case analysis involved the rubric indicators of Self-Reflection, Questions, and Closure. These three rubric indicators demonstrated the highest means for the two highest-rated cases across both cohorts. These three identified themes required higher level coaching skills and are discussed further below.

Self-Reflection. The self-reflection indicator was one of the three identified as requiring higher level coaching. The coaching model developed by the lead leadership professor was created to assist teachers receiving coaching to move, through a gradual release process, from being coached to being able to coach themselves. The two principal candidates who rated well on this indicator provided the teacher with ample opportunities to reflect on the successes and non-successes of the observed lesson. The required areas to be discussed and analyzed included grouping strategies, activities and materials, and student results of the measured lesson objective, and these areas were directed at improving classroom instruction. An example of such a probing question was, “So was your grouping strategy successful and what’s the evidence to support that?”

Questions. This indicator also required a higher level of coaching due to the needed ability to continually ask open-ended engaging questions throughout the post conference and to also assist the teacher in analyzing his or her own questions from the lesson. The two principal candidates who rated well on this indicator were observed in a discussion regarding the teacher’s planned questions, including the exit ticket question and student results of that question. Utilizing effective question strategies themselves, these two also asked questions regarding student work that was not successful and how the lesson could have been adjusted to address such misconceptions. An example of such a question included, “If you were going to teach this same lesson tomorrow and knowing those students struggled, what would you do differently?” Engaging questions were also deemed important in coaching as explained, “I have grown in my ability to explain the process and use reflective questions to get teachers to see trends and areas of strengths and weaknesses.” An example of an engaging question asked by one of these two included, “Why do you think they grasped the understanding of collecting the data?”

Closure. The importance of closure in this coaching model included the principal candidate requiring the teacher to summarize how the identified strength of the lesson would be continued in future lessons and how the identified weakness of the lesson would be improved upon. The principal candidate was also to ensure accountability through the statement that both the continuation of the strength and improvement of the weakness would be a focus during future observations. The two principal candidates who rated well on this indicator required the teacher to reflect with specific detail and asked probing questions to obtain the reflection as needed.

5.4. Further Qualitative Explanation

The quantitative results demonstrated that generally there was no statistically significant improvement in candidates’ performance on the observed indicators between EvC 1 and EvC 2, and significant improvement between EvC 1 and EvC 3 as well as EvC 4. The qualitative analysis investigated additional possible explanations for this finding. Through the four principal candidates’ self-reflection assignments, there was one probable discovery. When the principal candidates completed their first EvC, not only was their campus at the beginning of the school year, but the candidates’ position was new and their responsibilities limited while they worked on building relationships with Teachers 1 and 2. This was sometimes referred to as the “honeymoon stage” by leadership professors in class meeting recordings. The leadership professors would also warn that the responsibilities of being a full-time graduate student and full-time principal candidate would become very difficult as the fall semester progressed. By the second EvC, greater responsibilities had been placed upon the candidates, which may have caused the ratings to regress.

Several quotes were found in the principal candidates’ self-reflection assignments that support this possible explanation. One well described the situation in the following manner:

The most challenging part of this experience is balancing time …. I first began with scheduling and prioritizing my work but it always seemed like I would take one thing off my list and there would be three more …. Coursework is very important …. deadlines for the work tend to sneak up on you once you begin taking on more tasks for the campus.

Another candidate also described her challenges during this time, stating, “I worked as the main contact to our campus Parent Liaison, PTA President, YMCA Representative & Parkland Health Representative. I was essential in planning and organizing over 10 Parent workshops.” These data support the possibility that the underperformance in EvC 2 was related to the increased responsibilities of being a graduate student and job-embedded principal candidate.

6. Discussion

The purpose of this mixed methods research study was 1) to determine whether there was an improvement in principal candidates’ performance on the linear combinations of the indicators across the four examined Evaluation Cycles (EvCs) and 2) to pinpoint the significant improvements in principal candidates’ performances on each of the observed indicators using qualitative data. The quantitative data analysis demonstrated no significant improvement between the first two EvCs, but significant improvement across later EvCs in the ratings of principal candidates who received ongoing coaching specific to their needs from their leadership professor instructors and their leadership professor coaches. The qualitative results further explained that the intensive training in coaching, continual professional development in coaching through coursework and class instruction, and individual coaching provided by the assigned professor coach improved the candidates’ confidence in delivering the post-conference, which in turn improved the teachers’ acceptance of the coaching.

As principal candidates’ coaching skills improved, their confidence also improved. One stated that

As far as future growth for myself as a coach, I know that I need to seek more opportunities for training to become more skilled. I feel very confident in helping teachers plan and execute solid, purposeful lessons, but feel that I should continue to grow in the follow up of discussed strategies, and my coaching in general.

The reflection indicated personal ambition to continue to improve coaching skills after finishing the program. Another candidate also stated that “instructional coaching and growing teachers is a vital part of leadership. We need to grow our teachers and build their capacity because they are serving our ultimate customer, our students.”

Additionally, the findings of this study were critical to the researchers’ principal preparation program, allowing for continued curriculum and course realignment and underscoring the importance of faculty and leadership professor coaches maintaining knowledge of current leadership best practices along with school district needs. This finding matters for all principal preparation programs, as instructional leadership is key to transforming underperforming schools (Cordeiro & Cunningham, 2014). As Darling-Hammond et al. (2007) stated, “the critical part principals play in developing successful schools has been well established by researchers over the last two decades: committed leaders who understand instruction and can develop the capacities of teachers and of schools are key to improving educational outcomes for all students,” (p. 1) and “exemplary programs should offer visible evidence that they affect principals’ knowledge, skills, and practices, as well as success in their challenging jobs,” (p. 5). Principal candidates rely on preparation programs to provide them with the competencies needed to lead schools effectively, yet most school leaders do not credit higher education with providing the applicable preparation to lead schools. These competencies must go beyond preparation for certification examinations. In addition, as the statements below illustrate, school districts need to support the growth of aspiring leaders if they want to change the status quo.

· My current coaching style, just like my leadership style, is ever evolving. I believe in a “growth” mindset with all teachers who are willing to put in the work, as well as open communication and frequent, honest feedback all while keeping the main goal in mind: student achievement.

· It takes a strong leadership team to grow a teacher … that tackles coaching in a systematic way.

· As an instructional coach, I have the opportunity to impact so many more students through my coaching … you get the opportunity to touch so many lives and to make a difference for so many students and teachers.

· Student growth is the biggest benefit of instructional coaching.

Overall, the study revealed that aspiring leaders’ coaching skills grew, based both on quantitative data from their post-conference ratings and on their own reflective statements as they moved through the program. Additionally, their assigned teachers became more open to the coaching, and therefore instructional practices, along with teacher attitudes, improved. As Knight (2006) posited, one of the most important skills for instructional coaching is building relationships, and that takes time.

7. Conclusion

Principal preparation programs and school districts should emphasize the importance of principals’ instructional leadership skills, as principals are responsible for hiring and retaining effective teachers. According to Chang (2009), 25 percent of beginning teachers leave teaching before their third year and almost 40 percent leave the profession within the first five. Research also reveals that working conditions, including administrator support, are important predictors of teacher attrition (Borman & Dowling, 2008).

This study is important to campus principals, who have a responsibility to develop human capital, which includes coaching and growing teachers. The responsibility for student outcomes also falls on the leader but requires assistance from a well-equipped group of teacher leaders who can support the growth of teachers who are struggling because they are new, alternatively certified, teaching outside of their content area, or have a history of low student performance. However, the principal must be able to lead this cadre of teacher leaders without completely deferring all responsibility for effective instructional practices due to his or her own lack of skills or desire to participate.

The findings of this study are also important for principal preparation programs, which must include competencies for growing and coaching teachers instructionally if they are to take part in building aspiring leaders who can address current needs in schools. In Texas, the new Principal as Instructional Leader state certification examination emphasizes instructional leadership, which has begun to address gaps in higher education programs. However, the necessary competencies for instructional leadership are determined by each preparation program and do not always include local school district input. To ensure programs are providing aspiring leaders with the necessary skills to lead local schools, partnerships that build leadership pipelines should be formed.

The principal candidates involved in the study understood, upon graduation, that instructional leadership was crucial to transforming schools. The conflict between theory and practice became a reality as they grew into contextual change agents, recognizing that campus principal norms and best-practice demands were not always aligned. However, without the instructional leadership skills to develop human capital, transformational leadership halts due to gaps in adult learning.

As the candidates worked within their job-embedded residencies they were able to learn by taking risks, building relationships, and failing in a safer space due to the coaching of the professor coaches and their campus principal mentors. As they developed skills for building teachers’ instructional skills, the principal candidates also learned how to deliver best practices within their specific context by having a better understanding of how theoretical frameworks served as foundations for the work in practice.

As Muhammad and Cruz (2019) stated, “leadership is not a position, it is a set of actions that positively shape the climate and culture of the working environment,” (p. 2). The process of growing human capital is crucial to hiring and sustaining teachers. Assumptions are often made that successful classroom teachers, successful instructional coaches, or those with a principal certification can grow and coach others instructionally. However, developing adults for effective schools is not an easy task, and the building of these specific skills should begin during principal preparation. To produce the healthy school culture where all teachers believe students can achieve academically, as Muhammad and Cruz (2019) discussed, we need transformational leaders who can distribute effective coaching to keep the focus on the teaching required for improving or sustaining student learning (Aguilar, 2013). We believe the model used in this study to coach aspiring leaders on how to deliver post-conference coaching sessions is crucial to developing instructional leaders who can transform schools and impact achievement for all students.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Aguilar, E. (2013). The Art of Coaching: Effective Strategies for School Transformation. San Francisco, CA: Jossey-Bass.
[2] Almager, I. (2013). Development of the Pre and Post Conference Rubrics for the i3 Grant Coaching Model [White Paper]. Lubbock, TX: College of Education, Texas Tech University.
[3] Barnett, B. G., Shoho, A. R., & Oleszewski, A. M. (2012). The Job Realities of Beginning and Experienced Assistant Principals. Leadership and Policy in Schools, 11, 92-128.
https://doi.org/10.1080/15700763.2011.611924
[4] Bennis, W., & Nanus, B. (1985). Leaders: The Strategies for Taking Charge (p. 41). New York, NY: Harper & Row.
[5] Berliner, D. C. (2006). Our Impoverished View of Education Reform. Teachers College Record, 108, 949-995.
[6] Borman, G. D., & Dowling, N. M. (2008). Teacher Attrition and Retention: A Meta-Analytic and Narrative Review of the Research. Review of Educational Research, 78, 367-409.
https://doi.org/10.3102%2F0034654308321455
[7] Bryman, A. (2006). Integrating Quantitative and Qualitative Research: How Is It Done? Qualitative Research, 6, 97-113.
https://doi.org/10.1177%2F1468794106058877
[8] Caldwell, G. P., & Ginther, D. W. (1996). Differences in Learning Styles of Low Socioeconomic Status for Low and High Achievers. Education, 117, 141-147.
[9] Caruth, G. D. (2013). Demystifying Mixed Methods Research Design: A Review of the Literature. Mevlana International Journal of Education, 3, 112-122.
[10] Chang, M. L. (2009). An Appraisal Perspective of Teacher Burnout: Examining the Emotional Work of Teachers. Educational Psychology Review, 21, 193-218.
https://doi.org/10.1007/s10648-009-9106-y
[11] Cordeiro, P., & Cunningham, W. (2014). Educational Leadership: A Bridge to Improved Practice (5th ed.). Upper Saddle River, NJ: Pearson Education.
[12] Creswell, J. W., & Clark, V. L. (2011). Designing and Conducting Mixed Methods Research (2nd ed.). Thousand Oaks, CA: Sage.
[13] Creswell, J. W. (2014). Research Design: Qualitative, Quantitative, and Mixed Methods Approaches (4th ed.). Thousand Oaks, CA: Sage.
[14] Creswell, J. W., & Creswell, J. D. (2018). Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. Thousand Oaks, CA: Sage.
[15] Creswell, J. W., & Poth, C. N. (2018). Qualitative Inquiry and Research Design: Choosing among Five Approaches (4th ed.). Thousand Oaks, CA: Sage.
[16] Danielson, C. (2007). Enhancing Professional Practice: A Framework for Teaching. Alexandria, VA: Association for Supervision and Curriculum Development.
[17] Darling-Hammond, L., LaPointe, M., Meyerson, D., & Orr, M. (2007). Preparing School Leaders for a Changing World: Executive Summary. Stanford, CA: Stanford Educational Leadership Institute.
[18] Davis, S., Darling-Hammond, L., LaPointe, M., & Meyerson, D. (2005). School Leadership Study: Developing Successful Principals, a Review of Research. Stanford, CA: Stanford Educational Leadership Institute. Retrieved February 20, 2009.
[19] Demos, E. S. (2009). An Interview with David Berliner: On Leadership and High-Stakes Testing. New England Reading Association Journal, 45, 1-6.
[20] Denzin, N. K., & Lincoln, Y. S. (Eds.). (2018). The SAGE Handbook of Qualitative Research (5th ed.). Los Angeles, CA: Sage.
[21] Eller, J. (2008). An Assessment of the Recently Appointed Administrators’ Program: Lessons Learned for Supporting New Principals. International Journal of Educational Leadership Preparation, 3, 1-8.
[22] Erlandson, D. A., Harris, E. L., Skipper, B. L., & Allen, S. D. (1993). Doing Naturalistic Inquiry: A Guide to Methods. Newbury Park, CA: Sage.
[23] Fuller, E. J., & Young, M. D. (2009). Tenure and Retention of Newly Hired Principals in Texas. Austin, TX: University Council for Educational Administration, Department of Educational Administration, University of Texas at Austin.
[24] Giles, C., Johnson, L., Brooks, S., & Jacobson, S. L. (2005). Building Bridges, Building Community: Transformational Leadership in a Challenging Urban Context. Journal of School Leadership, 15, 519-545.
https://doi.org/10.1177%2F105268460501500503
[25] Givens, R. J. (2008). Transformational Leadership: The Impact on Organizational and Personal Outcomes. Emerging Leadership Journeys, 1, 4-24.
[26] Greene, J. C., Caracelli, V. J., & Graham, W. F. (1989). Toward a Conceptual Framework for Mixed-Method Evaluation Designs. Educational Evaluation and Policy Analysis, 11, 255-274.
https://doi.org/10.3102%2F01623737011003255
[27] Hull, T. H., Balka, D. S., & Miles, R. H. (2009). A Guide to Mathematics Coaching: Processes for Increasing Student Achievement. Thousand Oaks, CA: Corwin Press.
[28] Klar, H. W., & Brewer, C. A. (2013). Successful Leadership in High-Needs Schools: An Examination of Core Leadership Practices Enacted in Challenging Contexts. Educational Administration Quarterly, 49, 768-808.
https://doi.org/10.1177%2F0013161X13482577
[29] Knight, J. (2006). Instructional Coaching. School Administrator, 63, 36-40.
[30] Knoeppel, R. C., & Rinehart, J. S. (2010). A Canonical Analysis of Successful and Unsuccessful High Schools: Accommodating Multiple Sources of Achievement Data in School Leadership. Educational Considerations, 38, 24-32.
https://doi.org/10.4148/0146-9282.1123
[31] Kraft, M. A., & Gilmour, A. (2016). Can Principals Promote Teacher Development as Evaluators? A Case Study of Principals’ Views and Experiences. Educational Administration Quarterly, 52, 711-753.
https://doi.org/10.1177%2F0013161X16653445
[32] Leithwood, K., Jantzi, D., & Steinbach, R. (1999). Changing Leadership for Changing Times. UK: McGraw-Hill Education.
[33] Leithwood, K., Seashore Louis, K., Anderson, S., & Wahlstrom, K. (2004). Review of Research: How Leadership Influences Student Learning.
https://www.wallacefoundation.org/knowledge-center/Documents/How-Leadership-Influences-Student-Learning.pdf
[34] Levine, A. (2005). Educating School Leaders. New York, NY: The Education School Project.
[35] Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic Inquiry. Beverly Hills, CA: Sage.
[36] Malecki, C. K., & Demaray, M. K. (2006). Social Support as a Buffer in the Relationship between Socioeconomic Status and Academic Performance. School Psychology Quarterly, 21, 375-395.
https://doi.org/10.1037/h0084129
[37] Marks, H. M., & Printy, S. M. (2003). Principal Leadership and School Performance: An Integration of Transformational and Instructional Leadership. Educational Administration Quarterly, 39, 370-397.
https://doi.org/10.1177%2F0013161X03253412
[38] Merriam, S. B., & Tisdell, E. J. (2016). Qualitative Research: A Guide to Design and Implementation (4th ed.). San Francisco, CA: Jossey-Bass.
[39] Miles, M. B., Huberman, A. M., & Saldana, J. (2014). Qualitative Data Analysis: A Methods Sourcebook (3rd ed.). Thousand Oaks, CA: Sage.
[40] Milne, A., & Plourde, L. A. (2006). Factors of a Low-SES Household: What Aids Academic Achievement? Journal of Instructional Psychology, 33, 183-193.
[41] Muhammad, A., & Cruz, L. F. (2019). Time for Change: 4 Essential Skills for Transformational School and District Leaders. Bloomington, IN: Solution Tree Press.
[42] Northouse, P. (2019). Leadership: Theory and Practice (8th ed.). Thousand Oaks, CA: Sage.
[43] NYC Leadership Academy (2016). Ready to Lead: Designing Residencies for Better Principal Preparation. Long Island City, NY: NYC Leadership Academy.
https://www.nycleadershipacademy.org/wp-content/uploads/2018/06/ready-to-lead-executive-summary.pdf
[44] Patton, M. Q. (1990). Qualitative Evaluation and Research Methods. Newbury Park, CA: Sage.
[45] Picucci, A. C., Brownson, A., Kahlert, R., & Sobel, A. (2004). Middle School Concept Helps High-Poverty Schools Become High-Performing Schools. Middle School Journal, 36, 4-11.
https://doi.org/10.1080/00940771.2004.11461458
[46] Robinson, V. M., Lloyd, C. A., & Rowe, K. J. (2008). The Impact of Leadership on Student Outcomes: An Analysis of the Differential Effects of Leadership Types. Educational Administration Quarterly, 44, 635-674.
https://doi.org/10.1177%2F0013161X08321509
[47] Sirin, S. R. (2005). Socioeconomic Status and Academic Achievement: A Meta-Analytic Review of Research. Review of Educational Research, 75, 417-453.
https://doi.org/10.3102%2F00346543075003417
[48] Stake, R. E. (1995). The Art of Case Study Research. Thousand Oaks, CA: Sage.
[49] Subedi, D. (2016). Explanatory Sequential Mixed Method Design as the Third Research Community of Knowledge Claim. American Journal of Educational Research, 4, 570-577.
[50] Texas Education Agency (TEA) (2015). STAAR Student Performance Reports. Austin, TX: Texas Education Agency.
[51] Texas Education Agency (TEA) (2016). Enrollment in Texas Public Schools (Document No. GE16 601 09). Austin, TX: Texas Education Agency.
[52] Texas Education Agency (TEA) (2018). Texas Teacher Evaluation and Support System (TTESS).
[53] Timperley, H., Wilson, A., Barrar, H., & Fung, I. (2008). Teacher Professional Learning and Development. Wellington: The Ministry of Education.
[54] U.S. Department of Education (2010). ESEA Blueprint for Reform. Washington DC: Office of Planning, Evaluation and Policy Development.
[55] Waters, T., Marzano, R. J., & McNulty, B. (2003). Balanced Leadership. Aurora, CO: McREL.
http://crss.org/wp-content/uploads/2012/11/McRel-Study.pdf
[56] Yin, R. K. (2012). Applications of Case Study Research (3rd ed.). Thousand Oaks, CA: Sage.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.