The Five Essential Reasons for the Failure of School Reforms
Joseph Murphy
Vanderbilt University, Nashville, TN, USA.
DOI: 10.4236/jhrss.2020.81001

Abstract

We unearth and examine the most critical reasons for the failure of school reforms in the United States over the last century. More specifically, we look beyond and below the traditional explanations provided by the ministries of school reform since the ESEA in 1965. We call into question the most ascendant and most universal beliefs about improvement in play at given periods of school reform (e.g., "scientific evidence" currently). We also observe that even when knowledge transforms into understanding, it still lacks the power to explain the failure of school reform efforts.

Cite as:

Murphy, J. (2020). The Five Essential Reasons for the Failure of School Reforms. Journal of Human Resource and Sustainability Studies, 8, 1-17. doi: 10.4236/jhrss.2020.81001.

1. Introduction

Over the last 100 years, claims that schools are failing have become deeply woven into the fabric of society (Cremin, 1961; Ellis & Fouts, 1993). This "scorching and unrelenting criticism" (Cuban & Usdan, 2003: p. 3) can be seen most dramatically in recurring calls for profound changes in the preK-12 school system.

Indeed, efforts at reform since the end of WWII have been nearly a complete failure (Sarason, 1990). Things today are much the same in 2020 as they were in 1970 and 1950 “and in some respects not as good as they were in 1930” when a significant period of school reform was at its peak (Silberman, 1970: p. 159).

Seven outcome measures have often been employed to document unsatisfactory school performance: 1) academic achievement in basic subject areas compared to suggestions of what is needed for success in the current time period; 2) academic achievement in basic subject areas compared to historical data about the United States and to student performance in other nations; 3) the holding power of schools (i.e., dropout rates); 4) preparation for employment and/or increased levels of schooling; 5) knowledge of specific subject areas; 6) mastery of higher-order skills; and 7) personal character and citizenship. What is reported less frequently is that failure is a dominant element of schooling and that reforms do not address this reality: "school reform" initiatives developed to address troubled schools have accomplished very little (Brady, 2003; Fullan, 2010; Herman, 2012; Kowal & Hassel, 2005; Reynolds, Harris, Clarke, Harris, & James, 2006; Sarason, 1990; Smarick, 2010; Wolcott, 1997). Successes "are not the rule, but the rare, often fragile exception" (Little, 1987: p. 493; Reynolds et al., 2006), and "improving education in the United States represents a large egregious example of failed reform" (Fullan, 2010: p. 15).

What we do know about reform initiatives is that they are almost always constructed on two pillars. First, there is strong language about the failure of schools (Cochran-Smith & Lytle, 2006). Second, criticisms are accompanied by optimistic language and "highly touted" (Fullan, 2010: p. 22) claims about the power of proposed reforms—claims that often are strands of deeply held beliefs and convictions, "seeming unassailable common sense" (Wise, 1979: p. 9)—ungrounded assumptions, fads, opinions, self-promotion, doubts, anecdotes, overly simplified cause-and-effect explanations, romanticized and ideological accounts, misapplied beliefs, can-do enthusiasm, heuristics, the non-critical adoption of business ideology, and pocket theories of effectiveness (Bryk, Sebring, Allensworth, Luppescu, & Easton, 2010; Meyers & Hitt, 2017), most of which have been documented to have failed in the past. Such assertions generally sprout from three sources: government agencies, corporations, and universities.

2. Method

We used process evaluation (Putt & Springer, 1989) and qualitative content analysis (Elo & Kyngas, 2008; Hsieh & Shannon, 2005; Zhang & Wildemuth, 2017). More specifically, we employed an adapted "framework method" within this content analysis approach (Elo & Kyngas, 2008; Gale, Heath, Cameron, Rashid, & Redwood, 2013). Both inductive and deductive approaches were employed in identifying categories and themes (Gale et al., 2013; Zhang & Wildemuth, 2017).

We followed three pathways to determine the key reasons school reforms largely fail, analyzing accounts from the past 100 years of efforts—with most of the focus beginning with the Elementary and Secondary Education Act of 1965—to help schools judged, against externally generated targets of success, to be significantly underperforming. First, we analyzed the small number of empirical articles portraying outcomes of turnaround efforts; books and published articles from researchers who had explored reform efforts over extended periods of time employing historical, sociological, educational, and psychological frames, with attention to both the internal conditions and the external states in which schools were ensconced; and case studies of individual school turnaround efforts. Second, we applied the same three-part strategy to the wider body of research on school improvement, where there were considerably more empirical studies and scholarly reviews. Third, we examined analyses of turnaround work outside the field of education; the great bulk of the insights culled here came from for-profit corporations, with additional insights from studies in the non-profit sector (e.g., police departments and religious organizations). Concept mapping was applied to "produce an interpretable and pictorial view of ideas and concepts and how these are interrelated" (Fornes, Rocco, & Wollard, 2008: p. 342).

To complete our analysis of large-scale school reforms, we conducted a qualitative narrative synthesis. According to Popay and colleagues (2006), narrative synthesis is "an approach to the systematic review and synthesis of findings from multiple studies that relies primarily on the use of words and text to summarize and explain the findings of the synthesis... the defining characteristic is a textual approach to the process of synthesis to 'tell the story' of the findings from the included studies" (p. 5). According to Rodgers and team (2009), the defining characteristic of a narrative synthesis is its reliance on narrative rather than statistical summary in the process of synthesis.

More specifically, we employed an adapted “framework method” within this narrative synthesis approach (Elo & Kyngas, 2008; Gale et al., 2013). An inductive approach was employed in coding and identifying themes and patterns in the school reform research (Zhang & Wildemuth, 2017). Coding occurred on a line-by-line basis (Creswell, 2013). We also employed a “directed approach” to analysis, allowing coding categories from our previous research to help guide the formation of initial categories for the current work (Hsieh & Shannon, 2005: p. 1277).
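
To make the mechanics of this coding approach concrete, the sketch below shows, in Python, how deductive seed categories and inductively flagged excerpts can be charted into a source-by-theme matrix. It is a minimal sketch under stated assumptions: the category names, keywords, and excerpts are hypothetical, and keyword matching is only a crude stand-in for the analyst judgment that framework analysis actually requires.

# Minimal illustrative sketch of a framework-method coding pass.
# Assumptions: seed categories, keywords, and excerpts are hypothetical;
# keyword matching stands in for analyst judgment.
from collections import defaultdict

# Deductive ("directed") seed categories carried over from prior research.
SEED_CODES = {
    "implementation": ["implement", "assumption of implementation"],
    "context": ["context", "culture", "local"],
    "evidence_use": ["evidence", "research-based", "scientific"],
    "partner_voice": ["teacher", "student", "voice"],
}

def code_line(line, codebook):
    """Return every code whose keywords appear in one line of source text."""
    hits = [code for code, keywords in codebook.items()
            if any(k in line.lower() for k in keywords)]
    # Uncoded lines become candidates for new, inductively derived codes.
    return hits or ["uncoded"]

def build_framework_matrix(documents, codebook):
    """Chart coded excerpts into a source-by-theme matrix (the 'framework')."""
    matrix = defaultdict(lambda: defaultdict(list))
    for source, text in documents.items():
        for line in text.splitlines():
            for code in code_line(line, codebook):
                matrix[source][code].append(line.strip())
    return matrix

docs = {
    "case_study_A": "The reform was never fully implemented.\n"
                    "Teachers reported having no voice in the design.",
}
for source, themes in build_framework_matrix(docs, SEED_CODES).items():
    for theme, excerpts in themes.items():
        print(source, theme, len(excerpts))

In practice, the "uncoded" bucket is the raw material for inductive work: the analyst reviews those excerpts, names new categories, adds them to the codebook, and recodes.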

3. The Assumption of Implementation

Over the last 55 years, some well-intended reforms at the federal and state levels have simply not been implemented (Mintrop, MacLellan, & Quintero, 2001; Borman, Rachuba, Datnow, Alberg, MacIver, Stringfield, & Ross, 2000). More have been weakly implemented. That is, most reforms die because of flawed implementation (Cuban, 1990). Specifically, school reforms have routinely been undercut by the "assumption of implementation." Indeed, "most large-scale school change efforts reach only initial or partial implementation and then petrify, eventually becoming targets of future reform" (Evans, 1996: p. 129) and platforms for more suggestions from "well-intentioned policy makers, practitioners, and researchers" (Cuban, 1990: p. 71).

We are witnessing this today in "the national movement to base educational policy and practice on school research evidence" (Borman et al., 2000: p. 162; Coburn, Penuel, & Geil, 2013), in the turn to scientifically based evidence to address the failure of previous reform efforts and the "widespread recognition of the research-policy gap" (Neal, Neal, Kornbluh, Mills, & Lawlor, 2015: p. 1; Lubienski, Scott, & DeBray, 2014), and in "the idea that education should be or become an evidence-based practice and that teaching should become an evidence-based profession" (Biesta, 2007: p. 1). That is, "there is a strong push for experimental research that, according to the proponents of evidence-based education, is the only model of securing evidence about what works" (Biesta, 2007: p. 3), a position that is not uniformly accepted in the scientific community (Giere, 2006). This movement was reinforced with the passage of NCLB in 2002 (Asen, Gurke, Solomon, Conners, & Gumm, 2013), "which significantly raised the importance of research-based programs" (Coburn, Honig, & Stein, 2009: p. 67). It touched off a demand that evidence be used by states (Dynarski, 2015), as well as "policy demands for administrators to use 'evidence' in their decision making" (Honig & Coburn, 2008: p. 578; Barnes, Goertz, & Massell, 2014). The Every Student Succeeds Act of 2015 added further demands for the use of evidence-based interventions (Dynarski, 2015).

4. Lack of Attention to Internal and External Contexts and Change Processes

The difficulty here is that most federal and state reform efforts rarely start with what is known about the internal and external contexts of schools and the change processes that reform requires (Ellis & Fouts, 1993; Hampel, 1986). In so doing, they continue a pattern with very deep roots in turnaround efforts (Saksvik, Nytro, Dahl-Jorgensen, & Mikkelsen, 2002: p. 38; Cusick, 1973). The reform literature often presents schools as homogeneous places (Burch, 2007) where any reform effort can be expected to take hold and produce the desired outcomes. When failures are addressed, they are analyzed according to the "solution to success" idea embedded in the reform (e.g., extra time for tutoring). The overwhelming cause of problems is laid at the feet of teachers (Cusick, 1983). The other external partners, "most of whom have little or no knowledge of the institutional context of schools" (Sarason, 1990: p. 26) and "only the most superficial and distorted conception of the culture of schools" (p. 120), move on with clear consciences. The residue of failure is added to the mound of failure already at the school. Perhaps more surprising, there is little targeting of reform suggestions based on type of school, student enrollment, level of schooling, existing linkages (strong or weak, positive or negative, hopeful or despondent), and so forth (Bell & Pirtle, 2012; Lauer, Akiba, Wilkerson, Apthorp, Snow, & Martin-Glenn, 2006; Rosenberg, Christianson, & Angus, 2015). Neither is there usually any fresh analysis of the core technology in a given school—subject content and instructional practices (Koyama, 2015). In short, inside the school, "government efforts to impose... standardization in education... is complexified by the inter-related, the local, the specific, and the idiosyncratic" (Koyama, 2015: p. 552).

From a long history of failed and occasionally mixed (Nunnery, 1998) efforts in school reform, we know that no "strategy stand[s] out as universally effective or sufficiently robust to overcome the power of local context"; that is, context is critical to recovery efforts (Mintrop & Sunderman, 2009: p. 356). There is no one best approach; "context matters greatly" (Zavadsky, 2013: p. 7; Appleton, Christenson, & Furlong, 2008). Each school has a unique internal and external context and its own factors contributing to its chronic underperformance (Knudson, Shambaugh, & O'Day, 2011; Smylie, 1995). More specifically, we know that "prior history as well as existing routines, beliefs, and culture of the school will influence how interventions are interpreted and implemented," and that results depend on both the internal and external contexts (Aladjem, Birman, Orland, Harr-Robins, Heredia, Parrish, & Ruffini, 2010: p. 69; Giere, 2006). "A single approach will not be appropriate for every environment; turnaround efforts must be customized to the needs of a given school" (Knudson et al., 2011: p. 22; Ashton & Webb, 1986).

The logic of the process runs as follows: research-based scientific seeds can be identified, harvested, and transported to other seedbeds, where they will be planted, grow, and bloom. What we know is that a good part of this reform process has more often than not failed to actualize, to overcome the forces of local context (Koyama, 2015; Zavadsky, 2013). Transported seeds were often carried, with a minimum of analysis, to places where growth was extremely difficult (e.g., to schools with quite different histories, cultures, norms, and structures). Transported evidence-based reforms were often scattered on the ground to grow, but quite often on rock-hard and unreceptive soil, that is, on poor seedbeds for reform. Particularly damaging here was the already noted "assumption of implementation," an assumption that required little work beyond getting the reform ideas to the school. Equally troubling, three critical aspects of creating successful reforms—change processes and internal and external contexts—were often not considered; only content was put into play when new reform ideas appeared. More damage came from the fact that the seedbeds themselves contained elements—cultural, organizational, and professional norms and systems—likely to kill off reforms that did not fit "what was" (Smylie, 1995). "The characteristics, traditions, and organizational dynamics of the school system were more or less lethal obstacles" (Sarason, 1990: p. 12; Cuban, 1993). The process of reform has rarely been used to help make reforms fit that seedbed (Saksvik et al., 2002). And even when the need was understood, meshing content, change processes, and context was work to which most educators were unaccustomed—either from their development or from previous reform work. The "assumption of implementation" only compounded inadequate attention to context and inaccurate understandings of change processes.

Most of the reform seeds died on the ground. Those that worked their way into the soil were often left unattended. At times, they were dug up and replaced with new scientifically based ideas. Very few of the seeds ever blossomed. This story has dominated efforts at school reform for over 50 years.

Reformers of all types (professors, corporate leaders, educators, and politicians) have consistently relied on the "content" (i.e., the seeds) and have routinely left the change processes and contexts of reform unattended. This has contributed to, if not caused, reform failure, because processes and contexts are as, if not more, important than the content (Cohen, Peurach, Glazer, Gates, & Goldin, 2014; Fullan, 2010). Or, to turn to the seminal Rand Change Agent study, "what a project was mattered less than how it was carried out" (McLaughlin, 1990: p. 12). We move now to a third essential reason for the failure of large-scale reform.

5. Travel Limitations of Scientific Evidence

The recurring message that the use of research-based evidence provides a "new pathway" to school improvement is only marginally valid (see Cremin, 1961). Indeed, scholars have traced the focus on scientific research back at least a century, to the "scientific management" revolution in school management (Callahan, 1962; Tyack, 1974) and to "the movement to found teaching on a new science of learning" (Cremin, 1955: p. 304; Ellis & Fouts, 1993; Giere, 2006), both in the early years of the 20th century (Cuban, 2010). Findings reveal that those efforts were largely unproductive, sometimes damaging (Fullan, 1982; Sarason, 1990), and at other times disastrous (Ellis & Fouts, 1993). As was the case with its predecessors, the "new" scientific evidence has been grounded on the fundamental, misguided belief that the scientifically discovered DNA of this reform would work, even though the same hopeful belief has appeared regularly, with regular confirmation of disappointment and, at best, small and insignificant positive effects (see Biesta, 2007; Nunnery, 1998). That is, there is not much evidence that the current reform engine of new "scientific evidence" brought to education is working, and there is some reason to suggest that it may be even less productive than value- and practice-based evidence (Player & Katz, 2016; Vanderlinde & van Braak, 2010). Indeed, "the aura of certainty" developing around scientific research-based knowledge is somewhat troubling (Cochran-Smith & Lytle, 2006: p. 689).

In education, powerful and deeply ingrained values, structures, and procedures in schools and communities routinely drown out competing, or simply different, ideas; the pull back toward the status quo routinely undermines reform travel, pushing discordant reforms, even those arriving with a good deal of scientific acclaim, into the background. If we have learned anything about the process of change in the last 50 years, it is that simply putting research-anchored reforms into the hands of school people does little to help them understand how to use that information to improve their work (Boser & McDaniels, 2018; Cuban, 2013).

The providers of research-based knowledge routinely assume that scientific evidence holds the high ground in school reform and that it will drive school change forward in a robust fashion—an ill-informed position that fails to discern how little purchase research-based evidence has had in the struggles of school reform, currently and for the past century (see Cuban, 1990, 2010). The record shows that values, existing arrangements, and the wisdom of practice are valued more highly by educators, business leaders, and politicians than scientific evidence. Researchers working to help teachers and children are often perplexed when educators, corporate executives, and policy makers bend or combine the DNA of research-based knowledge toward or with the DNA from other sources of understanding (e.g., politically culled evidence, practice-based evidence) (McDonnell & Weatherford, 2013). And they are perplexed again when they observe that their evidence is simply ignored because teachers see limited or no meaningful connection between what researchers tell them is important and their "own daily work as teachers" (Cuban, 2010: p. 1), a situation that compounds with each failed reform effort.

Somewhat paradoxically, reforms have been known to fail because reformers often maintain an ahistorical outlook (Sarason, 1990; Silberman, 1970). "They look forward rather than back; and when they do need a history, they frequently prefer the fashioning of ideal ancestry to the acknowledgment of mortals" (Cremin, 1961: p. 8). Reformers have routinely been unaware that what they say has been said before, that it has almost always been tried before, and that it has almost always failed (Silberman, 1970: p. 179; Murphy & Bleiberg, 2019). Considerably more troubling are reform agents pushing ideas from their own worlds that have failed or accomplished much less than promised (Peck & Reitzug, 2014), for example, the recent packages for turning around failing schools pressed by corporate leaders with the backing of policy makers, even though in the corporate world itself fewer than three in ten companies have been able to exit turnaround status (Smarick, 2010). These examples of reformers' inability to help move schooling forward carry us to a fourth, and less visible, reason for the regular failure of state and federal school reform efforts.

6. Unlinked, Discordant, and Silenced Partners

There are seven important players engaged in the quest for school reform: teachers and school administrators, business executives, politicians, parents, researchers, children, and corporations. One of the first things one learns from the reform literature is that "education [is often] an adjunct to politics" (Cremin, 1961: p. 88) and to corporations. A second is that the partners are generally unconnected and often at odds with one another. They tend to work in their individual silos. They each develop school reform ideologies and change strategies anchored in the values, wisdom of practice, research, and self-interests of their own worlds—generally, "precluding understanding of any other perspective" (Sarason, 1990: p. 25). It is also unusual for differently powered reforms to be examined collectively to ascertain where there is agreement and where there is not, and where support is likely to develop and where it is not. Understandings from scientifically anchored research in education do not hold the high ground (i.e., are not privileged), not because such research has been unavailable but because it is often at odds with "knowledge" from other partners that is more highly esteemed. For example, scientific evidence about the poor results of retaining students is robust and hardly new. Yet calls from corporate leaders and politicians for holding back students who fail to meet well-developed academic targets are not uncommon and often prevail in the face of scientific knowledge.

Relatedly, research and reform prescriptions are often discounted because the critical partners that surround schools (researchers, policy makers, corporate executives, and foundation leaders) know little about education and schooling as they actually unfold (Fullan, 1982). The most distinct and troublesome pattern here is that these partners have only a foggy understanding of what actually unfolds in these organizations (Cuban, 2010; Fullan, 2010). "They have only the most superficial conception of the education system" (Sarason, 1990: p. 13) and "no experiential basis for the tasks they were asked to accept. Having little or no hands-on experience in schools is a very serious limitation on those with responsibility to make recommendations for improving them" (p. 24). Most have never personally visited a school to see education in real time. Not surprisingly, teachers and school administrators routinely judge solution strategies developed by these partners as discordant and unhelpful and, as such, "doomed to failure" (Silberman, 1970: p. 18). And even when required to adopt those strategies, educators often pay them little attention.

The second damaging insight, regularly uncovered and deeply embedded, is that the voices of the two most important reform partners, teachers and children, have been and remain routinely marginalized (Biesta, 2007). Knowledge from students is almost never valued: "Students in school are not treated as people whose opinions matter" (Fullan, 1982: p. 154). It is rarely even considered in federal and state reform efforts. We know that, as has generally been the case with earlier reform strategies put into play by politicians and corporate leaders (e.g., the adoption of business practices) (Blanton, 1920; School Administration and Teachers, 1918), the newest strategy added to the arsenal of school reform, research-based scientific evidence, has continued a century of the marginalization of teachers (Fullan, 2003, 2010). "They are often held in low regard by universities" (Cochran-Smith & Lytle, 1990: p. 7), politicians, and corporate leaders. Indeed, it is difficult to examine reform initiatives across time and fail to notice that teachers are often devalued and treated as second-class participants, as subjects, technicians, and hired hands "doing what they have been told to do" (Sarason, 1990: p. 50). They are "excluded from project development and often provided a 'mechanistic role'" (McLaughlin, 1990: p. 12).

Most critically, this marginalization increases strain (Lieberman, Saxl, & Miles, 1988) and reduces teacher efficacy and motivation, helping make teaching "an imperiled profession" (Ashton & Webb, 1986: p. 2). Ashton and Webb label this loss "the single greatest impediment to school improvement" (p. 1). How much has changed in the century since Blanton found that "the iron of school discipline has so deeply entered the souls of the great mass of teachers that... in regard to school affairs, they are as dumb [voiceless] as the bricks or stones of their own school buildings" (Blanton, 1920: pp. 156-157)?

This pattern is of concern for a number of reasons. "To suggest that research about 'what works' can replace normative professional judgment is not only to make an unwarranted leap from 'is' to 'ought'; it is also to deny practitioners the right not to act according to the evidence about what works if they judge a line of action would be educationally undesirable" (Biesta, 2007: p. 11). Also, "to the extent that the effort of change identifies meaningfully all those who directly or indirectly will be affected by the change, to that extent the effort stands a chance to be successful" (Sarason, 1982: p. 79). Or alternatively, "leaders who pre-suppose what the change should be and act in ways which preclude others' realities are bound to fail" (Fullan, 1982: p. 82).

This stance of silencing is troubling (Murphy, 2016; Lieberman et al., 1988). When "teachers are treated as hired hands" (Sizer, 1984: p. 84), laborers who are "done to" (Cooper, 1988), compliant production-line laborers (Livingston, 1992: p. 12), "paid help doing what they have been told to do" (Sarason, 1990: p. 50), implementers of other people's knowledge (Cochran-Smith & Lytle, 1990: p. 11), or "semi-skilled workers" (Rosenholtz, 1989: p. 24), which is almost always the case in reform efforts (Peck & Reitzug, 2014), the possibility that those reform initiatives will succeed is quite low. Stances of "scapegoating," "distancing," and "blaming" by researchers, politicians, and corporate executives are not unusual (Silberman, 1970). Silencing practitioner partners, rather than empowering them and allowing them to contribute to reform, has proven unhelpful (Fullan, 2010). So too has the practice of simply having them listen to descriptions of each new reform initiative, especially from outsiders such as the other players in the quest for improvement (e.g., corporate leaders, professors, researchers, and politicians) (Cochran-Smith & Lytle, 2006). And on those occasions when reforms do get put into place, sustainability is unusual (Beishuizen, Hof, Van Putten, Bouwmeeter, & Asscher, 2010).

We also learn that children and young people have almost no voice in significant reforms (Murphy, 2016; Kirshner & Jefferson, 2015). “Students are not treated as people whose opinions matter” (Fullan, 1982: p. 154). Students are often viewed as part of the background or “products of school” (Sarason, 1990: p. 113; Livingston, 1992), objects to be worked upon, “or a form of raw material” (Rist, 1973: p. 4) or outcome data (Cusick, 1973; Kirshner & Jefferson, 2015)— “almost entirely as objects of reform” (Levin, 2000: p. 156). “The interests of children are seen as interferences in learning” (Sarason, 1982: p. 123). They are often viewed as “incompetent and incomplete” (Holloway & Valentine, 2004: p. 5)—as “unhealthy (or diseased) patients” (Cochran-Smith & Lytle, 2006, p. 681), “as dependent and incapable” (Flutter & Rudduck, 2004: p. 3). The concept of student perspective “runs counter to reform efforts which have been based on adult ideas about conceptualization and practice of education” (Cook-Sather, 2002: p. 38). And Fullan and other scholars have reminded us of the cost of not listening to students: “Students are people too. Unless they have some meaningful (to them) role... most educational change will fail” (Fullan, 1993: p. 147). “It is what the student sees that counts” (Maehr & Midgley, 1996: p. 87). “We need to try to understand where young people are coming from and how such understanding can help us with the task of school improvement” (Murphy, 2016). Or as Mergendoller and Packer (1985: p. 581) capture it, “thorough understanding of these [students’] perceptions is necessary if appropriate interventions are to be made in school organization and classroom instruction.” “Young people have unique perspectives on learning, teaching and schooling; their insights warrant not only the attention but also the responses of adults; they should be afforded opportunities to actively shape their education” (Cook-Sather, 2006: p. 359).

7. The De-Educationalization of Turnaround: A Lack of Focus on the DNA of Effective Schools

Over the last 50 years, we have learned that productive schools are defined by two elements: "academic press" (Murphy, Weil, Hallinger, & Mitman, 1982) and a "culture of care." We also know that both are required for schools to be effective (Bryk et al., 2010). A robust focus on only one element does not provide a platform for student success (Crosnoe, 2011). Or, as Bryk and colleagues (2010: p. 60) have found, "a press toward higher academic standards must be coupled with ample personal support." Ancess (2000: p. 595) refers to this as "a combination of nurture and rigor or affiliation and academic development." Research also reveals that, to be most effective, academic press and care must be "braided together" (Antrop-González, 2006: p. 274).

Academic press in productive schools is defined in three domains: quality instruction, a significant amount of academic learning time, and robust curriculum content. With the exception of curricular content, which is addressed explicitly in state and professional standards, turnaround work has very little to say beyond the generalization that effective teachers who produce higher test scores are preferred over ineffective teachers who produce marginal or negative impacts on measures of student learning, a tautology that provides no insight into the elements of quality instruction—a finding that is consistent with the general pattern of school reform over the last 50 years (Diamond, 2012; Fullan, 2010). And when those in schools do not have voice, we often see "reduced interest and motivation, passionless conformity, and at worst rejection of learning" (Sarason, 1990: p. 83).

The other essential element of learning, academic learning time (Carroll, 1963), is conspicuous by its absence. There are large-scale reform initiatives that attempt to increase the time allotted to instruction. But the two more critical elements of academic learning time—the amount of engaged time spent in a student's zone of development and success rates of 80% or higher—are nearly invisible in school reform efforts.
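
To make the construct concrete, Carroll's model is commonly paraphrased as making the degree of learning a function of the ratio of the time a student actually spends learning to the time that student needs; the restatement below uses our own notation, not Carroll's (1963) original symbols:

\text{degree of learning} = f\left(\frac{\text{time actually spent}}{\text{time needed}}\right)

where time actually spent is bounded by the opportunity to learn (allocated time) and the learner's perseverance, and time needed reflects aptitude, the ability to understand instruction, and the quality of instruction. On this reading, simply allocating more instructional time accomplishes little unless students are engaged and working on tasks they can complete successfully.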

Given that we have known for 40 years that high quality instruction and concentrated time in the zone of student development are the two most critical elements in explaining student learning, it is not surprising that most reform efforts have failed.

If present at all, the essential elements of care (Murphy & Torre, 2014) receive scant attention: teachers and administrators expressing a strong interest in children and youngsters as persons; revealing themselves as persons; challenging students and providing significant support; valuing students; treating students with respect; seeing through the eyes of students; trusting students; viewing students in a positive light, not as deficient or damaged; and molding schooling to children rather than bending children to schools. Or, as Lambert and colleagues (2018) tell us, "while developing excellence in knowledge and skills, academic institutions have overlooked their obligation to instill wellbeing" (p. 15).

8. Conclusion

As is the case for a good deal of what unfolds in schooling, the story of reform is multi-faceted and complex. So too is the story of the failure of reforms in schools. When we step back and remind ourselves that these efforts often came to education whole cloth from other worlds, the conclusion of consistent failure should not be a surprise. The assumption that reforms will take root and blossom runs counter to the available knowledge about how education adopts reforms. A more accurate assumption is that these reforms will fail. If they are to have a chance of success, it must be understood that the voyage from arrival to operation is not a smooth one. The reality is that the schooling industry has internal and external contexts that are, in many ways, quite different from those found elsewhere; those contexts often do not support reform initiatives. This requires educators to bend reform lessons toward schools rather than accepting those lessons as they are or bending schools to them. Even more critical is the reality that, absent industry-specific knowledge—guidance that has not come with large-scale reforms—laws and regulations designed to power improvements have little hope of success. The analysis presented above tells us that reform is possible only when lessons are shaped by knowledge about the actions in schools that are necessary for student success.

Note

A partial list of reform failures:

New American Schools

Career Ladder

Experimental Schools Program

School-Based Management (SBM)

No Child Left Behind (NCLB) (2002)

School Improvement Grants (SIG)

New Math (1960s)

Ungraded and Non-Graded Classes

School Restructuring

Whole School Reform

Large-Scale Curriculum Reform (1950s-1960s)

The Educational Technology Decade (1965-1975)

Life Adjustment Movement (1948-1950)

Every Student Succeeds Act (ESSA) (2015)

Tracking

American High School Reforms

American Recovery & Reinvestment Act

Learning Styles

Infiltration of Elective Classes

Ability Grouping

Program for Effective Teaching

Team Teaching

Instructional TV

Consolidation of Small Schools

Education for All American Youth

Comprehensive School Reform

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Aladjem, D. K., Birman, B. F., Orland, M., Harr-Robins, J., Heredia, A., Parrish, T. B., & Ruffini, S. J. (2010). Achieving Dramatic School Improvement: An Exploratory Study. A Cross-Site Analysis from the Evaluation of Comprehensive School Reform Program Implementation and Outcomes Study. Washington DC: Office of Planning, Evaluation and Policy Development, US Department of Education.
[2] Ancess, J. (2000). The Reciprocal Influence of Teacher Learning, Teaching Practice, School Restructuring, and Student Learning Outcomes. The Teachers College Record, 102, 590-619.
https://doi.org/10.1111/0161-4681.00069
[3] Antrop-González, R. (2006). Toward the “School as Sanctuary” Concept in Multicultural Urban Education: Implications for Small High School Reform. Curriculum Inquiry, 36, 273-301.
https://doi.org/10.1111/j.1467-873X.2006.00359.x
[4] Appleton, J. J., Christenson, S. L., & Furlong, M. J. (2008). Student Engagement with School: Critical Conceptual and Methodological Issues of the Construct. Psychology in the Schools, 45, 369-386.
https://doi.org/10.1002/pits.20303
[5] Asen, R., Gurke, D., Conners, P., Solomon, R., & Gumm, E. (2013). Lessons from Research Evidence and School Based Deliberations: Three Wisconsin School Districts. Educational Policy, 27, 33-63.
https://doi.org/10.1177/0895904811429291
[6] Ashton, P. T., & Webb, R. B. (1986). Making a Difference: Teachers’ Sense of Efficacy and Student Achievement. New York: Longman.
[7] Barnes, C. A., Goertz, M. E., & Massell, D. (2014). How State Education Agencies Acquire and Use Research Knowledge for School Improvement. In Using Research Evidence in Education (pp. 99-116). New York: Springer.
https://doi.org/10.1007/978-3-319-04690-7_8
[8] Beishuizen, J. J., Hof, E., Van Putten, C. M., Bouwmeeter, S., & Asscher, J. J. (2010). Students’ and Teachers’ Cognitions about Good Teachers. British Journal of Educational Psychology, 71, 185-201.
https://doi.org/10.1348/000709901158451
[9] Bell, S., & Pirtle, S. S. (2012). Transforming Low-Performing Rural Schools. Texas Comprehensive Center Briefing Paper, Austin, TX: Texas Comprehensive Center.
[10] Biesta, G. (2007). Why “What Works” Won’t Work: Evidence-Based Practice and the Democratic Deficit in Education Research. Educational Theory, 57, 1-22.
https://doi.org/10.1111/j.1741-5446.2006.00241.x
[11] Blanton, A. W. (1920). Democracy in School Administration. High School Quarterly, 8, 155-159.
[12] Borman, G. D., Rachuba, L., Datnow, A., Alberg, M., MacIver, M., Stringfield, S., & Ross, S. (2000). Four Models of School Improvement: Successes and Challenges in Reforming Low-Performing, High-Poverty Title I Schools. Baltimore, MD: Center for Research on the Education of Students Placed at Risk, Johns Hopkins University.
[13] Boser, U., & McDaniels, A. (2018). Addressing the Gap between Education Research and Practice: The Need for State Education Capacity Centers. Washington DC: Center for American Progress.
[14] Brady, R. C. (2003). Can Failing Schools Be Fixed? Washington DC: Thomas B. Fordham Foundation.
[15] Bryk, A. S., Sebring, P. B., Allensworth, E., Luppescu, S., & Easton, J. (2010). Organizing Schools for Improvement: Lessons from Chicago. Chicago, IL: University of Chicago Press.
https://doi.org/10.7208/chicago/9780226078014.001.0001
[16] Burch, P. (2007). Educational Policy and Practices from the Perspective of Institutional Theory: Crafting a Wider Lens. Educational Researcher, 36, 84-95.
https://doi.org/10.3102/0013189X07299792
[17] Callahan, R. E. (1962). Education and the Cult of Efficiency. Chicago, IL: University of Chicago Press.
[18] Carroll, J. (1963). A Model for School Learning. Teachers College Record, 64, 723-733.
[19] Coburn, C. E., Honig, M. I., & Stein, M. K. (2009). What’s the Evidence on Districts’ Use of Evidence? In J. Bradford, D. Stipek, L. Gomez, D. Lam, & N. Vye (Eds.), The Role of Research in Educational Improvement (pp. 67-87). Cambridge, MA: Harvard Educational Press.
[20] Coburn, C. E., Penuel, W. R., & Geil, K. E. (2013). Research-Practice Partnerships: A Strategy for Leveraging Research for Educational Improvement in School Districts. New York: William T. Grant Foundation.
[21] Cochran-Smith, M., & Lytle, S. L. (1990). Teacher Research and Research on Teaching: The Issues That Divide. Educational Researcher, 19, 2-11.
https://doi.org/10.3102/0013189X019002002
[22] Cochran-Smith, M., & Lytle, S. L. (2006). Troubling Images of Teaching in No Child Left Behind. Harvard Education Review, 76, 668-697.
https://doi.org/10.17763/haer.76.4.56v8881368215714
[23] Cohen, D. K., Peurach, D. J., Glazer, J. L., Gates, K. E., & Goldin, S. (2014). Improvement by Design: The Promise of Better Schools. Chicago, IL: University of Chicago Press.
https://doi.org/10.7208/chicago/9780226089416.001.0001
[24] Cook-Sather, A. (2002). Authorizing Students’ Perspectives: Toward Trust, Dialogue, and Change in Education. Educational Researcher, 31, 36-59.
https://doi.org/10.3102/0013189X031004003
[25] Cook-Sather, A. (2006). Sound, Presence, and Power: “Student Voice” in Educational Research and Reform. Curriculum Inquiry, 36, 359-390.
https://doi.org/10.1111/j.1467-873X.2006.00363.x
[26] Cooper, M. (1988). School as a Place to Have a Career. In A. Lieberman (Ed.), Whose Culture Is It, Anyway? (pp. 45-54). New York: Teachers College Press.
[27] Cremin, L. A. (1955). The Revolution in American Secondary Education, 1893-1918. Teachers College Record, 56, 295-308.
[28] Cremin, L. A. (1961). The Transformation of the School: Progressivism in American Education 1876-1957. New York: Vintage.
[29] Creswell, J. W. (2013). Qualitative Inquiry and Research Design. Thousand Oaks, CA: Sage.
[30] Crosnoe, R. (2011). Fitting In, Standing Out: Navigating the Social Challenges of High School to Get an Education. Cambridge: Cambridge University Press.
https://doi.org/10.1017/CBO9780511793264
[31] Cuban, L. (1990). Reforming Again, Again, and Again. Educational Researcher, 19, 3-13.
https://doi.org/10.3102/0013189X019001003
[32] Cuban, L. (1993). How Teachers Taught: Constancy and Change in American Classrooms, 1890-1990. New York: Teachers College Press.
[33] Cuban, L. (2010). Evidence, Beliefs, and a Science of Education.
https://larrycuban.wordpress.com/2011/05/21/evidence-beliefs-and-a-science-of-education
[34] Cuban, L. (2013). Inside the Black Box of Classroom Practice: Change without Reform in American Education. Cambridge, MA: Harvard Education Press.
[35] Cuban, L., & Usdan, M. (2003). Learning from the Past. In L. Cuban, & M. Usdan (Eds.), Powerful Reforms with Shallow Roots (pp. 1-15). New York: Teachers College Press.
[36] Cusick, P. (1973). Inside High School. The Student’s World. New York: Holt, Rinehart and Winston.
[37] Cusick, P. (1983). The Egalitarian Ideal and the American High School: Studies of Three Schools. New York: Longman.
[38] Diamond, J. (2012). Accountability Policy, School Organization, and Classroom Practice: Partial Recoupling and Educational Opportunity. Education and Urban Sociology, 44, 151-182.
https://doi.org/10.1177/0013124511431569
[39] Dynarski, M. (2015). Using Research to Improve Education under the Every Student Succeeds Act. Evidence Speaks Reports, Vol. 1, No. 8, Brookings: Center on Children and Families.
[40] Ellis, A. K., & Fouts, J. T. (1993). Research on Educational Innovations. Princeton Junction, NJ: Eye on Education.
[41] Elo, S., & Kyngas, H. (2008). The Qualitative Content Analysis Process. Journal of Advanced Nursing, 62, 107-115.
https://doi.org/10.1111/j.1365-2648.2007.04569.x
[42] Evans, R. (1996). The Human Side of School Change: Reform, Resistance, and Real-Life Problems of Innovation. San Francisco, CA: Wiley.
[43] Flutter, J., & Rudduck, J. (2004). Consulting Pupils: What’s in It for Schools? London: Routledge.
https://doi.org/10.4324/9780203464380
[44] Fornes, S. L., Rocco, T. S., & Wollard, K. K. (2008). Workplace Commitment: A Conceptual Model Developed from Integrative Review of the Research. Human Resource Development Review, 7, 339-357.
https://doi.org/10.1177/1534484308318760
[45] Fullan, M. (1982). The Meaning of Educational Change. New York: Teachers College Press.
[46] Fullan, M. (1993). Change Forces: Probing the Depths of Educational Reform. London: Falmer.
[47] Fullan, M. (2010). All Systems Go: The Change Imperative for Whole System Reform. Thousand Oaks, CA: Corwin.
[48] Gale, N. K., Heath, G., Cameron, E., Rashid, S., & Redwood, S. (2013). Using the Framework Method for the Analysis of Qualitative Data in Multi-Disciplinary Health Research. BMC Medical Research Methodology, 13, 117.
https://doi.org/10.1186/1471-2288-13-117
[49] Giere, R. (2006). Scientific Perspectivism. Chicago, IL: University of Chicago Press.
https://doi.org/10.7208/chicago/9780226292144.001.0001
[50] Hampel, R. L. (1986). The Last Little Citadel: American High Schools since 1940. Boston, MA: Houghton Mifflin Company.
[51] Herman, R. (2012). Scaling School Turnaround. Journal of Education for Students Placed at Risk (JESPAR), 17, 25-33.
https://doi.org/10.1080/10824669.2012.637166
[52] Holloway, S. L., & Valentine, G. (2004). Children’s Geographies and the New Social Studies of Childhood. In S. L. Holloway, & G. Valentine (Eds.), Children’s Geographies: Playing, Living, Learning (pp. 1-26). London: Routledge.
https://doi.org/10.4324/9780203017524
[53] Honig, M. I., & Coburn, C. (2008). Evidence-Based Decision Making in School District Central Offices toward a Policy and Research Agenda. Educational Policy, 22, 578-608.
https://doi.org/10.1177/0895904807307067
[54] Hsieh, H. F., & Shannon, S. E. (2005). Three Approaches to Qualitative Content Analysis. Qualitative Health Research, 15, 1277-1288.
https://doi.org/10.1177/1049732305276687
[55] Kirshner, B., & Jefferson, A. (2015). Participatory Democracy and Struggling Schools: Making Space for Youth in School Turnarounds. Teachers College Record, 117, Article ID: 060303.
[56] Knudson, J., Shambaugh, L., & O’Day, J. (2011). Beyond the School: Exploring a Systemic Approach to School Turnaround Policy and Practice Brief. San Mateo, CA: California Collaborative on District Reform.
[57] Kowal, J. M., & Hassel, E. A. (2005). Turnarounds with New Leaders and Staff. Washington DC: The Center for Comprehensive School Reform and Improvement.
[58] Koyama, J. (2015). When Things Come Undone: The Promise of Dissembling Education Policy. Discourse: Studies in the Cultural Politics of Education, 36, 548-559.
https://doi.org/10.1080/01596306.2015.977012
[59] Lambert, L., Passmore, H. A., & Joshanloo, M. (2018). A Positive Psychology Intervention Program in a Culturally-Diverse University: Boosting Happiness and Reducing Fear. Journal of Happiness Studies, 20, 1141-1162.
https://doi.org/10.1007/s10902-018-9993-z
[60] Lauer, P. A., Akiba, M., Wilkerson, S. B., Apthorp, H. S., Snow, D., & Martin-Glenn, M. L. (2006). Out-of-School-Time Programs: A Meta-Analysis of Effects for At-Risk Students. Review of Educational Research, 76, 275-313.
https://doi.org/10.3102/00346543076002275
[61] Levin, B. (2000). Putting Students at the Centre in Education Reform. Journal of Educational Change, 1, 155-172.
https://doi.org/10.1023/A:1010024225888
[62] Lieberman, A., Saxl, E. R., & Miles, M. B. (1988). Teacher Leadership: Ideology and Practice. In A. Lieberman (Ed.), Building a Professional Culture in Schools (pp. 148-166). New York: Teachers College Press.
[63] Little, J. W. (1987). Teachers as Colleagues. In V. Richardson-Koehler (Ed.), Educators’ Handbook: A Research Perspective (pp. 491-518). White Plains, NY: Longman.
[64] Livingston, C. (1992). Teachers as Leaders: Evolving Roles. West Haven, CT: NEA Professional Library.
[65] Lubienski, C., Scott, J., & DeBray, E. (2014). The Politics of Research Production, Promotion, and Utilization in Educational Policy. Educational Policy, 28, 131-144.
https://doi.org/10.1177/0895904813515329
[66] Maehr, M. L., & Midgley, C. (1996). Transforming School Cultures. Boulder, CO: Westview Press.
[67] McDonnell, L. M., & Weatherford, M. S. (2013). Evidence Use and the Common Core State Standards Movement: From Problem Definition to Policy Adoption. American Journal of Education, 120, 1-25.
https://doi.org/10.1086/673163
[68] McLaughlin, M. W. (1990). The Rand Change Agent Study: Macro Perspectives and Micro Realities. Educational Researcher, 19, 11-16.
https://doi.org/10.3102/0013189X019009011
[69] Mergendoller, J. R., & Packer, M. J. (1985). Seventh Graders’ Conceptions of Teachers: An Interpretive Analysis. The Elementary School Journal, 84, 581-600.
https://doi.org/10.1086/461423
[70] Meyers, C. V., & Hitt, D. H. (2017). School Turnaround Principals: What Does Initial Research Literature Suggest They Are Doing to Be Successful? Journal of Education for Students Placed at Risk (JESPAR), 22, 38-56.
https://doi.org/10.1080/10824669.2016.1242070
[71] Mintrop, H., & Sunderman, G. L. (2009). Predictable Failure of Federal Sanctions-Driven Accountability for School Improvement—And Why We May Retain It Anyway. Educational Researcher, 38, 353-364.
https://doi.org/10.3102/0013189X09339055
[72] Mintrop, H., MacLellan, A. M., & Quintero, M. F. (2001). School Improvement Plans in Schools on Probation: A Comparative Content Analysis across Three Accountability Systems. Educational Administration Quarterly, 37, 197-218.
https://doi.org/10.1177/00131610121969299
[73] Murphy, J., Weil, M., Hallinger, P., & Mitman, A. (1982). Academic Press: Translating High Expectations into School Policies and Classroom Practices. Educational Leadership, 40, 22-26.
[74] Murphy, J., & Torre, D. (2014). Creating Productive Cultures in Schools: For Students, Teachers, and Parents. Thousand Oaks, CA: Corwin.
https://doi.org/10.4135/9781506342733
[75] Murphy, J. (2016). Understanding Schooling through the Eyes of Students. Thousand Oaks, CA: Corwin.
[76] Murphy, J., & Bleiberg, J. F. (2019). School Turnaround Policies and Practices in the US: Learning from Failed School Reform. Berlin: Springer.
https://doi.org/10.1007/978-3-030-01434-6
[77] Neal, J. W., Neal, Z. P., Kornbluh, M., Mills, K. J., & Lawlor, J. A. (2015). Brokering the Research Practice Gap: A Typology. American Journal of Community Psychology, 56, 422-435.
https://doi.org/10.1007/s10464-015-9745-8
[78] Nunnery, J. A. (1998). Reform Ideology and the Locus of Development Problem in Education Restructuring. Education and Urban Society, 30, 277-295.
https://doi.org/10.1177/0013124598030003002
[79] Peck, C., & Reitzug, U. C. (2014). School Turnaround Fever: The Paradoxes of a Historical Practice Promoted as a New Reform. Urban Education, 49, 8-38.
https://doi.org/10.1177/0042085912472511
[80] Player, D., & Katz, V. (2016). Assessing School Turnaround: Evidence from Ohio. The Elementary School Journal, 116, 675-698.
https://doi.org/10.1086/686467
[81] Popay, J., Roberts, H., Sowden, A., Petticrew, M., Arai, L., Rodgers, M., & Duffy, S. (2006). Guidance on the Conduct of Narrative Synthesis in Systematic Reviews. A Product from the ESRC Methods Programme, Version 1, b92.
[82] Putt, A., & Springer, J. (1989). Policy Research: Concepts, Methods, and Applications. Englewood Cliffs, NJ: Prentice Hall.
[83] Reynolds, D., Harris, A., Clarke, P., Harris, B., & James, S. (2006). Challenging the Challenged: Developing an Improvement Programme for Schools Facing Exceptionally Challenging Circumstances. School Effectiveness and School Improvement, 17, 425-439.
https://doi.org/10.1080/09243450600743509
[84] Rist, R. C. (1973). The Urban School: A Factory for Failure. A Study of Education in American Society. Cambridge, MA: MIT Press.
[85] Rodgers, M., Sowden, A., Petticrew, M., Arai, L., Roberts, H., Britten, N., & Popay, J. (2009). Testing Methodological Guidance on the Conduct of Narrative Synthesis in Systematic Reviews: Effectiveness of Interventions to Promote Smoke Alarm Ownership and Function. Evaluation, 15, 49-73.
https://doi.org/10.1177/1356389008097871
[86] Rosenberg, L., Christianson, M. D., & Angus, M. H. (2015). Improvement Efforts in Rural Schools: Experiences of Nine Schools Receiving School Improvement Grants. Peabody Journal of Education, 90, 194-210.
https://doi.org/10.1080/0161956X.2015.1022109
[87] Rosenholtz, S. J. (1989). Teachers’ Workplace: The Social Organization of Schools. White Plains, NY: Longman.
[88] Saksvik, P. O., Nytro, K., Dahl-Jorgensen, C., & Mikkelsen, A. (2002). A Process Evaluation of Individual and Organizational Occupational Stress and Health Interventions. Work & Stress, 16, 37-57.
https://doi.org/10.1080/02678370110118744
[89] Sarason, S. B. (1982). The Culture of School and the Problem of Change (2nd ed.). Boston, MA: Allyn & Bacon.
[90] Sarason, S. B. (1990). The Predictable Failure of Educational Reform: Can We Change Course before It’s Too Late? San Francisco, CA: Jossey-Bass.
[91] School Administration and Teachers (1918). School and Society, 8, 740-741.
[92] Silberman, C. E. (1970). Crisis in the Classroom: The Remaking of American Education. New York: Vintage Books.
[93] Sizer, T. R. (1984). Horace’s Compromise: The Dilemma of the American High School. Boston, MA: Houghton Mifflin.
[94] Smarick, A. (2010). The Turnaround Fallacy. Education Next, 10, 20-26.
[95] Smylie, M. A. (1995). New Perspectives on Teacher Leadership. The Elementary School Journal, 96, 3-7.
https://doi.org/10.1086/461811
[96] Tyack, D. B. (1974). The One Best System: A History of American Urban Education. Cambridge, MA: Harvard University Press.
[97] Vanderlinde, R., & van Braak, J. (2010). The Gap between Educational Research and Practice: Views of Teachers, School Leaders, Intermediaries and Researchers. British Educational Research Journal, 36, 299-316.
https://doi.org/10.1080/01411920902919257
[98] Wise, A. E. (1979). Legislated Learning: The Bureaucratization of the American Classroom. Berkeley, CA: University of California Press.
[99] Wolcott, H. F. (1997). Teachers versus Technocrats: An Educational Innovation in Anthropological Perspective. Eugene, OR: University of Oregon, Center for Educational Policy and Management.
[100] Zavadsky, H. (2013). Scaling Turnaround: A District-Improvement Approach. Washington DC: American Enterprise Institute for Public Policy Research.
[101] Zhang, Y., & Wildemuth, B. M. (2017). Qualitative Analysis of Content. In B. M. Wildemuth (Ed.), Applications of Social Research Methods to Questions in Information and Library Science (2nd ed., pp. 318-329). Belmont, CA: Brooks/Cole.
