Sheryl S. Lazarus, Diane L. Ryndak, Craig B. Howley, Patricia McDaid, Kristin K. Liu, Deborah Taub, Aimee Howley, Meghan Cosier, James Clifton, Deborah Telfer, Kara Holden, Martha L. Thurlow, and Terri Vandercook
All rights reserved. Any or all portions of this document may be reproduced and distributed without prior permission, provided the source is cited as:
Lazarus, S. S., Ryndak, D. L., Howley, C. B., McDaid, P., Liu, K. K., Taub, D., Howley, A., Cosier, M., Clifton, J., Telfer, D., Holden, K., Thurlow, M. L., & Vandercook, T. (2019). Using systems change efforts to implement and sustain inclusive education practices in general education settings for students with the most significant cognitive disabilities: A review of the literature (TIES Center Report 102). Minneapolis, MN: University of Minnesota, The TIES Center.
TIES Center is supported through a cooperative agreement between the University of Minnesota (# H326Y170004) and the Research to Practice Division, Office of Special Education Programs, U.S. Department of Education. The Center is affiliated with the National Center on Educational Outcomes (NCEO) which is affiliated with the Institute on Community Integration (ICI) at the College of Education and Human Development, University of Minnesota. The contents of this report were developed under the Cooperative Agreement from the U.S. Department of Education, but do not necessarily represent the policy or opinions of the U.S. Department of Education or Offices within it. Readers should not assume endorsement by the federal government.
Project Officer: Susan Weigert
The National Center on Educational Outcomes (NCEO) leads the TIES Center partnership. There are six additional collaborating partners: Arizona Department of Education, CAST, University of Cincinnati, University of Kentucky, University of North Carolina–Charlotte, and University of North Carolina–Greensboro.
The least restrictive environment (LRE) clause of the Individuals with
Disabilities Education Act (IDEA) states that students with disabilities
should be included with their grade-level peers without disabilities in
general education classes and other settings “to the maximum extent
appropriate” (U.S. Department of Education, 2004). However, most
students with the most significant cognitive disabilities continue to be
placed in separate settings where their exposure to general education
classes, grade-level peers, and general education core curriculum is
limited. The purpose of this literature review is to present information
about how systems change efforts can guide initiatives to increase and
sustain the placement of students with the most significant cognitive
disabilities in inclusive general education settings, as well as
increase and sustain their opportunities to learn core academic
standards-based curriculum through the implementation of inclusive
education practices.
Two sets of literature are reviewed in this report. First, we review the implementation science literature; this is followed by a review of the improvement science literature. We end with Conclusions and Discussion of how systems change efforts can support the implementation and sustainability of inclusive education practices for students with the most significant cognitive disabilities, and we identify several components associated with effective and sustainable systemic change efforts related to the implementation of inclusive practices.
The findings from the systems change literature can guide initiatives to increase and sustain the placement of students with the most significant cognitive disabilities in inclusive general education settings in their home schools, as well as the implementation and sustained use of EBPs in those settings.
The least restrictive environment (LRE) clause of the Individuals with
Disabilities Education Act (IDEA) states that students with disabilities
should be included with their grade-level peers without disabilities in
general education classes and other settings “to the maximum extent
appropriate” (U.S. Department of Education, 2004). LRE has been included
in federal statute since 1975 (U.S. Department of Education, 1994);
however, the placement of students with the most significant cognitive
disabilities in general education settings has been an ongoing struggle
since the inception of the statute. Students with the most significant
cognitive disabilities represent about 1% of the student population in
schools (Thurlow & Lazarus, 2017). It is a frustrating reality that
these students continue to be placed in separate settings where their
exposure to general education classes, grade-level peers, and general
education core curriculum is limited (Kurth, Morningstar, & Kozleski,
2014; Morningstar, Kurth, & Jackson, 2017).
The purpose of this literature review is to present information about how systems change efforts can guide initiatives to increase and sustain the placement of students with the most significant cognitive disabilities in inclusive general education settings, as well as increase and sustain opportunities for these students to learn core academic standards-based curriculum through the implementation of inclusive education practices. Although it draws on key insights from literature on business management, medicine, communications, and organizational sociology, this review primarily focuses on literature about educational systems. This literature review is divided into two sections—one on implementation science literature, and one on improvement science literature. Because there is a dearth of systems change literature that addresses students with the most significant cognitive disabilities, we start each section with a broad focus on extant systems change literature as applied to school improvement. When available, we then focus on special education in general and, when possible, literature on students with the most significant cognitive disabilities. Then, in the Conclusions and Discussion, we extrapolate from the presented information to discuss how systems change efforts can support increased placement in general education settings, as well as the implementation and sustainability of inclusive education practices for students with the most significant cognitive disabilities. We also identify several components associated with effective and sustainable systemic change efforts related to the implementation of inclusive practices.
This section starts with an overview of implementation science, which researchers have conceptualized in different ways. We then describe how implementation science has been used by educators to promote adoption of school improvement strategies. Finally, we consider how it has been used in the field of special education, including its use as it relates to the education of students with the most significant cognitive disabilities.
Implementation science has its roots in the fields of medicine and behavioral health, where it was defined as “the scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices into routine practice, and, hence to improve the quality and effectiveness of health services and care” (Eccles & Mittman, 2006, p. 1).
The National Implementation Research Network (NIRN) (Blase, Fixsen, & Duda, 2011) provided perhaps the most specific definition of implementation science for our purposes:
implementation science is the scientific study of variables and conditions that impact changes at practice, organization, and systems levels; changes that are required to promote the systematic uptake, sustainability and effective use of evidence-based programs and practices in typical service and social settings (presentation transcript, p. 1).
Implementation science efforts often start with an under-used evidence-based practice (EBP) that reflects a “theory to practice” gap, and then identify and address that gap across individual and administrative systems of responsibility (Bauer, Damschroder, Hagedorn, Smith, & Kilbourne, 2015). In school districts where the placement of students with the most significant cognitive disabilities in general education settings and the implementation of evidence-based inclusive education practices are the norm, the individual and administrative systems supporting these students have undergone a deliberate process of change (Ryndak, Reardon, Benner, & Ward, 2007). Implementation science can offer both theoretical and practical knowledge to support sustainable change in educational practices and systems.
In both the health care and education fields, researchers were challenged by the fact that clinical research findings were not being used in daily professional practice (Balas & Boren, 2000; Morris, Wooding, & Grant, 2011). For example, Balas and Boren (2000) found that it could take upwards of 17 years for a medical research finding to become common clinical practice, and that only half of EBPs were ever put into clinical use. In a 2005 synthesis of implementation literature, Fixsen, Naoom, Blase, Friedman, and Wallace (2005) found this same challenge noted in reports from several agencies, offices, and organizations, including the Surgeon General of the United States, United States Department of Health and Human Services, National Institute of Mental Health, National Advisory Mental Health Council Workgroup on Child and Adolescent Mental Health Intervention Development and Deployment, Institute of Medicine, and the President’s New Freedom Commission on Mental Health.
Early work in the area of implementation science described three categories of existing methodologies: letting it happen, helping it happen, and making it happen (Greenhalgh, Robert, MacFarlane, Bate, & Kyriakidou, 2004; Hall & Hord, 1987). Letting it happen refers to the dissemination of information to a professional group. In this scenario, “early adopters,” or “champions,” of a particular practice choose to implement it. Helping it happen refers to efforts that also include studying those communication and training factors that contribute to increased use of the practice. The making it happen paradigm focuses specifically on those factors that make it more likely that a practice will be implemented with fidelity and in an ongoing manner. When we make it happen, practitioners change their daily practice and measurable improvements are made in the lives of individuals the practice is meant to serve.
French et al. (2012) described the need for “theory-informed implementation intervention” and devised a generalizable four-step process to guide implementation efforts: (a) identify who needs to do what differently; (b) determine which barriers and enablers need to be addressed; (c) determine which intervention components could overcome the modifiable barriers and enhance the enablers; and (d) identify how desired outcomes could be measured and understood. To maximize measurable outcomes when carrying out implementation efforts, it is necessary to operationalize the desired change into specific functions or behaviors expected of practitioners, and create an organizational infrastructure to support the desired behavioral change (Easterling & Metz, 2016).
As with any relatively new area of study, reporting on implementation science research has been complicated by the use of differing vocabulary and multifaceted strategies. Nilsen (2015) succinctly summarized three over-arching themes present across all models of implementation science: (a) describing or supporting the process of moving from theory to practice; (b) describing the factors that influence implementation outcomes; and (c) evaluating implementation efforts. He lauded the interdisciplinary approaches taken by implementation science researchers to review many fields of study for relevant theories and processes. Finally, he called for more empirical study of the extent to which implementation theories and models actually improve practices and their outcomes.
NIRN built on the steps identified in the systems change literature, highlighting two tools (i.e., practice profiles and implementation drivers) as key to successful implementation efforts in educational settings. Practice profiles operationalize the responsibilities, functions, and behaviors that practitioners must carry out for systemic change efforts to result in behavioral changes (State Implementation and Scaling-up of Evidence-based Practices Center [SISEP], 2014). The profile is created by the designers of the systems change effort, who consider the context in which they wish the intervention to be implemented. The specificity of a practice profile is particularly helpful in preventing the confusion, uncertainty, and differences in opinion or perception that often occur when long-standing procedures or protocols are changed (Hall & Hord, 2006).
Implementation drivers are those malleable attributes of individuals and systems that make it more likely that efforts to implement a new practice are successful and able to be maintained over time. NIRN describes three primary types of implementation drivers: (a) competency drivers, such as staff training and ongoing coaching; (b) organizational drivers, such as administrative support and data systems for decision making; and (c) leadership drivers, such as the flexibility and problem-solving skills of individuals in leadership roles (SISEP, 2014).
To identify the literature base supporting the use of implementation science in education, a search of two education databases—ERIC and EBSCO Education Source—was conducted for the years 2005-2018, searching on the exact phrase of “implementation science” in any field. The year range began at 2005 because that was the publication date of a seminal monograph by Fixsen et al. (2005) that put forth a synthesis of the literature on implementation science based primarily on its early use in the areas of medicine and behavioral health.
To refine the search and identify the most relevant documents, several additional searches were conducted that included both the exact phrase “implementation science” and a second term: school reform, educational reform, school improvement, special education, inclusive education, inclusion, severe disabilities, multiple disabilities, developmental disabilities, or severe cognitive disabilities.
This section presents the results of the implementation science literature review. First, we describe the documents that were identified. Then, we report the findings from the literature on the use of implementation science for school improvement, followed by studies addressing the use of implementation science in special education, including studies that focus on students with the most significant cognitive disabilities.
The preliminary search using just the term “implementation science”
identified 216 documents of all kinds (e.g., research, opinion)
written between 2005 and 2018. A search using the terms “implementation
science” and “school reform,” “education reform,” or “school
improvement” located 12 documents, all of which were relevant
publications. A search using the terms “implementation science” and
“special education” located 38 documents of which 24 were relevant. A
search using the terms “implementation science” and “inclusive
education” or “inclusion” located nine documents, only two of which
were relevant. A search using the terms
“implementation science” and “severe disabilities,” “multiple
disabilities,” or “developmental disabilities” located 10 documents of
which eight were relevant. A search using the terms “implementation
science” and “severe cognitive disabilities” yielded zero results. There
was overlap in documents identified across these secondary searches.
Sixty-four studies were selected to inform this review of the
implementation science literature. See Appendix A for a list of these
documents.
As the research base surrounding implementation science has grown, more efforts have been made to bring this approach to the area of school improvement. For example, Bryk, Ladd, O’Day, and Smith (2016) described the potential of implementation science to “support a systematic and continuous improvement approach to find solutions for many of the major education problems facing the [United States]” (p. 2). The Carnegie Foundation for the Advancement of Teaching (2019) also recommended the use of implementation science as a method to “accelerate learning and address problems of practice” to improve education systems and outcomes in the United States. In a 2017 memorandum, Principles of Effective School Improvement, the Council of Chief State School Officers (CCSSO) recommended the use of implementation science as a driver for the design of school improvement plans for low-performing schools (CCSSO, 2017). This recommendation has been addressed through a partnership between The Center for School Turnaround and NIRN to apply the principles of implementation science to reform efforts in the neediest schools in the United States (Jackson, Fixsen, & Ward, 2018).
In a study that explored factors that influenced implementation of a specific EBP in a school setting, Langley, Nadeem, Kataoka, Stein, and Jaycox (2010) determined that the primary reason for non-implementation of the EBP was “competing responsibilities.” They also found that the major barriers to implementation were at the systems and organizational levels.
Using a concept mapping process, Naoom, Wallace, Blase, Haines, and Fixsen (2004) ranked the most important implementation factors: (a) initial staff training; (b) leadership buy-in and support of the new model; (c) commitment of staff to the new model; (d) availability of ongoing training and technical assistance; (e) support from administration; (f) qualified staff interested in doing the work; (g) adequate funding; and (h) support from the program developer. As an example of addressing these factors, the state of Kentucky used an implementation science approach that resulted in improved student math outcomes, as well as the development of a sustainable state implementation infrastructure that could be used going forward to implement other new practices (Jackson, Fixsen, Ward, Waldroup, & Sullivan, 2018).
A second state example is Michigan, which used a systemic multi-tiered system of support (MTSS) approach to improve student outcomes. The state started with 22 schools in its first year, 2003, and in 2017 supported over 800 schools in 260 school districts across Michigan (Goodman, 2017). Such efforts can be seen through the lens of implementation science. For example, the designers of one MTSS practice—school-wide positive behavior interventions and supports (SW-PBIS)—assessed their ongoing efforts to scale up this practice through the lens of implementation science (Horner, Sugai, & Fixsen, 2017). Horner et al.’s strongest recommendation to others working to scale-up the use of EBPs in schools was to address the implementation of organizational systems in conjunction with daily practices. They described organizational systems as “the policies, operating procedures, allocation of personnel, professional development options, hiring and evaluation expectations, teaming protocols, and data systems that affect the who, where, when, and how practices are used daily in schools” (p. 29).
By treating schools and districts as complex systems, practitioners have implemented SW-PBIS across the country. According to Bohanon and Wu (2014), schools with SW-PBIS programs that have been informed by implementation science demonstrate higher implementation fidelity and better student outcomes. In a study of over 800 schools in 14 states, McIntosh et al. (2018) found that school characteristics and demographics did not predict sustained implementation of SW-PBIS. Rather, systems issues (e.g., level of implementation fidelity, better team use of data for decision-making in year one of implementation) were the strongest predictors of sustained implementation of SW-PBIS over three years. An additional strong predictor of long-term success was the number of other schools in the district (i.e., system) that were implementing SW-PBIS. This finding echoes Coburn (2003), who commented that in order to successfully scale up an innovation, deep changes in systems must occur, in addition to the necessary changes in surface structures and procedures.
The multilayered nature of administration and resource allocation that supports special education at the federal, state, district, and school levels demands that we consider the role of systems and processes that comprise special education if changes in placement of, and services for, students with the most significant cognitive disabilities and students with other extensive and pervasive support needs are to occur (Stahmer, Suhrheinrich, Schetter, & Hassrick, 2018). One clear link between current research on best practices in special education and the daily practice of special education practitioners is the diffusion of EBPs into the larger context of schools through the use of implementation science-informed practices. According to Cook and Odom (2013), “the cross-disciplinary field of implementation science has great relevance for translating the promise of EBPs into positive outcomes for children and youth with disabilities” (p. 135).
Odom (2009) recommended that high quality models of professional development in special education follow the principles of implementation science. The National Professional Development Center on Autism Spectrum Disorders (NPDC) uses a system it described as an example of implementation science in action, consisting of: (a) identifying the practice to be implemented; (b) planning and committing to an infrastructure of support at the state and community levels; (c) teaching practitioners directly; and (d) providing ongoing coaching and technical assistance for all practitioners to support their improved use of EBPs. One such model to support the implementation of a comprehensive high school program for students with autism spectrum disorder was described in detail in Odom, Duda, Kucharczyk, Cox, and Stabel (2014); it did not, however, report outcome data.
The EPIS (Exploration, Preparation, Implementation, and Sustainment) EBP implementation model was designed with consideration of human service organizations and their complex layers of systems of human social interaction (Aarons, Hurlburt, & Horwitz, 2011). The authors described a large variety of both inner factors (e.g., school-based, individual teacher based) and outer factors (e.g., socio-political context, inter-organizational systems) to consider when implementing each stage of the implementation model. Stahmer et al. (2018) proposed an adaptation of the EPIS model of managing change efforts for use in schools as an attempt to address the limited capacity of most state systems to scale up practices that likely would lead to improved outcomes for students with disabilities. They identified potential “malleable” district and school factors as important foci of EBP implementation and sustainment, including (a) attitudes toward the EBP, (b) district and school climate, (c) communication, (d) collaboration, (e) buy-in, (f) resource allocation, (g) trainer skills, and (h) teacher skills.
When considering the literature on the placement of, and implementation of inclusive education practices for, students with the most significant cognitive disabilities, there is limited use of the term “implementation science” within the context of strategies specifically related to system change efforts. There has been some research on efforts to develop inclusive education practices for these students, but historically these efforts have been embedded in federally-funded model demonstration projects. There is an important distinction, however, between model demonstration projects and implementation science approaches. Specifically, model demonstration projects develop and demonstrate the effectiveness of an intervention, practice, or approach to a system of service delivery, and then disseminate findings. Some model demonstration projects also focus on the replicability of the intervention, practice, or approach across teachers and schools. However, little, if any, focus is placed on the extent to which a school or district: (a) sustained the use of the intervention, practice, or approach to a system of service delivery over time; or (b) revised systemic policies, procedures, or infrastructures to address long-term systemic transformation.
Research provides evidence that when students with the most significant cognitive disabilities have access to the general education core curriculum and grade-level general education settings, as well as evidence-based instructional practices, the students have better outcomes in academics, social and behavioral skills, and post-secondary life (Ryndak, Jackson, & White, 2013; Ryndak, Morrison, & Sommerstein, 1999). The limited research that exists has used an implementation science approach to facilitate sustainable systemic change in educational services at the classroom, school, district, and state levels, while addressing the unique needs of students with the most significant cognitive disabilities (Ryndak et al., 2007).
A few studies have examined the implementation, replication, or sustainability of EBPs for students with the most significant cognitive disabilities that promote access to general education settings, grade-level peers, and the core curriculum using the school level as a unit of analysis (Fisher, Sax, & Grove, 2000; Kozleski & Choi, 2018; Kurth, Morningstar, Hicks, & Templin, 2018; Salisbury, Palombaro, & Hollowood, 1993; Shogren, McCart, Lyon, & Sailor, 2015). Several themes emerge from this small body of research. First, administration and leadership are critical for implementing change. Second, representatives of each set of stakeholders (teachers, parents, related service providers, school leaders) must engage in creating or re-envisioning a school-wide mission and vision if needed, as well as rethinking current school policies, procedures, and infrastructure. Third, professional development and technical assistance must align with the school’s mission and vision.
Kozleski and Choi (2018) found that if school personnel wanted to replicate and maintain the changes in service they had developed, having strong and informed leaders alone was insufficient. Having engaged leaders who received targeted professional development was critical to facilitating systemic change. For example, Kozleski and Choi found that when school leaders received technical assistance aligned with MTSS, an integrated educational framework, and family and community engagement, there were improvements in the degree to which students with the most significant cognitive disabilities, as well as other students with extensive and pervasive support needs, received services in inclusive general education settings.
Implementation of such frameworks requires addressing areas of professional development (e.g., EBPs for general and special education practitioners, including paraprofessionals and resource teachers), as well as administrative policies and procedures (e.g., scheduling time for planning and collaboration; reviewing, and potentially revising, roles and responsibilities of faculty and other school personnel) (Fisher et al., 2000; Giangreco & Suter, 2015; Kurth et al., 2018; Salisbury et al., 1993). Fisher et al. (2000) and Salisbury et al. (1993) highlighted the need for reconceptualized school-wide visions and beliefs around membership (e.g., students with extensive and pervasive support needs should be seen as being members of general education classes, the school, and the community) and responsibilities (e.g., the success of students with extensive and pervasive support needs is the responsibility of all the adults on their education teams). This supports the importance of collaboration and joint planning time, as well as clear communication and expectations. Practices such as effectively adapting curriculum (Salisbury et al., 1993) and using in-class supports and services (Fisher et al., 2000) also were found to be necessary components. All these practices can be addressed through comprehensive professional development that includes coaching, and by removing barriers to flexibility and collaboration by adjusting or re-envisioning current school policies, procedures, and infrastructure.
We identified only one study on sustainable district-level reform that specifically addressed placement of students with the most significant cognitive disabilities and students with other extensive and pervasive support needs, and the provision of inclusive education practices in general education settings for those students. Ryndak et al. (2007) studied the efforts of a school district to facilitate immediate and sustained changes to its services for students with severe disabilities over a seven-year period. Specifically, data were collected on student placement, the use of EBPs by education teams, district and school policies and procedures, and administrative responsibilities (e.g., hiring personnel, professional development, assessment, and grading) over a five-year period. District personnel, regional state-funded technical assistance providers, and university faculty collaborated to provide leadership, professional development workshops, job-embedded technical assistance and coaching, and data collection support across the district’s schools. They provided professional development across the district and monthly coaching to education teams in three schools, school and district inclusive education task forces, and a district inclusive education facilitator to address both scaling up and building local capacity for sustainability. The inclusive education facilitator then replicated the systems change efforts with the remaining schools in the district. The intent of these efforts was to increase the time that students with extensive and pervasive support needs were engaged in instruction with same-age classmates within general education settings in their home schools. This goal was achieved by using EBPs and embedding instruction with appropriate supports and services within general education instructional and non-instructional activities.
Leadership activities included district- and school-level task forces with representation across constituencies: (a) general and special educators; (b) curriculum and assessment specialists; (c) related services personnel; (d) parents; and (e) administrators. Members of the district and school task forces were also engaged at both levels to ensure communication that aligned district and school needs with inclusive education and other initiatives. Data were also collected two years following the termination of this effort. The analysis of data from this study led to three major findings. First, data indicated that the combination of professional development activities was effective at moving students with extensive and pervasive support needs to general education settings in the schools they would attend if they did not have a disability, increasing the use of EBPs in general education settings, and improving student outcomes. Second, data indicated that the communication structure was effective for maintaining communication both across the groups of constituents per level (e.g., general and special educators, related services personnel, grade-level personnel) and across participants engaged at different levels of service (i.e., district- and school-level participants). Finally, data indicated that these changes had been sustained two years following the end of the systemic change efforts, suggesting the successful development of local capacity.
Improvement science emerged within the past decade from developments in the health sciences (Bryk, Gomez, Grunow, & LeMahieu, 2015) and from earlier efforts in business, because even research-based interventions (e.g., those grounded in implementation science) have often failed to produce widespread and durable improvement in outcomes among marginalized groups (Coburn & Stein, 2010; Peterson, 2016). Over the past 15 years, much research about educational practices—programs or products used in schools, in particular—has sought to establish effectiveness as an average effect (e.g., in randomized controlled trials). Improvement science critiques this approach as flawed because it systematically ignores the varied contexts in which any practice might be used. Lewis (2015) claimed that “the failure of research-based knowledge to ‘scale up’ broadly is a central challenge in education” (p. 54).
Instead of ignoring variability, improvement science explicitly works with variability to extend the range of a practice’s effectiveness (Bryk, 2015). Improvement science recognizes the contextual variability in which any intervention might address a problem (e.g., Bryk et al., 2015), and offers a collective approach to engaging such variation in a scientific framework. Bryk et al. provided the following definition of improvement science:
[Improvement science] joins together [scientific] discipline… with the power of structured networked communities to accelerate learning to improve. It uses disciplined, analytic, and systematic methods to develop and test changes that achieve reliable improvements. It is inclusive in drawing together the expertise of practitioners, researchers, designers, technologists, and many others. And it is very deliberate in organizing its improvement activities in ways akin to a scientific community. (p. 475)
Improvement science in education embraces the concept of “what works” but addresses the durable phenomenon that “what works” often does not work well in practice. That is, from this perspective, the problems of school improvement itself require concerted scientific attention. Improvement science can be seen as an agenda with plans for an education delivery system that empowers workers (e.g., educators). It also provides the framework for a system of educational research and development (R&D).
Improvement science in education addresses the complexity of educational systems and variability in performance across settings (Bryk, 2015). It promotes networked improvement communities (NICs). The construct of improvement science has been strongly elaborated, promoted, and supported by the Carnegie Foundation for the Advancement of Teaching, directed by Anthony Bryk, since 2008. Bryk has exerted intellectual leadership of the effort, in part based on his school reform experience in Chicago (which began in 1988) and appreciation of improvement science in healthcare (Bryk et al., 2015).
As an educational construct, improvement science incorporates six key principles (Bryk et al., 2015, pp. 12-17):
The first principle centered improvement work on “problems” and reinforced the focus by indicating that “users” (i.e., practitioners) help identify the problems. The second principle recognized the need for “getting quality results under a variety of conditions” (Bryk et al., 2015, p. 35) as the improvement aim. It replaced “what works on average” with what works across extant contextual variation (e.g., Stockard, 2010). The third principle addressed the need for a systemic critique of the issue, which historically has been rare in both educational improvement efforts and scientifically grounded improvement efforts (Bickel, Tomasek, & Eagle, 2000; Kirp & Driver, 1995; Peterson, 2016). The fourth principle required an empirical approach, and is a prelude to the fifth principle, disciplined and systematic inquiry. The final principle situated the large effort imagined (implicit in the other principles) in collaborative communities, which are often referred to as NICs. These NICs are the locus of scale-up and of the improvement science methodological innovation.
To identify the literature base supporting the use of improvement science in education, a search of three education databases—ERIC, Education Research Complete, and Education Full Text—was conducted, searching on the exact phrase of “improvement science” in any field. “Networked improvement community” was also used as a search phrase. Additionally, a search was conducted of Dissertation Abstracts International to discover doctoral studies relevant to improvement science.
This section presents the results of the improvement science literature review. First, we describe the documents that were identified. Then we report the findings from the literature on the use of improvement science in post-secondary settings, followed by studies addressing K-12 teacher preparation and development, and K-12 academic instruction. The section ends with a discussion of findings related to special education.
No documents discovered in the non-dissertation literature published prior to 2010 proved relevant (i.e., they were false drops, e.g., “reading improvement; science”). The search yielded 61 documents of any sort (i.e., essays, guides, practitioner articles, and research articles) relevant to improvement science published between 2010 and 2018. Of the 61 indexed by the databases, 16 concerned health care or health professions education. Of the remaining 45, just 12 exhibited an empirical base of any sort, and in six of those 12, the reference to improvement science was tangential (i.e., not the focus of the empirical effort or conceptualization) or the scientific engagement too slight (e.g., reported briefly in a practitioner magazine). In short, just six empirical reports on this basis were available for review from the indexed databases of professional literature (Edwards, Sandoval, & McNamara, 2015; Gomez, Gomez, Rodela, Horton, Cunningham, & Ambrocio, 2016; Hannan, Russell, Takahashi, & Park, 2015; Martin & Gobstein, 2015; Proger, Bhatt, Cirks, & Gurke, 2017; Yamada & Bryk, 2016). All were published between 2015 and 2017. Three of the studies focused on community colleges (Edwards et al., 2015; Gomez et al., 2016; Yamada & Bryk, 2016). Four of the studies were sponsored by the Carnegie Foundation or were conducted by researchers closely connected with the Carnegie Foundation (Edwards et al., 2015; Gomez et al., 2016; Hannan et al., 2015; Yamada & Bryk, 2016). No improvement science studies were found that specifically addressed special education or students with the most significant cognitive disabilities.
In no dissertation study was improvement science a “main subject,” but it was used as one among several indexing terms in six studies (Daley, 2017; Lozano, 2017; Mathis, 2016; Morello-DeSerio, 2017; Novak, 2017; Pearce, 2015). All focused prominently on NICs, and all were completed between 2015 and 2017. Of the six dissertation studies, two only tangentially touched on improvement science (Mathis, 2016; Morello-DeSerio, 2017); they investigated problems and advised improvement science as a better way to address them (i.e., at the conclusion of each of the studies). Two studies (Daley, 2017; Pearce, 2015) purported to use improvement science in an intervention. However, neither Pearce nor Daley mentioned NICs at all; Pearce claimed use of the plan-do-study-act (PDSA) cycle, but did not describe the procedures. Novak (2017) made an attempt that included (and described) NICs and PDSAs, but the effort operated via existing organizational structures and procedures, and participants reportedly found the PDSA procedures distracting (e.g., needless paperwork). One dissertation (Lozano, 2017) among the six stands out, and will be described in more detail in this review.
The community college studies, though not focused on systems change at the K-12 level, can provide insights into what works and what does not within an educational context. Edwards et al. (2015) described a nationwide professional development (PD) program for community college faculty teaching either of two radically revised courses being scaled up nationwide to replace traditional “developmental” mathematics offerings, and the report provided insights into the realities and challenges of scale-up. The first iteration of the PD proved woefully inadequate in formative evaluations: users engaged the PD materials thinly and did not subscribe to the outcomes or the aims of the PD system. The report was couched as a case study of the PD redesign. The researchers collected no data outside the frame of the redesign activities, but they did interview 32 faculty. The interview data specified the issues faculty had with the first-year PD:
A PD redesign occurred via a NIC in a user-centered design process derived from the work of Stanford University’s Hasso Plattner Institute of Design: (a) empathize; (b) define the problem; (c) ideate; (d) prototype; and (e) test. These five steps became the design framework for the Faculty Support Program (FSP) put into place after the project’s difficult first year.
Another postsecondary study, Gomez et al. (2016), reported on a user-centered design approach to improve “story problems” for use in 12 lessons of a Carnegie-sponsored Quantway community-college math course designed to replace traditional remedial math in that setting. Quantway is a set of quantitative reasoning options designed to promote student success in community college mathematics courses. According to Gomez et al.:
This article’s guiding hypothesis is that using improvement science techniques to engage community college faculty in collaborative curriculum design and development may be a key site for faculty learning. (p. 451)
In other words, the study focused on the activities of the Quantway NIC, which centered on the plan-do-study-act (PDSA) cycle, as if the activity were a form of PD. This qualitative case study included 36 interviews (i.e., one per lesson for each of three of the 16 participants involved in the improvement effort in 2014-2015), with each participant considered an individual case. The study also analyzed artifacts, including ten hours of videotaped lessons. Data analysis drew on the relatively recent conception of professional development (i.e., long-term, job-embedded, collaborative). The study found advantages for user-centered design and also reported challenges. Advantages attributed to design-focused PD (of this sort) included:
The findings of Gomez et al. (2016) suggested that building and sustaining a functional NIC with sufficient footprint requires ample resources.
Yamada and Bryk (2016) reported the results of a quasi-experiment with propensity score matching for the Statway Community College developmental math program (the companion “pathway” in this Carnegie project to Quantway). Dependent variables included attainment of college math credits, and number of college credits (all courses) earned. Statway students (at 19 campuses and including over 2,600 students) attained college math credit far more often than control students, and earned somewhat more college credits overall. The authors barely mentioned NICs or improvement science (it was not the focus of the study), but noted, “Although highly speculative, there is also the possibility of significant derivative effects associated with faculty participation in the Statway NIC” (p. 199).
Hannan et al. (2015) examined a nationwide project to support beginning teachers, the Building Teaching Effectiveness Network (BTEN). Like Gomez et al. (2016), this research was a qualitative case study. Rather than including a NIC as one component, the authors characterized the entire BTEN as a NIC, as did Martin and Gobstein (2015). The BTEN leadership developed protocols for providing feedback to novice teachers and reportedly deployed improvement science methods to ensure the BTEN’s effectiveness. The responsibility for this application of the relevant methods, however, rested with school teams: “Consequently, BTEN school teams were expected (and supported) to use improvement science methods to generate knowledge about their work environments and refine routines” (p. 496). In other words, the BTEN expected teams of K-12 practitioners to engage the scientific features of the project. The key research question was, “How did the schools use improvement science methods to enhance, refine, and integrate the feedback process into their existing system?” (p. 497). The study selected 10 of the 17 schools in two of the three BTEN districts (purposive sampling), because the engagement of those schools with the project was judged to be strongest. The study used interviews of principals and facilitators, logs, field-notes, and artifacts in its analyses. Two of the ten schools were very engaged in the feedback process, while some of the other schools were little engaged in either the feedback process or improvement science work in general. The two highly engaged schools adapted the feedback process and also accumulated “far more” (p. 502) PDSA cycles than other BTEN schools. The biggest challenge for the two more fully engaged schools was recording data; they discovered that information needed to be recorded immediately and then made the necessary provisions to ensure it was.
Overall, the time and attention required for the methods of improvement science (i.e., rapid PDSA cycles carried out faithfully) competed with other initiatives in many of the schools.
Martin and Gobstein (2015) reported on a national effort to improve the preparation of secondary math teachers, the Mathematics Teacher Education Partnership (MTE-P), under the aegis of the Association of Public and Land-grant Universities (APLU). The aim of the report was to describe the leadership structures for the MTE-P (characterized as a large, national NIC).
The domain of action for the MTE-P was very large, as evidenced by initially identified goals (i.e., improving content preparation using active learning methods; producing courses and modules for pedagogical content knowledge; using formative assessments to track candidates’ math knowledge; improving clinical experience; and developing strategies to recruit students). The project (ongoing in 2018) has a large institutional footprint (131 institutions of higher education and 142 K-12 districts across 31 states) and very modest funding (about $1 million total; MTE-P, 2018). The study described the establishment of MTE-P using data sources inherent to project operation, sources that were poorly defined in the report. Analysis methods were not explained, and details about the handling of the actual evidence were not provided. The MTE-Partnership addresses its aims (e.g., those just listed and others as they emerge) through research action clusters (RACs), which have reportedly enjoyed limited success and had many difficulties (e.g., difficulties with membership, participation requirements, communication, and continuity). The overall contribution of the study appears to be its account of the challenges of managing a large improvement network. Leadership features and functions identified in the Martin and Gobstein (2015) study included:
Proger et al. (2017) described a project where NICs were established “after state education agency leaders requested assistance from Regional Educational Laboratory (REL) Midwest (one in Michigan and one in Minnesota) to support state-led efforts to use improvement science to raise student achievement and narrow achievement gaps in schools with the widest achievement gaps” (p. i). It also served as a guided opportunity for two state education agencies to “learn by doing” (p. 1). The report derived several “lessons”:
Lozano (2017), in a dissertation guided by one of the improvement science originators (Kimberley Gomez), studied the use of PDSA cycles to improve science instruction in the context of “design-based research partnerships.” The study examined three interventions framed and implemented in this fashion based on small collaborative teams (three teachers, six teachers, and two teachers). Research questions differed for each intervention, all of them examining the joint role of PDSA practice in teacher learning and instructional improvement. The focus of this study was not improvement science per se, just one component (i.e., PDSA). Notably, the involved schools gave teachers several full days to meet for the projects.
No studies were identified that focused on the use of improvement science for systems change activities related to special education. Still, many of the studies described in this section addressed increasing educator capacity, which would have implications for special education, including the implementation of inclusive education practices for students with the most significant cognitive disabilities.
The field of implementation science has much to offer to efforts to include students with the most significant cognitive disabilities and students with other extensive and pervasive support needs in general education settings. Implementation science efforts often start with an under-used EBP and then identify and address the existing “theory to practice” gaps across individual and administrative systems of responsibility (Bauer et al., 2015, p. 3). Implementing inclusive education practices in general education settings for students with extensive and pervasive support needs is a research-based practice (Jackson, Ryndak, & Wehmeyer, 2010) that is not consistently available for these students across schools, districts, and states. Implementing efficacious inclusive education services for students with the most significant cognitive disabilities is a complex and multi-faceted undertaking, and as LeMahieu, Edwards, and Gomez (2015) stated, “the rubber meets the road when people seek to bring programs and innovations that have shown some level of promise into practice effectively, reliably, and at scale” (p. 446).
In school districts where the placement of students with the most significant cognitive disabilities in general education settings and the implementation of evidence-based inclusive education practices are the norm, the individual and administrative systems supporting these students have undergone a deliberate process of change (Ryndak et al., 2007). Implementation science offers both the theoretical and practical knowledge necessary to facilitate sustainable change in practices and systems.
Much can be learned from the educational improvement science literature. The large ambitions of improvement science efforts must be focused. Proger et al. (2017) advised that improvement science is appropriate for “an important problem that is specific enough to act on” (p. 12). The development and deployment of improvement science methods stand against a very long history of messy work with large projects, including those aiming to scale up from a base in experimentally proven practices (e.g., Baez & Boyles, 2009). The approach requires an ample portfolio of settings to encompass different contexts, a requirement that distinguishes it from an experimentalist approach (Bryk et al., 2015; Lozano, 2017). This requires adequate funding. The MTE-Partnership (Martin & Gobstein, 2015), for example, exemplified a project with seemingly inadequate funding. If typical large-scale projects allocate 10% of funds to evaluation, an improvement-science approach driving a project would seem to require more.
Organizational capacity to engage improvement science methods is limited. For example, several studies reported challenges in establishing and maintaining routines, and with recording data (e.g., Hannan et al., 2015; Lozano, 2017; Martin & Gobstein, 2015). Recording data, however, is only a first, very basic step in the various empirical techniques and, ultimately, habits of mind that operationalize the “science” in improvement science. Scaffolding the appreciation of and engagement in such habits of mind (and the array of related techniques) is a momentous task that improvement science sets for itself (Bryk et al., 2015).
The direction of efforts that deploy improvement science methods requires leadership—both individual and team—that is deeply experienced, wise, and multiply talented. The need for such leaders, moreover, spans the researcher and practitioner communities. This issue also concerns the initial intellectual power to conceive and guide the plan for, and modification of, the overall work (Rohanna, 2017). Without it, an improvement science effort is much less likely to thrive.
As Sashkin and Egermeier (1992) observed, changing the separate parts of an education system in isolation from the others is not likely to result in improved use of a desired practice or improved outcomes. To ensure sustainable systemic change in educational services, successful efforts to facilitate change have to occur at multiple levels, such as at the education team or classroom, school, and district levels (Ryndak et al., 2007). The linkages that exist between state education agencies (SEAs) and local education agencies (LEAs), however, require the inclusion of the state level in future efforts.
SEA policies and practices foster and support the successful implementation and sustained use of inclusive education practices at the local education agency (LEA) level. SEA support (e.g., appropriate policies, effective professional development and technical assistance) creates the backdrop for LEA efforts. From the perspective of systemic complexity, helpful rules do not proceed from a benign or prescient state power (e.g., a state legislature, SEA) able to plan and impose one system that is best for all localities. No one-best education system has emerged, nor can one emerge. The odds of systems change efforts being successful improve with the inclusion of efforts to mitigate resistance (Fullan, 2011; Lewin, 1951). Nevertheless, the literature notes that helpful rules are more likely to come forth in response to the struggle of alliances and coalitions inside and outside an organization, with some constituent groups working in concert (e.g., Mintrom & Vergari, 1996; Winzer, 2012). Notably, an alliance between each SEA and LEAs might shape rules with sufficient flexibility to respond to local variability.
Beyond rules, SEAs can impact PD activities, including technical assistance with coaching, provided for educational personnel across the state to increase the implementation and sustained use of EBPs through building local capacity. PD addresses the substantive work taking place in LEAs, and each SEA can assist in the development of the LEAs’ organizational capacity for that substantive work. According to Fullan (2011), effective technical assistance would: (a) cultivate the intrinsic motivation of educators; (b) position improvement as the continuous work of teachers and students; (c) foster widespread collaboration; and (d) address the learning and teaching of every student and teacher without exception. Capacity-building of this sort arguably affects the whole educational organization, rendering improvement sustainable as a cultural feature of the organization (e.g., Coburn, 2003; Elmore, 2017; Fullan, 2011, 2016; Herman, 2012; Jardine, Friesen, & Clifford, 2006; Sahlberg, 2011; Sizer, 2013).
LEAs include districts and the smaller units that comprise them (e.g., schools, classrooms, other instructional and extra-curricular settings). They vary in their capacity for change. When reform efforts fizzle in schools, features of the organization (e.g., resource flows, structures, norms, leadership) contribute to the failure (e.g., Bryk, Sebring, Allensworth, Luppescu, & Easton, 2010; Spillane, Mesler, Croegaert, & Sherer, 2004). When sufficiently numerous, such conditions describe a lack of capacity for change that otherwise seems desirable (Massell, 2000). Capacity goes well beyond a supportive central office, because it is a feature of the entire organization, not of an individual or single office.
The idea of implementing and sustaining a change in practice is simply stated, but effecting such change is difficult. The learning referenced by Spillane and Thompson (1998) is organizational (e.g., Senge, 1990), and accomplishing organizational learning and change is tricky. Fullan (2016) analyzed the relevant organizational capacities for organizational learning as including (a) a culture of learning; (b) local ownership of the learning agenda; and (c) a system of continuous improvement and innovation that is simultaneously bottom-up, top-down, and sideways. For implementation to occur across an educational organization (i.e., at scale), change efforts need to acknowledge and provide for the on-the-ground conditions that the initiative will confront, no matter how challenging or how difficult to accommodate (Fullan, 2016).
District-level Change. In the current era of ambitious reform, some district-level change processes have emerged from the new appreciation of district capacity (e.g., Fixsen, Blase, Horner, & Sugai, 2009; Fullan & Pinchot, 2018). For instance, Fullan and Pinchot identified coherence (i.e., focused rather than scattered initiatives) and distributed leadership as key processes. Others agree that such processes build organizational culture and collective efficacy in ways that sustain systemic improvement (Fixsen et al., 2009; Fullan & Pinchot, 2018; Spillane & Thompson, 1998; Tefs & Telfer, 2013). These processes strengthen over time and are supported by high-quality PD that is intensive and demanding of individuals, is long-term, and involves coaching (Desimone & Pak, 2017). Fullan (2016) noted that this sort of PD is designed to enhance all three features of organizational capacity (i.e., a culture of learning, ownership of the learning agenda, and continuous improvement).
To result in sustained change, reform efforts confront two related but seemingly separate challenges. The first sustainability challenge is the seeming failure of many reform efforts—a failure to get going in the first place (Fixsen et al., 2009). The second sustainability challenge is maintaining the ongoing implementation of the new practice into the future.
Similarly, the failure of reform efforts to spread and take hold is very familiar. As described in the section on implementation science, one strategic response to the failure of reform efforts is to devote more careful attention to implementation (e.g., Blase, Fixsen, Naoom, & Wallace, 2005; Bryk et al., 2015). If such extra support does establish implementation of a new EBP, concerns for sustaining it supplant concerns for implementing it widely (Datnow, 2005). The successful reform, after all, exists within a larger and more dynamic context: the entire organization. This insight suggests that a commensurate response would involve the entire organization (e.g., Bryk et al., 2015; Fixsen et al., 2015; Fullan, 2010). In other words, capacity building needs to include cultivation of an improvement culture throughout the district. McDonnell and Weatherford (2016) reminded reformers concerned with implementation that school districts are shaped by “organizational politics” internally and by partisan politics externally.
School-level Change. “School reform” or “school improvement” is the traditional overall domain of educational change. Schools, after all, are where teachers and students go, and where the work of learning gets done. Schools are organizationally subsidiary to districts, but functionally they house the important work of schooling. At the same time, of course, schools are themselves organizations, with differing organizational cultures, different ages and kinds of students, and situated in differing communities, even within districts.
Schools exhibit variation in their own organizational capacity for creating and sustaining change—whether working on their own with a school-improvement effort or with appropriate district-level support (Datnow, 2005; Geijsel, Van Den Berg, & Sleegers, 1999). Schools that are well-resourced with human capital (i.e., expertise) and social capital (i.e., collective efficacy) are also those most likely to be economically well-resourced (Spillane & Thompson, 1998). It is no surprise that the schools thought most in need of improvement are the ones whose capacity for improvement has been compromised by external influences (Bascia, 1996; Fullan, 2011; Tye, 2000).
In a review of literature on PD for teachers, Darling-Hammond, Chung, Andree, Richardson, and Orphanos (2009) noted that well-designed PD can improve teacher performance and have a positive effect on student outcomes. Darling-Hammond et al. stated that although research findings demonstrate that a minimum of 30 contact hours over six to 12 months is necessary to effect change in teacher behavior, the majority of PD continues to occur through short-term conferences or workshops. Knight (2007) advocated for the essential combination of training and coaching, since coaching improves the accuracy of teacher performance while helping to create a good fit between the new practice and a teacher’s natural work environment.
As D’Amico (1982) predicted, guiding schools to emulate the characteristics of “good” benchmarks has turned out to be quite difficult. Indeed, the failures of school-level PD efforts, as well as school reform projects in general, can be traced to the conditions of school culture (Fullan, 2011; Tye, 2000). According to many observers (e.g., Bryk et al., 2015; Datnow, 2005; Fullan, 2001; Hargreaves, 1995; Little, 1981), the organizational conditions that influence the degree to which systems change efforts are successful prominently include (a) norms of collaboration or isolation (e.g., teams); (b) educators’ belief in their own effectiveness (e.g., teacher collective efficacy); (c) the development of common purpose(s); (d) prevalent concern for the craft of teaching; and (e) the norms of leadership (e.g., shared or closely held).
Reform efforts that draw on the findings of culture studies prominently deploy job-embedded PD designed to move school cultures toward norms that support instructional improvement, since learning is the core purpose of schooling. PD for this purpose notably includes leadership development (Leithwood & Seashore-Louis, 2011).
Classroom-level Change. Classrooms composed of students and teachers are at the very pinnacle, or perhaps bottom, of the organizational heap. They are the organizational pinnacle because they are the venue of actual teaching and learning. When observers refer to “bottom-up reform” efforts, they often have the solidarity of classroom teachers in view. So, from the perspective of the organizational chart, classrooms make up the broad base of the school district organization. Classrooms also are their own organizations. For instance, like districts and schools, each classroom also exhibits its own culture, which is guided and shaped by the instructional personnel present.
The available theoretical literature provides extensive guidance for using implementation science and improvement science in education. This knowledge, in combination with EBPs, inclusive education practices, and outcomes-oriented practices for meeting the needs of students with the most significant cognitive disabilities, might finally allow the vision of inclusive education to be realized.
A systems change approach is needed to overcome the complex barriers that historically have limited the placement of students with the most significant cognitive disabilities in inclusive general education settings in their neighborhood schools, as well as the use of evidence-based inclusive education practices. This means that for educational change to occur, efforts must simultaneously address the multiple levels of these complex organizations. The need for such a multilevel perspective is widely acknowledged but has proven difficult to act on (Fullan, 2011; Slavin, 2007).
Implementing and sustaining practices and policies that support inclusive education for all students, including students with the most significant cognitive disabilities, in general education settings requires a systemic approach at all levels of the education system. SEAs need to design systemic approaches to improvement. Such approaches should support LEAs in: (a) decreasing the placement of students in self-contained settings, integrating and aligning resources, and providing collaborative and embedded education team services; (b) creating efficacious infrastructures; and (c) redesigning work processes at all levels to improve the collective instructional capacity of the system in supporting higher levels of learning for all students.
The literature suggests that the capacity to make organizational change on behalf of all students is an important component of the capacity to make change on behalf of any subset of students (e.g., Tefs & Telfer, 2013), such as those with the most significant cognitive disabilities. The literature also suggests that capacity for change in a complex system requires the involvement of change agents at all levels of the system (e.g., Jenlink, Reigeluth, Carr, & Nelson, 1995), and that change on behalf of marginalized students, including those with the most significant cognitive disabilities, entails non-negotiable practices that function to improve equity, inclusiveness, and social justice throughout the system (e.g., Frattura & Capper, 2007).
This review of the literature identified several components associated with effective and sustainable systemic change efforts. These components should be addressed when attempting systemic change related to educational placements for students with the most significant cognitive disabilities and to the implementation and sustained use of EBPs in inclusive general education settings. They include:
A Common Vision. The existence or development of a common vision of the desired services (e.g., what constitutes efficacious education for students with the most significant cognitive disabilities in inclusive general education settings in their home schools) is critical for organizational ownership of the desired practices. Without this common vision, personnel might use the same language while actually referring to different sets of practices, some of which are inconsistent with others’ visions of the desired services.
A Common Understanding of and Commitment to the Change Process. The existence or development of a common understanding of both the change process and the amount of effort required to achieve sustainable systemic change is critical. Without this, personnel might engage with different levels of commitment to activities required to achieve the desired change.
Effective Formalized Structures for Top-Down, Bottom-Up, and Sideways Communication. Formalized structures for communication are critical because they ensure that all groups of constituents at each level of a system (state, district, school, and classroom) are engaged in, or informed about, the desired change and the activities occurring to realize that change. Without such formalized structures, it is less likely that all constituents will be informed and, therefore, in a position to assist with or support efforts for the desired change.
External Critical Friends. Critical friends can assist in looking objectively at the current services and comparing those with the desired services. Without such trusted and respected external critical friends, it is more likely that state, district, and school personnel could observe what they want to observe, instead of what actually is occurring.
Meaningful Data. Data that inform personnel about the impact of change efforts on the variables that are most indicative of progress toward the desired practice, as well as the use of those data for making decisions that impact services and outcomes for continuous improvement, are critical. Without such data and their efficacious use, strategic planning could be misguided and progress monitoring could be less than helpful in leading to sustainable systemic change.
Coordinated Efforts at Multiple Levels. Systems are complex organizations composed of numerous individuals with different sets of skills, experiences, expectations, and roles. Addressing everything from policy, to administrative procedures, to PD on EBPs, to leadership of instructional and other personnel to ensure use of EBPs with fidelity is therefore critical to the success of change efforts (Tefs & Telfer, 2013).
These findings from the systems change literature can guide initiatives to increase and sustain the placement of students with the most significant cognitive disabilities in inclusive general education settings in their home schools, as well as the implementation and sustained use of EBPs in those settings.
References

Aarons, G., Hurlburt, M., & Horwitz, S. (2011). Advancing a conceptual model of evidence-based practice implementation in public service sectors. Administration and Policy in Mental Health and Mental Health Services Research, 38(1), 4–23.
Baez, B., & Boyles, D. (2009). The politics of inquiry: Education research and the “culture of science.” Albany, NY: State University of New York Press.
Balas, E. A., & Boren, S. A. (2000). Managing clinical knowledge for health care improvement. In J. Bemmel & A. T. McCray (Eds.), Yearbook of medical informatics, 2000: Patient-centered systems (pp. 65–70). Stuttgart, Germany: Schattauer Verlagsgesellschaft mbH.
Bascia, N. (1996). Caught in the crossfire: Restructuring, collaboration, and the “problem” school. Urban Education, 31(2), 177–198.
Bauer, M., Damschroder, L., Hagedorn, H., Smith, J., & Kilbourne, A. (2015). An introduction to implementation science for the non-specialist. BMC Psychology, 3(32), 1–12.
Bickel, R., Tomasek, T., & Eagle, T. H. (2000). Top-down, routinized reform in low-income, rural schools: NSF’s Appalachian Rural Systemic Initiative. Education Policy Analysis Archives, 8(12).
Blase, K., Fixsen, D., & Duda, M. (2011). Implementation science: Building the bridge between science and practice. Presented at the Institute of Education Sciences Conference, Washington, DC.
Blase, K. A., Fixsen, D. L., Naoom, S. F., & Wallace, F. (2005). Operationalizing implementation: Strategies and methods. Tampa, FL: University of South Florida. Retrieved from http://nirn.fmhi.usf.edu/resources/detail.cfm?resourceID=48
Bohanon, H., & Wu, M. J. (2014). Developing buy-in for positive behavior support in secondary settings. Preventing School Failure, 58, 223–229.
Bryk, A. S. (2015). 2014 AERA distinguished lecture: Accelerating how we learn to improve. Educational Researcher, 44(9), 467–477.
Bryk, A. S., & Gomez, L. M. (2008). Reinventing a research and development capacity. In F. M. Hess (Ed.), The future of education entrepreneurship: Possibilities for school reform (pp. 181–206). Cambridge, MA: Harvard Education Press.
Bryk, A. S., Gomez, L. M., Grunow, A., & LeMahieu, P. G. (2015). Learning to improve: How America’s schools can get better at getting better. Cambridge, MA: Harvard Education Press.
Bryk, A., Ladd, H., O’Day, J., & Smith, M. (2016, December 21). Memo: A shift in the federal role is needed to promote school improvement. Retrieved from https://www.brookings.edu/blog/brown-center-chalkboard/2016/12/21/memo-a-shift-in-the-federal-role-needed-to-promote-school-improvement/
Bryk, A. S., Sebring, P. B., Allensworth, E., Luppescu, S., & Easton, J. Q. (2010). Organizing schools for improvement: Lessons from Chicago. Chicago, IL: University of Chicago Press.
Carnegie Foundation for the Advancement of Teaching. (2019). Our ideas. Retrieved from https://www.carnegiefoundation.org/our-ideas/
Castillo, J. M., Batsche, G. M., Curtis, M. J., Stockslager, K., March, A., & Minch, D. (2016). Problem Solving/Response to Intervention evaluation tool technical assistance manual. Retrieved from http://floridarti.usf.edu/resources/program_evaluation/ta_manual_revised2016/sections/opening.pdf
Chaudoir, S., Dugan, A., & Barr, C. (2013). Measuring factors affecting implementation of health innovations: A systematic review of structural, organizational, provider, patient, and innovation level measures. Implementation Science, 8(22), 1–20.
Coburn, C. E. (2003). Rethinking scale: Moving beyond numbers to deep and lasting change. Educational Researcher, 32(3), 3–12.
Coburn, C. E., & Stein, M. K. (2010). Research and practice in education: Building alliances, bridging the divide. Lanham, MD: Rowman & Littlefield.
Cook, B. G., & Odom, S. L. (2013). Evidence-based practices and implementation science in special education. Exceptional Children, 79(2), 135–144.
Council of Chief State School Officers (CCSSO). (2017). CCSSO principles of effective school improvement systems. Washington, DC: Author. Retrieved from https://ccsso.org/resource-library/ccsso-principles-effective-school-improvement-systems
Daley, B. (2017). Improvement science for college, career, and civic readiness: Achieving better outcomes for traditionally underserved students through systematic, disciplined inquiry (Doctoral dissertation). Retrieved from http://csusm-dspace.calstate.edu/handle/10211.3/193028
D’Amico, J. (1982). Using effective schools studies to create effective schools: No recipes yet. Educational Leadership, 40(3), 60–62.
Darling-Hammond, L., Chung, R., Andree, A., Richardson, N., & Orphanos, S. (2009). State of the profession. Journal of Staff Development, 30(2), 42–50.
Datnow, A. (2005). The sustainability of comprehensive school reform models in changing district and state contexts. Educational Administration Quarterly, 41(1), 121–153.
Desimone, L. M., & Pak, K. (2017). Instructional coaching as high-quality professional development. Theory into Practice, 56(1), 3–12.
Domitrovich, C. E., Bradshaw, C. P., & Poduska, J. M. (2008). Maximizing the implementation quality of evidence-based preventive interventions in schools: A conceptual framework. Advances in School Mental Health Promotion, 1, 6–28.
DuPaul, G. D. (2009). Assessing integrity of intervention implementation: Critical factors and future directions. School Mental Health, 1, 154–157.
Easterling, D., & Metz, A. (2016). Getting real with strategies: Insights from implementation science. The Foundation Review, 8(2), 97–115.
Eccles, M. P., & Mittman, B. S. (2006). Welcome to implementation science. Implementation Science, 1(1), 1–3.
Edwards, A. R., Sandoval, C., & McNamara, H. (2015). Designing for improvement in professional development for community college developmental mathematics faculty. Journal of Teacher Education 66(5), 466–481.
Elmore, R. (2017). Getting to scale: It seemed a good idea at the time. Journal of Educational Change, 17(4), 529–537.
Fisher, D., Sax, C., & Grove, K. (2000). The resilience of changes promoting inclusiveness in an urban elementary school. The Elementary School Journal, 100(3), 213–227.
Fixsen, D., & Blase, K. (2009). Technical assistance in special education: Past, present, and future. Topics in Early Childhood Special Education, 29(1), 62–64.
Fixsen, D. L., Blase, K. A., Duda, M., Naoom, S. F., & Van Dyke, M. (2010). Sustainability of evidence-based programs in education. Journal of Evidence-based Practices for Schools, 11(1), 30–46.
Fixsen, D. L., Blase, K. A., Horner, R., & Sugai, G. (2009). Readiness for change (Scaling Up Brief #3). Chapel Hill, NC: The University of North Carolina. Retrieved from http://files.eric.ed.gov/fulltext/ED507442.pdf
Fixsen, D., Blase, K., Metz, A., & Van Dyke, M. (2013). Statewide implementation of evidence-based programs. Exceptional Children, 79(2), 213–230.
Fixsen, D. L., Blase, K. A., Naoom, A., & Duda, M. (2015). Implementation drivers: Assessing best practice. Chapel Hill, NC: Frank Porter Graham Child Development Center.
Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, National Implementation Research Network (FMHI Publication #231).
Frattura, E. M., & Capper, C. A. (2007). Leading for social justice: Transforming schools for all learners. Thousand Oaks, CA: Corwin.
French, S., Green, S., O’Connor, D., McKenzie, J., Francis, J., Michie, S., Buchbinder, R., Schattner, P., Spike, N., & Grimshaw, J. (2012). Developing theory-informed behaviour change interventions to implement evidence into practice: A systematic approach using the Theoretical Domains Framework. Implementation Science, 7(38), 1–8.
Fullan, M. (2001). The new meaning of educational change. New York, NY: Teachers College Press.
Fullan, M. (2010). The big ideas behind whole system reform. Education Canada, 50(3), 24–27.
Fullan, M. (2011). Choosing the wrong drivers for whole system reform. East Melbourne, AU: Center for Strategic Reform. Retrieved from http://michaelfullan.ca/wp-content/uploads/2016/06/13396088160.pdf
Fullan, M. (2016). The elusive nature of whole system improvement in education. Journal of Educational Change, 17(4), 539–544.
Fullan, M., & Pinchot, M. (2018). The fast track to sustainable turnaround. Educational Leadership, 75(6), 48–54.
Geijsel, F., Van Den Berg, R., & Sleegers, P. (1999). The innovative capacity of schools in primary education: A qualitative study. Qualitative Studies in Education, 12(2), 175–191.
Giangreco, M., & Suter, J. (2015). Precarious or purposeful? Proactively building inclusive service delivery on solid ground. Inclusion, 3(3), 112–131.
Gomez, K., Gomez, L. M., Rodela, K. C., Horton, E. S., Cunningham, J., & Ambrocio, R. (2016). Embedding language support in developmental mathematics lessons: Exploring the value of design as professional development for community college mathematics instructors. Journal of Teacher Education, 66(5), 450–465.
Goodman, S. (2017). Lessons learned through a statewide implementation of a multi-tiered system of support. Perspectives on Language and Literacy, Fall, 24–28.
Greenhalgh, T., Robert, G., MacFarlane, F., Bate, P., & Kyriakidou, O. (2004). Diffusion of innovations in service organizations: Systematic review and recommendations. The Milbank Quarterly, 82(4), 581–629.
Greenwood, C., & Abbott, M. (2001). The research to practice gap in special education. Teacher Education and Special Education, 24(4), 276–289.
Hall, G., & Hord, S. M. (1987). Change in schools: Facilitating the process. Albany, NY: SUNY Press.
Hall, G., & Hord, S. (2006). Implementing change: Patterns, principles, and potholes. New York, NY: Pearson/Allyn & Bacon.
Hannan, M., Russell, J. L., Takahashi, S., & Park, S. (2015). Using improvement science to better support beginning teachers: The case of the building a teaching effectiveness network. Journal of Teacher Education, 66(5), 494–508.
Hargreaves, D. H. (1995). School culture, school effectiveness, and school improvement. School Effectiveness and School Improvement, 6(1), 23–46.
Heath, C., & Heath, D. (2008). Made to stick: Why some ideas survive and others die. New York, NY: Random House.
Herman, R. (2012). Scaling school turnaround. Journal for the Education of Students Placed at Risk, 17, 25–33.
Horner, R., Sugai, G., & Fixsen, D. (2017). Implementing effective educational practices at scales of social importance. Clinical Child and Family Psychology Review, 20(1), 25–35.
Hott, B., Berkeley, S., Raymond, L., & Reid, C. (2018). Translating intervention research for students with mild disabilities to practice: A systematic journal analysis. The Journal of Special Education, 52(2), 67–77.
Jackson, K. R., Fixsen, D., & Ward, C. (2018). Four domains for rapid school improvement: An implementation framework. Chapel Hill, NC: National Implementation Research Network, University of North Carolina at Chapel Hill.
Jackson, K., Fixsen, D., Ward, C., Waldroup, A., & Sullivan, V. (2018). Accomplishing effective and durable change to support improved student outcomes. Chapel Hill, NC: SISEP/NIRN/Kentucky Department of Education.
Jackson, L., Ryndak, D. L., & Wehmeyer, M. (2010). The dynamic relationship between context, curriculum, and student learning: A case for inclusive education as a research-based practice. Research and Practice for Persons with Severe Disabilities, 33(4), 175–195.
Jardine, D., Friesen, S., & Clifford, P. (2006). Curriculum in abundance. New York, NY: Routledge.
Jenlink, P. M., Reigeluth, C. M., Carr, A. A., & Nelson, L. M. (1995). Facilitating systemic change in school districts: A guidebook. Bloomington, IN: The Systemic Change Agency.
Kirp, D. L., & Driver, C. E. (1995). The aspirations of systemic reform meet the realities of localism. Educational Administration Quarterly, 31(4), 589–612.
Knight, J. (2007). Instructional coaching: A partnership approach to improving instruction. Thousand Oaks, CA: Corwin Press.
Kozleski, E., & Choi, J. (2018). Leadership for equity and inclusivity in schools: The cultural work of inclusive schools. Inclusion, 6(1), 33–44.
Kurth, J., Morningstar, M., Hicks, T., & Templin, J. (2018). Exploring the relationship between school transformation and inclusion: A Bayesian multilevel longitudinal analysis. Inclusion, 6(1), 19–32.
Kurth, J., Morningstar, M., & Kozleski, E. (2014). The persistence of highly restrictive special education placements for students with low incidence disabilities. Research and Practice for Persons with Severe Disabilities, 39(3), 227–239.
Langley, A., Nadeem, E., Kataoka, S., Stein, B., & Jaycox, S. (2010). Evidence-based mental health programs in schools: Barriers and facilitators of successful implementation. School Mental Health, 2(3), 105–113.
Leithwood, K., & Seashore-Louis, K. (2011). Linking leadership to student learning. San Francisco, CA: Jossey-Bass.
LeMahieu, P. G., Edwards, A. R., & Gomez, L. M. (2015). At the nexus of improvement science and teaching: Introduction to a special section of the Journal of Teacher Education. Journal of Teacher Education, 66(5), 446–449.
Lewin, K. (1951). Field theory in social science. New York, NY: Harper & Row.
Lewis, C. (2015). What is improvement science? Do we need it in education? Educational Researcher, 44(1), 54–61.
Little, J. W. (1981, April). Power of organizational setting: School norms and staff development. Paper presented at the annual meeting of the American Educational Research Association, Los Angeles, CA. Retrieved from https://files.eric.ed.gov/fulltext/ED221918.pdf
Lozano, M. (2017). Learning in practice: Exploring the use of plan-do-study-act cycles to support professional learning (Doctoral dissertation). Retrieved from https://escholarship.org/uc/item/3mn1j51b
Lyon, A., Cook, C., Brown, E., Locke, J., Davis, C., Ehrhart, M., & Aarons, G. (2018). Assessing organizational implementation context in the education sector: Confirmatory factor analysis of measures of implementation leadership, climate, and citizenship. Implementation Science, 13(5), 1–14.
Martin, W. G., & Gobstein, H. (2015). Generating a networked improvement community to improve secondary mathematics teacher preparation: Network leadership, organization, and operation. Journal of Teacher Education, 66(5), 482–493.
Massell, D. (2000). The district role in building capacity: Four strategies. Philadelphia, PA: Consortium for Policy Research in Education. ERIC database (ED453575)
Mathis, P. (2016). An investigation of how accountability systems influence the design and development of student centered learning environments (Doctoral dissertation). Retrieved from Proquest Dissertations and Theses A&I. Proquest document ID 10109540
McDonnell, L., & Weatherford, S. (2016). Recognizing the political in implementation research. Educational Researcher, 45(4), 233–242.
McIntosh, K., Mercer, S., Nese, R., Strickland-Cohen, K., Kittelman, A., Hoselton, R., & Horner, R. (2018). Factors predicting sustained implementation of a universal behavior support framework. Educational Researcher, 47(5), 307–316.
McKay, S. (2017, March 15). Quality improvement approaches: Implementation science. Retrieved from https://www.carnegiefoundation.org/blog/quality-improvement-approaches-implementation-science
Mintrom, M., & Vergari, S. (1996). Advocacy coalitions, policy entrepreneurs, and policy change. Policy Studies Journal, 24(3), 420–434.
Morello-DeSerio, D. (2017). Improving the practice and culture of early learning in Ohio (Doctoral dissertation). Retrieved from Proquest Dissertations and Theses A&I. Proquest document ID 10642965
Morningstar, M., Kurth, J., & Jackson, P. (2017). Examining national trends in educational placements for students with significant disabilities. Remedial and Special Education, 38(1), 3–12.
Morris, Z. S., Wooding, S., & Grant, J. (2011). The answer is 17 years, what is the question: Understanding time lags in translational research. Journal of the Royal Society of Medicine, 104(12), 510–520.
Naoom, S., Wallace, F., Blase, K., Haines, G., & Fixsen, D. (2004). Implementation in the real world: Taking programs and practice to scale (Concept mapping report). Tampa, FL: Louis de la Parte Florida Mental Health Institute.
Nilsen, P. (2015). Making sense of implementation theories, models, and frameworks. Implementation Science, 10(53), 1–13.
Novak, J. R. (2017). Studying the effectiveness of a community college advising intervention model for undecided students (Doctoral dissertation). Retrieved from ProQuest Dissertations and Theses A&I. Proquest Document ID 10666721
Odom, S. (2009). The tie that binds: Evidence-based practice, implementation science, and outcomes for children. Topics in Early Childhood Special Education, 29(1), 53–61.
Odom, S. L., Cox, A. W., & Brock, M. E. (2013). Implementation science, professional development, and autism spectrum disorders. Exceptional Children, 79, 233–251.
Odom, S., Duda, M., Kucharczyk, S., Cox, A., & Stabel, A. (2014). Applying an implementation science framework for adoption of a comprehensive program for high school students with autism spectrum disorder. Remedial and Special Education, 35(2), 123–132.
Pearce, D. (2015). Developing a rubric for the evaluation of reading programs for Johnston County schools (Doctoral dissertation). Retrieved from ProQuest Dissertations and Theses A&I. Proquest Document ID 3708910
Peterson, A. (2016). Getting “What Works” working: Building blocks for the integration of experimental and improvement science. International Journal of Research & Method in Education, 39(3), 299–313.
Powell, B., Waltz, T., Chinman, M., Damschroder, L., Smith, J., Matthieu, M., Proctor, E., & Kirchner, J. (2015). A refined compilation of implementation strategies: Results from the Expert Recommendations for Implementing Change (ERIC) project. Implementation Science, 10(21), 1–14.
Proger, A. R., Bhatt, M. P., Cirks, D., & Gurke, D. (2017). Establishing and sustaining networked improvement communities: Lessons from Michigan and Minnesota. Washington, DC: REL Midwest. Retrieved from https://eric.ed.gov/contentdelivery/servlet/ERICServlet?accno=ED573419
Rohanna, K. (2017). Breaking the “adopt, attack, abandon” cycle: A case for improvement science in K–12 education. In C. A. Christie, M. Inkelas, & S. Lemire (Eds.), Improvement science in evaluation: Methods and uses (pp. 65–77). Malden, MA: Wiley.
Ryndak, D. L., Jackson, L., & White, J. (2013). Involvement and progress in the general curriculum for students with extensive support needs: K–12 inclusive-education research and implications for the future. Inclusion, 1(1), 28–49.
Ryndak, D. L., Morrison, A., & Sommerstein, M. (1999). Literacy before and after inclusion in general education settings: A case study. Research and Practice for Persons with Severe Disabilities, 24(1), 5–22.
Ryndak, D. L., Reardon, R., Benner, S., & Ward, T. (2007). Transitioning to and sustaining district-wide inclusive services: A 7-year study of a district’s ongoing journey and its accompanying complexities. Research and Practice for Persons with Severe Disabilities, 32(4), 228–246.
Sahlberg, P. (2011). Finnish lessons: What can the world learn from educational change in Finland? New York, NY: Teachers College Press.
Salisbury, C., Palombaro, M., & Hollowood, T. (1993). On the nature and change of an inclusive elementary school. Research and Practice for Persons with Severe Disabilities, 18(2), 75–84.
Sarason, S. B. (1993). The predictable failure of educational reform: Can we change course before it’s too late? San Francisco, CA: Jossey-Bass.
Sashkin, M., & Egermeier, J. (1992). School change models and processes: A review of research and practice. Presented at the Annual Meeting of the American Educational Research Association, San Francisco, CA. Retrieved from http://files.eric.ed.gov/fulltext/ED348758.pdf
Senge, P. M. (1990). The fifth discipline: The art and practice of the learning organization. New York, NY: Doubleday and Company.
Shogren, K., McCart, A., Lyon, K., & Sailor, W. (2015). All means all: Building knowledge for inclusive schoolwide transformation. Research and Practice for Persons with Severe Disabilities, 40(3), 173–191.
Sizer, T. (2013). The new American high school. San Francisco, CA: Jossey-Bass.
Slavin, R. E. (2007). Educational research in the age of accountability. Boston, MA: Allyn & Bacon, Inc.
Spillane, J. P., Mesler, L., Croegaert, C., & Sherer, J. Z. (2004). Coupling administrative practice with the technical core and external regulation: The role of organizational routines (Working Paper 2009–04). Evanston, IL: Northwestern University. Retrieved from http://www.ipr.northwestern.edu/publications/docs/
Spillane, J. P., & Thompson, C. L. (1998). Looking at local districts’ capacity for ambitious reform. Philadelphia, PA: Consortium for Policy Research in Education. Retrieved from http://files.eric.ed.gov/fulltext/ED466566.pdf
Stahmer, A., Suhrheinrich, J., Schetter, P., & Hassrick, E. (2018). Exploring multi-level system factors facilitating educator training and implementation of evidence-based practices (EBP): A study protocol. Implementation Science, 1(3), 10–16.
State Implementation and Scaling-up of Evidence-based Practices Center (SISEP). (2014, August). NIRN education practice profile planning tool. Retrieved from https://implementation.fpg.unc.edu/resources/practice-planning-tool
Stockard, J. (2010). An analysis of the fidelity implementation policies of the What Works Clearinghouse. Current Issues in Education, 13(4). Retrieved from http://www.eric.ed.gov/ERICWebPortal/detail?accno=EJ907003
Tefs, M., & Telfer, D. M. (2013). Behind the numbers: Redefining leadership to improve outcomes for all students. Journal of Special Education Leadership, 26(1), 43–52.
Thurlow, M. L., & Lazarus, S. S. (2017). Strategies for meeting the 1% state-level cap on participation in the alternate assessment (NCEO Brief #12). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.
Tye, B. B. (2000). Hard truths: Uncovering the deep structure of schooling. New York, NY: Teachers College Press.
U.S. Department of Education. (1994, November 23). Questions and answers on least restrictive environment (LRE) requirements of IDEA. Washington, DC: Office of Special Education and Rehabilitative Services (OSERS).
U.S. Department of Education. (2004). Individuals with Disabilities Education Act: Statutes and regulations. Washington, DC: Office of Special Education and Rehabilitative Services (OSERS).
Winzer, M. A. (2012). The history of special education: From isolation to integration. Washington, DC: Gallaudet University Press.
Yamada, H., & Bryk, A. S. (2016). Assessing the first two years’ effectiveness of Statway: A multilevel model with propensity score matching. Community College Review, 44(3), 179–204.
(See References for complete citations.)
Aarons, Hurlburt, & Horwitz (2011)
Balas & Boren (2000)
Bauer, Damschroder, Hagedorn, Smith, & Kilbourne (2015)
Blase, Fixsen, & Duda (2011)
Bohanon, Gilman, Parker, Amell, & Sortino (2016)
Bohanon & Wu (2014)
Bryk, Gomez, Grunow, & LeMahieu (2015)
Bryk & Gomez (2008)
Bryk, Ladd, O’Day, & Smith (2016)
Castillo, Batsche, Curtis, Stockslager, March, & Minch (2016)
Chaudoir, Dugan, & Barr (2013)
Cook & Odom (2013)
Council of Chief State School Officers (CCSSO) (2017)
Darling-Hammond, Chung, Andree, Richardson, & Orphanos (2009)
Domitrovich, Bradshaw, & Poduska (2008)
Durlak & DuPre (2008)
Easterling & Metz (2016)
Eccles & Mittman (2006)
Fisher, Sax, & Grove (2000)
Fixsen & Blase (2009)
Fixsen, Blase, Duda, Naoom, & Van Dyke (2010)
Fixsen, Blase, Horner, & Sugai (2009)
Fixsen, Blase, Metz, & Van Dyke (2013)
Fixsen, Naoom, Blase, Friedman, & Wallace (2005)
French, Green, O’Connor, McKenzie, Francis, Michie, Buchbinder, Schattner, Spike, & Grimshaw (2012)
Giangreco & Suter (2015)
Greenhalgh, Robert, MacFarlane, Bate, & Kyriakidou (2004)
Greenwood & Abbott (2001)
Hall & Hord (1987)
Hall & Hord (2006)
Heath & Heath (2008)
Horner, Sugai, & Fixsen (2017)
Hott, Berkeley, Raymond, & Reid (2018)
Jackson, Fixsen, Ward, Waldroup, & Sullivan (2018)
Jackson, Ryndak, & Wehmeyer (2010)
Kozleski & Choi (2018)
Kurth, Morningstar, Hicks, & Templin (2018)
Kurth, Morningstar, & Kozleski (2014)
Langley, Nadeem, Kataoka, Stein, & Jaycox (2010)
LeMahieu, Edwards, & Gomez (2015)
Lyon, Cook, Brown, Locke, Davis, Ehrhart, & Aarons (2018)
McIntosh, Mercer, Nese, Strickland-Cohen, Kittelman, Hoselton, & Horner (2018)
Morningstar, Kurth, & Jackson (2017)
Morris, Wooding, & Grant (2011)
Naoom, Wallace, Blase, Haines, & Fixsen (2004)
Odom, Cox, & Brock (2013)
Odom, Duda, Kucharczyk, Cox, & Stabel (2014)
Powell, Waltz, Chinman, Damschroder, Smith, Matthieu, Proctor, & Kirchner (2015)
Ryndak, Jackson, & White (2013)
Ryndak, Morrison, & Sommerstein (1999)
Ryndak, Reardon, Benner, & Ward (2007)
Salisbury, Palombaro, & Hollowood (1993)
Shogren, McCart, Lyon, & Sailor (2015)
Stahmer, Suhrheinrich, Schetter, & Hassrick (2018)
State Implementation and Scaling-up of Evidence-based Practices Center (SISEP) (2014)
Sugai, Simonsen, Freeman, & La Salle (2016)