Context matters in implementation science: a scoping review of determinant frameworks that describe contextual determinants for implementation outcomes

Abstract

Background

The relevance of context in implementation science is reflected in the numerous theories, frameworks, models and taxonomies that have been proposed to analyse determinants of implementation (in this paper referred to as determinant frameworks). This scoping review aimed to investigate and map how determinant frameworks used in implementation science were developed, what terms are used for contextual determinants for implementation, how context is conceptualized, and which context dimensions can be discerned.

Methods

A scoping review was conducted. MEDLINE and EMBASE were searched from inception to October 2017, and supplemented with implementation science text books and known published overviews. Publications in English that described a determinant framework (theory, model, taxonomy or checklist), of which context was one determinant, were eligible. Screening and inclusion were done in duplicate. Extracted data were analysed to address the study aims. A qualitative content analysis with an inductive approach was carried out concerning the development and core context dimensions of the frameworks. The review is reported according to the PRISMA guidelines.

Results

The database searches yielded a total of 1113 publications, of which 67 were considered potentially relevant based on the predetermined eligibility criteria, and retrieved in full text. Seventeen unique determinant frameworks were identified and included. Most were developed based on the literature and/or the developers’ implementation experiences. Six of the frameworks explicitly referred to “context”, but only three frameworks provided a specific definition of the concept. Instead, context was defined indirectly by description of various categories and sub-categories that together made up the context. Twelve context dimensions were identified, pertaining to different aggregation levels. The most widely addressed context dimensions were organizational support, financial resources, social relations and support, and leadership.

Conclusions

The findings suggest variation with regard to how the frameworks were developed and considerable inconsistency in terms used for contextual determinants, how context is conceptualized, and which contextual determinants are accounted for in frameworks used in implementation science. Common context dimensions were identified, which can facilitate research that incorporates a theory of context, i.e. assumptions about how different dimensions may influence each other and affect implementation outcomes. A thoughtful application of the concept and a more consistent terminology would enhance transparency, simplify communication among researchers, and facilitate comparison across studies.

Background

The term “context” is derived from the Latin cum (“with” or “together”) and texere (“to weave”). Understanding what happens when an evidence-based practice, e.g. an intervention, programme, method or service, is “woven together” with a team, department or organization is important to better address implementation challenges in health care and other settings. Accounting for the influence of context is necessary to explain how or why certain implementation outcomes are achieved, and failure to do so may limit the generalizability of study findings to different settings or circumstances. Context is considered responsible for study-to-study variations in outcomes [1,2,3,4,5,6,7,8,9].

The relevance of context in implementation science is reflected in the numerous theories, frameworks, models and taxonomies (referred to in this paper as frameworks) that are applied to analyse barriers and facilitators concerning various implementation outcomes [10]. Frameworks such as Promoting Action on Research Implementation in Health Services (PARIHS) [11, 12] and the Theoretical Domains Framework (TDF) [13] explicitly refer to context as one of several determinants; other frameworks do not explicitly mention context. Instead, many other terms referring to the same or similar concept are in use, e.g. “environmental factors” [14] and “inner setting” and “outer setting” [15]. Terms such as “context”, “setting” and “environment” are often used interchangeably in implementation science and other research fields [8].

Regardless of which terms are used, it is not known whether these determinant frameworks conceptualize context in a similar way and describe the same context dimensions or to what extent they encompass different dimensions of the context. Lack of conceptual and terminological clarity and consistency makes it difficult for implementation researchers to identify the most relevant context dimensions for any given study. If neglected dimensions are causally significant for implementation outcomes, their omission may create problems in interpreting and applying the findings.

Some of these determinant frameworks are widely used in implementation science [16], which means that context as understood in these frameworks may have considerable impact on how the concept is studied de facto. No previous study has investigated determinant frameworks in terms of how they define or describe context and what might be a core set of contextual determinants that most frameworks account for. Therefore, the aim of this scoping review was to identify and examine determinant frameworks used in implementation science to address four questions: how the frameworks were developed; what terms they use to denote contextual determinants for implementation; how context is conceptualized; and which context dimensions are applied across the frameworks. Greater conceptual and terminological clarity and consistency may enhance transparency, improve communication among researchers, and facilitate exchange of data and comparative evaluations.

Methods

Approach

To address the study aims, a scoping review was undertaken to identify determinant frameworks that describe determinants, including those related to context, that may influence implementation outcomes, i.e. contextual determinants. Determinant frameworks have a descriptive purpose: they point to factors believed or found to influence implementation outcomes. They do not specify mechanisms of change; they typically function more like checklists of factors that influence implementation outcomes. In the literature they are variously referred to as models, theories, checklists and taxonomies, reflecting inconsistent terminology [10].

A scoping review methodology was chosen because it allows for synthesis of findings across a range of study types and designs and provides a broad overview of a topic [17, 18]. Unlike systematic reviews, which address precise questions (e.g. the effectiveness of a particular type of intervention), scoping reviews can be used to map and clarify key concepts underpinning a research area [19]. To ensure that no published determinant framework would be missed, database searches were complemented with examination of textbooks in implementation science and studies that have presented comprehensive overviews of implementation theories, frameworks, models, checklists or taxonomies. The research questions and inclusion and exclusion criteria were established before the review was conducted. Although not always applicable, conduct and reporting of the review was guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [20]. Because all data were publicly available, ethical review board approval was not necessary. The review protocol was not registered.

Eligibility criteria

To be included in the review, studies in the English-language literature were required to report a determinant framework that described different determinants for implementation outcomes, including contextual determinants, in implementation of health care practices, from primary to tertiary care. Peer-reviewed scientific articles, as well as textbooks, were eligible for inclusion. A generic definition of implementation context was applied in this review. Contextual determinants were considered those determinants in the determinant frameworks that were not attributed to or associated with: the practice being implemented (e.g. an evidence-based practice); individual characteristics of the adopters (e.g. health care practitioners’ attitudes, beliefs and motivation concerning this practice); or the strategies used to support the implementation.

Determinants were defined as factors believed or empirically shown to influence implementation outcomes. Many terms are used for determinants, including barriers, hindrances, obstacles, impediments, enablers, and facilitators. Implementation outcomes were defined broadly in terms of behaviours and adherence, adoption, uptake, or use concerning practices of relevance for achieving a more evidence-based health care practice [21].

We excluded theories which describe causal mechanisms of how various determinants may influence implementation outcomes. We also excluded theoretical approaches developed and used in fields other than implementation science, e.g. psychology, sociology, organizational theory and political science. Further, we excluded so-called process models, which describe the research-to-practice path and/or guide the implementation process rather than describe determinants of implementation outcomes [10].

Determinant frameworks with limited generalizability were excluded; for instance, those that focused on a specific health issue (e.g. diabetes), a specific population or patient group (e.g. HIV-positive persons), a specific intervention (e.g. brief alcohol interventions), and/or were generated to describe or structure the results of a single empirical study. We also excluded studies that only described applications of frameworks, because our aim was to identify studies that focused on describing and detailing the contents of the determinant frameworks (including the contextual determinants).

We also excluded community, public health and school settings, governance, health care priority settings and resource allocation, public policy, occupational health, workplace settings, and implementation of models of care. No study design limitations were applied, with the exception of study protocols.

Search strategy

Preliminary searches were done in MEDLINE to identify search terms. MEDLINE and EMBASE were searched from inception to October 2017. These two databases were considered the most relevant for this review, and likely to cover the vast majority of determinant frameworks intended for use in health care settings. A comprehensive search strategy was developed for MEDLINE with support from a medical librarian, and subsequently adapted to the other database (Additional file 1). The search strategy combined search terms with medical subject headings and focused on identifying publications on determinant frameworks.

To supplement the database search, three additional sources were used. Reference lists in publications included for full-text review were screened to identify eligible frameworks. Nine textbooks that provided comprehensive overviews of research regarding implementation science were reviewed [22,23,24,25,26,27,28,29,30]. These textbooks were selected because they are written by influential implementation scientists and were available to the authors of this review. Lastly, five comprehensive overviews of theoretical approaches in implementation science were examined [16, 31,32,33,34]. The authors teach implementation science theory at several Swedish universities and are familiar with these sources as part of their teaching.

Study selection

Both authors independently screened titles and abstracts and selected studies for potential inclusion in the study, applying the predefined inclusion and exclusion criteria. Both authors then read the full texts of these articles to assess eligibility for final inclusion. Any disagreement between the authors regarding eligibility was resolved in consensus.

Data extraction

Data were collected on the following study characteristics: (1) authors; (2) publication year; (3) the desired outcome, as described in the framework; (4) how the determinants in the framework were identified (i.e. how the framework was developed); (5) which determinant categories were described in the framework; (6) which of the determinant categories were associated with contextual determinants and/or were labelled “context”; and (7) which contextual determinant categories and sub-categories the framework included. Data extraction was done primarily by one reviewer (PN) and verified by the other (SB).

Data analysis

Extracted data were analysed to address the four study aims. A qualitative content analysis with an inductive approach was carried out concerning how the frameworks were developed [35]. Qualitative content analysis is a method of analysing written, verbal or visual communication messages [36], with the aim of attaining a condensed and broad description of the phenomenon under study. The analytical process includes coding, creating categories and abstraction. The inductive approach means that the analysis is driven by the data and no a priori codes or categories are used. Terms that were used in the frameworks to denote contextual determinants for implementation were coded with regard to whether the framework referred explicitly to “context” or whether it used other terms to denote contextual determinants. Contextual determinants described in a framework were categorized into different context dimensions. We use the term context dimension(s) for our categorization of the contextual determinants (categories and sub-categories) described in the determinant frameworks. Conceptualization of context was analysed in relation to whether the framework provided explicit definitions of context or whether the concept was defined or understood by means of describing a number of contextual determinants.

Summary statistics (i.e. frequencies) were used to describe the number of frameworks that were developed in different ways, the number of frameworks that referred to “context”, the number of frameworks that provided explicit definitions of context, and the number of frameworks that addressed the various context dimensions that emerged from the analysis.

Results

Identification of determinant frameworks

Twenty-two relevant publications were identified, describing 17 unique determinant frameworks (Table 1). Database searches yielded a total of 1113 publications, of which 67 were considered potentially relevant and retrieved in full text. Of these, three publications each described a unique determinant framework: Cabana et al. [37]; Cane et al. [13] (TDF); and Harvey and Kitson [38] (PARIHS). Seven publications were excluded because they did not describe a determinant framework, and one because the setting was not health care.

Table 1 Included determinant frameworks

The remaining 56 publications identified in the database searches were excluded because they reported applications of published determinant frameworks. However, the reference lists of those publications were examined to identify the original publications that described the development and contents of the frameworks. This inspection process yielded five determinant frameworks, which were included: Grol and Wensing [39]; Fleuren et al. [40]; Feldstein and Glasgow [41] (PRISM: Practical, Robust Implementation and Sustainability Model); Damschroder et al. [15] (CFIR: Consolidated Framework for Implementation Research); and Flottorp et al. [34] (TICD: Tailored Implementation for Chronic Diseases). Thus, the database searches resulted in the identification of eight unique determinant frameworks.

Inspection of the nine textbooks yielded three determinant frameworks that were not found in the database searches: Greenhalgh et al. [23]; Fixsen et al. [22] and Blase et al. [42] (AIF: Active Implementation Frameworks); and Nutley et al. [25]. The five overviews identified six additional determinant frameworks not obtained by means of database searches or textbooks: Mäkelä and Thorsen [43]; Wensing et al. [44]; Rainbird et al. [45] (NICS: National Institute of Clinical Studies); Cochrane et al. [14]; Gurses et al. [46]; and WHO’s SURE (Supporting the Use of Research Evidence) [47].

We included two publications each describing AIF [22, 42], TDF [13, 48] and the framework by Greenhalgh et al. [23, 49]: the first publication on the respective framework and a later publication that offered a more comprehensive description or refinement of the framework, thus warranting its inclusion. It should be noted that the TDF was not so named until Cane et al. [13]. Three publications concerning PARIHS were included: the first publication [11]; a later publication [12] with a more comprehensive description of the framework; and a more recent publication featuring a revised version of the framework called integrated-PARIHS (i-PARIHS) [38], which was developed to “address a number of perceived limitations to its effective utilisation” ([38], p. 2). The framework by Grol and Wensing [39] is very similar to the one described by Grol et al. [24], but the former provides some more details, which is why we chose the first publication.

The selection process is illustrated in Fig. 1.

Fig. 1 Identification and selection of publications and determinant frameworks

How were the determinant frameworks developed?

The frameworks were developed in three different ways, as described in the investigated publications. Eleven frameworks are based on literature reviews of empirical studies and of theories, models and frameworks used in implementation science to describe determinants of various implementation-relevant outcomes. Nine of these state this explicitly: Cabana et al. [37]; Fleuren et al. [40]; Greenhalgh et al. [23]; Cochrane et al. [14]; Nutley et al. [25]; Feldstein and Glasgow (PRISM) [41]; Damschroder et al. (CFIR) [15]; Gurses et al. [46]; and Flottorp et al. (TICD) [34]. The framework by Rainbird et al. [45] is presumably also based on a literature review, although details about its development are not provided. The basis of the framework developed by SURE [47] is likewise somewhat unclear; it is simply stated that “published lists of barriers for implementing changes in health care often show a high degree of overlap” ([47], p. 6), implying that it was developed from the existing literature. Fleuren et al. [40] combined their literature review with a Delphi process involving 44 implementation experts.

Four of the frameworks are based on the authors’ own implementation experiences and/or empirical studies. PARIHS [11] emerged from the observation that successful implementation in health care might be premised on three key determinants (characteristics of the evidence, context and facilitation), a proposition that was subsequently analysed in four empirical studies. PARIHS subsequently underwent substantial research and development work [12]. The revised i-PARIHS was proposed by Harvey and Kitson ([38], p. 2) based on their own “ongoing application of the framework in implementation studies together with critiques and evaluations of the framework by other research teams”. Grol and Wensing ([39], p. 558) based their work on “analyses of the literature and research conducted at our research centre”. Similarly, the AIF [22, 42] combined the developers’ implementation experiences with literature reviews. Mäkelä and Thorsen [43] referred to “previous work in the area” and data from various projects within a project called Concerted Action of the Changing Professional Practice ([43], p. 24).

Two frameworks are derived from existing theory or theoretical assumptions rather than experience or empirical studies. The framework by Wensing et al. [44] was based on an analysis of the literature concerning theories on behaviour or organizational change in a variety of disciplines. It is not stated how many theories were identified, but the searches continued “until the overview of theories was ‘saturated’” ([44], p. 16). The TDF [13] was constructed on the basis of a synthesis of 128 constructs related to behaviour change found in 33 established social-cognitive behaviour change theories.

What terms are used to denote contextual determinants?

Six of the 17 frameworks explicitly refer to “context” as a contextual determinant category [11, 12, 23, 25, 38,39,40, 45]. The other 11 frameworks use a broad range of terms to denote various contextual determinants, including terms such as “external barriers” [37], “environmental factors” [37], “environment” [43], “external environment” [41], “inner setting” and “outer setting” [15], “system characteristics” [46] and “organizational drivers” [42].

How is context conceptualized?

Most of the 17 frameworks do not provide specific definitions of context. Instead, they define the concept indirectly by describing a number of contextual determinants (categories and/or sub-categories) that together make up the context. Three frameworks [11, 13, 15] provided a specific definition of the context concept.

The CFIR [15] is presented in a paper that provides a definition of context although the framework itself refers to “inner and outer setting” rather than context: “Context consists of a constellation of active intervening variables and is not just a backdrop for implementation. … For implementation research, ‘context’ is the set of circumstances or unique factors that surround a particular implementation effort … In this paper, we use the term context to connote this broad scope of circumstances and characteristics” ([15], p. 3).

The TDF includes one category, “environmental context and resources”, that explicitly refers to context. This category is defined as “any circumstance of a person’s situation or environment that discourages or encourages the development of skills and abilities, independence, social competence and adaptive behaviour” ([13], p. 14).

Kitson et al. ([11], p. 150) define context in relation to PARIHS as “the environment or setting in which the proposed change is to be implemented”. The revised version of PARIHS, i-PARIHS, has a wider focus on the different layers of context, differentiating between the immediate local level, the wider organizational level and the external health system level, something that was not done in the original PARIHS [38].

What context dimensions are included in the frameworks?

Contextual determinants in the 17 frameworks were categorized into 12 different context dimensions (Table 2). The most comprehensive framework was PRISM [41], which included contextual determinants that could be mapped to 11 context dimensions (Table 3). It was followed by PARIHS [11, 12, 38], CFIR [15], TICD [34] and the framework by Greenhalgh et al. [23], all of which included contextual determinants that could be mapped to ten context dimensions.

Table 2 Description of the context dimensions
Table 3 Context dimensions addressed in the frameworks

The 12 context dimensions pertain to different levels of aggregation, from the micro to the macro level of health care. At the micro level, patients can influence implementation. Four broader organizational determinants can be attributed to the meso level: organizational culture and climate, organizational readiness to change, organizational structures, and organizational support. The macro level consists of even broader, “outside”, influences from the wider environment. It was not possible to attribute six of the context dimensions to a single level of aggregation because they may affect both the micro and meso levels (and to some extent also the macro level): social relations and support, financial resources, leadership, time availability, feedback and physical environment.

The most common context dimensions were organizational support (included in all 17 frameworks), financial resources (16 frameworks), social relations and support (15 frameworks), leadership (14 frameworks), organizational culture and climate (12 frameworks) and organizational readiness to change (12 frameworks). The least common dimension was physical environment (2 frameworks). Patients as a contextual determinant were addressed in 11 of the frameworks.

Discussion

This scoping review identified 17 unique frameworks in implementation science that address contextual determinants. The results show there is considerable variation with regard to the terms used to denote contextual determinants, how context is defined and conceptualized, and which contextual determinants are accounted for. Most of the frameworks were developed based on empirical studies and theories, models and frameworks used in implementation science to describe determinants of implementation outcomes. Hence, there is considerable intra-field referencing, as many researchers have developed frameworks partially based on earlier frameworks. This could potentially contribute to consolidation and convergence towards a number of core context dimensions, but it could also lead to a less inductive approach to exploring and understanding the context.

Interestingly, most of the frameworks do not actually mention or refer to “context”, instead using other terms to denote such determinants. Furthermore, few of the frameworks provide a precise definition or clarify the meaning of the concept. Instead, most frameworks define the concept indirectly, by specifying a number of determinants that comprise the context. These differences notwithstanding, it is clear that context is commonly viewed as a multi-dimensional concept. The frameworks differed with regard to how many and which determinant categories were related to context (from one contextual determinant category to five) and the proportion of context categories in relation to all determinant categories. In most frameworks, context is one of several determinants and a relatively minor aspect. For instance, in the TDF [13], only three of the 14 determinant categories relate to contextual determinants. In some frameworks context is a more prominent aspect: in PRISM [41], all four determinant categories relate to contextual determinants, and in the framework by Fleuren et al. [40], four of five determinant categories account for contextual determinants.

We found a large variation in the number of contextual determinants (i.e. categories and sub-categories) described in the frameworks. For example, Gurses et al. [46] list 10 sub-categories belonging to two categories of contextual determinants, whereas Greenhalgh et al. [23] provide a list of 22 sub-categories that are part of five contextual determinant categories. Frameworks such as those by Greenhalgh et al. [23], CFIR [15] and TICD [34] are quite specific and detailed concerning the contextual determinants. Some of the differences with regard to the number of contextual determinants are due to slightly different aims of the frameworks. Although all frameworks address influences on implementation, the focus varies somewhat, with some identifying determinants for behaviour change and others describing determinants pertaining to adherence to guidelines, research use or use of innovations.

The frameworks broadly include two types of context dimensions: those that function as necessary conditions for implementation and those that may be viewed as active, driving forces required to achieve successful implementation. For instance, having sufficient financial resources and time availability may constitute favourable conditions for implementation, but they likely need to be combined with, for example, supportive leadership and social relations if implementation is to succeed. This means that strategies to facilitate implementation, which are usually described as a determinant category of their own [10], overlap with some context dimensions. Implementation strategies have been defined as “methods or techniques used to enhance the adoption, implementation and sustainability of a clinical program or practice” [50]. Hence, the boundary between implementation strategies and some contextual determinants of implementation is ambiguous. One of the dimensions, organizational readiness to change, differs from the others because it is specific to the (evidence-based) practice being implemented, whereas the other context dimensions are relevant regardless of the specific practice.

The frameworks describe discrete contextual determinants by breaking down context into a number of constituent parts. However, the 12 context dimensions are interdependent. For instance, a lack of available staff (organizational support) and/or poor funding for the implementation (financial resources) will likely have a negative impact on the organization’s preparedness for implementation (organizational readiness to change). Therefore, it is important to view context in holistic terms because successful implementation depends on combinations of different contextual determinants. Taking an overly reductionist approach and studying the impact of different dimensions in isolation from each other neglects the fact that two or more seemingly unimportant contextual determinants may combine to create powerful effects, or potentially strong determinants may combine to generate weak effects. Stressing a holistic view, organizational behaviour theorist Johns [51] has referred to context as a “bundle of stimuli” and talked about “deadly combinations” of otherwise effective determinants that can yield unfavourable outcomes.

Most of the frameworks described contextual determinants that could be attributed to the most common context dimensions emerging from the content analysis: organizational support, financial resources, social relations and support, leadership, and organizational culture and climate. Many of the barriers to implementation of evidence-based practices identified earlier in the literature have been associated with these context dimensions [25, 52,53,54,55], underscoring their importance for understanding and addressing implementation challenges.

All the frameworks included some form of organizational support as a contextual determinant. This support was reflected in various types of administrative, technological and human resources that provide favourable conditions for successful implementation, e.g. planning and organization of work, availability of staff, staff training, and information and decision-support systems. Organizational support has been associated with both attitudes toward evidence-based practice (EBP) and EBP use in practice, and has also been shown to mediate the link between organization type (private vs. public organization) and attitudes toward EBP [56].

The dimension of financial resources, which was identified in all but one determinant framework, was expressed in terms of funding, reimbursement, incentives, rewards and costs, i.e. available economic means to support implementation. The importance of this context dimension is supported by a recent systematic review that found lack of financial resources to be an important barrier to the implementation of mental health services integration into primary health care [57]. Another study highlighted the importance of financial resources when implementing sustainability initiatives in health care facilities, particularly small and medium-sized ones [58]. These are just a few examples; this context dimension is clearly paramount for enabling almost any kind of implementation of change in a health care practice.

Social relations and support was also a common context dimension, comprising various interpersonal processes that occur when the actions of one or more individuals influence the behaviour, attitudes or beliefs of one or more other individuals. This influence was described in the determinant frameworks as communication, collaboration and learning in groups, teams and networks, identity and norms in groups, and opinion of colleagues.

Although most frameworks specifically refer to organizational culture, it is important to recognize that health care organizations are inherently multi-cultural given the variety of professions, departments, and teams operating within them [59, 60]. Indeed, it has increasingly been acknowledged that organizations rarely possess a single, homogeneous culture, and many organization theorists have questioned the overemphasis on “organizational” culture [61]. Professional cultures are particularly important in health care because professional groups differ with regard to values, norms, beliefs and behaviours [62]. It has been shown that professional groups can serve as barriers to implementation of evidence-based practices. For instance, Ferlie et al. [63] and Fitzgerald and Dopson [64] identified boundaries between different professional groups that inhibited the spread of new practices. Other studies have shown that professional loyalties may be stronger than those to the organization, which may impede change initiatives and implementation endeavours [65,66,67,68,69].

The emphasis on the organization rather than the professions is likely due to implementation researchers being more influenced by organization research than by the sociology of professions. Although none of the frameworks refer specifically to professional culture, several address social relations and group influences that may serve a similar function in potentially “over-ruling” individuals’ attitudes, beliefs and other behavioural predictors, e.g. “group norms” [13], “group identity” [13] and “culture of the network” [45]. In addressing organizational culture, two of the frameworks, AIF [22] and CFIR [15], also refer to various aspects of organizational climate, which is understood as perceptions and attitudes concerning the observable, surface-level aspects of culture at a particular point in time [70]. Organizational climate is often defined as employees’ perceptions of the impact of their work environment, taking into account aspects such as what is promoted, rewarded or punished in the work setting [71].

Most of the frameworks refer to contextual determinants in terms of leadership rather than management. A review of 17 studies on the importance of leadership for implementation found that the two concepts tend to be used interchangeably and are rarely differentiated in implementation research [72]. However, whereas leadership is concerned with setting a direction for change and developing a vision for the future, management consists of realizing those goals through planning, budgeting and coordinating [73, 74]. Leadership is broader than management because it involves influence processes among a wide range of people, not just those who hold a managerial role [75]. Hence, a challenge for research that seeks to account for the importance of leadership is to identify, and gather information from and about, those who act as leaders. Informal leaders often have a critical role in health care, e.g. clinicians whose views are highly regarded and who are particularly persuasive with their colleagues. Informal leaders may also lead others in resisting implementation or change proposed by others [76,77,78].

Eleven of the 17 frameworks included patient-related determinants. This relatively low proportion is somewhat surprising in view of the growing importance of patient participation in health care policy making, practice and research [79]. Patient participation (and related concepts such as shared decision making, patient engagement and patient involvement) has been associated with improved health outcomes and has been advocated as a means to improve the quality of care [80, 81]. However, implementation science has thus far not emphasized research concerning the potential influence of patient-related determinants on implementation outcomes.

The 12 context dimensions that emerged from the content analysis of the determinant frameworks belong to different levels of aggregation, suggesting a multi-layered ecological model of the context. Ecological models are used in many disciplines and fields, e.g. public health, sociology, biology, education and psychology, to describe determinants at multiple levels, from the individual to society [82, 83]. Several of the context dimensions that we identified are multi-level and may influence implementation at different levels. This conceptualization of the context underscores that strategies to facilitate implementation must address more than one level. In line with this ecological model of context, some of the frameworks distinguish between an inner and an outer context of implementation. The inner context is typically understood as micro- and meso-level influences, whereas the outer context refers to macro-level influences beyond the organization, e.g. national guidelines, policies or collaboration with other organizations. Still, the “line” between inner and outer context is somewhat arbitrary and not always clear [15].

The fact that relatively few frameworks address the outer context (the wider environment) indicates an emphasis on determinants at the organizational and lower aggregation levels (e.g. teams or groups). Whereas “thick descriptions” of the wider circumstances of an implementation are valuable for interpreting findings, it may be difficult to capture or establish causality between the outer context and implementation outcomes. May et al. [9] argue that such a “whole system” approach makes it almost impossible to disentangle the complicated relationships between various determinants and to identify the causal mechanisms by which different processes and actors at multiple levels influence each other. This scepticism is relevant and points to the importance of identifying and accounting for key context dimensions in individual studies. Nevertheless, implementation scientists have focused primarily on the individual and organizational levels. While implementation science is a young field, its future development would benefit from drawing on disciplines that have dealt more with the impact of the macro system, e.g. political science, prevention science and complexity science.

The literature on implementation context has suggested that there are two different context conceptualizations: context as something concrete and passive, e.g. the physical environment in which implementation occurs; and context as something abstract but potentially dynamic, e.g. active support from colleagues and management [15, 46]. Most of the frameworks identified in this review emphasize the active view of context, indicating that it is widely recognized that context is not merely a passive backdrop to implementation. The view of context as a physical place implies a positivist notion of context, i.e. the context is an objective entity that can be observed, whereas the view of the context as something more intangible and active represents a more subjectivist perspective that acknowledges the complexity and multi-dimensionality of the context.

Organization theorists [84, 85] have described context as a socially constructed phenomenon that is difficult to manipulate or manage. The underlying assumption of the frameworks, however, is that the context can be broken down into constituent parts, which can be influenced to affect implementation outcomes, on the premise of a cause-and-effect relationship between context and outcomes. Furthermore, some of the frameworks have spawned instruments to measure and quantify various aspects of the context, underscoring an essentially objectivist understanding of context in implementation science. Examples of such instruments are the Alberta Context Tool [86] and the Context Assessment Index [87].

A few recently published reviews have also attempted to identify determinant frameworks, but have used different, albeit overlapping, selection criteria and research questions. Li et al. [88] focused on organizational contextual determinants for the implementation of evidence-based practices in health care and identified six such determinants. All six of those determinants were included among the 12 context dimensions we identified in our review. While the review by Li et al. was limited to the organizational (meso) level, our review also identified contextual determinants at both micro and macro levels, including patients and the wider environment. A review by Strifler et al. [89] identified 159 different theories, models and frameworks, but they did not distinguish between the different types of theoretical approaches and did not delve into whether or how context was addressed. Their focus was on the use of the theories, models and frameworks in practice and research concerning prevention and/or management of cancer or other chronic diseases.

Discussion about the meaning and relevance of context is not unique to implementation science. Researchers in quality improvement have defined context as “everything else that is not the intervention” ([90], p. 605). This is somewhat similar to implementation science, in that the intervention, e.g. an evidence-based practice, is not considered to be part of the context. However, researchers in implementation science typically view this “everything else” in terms of characteristics of the adopters (e.g. health care professionals) and the strategies used to support the implementation. In organizational behaviour, context is typically understood as influences that are external to and/or “above” (i.e. a higher aggregation level than) the individual, e.g. a team, professional group, department or organization [91, 92]. This perspective of context resembles the view conveyed in the implementation science frameworks in this review.

In the learning literature, context is considered to be “multisensory, diffuse and continuously present” ([93], p. 418). Various forms of context have been described, including spatial context (everything we do occurs in a place), temporal context (events are often defined by their sequential properties), cognitive context (influences how information is perceived, processed and stored), and social and cultural contexts (influence how we understand the world and ourselves) [94,95,96,97]. The temporal aspect of context was not explicitly addressed in any of the frameworks in this review other than time being considered a limited resource (time availability). However, it seems obvious that the timing of implementation could have an impact on the outcomes. For instance, successful results seem less likely if the implementation of an evidence-based practice coincides with numerous other change initiatives or if it occurs during a time of change fatigue, i.e. feelings of stress, exhaustion and burnout among staff associated with rapid and continuous changes in the workplace [98]. Although not explicitly mentioned in any of the frameworks, the timing of implementation may be considered an underlying influence on time availability and organizational readiness to change.

Study limitations

Some limitations of this review should be acknowledged. We searched only two databases and may not have identified all relevant determinant frameworks. Although our searches yielded over a thousand hits, most publications were excluded because they did not describe a determinant framework according to our definition. Our focus on health care settings may have led us to miss relevant frameworks used in other fields, such as public health and community-based services, and in disciplines such as psychology, sociology, organizational theory and political science, which limits the generalizability of our findings. We did not attempt any kind of quality assessment of the included publications or frameworks; this was not considered feasible due to the variation in study design and scope of the publications.

Conclusions

This scoping review of 17 determinant frameworks in implementation science shows that there is considerable variation with regard to how the frameworks were developed, the terms used to denote contextual determinants, how context is defined and conceptualized, and which contextual determinants are accounted for in frameworks used in implementation science. Most of the included frameworks provide only a limited and narrow description and definition of context, and a broad range of terms is used to denote various contextual determinants. Context is generally not described consistently, coherently or comprehensively in determinant frameworks, and there is inconsistency with regard to which contextual determinants are addressed. Still, it was possible to identify common dimensions of the context based on the frameworks, the most frequently used being organizational support, financial resources, social relations and support, leadership, and organizational culture and climate.

Our categorization of context dimensions may help the implementation researcher to consider the relevance of the various determinants in a structured way. Ultimately, however, the findings of this review are consistent with the observation by Pfadenhauer et al. ([8], p. 104) that context in implementation science is an “inconsistently defined and applied” concept that is “only partially mature”.

It is important that researchers are aware of how context is defined or interpreted in studies, which context dimensions are considered, and why these dimensions might be relevant. The challenge for the researcher is to identify the most important context dimensions and address these in the research. Although it is difficult to capture all potentially relevant influences in any given study, recognition of core context dimensions can facilitate research that incorporates a theory of context, i.e. assumptions about how different dimensions may influence each other and affect implementation outcomes. A thoughtful application of the concept and a more consistent terminology will enhance transparency, simplify communication among researchers, and facilitate comparisons across studies. Together, these advances will further our understanding of the role of context within implementation science.

Abbreviations

AIF: Active Implementation Frameworks

CFIR: Consolidated Framework for Implementation Research

NICS: National Institute of Clinical Studies

PARIHS: Promoting Action on Research Implementation in Health Services

PRISM: Practical, Robust Implementation and Sustainability Model

SURE: Supporting the Use of Research Evidence

TDF: Theoretical Domains Framework

TICD: Tailored Implementation for Chronic Diseases

References

  1. 1.

    McCormack B, Kitson A, Harvey G, Rycroft-Malone J, Titchen A, Seers K. Getting evidence into practice: the meaning of ‘context’. J Adv Nurs. 2002;38:94–104.

  2. 2.

    Dopson S, Fitzgerald L. The active role of context. In: Dopson S, Fitzgerald L, editors. Knowledge to action? Evidence-based health care in context. New York: Oxford University Press; 2005. p. 79–103.

  3. 3.

    Kaplan HC, Brady PW, Dritz MC, Hooper DK, Linam WM, Froehle CM, et al. The influence of context on quality improvement success in health care: a systematic review of the literature. Milbank Q. 2010;88:500–59.

  4. 4.

    Taylor SL, Dy S, Foy R, Hempel S, McDonald KM, Övretveit J, et al. What context features might be important determinants of the effectiveness of patient safety practice interventions? BMJ Qual Saf. 2011;20:611–7.

  5. 5.

    Tomoaia-Cotisel A, Scammon DL, Waitzman NJ, Cronholm PF, Halladay JR, Driscoll DL, et al. Context matters: the experience of 14 research teams in systematically reporting contextual factors important for practice change. Ann Fam Med. 2013;11(Suppl 1):S115–23.

  6. 6.

    Edwards N, Barker PM. The importance of context in implementation research. J Acquir Immune Defic Syndr. 2014;67(S2):S157–62.

  7. 7.

    Squires JE, Graham ID, Hutchinson AM, Michie S, Francis JJ, Sales A, et al. Identifying the domains of context important to implementation science: a study protocol. Implement Sci. 2015;10:135.

  8. 8.

    Pfadenhauer LM, Mozygemba K, Gerhardus A, Hofmann B, Booth A, Bakke Lysdahl K, et al. Context and implementation: a concept analysis towards conceptual maturity. Z Evid Fortbild Qual Gesundhwes. 2015;109:103–14.

  9. 9.

    May C, Johnson M, Finch T. Implementation, context and complexity. Implement Sci. 2016;11:141.

  10. 10.

    Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10:53.

  11. 11.

    Kitson AL, Harvey G, McCormack B. Enabling the implementation of evidence-based practice: a conceptual framework. Qual Health Care. 1998;7:149–58.

  12. 12.

    Rycroft-Malone J. Promoting action on research implementation in health services (PARIHS). In: Rycroft-Malone J, Bucknall T, editors. Models and frameworks for implementing evidence-based practice: linking evidence to action. Oxford: Wiley-Blackwell; 2010. p. 109–36.

  13. 13.

    Cane J, O’Connor D, Michie S. Validation of the theoretical domains framework for use in behaviour change and implementation research. Implement Sci. 2012;7:37.

  14. 14.

    Cochrane LJ, Olson CA, Murray S, Dupuis M, Tooman T, Hayes S. Gaps between knowing and doing: understanding and assessing the barriers to optimal health care. J Contin Educ Heal Prof. 2007;27:94–102.

  15. 15.

    Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.

  16. 16.

    Birken SA, Powell BJ, Shea CM, Haines ER, Kirk MA, Leeman J, et al. Criteria for selecting implementation science theories and frameworks: results from an international survey. Implement Sci. 2017;12:124.

  17. 17.

    Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8:19–32.

  18. 18.

    O’Brien KK, Colquhoun H, Levac D, Baxter L, Tricco AC, Straus S, et al. Advancing scoping study methodology: a web-based survey and consultation of perceptions on terminology, definition and methodological steps. BMC Health Serv Res. 2016;16:305.

  19. 19.

    Peters MD, Godfrey CM, McInerney P, Soares CB, Khalil H, Parker D. The Joanna Briggs institute reviewers’ manual 2015: Methodology for JBI scoping reviews. Adelaide: The Joanna Briggs Institute; 2015.

  20. 20.

    Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6:e1000097.

  21. 21.

    Proctor E, Silmere H, Raghavan R, Aarons G, Griffey R. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Admin Pol Ment Health. 2011;38:65–76.

  22. 22.

    Fixsen DL, Naoom SF, Blase KA, Friedman RM, Wallace F. Implementation research: a synthesis of the literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute; 2005.

  23. 23.

    Greenhalgh T, Robert G, Bate P, Macfarlane F, Kyriakidou O. Diffusion of innovations in service organisations: a systematic literature review. Malden, MA: Blackwell Publishing; 2005.

  24. 24.

    Grol R, Wensing M, Eccles M, Davis D, editors. Improving patient care: the implementation of change in health care. Chichester: Wiley; 2013.

  25. 25.

    Nutley SM, Walter I, Davies HTO. Using evidence: how research can inform public services. Bristol: The Policy Press; 2007.

  26. 26.

    Straus S, Tetroe J, Graham ID. Knowledge translation in health care. Chichester: John Wiley; 2009.

  27. 27.

    Rycroft-Malone J, Bucknall T. Models and frameworks for implementing evidence-based practice: linking evidence to action. Oxford: Wiley-Blackwell; 2010.

  28. 28.

    Brownson RC, Colditz GA, Proctor EK, editors. Dissemination and implementation research in health. New York: Oxford University Press; 2012. p. 1–508.

  29. 29.

    Kelly B, Perkins DF, editors. Handbook of implementation science for psychology in education. Cambridge: Cambridge University Press; 2012.

  30. 30.

    Greenhalgh T. How to implement evidence-based healthcare. Hoboken, NJ: John Wiley; 2018.

  31. 31.

    Mitchell SA, Fisher CA, Hastings CE, Silverman LB, Wallen GR. A thematic analysis of theoretical models for translating science in nursing: mapping the field. Nurs Outlook. 2010;58:287–300.

  32. 32.

    Meyers DC, Durlak JA, Wandersman A. The quality implementation framework: a synthesis of critical steps in the implementation process. Am J Community Psychol. 2012;50:462–80.

  33. 33.

    Tabak RG, Khoong EC, Chambers DA, Brownson RC. Bridging research and practice: models for dissemination and implementation research. Am J Prev Med. 2012;43:337–50.

  34. 34.

    Flottorp SA, Oxman AD, Krause J, Musila NR, Wensing M, Godycki-Cwirko M, et al. A checklist for identifying determinants of practice: a systematic review and synthesis of frameworks and taxonomies of factors that prevent or enable improvements in healthcare professional practice. Implement Sci. 2013;8:35.

  35. 35.

    Elo S, Kyngäs H. The qualitative content analysis process. J Adv Nurs. 2007;62:107–15.

  36. 36.

    Cole FL. Content analysis: process and application. Clin Nurse Spec. 1988;2:53–7.

  37. 37.

    Cabana MD, Rand CS, Powe NR, Wu AW, Wilson MH, Abboud P-AC, et al. Why don’t physicians follow clinical practice guidelines? JAMA. 1999;282:1458–65.

  38. 38.

    Harvey G, Kitson A. PARIHS revisited: from heuristic to integrated framework for the successful implementation of knowledge into practice. Implement Sci. 2016;11:33.

  39. 39.

    Grol R, Wensing M. What drives change? Barriers to and incentives for achieving evidence-based practice. Med J Aust. 2004;180:S57–60.

  40. 40.

    Fleuren M, Wiefferink K, Paulussen T. Determinants of innovations within health care organizations. Int J Qual Health Care. 2004;16:107–23.

  41. 41.

    Feldstein AC, Glasgow RE. A practical, robust implementation and sustainability model (PRISM) for integrating research findings into practice. Jt Comm J Qual Patient Saf. 2008;34:228–43.

  42. 42.

    Blase KA, Van Dyke M, Fixsen DL, Bailey FW. Implementation science: key concepts, themes and evidence for practitioners in educational psychology. In: Kelly B, Perkins DF, editors. Handbook of implementation science for psychology in education. Cambridge: Cambridge University Press; 2012. p. 13–34.

  43. 43.

    Mäkelä M, Thorsen T. A framework for guidelines implementation studies. In: Thorsen T, Mäkelä M, editors. Changing professional practice. Copenhagen: Danish Institute for Health Services Research and Development; 1999. p. 23–53.

  44. 44.

    Wensing M, Bosch M, Foy R, van der Weijden T, Eccles M, Grol R. Factors in theories on behaviour change to guide implementation and quality improvement in healthcare. Nijmegen: Centre for Quality of Care Research (WOK); 2005.

  45. 45.

    Rainbird K, Sanson-Fisher R, Buchan H. Identifying barriers to evidence uptake. Melbourne: National Institute of Clinical Studies; 2006.

  46. 46.

    Gurses AP, Marsteller JA, Ozok AA, Xiao Y, Owens S, Pronovost PJ. Using an interdisciplinary approach to identify factors that affect clinicians’ compliance with evidence-based guidelines. Crit Care Med. 2010;38(8 Suppl):S282–91.

  47. 47.

    World Health Organization (WHO). Identifying and addressing barriers to implementing policy options. In: SURE guides for preparing and using evidence-based policy briefs. 2011. http://epoc.cochrane.org/sites/epoc.cochrane.org/files/public/uploads/SURE-Guides-v2.1/Collectedfiles/sure_guides.html. Accessed 2 Feb 2019.

  48. 48.

    Michie S, Johnston M, Abraham C, Lawton R, Parker D, Walker A, on behalf of the Psychological Theory Group. Making psychological theory useful for implementing evidence based practice: a consensus approach. Qual Saf Health Care. 2005;14:26–33.

  49. 49.

    Greenhalgh T, Robert G, Bate P, Kyriakidou O, Macfarlane F, Peacock R. How to spread good ideas: a systematic review of the literature on diffusion, dissemination and sustainability of innovations in health service delivery and organisation. Report for the National Co-ordinating Centre for NHS Service Delivery and Organisation R&D (NCCSDO), 2004.

  50. 50.

    Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8:139.

  51. 51.

    Johns G. The essential impact of context on organizational behavior. Acad Manag Rev. 2006;31:386–408.

  52. 52.

    Wensing M, Grol R. Determinants of change. In: Grol R, Wensing M, Eccles M, Davis D, editors. Improving patient care: the implementation of change in clinical practice. Chichester: Wiley; 2013. p. 139–50.

  53. 53.

    Aarons GA, Horowitz JD, Dlugosz LR, Erhart MG. The role of organizational processes in dissemination and implementation research. In: Brownson RC, Colditz GA, Proctor EK, editors. Dissemination and implementation research in health. New York: Oxford University Press; 2012. p. 128–53.

  54. 54.

    Williams B, Perillo S, Brown T. What are the factors of organisational culture in health care settings that act as barriers to the implementation of evidence-based practice? A scoping review. Nurs Educ Today. 2015;35:e34–41.

  55. 55.

    Sibbald SL, Wathen CN, Kothari A. An empirically based model for knowledge management in health care organizations. Health Care Manag Rev. 2016;41:64–74.

  56. 56.

    Aarons GA, Sommerfeld DH, Walrath-Greene CM. Evidence-based practice implementation: the impact of public versus private sector organization type on organizational support, provider attitudes, and adoption of evidence-based practice. Implement Sci. 2009;4:83.

  57. 57.

    Wakida EK, Talib ZM, Akena D, Okello ES, Kinengyere A, Mindra A, Obua C. Barriers and facilitators to the integration of mental health services into primary health care: a systematic review. Syst Rev. 2018;7(1):211.

  58. 58.

    Chu KWK, Cheung LLW. Incorporating sustainability in small health-care facilities: an integrated model. Leadersh Health Serv (Bradf Engl). 2018;31(4):441–51 Epub 2018 Apr 10.

  59. 59.

    Alvesson M. Understanding organizational culture. London: Sage; 2002.

  60. 60.

    Morgan PI, Ogbonna E. Subcultural dynamics in transformation: a multi-perspective study of healthcare professionals. Hum Relat. 2008;61:39–65.

  61. 61.

    Lloyd E. Organizational culture. In: Arvinen-Muondo R, Perkins S, editors. Organizational behaviour. London: Kogan Page; 2013. p. 209–39.

  62. 62.

    Hall P. Interprofessional teamwork: professional cultures as Barriers. J Interprof Care. 2005;19(Suppl 1):188–96.

  63. 63.

    Ferlie E, Fitzgerald L, Wood M, Hawkins C. The nonspread of innovations: the mediating role of professionals. Acad Manag J. 2005;48:117–34.

  64. 64.

    Fitzgerald L, Dopson S. Professional boundaries and the diffusion of innovation. In: Dopson S, Fitzgerald L, editors. Knowledge to action? Evidence-based health care in context. New York: Oxford University Press; 2005. p. 104–31.

  65. 65.

    Hillman AL. Managing the physicians: rules versus incentives. Health Aff. 1991;10:138–46.

  66. 66.

    Hudson B. Interprofessionality in health and social care: the Achilles’ heel of partnership? J Interprof Care. 2002;16:7–17.

  67. 67.

    Sutker WL. The physician’s role in patient safety: What’s in it for me? Proc (Bayl Univ Med Cent). 2008;21:9–14.

  68. 68.

    Mittman BS. Implementation science in health care. In: Brownson RC, Colditz GA, Proctor EK, editors. Dissemination and implementation research in health. New York: Oxford University Press; 2012. p. 400–18.

  69. 69.

    Eriksson N, Mullern T, Andersson T, Gadolin C, Tengblad S, Ujvari S. Involvement drivers: a study of nurses and physicians in improvement work. Q Manage Health Care. 2016;25:85–91.

  70. 70.

    Denison DR. What is the difference between organizational culture and organizational climate? A native’s point of view of a decade of paradigm wars. Acad Manag Rev. 1996;21:619–54.

  71. 71.

    Glisson C, James LR. The cross-level effects of culture and climate in human service teams. J Org Beh. 2002;23:767–94.

  72. 72.

    Reichenpfader U, Carlfjord S, Nilsen P. Leadership in evidence-based practice: a systematic review. Leadersh Health Serv. 2015;28:298–316.

  73. 73.

    Yukl G. Leadership in organizations. Prentice Hall: Upper Saddle River, NJ; 2006.

  74. 74.

    Gill R. Theory and practice of leadership. London: Sage Publications; 2011.

  75. 75.

    Hartley J, Benington J. Leadership for healthcare. Bristol: Policy Press; 2010. p. 1–130.

  76. 76.

    Locock L, Dopson S, Chambers D, Gabbay J. Understanding the role of opinion leaders in improving clinical effectiveness. Soc Sci Med. 2001;53:745–7.

  77. 77.

    Övretveit J. The leaders’ role in quality and safety improvement: a review of the research and guidance. Karolinska Institute: Stockholm; 2005.

  78. 78.

    Dickinson H, Ham C. Engaging doctors in leadership: review of the literature. Birmingham: University of Birmingham; 2008.

  79. 79.

    Hernan AL, Giles SJ, Fuller J, Johnson JK, Walker C, Dunbar JA. Patient and carer identified factors which contribute to safety incidents in primary care: a qualitative study. BMJ Qual Saf. 2015;24:583–93.

  80. 80.

    Longtin Y, Sax H, Leape LL, Sheridan SE, Donaldson L, Pittet D. Patient participation: current knowledge and applicability to patient safety. Mayo Clinic Proc. 2010;85:53–62.

  81. 81.

    Doherty C, Stavropoulou C. Patients’ willingness and ability to participate actively in the reduction of clinical errors: a systematic literature review. Soc Sci Med. 2012;75:257–63.

  82. 82.

    Bronfenbrenner U. The ecology of human development: experiments by nature and design. Cambridge, Massachusetts: Harvard University Press; 1979.

  83. 83.

    McLaren L, Hawe P. Ecological Perspectives in Health Research. J Epidemiol Community Health. 2005;59:6–14.

  84. 84.

    Weick KE. The social psychology of organising. Reading, MA: Addison-Wesley; 1969.

  85. 85.

    Meek VL. Organizational culture: origins and weaknesses. Organ Stud. 1988;9:453–73.

  86. 86.

    Estabrooks CA, Squires JE, Cummings GG, Birdsell JM, Norton PG. Development and assessment of the Alberta context tool. BMC Health Serv Res. 2009;9:234.

  87. 87.

    McCormack B, McCarthy G, Wright J, Slater P, Coffey A. Development and testing of the context assessment index (CAI). Worldviews Evid-Based Nurs. 2009;6(1):27–35.

  88. 88.

    Li SA, Jeffs L, Barwick M, Stevens B. Organizational contextual features that influence the implementation of evidence-based practices across healthcare settings: a systematic integrative review. Syst Rev. 2018;7:72.

  89. 89.

    Strifler L, Cardoso R, McGowan J, Cogo E, Nincic V, Khan PA, Scott A, Ghassemi M, MacDonald H, Lai Y, Treister V, Tricco AC, Straus SE. Scoping review identifies significant number of knowledge translation theories, models, and frameworks with limited use. J Clin Epidemiol. 2018;100:92–102.

  90. 90.

    Övretveit JC, Shekelle PG, Dy SM, McDonald KM, Hempel S, Pronovost P, et al. How does context affect interventions to improve patient safety? An assessment of evidence from studies of five patient safety practices and proposals for research. BMJ Qual Saf. 2011;20:604e610.

  91. 91.

    Cappelli P, Sherer PD. The missing role of context in OB: the need for a meso-level approach. Res Organ Behav. 1991;13:55–110.

  92. 92.

    Mowday RT, Sutton RI. Organizational behavior: linking individuals and groups to organizational con texts. Annu Rev Psychol. 1993;44:195–229.

  93. 93.

    Maren S, Phan L, Liberzon I. The contextual brain: implications for fear conditioning, extinction and psychopathology. Nat Rev Neurosci. 2013;14:417–28.

  94. Jarvis P, Holford J, Griffin C. The theory and practice of learning. London: Routledge; 2003. p. 1–170.

  95. Phillips DC, Soltis JF, editors. Perspectives on learning. New York: Teachers College; 2009. p. 1–120.

  96. Illeris K. A comprehensive understanding of human learning. In: Illeris K, editor. Contemporary theories of learning. Oxford: Routledge; 2009. p. 7–20.

  97. Jordan A, Carlile O, Stack A, editors. Approaches to learning – a guide for teachers. New York: Open University Press; 2008. p. 1–251.

  98. McMillan K, Perron A. Nurses amidst change. Policy Polit Nurs Pract. 2013;14:26–32.

Acknowledgements

We thank medical librarian Ida Stadig for support in the search process. We are also grateful for valuable comments on earlier drafts of the manuscript: Margit Neher, Linda Sundberg, Jeanette Wassar Kirk, Kristin Thomas, Siw Carlfjord, Kerstin Roback, and Sarah Birken.

Funding

No specific funding was received for this study.

Availability of data and materials

All data analysed in this review are available in the included published articles.

Author information

PN conceived of the paper. Both authors carried out all aspects of data collection and data analysis. PN wrote the first draft of the manuscript, and revisions were made together with SB. Both authors approved the final version.

Correspondence to Susanne Bernhardsson.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional file

Additional file 1:

Search strategy and results. Presentation of search strategies and number of records identified in the database searches. (DOCX 18 kb)

Rights and permissions

Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Nilsen, P., Bernhardsson, S. Context matters in implementation science: a scoping review of determinant frameworks that describe contextual determinants for implementation outcomes. BMC Health Serv Res 19, 189 (2019). https://doi.org/10.1186/s12913-019-4015-3

Keywords

  • Context
  • Determinants
  • Barriers
  • Frameworks
  • Implementation