Methods used to address fidelity of receipt in health intervention research: a citation analysis and systematic review

Abstract

Background

The US National Institutes of Health (NIH) Behaviour Change Consortium (BCC) framework acknowledges patients as active participants and supports the need to investigate the fidelity with which they receive interventions, i.e. receipt. According to this framework, addressing receipt consists of using strategies to assess or enhance participants’ understanding and/or performance of intervention skills. This systematic review aims to establish the frequency with which receipt, as defined in the BCC framework, is addressed in health research, and to describe the methods used in papers informed by the BCC framework and in the wider literature.

Methods

A forward citation search on papers presenting the BCC framework was performed to determine the frequency with which receipt, as defined in this framework, was addressed. A second electronic database search, including search terms pertaining to fidelity, receipt, health and process evaluations, was performed to identify papers reporting on receipt in the wider literature, irrespective of the framework used. These results were combined with the forward citation search results to review methods used to assess receipt. Eligibility criteria and data extraction forms were developed and applied to papers. Results are described in a narrative synthesis.

Results

Of the studies identified from the forward citation search that reported on fidelity, 19.6% (33 studies) were found to address receipt. In 60.6% of these, receipt was assessed in relation to understanding and in 42.4% in relation to performance of skill. Strategies to enhance these were present in 12.1% and 21.2% of studies, respectively. Fifty-five studies were included in the review of the wider literature. Several frameworks and operationalisations of receipt were reported, but the latter were not always consistent with the guiding framework. Receipt was most frequently operationalised in relation to intervention content (16.4%), satisfaction (14.5%), engagement (14.5%), and attendance (14.5%). The majority of studies (90.9%) included subjective assessments of receipt. These relied more often on quantitative (76.0%) than qualitative (42.0%) methods, and studies collected data on intervention recipients (50.0%), intervention deliverers (28.0%), or both (22.0%). Few studies (26.0%) reported on the reliability or validity of the methods used.

Conclusions

Receipt is infrequently addressed in health research and improvements to methods of assessment and reporting are required.

Background

Health behaviour change interventions are typically complex and often consist of multiple, interacting components [1]. This complexity is magnified by the fact that these interventions are often context-dependent, delivered across multiple settings, by multidisciplinary healthcare professionals, to a range of intervention recipients [2–4]. As a result, ensuring consistency in the implementation of behaviour change interventions is challenging [5]. Despite this, less attention is given to the implementation of behaviour change interventions than to their design and outcome evaluation [6–8].

Intervention fidelity is defined as the ‘ongoing assessment, monitoring, and enhancement of the reliability and internal validity of an intervention or treatment’ [9, 10]. Monitoring intervention fidelity is integral to accurately interpreting intervention outcomes, increasing scientific confidence, and furthering understanding of the relationships between intervention components, processes and outcomes [6–10]. For example, if an intervention is found to be ineffective, this may be attributable to inadequate or inconsistent fidelity of delivery by the intervention deliverer, rather than to the intervention components or design [10]. This can result in potentially effective interventions being discarded when inadequate implementation is in fact responsible (described by some as a ‘Type III error’) [11]. Moreover, assessing fidelity can support the wider implementation of interventions in clinical practice by identifying aspects of intervention delivery that require improvement, as well as intervention deliverer training needs that may form the basis of quality improvement efforts [3]. The importance of assessing intervention fidelity has been emphasised in the recently developed UK Medical Research Council guidance for conducting process evaluations of complex interventions [12].

Several conceptual models of fidelity have been proposed, and there is no consensus on how best to divide the study of implementation into key components [13]. Proposed models differ in the number and nature of the components argued to represent fidelity. In an attempt to synthesise and unify existing conceptual models, a Treatment Fidelity Workgroup of the National Institutes of Health (NIH) Behaviour Change Consortium (BCC) proposed a comprehensive framework comprising five components of intervention fidelity: design, training, delivery, receipt and enactment [9] (see Bellg et al. (2004) [9] and Borrelli et al. (2005) [10] for full definitions of these components). This framework has since guided a considerable amount of health research [14–17].

The current review examines the methods used to address receipt in health interventions. Patients are now more commonly regarded as active participants in healthcare than as passive recipients [18], particularly with the advent of self-management support in chronic conditions [19]. This active role requires that they engage fully with and understand interventions, and acquire intervention-related skills, so that they may subsequently apply these to their day-to-day lives (i.e. enactment). As such, receipt is the first recipient-related condition that needs to be fulfilled for the outcomes of an intervention to be influenced as intended, and enactment depends on this condition being fulfilled.

According to the original BCC framework papers [9, 10, 20], a study that addresses receipt includes one or more strategies to enhance and/or assess participants’ understanding of the intervention and/or their performance of intervention-related skills. The 2011 update [20] added consideration of multicultural factors in the development and delivery of the intervention as a strategy to enhance receipt. Receipt is also defined as the accuracy of participants’ understanding in Lichstein et al.’s (1994) [21] framework, and as ‘the extent to which participants actively engage with, interact with, are receptive to, and/or use materials or recommended resources’ in the frameworks of Linnan and Steckler (2002) [22] and Saunders et al. (2005) [23]. In addition, Saunders et al. (2005) [23] suggest receipt may also refer to participants’ satisfaction with the intervention and the interactions involved. The role of receipt or dose received in these other fidelity, process evaluation, or implementation frameworks further supports its importance in health research.

Despite the recognised importance of receipt, however, systematic reviews to date indicate this concept has received little research attention. Borrelli et al. [10] first examined the extent to which the BCC recommendations to address receipt were followed in health behaviour change research published between 1990 and 2000. Assessments of participants’ understanding and of performance of skill were found in 40% and 50% of papers, respectively. Strategies to enhance these were found in 52% and 53% of papers, respectively. In subsequent reviews [14–17] the proportion of papers addressing receipt varied between 0% and 79% (see Table 1). In general, strategies to enhance receipt have been included in studies more often than assessments of receipt (see Table 1).

Table 1 Proportion (%) of papers from past systematic reviews addressing receipt as defined in the BCC framework

There are limitations to the reviews described above. First, they examined fidelity in relation to specific clinical contexts. There is therefore a need to examine the extent to which receipt has been addressed in wider health intervention research, a little more than a decade after the publication of the original BCC fidelity framework in 2004 [9]. A second limitation, which also applies to Borrelli et al.’s review [10], is that limited attention is given to describing the methods used to address receipt. Comparability and coherence in the methods used across studies are advantageous, however, particularly for the effective interpretation and use of systematic reviews in decision-making [13]. Providing a synthesis of the fidelity methods used so far would be valuable in guiding future work.

This systematic review was designed to address these limitations. It aimed to describe 1) the frequency with which receipt, as defined in the BCC framework, has been addressed in health intervention studies reporting on fidelity and published since 2004, and 2) the methods used to address receipt. Since receipt is a component of fidelity frameworks other than the BCC, and because it can be reported on in papers without reference to a specific framework, the second aim of this review was broader in scope and examined methods used to address receipt irrespective of whether, or which, guiding framework was used.

Methods

Search strategies

Two electronic searches were used to address the aims of this review. First, to determine the frequency with which receipt, as defined in the BCC framework, has been addressed in health intervention studies since 2004, a forward citation search was conducted using the two seminal BCC framework papers [9, 10]. It was applied to Web of Science and Google Scholar and covered the 2004–2014 period. Results of the second search described below were not used to address this aim, as the focus of its search terms on receipt would have introduced bias towards papers reporting on this fidelity component.

Second, to identify methods used to assess receipt in the wider literature (i.e. without focus on the framework(s) used), results from the forward citation search described above were combined with those of a second search performed in five electronic databases (CINAHL, Embase, PsycINFO, Medline, and Allied and Complementary Medicine) using four groups of terms. These comprised synonyms of: i) fidelity, ii) intervention, iii) receipt, and iv) health (see Table 2 for a complete list of search terms). Within each group of synonyms, terms were combined using the OR function, and the groups of synonyms were combined using the AND function. Terms for receipt and health were searched in all fields (e.g. title, abstract, main body of article), whereas terms for fidelity and intervention were restricted to titles and abstracts, so as to increase the specificity of the search and identify studies whose main focus was to report on intervention fidelity.

Table 2 Search terms
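
To make the boolean structure described above concrete, the sketch below assembles an Ovid-style query from hypothetical synonym groups. The terms and the ".ti,ab." field tag are illustrative placeholders only, not the actual search terms or database syntax recorded in Table 2.

# Hypothetical synonym groups (i-iv); the real terms are listed in Table 2.
fidelity = ["fidelity", "treatment integrity"]   # restricted to title/abstract
intervention = ["intervention", "programme"]     # restricted to title/abstract
receipt = ["receipt", "dose received"]           # searched in all fields
health = ["health", "patient"]                   # searched in all fields

def or_group(terms, field=""):
    """Join a group of synonyms with OR, optionally adding a field restriction."""
    return "(" + " OR ".join(f'"{term}"{field}' for term in terms) + ")"

# Within each group, terms are combined with OR; the four groups are then
# combined with AND. ".ti,ab." mimics an Ovid-style title/abstract restriction
# for the fidelity and intervention groups.
query = " AND ".join([
    or_group(fidelity, ".ti,ab."),
    or_group(intervention, ".ti,ab."),
    or_group(receipt),
    or_group(health),
])
print(query)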

Paper selection

Papers published in English since 2004 and reporting data on receipt of a health intervention were included in this review. A full list of inclusion and exclusion criteria, applicable to results from both searches, is presented in Table 3. These were applied first at the title and abstract level, and then at the full-text level. They were piloted by the research team on 80 papers, with inter-rater agreement of Cohen’s kappa [24] k = 0.82. They were refined as appropriate and verified on a further 40 papers. Discrepancies in screening outcomes were discussed until agreement was reached.

Table 3 Inclusion and exclusion criteria
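
As a minimal illustration of the agreement statistic used in the screening pilot, the sketch below computes Cohen’s kappa for two reviewers’ inclusion decisions. The decision vectors are invented for illustration; they are not the review’s pilot data, which covered 80 papers.

from sklearn.metrics import cohen_kappa_score

# Invented screening decisions for the same ten papers (1 = include, 0 = exclude).
reviewer_a = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
reviewer_b = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]

# Cohen's kappa corrects raw percentage agreement for the agreement
# expected by chance alone, so it is stricter than simple % agreement.
kappa = cohen_kappa_score(reviewer_a, reviewer_b)
print(f"kappa = {kappa:.2f}")  # ~0.78 for these invented decisions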

Data extraction

A standardised data extraction form was developed and used to extract data in relation to: i) Study aims, ii) Study design, iii) Recipients/participants, iv) Intervention description, v) Information on receipt (guiding fidelity framework, assessment methods, enhancement strategies, etc.), and vi) Data collection details (e.g. timing of measurement(s), sample involved, reliability/validity, etc.). Data were extracted by one researcher and subsequently verified by a second researcher. A third reviewer was involved in instances of disagreement, and these were resolved through discussion.
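
A structured record along these lines might look like the sketch below; the field names are our shorthand for the categories just listed, not the authors’ actual extraction form.

from dataclasses import dataclass, field

# Hypothetical extraction record mirroring categories i)-vi) above.
@dataclass
class ExtractionRecord:
    study_aims: str
    study_design: str
    recipients: str
    intervention_description: str
    # v) Information on receipt
    guiding_framework: str
    assessment_methods: list = field(default_factory=list)
    enhancement_strategies: list = field(default_factory=list)
    # vi) Data collection details
    timing_of_measurements: str = ""
    sample_assessed: str = ""
    reliability_validity_reported: bool = False

In such a workflow, one researcher would complete a record per paper, a second would verify it, and discrepancies would be flagged for the third reviewer.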

Analysis and synthesis

All reviewed papers were examined to investigate how receipt was addressed. This investigation first focused on whether receipt as defined in the BCC framework had been addressed (assessments or strategies to enhance participants’ understanding and performance of skill, and consideration of multicultural factors) and then on any other method reported to assess receipt.

A narrative synthesis of the studies reviewed was performed. The proportion of papers citing the BCC framework and addressing receipt as defined in this framework is presented first, followed by the frequency with which different methods were used to address receipt in the wider literature.

Results

A PRISMA flow diagram is presented in Fig. 1. Of the 629 papers identified in the forward citation search, 555 were screened following duplicate removal. Thirty-three of these fit the eligibility criteria and were used to address the first aim of the review.

Fig. 1 PRISMA diagram. *The 168 papers reporting data on any type of fidelity from the forward citation search (left-hand flow) can be calculated as the sum 52 + 83 + 33. Search strategies were conducted consecutively; duplicates removed from the electronic database search results therefore included papers that had already been identified in the forward citation search

Of the 2345 papers identified in the electronic database search, 2282 were screened following duplicate removal. Twenty-two of these papers were selected for inclusion in the review. Combined with the forward citation search results, this resulted in a total of 55 papers being used to address the second aim of this review.

A summary of basic study characteristics (study designs, intervention deliverers and recipients, level and mode of delivery) is presented in Table 4 (detailed information on study characteristics is available in Additional file 1).

Table 4 Summary of characteristics of included studies (n = 55)

The fidelity research reported was embedded in RCT or cluster RCT designs in most cases (28 studies, 50.9%), but pilot/feasibility designs were also common (15 studies, 27.3%). All interventions included multiple components. The most common components were education or information provision in 19 studies (34.5%) [25–43], and behavioural skills rehearsal or acquisition in 8 studies (14.5%) [25, 26, 30, 38–40, 44, 45]. The largest group of intervention recipients (17 studies, 30.9%) was people with health conditions, including adults, women and children [33, 34, 43, 44, 46–58]. It was unclear who the intervention deliverers were in 12 studies (21.8%) [26, 39, 46, 50, 51, 55, 59–64], but in studies where this information was identifiable, deliverers were most frequently nurses (10 studies, 18.2%) [33, 35–37, 40, 47, 52, 65–67]. With regard to level and mode of delivery, interventions were most frequently delivered at the individual (25 studies, 45.5%) [27–29, 33, 34, 40, 41, 45, 46, 48, 50–52, 54, 56, 60, 63, 65, 66, 68–73] and group level (19 studies, 34.5%) [26, 31, 32, 38, 39, 42, 43, 49, 53, 55, 58, 61, 62, 64, 67, 74–77]. Face-to-face delivery was the most common mode (28 studies, 50.9%) [27, 29, 31, 32, 35–38, 41–45, 49, 50, 56, 58, 60–62, 66–68, 74–78].

Papers citing the BCC framework and addressing fidelity of receipt as per BCC definition

Of the 629 forward citation search results, 168 papers reported on fidelity of a health intervention (see the notes under Fig. 1 to locate these in the PRISMA diagram), 33 (19.6%) of which addressed receipt (studies 1–33 in Table 5). Although all 33 papers cited the BCC framework, 5 (15.2%) were not worded in a way that suggested this framework had informed the fidelity or process evaluation reported [28, 39, 66, 67, 77].

Table 5 Methods of assessment and enhancement of fidelity of receipt

Twenty-five (75.8%) of these 33 studies addressed receipt in one or more ways consistent with the definitions proposed in the BCC framework. An assessment of participants’ understanding was included in 20 (60.6%) studies [25, 29, 31, 33–37, 39, 45, 47, 48, 50, 57, 61, 65, 67, 73, 75, 78] and an assessment of participants’ performance of intervention-related skills in 14 (42.4%) studies [33–36, 45, 47, 48, 51, 54, 56, 57, 65, 75, 78]. With regard to strategies to enhance receipt, 4 (12.1%) studies reported using a strategy to enhance participants’ understanding [41, 48, 56, 57], and 7 (21.2%) a strategy to enhance performance of intervention-related skills [39, 41, 44, 47, 48, 56, 57]. Four (12.1%) studies reported having considered multicultural factors in the design or delivery of the intervention [25, 29, 31, 64].

Methods used to assess receipt

To address the second aim of this review, eligible studies identified through both electronic searches (55 studies) were examined. Information on the methods used to assess receipt in these studies is displayed in Table 5 (further details can be found in Additional file 2).

Frameworks used

As a consequence of the focus of the forward citation search on the BCC framework, this was the framework used to inform planning and/or evaluation in the majority of studies (28 studies, 50.9%); none of the studies included from the electronic database search reported using the BCC framework. Other frameworks that informed the studies reviewed included the process evaluation framework by Linnan and Steckler (2002) [22] in 11 (20.0%) studies [27, 46, 52, 53, 55, 60, 66, 68, 69, 71, 74], Lichstein et al.’s Treatment Implementation Model (TIM) [21] in 4 (7.3%) studies [28, 39, 40, 67], Saunders et al.’s framework [23] in 5 (9.1%) studies [26, 30, 46, 49, 59], the Reach, Efficacy, Adoption, Implementation, and Maintenance (RE-AIM) framework [79] in 2 (3.6%) studies [46, 70], Dane & Schneider’s framework [80] in 2 (3.6%) studies [38, 76], Dusenbury et al.’s framework [81, 82] in 2 (3.6%) studies [38, 62], and Baranowski et al.’s framework [83] in 1 (1.8%) study [52]. Brief definitions of receipt in these frameworks are given in the notes below the table in Additional file 2. More than one of the above frameworks informed the study in 2 (3.6%) of the 55 reviewed studies [46, 52], with a maximum of three frameworks used in a single study, none of which was the BCC framework. In 4 studies (7.3%), there was no suggestion that a framework had been considered [32, 72, 77, 84].

Operationalisations of receipt

Given the focus of the forward citation search on the BCC framework, the two most common ways of assessing receipt in the 55 studies reviewed were measurements of understanding, included in 26 (47.3%) studies [25, 29–31, 33–37, 39, 40, 45, 47–50, 57, 60–62, 65, 67, 70, 73, 75, 78], and of performance of skills, included in 16 (29.1%) studies [33–36, 45, 47, 48, 51, 54, 56, 57, 65, 70, 71, 75, 78].

Receipt was also operationalised in relation to intervention content (e.g. intervention components received or completed, problem areas discussed, advice given) in 9 (16.4%) studies [28, 32, 44, 60, 61, 67–70], satisfaction in 8 (14.5%) studies [27, 41, 49, 52, 55, 59, 65, 66], engagement (level of participation, involvement, enjoyment, or communication) in 8 (14.5%) studies [30, 39, 52, 55, 57, 66, 73, 76], attendance in 8 (14.5%) studies [31, 43, 56, 58, 64, 73, 74, 76], acceptability in 6 (10.9%) studies [26, 42, 48, 49, 63, 75], use of materials (e.g. website use, homework completed) in 4 (7.3%) studies [28, 46, 47, 51], behavioural change and/or maintenance in 4 (7.3%) studies [25, 54, 67, 71], receptivity or responsiveness in 3 (5.5%) studies [38, 62, 77], receipt of intervention materials in 3 (5.5%) studies [39, 59, 84], intention to implement learnings from the intervention in 2 (3.6%) studies [52, 60], telephone contacts during intervention delivery in 2 (3.6%) studies [48, 64], reaction to intervention or feedback on the program in 2 (3.6%) studies [32, 39], self-efficacy or confidence in 2 (3.6%) studies [30, 61], exposure (e.g. awareness of intervention) in 2 (3.6%) studies [59, 71], and use of skills learnt in 2 (3.6%) studies [45, 74]. Operationalisations of receipt that were used in only 1 study (1.8%) were attitude in relation to the intervention topic [61], perceived effects of exposure [36], treatment received with respect [70], feasibility [26], adherence to commitments made [52], adequacy of communication methods used [40], and availability of hardware to use intervention materials [48].

Studies using the same framework operationalised receipt in many ways, some of which were not consistent with the conceptualisation of receipt proposed in the respective framework. One example is the 11 studies using the Linnan and Steckler framework [22], in which dose received is defined as ‘the extent to which participants actively engage with, interact with, are receptive to, and/or use materials or recommended resources’. These studies included measures of engagement in 4 studies [52, 53, 55, 66], measures relating to exposure to or use of intervention materials in 3 studies [46, 71, 74], behaviour change following the intervention in 1 study [71], and intention to implement the intervention in 2 studies [52, 60]. Other measures were used that were less consistent with the framework’s definition of receipt. These included measures of satisfaction in 4 studies [27, 52, 55, 66], intervention content in 3 studies [60, 68, 69], attendance in 1 study [74], and adherence to commitments made in 1 study [52].

A second example is the 4 studies using Lichstein et al.’s [21] framework, in which receipt is defined as the accuracy of participants’ understanding of the intervention. These studies included measures of receipt that related to intervention content (problem areas discussed [28], accuracy of recall of intervention content [67]), contacts [28], participants’ receipt of intervention materials [39], level of participation [39], feedback on the intervention [39], and adequacy of communication methods used [40]. The same applies to studies using other frameworks (see frameworks and measures used in Additional file 2).

Assessments of receipt

Five (9.1%) studies included only an objective assessment of receipt [43, 44, 46, 58, 76], whilst 7 (12.7%) combined this with a subjective assessment [31, 38, 48, 51, 56, 64, 73]. The majority of studies (43 studies, 78.2%) included only a subjective assessment of receipt (i.e. collected on intervention deliverers or recipients) [25–30, 32–37, 39–42, 45, 47, 49, 50, 52–55, 57, 59–63, 65–72, 74, 75, 77, 78, 84].

Objective assessments

In the 12 (21.8%) studies that included an objective assessment of receipt [31, 34, 38, 43, 44, 46, 48, 51, 58, 64, 73, 76], this was measured using the number of participants reached during the intervention and the number of participants who needed to borrow hardware to use intervention materials in 1 study [48], website monitoring of module or chapter completion in 2 studies [46, 51], website logins in 1 study [46], records from intervention sessions in 1 study [44], or attendance logs in 8 studies [31, 34, 38, 43, 58, 64, 73, 76].

Subjective assessments

In total, 50 (90.9%) of the 55 studies included a subjective assessment, 21 (42.0%) of which used qualitative methods [25, 28, 32, 33, 36, 39, 40, 42, 45, 47, 50, 52–54, 57, 63, 66, 67, 69, 73, 75] and 38 (76.0%) of which used quantitative methods [26, 27, 29–32, 34, 35, 37–42, 45, 48, 49, 51, 52, 55–57, 59–62, 64–66, 68, 70–72, 74, 75, 77, 78, 84].

Fourteen (28.0%) of the 50 studies included a subjective assessment collected on the intervention deliverer [26, 28, 30, 33, 34, 53, 56, 57, 62, 64, 69, 73, 78, 84], 25 (50.0%) on the intervention recipient [27, 29, 31, 32, 35–38, 40–42, 45, 47, 51, 54, 59–61, 63, 65, 70–72, 74, 77], and 11 (22.0%) on both of these [25, 39, 48–50, 52, 55, 66–68, 75].

Assessments collected on intervention deliverers

Twenty-five (45.5%) of the 55 studies that included a measurement of receipt collected these data on the intervention deliverer. Although collected on intervention deliverers, these assessments were generally about intervention participants. Equal numbers of these assessments involved the collection of qualitative (14 studies, 25.5%) and quantitative data (14 studies, 25.5%). Qualitative data consisted of individual interviews, focus groups or reports in 4 studies [50, 52, 67, 69], field notes and comments in 3 studies [39, 53, 66], audio or videotapes of intervention sessions in 3 studies [66, 73, 75], participant observations in 2 studies [33, 48], documentation in participants’ care plans in 1 study [25], records of contacts kept during the intervention in 1 study [28], and active questioning of participants in 1 study [57]. Quantitative data was collected via self-report through questionnaires, surveys or checklists in 8 studies [26, 30, 49, 52, 55, 62, 68, 84], checklists or ratings completed during or following participant observations in 5 studies [34, 53, 56, 57, 78], and the number and length of phone contacts with participants in 1 study [64].

Assessments collected on intervention recipients

In total, 36 (65.5%) studies included a measure of receipt taken on intervention participants. Thirteen (23.6%) studies included an assessment of receipt performed using qualitative methods. These included interviews in 4 studies [40, 50, 63, 67], focus groups in 3 studies [32, 36, 75], reports in 2 studies [25, 67], audio recordings in 2 studies [45, 54], verbal confirmation of participants’ understanding in 1 study [25], confirmation of receipt of information on intervention requirements in 1 study [39], data on meeting discussions in 1 study [42], daily journals in 1 study [45], and review of participants’ skills and understanding through demonstrations and practice in 1 study [47]. Quantitative data was collected in just over half of the studies (29 studies, 52.7%) via questionnaires/surveys [27, 29, 31, 32, 35, 37–42, 45, 48, 49, 51, 52, 55, 59–61, 65, 66, 68, 70–72, 74, 75, 77].

Validity and reliability of subjective assessments

Only 13 (26.0%) of the 50 studies that included a subjective assessment gave some consideration to the reliability or validity of the methods used to assess receipt [26, 29, 37, 42, 45, 48, 53, 54, 61, 63, 65, 69, 75].

Such considerations were reported in relation to quantitative methods (surveys, questionnaires, or checklists) in 10 (26.3%) of the 38 studies making use of these [26, 29, 37, 42, 45, 48, 53, 61, 65, 75]. They included reporting Cronbach’s alpha or justifying its absence [45, 48, 53, 65], providing information on psychometric properties [29, 37, 75], reporting on construct/content validity [42, 61], or reporting on blinding [26].

Such considerations were reported in relation to qualitative methods in 4 (19.0%) of the 21 studies using these [45, 54, 63, 69]: data were coded by more than one person [54, 63], the coder was blinded to group allocation [45], or the scoring attributed to each participant based on the qualitative data collected was calculated independently by two researchers and the kappa coefficient for their agreement reported [69].

Sample selection for receipt assessment

The majority of the 55 studies reviewed (38 studies, 69.1%) [25–30, 33, 35, 36, 38–47, 49, 51, 52, 55–62, 64, 67, 68, 72, 74, 76–78] collected receipt data on all (100%) intervention deliverers or intervention participants. There were 4 (7.3%) studies in which the proportion of the sample on which data were collected varied by assessment measure, one of them being less than 100% [48, 50, 73, 75]. In the 15 (27.3%) studies in which receipt was assessed on less than 100% of the sample, the selection of the subsample was related to missing data or participant withdrawal in 4 studies [63, 65, 66, 70], invitations issued (no further details provided) [50], purposive sampling [54], random selection [56, 73], convenience sampling [53], specific eligibility criteria defined to select the cluster to assess [32], a representative sampling method [69], one in every 5 participants being assessed [71], only one of the intervention groups being assessed [48], or a subset of people randomly selected from one of the clusters assessed [84]. In one study this information was unclear [75].

Timing of receipt assessments

In 23 (41.8%) of the 55 studies reviewed, the assessment(s) of receipt were conducted during the intervention period (e.g. during/after each intervention session) [25, 27, 28, 30, 33, 34, 43, 44, 46, 47, 50, 54–59, 62, 64, 68, 73, 76, 78]. A slightly lower number of studies (15 studies, 27.3%) included an assessment of receipt performed following the intervention [26, 29, 32, 36, 38, 40, 41, 60, 63, 69–72, 74, 77]. Others (14 studies, 25.5%) included assessments taken at different time points: 4 (7.3%) studies included pre- and post-intervention assessments [31, 35, 37, 61], one of which combined these with an assessment during the intervention [31]. Nine (16.4%) studies included assessments taken both during and after the intervention [39, 42, 45, 48, 49, 52, 66, 67, 75]. A less frequent combination consisted of assessments taken before as well as during the intervention, found in 1 study [51]. In 2 (3.6%) studies the timing of the receipt assessments was unclear [65, 84].

Assessments of receipt such as those based on attendance logs, documentation in care plans, field notes, comments, meeting data, recordings, daily journals, observations, records of contacts, demonstrations of skills or completion of practice logs, logins/website monitoring, were generally collected during the intervention period.

Assessments of receipt collected after the intervention were generally those that required participants’ exposure to the intervention, for example measures of satisfaction, acceptability, feasibility, recall of intervention content, feedback forms, use of or receptivity to intervention materials/skills, and interviews/focus groups on intervention content or experiences of using the intervention. Assessments based on pre- and post-intervention measurements were used to examine effects of the intervention on variables such as knowledge or self-efficacy.

Discussion

The first aim of this review was to identify the frequency with which receipt, as defined in the BCC framework, is addressed in health intervention research. Only 19.6% of the studies identified from the forward citation search as reporting on fidelity were found to address receipt, compared with 33% in a recent review on clinical supervision [85]. Amongst the studies identified, 60.6% assessed receipt in relation to understanding (compared to 0–69% in other reviews [10, 14–17]) and 42.4% in relation to performance of skill (39–65% in other reviews [10, 14–17]). Strategies to enhance understanding were present in only 12.1% of studies (0–79% in other reviews [10, 14–17]) and strategies to enhance performance of skill in 21.2% (50–69% in other reviews [10, 14–17]). These results suggest that there has been little improvement over time in the frequency with which receipt is addressed in health intervention research, and that there is a need to continue to advocate for better quality evaluations that focus and report on this fidelity component. These results were further supported by our examination of the wider literature (i.e. not only BCC-related studies), in which understanding was found to be assessed in 47.3% of the 55 studies reviewed and performance of skill in 29.1%. As suggested by Prowse and colleagues [86], integrating fidelity components into the list of recommended information in reporting guidelines may help increase the proportion of studies addressing and reporting on receipt. Some reporting guidelines have encouraged reporting on fidelity of receipt (e.g. the Template for Intervention Description and Replication checklist [87]) but others have not. The Consolidated Standards of Reporting Trials (CONSORT) checklist for RCTs [88], for example, emphasises the importance of external validity with regard to generalisability, but does not include the importance of reporting on fidelity. Similarly, a CONSORT extension for non-pharmacological trials [89] does underline the importance of reporting implementation details, but the emphasis is on intervention delivery and not on fidelity of receipt. Consistency across reporting guidelines would help ensure receipt is addressed and reported more consistently.

The proportions reported above are considerably lower than those found in other reviews (see Table 1) that examined receipt using the BCC framework as a guide, particularly with regard to strategies to enhance receipt. Possible explanations may relate to differences in the methods used to conduct these systematic reviews. Previous reviews have excluded papers based on study design. Preyde et al. [17], for example, focused only on RCTs and quasi-experimental designs, whilst Garbacz et al. [14] required the presence of a comparison or control group. Similarly, McArthur et al. [16] included only RCTs and studies with control groups. In contrast, our review was inclusive of all study designs: a considerable proportion (27.3%), for example, were pilot or feasibility studies, and in a further 5 papers (9.1%) the study design was unclear. Higher quality studies, and those aiming to test hypotheses, may be more likely to monitor and report on fidelity components. Maynard and colleagues [90], for example, found that RCTs were 3 times more likely to measure fidelity than studies with a design of lower quality. We believe that addressing fidelity components is important in study designs such as pilot or feasibility studies, and the proportion of these designs included in our review tends to indicate this belief is not uncommon. These trials play a fundamental role in determining the methods and procedures used to assess and implement an approach that will subsequently be used in a larger study, and they can help refine an intervention and its implementation to increase its probability of success when evaluated in a larger RCT [91].

Another explanation for some of the differences between this and other reviews lies in the method used to assess the presence or absence of assessments or strategies to enhance receipt. In other reviews [10, 15–17], fidelity components were judged to be ‘present’, ‘absent (but should be present)’, or ‘not applicable’ (the particular fidelity strategy was not applicable to the paper in question). In this review, the denominator used to calculate proportions was the total number of studies, not only those studies where receipt was deemed to be applicable; our figures are therefore conservative estimates. Similar to Garbacz et al. [14], our review did not account for studies where receipt was not deemed applicable. Performance of a skill, for example, may not have been relevant in all the studies we reviewed; an intervention aiming only to provide information on health benefits (e.g. Kilanowski et al. [31] in this review) is one example. As most interventions reviewed involved multiple components and targeted behaviour change, it is unlikely this difference in methods significantly affected our findings. In line with this, future work may benefit from developing guidance for researchers on methods to address fidelity components that is specific to different intervention types, populations, or evaluation methodologies. Some researchers have begun this process by working towards the identification of features that are unique to the fidelity of technology-based interventions [92].

An important challenge in the field of fidelity is the varying nature of interventions, and the tailored design of an intervention fidelity plan that this requires [90]. This is compounded by a further challenge: the lack of reliable methods available to measure intervention fidelity [93]. The second aim of this review was to describe the methods used to address receipt. Our main findings are that receipt has been operationalised in a variety of ways across studies, and that operationalisations are not always consistent with the framework reported to be guiding the evaluation. Such inconsistencies in the operationalisation of receipt make it difficult to synthesise evidence of receipt and to build a science of fidelity. Clearer reporting of methods to address receipt is also required and may help improve consistency in this field. In this review, a third reviewer was involved in data extraction for 18 (32.7%) papers to help reach agreement on the methods used to assess receipt. One common problem was the lack of clear differentiation between fidelity components or other constructs measured and reported on. Ensuring constructs are clearly labelled and differentiated from others is recommended for future work. A recent meta-evaluation of fidelity work in psychosocial intervention research supports our review’s findings: it found strong variation in whether authors defined fidelity, that the use of different fidelity frameworks and terminology tended to generate confusion and make comparisons difficult, and that the operationalisation of receipt varied greatly [94]. The BCC framework was an attempt to build consistency in the science of fidelity, but ten years later this attempt does not appear to have been entirely successful. As underlined by Prowse and colleagues [94], there is a need for standardisation in the field of fidelity, but this must not increase complexity.

A subjective assessment of receipt was included in 90.9% of the studies reviewed, carried out using quantitative (76.0%) and/or qualitative (42.0%) methods. Quantitative and qualitative methods have both been recognised to provide valuable process evaluation data [13], so the combination found in this review is not surprising. One important finding from our review, however, was that only 26.0% of studies using subjective assessments of receipt reported on the reliability and validity of the measurement tools or qualitative methodology used: 26.3% of studies using quantitative methods and 19.0% of those using qualitative methods provided such information. This echoes a previous review of fidelity in which none of the studies addressing fidelity were found to have reported on reliability [90]. The lack of information on these issues limits the utility and value of the measures used and their potential to inform evidence-based practice and policy.

Strengths and limitations of the review

A strength of this review lies in the search strategies used. A forward citation search on the two seminal papers presenting the BCC framework was performed to determine the frequency with which healthcare intervention studies citing this framework assessed receipt. This has been shown to be an effective strategy for identifying literature pertaining to a specific framework or model [95], and its use in this review was therefore well suited to the exhaustive identification of relevant papers. Citation searching has been shown to help locate relevant work that traditional database searching sometimes fails to identify [96, 97], but is not commonly used in reviews. The second strategy combined the results of the forward citation search and a database search to examine methods used to assess receipt in healthcare interventions. Another strength of this review is the range of health interventions it covered. Previous reviews on fidelity have focused on specific fields of intervention research and populations (e.g. second-hand smoking [15], mental health [16], and psychosocial oncology [17]). Although Borrelli and colleagues [10] examined a broad range of interventions, their review was published over 10 years ago. To the best of our knowledge, the current review is the first to focus specifically on fidelity of receipt. It was therefore considered more appropriate to broaden the intervention focus as much as possible, to reach an overall understanding of the current state of this field of research. Finally, methods to address receipt have not been the specific focus of earlier reviews: previous reviews [98, 99] have reported on methods to assess fidelity, but these were focused on delivery.

This review is not without limitations. First, the first research question focused on the BCC framework. Other fidelity frameworks have been used, and the study of their applications may have yielded findings that could have added to our understanding of receipt in interventional research. However, the BCC framework was chosen for its comprehensiveness, as it was developed to unify previously proposed frameworks of fidelity, and to enable comparison with previous reviews that have examined fidelity using this framework. Furthermore, our second research question was broad in scope and examined the use of several other frameworks. This was to account for the emerging science of fidelity assessment [100], and the likely variability in fidelity conceptualisations and practices.

Second, this review included only published work. The reporting of complex health interventions is often incomplete [101, 102], and the absence of fidelity assessments from published manuscripts does not necessarily imply their omission from evaluation designs. Consulting the grey literature may have identified a higher frequency with which fidelity of receipt was assessed. Finally, our examination of how receipt was addressed in the literature was applied to the intervention group and not to control groups [20]. We agree that it is important for fidelity to be assessed in control groups; however, we did not feel this was within the scope of this review.

Furthermore, it should be noted that fidelity of interventions is part of a broader process in which context is an important consideration, both in how it affects the implementation of the intervention (e.g. adaptations and alterations to the intervention) and in its mechanisms of impact (e.g. participants’ responses to and interactions with the intervention) [13]. For example, in interventions to increase vaccination uptake, both media scares (context) and individual differences in cognitive and emotional antecedents to vaccine uptake (individual beliefs and fears) may be important considerations. If such interventions are not successful in improving participants’ understanding of vaccination, or their skills in cognitive reframing regarding vaccination in the context of collective fear, then it is unlikely that vaccination would be enacted and fear would remain. Yet participants with improved understanding and skills in challenging unhelpful beliefs would be more likely to vaccinate. Tailoring an intervention to the individual and their social and cultural context will therefore plausibly lead to better receipt of the intervention, which will in turn result in improved outcomes. Future studies should examine the extent to which intervention receipt is the mediating mechanism between tailored interventions and enactment, and how these factors impact on outcomes.

Conclusion

Addressing intervention fidelity is a fundamental part of conducting valid evaluations in health intervention research, and receipt is one of the fidelity components to address. This systematic review examined the extent to which, and the methods by which, receipt has been addressed in health intervention research over the last ten years. The results indicate a need for receipt to be more frequently integrated into research agendas. The review also identified issues and concerns relating to the ways in which receipt has been addressed to date, with operationalisations of receipt lacking consistency. We recommend that information on the reliability and validity of receipt measures be reported in future fidelity research.

Box 1: Lessons learnt and recommendations from this review

Lessons learnt

• Fidelity of receipt (as defined in the BCC framework, i.e. assessments of participants’ understanding and performance of skill and strategies to enhance these) remains poorly assessed in health intervention research

• Reporting of strategies to enhance receipt, i.e. participants’ understanding and performance of skill, remains particularly low.

• Other frameworks than the BCC have been used to guide fidelity/process evaluation work, but operationalisations of receipt do not always match the definitions of receipt provided in these frameworks

• The reporting of methods used to assess receipt requires improvement. Reporting was unclear in a number of papers, requiring readers to read manuscripts attentively several times to identify how receipt was operationalised, and often providing no information on the validity/reliability of the methods used

• Quantitative and qualitative methods, or a combination of both, have been used to address fidelity of receipt in health intervention research.

Recommendations for future work

• In the early stages of study design, consider how to address fidelity of receipt, both in relation to assessments and to strategies to enhance it

• Select one or more fidelity frameworks to guide fidelity work (or use an overarching model) and ensure the methods used to assess receipt are consistent with the definitions of receipt in the chosen framework(s) (provide definitions of receipt)

• Clearly differentiate between fidelity components and other constructs when writing papers (e.g. receipt and enactment are different constructs, therefore the methods used to assess them, as well as the corresponding results, need to be described separately).

• Address and report on the reliability and validity of the methods used to assess receipt

Abbreviations

BCC: Behaviour Change Consortium

CONSORT: Consolidated Standards of Reporting Trials

NIH: National Institutes of Health

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

RCT: Randomised controlled trial

RE-AIM: Reach, Efficacy, Adoption, Implementation, and Maintenance

TARS: Training Acceptability Rating Scale

TIM: Treatment Implementation Model

References

  1. Michie S, Abraham C, Whittington C, McAteer J, Gupta S. Effective techniques in healthy eating and physical activity interventions: A meta-regression. Health Psychol. 2009;28(6):690–701. doi:10.1037/a0016136.

  2. Bonell C, Fletcher A, Morton M, Lorenc T, Moore L. Realist randomised controlled trials: a new approach to evaluating complex public health interventions. Soc Sci Med. 2012;75(12):2299–306. doi:10.1016/j.socscimed.2012.08.032.

  3. Durlak JA. Why Program Implementation is Important. J Prev Interv Community. 1998;17(2):5–18. doi:10.1300/J005v17n02_02.

  4. Montgomery P, Grant S, Hopewell S, Macdonald G, Moher D, Michie S, et al. Protocol for CONSORT-SPI: an extension for social and psychological interventions. Implement Sci. 2013;8:99. doi:10.1186/1748-5908-8-99.

  5. Alexander JA, Hearld LR. Methods and metrics challenges of delivery-system research. Implement Sci. 2012;7:15. doi:10.1186/1748-5908-7-15.

  6. Hardeman W, Michie S, Fanshawe T, Prevost AT, McLoughlin K, Kinmonth AL. Fidelity of delivery of a physical activity intervention: Predictors and consequences. Psychol Health. 2008;23(1):11–24. doi:10.1080/08870440701615948.

  7. Issenberg SB, McGaghie WC, Petrusa ER, Lee GD, Scalese RJ. Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Med Teach. 2005;27:10–28. doi:10.1080/01421590500046924.

  8. Mihalic S, Fagan A, Argamaso S. Implementing the LifeSkills Training drug prevention program: factors related to implementation fidelity. Implement Sci. 2008;3(1):5. doi:10.1186/1748-5908-3-5.

  9. Bellg AJ, Borrelli B, Resnick B, Hecht J, Minicucci DS, Ory M, et al. Enhancing treatment fidelity in health behavior change studies: best practices and recommendations from the NIH Behavior Change Consortium. Health Psychol. 2004;23:443–51. doi:10.1037/0278-6133.23.5.443.

  10. Borrelli B, Sepinwall D, Ernst D, Bellg AJ, Czajkowski S, Breger R, et al. A new tool to assess treatment fidelity and evaluation of treatment fidelity across 10 years of health behavior research. J Consult Clin Psychol. 2005;73(5):852–60. doi:10.1037/0022-006x.73.5.852.

  11. Dobson D. Avoiding type III error in program evaluation: Results from a field experiment. Eval Program Plann. 1980;3:269–76.

  12. Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008;337:a1655. doi:10.1136/bmj.a1655.

  13. Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ. 2015;350:h1258. doi:10.1136/bmj.h1258.

  14. Garbacz LL, Brown DM, Spee GA, Polo AJ, Budd KS. Establishing treatment fidelity in evidence-based parent training programs for externalising disorders in children and adolescents. Clin Child Fam Psychol Rev. 2014;17(3):230–47. doi:10.1007/s10567-014-0166-2.

  15. Johnson-Kozlow M, Hovell MF, Rovniak LS, Sirikulvadhana L, Wahlgren DR, Zakarian JM. Fidelity issues in secondhand smoking interventions for children. Nicotine Tob Res. 2008;10(12):1677–90.

  16. McArthur BA, Riosa PB, Preyde M. Treatment fidelity in psychosocial intervention for children and adolescents with comorbid problems. Child Adolesc Mental Health. 2012;17(3):139–45. doi:10.1111/j.1475-3588.2011.00635.x.

  17. Preyde M, Burnham PV. Intervention Fidelity in Psychosocial Oncology. J Evid Based Soc Work. 2011;8(4):379–96. doi:10.1080/15433714.2011.542334.

  18. Newman S, Steed E, Mulligan K. Chronic Physical Illness: Self Management and Behavioural Interventions. 1st ed. Berkshire: Open University Press; 2008.

  19. Lorig KR, Holman H. Self-management education: history, definition, outcomes, and mechanisms. Ann Behav Med. 2003;26:1–7. doi:10.1207/S15324796ABM2601_01.

  20. Borrelli B. The assessment, monitoring, and enhancement of treatment fidelity in public health clinical trials. J Public Health Dent. 2011;71 Suppl 1:S52–63. doi:10.1111/j.1752-7325.2011.00233.x.

  21. Lichstein KL, Riedel BW, Grieve R. Fair tests of clinical trials: A treatment implementation model. Adv Behav Res Ther. 1994;16:1–29. doi:10.1016/0146-6402(94)90001-9.

  22. Linnan L, Steckler A. Process Evaluation for Public Health Interventions and Research: An Overview. In: Process Evaluation for Public Health Interventions and Research. 1st ed. San Fransisco: Jossey-Bass; 2002. p. 1–23.

  23. Saunders RP, Evans MH, Joshi P. Developing a process-evaluation plan for assessing health promotion program implementation: a how-to guide. Health Promot Pract. 2005;6:134–47. doi:10.1177/1524839904273387.

  24. Cohen J. A Coefficient of Agreement for Nominal Scales. Educ Psychol Meas. 1960;20:37–46. doi:10.1177/001316446002000104.

  25. Black K. Establishing empirically-informed practice with caregivers: findings from the CARES program. J Gerontol Soc Work. 2014;57:585–601. doi:10.1080/01634372.2013.865696.

  26. Branscum P, Sharma M, Wang LL, Wilson B, Rojas-Guyler L. A process evaluation of a social cognitive theory-based childhood obesity prevention intervention: the Comics for Health program. Health Promot Pract. 2013;14:189–98. doi:10.1177/1524839912437790.

  27. Brice JH, Kingdon D, Runyan C. Welcome to the world. Prehosp Emerg Care. 2009;13:228–36. doi:10.1080/10903120802474497.

  28. Chee YK, Gitlin LN, Dennis MP, Hauck WW. Predictors of adherence to a skill-building intervention in dementia caregivers. J Gerontol A Biol Sci Med Sci. 2007;62:673–8.

  29. Ford S, Meghea C, Estes T, Hamade H, Lockett M, Williams KP. Assessing the fidelity of the Kin KeeperSM prevention intervention in African American, Latina and Arab women. Health Educ Res. 2014;29:158–65. doi:10.1093/her/cyt100.

  30. Goenka S, Tewari A, Arora M, Stigler MH, Perry CL, Arnold JP, et al. Process evaluation of a tobacco prevention program in Indian schools--methods, results and lessons learnt. Health Educ Res. 2010;25:917–35. doi:10.1093/her/cyq042.

  31. Kilanowski JF, Lin L. Summer migrant students learn healthy choices through videography. J Sch Nurs. 2014;30:272–80. doi:10.1177/1059840513506999.

  32. Potter SC, Schneider D, Coyle KK, May G, Robin L, Seymour J. What works? Process evaluation of a school-based fruit and vegetable distribution program in Mississippi. J Sch Health. 2011;81:202–11. doi:10.1111/j.1746-1561.2010.00580.x.

  33. Pretzer-Aboff I, Galik E, Resnick B. Feasibility and impact of a function focused care intervention for Parkinson’s disease in the community. Nurs Res. 2011;60:276–83. doi:10.1097/NNR.0b013e318221bb0f.

  34. Resnick B, Inguito P, Orwig D, Yahiro JY, Hawkes W, Werner M, et al. Treatment fidelity in behavior change research: a case example. Nurs Res. 2005;54:139–43.

  35. Resnick B, Galik E, Pretzer-Aboff I, Gruber-Baldini AL, Russ K, Cayo J, et al. Treatment fidelity in nursing home research: the Res-Care Intervention Study. Res Gerontol Nurs. 2009;2:30–8. doi:10.3928/19404921-20090101-09.

  36. Resnick B, Galik E, Enders H, Sobol K, Hammersla M, Dustin I, et al. Pilot testing of function-focused care for acute care intervention. J Nurs Care Qual. 2011;26:169–77. doi:10.1097/NCQ.0b013e3181eefd94.

  37. Resnick B, Galik E, Gruber-Baldini A, Zimmerman S. Understanding dissemination and implementation of a new intervention in assisted living settings: the case of function-focused care. J Appl Gerontol. 2013;32:280–301. doi:10.1177/0733464811419285.

  38. Skara S, Rohrbach LA, Sun P, Sussman S. An evaluation of the fidelity of implementation of a school-based drug abuse prevention program: project toward no drug abuse (TND). J Drug Educ. 2005;35:305–29.

  39. Stevens AB, Strasser DC, Uomoto J, Bowen SE, Falconer JA. Utility of Treatment Implementation methods in clinical trial with rehabilitation teams. J Rehabil Res Dev. 2007;44:537–46.

  40. Teel CS, Leenerts MH. Developing and testing a self-care intervention for older adults in caregiving roles. Nurs Res. 2005;54:193–201.

  41. Weinstein P, Milgrom P, Riedy CA, Mancl LA, Garson G, Huebner CE, et al. Treatment fidelity of brief motivational interviewing and health education in a randomised clinical trial to promote dental attendance of low-income mothers and children: Community-Based Intergenerational Oral Health Study “Baby Smiles”. BMC Oral Health. 2014;14:15. doi:10.1186/1472-6831-14-15.

  42. Yamada J, Stevens B, Sidani S, Watt-Watson J. Test of a Process Evaluation Checklist to improve neonatal pain practices. West J Nurs Res. 2015;37:581–98. doi:10.1177/0193945914524493.

    Article  PubMed  Google Scholar 

  43. Yates BC, Schumacher KL, Norman JE, Krogstrand KS, Meza J, Shurmur S. Intervention fidelity in a translational study: lessons learned. Res Theory Nurs Pract. 2013;27:131–48.

    Article  PubMed  Google Scholar 

  44. Asenlof P, Denison E, Lindberg P. Individually tailored treatment targeting activity, motor behavior, and cognition reduces pain-related disability: a randomised controlled trial in patients with musculoskeletal pain. J Pain. 2005;6:588–603. http://0-dx-doi-org.brum.beds.ac.uk/10.1016/j.jpain.2005.03.008.

    Article  PubMed  Google Scholar 

  45. Zauszniewski JA, Musil CM, Burant CJ, Standing TS, Au TY. Resourcefulness training for grandmothers raising grandchildren: establishing fidelity. West J Nurs Res. 2014;36:228–44. doi:10.1177/0193945913500725.

    Article  PubMed  Google Scholar 

  46. Broekhuizen K, Jelsma JG, Van Poppel MN, Koppes LL, Brug J, Van MW. Is the process of delivery of an individually tailored lifestyle intervention associated with improvements in LDL cholesterol and multiple lifestyle behaviours in people with familial hypercholesterolemia? BMC Public Health. 2012;12:348. doi:10.1186/1471-2458-12-348.

    Article  PubMed  PubMed Central  Google Scholar 

  47. Bruckenthal P, Broderick JE. Assessing treatment fidelity in pilot studies assist in designing clinical trials: an illustration from a nurse practitioner community-based intervention for pain. ANS Adv Nurs Sci. 2007;30:E72–84.

    Article  PubMed  Google Scholar 

  48. Carpenter JS, Burns DS, Wu J, Yu M, Ryker K, Tallman E, et al. Strategies used and data obtained during treatment fidelity monitoring. Nurs Res. 2013;62:59–65.

    Article  PubMed  PubMed Central  Google Scholar 

  49. Cosgrove D, Macmahon J, Bourbeau J, Bradley JM, O’Neill B. Facilitating education in pulmonary rehabilitation using the living well with COPD programme for pulmonary rehabilitation: a process evaluation. BMC Pulm Med. 2013;13:50. doi:10.1186/1471-2466-13-50.

    Article  PubMed  PubMed Central  Google Scholar 

  50. Dyas JV, Togher F, Siriwardena AN. Intervention fidelity in primary care complex intervention trials: qualitative study using telephone interviews of patients and practitioners. Qual Prim Care. 2014;22:25–34.

    PubMed  Google Scholar 

  51. Eaton LH, Doorenbos AZ, Schmitz KL, Carpenter KM, McGregor BA. Establishing treatment fidelity in a web-based behavioral intervention study. Nurs Res. 2011;60:430–5. doi:10.1097/NNR.0b013e31823386aa.

    Article  PubMed  PubMed Central  Google Scholar 

  52. Jonkers C, Lamers F, Bosma H, Metsemakers J, Kempen G, Van EJ. Process evaluation of a minimal psychological intervention to reduce depression in chronically ill elderly persons. Patient Educ Couns. 2007;68:252–7.

    Article  PubMed  Google Scholar 

  53. McCreary LL, Kaponda CP, Kafulafula UK, Ngalande RC, Kumbani LC, Jere DL, et al. Process evaluation of HIV prevention peer groups in Malawi: a look inside the black box. Health Educ Res. 2010;25:965–78. doi:10.1093/her/cyq049.

    Article  PubMed  PubMed Central  Google Scholar 

  54. Michie S, Hardeman W, Fanshawe T, Prevost AT, Taylor L, Kinmonth AL. Investigating theoretical explanations for behaviour change: the case study of ProActive. Psychol Health. 2008;23:25–39. doi:10.1080/08870440701670588.

    Article  PubMed  Google Scholar 

  55. Nakkash R, Kobeissi L, Ghantous Z, Saad MA, Khoury B, Yassin N. Process evaluation of a psychosocial intervention addressing women in a disadvantaged setting. Glob J Health Sci. 2012;4:22–32. doi:10.5539/gjhs.v4n1p22.

    PubMed Central  Google Scholar 

  56. Resnick B, Michael K, Shaughnessy M, Nahm ES, Sorkin JD, Macko R. Exercise intervention research in stroke: optimising outcomes through treatment fidelity. Top Stroke Rehabil. 2011;18 Suppl 1:611–9. doi:10.1310/tsr18s01-611.

    Article  PubMed  PubMed Central  Google Scholar 

  57. Robb SL, Burns DS, Docherty SL, Haase JE. Ensuring treatment fidelity in a multi-site behavioral intervention study: implementing NIH Behavior Change Consortium recommendations in the SMART trial. Psychooncology. 2011;20:1193–201. doi:10.1002/pon.1845.

    Article  PubMed  PubMed Central  Google Scholar 

  58. Smith SM, Paul G, Kelly A, Whitford DL, O’Shea E, O’Dowd T. Peer support for patients with type 2 diabetes: cluster randomised controlled trial. BMJ. 2011;342:d715.

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  59. Bjelland M, Bergh IH, Grydeland M, Klepp KI, Andersen LF, Anderssen SA, et al. Changes in adolescents’ intake of sugar-sweetened beverages and sedentary behaviour: results at 8 month mid-way assessment of the HEIA study--a comprehensive, multi-component school-based randomised trial. Int J Behav Nutr Phys Act. 2011;8:63. doi:10.1186/1479-5868-8-63.

    Article  PubMed  PubMed Central  Google Scholar 

  60. Boschman JS, van der Molen HF, Sluiter JK, Frings-Dresen MH. Improving occupational health care for construction workers: a process evaluation. BMC Public Health. 2013;13:218. doi:10.1186/1471-2458-13-218.

    Article  PubMed  PubMed Central  Google Scholar 

  61. Delaney C, Fortinsky R, Mills D, Doonan L, Grimes R, Rosenberg S, et al. Pilot Study of a Statewide Initiative to Enhance Depression Care Among Older Home Care Patients. Home Health Care Management Practice. 2012;25:45–53.

    Article  Google Scholar 

  62. Fagan AA, Hanson K, Hawkins JD, Arthur M. Translational Research in Action: Implementation of the Communities That Care Prevention System in 12 Communities. J Community Psychol. 2009;37:809–29.

    Article  PubMed  PubMed Central  Google Scholar 

  63. Shaw RJ, Bosworth HB, Hess JC, Silva SG, Lipkus IM, Davis LL, et al. Development of a Theoretically Driven mHealth Text Messaging Application for Sustaining Recent Weight Loss. JMIR Mhealth Uhealth. 2013;1, e5. doi:10.2196/mhealth.2343.

    Article  PubMed  PubMed Central  Google Scholar 

  64. Waxmonsky J, Kilbourne AM, Goodrich DE, Nord KM, Lai Z, Laird C, et al. Enhanced fidelity to treatment for bipolar disorder: results from a randomised controlled implementation trial. Psychiatr Serv. 2014;65:81–90. doi:10.1176/appi.ps.201300039.

    Article  PubMed  PubMed Central  Google Scholar 

  65. Battaglia C, Benson SL, Cook PF, Prochazka A. Building a tobacco cessation telehealth care management program for veterans with posttraumatic stress disorder. J Am Psychiatr Nurses Assoc. 2013;19:78–91. doi:10.1177/1078390313483314.

    Article  PubMed  Google Scholar 

  66. Blaakman S, Tremblay PJ, Halterman JS, Fagnano M, Borrelli B. Implementation of a community-based secondhand smoke reduction intervention for caregivers of urban children with asthma: process evaluation, successes and challenges. Health Educ Res. 2013;28:141–52. doi:10.1093/her/cys070.

    Article  PubMed  Google Scholar 

  67. Minnick A, Catrambone CD, Halstead L, Rothschild S, Lapidos S. A nurse coach quality improvement intervention: feasibility and treatment fidelity. West J Nurs Res. 2008;30:690–703. doi:10.1177/0193945907311321.

    Article  PubMed  Google Scholar 

  68. Arends I, Bultmann U, Nielsen K, Van RW, De Boer MR, Van der Klink JJ. Process evaluation of a problem solving intervention to prevent recurrent sickness absence in workers with common mental disorders. Soc Sci Med. 2014;100:123–32.

    Article  PubMed  Google Scholar 

  69. Devine CM, Maley M, Farrell TJ, Warren B, Sadigov S, Carroll J. Process evaluation of an environmental walking and healthy eating pilot in small rural worksites. Eval Program Plann. 2012;35:88–96. doi:10.1016/j.evalprogplan.

    Article  PubMed  Google Scholar 

  70. Gitlin LN, Jacobs M, Earland TV. Translation of a dementia caregiver intervention for delivery in homecare as a reimbursable Medicare service: outcomes and lessons learned. Gerontologist. 2010;50:847–54. doi:10.1093/geront/gnq057.

    Article  PubMed  PubMed Central  Google Scholar 

  71. Lee-Kwan SH, Goedkoop S, Yong R, Batorsky B, Hoffman V, Jeffries J, et al. Development and implementation of the Baltimore healthy carry-outs feasibility trial: process evaluation results. BMC Public Health. 2013;13:638. doi:10.1186/1471-2458-13-638.

    Article  PubMed  PubMed Central  Google Scholar 

  72. Pbert L, Fletcher KE, Flint AJ, Young MH, Druker S, DiFranza J. Smoking prevention and cessation intervention delivery by pediatric providers, as assessed with patient exit interviews. Pediatrics. 2006;118:e810–24.

    Article  PubMed  Google Scholar 

  73. Robbins LB, Pfeiffer KA, Maier KS, Ladrig SM, Berg-Smith SM. Treatment fidelity of motivational interviewing delivered by a school nurse to increase girls’ physical activity. J Sch Nurs. 2012;28:70–8. doi:10.1177/1059840511424507.

    Article  PubMed  Google Scholar 

  74. Coffeng JK, Hendriksen IJ, Van MW, Boot CR. Process evaluation of a worksite social and physical environmental intervention. J Occup Environ Med. 2013;55:1409–20.

    Article  PubMed  Google Scholar 

  75. Culloty T, Milne DL, Sheikh AI. Evaluating the training of clinical supervisors: a pilot study using the fidelity framework. Cogn Behav Ther. 2010;3:132–44.

    Google Scholar 

  76. Lisha NE, Sun P, Rohrbach LA, Spruijt-Metz D, Unger JB, Sussman S. An evaluation of immediate outcomes and fidelity of a drug abuse prevention program in continuation high schools: project towards no drug abuse (TND). J Drug Educ. 2012;42:33–57.

    Article  PubMed  Google Scholar 

  77. Millear PM, Liossis P, Shochet IM, Biggs HC, Donald M. Being on PAR: outcomes of a pilot trial to improve mental health and wellbeing in the workplace with the Promoting Adult Resilience (PAR) Program. Behav Chang. 2008;25:215–28.

    Article  Google Scholar 

  78. Teri L, McKenzie GL, Pike KC, Farran CJ, Beck C, Paun O, et al. Staff training in assisted living: evaluating treatment fidelity. Am J Geriatr Psychiatry. 2010;18:502–9. doi:10.1097/JGP.0b013e3181c37b0e.

    Article  PubMed  PubMed Central  Google Scholar 

  79. Glasgow RE, McKay HG, Piette JD, Reynolds KD. The RE-AIM framework for evaluating interventions: what can it tell us about approaches to chronic illness management? Patient Educ Couns. 2001;44:119–27.

    Article  CAS  PubMed  Google Scholar 

  80. Dane AV, Schneider BH. Program integrity in primary and early secondary prevention: are implementation effects out of control? Clin Psychol Rev. 1998;18:23–45.

    Article  CAS  PubMed  Google Scholar 

  81. Dusenbury L, Brannigan R, Falco M, Hansen WB. A review of research on fidelity of implementation: implications for drug abuse prevention in school settings. Health Educ Res. 2003;18:237–56.

    Article  PubMed  Google Scholar 

  82. Dusenbury L, Brannigan R, Hansen WB, Walsh J, Falco M. Quality of implementation: developing measures crucial to understanding the diffusion of preventive interventions. Health Educ Res. 2005;20:308–13.

    Article  PubMed  Google Scholar 

  83. Baranowski T, Stables G. Process evaluations of the 5-a-day projects. Health Educ Behav. 2000;27:157–66.

    Article  CAS  PubMed  Google Scholar 

  84. Naven LM, Macpherson LMD. Process evaluation of a Scottish pre-fives toothpaste distribution programme. Int J Health Promot Educ. 2006;44:71–7.

    Article  Google Scholar 

  85. Reiser RP, Milne DL. A Systematic Review and Reformulation of Outcome Evaluation inClinical Supervision: Applying the Fidelity Framework. Training Educ Prof Psychol. 2014;8:149–57.

    Article  Google Scholar 

  86. Prowse P-TD, Nagel T, Meadows GN, Enticott JC. Treatment Fidelity over the Last Decade in Psychosocial Clinical Trials Outcome Studies: A Systematic Review. J Psychiatry. 2015;18.

  87. Hoffmann TC, Glasziou PP, Boutron I, Milne R, Perera R, Moher D, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014;348:g1687.

    Article  PubMed  Google Scholar 

  88. Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. BMJ. 2010;340:c332. http://0-dx-doi-org.brum.beds.ac.uk/10.1136/bmj.c332.

    Article  PubMed  PubMed Central  Google Scholar 

  89. Boutron I, Moher D, Altman DG, Schulz KF, Ravaud P. Extending the CONSORT statement to randomised trials of nonpharmacologic treatment: explanation and elaboration. Ann Intern Med. 2008;148:295–309. doi:10.7326/0003-4819-148-4-200802190-00008.

    Article  PubMed  Google Scholar 

  90. Maynard BR, Peters KE, Vaughn MG, Sarteschi CM. Fidelity in After-School Program Intervention Research: A Systematic Review. Res Soc Work Pract. 2013;23:613–23. doi:10.1177/1049731513491150.

    Article  Google Scholar 

  91. Leon AC, Davis LL, Kraemer HC. The role and interpretation of pilot studies in clinical research. J Psychiatr Res. 2011;45:626–9. doi:10.1016/j.jpsychires.2010.10.008.

    Article  PubMed  Google Scholar 

  92. Devito Dabbs A, Song MK, Hawkins R, Aubrecht J, Kovach K, Terhorst L, et al. An intervention fidelity framework for technology-based behavioral interventions. Nurs Res. 2011;60:340–7. doi:10.1097/NNR.0b013e31822cc87d.

    Article  PubMed  Google Scholar 

  93. Gearing RE, El-Bassel N, Ghesquiere A, Baldwin S, Gillies J, Ngeow E. Major ingredients of fidelity: a review and scientific guide to improving quality of intervention research implementation. Clin Psychol Rev. 2011;31:79–88. doi:10.1016/j.cpr.2010.09.007.

    Article  PubMed  Google Scholar 

  94. Prowse P-T, Nagel T. A Meta-Evaluation: The Role of Treatment Fidelity within Psychosocial Interventions during the Last Decade. J Psychiatry. 2015;18:251.

    Article  Google Scholar 

  95. Field B, Booth A, Ilott I, Gerrish K. Using the Knowledge to Action Framework in practice: a citation analysis and systematic review. Implement Sci. 2014;9:172.

    Article  PubMed  PubMed Central  Google Scholar 

  96. Kuper H, Nicholson A, Hemingway H. Searching for observational studies: what does citation tracking add to PubMed? A case study in depression and coronary heart disease. BMC Med Res Methodol. 2006;6:4.

    Article  PubMed  PubMed Central  Google Scholar 

  97. Wright K, Golder S, Rodriguez-Lopez R. Citation searching: a systematic review case study of multiple risk behaviour interventions. BMC Med Res Methodol. 2014;14:73.

    Article  PubMed  PubMed Central  Google Scholar 

  98. Di Rezze B, Law M, Gorter JW, Eva K, Pollock N. A narrative review of generic intervention fidelity measures. Phys Occup Ther Pediatr. 2012;32:430–46. doi:10.3109/01942638.2012.713454.

    Article  PubMed  Google Scholar 

  99. Madson MB, Campbell TC. Measures of fidelity in motivational enhancement: a systematic review. J Subst Abuse Treat. 2006;31:67–73. http://0-dx-doi-org.brum.beds.ac.uk/10.1016/j.jsat.2006.03.010.

    Article  PubMed  Google Scholar 

  100. Mars T, Ellard D, Carnes D, Homer K, Underwood M, Taylor SJ. Fidelity in complex behaviour change interventions: a standardised approach to evaluate intervention integrity. BMJ Open. 2013;3, e003555. doi:10.1136/bmjopen-2013-003555.

    Article  PubMed  PubMed Central  Google Scholar 

  101. Glasziou P, Meats E, Heneghan C, Shepperd S. What is missing from descriptions of treatment in trials and reviews? BMJ. 2008;336:1472–4. http://0-dx-doi-org.brum.beds.ac.uk/10.1136/bmj.39590.732037.47.

    Article  PubMed  PubMed Central  Google Scholar 

  102. Lorencatto F, West R, Stavri Z, Michie S. How well is intervention content described in published reports of smoking cessation interventions? Nicotine Tob Res. 2013;15:1273–82. doi:10.1093/ntr/nts266.

    Article  PubMed  Google Scholar 

Download references

Acknowledgements

The authors would like to thank the City, University of London Research Sustainability Fund for funding this project.

Availability of data and materials

All data are contained within this manuscript and the references cited.

Authors’ contributions

JF, AD and NM conceived the project and acquired funding. LR worked as a part-time researcher on the project and conducted the review, including executing the searches and managing the citations. AD, NM and FL assisted with the reliability of the screening. LR, JB, AD and NM were involved in double data extraction, and all authors (JF, LR, JB, FL, AD and NM) contributed to the intellectual content of the manuscript. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Consent for publication

Not applicable.

Ethics approval and consent to participate

Not applicable.

Author information

Corresponding author

Correspondence to Lorna Rixon.

Additional files

Additional file 1:

Details of study characteristics. (DOCX 79.9 kb)

Additional file 2:

Details on methods used to assess and enhance receipt. (DOCX 91.1 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Rixon, L., Baron, J., McGale, N. et al. Methods used to address fidelity of receipt in health intervention research: a citation analysis and systematic review. BMC Health Serv Res 16, 663 (2016). https://doi.org/10.1186/s12913-016-1904-6

  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1186/s12913-016-1904-6

Keywords