Translating academic research into guidance to support healthcare improvement: how should guidance development be reported?

Abstract

Background

There is interest internationally in improving the uptake of research evidence to inform health care quality and safety. This article focusses on guidance development from research studies as one method for improving research uptake. While we recognise that implementation strategies on the 'demand' side for encouraging the uptake of research are important, e.g. knowledge brokers and university-practice collaborations, this article concentrates on the 'production' side: how guidance development is reported and the consequent influence this may have on end-users' receptivity to evidence, alongside those demand-side processes.

Main text

The article considers the following question: how is guidance developed and what are the implications for reporting? We address this question by reviewing examples of guidance development reporting from applied health research studies, then describing how we produced guidance for a national study of evidence use in decision-making on adopting innovations. The starting point for reflecting on our experiences is a vignette of the guidance 'launch' event at a national conference.

Conclusions

Implications for reporting guidance development and supporting improvement are discussed. These include the need to (a) produce reporting standards for the production of guidance to match the reporting standards available for other research methods, (b) acknowledge the 'informal' or emergent aspects of producing guidance, (c) consider guidance development from projects as part of a wider knowledge mobilisation strategy, and (d) encourage a receptive environment for guidance development and use, including researcher training, durable funding to support impact, and closer relations between research and practice.

Background

There is interest internationally in improving the uptake of research evidence to inform health care quality and safety [1,2,3]. This article examines guidance development from research studies as one method for improving research uptake. We consider this research question: how is guidance developed and what are the implications for reporting? To address this question, we focus on how the production of guidance from applied health research is reported in a small sample of studies and then reflect on our research team’s experiences of developing guidance. The brief review of examples of guidance development reporting from applied health research studies suggests that transparency concerning how guidance was produced could be improved. Moreover, the review suggests to us that there is an informal, somewhat 'mysterious' aspect to guidance development, which we explore by reflecting on how we produced guidance for a national study of evidence use in decision-making on adopting innovations. Implications for reporting guidance development and supporting improvement are then discussed. The discussion emerges from our reflections on our experiences of guidance development; it represents a partial view and is designed to stimulate debate. There will be other angles, points of view and experiences of guidance development and reporting, and we hope this article encourages further debate on them.

Decision-makers are increasingly demanding evidence from research that synthesises implications concerning the effectiveness of interventions or change programmes, along with actionable findings that can be tailored to their own context [4], including implementation considerations [5]. We recognise that knowledge mobilisation involves a ‘system’ of diverse structures and actors [6], necessitating activity at this level to address the research-to-practice gap, and that system-level implementation strategies, e.g. knowledge brokers, university-practice collaborations, and research commissioned to address policy questions [7,8,9], are key to this endeavour. However, the reality is that emphasis continues to be placed upon the 'production' aspect of knowledge translation, including the development of guidance. This piece therefore focusses on how guidance development is reported and the consequent influence this may have on end-users' receptivity to evidence, alongside other demand-side processes.

Building on recent debate in the field [10], we define guidance as systematically developed statements to aid decision-making on health system challenges. We treat the term ‘systematically developed’ as an empirical question concerning how producers actually go about developing guidance, rather than equating it a priori with a fixed set of steps to follow (e.g. as found in institutional guidance such as the World Health Organization’s (WHO) approach to evaluating interventions based on systematic and qualitative reviews) [11]. Institutional producers of guidance, such as the WHO and the UK’s National Institute for Health and Care Excellence (NICE), have formalised approaches for developing and reporting guidance. However, no consensus exists on how the development of guidance derived from individually funded studies in health services research should be reported.

The relative informality with which guidance is produced in this context may help to account for the 'image' problem that guidance for improvement is sometimes ascribed [12]. Formalised approaches to developing guidance normally involve evaluating interventions based on systematic reviews. Less is known about the relevance of formalised approaches for developing guidance concerning (1) other phenomena identified through health services research that are not reducible to ‘interventions’, (2) the translation of evidence into recommendations that can inform decision-making, and (3) the sharing of recommendations in a way that is likely to maximise their impact on practice. With regard to the 'production' side of improving research uptake, key areas of consideration include the message, the target audience, the messenger, knowledge transfer processes, and evaluation to inform future knowledge mobilisation strategy [13].

Our contention is that divergent approaches to developing guidance are emerging: they are (a) based on findings from individual studies in health services research and are not therefore wedded to institutional approaches to guidance development, (b) increasingly required to respond to specific audiences’ needs [4, 5], which producers may meet by tailoring their approach to guidance development, and (c) not necessarily faithful to ‘systematic’ steps for producing guidance, even where such steps have been codified. In this debate article, we reflect on the variety of approaches being used to develop guidance, including the ‘informal’ or bespoke aspects, and consider any implications for how ‘systematic’ is defined and aspired to in relation to guidance development.

Main text

Guidance development reporting

We reviewed the reporting of guidance development methods in a small sample of six studies supported by major funding bodies (Table 1). These were identified by searching research funders’ websites in North America and Europe (the National Institute for Health Research in England, the European Commission, the Canadian Institutes of Health Research, and the US National Institutes of Health) for key terms including ‘guidelines’, ‘guidance’, and ‘toolkits’, reviewing a selection of the resulting reports, or associated journal papers, for examples of guidance for improvement derived from health services research studies, and drawing on other examples of guidance for improvement of which the research team were aware. The aim was not to perform a comprehensive review, but to situate our own experiences of developing guidance in relation to other studies. Our interest was in guidance development processes and reporting from individually funded studies, rather than guidance that follows institutionally prescribed approaches such as the WHO’s.

Table 1 Studies reporting guidance development reviewed
Table 2 Summary of the DECIDE guide

In the US, a research dissemination planning tool was developed by reviewing existing literature and tools, then organising expert review of the draft tool and end-user testing [14]. In Canada, a guide for assessing knowledge translation plans was developed by drafting guidance, based on existing literature and the team’s expertise, then conducting ‘cognitive interviews’ to assess end-users’ responses [15]. Revisions to the guidance were based on a ‘consensus method’ within the team and reference to a project advisory committee. In the UK, standards for reporting evidence syntheses were informed by literature searches, the team’s expertise, Delphi panels, an email discussion list, and workshops [16]. Another research team undertook patient/carer interviews, evidence synthesis including learning from the team’s previous studies, and interviews/focus groups to refine an intervention [17]. Guidance for assessing action research proposals was developed by synthesising study findings, including a literature review, and combining these with the authors’ views as action researchers [18]. An EU-funded study of quality improvement strategies in five countries used stakeholder workshops to inform a reflective guide for hospital leaders [19].

We now briefly summarise how guidance development is reported in these studies. The examples suggest to us the importance of the ‘informal’ aspects of developing guidance. First, ‘co-production’ of guidance appears sometimes to be used to ‘confirm’, rather than develop or change, authors’ established ideas for guidance. One study reported that, while the team obtained user feedback on their knowledge translation guide through interviews, it was developed initially by one researcher, then ‘revised and developed based on team review and discussion’ [15]. Second, authors’ experiences are afforded similar status to external evidence. This includes citing learning from their previous projects [17] and using their ‘own content expertise of the topic area’ [16]. This suggests to us a need for guidance producers to draw on a wider range of knowledge than their own research. Third, there can be something mysterious or opaque about how guidance is informed by evidence; one report refers to data from different sources being ‘channelled and collated contemporaneously’ to develop quality standards [16]. Fourth, guidance is presented in many forms, from lists of questions or tables of quality standards at the end of a report [16], to ‘draft’ guidance that ‘require[s] field testing’ [18], and practical resources or toolkits used in health service interventions [17].

We now reflect on our experiences of producing guidance for a national study (Table 2), focusing on the ‘informal’ processes in our own example to unpick the mysterious aspects of developing guidance apparent in other studies’ reporting. We begin with a vignette of the guidance ‘launch’ at a conference workshop.

Vignette: guidance launch evokes cynicism

A month or so after submitting our final report to the research funder, we presented the DECIDE guide at a national conference workshop on translating academic findings into practical guidance. In one of the presentations, the audience were asked to consider what stakeholders’ most common view of toolkits might be from a range of options (warmth, cynicism, ambiguity, fad). The majority of the audience chose ‘cynicism’, reflecting the views identified in the research findings presented [12]. Some of the feedback we received from the table discussions reflected this cynicism about the role of guidance in health care improvement. There was the challenge of reaching practitioners, as they do not necessarily read email. Then came the challenge of how to get people “on board”, and of how to get people to act on the guidance rather than merely read it. And, even if local interest in the guidance could be secured, there was the challenge of how to spread the guidance beyond the immediate context. A further problem raised was identifying who was responsible for implementing and disseminating guidance. Whose role was it? Scholars should not lead implementation (they did not necessarily have the skills or the inclination). There was a need to create roles with responsibility for implementing guidance. Who should be paying for it, research funders? It sounds expensive, too.

[Reflections on conference workshop, July 2018]

We suggest that the way guidance development processes are reported helps to account for such cynicism amongst some of the researchers and practitioners present. As some of the examples we reviewed showed, this includes sketchy reporting, reliance on personal experience, and variation in how guidance is presented.

DECIDE guidance development methods

Our broad approach to producing the guidance was planned in advance and published in a study protocol [20]; this included recognition of suggested strategies for improving the use of evidence by decision-makers [13]. In practice, many of the steps involved in developing the guidance emerged during the course of the research project as we reflected on our findings and considered how best to present them to inform real-world practices of decision-making (including using stakeholders’ views to support this endeavour). The emergent guidance development methods (Table 2) led us to include concise, visual, practical examples; less ‘academic’ text; questions posed from decision-makers’ perspectives; and more prominent questions for decision-making, addressed by including a checklist for practitioners.

Given the emergent aspect of developing guidance that we, and the examples reviewed above, highlight, we now examine in more detail these ‘informal’ practices, which may not be captured by reporting standards. When reflecting on our own efforts to produce guidance, the insight from the sociology of science literature that “scientists and observers are routinely confronted by a seething mass of alternative interpretations” [21] resonated strongly with our experience. These alternatives then need to be resolved somehow, often ‘informally’ or in ways that are not pregiven by plans. While we are calling for the methods through which guidance is developed to be made more explicit by producers, we would caution against rationalising these ‘informal’ or emergent processes into a logic of guidance production, presented to research funders and practitioners, that we might appear to have acted in accordance with but did not follow in practice [22]. For example, it is sometimes reported in relation to qualitative thematic analysis that differences in opinion among researchers were ‘resolved through debate’ [23], but this glosses over the quality of social interaction, including the role of power dynamics, the novelty achieved through dialogue, and hesitancy about how to ‘go on’. The urge to ‘cover up the traces’ of, rather than acknowledge, the messy process by which knowledge is produced can be partly linked to the privileging of rationalism in Euro-American epistemology [24].

Such ‘informal’ or emergent processes played an important role in the development of guidance from our study, as they informed: decisions about which stakeholder comments on the guidance were within scope; the balance of space given to our findings, case study examples, and questions for decision-makers; language style and tone; and the arrangement of the guidance around the metaphor of the ‘long and winding road’ of decision-making. We experienced hesitancy, however, in making such decisions. This hesitancy might reflect a lack of consensus about how to produce guidance. It could also be linked to the lack of a typical style or format for producing guidance, in the way that journals or research funders have a ‘house’ style that helps orient the ‘epistemic tinkering’ [25] needed to situate new insights in relation to current knowledge. That said, a lack of guidance on reporting may liberate producers to consider novel formats and language to communicate content in creative ways. We suggest that it is important to be explicit about the methods used for producing guidance; reporting standards would improve transparency concerning how guidance was produced, similar to the reporting items used for other research methods [26,27,28]. This is not to argue for homogeneity in the development of guidance, which differs from a systematic review and can take different forms depending on the context of improvement being addressed, but for transparency concerning what was involved in its production.

From the review of guidance development in this paper and our own experience, we encourage further debate about whether transparency in guidance development reporting could be improved by routinely including: (a) a statement of the evidence on which guidance is based, distinguishing between use of the authors’ research and others’ findings, (b) the approach used to gather stakeholder or end-user feedback on guidance need, format and content, (c) how external feedback was translated into changes to the guidance (e.g. through consensus development), (d) any constraints that precluded use of feedback (e.g. out of scope) and how these were determined, and (e) a statement of where the guidance can be accessed by end-users as a standalone product.

It is also important to acknowledge the interactive, often informal, practices through which knowledge is developed, which may not be captured in rationalised accounts. This fits with a ‘complexity perspective’ on guidance development, which acknowledges the multiple processes influencing the behaviour of health care interventions and contexts, and the need for guidance to reflect these [29]. The interpretative work in developing guidance appears analogous to ‘abduction’ [30] in qualitative research, whereby data that inform the product’s development (e.g. end-user feedback) are interpreted with ‘theoretical sensitivity’, that is, using knowledge and experience gained through the research study to inform how the feedback is addressed (e.g. our reading of innovation processes differed from that of some of the participants who gave feedback). In our study, the sources of ‘sensitivity’ were broader than experience associated with conducting the research because they extended to the external design agency’s knowledge, which provided a steer on ‘what works’ visually and functionally, as well as on technical constraints. In future guidance development, we would suggest widening the domain of ‘sensitivity’ to incorporate a range of expertise. For example, to overcome differences in the interpretation of innovation, we would run more interactive feedback sessions (referred to in Table 2) in which both researchers and end-users can share how and why they interpret key ideas discussed in the guidance as they do.

Conclusions

Cynicism about guidance might be expected given the complexity of the health care environments that it seeks to improve. Written reports of research findings, as well as journal papers, are often received with cynicism concerning their relationship to improving practice. Nor are these academic outputs necessarily cheaper or more efficient ways of publishing findings. The article processing charge for publishing an open access article can be up to £3490 [31] and, in our experience, publishing findings can consume considerable academic time and resources, potentially reducing their timeliness. This is due partly to the need to write in accordance with journals’ or research funders’ conventions (especially making the case for a contribution to knowledge, which differs according to the audience being written for) and to navigate often lengthy peer review processes, with no guarantee of success. By contrast, we were able to develop and produce guidance summarising the study findings and their implications for practice within nine months, with the guidance freely available to download from a university webpage six weeks later. We acknowledge that guidance for improvement has an ‘image’ problem, and we call on guidance producers to be transparent about the formal and informal processes by which guidance is made (e.g. within a brief structured statement of the evidence on which guidance is based). However, we suggest that non-traditional outputs have an important role in knowledge mobilisation strategies, given the challenges associated with achieving impact through traditional forms of reporting findings (e.g. journal papers, funder reports). As part of the strategy of identifying the target audience(s) for research findings [13], we suggest producers of research consider how the medium can be tailored to each audience. For example, open access journal articles may be more appropriate for academically oriented audiences, while other forms, including lay summaries and carefully crafted questions that help decision-makers relate research to their own context, may be needed for other audiences.

To support the effective mobilisation of guidance from research, a number of issues for policy and practice need to be addressed. Firstly, reporting standards for producers of guidance need to be developed that are appropriate for this form of research output. Secondly, the particular skills required by researchers (or others with this role) to develop and mobilise guidance from research need to be identified and matched to training opportunities. Thirdly, research bids that include guidance development need to acknowledge the time needed not only to disseminate guidance, but also to have an impact on practice. This longer time horizon would align with the UK’s audit of research quality, the ‘Research Excellence Framework’, which aims to capture impact from research over a 20-year period (2000–2020). Fourthly, opportunities for closer relations between research and practice are being fostered through sustained funding of university-healthcare collaborations [32], improvement fellowships, embedded research [33], and rapid service evaluation centres. We suggest the importance of acknowledging both the formal and informal processes involved in developing guidance for improvement (e.g. being explicit about the methods through which guidance is produced, and developing relationships that enable co-design of guidance with stakeholders so as to wear decision-makers’ shoes). In accordance with a ‘systems’ approach to addressing the research-to-practice gap [10], improving collaborative leadership skills and access to durable funding to support such relationships matter as much as the medium through which practice implications from research are shared.

Availability of data and materials

The dataset supporting the conclusions of this article is included within the article.

Abbreviations

DECIDE:

DEcisions in health Care to Introduce or Diffuse innovations using Evidence

References

  1. Morris ZS, Wooding S, Grant J. The answer is 17 years, what is the question: understanding time lags in translational research. J R Soc Med. 2011;104(12):510–20.

  2. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4(1):50.

  3. Orem JN, Mafigiri DK, Marchal B, Ssengooba F, Macq J, Criel B. Research, evidence and policymaking: the perspectives of policy actors on improving uptake of evidence in health policy development and implementation in Uganda. BMC Public Health. 2012;12(1):109.

  4. Langlois EV, Daniels K, Akl EA. Evidence synthesis for health policy and systems: a methods guide. Geneva: World Health Organization; 2018. Licence: CC BY-NC-SA 3.0 IGO

  5. Turner S, Morris S, Sheringham, J, Swart N, D’Lima D, Richey R, Hudson E, Ahmed M, Fulop NJ. DEcisions in health care to introduce or diffuse innovations using evidence (DECIDE). Final report for The Health Foundation. April 2019. Available at: https://www.alliancembs.manchester.ac.uk/research/decide/assets/decide-final-report-april-2019ecd6.pdf?ver=2019-04-25-114046-457 (accessed 31 Oct 2019).

  6. Best A, Holmes B. Systems thinking, knowledge and action: towards better models and methods. Evid Policy. 2010;6(2):145–59.

  7. Russell DJ, Rivard LM, Walter SD, Rosenbaum PL, Roxborough L, Cameron D, Darrah J, Bartlett DJ, Hanna SE, Avery LM. Using knowledge brokers to facilitate the uptake of pediatric measurement tools into clinical practice: a before-after intervention study. Implement Sci. 2010;5(1):92.

  8. Oliver K, Innvar S, Lorenc T, Woodman J, Thomas J. A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC Health Serv Res. 2014;14(1):2.

  9. Walshe K, Davies HTO. Research, influence and impact: deconstructing the norms of health services research commissioning. Policy Soc. 2010;29(2):103–11. https://doi.org/10.1016/j.polsoc.2010.03.003.

  10. Bosch-Capblanch X, Lavis JN, Lewin S, Atun R, Røttingen J-A, Dröschel D, et al. Guidance for evidence-informed policies about health systems: rationale for and challenges of guidance development. PLoS Med. 2012;9(3):e1001185. https://doi.org/10.1371/journal.pmed.1001185.

  11. World Health Organization. WHO handbook for guideline development. 2nd ed. 2014. Available at: https://www.who.int/publications/guidelines/handbook_2nd_ed.pdf?ua=1 (accessed 31 Oct 2019).

  12. Sharp CA, Dixon WG, Boaden R, Sanders C. The means not the end: stakeholder views of toolkits developed from healthcare research. In: Nugus P, Denis JL, Chenevert D, Rodriguez C, editors. Transitions and Boundaries in the Coordination and Reform of Health Services. Organizational Behaviour in Healthcare. Palgrave Macmillan; 2020. p. 295–316.

  13. Lavis JN, Robertson D, Woodside JM, McLeod CB, Abelson J. How can research organizations more effectively transfer research knowledge to decision makers? Milbank Q. 2003;81:221–48.

  14. Carpenter D, Nieva V, Albaghal T, et al. Development of a planning tool to guide research dissemination. In: Henriksen K, Battles JB, Marks ES, et al., editors. Advances in Patient Safety: From Research to Implementation (Volume 4: Programs, Tools, and Products). Rockville: Agency for Healthcare Research and Quality (US); 2005. Available from: https://www.ncbi.nlm.nih.gov/books/NBK20603/.

  15. Goering P, Ross S, Jacobson N. Developing a guide to support the knowledge translation component of the grant application process. Evid Policy. 2010;6(1):91–102.

  16. Wong G, Greenhalgh T, Westhorp G, et al. Development of methodological guidance, publication standards and training materials for realist and meta-narrative reviews: the RAMESES (Realist And Meta-narrative Evidence Syntheses: Evolving Standards) project. Health Serv Deliv Res. 2014;2(30).

  17. Bennett M, Mulvey M, Campling N, et al. Self-management toolkit and delivery strategy for end-of-life pain: the mixed-methods feasibility study. Health Technol Assess. 2017;21(76).

  18. Waterman H, Tillen D, Dickson R, et al. Action research: a systematic review and guidance for assessment. Health Technol Assess. 2001;5(23).

  19. Anderson J, Robert GB, Nunes FG, Bal R, Burnett S, Karltun A, Sanne J, Aase K, Wiig S, Fulop NJ. Translating research on quality improvement in five European countries into a reflective guide for hospital leaders: the ‘QUASER Hospital Guide’. Int J Qual Health Care. 2019:mzz055. https://doi.org/10.1093/intqhc/mzz055.

  20. Turner S, Morris S, Sheringham J, Hudson E, Fulop NJ. Study protocol: DEcisions in health care to introduce or diffuse innovations using evidence (DECIDE). Implement Sci. 2016;11(1):48.

  21. Latour B, Woolgar S. Laboratory life: the social construction of scientific facts. Princeton: Princeton University Press; 1986.

  22. Wittgenstein L. Philosophical investigations. Malden: Blackwell Publishing; 1958.

  23. Turner S, Higginson J, Oborne CA, Thomas RE, Ramsay AI, Fulop NJ. Codifying knowledge to improve patient safety: a qualitative study of practice-based interventions. Soc Sci Med. 2014;113:169–76.

  24. Law J. After method: mess in social science research. London: Routledge; 2004.

  25. Preda A. Financial knowledge, documents, and the structures of financial services. J Contemp Ethnogr. 2002;31(2):207–39.

  26. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med. 2009;151(4):264–9.

  27. Ogrinc G, Davies L, Goodman D, et al. SQUIRE 2.0 (Standards for QUality Improvement Reporting Excellence): revised publication guidelines from a detailed consensus process. BMJ Qual Saf. 2016;25:986–92.

  28. Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57.

  29. Norris SL, Rehfuess EA, Smith H, et al. Complex health interventions in complex systems: improving the process and methods for evidence-informed health decisions. BMJ Glob Health. 2019;4:e000963. https://doi.org/10.1136/bmjgh-2018-000963.

  30. Timmermans S, Tavory I. Theory construction in qualitative research: from grounded theory to abductive analysis. Sociol Theory. 2012;30(3):167–86.

  31. Springer Nature. Fully open access journals. 2019. Available at: https://www.springernature.com/de/open-research/journals-books/journals (accessed 11 Apr 2019).

  32. Barratt H, et al. Health services research: building capacity to meet the needs of the health care system. J Health Serv Res Policy. 2017;22(4):243–9.

  33. Vindrola-Padros C, Pape T, Utley M, et al. The role of embedded research in quality improvement: a narrative review. BMJ Qual Saf. 2017;26:70–80.

  34. Turner S, D’Lima D, Hudson E, Morris S, Sheringham J, Swart N, Fulop NJ. Evidence use in decision-making on introducing innovations: a systematic scoping review with stakeholder feedback. Implement Sci. 2017;12(1):145.

Acknowledgements

We thank participants in the workshop from which this paper is derived, which took place at Health Services Research UK (HSRUK) 2018.

CAS is supported by the National Institute for Health Research Collaboration for Leadership in Applied Health Research and Care (NIHR CLAHRC) Greater Manchester.

NJF is an NIHR senior investigator. NJF and JS were in part supported by the NIHR Collaboration for Leadership in Applied Health Research and Care (CLAHRC) North Thames at Barts Health NHS Trust. The views expressed are those of the authors and not necessarily those of the NHS, the NIHR, or the Department of Health and Social Care.

Funding

This study was funded by the Health Foundation. The project is part of the Health Foundation’s Evidence-Informed Decision Making in Health Service Innovation and Improvement Programme. The Health Foundation is an independent charity committed to bringing about better health and health care for people in the UK.

Author information

Authors and Affiliations

Authors

Contributions

The authors conceived the paper together. All authors have read and approved the manuscript.

Corresponding author

Correspondence to Simon Turner.

Ethics declarations

Ethics approval and consent to participate

This study was considered by the Chair of the UCL Research Ethics Committee on 29 February 2016 and is exempt from the requirement to obtain ethical approval.

Consent for publication

Not applicable.

Competing interests

The corresponding author is an editorial board member of BMC Health Services Research.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Turner, S., Sharp, C.A., Sheringham, J. et al. Translating academic research into guidance to support healthcare improvement: how should guidance development be reported? BMC Health Serv Res 19, 1000 (2019). https://doi.org/10.1186/s12913-019-4792-8
