
Development of a benchmark tool for cancer centers; results from a pilot exercise

Abstract

Background

Differences in cancer survival exist between countries in Europe. Benchmarking against good practices can assist cancer centers in improving their services and reducing these inequalities. The aim of the BENCH-CAN project was to develop a cancer care benchmark tool, identify performance differences and yield good practice examples, thereby contributing to improving the quality of interdisciplinary care. This paper describes the development of this benchmark tool and its validation in cancer centers throughout Europe.

Methods

A benchmark tool was developed and executed according to a 13-step benchmarking process. Indicator selection was based on literature, existing accreditation systems, and expert opinion. The final format was tested in eight cancer centers. Site visits by a team of at least three persons, including a patient representative, were performed to verify the information, understand the context and address additional questions through semi-structured interviews. Based on the visits, the benchmark methodology identified opportunities for improvement.

Results

The final tool consisted of 61 qualitative and 141 quantitative indicators, which were structured in an evaluative framework. Data from all eight participating centers showed inter-organizational variability on many indicators, such as bed utilization and the provision of survivorship care. Improvement suggestions were subsequently made for the centers, 85% of which were agreed upon.

Conclusion

A benchmarking tool for cancer centers was successfully developed and tested and is available in an open format. The tool allows comparison of inter-organizational performance. Improvement opportunities were successfully identified for every center involved and the tool was positively evaluated.


Background

The number of cancer patients is steadily increasing and, despite rapid improvements in therapeutic options, inequalities in access to quality cancer care and thus in survival exist between countries [1]. These inequalities indicate room for improvement in the quality of cancer care. Identifying good practices can assist cancer centers (CCs) in improving their services and can ultimately reduce inequalities; benchmarking is an effective method for measuring and analyzing performance and its underlying organizational practices [2]. Developed in industry in the 1930s, benchmarking made its first appearance in healthcare in 1990 [2]. Benchmarking involves a comparison of performance in order to identify, introduce, and sustain good practices. This is achieved by collecting, measuring and evaluating data to establish a target performance level, a benchmark [3]. This performance standard can then be used to evaluate current performance by comparing it to that of other organizations, including good-practice facilities [3]. Due to globalization, the absence of national comparators, and the search for competitive alternatives, there is an increasing interest in international benchmarking [4]. However, a study by Longbottom [5] on 560 healthcare benchmarking projects showed that only 4% of the projects involved institutions from different countries. Relatively few papers have been published on healthcare benchmarking methods [6]. Moreover, to the best of our knowledge, there is no established indicator set for benchmarking comprehensive cancer care. In 2013, the Organisation of European Cancer Institutes (OECI) [7] therefore launched the BENCH-CAN project [8], aiming to reduce health inequalities in cancer care in Europe and to improve interdisciplinary comprehensive cancer care by yielding good practice examples. In view of this aim, a comprehensive international benchmarking tool was developed covering all relevant care-related and organizational fields. In this study, comprehensive refers to thorough and broad, including all relevant aspects, which also describes interdisciplinary, state-of-the-art, holistic cancer care. In line with the aim of the BENCH-CAN project, the objectives of this study were (i) to develop and pilot a benchmark tool for cancer care with both qualitative and quantitative indicators, (ii) to identify performance differences between cancer centers, and (iii) to identify improvement opportunities.

Method

Study design and sample

This multi-center benchmarking study involved eight cancer centers (CCs) in Europe, six of which were designated as comprehensive cancer centers (encompassing care, research and education) by the OECI [9]. A mix of geographic selection and convenience sampling was used to select the pilot sites. Centers were chosen, first, on national location, in order to have a good distribution across geographical regions in Europe, and, second, on willingness to participate. All centers had to be sufficiently organized and dedicated to oncology, and had to treat significant numbers of cancer patients. Centers were located in three geographical clusters: North/Western Europe (n = 2), Southern Europe (n = 3) and Central/Eastern Europe (n = 3). The benchmark tool was developed and executed according to the 13-step method by van Lent et al. [6] (see Table 1). In short, the first five steps involve identifying the problem, forming the benchmarking team, choosing benchmark partners, defining their main characteristics, and identifying the relevant stakeholders. Steps 6 to 12 are explained in more detail in the following paragraphs. Ethical approval was not applicable to this study.

Table 1 Benchmarking steps developed by van Lent and application in this study

Framework and indicators

As described in step 6, we developed a framework to structure the indicators. The European Foundation for Quality Management (EFQM) Excellence Model [10] (comparable to the Baldrige model [11]) was used for performance assessment and identification of key strengths and improvement areas [12]. Alongside the enabler fields, we adapted the Institute of Medicine (IOM) domains of quality [13] for the outcomes or results: effective, efficient, safe, patient-centered, integrated and timely (Fig. 1).

Fig. 1 The BENCH-CAN framework. Note: the enabler domains from the EFQM model describe factors that enable good quality care; the results domains, adapted from the IOM domains of quality, describe how good quality care can be measured
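To make the structure of the framework concrete, the sketch below groups indicators under enabler and result domains in Python. The domain names follow the framework above; the example indicators, the dictionary layout and the helper function are illustrative assumptions, not the actual BENCH-CAN indicator set.

```python
# Minimal sketch of the BENCH-CAN evaluative framework structure.
# Domain names follow the framework described above; the example
# indicators are hypothetical placeholders, not the real indicator set.

FRAMEWORK = {
    "enablers": {  # EFQM-derived domains: factors that enable good care
        "leadership": ["strategy involves patient representatives"],
        "people": ["staff satisfaction measured"],
        "processes": ["care pathways documented"],
    },
    "results": {  # IOM-derived domains: how good care can be measured
        "effective": ["crude mortality rate registered"],
        "efficient": ["daycare treatments per inpatient visit"],
        "safe": ["institution-wide adverse event reporting"],
        "patient_centered": ["case manager available"],
        "timely": ["days until first visit"],
    },
}

def indicators_for(domain: str) -> list[str]:
    """Return all indicators registered under a given domain."""
    for group in FRAMEWORK.values():
        if domain in group:
            return group[domain]
    raise KeyError(f"Unknown domain: {domain}")

if __name__ == "__main__":
    print(indicators_for("efficient"))
```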

Indicators (step 7) were derived from literature [14] and expert opinion. Existing assessments were used as a basis for the benchmark tool [15]. Stakeholders of the BENCH-CAN project, such as representatives from the European Cancer Patient Coalition (ECPC), and clinicians and experts (such as quality managers) from cancer centers (OECI member centers, n = 71), provided feedback to reach consensus on the final set of indicators to be used in the benchmark (step 8). As one person per center was asked to collect feedback within that specific center, it cannot be determined whether the feedback was shared equally by the different stakeholder groups. The combination of data provision, a site visit by a combined team and feedback provided sufficient possibilities for cross-checking. For the financial and quantitative indicators, this included standardizing data collection to allow comparison between pilot centers and determining the level of detail for cost accounting.

Reliability and validity

A priori stakeholder involvement was used to ensure reliability and validity [6]. After collecting the indicators in step 9, the validity of the indicators was checked using feedback from the pilot centers based on three criteria [16, 17]: 1) definition clarity, 2) data availability and reliability, 3) discriminatory features and usability for comparisons.

Indicator refinement and measurement

The indicators were pre-piloted in three centers to check whether the definitions were clear and whether the indicators would yield relevant, discriminative information. These three centers were selected based on willingness to participate and readiness to provide the data within a short period. Based on this pre-pilot, we added and removed indicators and refined the definitions of some. After refinement, the resulting set of 63 qualitative and 193 quantitative indicators was measured in the five remaining centers. The pre-pilot centers submitted additional information on the added indicators in order to make all centers comparable.

We collected data for the year 2012, and each pilot center appointed a contact person who was responsible for data collection within the institute and for delivering the data to the research team. After a quick data scan, a one-day visit to each pilot center was performed to verify the data, grasp the context and clarify questions arising from the provided data. The visits were performed by the lead researcher, a representative from the ECPC and representatives of (other) members of the consortium. The visits were also used to collect additional information through semi-structured interviews and to acquire feedback on the benchmark tool. In the semi-structured interviews, the lead researcher provided some structure based on the questions that arose from the quick scan (see Additional file 1: Appendix 1 for a selection of five topics and corresponding questions), but worked flexibly, leaving room for the respondents' more spontaneous descriptions and narratives and for questions from the other site-visit members [18].

Analysis

Two methods were used to compare the qualitative and quantitative data. A deductive form of the Qualitative Content Analysis was used to analyze the qualitative data [18]. This method contains eight steps which are described in Table 2.

Table 2 Steps of the Qualitative Content Analysis [26]

Quantitative data were first checked for consistency and correctness, and all cost data were converted into euros and adjusted for purchasing power parity [19]. In addition, data were normalized when necessary to enable comparison of centers of different types and sizes. The normalizations used were: 1) opening hours of departments, 2) number of inpatient beds, 3) number of inpatient visits, and 4) number of full-time equivalents (FTE). All data were summarized and possible outliers were identified. Outliers were discussed with the relevant centers to elaborate on the possible reasons for the scores.
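A minimal sketch of this adjustment and normalization step is shown below, assuming hypothetical purchasing power parity factors and size measures; the actual factors would be taken from the OECD PPP tables [19] and the denominators from the centers' own figures.

```python
# Minimal sketch of the cost adjustment and normalization described above.
# PPP factors and all figures are hypothetical assumptions for illustration;
# real factors would come from the OECD purchasing power parity tables [19].

PPP_TO_EUR = {"NL": 1.00, "IT": 0.95, "PL": 2.10}  # local units per PPP-euro (assumed)

def cost_in_ppp_euros(amount_local: float, country: str) -> float:
    """Convert a local-currency cost into PPP-adjusted euros."""
    return amount_local / PPP_TO_EUR[country]

def normalize(value: float, denominator: float) -> float:
    """Normalize an indicator by a size measure, e.g. inpatient beds,
    inpatient visits, FTE or department opening hours."""
    if denominator <= 0:
        raise ValueError("denominator must be positive")
    return value / denominator

# Example: hypothetical pharmacy costs per inpatient visit, PPP-adjusted
costs_eur = cost_in_ppp_euros(1_500_000, "PL")
print(round(normalize(costs_eur, 12_000), 2))
```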

To ensure validity, a report with all data (qualitative and quantitative) was sent to the pilot centers for verification. Not all centers were able to provide all data: some could not retrieve or produce the data, and others were concerned about the time needed to gather all the requested information. Hence, data from some centers are missing for certain indicators, as we did not use imputation. Data are structured according to the adapted IOM domains of quality: effective, efficient, safe, patient-centered, and timely.

Improvement suggestions

After comparison of all quantitative and qualitative data, three researchers independently identified improvement opportunities for each center. Improvement suggestions or opportunities (at least three per center) were only made for areas in which the researchers felt the center could actually make the improvement without being restricted by, for example, regulations. Based on these improvement suggestions, and if in agreement, pilot centers developed improvement plans.

Results

Reliability and validity

Ten indicators deemed irrelevant (such as sick leave) were removed after the pre-pilot, and nineteen indicators were added based on the evaluation criteria and feedback. Several indicator definitions were clarified. The final pilot list contained 63 qualitative and 193 quantitative indicators. After the pilot data collection, a second evaluation of definition clarity, data availability, data reliability and discriminative value was performed. This re-evaluation resulted in a final set of 61 qualitative and 141 quantitative indicators deemed suitable for wider use in benchmarking cancer centers (Additional file 2: Appendix 2).

Performance differences between centers

The performances of the participating centers varied on many indicators, of which a selection is shown in Table 3 and described below. Organizations are anonymized. The results are structured according to the adapted domains of quality [13].

Table 3 Profiles of the cancer centers against a selection of indicators. For each domain a selection of indicators and their outcomes is presented

Effective

The majority of centers (n = 6) register crude mortality rates for their patient groups, as shown in Table 3. Only institute A publishes this rate. Another type of mortality, 30-day surgical mortality, was not registered in centers B, C and G. Centers also reported difficulties with providing novel technologies and therapies, limiting their ability to provide optimal care for patients.

Efficient

Medical efficiency

Medical efficiency, defined as the use of medical production factors to achieve the desired health outcome with a minimum waste of time, effort or skills, varies greatly between the participating centers, as shown in Fig. 2. Center G scores high (ratio of 7), whereas center C has a low number of daycare treatments relative to its inpatient visits (ratio 0.3) compared to the other centers.

Fig. 2 Number of daycare treatments in relation to the number of inpatient visits
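As a worked illustration of this ratio, the snippet below reproduces the reported values of 0.3 and 7; the underlying treatment and visit counts are hypothetical, chosen only to yield those ratios.

```python
# Hypothetical counts chosen to reproduce the daycare-to-inpatient ratios
# reported for centers C and G; the actual pilot data are not shown here.
centers = {
    "C": {"daycare_treatments": 3_000, "inpatient_visits": 10_000},
    "G": {"daycare_treatments": 21_000, "inpatient_visits": 3_000},
}

for name, c in centers.items():
    ratio = c["daycare_treatments"] / c["inpatient_visits"]
    print(f"Center {name}: {ratio:.1f} daycare treatments per inpatient visit")
```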

The utilization of beds also differs between centers, as shown in Fig. 3. In particular, centers C, G and H have a relatively low inpatient bed utilization. Similarly, a large variation in the utilization of daycare beds is observed. Center E has a high daycare bed utilization but scores average on the ratio of daycare treatments to inpatient visits. In contrast, center G has a relatively high number of daycare treatments but a lower utilization.

Fig. 3 Inpatient and day-care bed utilization
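A bed-utilization calculation along the lines of Fig. 3 might look as follows; the definition used here (occupied bed-days over available bed-days) and all numbers are assumptions for illustration, since the article does not spell out the exact formula.

```python
# Illustrative bed-utilization calculation; the formula (occupied bed-days
# divided by available bed-days) and all figures are assumptions.
def bed_utilization(occupied_bed_days: int, beds: int, days_in_year: int = 365) -> float:
    """Fraction of available bed-days that were actually occupied."""
    available = beds * days_in_year
    return occupied_bed_days / available if available else 0.0

# Hypothetical center: 80 inpatient beds, 20,000 occupied bed-days in 2012
print(f"{bed_utilization(20_000, 80, 366):.0%}")  # 2012 had 366 days
```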

Input efficiency

The number of scans per radiology device also varies between centers, as shown in Fig. 4. Center D scores high on the efficiency of its MRI (4462 scans per MRI), X-ray (7703 scans per X-ray machine) and CT (13,836 scans per CT scanner). Center H scores high on the efficiency of its MRI and CT. Center E has outsourced its MRI, and no X-ray data was available from center G.

Fig. 4 Total number of scans made per device in one year

Safe

Center A has a safety management system that is audited annually by an independent external agency. In center A, prospective risk assessments are performed before implementing new treatments, new care pathways or critical changes in key processes. Center B divides risk management into general risk management (e.g. fire risks) and clinical risk management (e.g. transfusion risks and medication errors). Institute H adopted the International Patient Safety Goals (IPSG) issued by the Joint Commission International [20]. Most centers (n = 7) have an institution-wide reporting system that registers different types of adverse events: near miss, incident, adverse event and sentinel event. In institute E, only doctors can make official notifications of a medical error; nurses cannot report an incident directly. Center G uses a system that generates reports for patient satisfaction, patient safety and patient complaints. According to its procedures, institute H should report near misses, but in practice only actual events are reported. For more information on the domain of safety, see Table 3.

Patient-centered

Although all centers have some type of contact person for patients, none has an official case manager for all patient pathways. In institutes A and D, patients are formally included in strategy development. Other centers reported collaborating with external patient organizations to represent patients. All centers provide some care for cancer survivors; however, only center A has an extensive in-house survivorship program with a dedicated budget. Center G also reports having a budget for survivorship care (e.g. psychosocial support). For more information on patient-centeredness, see Table 3.

Timely

For seven centers, maximum waiting times are set by the government (see Table 3). Institute A indicated that it encountered difficulties in meeting the maximum waiting time for some types of surgery. The maximum waiting times are input for negotiations with healthcare insurers and potentially influence the funding of center A. Center H reports waiting times to the regional government, which uses these data to adjust the volume of services offered by the regional healthcare system. Reasons mentioned for long waiting times were the high patient demand for diagnostic tests and insufficient staff. The largest variation between institutes occurred in the overall waiting time before the first visit, which ranged from 1.5 to 21.8 days.

Improvement suggestions

Table 4 gives examples of improvement suggestions per pilot center and the resulting improvement plans. Improvement suggestions ranged from broader processes, such as the involvement of patients in the care process, to specific recommendations (e.g. measuring staff satisfaction). Adoption of case managers was a frequently made suggestion. Regarding the suggestion to improve patient participation in the organization, center C only partially agreed, stating that "not all patients want to be involved". Center A felt that a complication registry was mainly useful per discipline and therefore only partly agreed with the suggestion to implement an institution-wide complication registry. Of all improvement suggestions, pilot centers agreed with 85% and partially agreed with 15%. Improvement suggestions were also given to center G; however, no improvement plan was received.

Table 4 Improvement suggestions, response and planned actions

Discussion

In this study, we developed a benchmark tool to assess the quality and effectiveness of comprehensive cancer care consisting of 61 qualitative indicators and 141 quantitative indicators. The tool was successfully tested in eight cancer centers to assess its suitability for yielding improvement suggestions and identifying good practices.

The benchmark data showed performance differences between cancer centers, which led to improvement suggestions and opportunities for all participating centers. In general, the indicators revealed well-organized centers. However, there were indicators on which centers performed less well. For example, not all centers register mortality rates, and it is unclear whether these rates, when registered, are made public. Nevertheless, there is broad consensus that public reporting of provider performance can be an important tool to drive improvements in patient care [21]. An indicator on which only two centers performed well was the offering of in-house survivorship care with a dedicated budget. An advantage of follow-up taking place in cancer centers is that it is comfortable for patients and provides continuity of care [22]. However, it is debatable whether offering this kind of care should be the responsibility of cancer centers, as multiple pilot centers already indicated having tight budgets.

Large variation existed between centers in the domain of efficiency. This variation was only partly related to differences in healthcare systems and led to multiple improvement suggestions. For example, centers C, G and H had a relatively low inpatient bed utilization, which is likely to be less cost-efficient. Center G had a high number of daycare treatments but a lower bed utilization, possibly indicating a utilization loss. A higher ratio indicates efficient use of beds and chairs and, hence, most likely also of staff. Centers C and D might have a surplus of daycare beds and chairs. Wind et al. [23] showed that having fewer beds is not associated with low financial performance and could indeed improve efficiency.

Another important improvement area was patient-centeredness, specifically case management, for which all centers agreed that implementation or expansion was necessary. Case management is an organizational approach used to optimize the quality of treatment and care for individuals within complex patient groups [24]. However, centers indicated that implementing or extending case management will take a long time and therefore categorized this as a mid-term (2–5 years) or long-term (6–10 years) goal.

Limitations

This study has several limitations. First, although we thoroughly searched the literature and existing quality assessments to identify indicators for the initial list, some suitable indicators may have been missed. Identifying suitable outcome indicators was more challenging than, for example, process indicators, due to differences in case-mix, healthcare systems and financing. We tried to minimize this influence by including a large group of experts from various fields who had affinity with the development and management of cancer centers and with quality assessment in cancer care. We continuously modified the set of indicators in response to feedback from the pilot centers on their relevance, measurability and comparability. An advantage of this approach is that the indicators benchmark what the cancer centers want to know, which can increase adoption of the benchmark format as a tool for future quality improvement.

Second, the tool was tested only once, in eight European cancer centers. This makes it impossible to say whether the benchmark actually led to quality improvements. Consequently, future research should evaluate the implementation of improvement plans to investigate whether the benchmark actually leads to quality improvement. In addition, the future inclusion of more centers will allow assessment of the actual discriminative capabilities of the indicator set. The benchmark tool was successfully applied in eight European countries with different wealth statuses. Although differences in healthcare systems and social legislation unavoidably led to differences in the nature and availability of data, the comparison still yielded relevant and valuable recommendations for all centers. We achieved this mainly by correcting for size, case-mix and type of healthcare reimbursement.

Finally, due to the extensive scope of the indicators, it was difficult to go into detail on each topic. A benchmark focused on a single domain would yield more profound information and more specific improvement suggestions and good practices. Future research is therefore advised to focus on specific domains of the BENCH-CAN framework, such as strategy and effectiveness, to gain a deeper understanding of the processes behind the performance differences, enabling better comparison and more applicable improvement recommendations.

Lessons learned

Multiple lessons were learned from benchmarking cancer care in specialized centers throughout Europe. First, representatives of the pilot centers indicated that international projects such as this one can increase awareness that performance can be improved and promote the notion that countries and centers can learn from each other. Identifying successful or good-practice approaches can assist hospitals in improving their services and reduce inequalities in care provision, raising the level of oncologic services across countries. Pilot centers did, however, indicate that they would not be able to implement all suggestions or good practices due to socio-economic circumstances. Second, learning through peers enabled cancer centers to improve their performance and efficiency without investing in developing these processes separately. A frequently mentioned comment concerned the casual, non-competitive atmosphere, which led to open collaboration. Involvement of key stakeholders from the centers at the start of the benchmark is highly recommended to develop interest, strengthen commitment and ensure sufficient resources, which not only accommodates a successful benchmark but also supports implementation of the lessons learned.

From our earlier review on benchmarking [25], we learned that research on benchmarking as a tool to improve hospital processes and quality is limited. The majority of the articles found in that review [25] lacked a structured design, focused mostly on indicator development and did not report on benchmark outcomes. In this study we used a structured design, reported the benchmark outcomes and thereby contributed to the knowledge base of benchmarking in practice. Although improvement suggestions were made, within the scope of the study we could not report on their effect. This reinforces the need for further research and evidence generation, especially on the effectiveness of benchmarking as a tool for quality improvement, particularly in terms of patient outcomes and learning from good practices.

Conclusion

In conclusion, we successfully developed and piloted a benchmark tool for cancer centers. This study generated more insight into the process of international benchmarking, providing cancer centers with common definitions, indicators and a tool to focus, compare and elaborate on organizational performance. The results of the benchmark exercise highlight the importance of an accurate description of the underlying processes and an understanding of the rationale behind them. The tool allowed comparison of inter-organizational performance in a wide range of domains, and improvement opportunities were identified. The tool and the improvement opportunities derived from it were positively evaluated by the participating cancer centers. Our tool enables cancer centers to improve quality and efficiency by learning from the good practices of their peers instead of reinventing the wheel.

References

  1. De Angelis R, Sant M, Coleman MP, et al. Cancer survival in Europe 1999–2007 by country and age: results of EUROCARE-5—a population-based study. Lancet Oncol. 2014;15(1):23–34.


  2. Ettorchi-Tardy A, Levif M, Michel P. Benchmarking: a method for continuous quality improvement in health. Healthc Policy. 2012;7(4):e101.


  3. Joint Commission. Benchmarking in health care. Joint Commission Resources; 2011.


  4. Gudmundsson H, Wyatt A, Gordon L. Benchmarking and sustainable transport policy: learning from the BEST network. Transp Rev. 2005;25(6):669–90.


  5. Longbottom D. Benchmarking in the UK: an empirical study of practitioners and academics. BIJ. 2000;7(2):98–117.


  6. van Lent W, de Beer R, van Harten W. International benchmarking of specialty hospitals. A series of case studies on comprehensive cancer centres. BMC Health Serv Res. 2010;10:253.


  7. www.oeci.eu Accessed 14 May 2013

  8. www.oeci.eu/benchcan Accessed 18 Dec 2013

  9. http://oeci.eu/accreditation/ Accessed 7 Aug 2013

  10. http://www.efqm.org/the-efqm-excellence-model Accessed 15 Oct 2013

  11. Malcolm Baldrige National Quality Award (MBNQA). http://asq.org/learn-about-quality/malcolm-baldrige-award/overview/overview.html Accessed 28 Dec 2016

  12. Hakes C. The EFQM excellence model for assessing organizational performance. Van Haren Publishing; 2007.

  13. Institute of Medicine. Crossing the quality chasm: a new health system for the 21st century. Washington DC: National Academy Press; 2001.


  14. Thonon F, Watson J, Saghatchian M. Benchmarking facilities providing care: an international overview of initiatives. SAGE Open Med. 2015;3. https://doi.org/10.1177/2050312115601692.


  15. Wind A, Rajan A, van Harten W. Quality assessments for cancer centers in the European Union. BMC Health Serv Res. 2016;16:474.


  16. De Korne DF, Sol KJCA, van Wijngaarden JDH, et al. Evaluation of an international benchmarking initiative in nine eye hospitals. Health Care Manag Rev. 2010;35:23.


  17. Cowper J, Samuels M. Performance benchmarking in the public sector: the United Kingdom experience. Benchmarking, evaluation and strategic Management in the Public Sector. Paris: OECD; 1997.


  18. Brinkmann S. Interview. In: Teo T, editor. Encyclopedia of critical psychology. New York: Springer; 2014.


  19. Purchasing Power Parities - Frequently Asked Questions OECD. http://www.oecd.org/std/prices-ppp/purchasingpowerparities-frequentlyaskedquestionsfaqs.htm Accessed 6 Jan 2017.

  20. “International Patient Safety Goals” (IPSG) issued by the Joint Commission International. http://www.jointcommissioninternational.org/improve/international-patient-safety-goals/ Accessed 16 Nov 2016.

  21. Joynt KE, Orav EJ, Zheng J, Jha AK. Public reporting of mortality rates for hospitalized Medicare patients and trends in mortality for reported conditions. Ann Intern Med. 2016;165(3):153–60.


  22. Models of Long-Term Follow-Up Care. American Society for Clinical Oncology. https://www.asco.org/practice-guidelines/cancer-care-initiatives/prevention-survivorship/survivorship/survivorship-3 Accessed 28 Aug 2016.

  23. Wind A, Lobo MF, van Dijk J, et al. Management and performance features of cancer centers in Europe: a fuzzy-set analysis. J Bus Res. 2016;69(11):5507–11.


  24. Wulff CN, Thygesen M, Søndergaard J, Vedsted P. Case management used to optimize cancer care pathways: a systematic review. BMC Health Serv Res. 2008;8(1):1.


  25. Wind A, van Harten W. Benchmarking specialty hospitals, a scoping review on theory and practice. BMC Health Serv Res. 2017;17(1):245.

  26. Zhang Y, Wildemuth BM. Qualitative analysis of content. In: Applications of social research methods to questions in information and library science. 2016. p. 318. https://www.ischool.utexas.edu/~yanz/Content_analysis.pdf Accessed 15 Sept 2016.


Acknowledgements

The authors thank all participating centers for their cooperation. We would also like to thank Maarten Ijzerman for his contribution to the research design and his valuable input throughout the project.

Funding

This study was funded by the European Commission Consumers, Health, Agriculture and Food Executive Agency through the BENCH-CAN project. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Availability of data and materials

The datasets generated and/or analysed during the current study are not publicly available in order to protect the privacy of the contributing cancer centers, but they are available from the corresponding author on reasonable request. The indicators used in this study can be found in the additional material and on the BENCH-CAN website through http://oeci.eu/benchcan/Resources.aspx.

Author information


Contributions

AW designed the study, developed the indicators, collected the data, analyzed and interpreted the qualitative data and drafted the manuscript. JvD contributed to the design of the study and the development of the indicators, collected the data, analyzed and interpreted the quantitative data and was a major contributor in writing the manuscript. IN contributed to the design of the study and the development of the indicators, and to the analysis and interpretation of the quantitative data. WvL contributed to the design of the study and the development of the indicators, and contributed to the collection of the data. PN contributed to the design of the study and the development of the indicators, to the collection of the data and analyzed and interpreted the data. EJ contributed to the design of the study and the development of the indicators, and contributed to the collection of the data. TH contributed to the design of the study and the development of the indicators, and contributed to the collection of the data. FRG contributed to the design of the study and the development of the indicators, contributed to the collection of the data, and was a major contributor in writing the manuscript. WvH contributed to the design of the study and the development of the indicators, contributed to the analysis and interpretation of the data and was a major contributor in writing the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Wim van Harten.

Ethics declarations

Ethics approval and consent to participate

Not applicable

Consent for publication

Not applicable

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional files

Additional file 1:

Appendix 1. Semi-structured interview topic list. This file contains some examples of topics that were discussed during the semi-structured interviews. (PDF 134 kb)

Additional file 2:

Appendix 2A. Qualitative indicators. This file contains the qualitative indicators that were used in the benchmark. Appendix 2B. Quantitative indicators. This file contains the quantitative indicators that were used in the benchmark. (ZIP 1350 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Wind, A., van Dijk, J., Nefkens, I. et al. Development of a benchmark tool for cancer centers; results from a pilot exercise. BMC Health Serv Res 18, 764 (2018). https://doi.org/10.1186/s12913-018-3574-z


  • DOI: https://doi.org/10.1186/s12913-018-3574-z

Keywords