- Open Access
- Open Peer Review
Quantitative data management in quality improvement collaboratives
© van den Berg et al; licensee BioMed Central Ltd. 2009
- Received: 1 October 2008
- Accepted: 26 September 2009
- Published: 26 September 2009
Collaborative approaches to quality improvement have been promoted since the introduction of the Breakthrough method. The evidence for the effectiveness of this method is inconclusive, and further independent evaluation has been called for. For any evaluation to succeed, data collection on the interventions performed within the collaborative and on their outcomes is crucial. Getting enough data from Quality Improvement Collaboratives (QICs) for evaluation purposes, however, has proved to be difficult. This paper provides a retrospective analysis of the process of data management in a Dutch Quality Improvement Collaborative. From this analysis, general failure and success factors are identified.
This paper discusses the complications and dilemmas observed in setting up data management for QICs. An overview is presented of the signals picked up by the data management team. These signals were used to improve the strategies for data management during the program and have, as far as possible, been translated into practical solutions that were successfully implemented.
The recommendations coming from this study are:
From our experience it is clear that quality improvement programs deviate from experimental research in many ways. It is not only impossible but also undesirable to fully control processes and standardize data streams. QICs need to steer clear of data protocols that do not allow for change. At a minimum, therefore, quantitative results should be accompanied by qualitative results that can be used to interpret them correctly.
Monitoring and data acquisition interfere with routine practice. This makes a database collecting data in a QIC an intervention in itself, which is important to keep in mind when reporting the results. Using existing databases where possible can overcome some of these problems, but is often not feasible given the change objective of QICs.
Introducing a standardized spreadsheet to the teams is a very practical and helpful tool for collecting standardized data within a QIC. It is vital that the spreadsheets are handed out before baseline measurements start.
- Data Management
- Project Leader
- Quality Improvement Program
- Feedback Report
- Quality Improvement Intervention
Quality collaboratives have gained attention since the US Institute of Medicine formulated the "quality chasm" and the concept spread across the Western world. The Breakthrough method developed by the Institute for Healthcare Improvement has been one of the major instruments put to use in such collaboratives. Quality improvement collaboratives (QICs) are seen as a means to spread evidence-based practices quickly across care organizations, as there is some evidence that the integration of quality instruments leads to synergistic effects. As noted in the literature, however, hardly any evidence exists as yet on the effectiveness of quality collaboratives in bridging the quality chasm: do collaboratives indeed lead to better care? Do they do so in an efficient manner? Questions like these have hardly been systematically addressed [3, 4]. There are indications of a significant publication bias, and most studies are methodologically weak, e.g. relying on self-reporting. The few systematic studies that have been done are inconclusive: some show no improvement [6, 7], whereas others show significant improvements. A recent review showed some positive effects, but its study base was very limited. These mixed effects have been attributed to several factors, such as differences in the external context of care providers, cultural aspects, team functioning, and the availability of resources. Further evaluations of quality collaboratives are badly needed, if only to bridge the "apparent inconsistency between widespread belief in and use of the QIC method and the available supporting evidence".
In large and complex organizations such as most quality collaboratives, data management can be a critical factor in communicating results, both within the collaborative and outside it. However, data collection and management within a QIC pose significant challenges and dilemmas, mainly arising from the possible contrast between the collaborative's objective to improve care and the evaluator's need to gather reliable data [11, 12]. For any evaluation to succeed, data collection on the interventions performed within the collaborative and on their outcomes is crucial. Getting enough data from QICs for evaluation purposes, however, has proved to be difficult (e.g. a reported response rate of some 50%). This paper provides a retrospective analysis of the process of quantitative data management in a Dutch QIC, the so-called Faster Better pillar three program (FB p3) in acute care hospitals. From this analysis, general failure and success factors will be identified. Below, the dilemmas met in the FB p3 program are analyzed and the interventions used to solve them within this program are described.
The general goal of the FB p3 program was to induce quality improvement in the participating hospitals by implementing best practices, sharing knowledge, and securing and documenting successes. The program aimed to improve safety, logistics, leadership and patient-centered working, and to increase the transparency of health care in The Netherlands. Sharing successes and best practices with hospitals not participating in the FB p3 program should eventually lead to widespread implementation of quality improvement interventions [13, 14].
To secure these goals, Breakthrough projects on selected themes were organized and supported both by nationally operating project leaders and, in the hospitals, by hospital advisers. Hospitals, though left free to some extent in the internal organization of the program, were expected to appoint a 'project coordinator' as well as project teams participating in the Breakthrough collaboratives. The participating teams were brought together for at least three learning sessions: at the start of the project, halfway through, and at the finish. Furthermore, the project leader delivered in-person consultation to the teams at the project level. Hospitals could rely on the hospital adviser for more practical issues. For hospital CEOs and clinical leaders, a 'leadership' program was developed.
Within the FB p3 program, data management served multiple purposes. At the team level, the data was used to provide feedback to motivate and inform the teams as part of the Breakthrough method. At the program level, the data was used to monitor the larger program. External evaluators used the database to evaluate program outcomes at the patient level, but collected their own (survey and interview) data for other parts of the evaluation. Because of both the introduction of performance indicators to control and improve the quality of health care in the Netherlands (the Faster Better pillar 2 program) by the Dutch Health Care Inspectorate in 2003 and the introduction of a new health care financing system in 2006, it was virtually impossible to rely on existing medical databases for the evaluation.
Teams participating in the QI projects at any time in the first four years of the FB p3 program
Y2C1 & YY1C2
Y2C2 & YY1C3
Postoperative wound infections
Operating room (OK)
Data management protocol
The outline of the data warehouse used for data management was carefully described in a data management protocol, with instructions on how the data should be handled. All parties were informed and asked to agree before any data was gathered. The protocol contained information on the indicator set, what the indicators measured, and the frequency with which data should be provided. Teams could add local indicators to the spreadsheets; these, however, were not aggregated and analyzed by the data management team. Only the data manager could access the database, and the involved parties could request the aggregated data. The data was stored on a secure intranet network provided by Prismant Corporate, an independent third party known for storing medical data of all general and academic hospitals in The Netherlands. Prismant Corporate had no access to the data and was not involved in data management activities.
This paper reports in retrospect on the complications observed by the data management team. It is based on progress reports of the data management project and on notes of the monthly meetings in which data management was a recurring subject. During the FB p3 program, the data management team assessed together with the project leaders what factors caused the perceived problems in data management. Notes taken at these sessions have been analyzed for the purpose of this communication, which presents an overview of the signals picked up by the data management team. These signals were used to improve the strategies for data management during the process of data gathering and have, as far as possible, been translated into practical solutions that were successfully implemented. The main purpose of this was to adjust and improve the database in order to be able to report on the results of the FB p3 program.
Percentage data coverage (received files)
Start Y2C1 & YY1C2
Start Y2C2 & YY1C3
The data presented in Table 2 make clear that problems with data management were manifest during Y1C1 and that the measures taken thereafter solved the problem in large part. Most of the Y1C1 data was acquired in the years following the first year of the FB p3 program. Interpreting the Y1C1 data nevertheless remains problematic, especially with regard to follow-up measurements, because by the time this data was gathered most of the context information had been lost.
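As an illustration, a "percentage data coverage (received files)" figure of the kind shown in Table 2 can be derived from simple file counts. The sketch below is hypothetical: the cohort labels and counts are invented for the example and do not reproduce the actual FB p3 data.

```python
# Hypothetical sketch of computing a data-coverage percentage per cohort:
# files received from the teams divided by the files expected from them.

def coverage(received: int, expected: int) -> float:
    """Percentage of expected data files actually received."""
    if expected == 0:
        return 0.0
    return round(100 * received / expected, 1)

# Invented example counts, not the actual FB p3 figures.
expected_files = {"cohort_1": 120, "cohort_2": 140}
received_files = {"cohort_1": 54, "cohort_2": 126}

for cohort in expected_files:
    print(cohort, coverage(received_files[cohort], expected_files[cohort]))
# cohort_1 45.0
# cohort_2 90.0
```

A metric like this makes gaps visible per cohort without exposing any team-level results.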
A difficulty with the initially lacking Y1C1 data was that all teams had been working with great enthusiasm for a whole year, yet there was no data to show for it. These results led to an internal inquiry into why participating teams appeared unable to provide the information they were requested to deliver. There were various explanations. These are described and analyzed in this paper, together with the measures taken to improve data management.
The first factor that could explain the reservations toward data management is that the involved parties were very cautious. The FB p3 program was launched in the context of a significant reform of the Dutch health care system [16, 17], and was presented as part of that reform. At that time attention for health care quality in The Netherlands was increasing strongly. The most important developments at the onset of the FB p3 program were the introduction in 2006 of a new health care financing system based on diagnosis-related groups, as part of the upcoming introduction of regulated competition, and the introduction in 2003 by the Dutch Health Care Inspectorate of a number of quality indicators to control and improve the quality of health care in the Netherlands (the Faster Better pillar 2 program). At the start of the program, the first of the 'top 100' lists of hospital performance - based on health care inspectorate indicators - were published, which resulted in major public debates about both hospital performance and the legitimacy of such listings. This created a climate in which central data management was considered a risk of being exposed.
Despite the data management protocol, and even though the database was kept in a secure environment and all data traffic was password secured, this was not enough to prevent resistance. At different levels in the process there was great resistance to the principle of standardized data management. The suggestion that the gathered information would be used for benchmarking purposes raised a lot of critique from both the participating hospitals and some of the partners in the consortium. The most fundamental worry was that, despite the data management protocol, the gathered data would not be secure and would be used for purposes other than those described in the protocol. Some feared that they would not be able to publish their own data, whereas others worried that there would be no restriction on third parties publishing all data. There was a great lack of trust among the involved parties with regard to data management.
Another problem was the suggestion that whenever a team failed to reach its goals, the results could be used against it, either by hospital management or by other stakeholders (e.g. insurance companies, politics, media). At the program level this proved to be the case as well, when in 2006 the NIVEL, which had been commissioned to evaluate the program, published its report on the results of the first year, accompanied by a press release titled "only 20% of Faster Better projects meet the target". Although this heading was nuanced in the body of the press release, it triggered significant discussion amongst the partners in the program.
The data management team invested a lot of time in communication. Meetings were scheduled at all levels of the FB p3 program. At the level of the consultants and the hospitals, the procedures were explained and explicated: it was made clear how and why data was gathered and who had access to the database. This information was also available in the data management protocol, but the experience was that the extra effort of explaining data management face to face created more trust and willingness to cooperate. At the management level of the FB p3 program, the data management team emphasized the importance of transparency and communication. It was necessary to create awareness at all levels of the vital importance of a reliable database, both for learning within the collaborative and for program accountability. In the second and especially the third cohort of hospitals entering the program, distrust towards data collection and management decreased. Apart from more intensive communication, also in the selection phase of the program, the longer experience with public disclosure of performance data may have contributed to this.
Furthermore, to ensure secure data handling, the data was stored on a secure intranet network provided by an independent third party (Prismant Corporate). An important reason for storing the data with this third party was to gain trust within the hospitals: it was presumed that it would be easier for the hospitals to send their data to a third party than to a member of the Faster Better consortium, even if this was a university department. A member of the data management team held office at this organization twice a week. The password-secured spreadsheets were sent to an email address connected to the third party organization, and the passwords were altered for each series. The flipside was that, despite all these efforts, there was a lot of unsecured data traffic. The main reasons were that people thought the security measures too extensive, had problems remembering the password, or were not aware of the risks of unsecured data traffic by email. In 2007, Prismant stopped offering the service of storing data; at that point, trust in the data management project had increased and it was decided to build a secure server at the university department participating in the program.
In line with the perceived sensitivity of the data, it was felt that the comparability resulting from the central database would generate political sensitivities within the participating hospitals, especially with regard to some projects. For instance, earlier benchmarks (associated with the Faster Better program) of the performance of operating rooms (ORs) had generated a host of hospital-internal discussion. It appeared that some of the hospitals also involved in the FB p3 program were working with 50% overcapacity in the ORs: as a result of these benchmarks, nearly half of the staff in these ORs were considered redundant by hospital management. Given the sensitivity of this outcome, working with centrally defined indicators was thought to jeopardize the program's ability to get support from internal hospital teams.
As creating better efficiency in the hospitals was an explicit goal of the program, the anxiety on the work floor about the discovery of overcapacity was understandable. Rather than focusing on discharging personnel, however, the program focused on using efficiency gains to invest further in quality of care, raise hospital production and/or postpone capacity increases. This was made clear in an added project on creating business cases for the improvement projects and was discussed with hospital CEOs in the leadership program.
Besides the sensitivity of using centrally registered results for data management purposes, two philosophies were colliding. Whereas data management within the FB p3 program aimed to gather as much standardized data as possible with regard to the main goals of the program, the methods used within the program were not fit for this strategy. The quality improvement interventions implemented in the FB p3 program are mainly based upon the Breakthrough method, developed by the Institute for Healthcare Improvement (IHI) (Kilo, 1998; IHI, 2003). This method stands for a very structured way of establishing changes in a short period of time. The goal is to improve health care quality significantly by using methods that have been proven successful: creating a breakthrough. Within that philosophy, teams set local goals and report on indicators that are of local relevance; collecting data at the team level, targeted at local priorities, is inherently part of the Breakthrough method. The idea of a database with data on centrally defined indicators was therefore thought to fit badly with the 'Breakthrough philosophy' adopted for the majority of the projects. Using standardized key indicators was felt to corrupt the idea that all quality improvements must be fitted to local circumstances and needs [12, 21]. This clash of philosophies was especially apparent within the consortium itself, leading to contradictory communications on the importance of data management.
In most Breakthrough programs the participating teams develop their own data management system and spreadsheet; this is seen as an element of the Breakthrough method. However, to ensure that all teams would adopt the same standardized indicators, the FB p3 data management team provided all teams in the hospitals with prefab spreadsheets. The teams used the spreadsheets, and the central data management indicators proved to be applicable to each team. Although a few teams deviated from the central indicators because of local factors, in general standardization within the Breakthrough method worked well.
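As a rough illustration of what such a prefab spreadsheet could look like, the sketch below generates a CSV template that fixes a central indicator set while leaving room for team-specific columns. The indicator names and periods are invented for the example; they do not reproduce the actual FB p3 indicator set or tooling.

```python
# Minimal sketch: a prefab spreadsheet template with centrally defined
# indicator columns, extensible with local indicators per team.
import csv
import io

# Assumed central key-indicator set (invented names, for illustration only).
STANDARD_INDICATORS = [
    "length_of_stay_days",
    "medication_errors",
    "pressure_ulcer_pct",
]

def make_template(periods, local_indicators=()):
    """Return CSV text with one empty row per measurement period."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["period"] + STANDARD_INDICATORS + list(local_indicators))
    for period in periods:
        # Empty cells for the team to fill in each measurement period.
        writer.writerow([period] + [""] * (len(STANDARD_INDICATORS) + len(local_indicators)))
    return buf.getvalue()

template = make_template(["2004-Q1", "2004-Q2"], local_indicators=["or_start_delay_min"])
print(template)
```

Because every team starts from the same column set, the returned files can be aggregated centrally, while the extra columns preserve the Breakthrough emphasis on locally relevant measures.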
The importance of the data management project was also discussed at length with project leaders and hospital advisers in monthly meetings. Whereas some traces of the clash of philosophies remained, most of it withered away during the first year of the program.
The start of the FB p3 program was brought forward in time, so the database had to be built simultaneously with the startup of all the other projects. As a result, there was not enough time to test all indicators in a real-life situation. In a few cases this later resulted in problems with gathering data for the database.
Imperfections in the first series were treated as lessons for later series. In the first series of the first year, the spreadsheets were introduced and handed out to the teams before the QI projects started, which meant that many teams were already collecting data before they were actually told for what purpose they were filling out the spreadsheets. In the second year, teams could be instructed better. Also, for the second and third round of hospitals, a special meeting was organized three months before the actual start of the program to inform program coordinators of all the things they should have in place before the start. Data management was an explicit part of these meetings, and hospitals were advised to create a supporting structure for data collection, both concerning possible IT solutions and concerning assistants to help the teams with data collection.
In line with the progressive startup of the FB p3 program, communication within the program was at times incomplete. As a result, many people were at first not aware of the function of the database; it was considered a foreign body rather than an integral part of the FB p3 program. Many teams were not informed of the existence of the database at all. Surprisingly, when all was settled it appeared that the teams had been working without any significant complaints with the standardized data forms designed for data management purposes. The biggest bottleneck occurred in retrieving data from the teams in the hospitals: the initial lack of communication resulted in stagnation of data traffic from the teams to the database. Teams sometimes did not know where to send their data, and there were cases in which secretaries gathered data that never reached the database. During the first year there was more data stuck in this twilight zone than there was in the database.
The best way to communicate that there were gaps in the database appeared to be periodic feedback reports to the teams. The teams were presented with the available data in the feedback reports and were asked whether this presented a truthful image of their results for the stakeholders in the quarterly progress reports. They would get another two weeks to complete the data when necessary. At first, feedback reports were sent to the hospitals directly. As this created much negative feedback, in later rounds it was decided to send the feedback reports first to the project leaders of the Breakthrough projects. This allowed them to complement the database with data they or the secretaries had received from the projects; only after this were the reports sent to the hospitals. The feedback reports made the teams feel the urgency to complete the data and inspired them to be more accurate in delivering their data to the database. The confrontation with empty cells where results had been gained made clear that the database could also be a positive way to communicate progress (or at least the effort put into it) to the stakeholders, if only the data were sent in. Secretaries were also instructed to forward incoming data to the database.
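The gap-detection step behind such feedback reports can be sketched as follows. The data structure, team names and indicator names below are assumptions for illustration only, not the FB p3 database format.

```python
# Minimal sketch: list, per team, the (period, indicator) cells that are
# still empty, so the team can be asked to complete them within two weeks.

def missing_cells(team_data):
    """Map each team to the (period, indicator) cells still empty."""
    gaps = {}
    for team, rows in team_data.items():
        empty = [(period, indicator)
                 for period, values in rows.items()
                 for indicator, value in values.items()
                 if value in (None, "")]
        if empty:
            gaps[team] = empty
    return gaps

# Invented example submissions (team and indicator names are hypothetical).
submissions = {
    "team_A": {"2004-Q1": {"length_of_stay": 6.1, "pressure_ulcers": None}},
    "team_B": {"2004-Q1": {"length_of_stay": 5.4, "pressure_ulcers": 4.0}},
}
print(missing_cells(submissions))
# {'team_A': [('2004-Q1', 'pressure_ulcers')]}
```

Surfacing the empty cells explicitly, rather than just reporting what is present, is what turns the feedback report into a prompt to complete the data.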
In the second and third cohorts the hospitals were better prepared. Hospitals new to the FB p3 program were advised to install a data management desk, which would have a central role in data traffic from the teams to the database.
A number of qualitative and quantitative goals were set at the start of the FB p3 program. The quantitative goals were operationalized as key indicators provided by standardized data. Using standardized data seemed to conflict with the Nolan cycle underlying the FB p3 program and the Breakthrough methodology. Other barriers to central data management were the sensitivity of the data and the political climate at the start of the program, which caused severe mistrust in the participating hospitals. The jumpstart made by the FB p3 program, leaving too little time for piloting the indicators and introducing the database, made it even harder to retrieve data from the teams working in the program. In response to these problems, a lot of effort was put into communication. The database was systematically brought up in every meeting at each level of the program. Results from the teams were reported back in periodic progress reports to make the teams aware that the data in the database would be used as an indicator of the progress of the FB p3 program as a whole. Even though not all teams were aware of it, data was collected from the onset of the program thanks to the spreadsheets handed out at the kick-off of the projects. The biggest challenge in the first year was retrieving all the data for the central database.
Summary of recommendations for designing QITs
Communicate at all levels, with both management and care workers in the teams. Create a transparent design in which each person understands his or her role. Make sure that the QIT design is presented well before onset.
Make clear that all data traffic must be secured. Medical data must always be encrypted when it leaves the hospital, even when it is anonymous.
Data must be stored in a secure environment, preferably with a third party organization specialized in storing medical data. Ownership of the data must be settled in a data management protocol.
Working with standardized spreadsheets leads to standardized data. Although working with prefab spreadsheets is not in line with the philosophy of the Breakthrough method, in large QITs it is inevitable to have at least part of the data standardized. Providing spreadsheets that are easy to extend with other variables can even help promote additional data gathering.
Central data management can easily seem to do nothing but create demands on the teams working in QITs. It is therefore essential for data management to provide the teams with valuable feedback at different levels: on their own team, on their hospital, and on the project they are involved in. Providing useful feedback encourages the teams to deliver their data to the central database.
Quality improvement is about people and their positions. There must be absolute confidence in how the data is transported and stored and in how the results are used. People will not cooperate in a process that they think might harm their position or institution. A data management protocol with rules and regulations on the handling and ownership of the data can be helpful in creating this confidence.
Data management desk
Only a few of the hospitals involved in the FB p3 program had a tradition of standardized, large-scale data management. In the first cohort this unfamiliarity with data management was an obstacle. The hospitals in the second cohort were advised to install a data management desk that could assist all participating teams within the hospital with their data traffic.
One obstacle that has not been overcome in the Faster Better collaborative, and is unlikely to be overcome in any QIC, is that the project teams themselves acquire the data, leaving all evaluation based on self-report. It is questionable whether this strategy will lead to valid results. Only by being aware of the processes described in this paper can the results in the database of the FB p3 program be used for monitoring and evaluation purposes. When used for external purposes, it remains necessary to use both qualitative and quantitative results; only in that way will it be possible to describe the results of the program in the right context. Evaluating the extremely complex projects that QICs are thus calls for evaluation methods that do justice to this complexity. Gathering quantitative data will nevertheless be part of that endeavor, and the experiences with data management in a QIC expressed in this communication will help in creating better data.
In this paper we have reported on data management within the Faster Better improvement collaborative in the Netherlands, in which 24 hospitals with 515 teams have participated in improving patient safety, logistics, leadership and patient-centeredness. A number of issues have been central to data management within this QIC: overcoming resistance to the sharing and publication of data, enabling registration of performance that is not normally registered, and the tensions between improvement on the one hand and evaluation and research on the other. Communicating the role of data at all levels of the program, securing data, providing teams with standardized data sheets and sending regular updates on data collection have increased the response to comparatively high levels.
The recommendations coming from this study are:
▪ From our experience it is clear that quality improvement programs deviate from experimental research in many ways. It is not only impossible but also undesirable to fully control processes and standardize data streams. QICs need to steer clear of data protocols that do not allow for change. At a minimum, therefore, quantitative results should be accompanied by qualitative results that can be used to interpret them correctly.
▪ Monitoring and data acquisition interfere with routine practice. This makes a database collecting data in a QIC an intervention in itself, which is important to keep in mind when reporting the results. Using existing databases where possible can overcome some of these problems, but is often not feasible given the change objective of QICs.
▪ Introducing a standardized spreadsheet to the teams is a very practical and helpful tool for collecting standardized data within a QIC. It is vital that the spreadsheets are handed out before baseline measurements start.
Quantitative goals of the FB p3 program
Admission times for the policlinics reduced to less than one week
Passage time reduction by 40-90%
Increase of productivity on the OR by 30%
Reduction of length of stay by 30%
Introduction of a blame-free reporting system
Reduction of the number of medication errors by 50%
Reduction of pressure ulcers to a level under 5%
Reduction of postoperative wound infections by 50%
Reduction of postoperative pain to a mean of < 4 on the VAS scale
- Committee on Quality of Health Care in America, Institute of Medicine: Crossing the Quality Chasm: A New Health System for the 21st Century. 2001, Washington: National Academy Press.
- Grol R, Grimshaw JM: Evidence-based implementation of evidence-based medicine. Joint Commission Journal on Quality Improvement. 1999, 25 (10): 503-13.
- Mittman BS: Creating the evidence base for quality improvement collaboratives. Annals of Internal Medicine. 2004, 140 (11): 897-901.
- Øvretveit J, Bate P, Cleary P, Cretin S, Gustafson D, McInnes K, et al: Quality collaboratives: lessons from research. Quality and Safety in Health Care. 2002, 11: 345-51. doi:10.1136/qhc.11.4.345.
- Leatherman S: Optimizing quality collaboratives. Quality and Safety in Health Care. 2002, 11: 307. doi:10.1136/qhc.11.4.307.
- Landon BE, Wilson IB, McInnes K, Landrum MB, Hirschhorn L, Marsden PV: Effects of a quality improvement collaborative on the outcome of care of patients with HIV infection: the EQHIV study. Annals of Internal Medicine. 2004, 140: 887-96.
- Shortell SM, Bennett CL, Byck G: Assessing the impact of continuous quality improvement on clinical practice: what will it take to accelerate progress?. Milbank Quarterly. 1998, 76 (4): 593-624. doi:10.1111/1468-0009.00107.
- Horbar JD, Carpenter JH, Buzas J, Soll RF, Suresh G, Bracken MB, et al: Collaborative quality improvement to promote evidence based surfactant for preterm infants: a cluster randomised trial. British Medical Journal. 2004, 329: 1004-7. doi:10.1136/bmj.329.7473.1004.
- Schouten LMT, Hulscher MEJL, van Everdingen JJE, Huijsman R, Grol RPTM: Evidence of the impact of quality improvement collaboratives: systematic review. BMJ. 2008, 336: 1491-4. doi:10.1136/bmj.39570.749884.BE.
- Cretin S, Shortell SM, Keeler EB: An evaluation of collaborative interventions to improve chronic illness care: framework and study design. Evaluation Review. 2004, 28 (1): 28-51. doi:10.1177/0193841X03256298.
- Bate P, Robert G: Where next for policy evaluation? Insights from researching National Health Service modernisation. Policy & Politics. 2003, 31 (2): 249-62. doi:10.1332/030557303765371735.
- Strating M, Zuiderent-Jerak T, Nieboer A, Bal R: Evaluating the Care for Better collaborative. Results of the first year of evaluation. 2008, Rotterdam: Dept. of Health Policy and Management.
- Schellekens W, Rouppe van der Voort M, van Splunteren P: Steen in de vijver. Ziekenhuizen stimuleren om bewezen verbeteringen in te voeren [A stone in the pond. Stimulating hospitals to implement proven improvements]. Medisch Contact. 2003, 58 (35).
- Dückers M, Wagner C, Groenewegen P: Voorwaarden voor een sectorbreed op kennisverspreiding gebaseerd verbeterprogramma in de Nederlandse ziekenhuiszorg [Conditions for a sector-wide improvement program based on knowledge dissemination in Dutch hospital care]. Acta Hospitalia. 2005, 45: 37-54.
- Dückers M, Wagner C, Groenewegen P: Developing and testing an instrument to measure the presence of conditions for successful implementation of quality improvement collaboratives. BMC Health Services Research. 2008, 8 (172).
- Helderman J-K, Schut FT, van der Grinten TED, van de Ven WPMM: Market-oriented health care reforms and policy learning in the Netherlands. Journal of Health Politics, Policy and Law. 2005, 30 (1-2): 189-210. doi:10.1215/03616878-30-1-2-189.
- van de Ven WPMM, Schut FT: Universal mandatory health insurance in The Netherlands: a model for the United States?. Health Affairs. 2008, 27 (3): 771-81. doi:10.1377/hlthaff.27.3.771.
- Berg M, Meijerink Y, Gras M, Goossensen A, Schellekens W, Haeck J, et al: Feasibility First: Developing Public Performance Indicators on Patient Safety and Clinical Effectiveness for Dutch Hospitals. Health Policy. 2005, 75: 59-73. doi:10.1016/j.healthpol.2005.02.007.
- Zuiderent-Jerak T: Competition in the wild. Emerging figurations of healthcare markets. Social Studies of Science. 2008.
- Dückers M, De Bruijn M, Wagner C: Evaluatie Sneller Beter pijler 3. De implementatie van verbeterprojecten in het eerste jaar [Evaluation of Faster Better pillar 3. The implementation of improvement projects in the first year]. 2006, Utrecht: NIVEL.
- Bate P, Robert G: Studying health care 'quality' qualitatively: the dilemmas and tensions between different forms of evaluation research within the UK National Health Service. Qualitative Health Research. 2002, 12 (7): 966-81. doi:10.1177/104973202129120386.
- Zuiderent-Jerak T, Strating M, Nieboer A, Bal R: Sociological refigurations of patient safety. Ontologies of improvement and 'acting with' quality improvement collaboratives. Social Science & Medicine. 2009.
- Langley GJ, Nolan KM, Norman CL, Provost LP, Nolan TW: The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. 1996, San Francisco: Jossey-Bass.
- The pre-publication history for this paper can be accessed here: http://0-www.biomedcentral.com.brum.beds.ac.uk/1472-6963/9/175/prepub
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.