- Research article
Risk adjustment methods for Home Care Quality Indicators (HCQIs) based on the minimum data set for home care
BMC Health Services Research volume 5, Article number: 7 (2005)
There has been increasing interest in enhancing accountability in health care. As such, several methods have been developed to compare the quality of home care services. These comparisons can be problematic if client populations vary across providers and no adjustment is made to account for these differences. The current paper explores the effects of risk adjustment for a set of home care quality indicators (HCQIs) based on the Minimum Data Set for Home Care (MDS-HC).
A total of 22 home care providers in Ontario and the Winnipeg Regional Health Authority (WRHA) in Manitoba, Canada, gathered data on their clients using the MDS-HC. These assessment data were used to generate HCQIs for each agency and for the two regions. Three types of risk adjustment methods were contrasted: a) client covariates only; b) client covariates plus an "Agency Intake Profile" (AIP) to adjust for ascertainment and selection bias by the agency; and c) client covariates plus the intake Case Mix Index (CMI).
The mean age and gender distribution in the two populations was very similar. Across the 19 risk-adjusted HCQIs, Ontario CCACs had a significantly higher AIP adjustment value for eight HCQIs, indicating a greater propensity to trigger on these quality issues on admission. On average, Ontario had unadjusted rates that were 0.3% higher than the WRHA. Following risk adjustment with the AIP covariate, Ontario rates were, on average, 1.5% lower than the WRHA. In the WRHA, individual agencies tended to experience a decline in their standing, becoming more likely to be ranked among the worst performers following risk adjustment. The opposite was true for sites in Ontario.
Risk adjustment is essential when comparing quality of care across providers when home care agencies provide services to populations with different characteristics. While such adjustment had a relatively small effect for the two regions, it did substantially affect the ranking of many individual home care providers.
In both Canada and the United States efforts are underway to develop systems to assess the quality of health care as a first step to improving services. In the US nursing home sector, the implementation of the Minimum Data Set has been linked to improvements in quality of care [1–4]. Recently, the Centers for Medicare and Medicaid Services (CMS) has developed web pages to give consumers, and the public at large, further information about the quality of nursing homes and home care services across the country. The "Nursing Home Compare" and more recent "Home Health Compare" web sites allow individuals to view several quality indicators (QIs) for individual providers[5, 6].
When comparing health care providers across a set of QIs there is a concern that they may differentially admit clients with a greater likelihood of triggering on quality issues. Since the indicators are defined as events that are preferable to avoid (e.g., skin ulcers, untreated pain, weight loss), in the absence of risk adjustment, these providers would appear to be delivering poorer quality of care. Risk adjustment attempts to account for client populations that are at greater risk of experiencing quality issues as a function of their clinical status rather than the quality of care received.
In the US nursing home sector, the CMS recently funded a large-scale project to review all potential long-term care QIs and the risk adjustment process. The evaluation of risk adjustment included a review of both client-level and agency-level covariates. The research team revised the original set of client covariates to create a set of new models and assessed the effects of using an agency-level covariate, the Facility Admission Profile (FAP). The FAP represents the prevalence of the quality issue for residents newly admitted to the facility and was intended to adjust for potential selection bias (i.e., facilities preferentially admitting clients with a greater likelihood of triggering on the indicator) and ascertainment bias (i.e., ability of providers to detect a quality issue that might increase the QI rate). However, after extensive analyses, the research team did not recommend the use of the FAP, given that the FAP did not appear to be a particularly useful measure of ascertainment bias.
Nevertheless, in October 2002, the CMS included the FAP in three quality measures reported on the Nursing Home Compare web page. As part of this same initiative, Kidder et al. examined the Nursing Case Mix Index from the RUG-III grouping system as well as seven of the RUG scales as potential risk adjusters. In their final list of quality measures, 12 of the indicators for chronic care residents and four of the indicators for post-acute care residents included a covariate related to the RUG-III system.
Home Care Quality Indicators (HCQIs)
The current research was part of a project to develop a set of 22 Home Care Quality Indicators (HCQIs) based on items in the Minimum Data Set for Home Care (MDS-HC). Implementation of the MDS-HC is complete or underway in 15 states and in 7 Canadian provinces and territories, where it is being used mainly as a clinical assessment instrument. In Michigan, the MDS-HC is being used as part of the MI-Choice waiver program to reduce nursing home admissions. In Ontario, Community Care Access Centre (CCAC) case managers use the instrument to determine needs, allocate services and make placement decisions.
The HCQI derivation was accomplished using data from Ontario and Michigan since these were the regions with the largest scale implementation in Canada and the US, respectively. In their paper describing the HCQI derivation, Hirdes et al. recommended the use of client-level risk adjustment for all but four indicators, and suggested the use of the Agency Intake Profile (AIP) as another method for risk adjustment. These various approaches to risk adjustment are the focus of this study.
The set of HCQIs used in this study were developed by members of interRAI, a non-profit multinational organization dedicated to the development and refinement of assessment instruments for older adults and persons with disabilities, and their related applications. The HCQIs represent outcome measures that document an agency's rate for triggering on a quality issue. For example, one indicator measures the prevalence of weight loss among individuals who are not considered to be palliative clients. The HCQIs include a mix of prevalence measures (i.e., measured on a cross-section of clients at one point in time) and incidence measures (i.e., failure to improve or incidence of an event measured across two points in time). All HCQIs are defined as events to be avoided such that a higher rate on the indicator is indicative of poorer performance.
This paper uses a dataset from two Canadian provinces to further explore these three types of risk adjustment for the interRAI HCQIs. It explores the effects of risk adjustment at both the level of the region and at the level of the individual home care provider.
The RAI Health Informatics Project was a two and a half year research study begun in 1999 in which fourteen of Ontario's 43 CCACs used the MDS-HC as part of usual practice for all adult (18 years and older) home care clients. CCACs provide several different services: information and referral, case management and placement in long-term care facilities. CCACs purchase in-home services from external contracted agencies through a "request for proposal" process. Services provided include in-home physiotherapy, occupational therapy, personal support and homemaking, medical supplies and equipment, and care by other professionals, such as social workers, dietitians and speech-language pathologists. Case managers oversee the assessment process, make referrals to service providers and then monitor the care provided in the home. Funding for CCACs is based on an annual budget provided by the Ontario Ministry of Health and Long-term Care. CCACs are governed by independent, non-profit boards of directors, accountable to the Ministry of Health and Long-term Care.
In Ontario, it is now mandatory for all long stay home care clients to be assessed with the MDS-HC. However, at the time of the RAI Health Informatics project, the MDS-HC was not mandated for use. As such, some clients (e.g., those expected to be on service less than two weeks) did not receive an MDS-HC assessment and were not included in the current project.
At this same time, Manitoba conducted a pilot implementation of the MDS-HC in the fourteen offices of the Winnipeg Regional Health Authority (WRHA). The WRHA is one of twelve health authorities in the province, and is responsible for providing health services to the approximately 646,000 residents of Winnipeg and the surrounding suburban area. The WRHA receives annual funding from the government of Manitoba. The home care program of the WRHA was established in 1974 with a mandate to provide effective and responsive health care services in the community to support independent living and facilitate admission into LTC facilities when independent community living is no longer an option. Home care services include personal care, nursing, counselling, occupational therapy and physiotherapy assessment, referral to other agencies and coordination of services.
In the WRHA region, an MDS-HC assessment was completed by care coordinators only if the client was anticipated to be on service for at least 90 days. As a result, the study sample was focused on a long stay population of home care clients.
All participating sites in the WRHA and in Ontario identified a set of case managers who received a two-day training session led by a member of the research team. They then used the MDS-HC instrument as part of their usual in-home assessment.
The data used for this study represent a cross-sectional cohort of home care clients assessed between November, 1999 and December, 2002. Two CCACs in Ontario and four WRHA sites were removed from the database because they submitted fewer than 20 assessments, resulting in a total of ten WRHA and twelve Ontario sites. Of these, three Ontario CCACs and eight offices of the WRHA also submitted client reassessments at approximately 90 days, which allowed for the calculation of failure to improve/incidence HCQIs.
The protocol for data collection was reviewed and received ethics clearance through the Office of Research Ethics at the University of Waterloo, Canada.
Risk adjustment attempts to adjust for differences in client populations that may bias the HCQI rates. Organizations that provide care to more impaired clients will tend to have higher unadjusted rates, regardless of the quality of care they provide. As such, risk adjustment methods are used to maximize the ability to make fair comparisons across providers. With any type of risk adjustment, caution must be exercised to prevent over adjustment (i.e., adjusting away poor practice). The choice of risk adjustment covariates is therefore highly important and should not include variables that would be considered to reflect suboptimal clinical care.
There are two generic types of risk adjustment that have been recommended for the HCQIs. The first adjusts for differences in the population at the client-level. Potential adjusters can include both individual assessment items and summary scales embedded within the MDS-HC. The HCQI developers evaluated a large range of potential covariates, considering their distributional properties, strengths of association with the outcomes of interest, consistency of findings across jurisdictions and potential for clinically inappropriate adjustment (e.g., benzodiazepine use was not considered a reasonable adjuster for falls). As a result, from zero to five risk adjusters were recommended for the 22 HCQIs and these client covariates were used in the current project.
The second type of risk adjustment was intended to control for two types of bias at the agency level: a) the agency's ability to identify differences in clients' clinical characteristics and b) differences in who an agency selects for admission. These risk adjustments are performed at the agency level after the individual-level risk adjustments are applied.
The Agency Intake Profile (AIP) was used following the methodology outlined by Morris et al. for the MDS 2.0 quality indicators. The AIP was calculated for each agency based on clients for whom the MDS-HC assessment was their intake assessment or for clients who had been on service for no more than 30 days. This group of clients was considered to be the intake cohort.
More recently, an alternative form of agency-level risk adjustment has been proposed, to control for potential selection and ascertainment bias. This adjustment employs the Case Mix Index (CMI) associated with the RUG-III/HC methodology. In this instance, the CMI for the intake cohort was calculated, and adjustments to the HCQIs were performed similar to those used for the AIP.
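The construction of these two agency-level covariates can be sketched as follows. This is an illustrative outline only, with hypothetical field names; it assumes only the rule stated above, that the intake cohort comprises clients whose MDS-HC assessment was their intake assessment or who had been on service for no more than 30 days.

```python
from collections import defaultdict

def agency_intake_covariates(clients):
    """Compute the AIP and intake CMI for each agency.

    clients: list of dicts with (hypothetical) keys
        'agency', 'days_on_service', 'is_intake_assessment',
        'triggers_hcqi' (bool), 'cmi' (float, RUG-III/HC index).
    Returns {agency: {'AIP': rate among intake cohort,
                      'CMI': mean case mix index among intake cohort}}.
    """
    cohorts = defaultdict(list)
    for c in clients:
        # Intake cohort: intake assessment, or on service <= 30 days
        if c["is_intake_assessment"] or c["days_on_service"] <= 30:
            cohorts[c["agency"]].append(c)
    out = {}
    for agency, cohort in cohorts.items():
        n = len(cohort)
        out[agency] = {
            "AIP": sum(c["triggers_hcqi"] for c in cohort) / n,
            "CMI": sum(c["cmi"] for c in cohort) / n,
        }
    return out
```

In this sketch the AIP is indicator-specific (one value per HCQI per agency), whereas the CMI is a single agency-level value, matching the descriptions above.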
The process followed for adjusting HCQI rates was the same as that used by Morris et al. in adjusting the US nursing home QI rates. Three sources of information are required: the agency-level observed rate, the agency-level expected rate and the grand mean across all agencies. The first data element was simply the raw observed HCQI rate for a given home care agency. The next data element involved the creation of an expected rate for each client within a given agency, based on output from a logistic regression model. In this model, each HCQI acts as a dichotomous dependent variable (i.e., triggering on the HCQI or not) and the client-level and/or agency-level covariates are entered simultaneously as independent variables. The expected values for all clients are then averaged to create an expected rate for the agency. Three separate risk-adjusted HCQI rates are thus computed, based on which method is used for adjustment: the expected rate based on the client covariates only (CC), on client covariates plus AIP (+AIP) and on client covariates plus CMI (+CMI).
The final adjusted value can be thought of as an estimate of an agency's HCQI rate if the agency had clients with an average level of risk. This risk adjustment method is similar to the concept of indirect standardization, in which the ratio of the observed to expected events is calculated then multiplied by the crude rate in the standard population. In the current project, the standard population used was the combined set of agencies from both the Winnipeg Regional Health Authority (WRHA) and from Ontario.
For each participating agency, the unadjusted HCQI rates were calculated for all 22 indicators. These rates represent the average rate across all eligible clients for a given agency. For prevalence indicators, the rates were calculated only for clients who had been on service for at least 30 days to avoid penalizing an organization for quality issues that were newly recognized.
Several methods were utilized to assess the impact of risk adjustment, both at the regional and at the agency-level. The unadjusted and three adjusted rates were compared between Ontario and the WRHA, to assess the effects at the regional level. At the level of the individual agency, ranks were calculated for each HCQI and a count was created to examine how often a given agency was among those with the four highest rates. Since higher rates on each HCQI were indicative of higher prevalence or incidence of undesirable outcomes, agencies ranked within the four highest rates were considered to be among the "worst performers." The range between agencies with the highest and fourth highest rates was also examined for both unadjusted and adjusted rates to assess the degree of variation among the group of worst performers.
The two regions were very similar in mean age and sex distribution. The WRHA clients were significantly less likely to have some level of cognitive impairment, as measured by the Cognitive Performance Scale (CPS), compared to Ontario clients, although the actual difference was only 2.6% (WRHA: 37.0% vs. Ontario: 39.6%; p < 0.0001). They were also significantly less likely than clients in Ontario to require some assistance with ADLs (22.0% vs. 27.1%, respectively; p < 0.0001), as measured by the ADL Self-performance Hierarchy Scale. Severe daily pain was experienced by 17.2% of the Ontario clients compared to 14.2% of the WRHA clients (p < 0.0001). Although the differences were statistically significant, with Ontario having significantly lower rates of both arthritis and hypertension, the absolute difference between the regions on the most common diagnoses was less than 5% (Table 1).
When comparing the two regions, there were statistically significant differences for five of the 22 unadjusted HCQIs (Table 2). In only one of these five cases (ADL rehabilitation potential and no therapies) was the rate in the WRHA significantly higher than the rate in Ontario. However, the actual size of the absolute difference between regions was small (0.3% on average). Among the prevalence HCQIs, the largest unadjusted difference was for disruptive/intense daily pain, which was 9.1% higher in Ontario. There were no statistically significant differences between the regions for any of the incidence HCQIs.
The AIP was calculated for each home care agency for each of the 19 HCQIs for which client-level risk adjustment was recommended. The AIP values shown represent the HCQI rate among an admission cohort within each region and were used in the risk adjustment models that included client-level covariates together with the AIP covariate. Ontario CCACs had a significantly higher AIP value for eight indicators, indicating that they admit, or at least assess, individuals as more likely to have these conditions on admission (Table 3).
Among new home care clients in Ontario, 52% of clients triggered on the HCQI for the prevalence of hospitalization versus 35% in the WRHA (difference of 17%).
Ontario also had a significantly (p < 0.0001) higher Case Mix Index (CMI) on intake (i.e., the CMI covariate), at 0.84 (95% CI: 0.81, 0.86), compared with the WRHA at 0.70 (CI: 0.68, 0.71).
Risk adjustment at the regional level
In general, the risk adjustment process minimized the differences between the two regions compared with the unadjusted rates. For example, the unadjusted difference between regions for the prevalence of disruptive/intense daily pain was 9.1%; however, the difference was reduced to 8.0% with the CC adjustment, 5.7% with the +CMI adjustment and 2.8% with the +AIP adjustment (Table 4). The +AIP adjusted rates showed the most variability, so that for five HCQIs, the direction of the difference was reversed. For example, the unadjusted rate of falls was 3.6% higher in Ontario than in the WRHA. Following the +AIP adjustment, Ontario had a rate that was 1.4% lower than in the WRHA.
Agencies with the highest rates
A summary measure was created to count the frequency with which an agency was ranked among the worst performers. Overall, only three of the ten offices within the WRHA did not change their ranking when rates were adjusted (Table 5). For example, Office 6 was ranked within the four highest (i.e., worst) agencies for three out of the 19 HCQIs both for unadjusted and for the three adjusted rates. In another six offices, the effect of risk adjustment was a decline in their standing (i.e., poorer quality) so that after at least one type of risk adjustment they were ranked among the worst performers. For example, in Offices 2, 3 and 7, the number of times they were ranked within the four highest rates increased as a result of the +AIP adjustment. The most dramatic effect was evident in Office 7 such that the unadjusted rates ranked this group among the worst performers only once, but the +AIP adjustment increased this frequency to five. There was no instance of an office in the WRHA consistently benefiting from risk adjustment.
In Ontario, only CCAC 900 experienced no change in their ranking following risk adjustment (Table 5). In another nine cases, at least one type of adjustment resulted in an improved standing, with the agency ranking among the four highest rates less often. For example, CCAC 1100 ranked among the worst performers ten times across the 19 HCQIs prior to risk adjustment, but the +AIP adjustment reduced this value to three. A similar effect was observed for CCAC 1200 which was ranked in the worst four performers six times across the unadjusted rates, but only once following +AIP adjustment.
Examining only the ranking of agencies may be misleading if very small changes in the actual rate resulted in an increase or decrease in the rank for a given agency. For example, the change in the rate for a given agency could be very small, say 5%, but could result in the organization moving from the fifth highest rate (i.e., fifth worst performer) to the third highest rate. Examining only the ranking tells one little about the actual magnitude of change among the worst performers. Therefore, it is also important to explore the range in the rates across the four highest agencies.
The unadjusted and CC adjusted rates had the largest degree of variation between the highest and fourth highest agency, with a mean of 11% across the 19 HCQIs. The +CMI adjustment had the next largest amount of variation at 10% and the +AIP adjustment at 8% (Table 6). These results further reinforce the finding that the +AIP method of adjustment consistently had the largest impact compared with the other types of risk adjustment.
Ontario CCACs exhibited several key differences when compared to the WRHA. Ontario sites had significantly higher unadjusted rates across four HCQIs. However, the degree of variation between the sites was small, with a mean difference across indicators of less than 1%. Ontario as a region had significantly higher AIP values for eight HCQIs, indicating a greater propensity in Ontario to admit clients exhibiting quality issues, and they also had a higher mean Case Mix Index.
Overall, the change in the actual rates at the regional level was small following risk adjustment. When risk adjustment was applied, for the client covariates alone or the client covariates with the CMI covariate, there was little effect on the mean difference between the two regions. In each case, the mean difference was positive (i.e., Ontario had a higher rate on average) and less than 1%. The +AIP adjustment, however, resulted in a negative mean difference (i.e., WRHA higher than Ontario on average) across the set of risk-adjusted HCQIs.
At the agency level, there was a greater influence of risk adjustment, as assessed by agency rankings across the set of indicators. In general, sites in Ontario benefited from risk adjustment and were less likely to be ranked among the worst performers. The opposite was true within the WRHA. Again, the +AIP adjustment had the largest influence when compared to the other two types of risk adjustment.
It is possible that the level of variation between agencies on the AIP covariate was greater than the level of variation between clients on the individual-level covariates. Although a detailed analysis is beyond the scope of this paper, use of a single HCQI example may prove beneficial. When examining the prevalence of falls (an HCQI that showed the largest changes in the agency rankings following risk adjustment), the coefficient of variation (CV) for the AIP value was 23.0. The CV is a relative measure of dispersion about the mean and is calculated by dividing the standard deviation by the mean and multiplying by 100. This CV value was much lower than any of the corresponding CV values for the various MDS outcome scales that reflect differences at the level of the individual client. For example, the CV for the Cognitive Performance Scale was 158.5 and for the ADL Self-performance Hierarchy Scale, the CV was 195.6 (data not shown).
Clearly, the +AIP adjustment had the effect of minimizing differences between the individual providers and it also resulted in rates that had less variability, as demonstrated by the drop in the CV value when comparing the unadjusted and +AIP adjusted rates for the prevalence of falls indicator (CV of 27.7 and 19.1, respectively; data not shown). However, this effect cannot be explained by differences in the variances of the adjusters since the AIP had a lower level of variance than the individual covariates.
It is also possible that the AIP covariate serves as a proxy for regional differences in practice patterns. If the Ontario agencies are grouped into four main geographic regions, the AIP value for the falls HCQI ranged from 24.1% to 34.0%. The degree of variation appears modest, but cannot be discounted as one factor in the ability of the AIP adjustment to minimize differences between providers and between regions.
Although there continues to be support for the conceptual notion of risk adjustment, the potential to over adjust remains a concern. In a recent publication, Mor et al. recognize the possibility for over adjustment and conclude that in the absence of a simple solution to this problem, researchers must carefully consider each performance measure on an individual basis in an attempt to minimize this issue.
In the current project, the AIP covariate represents a continuous, numeric value that can range from zero to one. Thus, it does not simply serve as an agency identifier, but represents the rate among an intake cohort. Furthermore, in the US, Morris et al. determined that the Facility Admission Profile, the conceptual cousin to the AIP, did not substantially improve the risk adjustment models and they did not recommend its use. This decision was not based on a fear of over adjustment, but rather a concern that the FAP added increased complexity without significantly improving the risk adjustment process.
It is also important to develop a clearer understanding of what is meant by "over adjustment". For example, it would seem reasonable to differentiate between instances of truly inappropriate risk adjustment where variance is due to poor practices (e.g., adjusting for benzodiazepine use in a falls indicator) and "over adjustment" due to the excessive use of spurious risk adjusters resulting in suppression of variance. Another possible example of over adjustment is the use of individual level covariates that are too closely related to the quality indicator. For example, one might argue that using dressing of the upper body as a risk adjuster for a QI on dressing the lower body is a form of over adjustment because the dependent variable is represented on both sides of the regression equation.
This area of research continues to present many challenges. Given the current results and those from the long-term care sector, it appears advantageous to undertake some form of risk adjustment, even though the optimal method is not yet established. Given that these HCQIs are new and so little is known about the risk adjustment process, it seems appropriate that all three types of risk adjustment be considered by researchers and policy makers. The +AIP adjustment represents the most conservative approach and may therefore be most appropriate for public reporting of HCQI results.
The assessment of quality of care, and ongoing refinement of risk adjustment methods, is ultimately intended to provide information to different audiences to assist in continuously improving the quality of care. To date, many initiatives in the US have taken place to provide additional information on the quality of LTC and home health services. Similar efforts have not begun in the Canadian home care sector.
Several important issues should be addressed prior to moving towards more public reporting of these types of quality data. For example, there needs to be a discussion of the relative importance of the HCQIs. The current project made no attempt to prioritize the indicators, although clearly some would have higher clinical priority than others. To some degree, individual agencies must determine their own priorities. However, a simple system based on prevalence, severity and modifiability may be useful in determining relative importance. For example, pain is a prevalent condition in this population, can be severely debilitating for clients and is clearly modifiable. One might argue that it is of higher importance to address than declining cognitive performance, which is less prevalent and in many cases not modifiable. On the other hand, cognitive decline can have extremely serious consequences for the individual.
In addition, a decision is needed regarding the calculation of the HCQIs, in particular, which pairs of assessments are to be used in the calculation of failure to improve/incidence HCQIs and what time period in between assessments is acceptable (e.g., a minimum of 90 or 120 days). There will need to be further analysis to explore the relationship between the length of stay and triggering on the HCQIs. It is anticipated that clients who have been on service longer will show important clinical differences from short-stay clients and would therefore be expected to have differential HCQI rates.
Tracking of the HCQI rates over time will be essential to determine the level of stability in the indicators. It would be useful to measure the amount of variation and to assess methods to summarize the rates over time in order to maximize their stability. For example, it may be important to report annual HCQI rates for a given organization or province if in fact the rates are found to be highly variable over shorter periods of observation. A shorter time frame may also lead to unstable rates if the number of observations is small. In this paper, a minimum of 20 observations was required and a decision would also be needed for the minimum sample size for public reporting of HCQIs.
Finally, it will be essential to assess possible methods to create summary measures across the set of HCQIs. This is not a simple task and one for which little research has been conducted to date. Previous research has pointed to the lack of correlation between measures of quality, but has also suggested that summary measures would be useful for consumers who would likely prefer less complicated information.
Knowledge about the home care sector in Canada remains rudimentary, and our knowledge and understanding of quality assessment in this sector is still in its infancy. The results from the current project serve to enhance the overall understanding of the issues involved in quality assessment and our ability, through the risk adjustment process, to create fair comparisons across providers.
Risk adjustment of quality indicators is important to enable fair comparisons across geographic regions or across home care providers. To date, little research has examined the quality of home care services. This project, using a set of HCQIs developed by interRAI, provides an important first step in assessing quality and the variable effects of different types of risk adjustment.
Mor V, Intrator O, Fries BE, Phillips C, Teno J, Hiris J, Hawes C, Morris J: Changes in hospitalization associated with introducing the resident assessment instrument. Journal of the American Geriatrics Society. 1997, 45: 1002-1010.
Phillips CD, Morris JN, Hawes C, Fries BE, Mor V, Nennstiel M, Iannacchione V: Association of the Resident Assessment Instrument (RAI) with changes in function, cognition, and psychosocial status. Journal of the American Geriatrics Society. 1997, 45: 986-993.
Fries BE, Hawes C, Morris JN, Phillips CD, Mor V, Park PS: Effect of the national Resident Assessment Instrument on selected health conditions and problems. Journal of the American Geriatrics Society. 1997, 45: 994-1001.
Hawes C, Mor V, Phillips CD, Fries BE, Morris JN, Steele-Friedlob E, Greene AM, Nennstriel M: The OBRA-87 nursing home regulations and implementation of the Resident Assessment Instrument: effects on process quality. Journal of the American Geriatrics Society. 1997, 45: 977-985.
Centers for Medicare and Medicaid Services: Home health compare. 2003, [http://www.medicare.gov/HHCompare/Home.asp]
Centers for Medicare and Medicaid Services: Nursing Home Compare. 2003, [http://www.medicare.gov/NHCompare/home.asp]
Morris JN, Moore T, Jones R, Mor V, Angelelli J, Berg K, Hale C, Morris S, Murphy KM, Rennison M: Validation of Long-term and Post-acute Care Quality Indicators. 2002, Cambridge, Massachusetts, Abt Associates Inc.
Centers for Medicare and Medicaid Services: Nursing Home Quality Initiative: Quality Measure Criteria and Selection. 2002, [http://www.cms.hhs.gov/quality/nhqi/final_qm.pdf]
Kidder D, Rennison M, Goldberg H, Warner D, Bell B, Hadden L, Morris J, Jones R, Mor V: MegaQI Covariate Analysis and Recommendations: Identification and Evaluation of Existing Quality Indicators that are Appropriate for Use in Long-term Care Settings. 2002, Cambridge, Massachusetts, Abt Associates Inc.
Hirdes JP, Fries BE, Morris JN, Ikegami N, Zimmerman D, Dalby DM, Aliaga P, Hammer S, Jones R: Home care quality indicators (HCQIs) based on the MDS-HC. Gerontologist. 2004, 44: 665-679.
Hirdes JP, Fries BE, Morris JN, Ikegami N, Zimmerman D, Dalby D, Aliaga P, Hammer S, Jones R: interRAI Home Care Quality Indicators (HCQIs) for MDS-HC version 2.0. 2001, [http://www.interrai.org/applications/hcqi_table_final.pdf]
Kahn HA, Sempos CT: Statistical Methods in Epidemiology. 1989, New York, NY, Oxford University Press
Morris JN, Fries BE, Mehr DR, Hawes C, Mor V, Lipsitz L: MDS Cognitive Performance Scale. Journal of Gerontology: Medical Sciences. 1994, 49: M174-M182.
Morris JN, Fries BE, Morris SA: Scaling ADLs within the MDS. Journal of Gerontology: Medical Sciences. 1999, 54A: M546-M553.
Mor V, Berg K, Angelelli J, Gifford D, Morris J, Moore T: The quality of quality measurement in US nursing homes. The Gerontologist. 2003, 43: 37-46.
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6963/5/7/prepub
The authors would like to acknowledge John Morris, Naoki Ikegami, David Zimmerman and Richard Jones as well as Pablo Aliaga and Suzanne Hammer who were instrumental in the development of the home care quality indicators. The authors are grateful for contributions to the development effort by Mary James, Nancy Curtin-Telegdi, Jeff Poss, Colleen Maxwell, Lori Mitchell, Trevor Smith, Paula Fletcher, Gary Teare, Michael Stones, Kim Voelker, and Judy Bowyer. The authors also wish to thank Roy Cameron, John Goyder and Margaret Denton for helpful comments on an earlier report on which this manuscript is based. The authors wish to thank the following agencies for financial support of this research: Agency for Health Care Policy Research (Grant 5 U18 HS09455), the Health Transition Fund – Health Canada (ON 421), interRAI, the State of Michigan and the Robert Wood Johnson Foundation.
The author(s) declare that they have no competing interests.
DD participated in the coordination of the study, assisted in the conceptual design of the study, carried out the data analysis and took the lead in developing the manuscript. JH designed the original study, oversaw the development of the indicators, gave input into the data analysis and development of the draft manuscript. BF was the Co-Principal Investigator in the development of the indicators and provided feedback in terms of data analysis and choice of statistical methods. All authors read and approved the final manuscript.