
Predictors of failed attendances in a multi-specialty outpatient centre using electronic databases

Abstract

Background

Failure to keep outpatient medical appointments results in inefficiencies and costs. The objective of this study was to identify the factors in an existing electronic database that affect failed appointments and to develop a predictive probability model to increase the effectiveness of interventions.

Methods

A retrospective study was conducted on outpatient clinic attendances at Tan Tock Seng Hospital, Singapore from 2000 to 2004. A random sample of 22,864 patients was analysed. The outcome measure was failure to attend, based on each patient's latest appointment.

Results

Failures comprised 21% of all appointments and 39% of patients' latest appointments. Using odds ratios from the multiple logistic regression analysis, age group (0.75 to 0.84 for groups above 40 years compared to below 20 years), race (1.48 for Malays, 1.61 for Indians compared to Chinese), days from scheduling to appointment (2.38 for more than 21 days compared to less than 7 days), previous failed appointments (1.79 for more than 60% failures and 4.38 for no previous appointments, compared with less than 20% failures), provision of cell phone number (0.10 for patients who provided a number compared with those who did not) and distance from hospital (1.14 for more than 14 km compared to less than 6 km) were significantly associated with failed appointments. The predicted probability model's diagnostic accuracy in predicting failures was more than 80%.

Conclusion

A few key variables were shown to adequately account for and predict failed appointments using existing electronic databases. These can be used to develop integrative technological solutions in the outpatient clinic.

Background

Failure to comply with outpatient medical appointments is a perennial problem, affecting costs, causing scheduling conflicts, and interrupting continuity of care. Failed appointments in different outpatient settings have ranged from 12% to 42% [1–7]. The resulting economic costs range from £65 per failed appointment in the United Kingdom in 1997 [2] to 3–14% of total outpatient clinic income in the United States [8]. This problem may be compounded if non-compliance with appointments is an indication of poorer clinical outcomes [9]. Most studies on failed appointments focused on the socio-economic and demographic factors that affect failures [1, 10–13]. Other factors studied include symptom duration or resolution, illness, long waiting periods, forgotten appointments, and other commitments [13–16]. Successful interventions have included reminders, giving patients a choice of date, improved communication, and selective overbooking [2, 10, 17]. However, almost all studies were for specific specialties in small-scale settings [2, 5, 8–13].

We wanted to determine the intrinsic and external factors affecting failed outpatient appointments using only routinely available data. Our objective was to examine the factors most associated with failed appointments in Singapore, and to devise a prognostic index that administrators may use to identify potential defaulters. The findings will allow administrators to account for these factors when scheduling attendances, and provide a platform for problem solving. Such a prognostic index will also allow targeting of patients at higher risk of defaulting, thereby reducing the costs of intervening in patients who would have kept their appointments anyway.

Methods

This was a retrospective cohort study on patients attending all outpatient clinics at Tan Tock Seng Hospital, a 1,400-bed general hospital in Singapore. Data was obtained from the hospital's appointment systems database and included 3,212,789 outpatient appointments from the creation of the electronic database in August 2000 to July 2004. Cancelled or rescheduled appointments were excluded, and a computer-generated random sample of 10% of patients was used.

Outcome measures and input factors

The outcome measure was failure of a patient to attend his most recent appointment, analysed for individual patients who had at least one visit from August 2001 to July 2004. This allowed us to have at least one year of appointment history (starting August 2000) for all patients.

A system-unique alphanumeric patient identifier was then used to sort all appointments by individual patients. The most recent appointment was then selected and coded as "actualised" if the patient registered during the scheduled clinic session, or "failure" if the patient did not attend the appointment. The same process was used to identify the appointment history for each patient. To account for the varying frequency and duration of follow-up between patients, we analysed past history of failed appointments as a proportion of all scheduled appointments, hence allowing us to use the entire database for the predicted probability model. Patients with no record of previous appointments within the entire database period starting August 2000 were classified separately. As the maximum inter-appointment duration is usually not longer than a year, we could assume that cases seen after August 2001 with no prior database records were correctly classified as having no prior appointments.
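As an illustration only (not the authors' actual code), the per-patient processing described above could be sketched in Python; the column names are hypothetical stand-ins for the appointment database fields:

```python
import pandas as pd

# Hypothetical extract of the appointment database: one row per scheduled appointment,
# with a 0/1 "failed" flag and a system-unique patient identifier.
appts = pd.read_csv("appointments.csv", parse_dates=["appointment_date"])
appts = appts.sort_values(["patient_id", "appointment_date"])

# The most recent appointment per patient is the outcome; all earlier rows form the history.
latest = appts.groupby("patient_id").tail(1)
history = appts.drop(latest.index)

# Past failures expressed as a proportion of all previously scheduled appointments.
prior_failure_rate = history.groupby("patient_id")["failed"].mean()

latest = latest.assign(
    prior_failure_rate=latest["patient_id"].map(prior_failure_rate),
    # Patients with no rows before their latest appointment are classified separately.
    no_prior_appointments=lambda d: d["prior_failure_rate"].isna(),
)
```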

Other factors studied included the patient's gender, race, age-group, days from scheduling to appointment, percentage of previous appointment failures, provision of cell phone numbers, distance from place of residence, and hospital admission during the appointment or between scheduling and appointment. Reasons for failed appointments were not obtained as there was no routine provision for contacting patients who defaulted. Direct distance from the patient's residence to the hospital was computed from the address zip codes and categorised into 3 groups – less than 6 km (1–2 districts away), 6 to 14 km (3–4 districts away), and more than 14 km (outlying districts). The data was stratified by specialties by categorising all 47 sub-specialty departments into 6 functional groups – medical subspecialties, surgical departments, ear, nose, and throat (ENT), ophthalmology, therapy, and others.
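Continuing the same hypothetical sketch, the distance bands follow the cut-offs given above, while the department codes and their grouping are invented purely for illustration:

```python
# Straight-line distance (km) from residence to hospital, banded as in the text.
latest["distance_band"] = pd.cut(
    latest["distance_km"],
    bins=[0, 6, 14, float("inf")],
    labels=["<6 km", "6-14 km", ">14 km"],
    include_lowest=True,
)

# Collapse the 47 sub-specialty departments into 6 functional groups (codes are hypothetical).
dept_to_group = {"GEN MED": "medical", "GEN SURG": "surgical", "ENT": "ENT",
                 "EYE": "ophthalmology", "PHYSIO": "therapy"}
latest["dept_group"] = latest["department"].map(dept_to_group).fillna("others")
```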

Statistical methods

Data extraction and management were done in Microsoft Access, and data analysis was performed using Stata [18]. All tests were conducted at the 5% level of significance, and we report the odds ratios and corresponding 95% confidence intervals.

We started with a univariate analysis of all variables using simple regression. As the effect of confounding has previously been shown to be important [19], a multivariate analysis with a multiple logistic regression model was also performed, starting from the most significant variable in the univariate analysis and adding the next most significant, using the likelihood ratio test to assess improvements in the model's fit. The coefficients from the logistic regression were used to formulate the predicted probability model. For the final model, we used a receiver-operating characteristic (ROC) curve to assess the model's ability to discriminate between actualised and failed appointments. The data was then stratified by the six specialty functional groups, and the final multiple logistic regression analysis was repeated to look for possible differences across specialty departments.
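The following is a minimal sketch of this forward-selection and ROC assessment; the study itself used Stata, and the formulas, variable names, and file name here are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats
from sklearn.metrics import roc_auc_score

df = pd.read_csv("latest_appointments.csv")  # hypothetical analysis file, one row per patient

# Forward-selection step: does adding the scheduling lead-time band improve the fit
# of a model that already contains the prior-failure band?
base = smf.logit("failed ~ C(prior_failure_band)", data=df).fit(disp=0)
extended = smf.logit("failed ~ C(prior_failure_band) + C(lead_time_band)", data=df).fit(disp=0)

lr_stat = 2 * (extended.llf - base.llf)  # likelihood ratio statistic
p_value = stats.chi2.sf(lr_stat, extended.df_model - base.df_model)

# Odds ratios and 95% confidence intervals for the current model.
or_table = np.exp(pd.concat([extended.params, extended.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]

# Discriminatory ability of the fitted model (area under the ROC curve).
auc = roc_auc_score(df["failed"], extended.predict(df))
print(or_table.round(2), f"\nLR test p = {p_value:.4f}, AUC = {auc:.2f}")
```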

Results

Failed appointments accounted for 21% of all appointments in the database. From our sampling, a total of 22,864 patients were included, and 39% of their most recent appointments resulted in failures. Table 1 gives the characteristics of the study population. 26% had no previous appointment record, and more than 40% of appointments were scheduled more than three weeks in advance. Only a small proportion were actually hospitalised prior to, or during, the appointment date (2% and 1% respectively). The majority of patients (60%) provided a cell phone number.

Table 1 Demographic characteristics and univariate factors associated with failed appointments, with the corresponding number of subjects (n), odds ratios, confidence intervals, and p-values (overall n = 22,864).

Analysis

In the univariate analysis (Table 1), we found that gender, race, age group, days from scheduling to appointment, previous failed appointments, provision of cell phone number, distance from the hospital, and department were all significantly associated with failed appointments.

From the multiple logistic regression analysis (Table 2), age group, days from scheduling to appointment, previous failed appointments, provision of cell phone number, distance from hospital, and department were independently and significantly associated with failed appointments. Those older than 40 years had significantly lower odds of appointment failure than those below 20. Malays and Indians had significantly higher odds ratios (OR 1.48 and 1.61 respectively) compared to Chinese. Scheduling-to-appointment time was a good predictor, with longer times increasing the likelihood of failure (OR 1.29 for 7 to 21 days, and 2.38 for more than 21 days). Prior appointment history was also strongly predictive of failure. Patients with more than 40% failed appointments had significantly higher odds compared to those with less than 20%. Patients without previous appointments had the highest odds ratio of 4.38. Those residing more than 14 km from the hospital had significantly higher odds of failure (OR 1.14) than those residing less than 6 km away. Those providing cell phone numbers were least likely to have failed appointments, with an odds ratio of 0.10 (95% CI: 0.10–0.11). Compared to surgical appointments, ENT, ophthalmology, therapy, and other appointments had significantly higher odds of failure. Variables that did not improve the model's fit were gender and hospital admission during or prior to the appointment.

Table 2 Multivariate factors associated with failed appointments with the corresponding odds ratios, confidence intervals, and p-values.

Predicted probability model

Based on the final model, we created a prognostic index to predict failed appointments. The predicted probability of failure (p_i) was calculated using the equation shown in Figure 1.

Figure 1. Predicted probability equation for appointment failure derived from the multiple logistic regression model.
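The fitted coefficients appear only in Figure 1 and Table 2, but a predicted probability from a multiple logistic regression takes the standard form below, where the x_ik are a patient's covariate values and the β_k the corresponding fitted coefficients:

$$ p_i = \frac{1}{1 + e^{-(\beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_k x_{ik})}} $$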

From the final model's receiver-operating characteristic curve (Figure 2), the area under the curve of 0.84 (95% CI: 0.83–0.85) indicates that the model's overall diagnostic accuracy in predicting failed appointments is good. Using a cut-off of p = 0.24, the model had a sensitivity of 80%, specificity of 70%, and an accuracy of 73%.

Figure 2. Receiver-operating characteristic curve of the final multiple logistic regression model for failed appointments.
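A minimal sketch of how such a cut-off could be applied and assessed, assuming arrays y_true (observed failures) and p_hat (predicted probabilities) obtained from the fitted model:

```python
from sklearn.metrics import confusion_matrix

CUTOFF = 0.24
predicted_default = (p_hat >= CUTOFF).astype(int)  # flag likely defaulters above the cut-off

tn, fp, fn, tp = confusion_matrix(y_true, predicted_default).ravel()
sensitivity = tp / (tp + fn)   # proportion of actual defaulters correctly flagged
specificity = tn / (tn + fp)   # proportion of attenders correctly left alone
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}, accuracy {accuracy:.2f}")
```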

Stratification by department

We also performed a stratified analysis of the final multivariate model for the department groups (Table 3). Provision of cell phone numbers was the only factor negatively associated with failed appointments across all departments, while having no previous appointments was positively associated throughout. More than 21 days from scheduling to appointment was positively associated with failure in all departments except therapy, where there was a non-significant negative association. Age above 40 years was negatively associated with failed appointments, except among elderly ophthalmology patients.

Table 3 Stratified analysis of factors by key departments

Discussion

This study demonstrates that routinely available administrative data can be used to construct a prognostic index for appointment failures. Using a cut-off probability of 0.24, the model correctly identified 80% of defaulters. Using the same cut-off, 30% of those who actualise their appointments would be wrongly classified. While imperfect, the model enables administrators to predict failed appointments with reasonable certainty for targeted intervention. Interventions have been shown to improve attendance, but certain methods such as personalised phone or postal reminders are manpower intensive [20–22]. With about 1,800 appointments a day in our clinics, the majority of which are actualised without intervention, such predictions may lead to cost savings by targeting interventions towards patients with a higher likelihood of defaulting.

Our analysis concurred with previous studies which showed that long waiting periods, repeat defaulters, and younger age groups are associated with increased likelihood of defaulting [1, 10, 13]. There are several findings of note that have not previously been reported. We found differences in the odds of attendance amongst different ethnic groups, which may reflect cultural differences that are amenable to interventions. Further studies are needed to explore the reasons for higher failure rates in Malay and Indian patients. More importantly, those who provided a cell phone number had an odds of actualising appointments 6 to 17 times higher than those who did not. This finding may be a conglomeration of various factors. Cell phone ownership may be an indicator of higher socio-economic status, which has been shown to be associated with higher rates of actualisation [10]. The provision of cell phone numbers could also indicate a patient's motivational level to attend appointments. Reasons aside, provision of cell phone numbers is an easily available yet robust predictor for appointment actualisation.

Some variables were less significant predictors than expected. We had expected travel distance to influence appointment failures, but the odds ratios were not as large as other variables. This may be due to convenient transportation and relatively short travel times in a small country like Singapore. Hospitalisation before and during the appointment date also did not contribute significantly, which may signify that hospitalisation itself does not preclude the need to seek treatment for other medical problems.

In the stratified departmental analysis, the effect of predictors, apart from cell phone numbers, was not uniform across departments. For example, the effect of duration from scheduling to appointment varies across specialties. This is to be expected because the duration of symptoms, urgency for treatment, and symptom resolution without treatment are different for conditions consulted at different specialties. The presence of this variation necessitates customised algorithms for individual departments in order for optimal predictions of appointment failure to be made.

There are several limitations to our study. We are uncertain if our findings can be generalised to other settings, as inter-institutional and inter-country differences similar to the observed inter-departmental differences may exist. There may also be differences between time periods. However, while the predicted probability equation is only relevant for this hospital, the analytic process can be replicated using the methods described, since the study relies only on routinely available administrative data, which can be automatically processed in institutions with computerised appointment systems. Detailed data on failed appointments were unavailable, and failed attendances may be rebooked as new appointments if the patient is contactable. In addition, data before August 2000 was unavailable. Greater data detail may help to increase the predictive accuracy, but the use of aggregate percentages in this study produced good results. Our study was also unable to analyse failed appointments by clinical condition and symptoms. Other studies have shown that different clinical conditions and health status may be linked to failed attendances [23, 24]. Future studies should include such variables to increase the predictive accuracy, but we note that our methodology already achieves a diagnostic accuracy of more than 80% on the basis of routinely available data alone. This shows that an easily automated and reproducible system can have good predictive ability despite not incorporating clinical data, which is not available in most computerised appointment systems.

Our findings can be made operational in several ways. Predictions, based on up-to-date and institutionally relevant data, can be built into appointment systems as automated algorithms. Lists of potential defaulters can then be generated using a desired sensitivity cut-off for targeted interventions to reduce appointment failure, as sketched below. In addition, educational messages can be targeted during prior appointments, based on automated profiling of future failure risk. Another commonly used strategy is over-booking to decrease opportunity costs, but this can result in increased waiting times if overdone. With forward predictions of the expected appointment failure rate of a future clinic session, over-booking strategies can be optimised.
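As an illustrative sketch only (hypothetical file and column names, reusing the fitted model from the Methods sketch), such a defaulter worklist might be generated as follows:

```python
import pandas as pd

CUTOFF = 0.24  # chosen from the institution's own ROC curve for the desired sensitivity

upcoming = pd.read_csv("next_week_appointments.csv")  # hypothetical extract of future appointments
upcoming["p_fail"] = extended.predict(upcoming)       # fitted logistic model from the earlier sketch

worklist = (
    upcoming.loc[upcoming["p_fail"] >= CUTOFF,
                 ["patient_id", "clinic", "appointment_date", "p_fail"]]
    .sort_values("p_fail", ascending=False)
)
worklist.to_csv("reminder_worklist.csv", index=False)  # targets for reminders or other intervention
```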

Conclusion

Failed appointments result in inefficiencies and economic costs, and may interrupt continuity of care. We attempted to address the causes in an outpatient clinic and found that a few key routinely available variables could adequately account for appointment failure. The predicted probability model could predict failures with reasonable accuracy. Administrators can use these techniques to uncover factors in their own clinics that deserve further study. In addition, there is potential for incorporating automated algorithms into information systems to achieve better targeting of interventions, as well as to optimise overbooking strategies.

References

  1. Deyo RA, Inui TS: Dropouts and broken appointments. A literature review and agenda for future research. Med Care. 1980, 18 (11): 1146-57.

  2. Hamilton W, Alison R, Sharp D: Effect on hospital attendance rates of giving patients a copy of their referral letter: randomized controlled trial. BMJ. 1999, 318: 1392-5.

  3. al-Shammari SA: Failures to keep primary care appointments in Saudi Arabia. Fam Pract Res J. 1992, 12 (2): 171-6.

  4. Gatrad: A completed audit to reduce hospital outpatients non-attendance rates. Arch Dis Child. 2000, 82: 59-61. 10.1136/adc.82.1.59.

  5. Chung JWY, Wong TKS, Teung ACP: Non-attendance at an orthopaedic and trauma specialist outpatient department of a regional hospital. Journal of Nursing Management. 2004, 12 (5): 362. 10.1111/j.1365-2834.2004.00484.x.

  6. Hermoni D, Mankuta D, Reis S: Failure to keep appointments at a community health centre. Analysis of causes. Scand J Prim Health Care. 1990, 8 (3): 151-5.

  7. Macharia WM, Leon G, Rowe BH, Stephenson BJ, Haynes RB: An overview of interventions to improve compliance with appointment keeping for medical services. JAMA. 1992, 267 (13): 1813-7. 10.1001/jama.267.13.1813.

  8. Moore CG, Wilson-Witherspoon P, Probst JC: Time and money: effects of failed appointments at a family practice residency clinic. Fam Med. 2001, 33 (7): 522-7.

  9. Griffin SJ: Lost to follow-up: the problem of defaulters from diabetes clinics. Diabet Med. 1998, 15 (Suppl 3): S14-24. 10.1002/(SICI)1096-9136(1998110)15:3+<S14::AID-DIA725>3.3.CO;2-9.

  10. Oppenheim GL, Bergman JJ, English EC: Failed appointments: a review. J Fam Pract. 1979, 8 (4): 789-96.

  11. Simmons AV, Atkinson K, Atkinson P, Crosse B: Failure of patients to attend a medical outpatient clinic. J R Coll Physicians Lond. 1997, 31 (1): 70-3.

  12. Sanders G, Craddock C, Wagstaff I: Factors influencing default at a hospital colposcopy clinic. Qual Health Care. 1992, 1 (4): 236-40.

  13. Dickey W, Morrow JI: Can outpatient non-attendance be predicted from the referral letter? An audit of default at neurology clinics. J R Soc Med. 1991, 84 (11): 662-3.

  14. Frankel S, Farrow A, West R: Non-attendance or non-invitation? A case-control study of failed outpatient appointments. BMJ. 1989, 298: 1343-1345.

  15. Pal B, Taberner DA, Readman LP, Jones P: Why do outpatients fail to keep their clinic appointments? Results from a survey and recommended remedial actions. Int J Clin Pract. 1998, 52 (6): 436-7.

  16. Cosgrove MP: Defaulters in general practice: reasons for default and patterns of attendance. Br J Gen Pract. 1990, 40 (331): 50-2.

  17. Barron WM: Failed appointments. Who misses them, why they are missed, and what can be done. Prim Care. 1980, 7 (4): 563-74.

  18. Stata Corp: Stata Statistical Software: Release 8.2. 2004, Stata Corporation, College Station, Texas

  19. Gruzd DC, Shear CL, Rodney WM: Determinants of failed appointment behavior: the utility of multivariate analysis. Fam Med. 1986, 18 (4): 217-20.

  20. Patel P, Forbes M, Gibson J: The reduction of broken appointments in general dental practice: an audit and intervention approach. Prim Dent Care. 2000, 7 (4): 141-4. 10.1308/135576100322578889.

  21. Thomas D: Postal reminders can improve attendance at orthodontic clinics. Evid Based Dent. 2004, 5 (1): 14. 10.1038/sj.ebd.6400244.

  22. Adams LA, Pawlik J, Forbes GM: Nonattendance at outpatient endoscopy. Endoscopy. 2004, 36 (5): 402-4. 10.1055/s-2004-814329.

  23. Cashman SB, Savageau JA, Lemay CA, Ferguson W: Patient health status and appointment keeping in an urban community health center. J Health Care Poor Underserved. 2004, 15 (3): 474-88.

  24. Yassin AS, Howell RJ, Nysenbaum AM: Investigating non-attendance at colposcopy clinic. J Obstet Gynaecol. 2002, 22 (1): 79-80. 10.1080/01443610120101790.


Acknowledgements

The authors would like to acknowledge Mr R Chan for his help in extracting and processing the data and the Tan Tock Seng Hospital Information Technology Department for assistance in the data extraction.

Author information

Corresponding author

Correspondence to Vernon J Lee.

Additional information

Competing interests

The author(s) declare that they have no competing interests.

Authors' contributions

VJL was involved in all areas including conceiving and designing the study, the data collection, the statistical analysis, and writing of the paper. AE was involved in conceiving the study, the data collection, statistical analysis, and writing of the paper. MIC was involved in designing the study and writing of the paper. BK was involved in conceiving the study and writing of the paper. All authors read and approved the final manuscript.

Vernon J Lee, Arul Earnest, Mark I Chen contributed equally to this work.


Rights and permissions

Open Access This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Lee, V.J., Earnest, A., Chen, M.I. et al. Predictors of failed attendances in a multi-specialty outpatient centre using electronic databases. BMC Health Serv Res 5, 51 (2005). https://0-doi-org.brum.beds.ac.uk/10.1186/1472-6963-5-51
