Table 1 Design decisions made during human-centered design

From: Involving end-users in the design of an audit and feedback intervention in the emergency department setting – a mixed methods study

| Interview observations | Design solution (initial design) | Rationale | Iterative improvement based on follow-up interviews |
|---|---|---|---|
| **Metrics** | | | |
| a) Summary measures | n patients; n patients by area, by shift, and by discharge disposition | To give physicians an overview of the patients they saw in a month; area of care and shift are shown because they might affect case mix | |
| b) Length-of-stay metrics | Provider-to-decision time; overall LOS^a (median) | Overall LOS is most important for the ED, but provider-to-decision time is easier for an individual provider to influence | 'Provider to decision time' changed to 'Room to decision time', as patients might see another provider in the waiting area (iteration 1) |
| c) Utilization of tests | CT, MR, US, and lab utilization (%) | Can be affected by the physician and is known to affect LOS | |
| d) Outcomes | 72-h return rates, deaths (%), LOS after admission (median) | Return rates and deaths were included to address concerns about negative outcomes; LOS after admission was included as a proxy measure of the appropriateness of admissions | Removed deaths as an outcome, as it is not feasible to reliably obtain these data from the EMR (iteration 1) |
| **Comparisons** | | | |
| a) Over time | Monthly intervals | A balance between reports so frequent that they mostly reflect random variation and so infrequent that physicians no longer remember what happened | |
| b) To peers | Blinded ranking (e.g. 'your ranking 46/60'), with the outcomes of peers ranked just above and below shown; overall ED medians shown on a separate page | Blinded, since all interviewed physicians agreed un-blinded comparison was not desired or needed; a ranking showing neighboring peers gives physicians an attainable goal | Changed the ranking to the interquartile range of peers instead, since the optimal rate likely lies somewhere in the middle and outliers in either direction can be a problem (iteration 1) |
| **Functionality** | | | |
| a) Ease of access | Monthly email summary with 3 measures that can be selected by ED leadership based on priorities | Easy to access | |
| b) Drilldown functionality | Option to access the full dashboard through a link in the monthly email | Provides drilldown functionality | Added tabs to drill down by type of shift (e.g. night) and assigned area (iteration 1) |
| c) Customization | Physicians can select measures to show on their own favorites page; leadership can select the measures in the monthly email | Customization options for individual physicians and for leadership based on ED priorities | |
| **Barriers** | | | |
| a) Adverse consequences on quality of care | Inclusion of outcomes on the dashboard | To avoid a focus only on throughput and utilization measures, which might result in adverse consequences | |
| b) Conflicting teaching responsibilities | No measures related to teaching were included, as teaching was not the goal of the dashboard | | |
| c) Data accuracy | Extensive validation of the data is required; definitions of the measures were included on the dashboard | | Made section headers very clear (iteration 2) |
| d) Case-mix adjustment | Show the total number of patients during different shifts and in different areas of care | By showing these measures, physicians can put the other measures in context | Added tabs to drill down by area of care and type of shift (iteration 1) |

Abbreviations: CT computed tomography, ED emergency department, EMR electronic medical record, n number, MR magnetic resonance imaging, US ultrasound

^a Overall LOS is only shown on the overall ED page, not on individual physician dashboards