Effects of a computerized feedback intervention on safety performance by junior doctors: results from a randomized mixed method study

Sabi Redwood*, Nothando B. Ngwenya, James Hodson, Robin E. Ferner, Jamie J. Coleman

*Corresponding author for this work

Research output: Contribution to journal › Article (Academic Journal) › peer-review

9 Citations (Scopus)

Abstract

Background: The behaviour of doctors and their responses to warnings can inform the effective design of clinical decision support systems. We used data from a university hospital electronic prescribing and laboratory reporting system with hierarchical warnings and alerts to explore junior doctors' behaviour. The objective of this trial was to establish whether a Junior Doctor Dashboard providing feedback on prescription warnings and laboratory alert acceptance rates was effective in changing junior doctors' behaviour.

Methods: A mixed methods approach was employed, comprising a parallel-group randomised controlled trial and individual and focus group interviews. Junior doctors below specialty trainee level 3 grade were recruited and randomised to two groups. Each doctor in the intervention group (N = 42) was e-mailed a link to a personal dashboard every week for 4 months; nineteen of them participated in interviews. The 44 control doctors did not receive any automated feedback. The outcome measures were the changes in responses to prescribing warnings (of two severities) and laboratory alerts (of two severities) between the months before and the months during the intervention, analysed as the difference in performance between the intervention and control groups.

Results: No significant differences were observed in the rates of generating prescription warnings or in the acceptance of laboratory alarms. However, responses to laboratory alerts differed between the pre-intervention and intervention periods. For doctors of Foundation Year 1 grade, this improvement was significantly greater (p = 0.002) in the group with access to the dashboard (53.6% of alerts ignored pre-intervention versus 29.2% during the intervention) than in the control group (47.9% versus 47.0%). Qualitative interview data indicated that, while junior doctors were positive about the electronic prescribing functions, they were discriminating in the way they responded to other alerts and warnings, given that, from their perspective, these were not always immediately clinically relevant or within the scope of their responsibility.

Conclusions: We were able to provide only weak evidence, and only in one of several domains, that a clinical dashboard providing individualised feedback has the potential to improve safety behaviour. The construction of metrics used in clinical dashboards must take account of actual work processes.

Original language: English
Article number: 63
Number of pages: 10
Journal: BMC Medical Informatics and Decision Making
Volume: 13
DOIs
Publication status: Published - 4 Jun 2013

Keywords

  • Patient safety
  • Clinical decision support
  • Junior doctors
  • DECISION-SUPPORT-SYSTEMS
  • INFORMATION-TECHNOLOGY
  • PRESCRIBING ERROR
  • IMPROVEMENT
  • OUTCOMES
  • AUDIT
  • MODEL
  • CARE
