Deep learning algorithms to identify documentation of serious illness conversations during intensive care unit admissions.

Author information

  1. Department of Psychosocial Oncology and Palliative Care, Dana-Farber Cancer Institute, Boston, MA, USA.
  2. Harvard T. H. Chan School of Public Health, Boston, MA, USA.
  3. Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA.
  4. College of Science and Mathematics, University of Massachusetts Boston, Boston, MA, USA.
  5. Division of Pulmonary and Critical Care Medicine, Department of Medicine, Brigham and Women’s Hospital, Boston, MA, USA.
  6. Division of General Internal Medicine and Health Services Research, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, USA.
  7. Palliative Care, VA Greater Los Angeles Healthcare System, Los Angeles, CA, USA.
  8. Division of Palliative Medicine, Department of Medicine, Brigham and Women’s Hospital, Boston, MA, USA.

Abstract

BACKGROUND:

Timely documentation of care preferences is an endorsed quality indicator for seriously ill patients admitted to intensive care units. Clinicians document their conversations about these preferences as unstructured free text in clinical notes from electronic health records.

AIM:

To apply deep learning algorithms for automated identification of serious illness conversations documented in physician notes during intensive care unit admissions.

DESIGN:

Using a retrospective dataset of physician notes, clinicians annotated all text documenting patient care preferences (goals of care or code status limitations), communication with family, and full code status. The clinician-coded text was used to train the algorithms to identify such documentation and to validate their performance. The validated algorithms were then deployed to assess the percentage of intensive care unit admissions of patients aged ⩾75 years that had care preferences documented within the first 48 h.
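
The abstract does not give implementation details for this workflow. As a rough illustration only of the annotate, train, validate, and deploy loop it describes, the sketch below substitutes a simple TF-IDF and logistic-regression classifier for the authors' deep learning models; all note text and labels are invented.

    # Illustrative sketch only: annotate -> train -> validate -> deploy.
    # A TF-IDF + logistic regression classifier stands in for the study's
    # deep learning models; the notes and labels below are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Clinician-coded notes: 1 = documents care preferences, 0 = does not.
    notes = [
        "discussed goals of care with patient's daughter; DNR/DNI confirmed",
        "family meeting held regarding code status limitations",
        "vital signs stable overnight, continue current antibiotics",
        "chest x-ray reviewed, no acute findings",
    ]
    labels = [1, 1, 0, 0]

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(notes, labels)

    # Deployment step: score new physician notes from the first 48 h of admission.
    print(clf.predict(["goals of care conversation documented with family"]))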

SETTING/PARTICIPANTS:

Patients admitted to one of five intensive care units.

RESULTS:

Algorithm performance was calculated by comparing machine-identified documentation to clinician-coded documentation. For detecting care preference documentation at the note level, the algorithm had an F1-score of 0.92 (95% confidence interval, 0.89 to 0.95), a sensitivity of 93.5% (95% confidence interval, 90.0% to 98.0%), and a specificity of 91.0% (95% confidence interval, 86.4% to 95.3%). Applied to 1350 admissions of patients aged ⩾75 years, we found that 64.7% of intensive care unit admissions had care preferences documented within the first 48 h.
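
The note-level metrics above can be reproduced from a confusion matrix that compares machine-identified labels to clinician-coded labels; a minimal sketch follows, with function and variable names that are illustrative rather than taken from the paper.

    # Note-level F1-score, sensitivity, and specificity from binary labels
    # (1 = care preferences documented). Names and data are illustrative.
    from typing import Sequence

    def note_level_metrics(machine: Sequence[int], clinician: Sequence[int]) -> dict:
        tp = sum(1 for m, c in zip(machine, clinician) if m == 1 and c == 1)
        fp = sum(1 for m, c in zip(machine, clinician) if m == 1 and c == 0)
        fn = sum(1 for m, c in zip(machine, clinician) if m == 0 and c == 1)
        tn = sum(1 for m, c in zip(machine, clinician) if m == 0 and c == 0)

        sensitivity = tp / (tp + fn)   # clinician-positive notes the machine found
        specificity = tn / (tn + fp)   # clinician-negative notes correctly excluded
        precision = tp / (tp + fp)
        f1 = 2 * precision * sensitivity / (precision + sensitivity)
        return {"f1": f1, "sensitivity": sensitivity, "specificity": specificity}

    # Toy example (not study data):
    print(note_level_metrics([1, 0, 1, 1, 0], [1, 0, 1, 0, 0]))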

CONCLUSION:

Deep learning algorithms identified patient care preference documentation with sensitivity and specificity approaching that of clinicians, and in a fraction of the time. Future research should determine the generalizability of these methods in multiple healthcare systems.

KEYWORDS:

Quality indicators (healthcare); advance care planning; end-of-life care; intensive care units; machine learning

PMID: 30427267
DOI: 10.1177/0269216318810421