The ultimate goal of incorporating technology into clinical decision-making is to reduce variability in output for any given input. Two patients with the same disease, differing only in treating provider or clinic location, should receive equivalent treatments that reflect best practice. Where this goal is not met, technology has an opportunity to overcome noise (unnecessary variation) and error (harmful variation) in judgment and to ensure reliable healthcare and better outcomes for all. As the most common infectious cause of hospitalization and a disease with high-quality guidelines to direct care, community-acquired pneumonia (CAP) is a prime target for optimization.
The diagnosis of CAP relies on a constellation of clinical symptoms, radiographic findings, and, ultimately, a provider's judgment. Once pneumonia is diagnosed, its management similarly depends on a provider to identify patient risk factors, determine severity, initiate antimicrobials, and determine the appropriate level of care. Risk-stratification tools such as the Pneumonia Severity Index (PSI), CURB-65, and the ATS/IDSA severe CAP (sCAP) criteria are available; however, their use is limited by the need for manual tabulation, context-dependent accuracy, and correlation with differing outcomes (such as mortality for PSI) [1,2]. Furthermore, current scoring tools rely on factors selected by expert clinicians and may not fully realize the potential of available data and of artificial intelligence systems [2,3].
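To illustrate the manual tabulation burden these tools impose, CURB-65 assigns one point each for new-onset confusion, urea above 7 mmol/L, respiratory rate of 30/min or more, systolic blood pressure below 90 mmHg or diastolic at or below 60 mmHg, and age 65 or older. A minimal sketch of automated tabulation might look as follows (the input structure and field names are hypothetical; real electronic health record extraction would differ):

```python
from dataclasses import dataclass

@dataclass
class Vitals:
    # Hypothetical input structure; a real system would extract
    # these fields from the electronic health record.
    confusion: bool
    urea_mmol_l: float
    resp_rate: int
    sbp: int
    dbp: int
    age: int

def curb65(v: Vitals) -> int:
    """Tabulate the CURB-65 score (0-5): one point per criterion met."""
    return sum([
        v.confusion,                 # C: new-onset confusion
        v.urea_mmol_l > 7.0,         # U: urea > 7 mmol/L
        v.resp_rate >= 30,           # R: respiratory rate >= 30/min
        v.sbp < 90 or v.dbp <= 60,   # B: low blood pressure
        v.age >= 65,                 # 65: age >= 65 years
    ])
```

Even this five-item rule requires six data points per patient, which helps explain why manual scoring at the bedside suffers from low compliance and accuracy.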
While imperfect, risk-stratification scoring systems that predict mortality, need for hospital admission, or need for intensive care unit (ICU) care improve patient outcomes, possibly by reducing variability through standardization. However, barriers to their appropriate use exist: knowing that the scoring system is available, being able to gather the necessary input data, having enough time to gather those data accurately, understanding the results, being aware of model limitations, and appropriately incorporating results into clinical decision-making. Each of these barriers is a characteristic of innovations to consider and address, thereby optimizing the implementation of clinical decision-making tools [4].
Retrospective data have shown that manual scoring of PSI has low provider compliance and low accuracy [5]. Compliance and accuracy can be improved through rigorous implementation techniques, and such work is associated with significant improvements in outcomes. This provides a compelling incentive for systems to implement risk-stratification tools via technologies that encourage their use [6]. Automated PSI models provide predictive scores comparable to manual PSI scoring [7], and an automated CURB-65 model more accurately predicted 30-day mortality than the traditional CURB-65 model [8].
While automated models can follow logic-based decision rules to supply providers with actionable data [9], the vanguard of technology comprises multi-modal systems that alert providers to the possibility of pneumonia, aid in diagnosis and risk stratification, and incorporate patient-specific details to construct an individualized, evidence-based treatment plan. Such systems can be facilitated by deep-learning neural networks that speed detection of pulmonary infiltrates on chest imaging [10,11], Bayesian networks that use real-time electronic health record data to gauge likelihood of illness, automated data extraction for severity-of-illness tabulation, and mechanisms that enable communication between providers, making correct decisions easier to enact. Real-time electronic pneumonia clinical decision support of this kind was first implemented in Intermountain Health emergency departments in 2011 [12] and subsequently demonstrated reduced mortality [13], improved safe outpatient disposition of low-risk patients, decreased ICU utilization, and decreased delayed ICU admissions [14].
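Logic-based decision rules of the kind cited above can, in principle, be expressed as simple screens over real-time electronic health record data. The toy rule below is purely illustrative (the thresholds and inputs are our own assumptions, not the logic of any deployed system such as Intermountain's) and shows the general shape of such a screen:

```python
def flag_for_pneumonia_screen(temp_c: float, spo2_pct: float,
                              resp_rate: int, has_infiltrate: bool) -> bool:
    """Illustrative screening rule, not a validated clinical tool.

    Flags an encounter for pneumonia evaluation when fever or
    hypoxia is accompanied by tachypnea, or when imaging already
    shows an infiltrate.
    """
    suspicious_vitals = (temp_c >= 38.0 or spo2_pct < 92.0) and resp_rate >= 22
    return suspicious_vitals or has_infiltrate
```

Production systems replace such hand-set thresholds with validated models (e.g., Bayesian networks), but the alerting mechanics, evaluating each encounter as data arrive, are conceptually similar.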
Another area primed for technology to influence pneumonia management is image processing. A deep learning model, CheXNeXt, identified pneumonia on chest radiographs with performance comparable to board-certified radiologists and a shorter interpretation turnaround time [11]. Similarly, a 2020 study in Germany found no significant difference in accuracy of pneumonia detection on supine chest radiographs between their artificial intelligence model and practicing radiologists [15]. Comparison of CheXNeXt processing of chest radiographs with natural language processing of radiology reports showed superiority of the former, with important implications for future clinical adoption [10].
Use of any innovative technology should be accompanied by appropriate caution. Although automated models can standardize the way data are collected and processed, they can also propagate biases rooted in model development. Additionally, such models must account for provider workflows, time demands, and incentives. While comprehensive systems may reduce noise in judgment, they may also worsen provider alert fatigue or entrench outdated tools if not appropriately maintained.
Despite evidence that clinical decision support tools improve clinical outcomes [16], broad use remains limited by siloed electronic health records and challenges with large-scale implementation due to differences in technological infrastructure. These barriers may be addressable with standards-based approaches such as SMART on FHIR [17], developed to allow automated support tools to work seamlessly across available platforms. While promising, it is important to consider that artificial intelligence tools relying on data input in one system inherently reflect the input process of that particular system. Data gathered by other means in a new system must be validated against that system's specific workflows and data-collection tools, increasing the cost of adaptation and implementation. Additionally, providers, systems, and payers must consider how incentives influence the use of clinical decision support. If use of clinical decision support benefits one group and burdens another, then implementation and adoption will face additional barriers. By clarifying and aligning incentives, systems are better equipped to implement technology that improves patient outcomes and to ensure that all groups remain invested in its use.
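As one concrete illustration of the standards-based approach, FHIR represents laboratory results as Observation resources with a consistent JSON shape, so a decision-support tool can read the same field paths regardless of the source electronic health record. The sketch below (the values are invented; the structure follows the FHIR R4 Observation resource) extracts a serum urea nitrogen result:

```python
import json

# A minimal FHIR R4 Observation for serum urea nitrogen (values invented).
observation_json = """{
  "resourceType": "Observation",
  "status": "final",
  "code": {
    "coding": [{"system": "http://loinc.org", "code": "3094-0",
                "display": "Urea nitrogen [Mass/volume] in Serum or Plasma"}]
  },
  "valueQuantity": {"value": 21.0, "unit": "mg/dL"}
}"""

def read_quantity(resource: dict) -> tuple:
    """Pull the numeric result and its unit from a FHIR Observation."""
    quantity = resource["valueQuantity"]
    return quantity["value"], quantity["unit"]

obs = json.loads(observation_json)
value, unit = read_quantity(obs)
```

Because the `valueQuantity` path is fixed by the standard rather than by any vendor, the same extraction code can, in principle, feed a severity score in any FHIR-capable system, though, as noted above, the upstream data-entry workflows still require system-specific validation.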
Technology in the form of clinical decision support tools, electronic medical records, automated support tools, and artificial intelligence processing has significantly improved the management of CAP. Deep learning and artificial intelligence models are especially alluring because of their potential to reveal new facets of disease detection and care personalization. Further research is warranted to evaluate how embedding these tools in automated clinical decision support affects provider workflow and electronic medical records. Next steps include promoting their adaptability to various workflows in different settings and integrating them seamlessly into the clinician user interface to foster usability within a variety of electronic medical records.
Artificial intelligence involvement
The material of this article was produced entirely without the use of artificial intelligence.
Funding
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Conflict of interest
The authors declare no conflicts of interest that could be considered to influence, directly or indirectly, the content of the manuscript.