Opinion

Canadians deserve potentially life-saving early-warning health-care AI

As an academic who writes about the ethical and legal issues of artificial intelligence (AI), I admit to being concerned that it will be applied largely to the detriment of socioeconomic stability and society.

Commercialization of AI threatens to disempower and disenfranchise the labour force, further concentrate wealth and power, and destabilize democracy by making it nearly impossible for folks to distinguish what’s real from what’s fake.

But it’s not all bad news. The CHARTWatch AI tool developed by researchers at Unity Health Toronto recently made headlines with new research showing it was associated with a 26 per cent reduction in the risk of unanticipated death among patients hospitalized in general internal medicine at St. Michael’s Hospital. The machine-learning early-warning system analyzes the interaction over time of “more than 100 aspects of a patient’s medical history and current health status” found in patients’ electronic medical records, and alerts staff when a patient is at risk of sudden decline.
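
To make the mechanism concrete, here is a minimal sketch of how such an early-warning loop can work. CHARTWatch’s actual model, variables and thresholds are not public, so every name and number below is a hypothetical stand-in.

```python
# Illustrative only: not CHARTWatch. A toy early-warning loop that scores
# a snapshot of chart variables and flags patients above a risk cut-off.
import math
from dataclasses import dataclass

ALERT_THRESHOLD = 0.8  # hypothetical cut-off

@dataclass
class Snapshot:
    patient_id: str
    vitals: dict[str, float]  # stand-in for 100+ chart variables

def risk_score(snap: Snapshot) -> float:
    """Toy logistic score; a real system uses a trained model over time."""
    hr = snap.vitals.get("heart_rate", 80.0)
    sbp = snap.vitals.get("systolic_bp", 120.0)
    raw = 0.03 * (hr - 80) + 0.02 * (120 - sbp)  # invented weights
    return 1.0 / (1.0 + math.exp(-raw))

def patients_to_flag(snapshots: list[Snapshot]) -> list[str]:
    """Return the IDs the care team should be alerted about."""
    return [s.patient_id for s in snapshots
            if risk_score(s) >= ALERT_THRESHOLD]

print(patients_to_flag([Snapshot("A1", {"heart_rate": 135, "systolic_bp": 85})]))
# -> ['A1']
```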

CHARTWatch shows how AI could be used to save lives rather than destroy them. But these findings also illustrate how hospitals are in an ongoing state of slow-motion collapse. Some of the deaths averted almost certainly could have been prevented with proper staffing and improved patient oversight by workers who aren’t utterly burned out. This demonstrates that medical error and neglect, driven by a host of pressures that have accelerated rapidly since 2020, are a large and potentially growing problem in Canadian health care.

That being said, it would also be misleading to imply that well-rested humans with time to thoroughly review patients’ charts would find the same problems or raise the alarm the way CHARTWatch does. AI systems are known for using statistical approaches that humans do not, and for detecting patterns that humans miss. (To be fair, they can also be more prone to certain types of errors). The inability of humans to easily understand the internal methodology of AI systems is known as the “black box” problem. This opacity also poses an issue for their oversight, which many researchers are addressing by developing interpretable AI and enhancing reporting on underlying models.
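
As a toy illustration of what “interpretable” means here, the sketch below fits a simple model whose learned weights can be read directly. It assumes NumPy and scikit-learn are available; the variable names and data are invented.

```python
# Minimal sketch of interpretability, assuming NumPy and scikit-learn.
# The three "chart variables" and the data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                     # toy patient features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)
for name, w in zip(["heart_rate", "lactate", "age"], model.coef_[0]):
    # Unlike a deep "black box," each learned weight can be inspected
    # and audited; this is the core appeal of interpretable models.
    print(f"{name}: {w:+.2f}")
```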

CHARTWatch exemplifies one of the key beneficial applications of AI. Rather than replacing entire vocations and expanding the ranks of the unemployed, intelligently designed AI can replace or fill gaps in the work of key personnel we simply don’t have but desperately need, while also improving the quality of that work by leveraging capabilities humans lack.

Notably, the 26 per cent reduction in unanticipated death was measured against patients in the same hospital from an earlier period. Some non-palliative patients in the control group who died likely would have lived had CHARTWatch been implemented at the time. Given that we aren’t talking about, for example, a drug with serious side effects, this is a big deal.
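
For a sense of scale, here is back-of-envelope arithmetic showing what a 26 per cent relative reduction means; the baseline rate used below is hypothetical, not a figure from the study.

```python
# Hypothetical arithmetic only: the baseline rate is invented,
# not taken from the CHARTWatch research.
baseline_rate = 2.1 / 100    # assume 2.1 unanticipated deaths per 100 patients
relative_reduction = 0.26    # the reported relative effect
new_rate = baseline_rate * (1 - relative_reduction)
averted_per_1000 = (baseline_rate - new_rate) * 1000
print(f"{new_rate:.4f} vs. {baseline_rate:.4f}: "
      f"about {averted_per_1000:.1f} fewer deaths per 1,000 patients")
```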

A key concept in the ethical oversight of human health research is ensuring clinical equipoise, defined as genuine uncertainty as to which arm of a trial is most effective for a given condition. Though there remains a degree of doubt and the possibility of confounding variables, it can be argued that there would no longer be clinical equipoise for CHARTWatch. In other words, running a trial where some of a hospital’s internal medicine patients were monitored with CHARTWatch and others were not would be unethical.

This implies its use should conditionally become the standard of care in this context. There may be some harmful downsides to using CHARTWatch, but given that it only warns staff and doesn’t, for example, direct what medical care ensues, these downsides are likely limited relative to its benefits. There is a strong ethical argument that it should be implemented broadly in general internal medicine units while continuing to be studied extensively. Ongoing research into its effectiveness should compare outcomes with historical rates of unanticipated death, or with rates in other jurisdictions where the technology is not implemented.

Of course, even if CHARTWatch-style AI systems eventually become a standard of care accepted by physicians and regulatory bodies, that doesn’t necessarily mean they will be funded and implemented. There is no legal right to health care in Canada, in part because that would imply an obligation to fund every intervention, no matter how expensive or how weak its efficacy.

If we don’t continue to develop, test and implement our home-grown solutions, American big-data corporations will eventually become the only source of these products. Canadian health-care outcomes could then become dependent upon lopsided software contracts, some of which could also pose serious privacy risks for the personal information in Canadians’ health records.

Hopefully we can find the time and resources for computer programs that just might save a lot of lives.

1 Comment
  • Ediriweera Desapriya says:

    Yes, I completely agree with these perspectives. However, for AI systems to make a meaningful impact on patient safety, they must be carefully integrated into healthcare settings. There is a need for careful consideration of their role within the broader healthcare system, including proper training for clinicians, transparency in AI decision-making, and a collaborative approach that ensures AI complements rather than replaces human expertise. This also requires rigorous ethical oversight, particularly in balancing AI use with human intuition and expertise. Transparent, interpretable AI models will be critical in maintaining the trust of both healthcare providers and patients, ensuring that AI-driven decisions are aligned with ethical standards and patient safety.

    Moreover, the socioeconomic implications of AI adoption must be managed to avoid exacerbating inequalities. Ensuring that AI tools are accessible to all healthcare settings, not just those in well-funded or technologically advanced institutions, is crucial for ensuring equitable patient outcomes. As AI continues to evolve, regulatory frameworks should evolve as well, balancing innovation with patient safety, privacy, and equitable access.
    There is no doubt that AI holds great potential to enhance clinical decision-making and improve patient safety.

    However, its implementation must be approached with care, ensuring that ethical considerations, transparency, and equity are prioritized. By doing so, AI can serve as a powerful tool in advancing healthcare, reducing medical errors, and improving patient outcomes while maintaining the central role of healthcare professionals in the decision-making process.

    It’s also important to recognize that many AI tools, including chatbot systems, are still not fully trustworthy. Issues like algorithmic bias, hallucination, gaslighting, parroting and the “black box” problem (where we cannot understand how these AI systems work) remain significant barriers that must be addressed before AI can be reliably used in critical healthcare decisions.

Authors

Blake Murdoch

Contributor

Blake Murdoch is a lawyer and research associate with the Health Law Institute at the University of Alberta Faculty of Law.
