The question facing Ontario’s health-care system is no longer whether artificial intelligence (AI) will play a role in care delivery. It already does – AI tools are being used in exam rooms, embedded in electronic medical records and accessed by patients directly, often before a physician is ever consulted.
The real question is whether AI will be integrated in a way that strengthens patient care, supports physicians and upholds the core values of medicine, or whether it will be adopted haphazardly, driven by commercial interests and system pressures, leaving physicians reacting to technologies they did not design and do not govern.
AI holds enormous promise. When thoughtfully designed and carefully implemented, it has the potential to reduce administrative burden, improve clinical decision-making, enhance system planning and support patients in navigating an increasingly complex health-care landscape. Yet without clear, focused, physician-led and patient-centred governance, the AI we get may not be the AI we need.
AI is gaining momentum in Ontario in part because our health-care system is under extraordinary strain. Community-based and primary care are increasingly fragile. Patient volumes are rising, complexity is increasing and physicians face relentless administrative demands from forms, documentation and non-clinical tasks. These pressures contribute directly to burnout, early retirement and reduced access to care.
Against this backdrop, AI has presented itself as a compelling solution, promising efficiency, scalability, cost containment and convenient care. It offers the possibility of doing more with fewer human resources in a system already struggling to meet demand. Aggressive marketing, often directed not only at health-care organizations but also directly at patients, has inflated expectations. Many of these tools are being adopted quickly, often without independent evaluation or robust local validation.
Some early applications of AI have delivered tangible benefits. AI scribes, for example, have demonstrated the potential to reduce time spent charting, allowing physicians to focus more fully on the patient in front of them. Even modest time savings, when multiplied across thousands of clinical encounters, can translate into meaningful reductions in workload and burnout. Other jurisdictions are piloting AI-assisted prescription renewals and administrative triage, pushing beyond simple transcription toward more complex clinical-support tasks.
The integration of AI into electronic medical records and hospital information systems (EMRs/HISs) raises additional possibilities. An embedded AI clinical decision support system (AI-CDS) that can surface relevant guidelines, summarize evidence or flag potential concerns during a patient encounter may improve the efficiency and consistency of care. AI-enabled literature review and evidence synthesis platforms offer clinicians timely access to evolving medical literature, helping address a long-standing challenge in clinical practice.
At the population level, AI has the potential to support health system planning by identifying trends, anticipating demand and highlighting gaps in care. Used responsibly, these tools could improve resource allocation and help policymakers respond more effectively to population health needs.
These are real and meaningful opportunities. Ignoring them would be a mistake. But embracing them uncritically would be an even greater one.
Despite its promise, AI carries significant risks that cannot be ignored. Patient safety remains the most immediate concern. Large language model-based chat tools, now widely accessible to the public, can generate convincing but incorrect medical advice. There are documented cases of missed diagnoses, inappropriate medication recommendations and serious harm associated with overreliance on AI-generated health information.
AI systems are prone to so-called “hallucinations,” producing outputs that sound authoritative but are factually wrong. In a clinical context, these errors can be dangerous. Even AI scribes, often viewed as low-risk tools, can introduce inaccuracies into the medical record, with downstream consequences for patient care and medicolegal risk.
As AI systems become more sophisticated, they are also becoming more “empathetic.” Chatbots can spend unlimited time with patients, responding patiently, validating concerns and mimicking the language of care. While this may feel supportive, these systems are trained within largely opaque “black box” models that inherently reflect gender, racial and societal biases, and they operate without meaningful checks and balances or a human in the loop. This fosters a false sense of a therapeutic relationship, in which patients may attribute understanding, accountability or clinical judgment to systems that possess none of these qualities. The potential harm may be greater than with human interaction, because the public often perceives AI as more accurate, objective and infallible than it truly is.
Increasingly, patients arrive at clinical encounters having already consulted AI tools and formed fixed expectations about diagnosis or treatment. Managing these expectations requires time, communication and trust. When AI recommendations conflict with physician judgment, the potential for confusion and erosion of trust increases on both sides of the encounter.
Without clear guidance for patients and clinicians alike, AI risks complicating rather than simplifying care.
Perhaps the most profound risk posed by poorly governed AI is its potential to erode primary care, the foundation of Ontario’s health-care system. Family physicians provide far more than episodic diagnosis and treatment. Longitudinal primary care is built on continuity, context, relationship and, most importantly, trust. That trust is established over time through consistent presence, accountability for outcomes and a deep understanding of patients’ medical histories, family dynamics, social circumstances and unspoken cues, with the shared expectation that every decision is grounded in the patient’s best interest.
A patient’s downward gaze, a hesitation in response or a subtle change in affect can signal something significant. These are not data points easily captured by algorithms. Family physicians often care for multiple members of the same family, integrating information across generations and contexts. This depth of understanding supports safer, more effective care.
AI chatbots, by contrast, see only snapshots. They lack continuity, relational memory and accountability. They cannot hold responsibility for outcomes, nor can they navigate the ethical and emotional complexity inherent in clinical care. Used as adjuncts, they may provide useful support. Used as substitutes, they risk fragmenting care further and undermining the very foundations of effective primary care.
In a system already struggling to recruit and retain family physicians, overreliance on AI as a replacement rather than a complement could accelerate decline rather than alleviate pressure.
Equity concerns also loom large in AI adoption. AI systems are only as good as the data on which they are trained. If training datasets do not adequately represent Ontario’s diverse populations, including Indigenous communities and marginalized groups, algorithms may perform poorly or generate biased outputs.
Bias in AI is not merely theoretical. It can lead to misclassification, underdiagnosis or inappropriate recommendations for certain populations. Without rigorous local validation and ongoing monitoring, AI risks worsening existing inequities rather than reducing them.
Equity concerns extend to access. Rural and remote communities may face connectivity challenges that limit effective use of AI tools. Non-English speakers and individuals with limited digital literacy may be excluded from benefits that others enjoy. If AI becomes embedded in pathways of access to care, these gaps may widen, deepening existing inequities.
Ensuring equitable implementation requires intentional design, inclusive data practices and policy oversight. It cannot be left to market forces alone, especially since AI in health care depends on vast quantities of personal health information. How this data is collected, stored, used and shared has profound implications for trust in the health-care system.
Clear rules around data ownership are essential. Patients must understand how their information is being used, and clinicians must have confidence that data is handled responsibly. Transparency around commercial interests is critical. The sale or secondary use of health data for purposes unrelated to patient care undermines trust and raises serious ethical concerns.
Ontario’s existing privacy laws and institutional policies provide some protection, but they are fragmented and not designed with AI-specific risks in mind. As AI systems become more integrated into care delivery, governance must evolve accordingly. Strong security standards, accountability mechanisms and clear limits on commercial exploitation are non-negotiable if public trust is to be maintained.
Yet governance frameworks in Ontario have not kept pace. While professional accountability structures, privacy legislation and hospital policies exist, they are not cohesive, comprehensive or AI-specific. There is no unified vision for how AI should be used in health care, nor clear guidance for clinicians or patients.
This governance gap is not benign. In the absence of clear standards, decisions default to vendors, institutions under pressure or individual clinicians navigating risk alone. This fragmentation increases variability, exposes patients and physicians to harm and undermines professional autonomy.
What is needed is clear, physician-led governance that positions AI as a complement to medical practice, not a replacement. Physicians must work alongside patients, ethicists, data scientists, legal experts and policymakers to develop standards for safe, ethical and effective AI use in health care.
AI literacy is also essential. Clinicians need training to understand the capabilities and limitations of these tools. Patients need education to use AI safely and appropriately. Without shared understanding, misunderstanding and misuse are inevitable.
And without clear legal and financial frameworks, who is responsible when AI fails: vendors, institutions or physicians? Should physicians be expected to compromise their professional judgment under commercial pressure?
An emerging concern is the environmental impact of AI, not just its energy use but its effects on planetary health and human well-being. Training and running large AI models require significant computational power, and as AI spreads through health care, these demands add up.
Responsible implementation must consider these environmental and health costs alongside clinical benefits. Climate-related health impacts can increase illness and downstream health-care costs. Energy-efficient algorithms, careful procurement and sustainability should guide AI adoption to ensure innovation does not create future health crises or undermine broader societal goals.
AI is not a passing trend. It is a transformational technology that is shaping the future of medicine. Used wisely, it can help address system pressures, reduce administrative burden and support high-quality care. Used poorly, it risks misdiagnosis, fragmentation, inequity and erosion of trust.
The outcome is not predetermined. It depends on governance.
Without firm leadership from the profession, physicians risk being left behind, reacting to technologies imposed upon them rather than guiding their development and use. Medicine cannot afford a future in which clinical judgment, professional autonomy and patient relationships are secondary to efficiency metrics and commercial priorities.
Physicians must claim a central role in shaping how AI is integrated into health care. Doing so is not about resisting innovation. It is about ensuring that innovation serves patients, supports clinicians and strengthens the profession.
AI is the future of health care. The real question is who will guide that future: will physicians lead, ensuring technology serves patients, or will we be forced to adapt after the fact, risking the trust and relationships that form the foundation of all care? The choice and the responsibility lie with us.

Chandi and Jane, thank you for raising these issues and for taking a stab at analyzing AI through the eyes of physicians. The governance gap is real, and the urgency is legitimate. But as a family physician who has been implementing AI tools in active clinical practice, I want to respectfully challenge some of the article’s core assumptions.
First, the framing of “physician-led” governance, while intuitive, risks recreating the same professional insularity that has slowed health system transformation for years. Effective AI governance must be patient-centred and interprofessional. Physicians are essential voices — but not the only ones. Nurses, pharmacists, data scientists, patients, and ethicists have equally legitimate stakes in how these tools are designed and deployed. And frankly, the article’s central call — that physicians must lead AI governance — sidesteps the harder and more important question: governed by whom, exactly? The OMA? The CPSO? Health Canada? A new dedicated body? Without naming the institutions, mechanisms, and accountability structures, “physician-led governance” remains an aspiration rather than a plan.
More critically, it is neither fair nor realistic to expect individual physicians to evaluate every AI tool they encounter, assess its safety profile, scrutinize its training data, and navigate its medicolegal implications — all while managing panels of increasingly complex patients. That burden cannot and should not fall on clinicians alone. What we actually need is for governments, medical associations, and regulatory bodies to do the heavy lifting: establishing independent evaluation frameworks, setting minimum standards for clinical validation, and creating clear liability pathways so that physicians are not left exposed when tools fail. The responsibility for safe AI integration belongs to the system, not the individual practitioner.
And here lies perhaps the most urgent challenge the article underplays: the pace of AI development is outstripping our collective ability to respond. By the time a governance framework is consulted upon, drafted, reviewed, and implemented, the technology it was designed to address has already moved on. Our institutions must become genuinely nimble — capable of iterative, responsive policy-making rather than the slow consensus-building processes that characterize most health system change. That is a significant cultural shift for organizations not historically known for speed.
Second, I want to push back on the characterization of AI as a threat to the physician-patient relationship. My experience is precisely the opposite. Patients who arrive having researched their concerns with AI tools are often better prepared, asking more focused questions and engaging with more evidence-based information than in the era of symptom-searching on Google, where advertising algorithms routinely shaped what people found. That shift strengthens the therapeutic encounter rather than undermining it. It creates a more informed starting point for shared decision-making and, in my experience, deepens trust rather than eroding it.
Finally, the article conflates radically different technologies — consumer chatbots, ambient scribes, clinical decision support systems, and diagnostic AI — under a single risk umbrella. Their governance requirements, risk profiles, and appropriate oversight mechanisms are not the same, and a framework that treats them identically will be simultaneously over-restrictive in some areas and dangerously under-protective in others.
The call to action is right, and these are exactly the conversations our profession needs to be having, with more precision about the path forward. We know there are risks; now I would like help finding solutions.
Dr. Darren Larsen, MD — Family Physician, Collingwood, ON
Adjunct Professor, Master of Health Informatics, Institute of Health Policy, Management and Evaluation, University of Toronto.
Sounds like you are demanding physician control in order to protect the guild. AI will radically overhaul medical practice, just as other technology has done (e.g. radiology or lab physicians), often reducing the need for and incomes of physicians, and/or demanding higher productivity (e.g. more patients seen, reduced wait times).
We cannot assume that physician-led AI is in the best interests of patients.
The question has to be: what is in the best interests of patients? Better outcomes, fewer doctor visits, shorter wait times.
Not: what is in the best interests of physicians!
Very well said, Adam; could not agree more. We are not living in a submissive patriarchal world any longer, thank goodness.
A very important topic; thank you for writing on this, and hopefully it gets addressed by policy-makers without question. One line jumped out at me, where you mentioned that “AI systems are prone to so-called ‘hallucinations,’ producing outputs that sound authoritative but are factually wrong.” This made me think of systematic reviews and meta-analyses, which dovetail to some extent with this topic. You may already know of this, but it is worth mentioning again:
“Due to the aforementioned limitations involving unnecessary, misleading and conflicting systematic reviews and meta-analyses, Ioannidis concludes that these flawed studies are not promoting evidence-based medicine and health care. In fact, he estimates that only 3 percent of all meta-analyses represent good and truly informative studies.”
https://www.citizen.org/news/trouble-with-systematic-reviews-and-meta-analyses/