In Canada’s rapidly evolving digital health landscape, questions about how health data is accessed, stewarded and commercialized are becoming harder to ignore. Recent calls from Artificial Intelligence (AI) experts and federal advisors for greater data transparency underscore a growing tension: How do we unlock the power of health data while protecting public trust?
Too often, the debate becomes polarized. On one side, there are urgent calls to safeguard personal (patient) health information from misuse or commercial exploitation. On the other, there’s pressure to leverage patient data for innovation, improved population health and system-wide efficiency. These positions are often treated as mutually exclusive, but there is truth on both sides of the tension.
In today’s digital ecosystem, patient data can be used to inform marketing, influence prescribing or determine eligibility for clinical trials, often without the patient’s full awareness. These practices may be technically legal (or even ethical, depending on one’s interpretive lens), particularly when data is de-identified, but they’re frequently hidden in the fine print of consent forms or buried under layers of corporate partnerships.
At the heart of the problem is a lack of transparency, which in most (if not all) cases reveals an ethical failure that shouldn’t be chalked up to mere policy oversight. Patients deserve to know: Who has seen their data, for what purpose and with what consequences? If people cannot easily access their own records, yet private actors can mine that same data for commercial benefit, whose interests is the health system serving?
Still, this conversation shouldn’t stall at resistance. Secondary use of aggregated data has extraordinary potential when handled responsibly. We saw this in action when it enabled real-time pandemic response, accelerated public health insights and connected patients to life-changing clinical trials. But responsible use means transparent governance, patient representation and rigorous oversight.
The emergence of AI intensifies these tensions. AI models trained on vast amounts of health data offer the promise of clinical decision support, more personalized care and earlier detection of disease. They have the potential to expand access, enhance patient engagement and support equity-focused design, particularly when deployed thoughtfully in primary care, mental health and chronic disease management.
But these same systems introduce new risks and raise critical questions: Whose data is being used to train these models? Who governs how the models are applied? And who benefits – patients, providers or private actors? Without careful stewardship, we risk entrenching inequities, widening gaps in trust and reinforcing a system in which patients are subjects of innovation, not partners.
This is where the debate becomes most urgent, and where we must move beyond the simple binary of protection vs. progress (or privacy vs. potential) and co-develop a framework that acknowledges both the risks and the promise of health data.
First, we need patient-centred data governance. Informed consent should evolve from a passive checkbox to an active, ongoing conversation supported by public awareness and built on trust. Opt-out mechanisms for secondary data use should be clear, accessible and widely publicized.
Second, transparency should be the standard, not the exception. Health systems, data brokers and even publicly (or privately) funded research institutions should be required to disclose how patient health data is collected, used and shared, especially when it is used for commercial purposes.
Third, we need public oversight of commercial partnerships. Not all collaborations with industry are harmful, but when clinical (or AI-generated) recommendations are shaped by proprietary algorithms trained on patient data, the line between care and commerce grows dangerously blurred. Ethical review processes should prioritize patient well-being over profitability.
Fourth, AI systems must be evaluated through an equity lens. That means ensuring training data is representative, outcomes are audited for bias and patients themselves are involved in shaping these tools – not just subjected to them.
Finally, we must stop treating data as a commodity and start treating it as a relational good. Data reflects a person’s health journey, identity and vulnerabilities. Stewardship, not ownership, should guide our use of it. This means stewarding patient data as a privilege, not a right; honouring Indigenous data sovereignty; embedding culturally responsive governance practices; and including patient voices on decision-making boards.
For too long, patients have been positioned as subjects of innovation, not partners in it. Canada now has an opportunity to choose a different path, one that respects privacy, earns trust and still supports innovation in a way that is ethical and inclusive.
This is not about picking sides; it is about finding balance (a place of relational commonality). It is about learning how we can move forward equitably, transparently and responsibly while recognizing the tensions inherent in dealing with patient data. Patients deserve to benefit from the power of their data, but they also deserve to know, to choose and to trust.
