Opinion

Self-regulation, eh? Regulatory colleges’ Quality Assurance a time tax on health professionals

In Canada, health profession regulators promise self-regulation in the public interest, establishing standards and investigating complaints. But day to day, Quality Assurance (QA) – the activities meant to enhance patient care and minimize errors – still feels like registration: forms, hours, annual cycles. We measure participation, not contribution. We impose a hidden time tax on professionals and staff, while duplicating QA and professional-development activities already happening in hospitals, associations, simulation centres and communities of practice.

It doesn’t have to be this way. Right-touch, network-governed QA – grounded in risk assessment and focused on outcomes and public safety – can cut duplication and improve learning without new bureaucracy. And yes, Canada’s self-regulatory model has the capacity to deliver it, if we strengthen, not abandon, the “self.”

When QA is built from the regulatory colleges’ registration toolkit, it counts portfolios and credits but can’t show how any of it changes practice, improves equity of access or builds trust. Meanwhile, health systems already produce credible evidence of learning and quality – grand rounds, morbidity and mortality (M&M) rounds, simulation, team debriefs, supervision records, association-run programs, patient-experience data – that regulators often ignore or re-create.

The result is compliance theatre. We ask busy clinicians to re-enter what they’ve already done, in formats that serve the regulator’s database rather than clinical learning. It’s demoralizing and expensive – and the public doesn’t get the assurance it deserves.

There is a practical alternative health profession regulators can implement now.

First, adopt a network stance. Treat the regulator as one QA node among many, not the hub that specifies, collects and judges everything. Publish a simple recognition rubric for external evidence (e.g., simulation logs, team debrief notes, partner QA attestations) and use memorandums of understanding so roles are clear. When credible evidence already exists, recognize it instead of recreating it.

Second, make QA the driver of professional development. Quality is produced by individuals and groups. Keep a core spine – continuing professional development, self/peer/practice assessment and participation monitoring to meet statutory requirements – but add flexible, specialized streams to handle common and emerging issues with the right partners at the right level (think micro-modules and facilitated peer huddles co-run with institutions or associations).

Third, listen to the public continuously, not as a bolt-on survey. Maintain a few targeted channels – commend/report options, simple open feedback, occasional patient/client micro-surveys – and use that information for both selection (who needs support, where) and improvement (what we change in the program). Don’t duplicate hospital surveys; pull what’s already collected when appropriate.

Fourth, rather than starting with a “prove-you-complied” focus on individual practitioners, design a meaningful program in which every module contributes to both professional development and QA. Require each component to state its contribution hypothesis up front: which public-interest dimension it serves – protection, access, equity, trust or knowledge – and how that will be demonstrated. Then run short evaluation cycles (six to 12 months) to check adoption, usefulness and context-specific effects. Publish a brief “what we changed and why” note and retire low-utility tasks.

Fifth, design equity as support, not labels. Some groups need more support: early-career practitioners, those transitioning to retirement, small/rural/independent settings, internationally trained professionals. Offer low-cost submission routes, asynchronous peer sign-off and focused guidance. For experienced practitioners, recognize prior learning and expertise – credit mentorship, participation in communities of practice and real-world project work. Stop asking everyone to take the same credits again and again.

Sixth, trust – then verify. Credit “learning in the wild” through simple self- or peer-attestation, with proportionate spot checks. Where risk is low, attestation plus post-hoc verification is enough; where risk is higher, add brief peer sign-off or a small evidence sample. Keep metadata minimal (who, what, when, context) so artefacts are auditable without burden.

Finally, treat knowledge mobilization as part of the QA program, not an afterthought. Invest in regulator staff skills (evaluation, facilitation, knowledge translation, data literacy). If you need consultants, choose partners who build internal capacity and leave behind playbooks, not dependencies.

So, what changes if we do this?

Professionals will spend less time duplicating paperwork and more time learning. Regulators will publish clearer public-interest accounts – not just “X audits completed,” but “here’s how this component improved access or strengthened trust.” The relationship between regulators and professionals will shift from adversarial to co-productive because evidence generated in practice is welcomed and recognized. And the public will get what it was promised: credible assurance that the system is learning, not just logging.

Crucially, this is a bet on self-regulation, not a retreat from it. Health profession regulators already set standards, handle complaints and run QA. By acting as learning regulators – mobilizing knowledge across the network and testing their own contribution in short cycles – they can lead a model that’s lighter, smarter and fairer. Organizations like the Health Profession Regulators of Ontario (HPRO) make this shift easier by coordinating common tools, messages and public-facing information across regulatory colleges.

Workloads are high, trust is fragile and budgets are tight. We can’t afford QA that taxes time without buying quality. Recognizing credible external evidence, focusing on contribution over compliance and publishing what changed after each cycle are practical moves regulators can implement this year – no statute change required. If we want self-regulation to mean what it says, this is how we get there.


4 Comments
  • Larry Getman says:

    Interesting perspective. Please consider sharing it at the Canadian Network of Agencies for Regulation (CNAR) conference.

    • Igor Gontcharov says:

      Thanks, Larry – I appreciate the invitation! I’ve always valued CNAR’s balanced interest in both the theoretical and applied sides of QA and continuing competence. I recall the 2022 CNAR RFP asking what makes effective QA/competence programs and how best to assess and measure competence; those questions remain spot-on.
      What’s changed is the pace: with AI moving quickly into documentation, decision support, simulation and patient-communication tools, regulators may need to act proactively and start updating QA now – so it measures contribution, not just participation, and supports safe AI-enabled practice. We can also anticipate new forms of practice (AI-enhanced teams, synthetic/robotic assistants), which makes networked, evidence-based QA even more important.

  • Derek Ritz says:

    Our investments in digital health should afford us a useful insight into care quality. Analytic techniques can determine rudimentary “guideline adherence” metrics from encounters’ transactional data. Sadly, there is often pushback from our clinicians against generating and surfacing these insights. We should, I think, explore ways to address the sociotechnical barriers that impede our better use of IT in supporting continuous quality improvement.

    • Igor Gontcharov says:

      Thanks, Derek – totally agree on the sociotechnical piece. In my experience, the pushback often isn’t to the insights themselves but to the perception that new tools and metrics are surveillance linked to rewards and punishments. If the tech is seamless, consultative, and explicitly for quality improvement and supportive training (not performance policing), barriers drop.

      Helpful safeguards: decouple QA analytics from HR/performance management and state this plainly in policy; start with opt-in pilots, co-design indicators with clinicians and share aggregate signals first; build non-invasive prompts into EHRs/AI systems (e.g., “Consider asking about X for patient Y” when something may have been missed) and allow quick dismissal with a reason…
      Done this way, digital/AI tools feel like a practice support, not a trap – which is exactly the mindset shift continuous quality improvement needs.

Authors

Igor Gontcharov

Contributor

Igor Gontcharov, PhD, is a policy researcher and consultant in professional regulation and quality assurance based in Toronto. He has worked with health profession regulators across Canada.
