Opinion

Professional self-regulation can still work if we run it in the public interest

What is professional “self-regulation,” really? Society sometimes concludes that certain fields are so complex and context-bound that external, arm’s-length control would be blunt and slow. We’re all better off, the reasoning goes, if professionals set standards, assess competence and monitor one another – while the state provides the legal framework and oversight. In Ontario, for example, the Regulated Health Professions Act sets out how health professions govern themselves, including quality assurance (QA) requirements.

That’s the theory.

In practice, however, professional self-regulation in health care is under pressure. Some argue it’s outdated, conflicted or too soft; others defend it as essential to public protection. But the debate often skips key questions: do we actually practise meaningful self-regulation – and does it deliver public value?

My argument: self-regulation can still work if we stop running quality assurance like licensing compliance and start running it like professional development in the public interest.

As required by the Act, every health profession regulator must run a QA program with three components: continuing education and professional development; self-, peer- and practice assessment; and compliance monitoring. On paper, that sounds reasonable. But because it is mandated in that form, many regulators build QA as a checklist.

The result is familiar: count hours, complete an online module, submit a form, attest. Audits are done, files are closed, boxes are checked. Annual cycles roll forward.

What usually isn’t asked – and this is the core problem – is the public question: does any of this change practice, improve safety, increase access, strengthen equity, build trust or generate usable knowledge?

Instead, regulators often report activity: numbers of portfolios audited, assessments completed, professionals “in compliance.” That is licensing logic. It tells us who submitted what, not whether the program helped anyone practise better or the public receive better care.

In 2019-2020, Ontario introduced the College Performance Measurement Framework to standardize reporting and push regulators to show how they act in the public interest. That was a step forward. It nudged regulators toward transparency and beyond purely procedural descriptions.

But it also exposed a deeper issue. When law defines a QA program narrowly, responses are narrow. Continuing professional development gets reduced to hours. Peer and practice assessment can feel like surveillance rather than support. Compliance monitoring drifts from “show what works in your setting” to “prove you did what we asked.” And because the dominant approach targets individuals, most programs are built for individuals – even though quality is made in teams, sites and systems.

In short: we’ve asked QA to behave like licensing, then faulted it for not acting like quality improvement.

There’s a second design flaw: regulators still tend to act as the sole hub of QA. But they aren’t. Hospitals run simulation programs, morbidity and mortality rounds, structured debriefs. Professional associations run learning collaboratives. Interprofessional teams review critical incidents together. Patient partners generate ongoing lived experience data. Supervisors mentor and sign off on performance. A lot of high-quality learning already exists in the system.

When regulators don’t recognize that work – and instead recreate it inside their own portals – they duplicate effort, burn time and alienate professionals. The cost of compliance gets pushed onto the practitioner and the organization, without clear added value to the public.

There’s also a conceptual drift in how “the public interest” is used. In practice, it’s often equated with “public protection.” Protection matters, of course. But the public interest is broader and more contested than that. It also includes access to services, equity in who can enter and remain in the profession, trust in regulators and in care, and the production and sharing of knowledge. A QA component that improves access – for example, helping rural practitioners maintain competence in low-volume but high-risk procedures – serves the public interest just as much as one that detects unsafe practice and intervenes. But most current QA programs don’t name which dimension they’re serving, or how their contribution will be shown.

So where does that leave us? The instinct to scrap self-regulation and centralize everything under government is understandable – but it’s premature. There is still a viable path for self-regulation if regulators are willing to modernize QA.

That modernization doesn’t require rewriting the Act or blowing up the model. It requires treating QA as professional development in the public interest and running it accordingly.

First, regulators need to shift from “we are the central hub” to “we are one node in a network.” That means recognizing credible learning evidence that’s already being generated in practice – simulation logs, structured team debriefs, documented supervision, association-led training, patient feedback programs – instead of forcing people to recreate it for regulatory purposes. Regulators can do this with simple, transparent criteria and light memoranda of understanding. This reduces duplication and begins to rebuild trust.

Second, QA should clearly serve professional development, not just documentation. The statutory spine can stay in place (continuing professional development; self-, peer- and practice assessment; compliance monitoring), but it needs to be expanded so that it supports learning for both individuals and groups. Quality is made in teams; QA should acknowledge teams. That means creating “specialized streams” for common or emerging issues – where the regulator works alongside employers, associations and patient partners to rapidly design practice supports when something new shows up, instead of layering on more review and discipline.

Third, regulators should stop measuring QA in terms of participation and start measuring it in terms of contribution. For each component of the QA program, ask up front: which dimension of the public interest is this meant to serve – protection, access, equity, trust or knowledge – and what would count as a signal that it’s doing so? Then run short contribution tests, not five-year evaluations. If a component isn’t helping anyone practise better or improving public value, retire it. Publish what changed and why.

Fourth, equity needs to be treated as support, not suspicion. Early-career practitioners, internationally trained professionals, those practising in small or remote settings, and those transitioning toward retirement often need more targeted developmental support, not more bureaucratic pressure. At the same time, experienced practitioners should have their expertise and mentoring recognized as evidence of contribution, rather than being asked to keep earning the same generic credits every year.

Finally, regulators should view knowledge mobilization as part of the QA function. This means publishing what works, sharing methods, sharing tools and being transparent about “what we changed and why.” It means choosing consultants who build internal capacity rather than creating dependence. It means treating regulators as learning institutions, not just enforcement bodies.

Self-regulation is only defensible if it can show, in plain language and public numbers, that it is improving practice in ways that matter to the public. Today, many QA structures can’t show that. They could – without legislative overhaul – by moving from compliance to contribution. If regulators take that step, self-regulation is no longer a privilege to be defended; it becomes a credible model of shared accountability in the public interest.

Disclosure: No conflicts of interest to declare.

 


Authors

Igor Gontcharov

Contributor

Igor Gontcharov, PhD, is a policy researcher and consultant in professional regulation and quality assurance based in Toronto. He has worked with health profession regulators across Canada.
