
Why aren’t new health care models studied more?


9 Comments
  • Civisisus says:

    Stuff isn’t measured because stuff might not succeed as well as promised/predicted/hoped, and no one who proposes, operates, or backs a project wants that.

    As a matter of science, failure is practically expected.

    As a matter of invention, failure is an ‘investment return’ (Edison’s view was that he never failed; rather he discovered 10,000 methods that wouldn’t work).

    As a political matter, failure is toxic.

    • len says:

      This is the truth. Do you think that when the Ontario Government rolls out a program, they want to consider the possibility that it didn’t work several years later? No, they want a program that lets them say in the next election cycle, “Look at the great things we did for you.”

  • Jim Dickinson says:

    The sad thing is that new programs are usually set up in places where things are going well, where there is a ceiling effect: it is difficult to improve further. More resources need to be directed to the places where things are not going well, where much larger gains might be obtained and the patients need the help more.

  • Boris Sobolev says:

    Why? Because evaluation is a thing, which takes money, which takes time, which takes effort. What we see, however, is basic scientists flooding the funders with an aggressive, take-all propaganda machine that essentially serves their personal and group idiosyncrasies, with no regard for serving the sick and ill. In Canada, how many universities offer a degree in the science of health care delivery?

  • MC Lenard says:

    Completely agree.
    For all the rhetoric that we follow evidence-based medicine (and I am biased towards science-based, but that is another discussion), we fail miserably to determine the outcomes of our programs.
    What was the business case for the program?
    How were the results to be measured?
    What are the funding consequences if you don’t follow through or the results are poor?
    Case in point: Ontario’s Shared Service Organizations (SSOs).
    After the SSOs have been formally in place for 7+ years, the Ministry now wants to measure how well they are collaborating: surveys that take significant man-hours to complete, ask for data the SSOs don’t have, pull resources away from operations, and eventually result in poor data that will take months, if not years, to interpret, if at all.

  • Tom Closson says:

    Check out the work being done by the Canadian Foundation for Healthcare Improvement (CFHI) on the pan-Canadian spread of effective models of care and their evaluation: http://www.cfhi-fcass.ca/Home.aspx

  • Dr. Judith Glennie says:

    While I agree with most of the points made in this article, I would argue that we are often trying to measure too much. The drive to measure “everything, including the kitchen sink” is a big reason for the delays, and it has probably also created a bit of an evaluation cottage industry. Perhaps the challenge we (policy makers, stakeholders, etc.) need to take on is identifying the 3 to 5 most important parameters that need assessment and focusing efforts there. This would align well with the iterative approaches identified in the article.

    • Adam Smith says:

      You are spot on. As the article says, this is partly the researchers’ fault: the evaluations are too time-consuming, too often trying to be academic studies that cover everything and the kitchen sink to control for all variables, collect all kinds of information, and ultimately promote the careers of the researchers through publication in some journal. No one seems interested in rough-and-ready, tentative research. It seems like we always have to build a Bentley when perhaps a cheap Ford Focus would do.

  • Dr. Franklin Warsh says:

    It’s not so much the economics as the political economy of large-scale evaluation initiatives. Sooner or later, people start to shake their fists at how much is being spent on evaluation as opposed to direct patient care. Unlike the internal machinery of a Ministry, an arms-length agency has no insulation from the political winds of the day. Moreover, there’s no guarantee that decision makers will act on the results of an evaluation.

    One raised eyebrow from the press or the Auditor General and that’s that. Sad but true.

Authors

Wendy Glauser

Contributor

Wendy is a freelance health and science journalist and a former staff reporter with Healthy Debate.

Michael Nolan

Contributor

Michael Nolan has served Canadians through many facets of Paramedic Services. He is currently the Director and Chief of the Paramedic Service for the County of Renfrew and a strategic advisor to Healthy Debate.

Jeremy Petch

Contributor

Jeremy is an Assistant Professor at the University of Toronto’s Institute of Health Policy, Management and Evaluation, and has a PhD in Philosophy (Health Policy Ethics) from York University. He is the former managing editor of Healthy Debate and co-founded Faces of Healthcare.
