Why aren’t new health care models studied more?


8 comments

  1. Dr. Franklin Warsh

    It’s not so much the economics as the political economy of large-scale evaluation initiatives. Sooner or later, people start to shake their fists at how much is being spent on evaluation as opposed to direct patient care. Unlike the internal machinery of a Ministry, an arm’s-length agency has no insulation from the political winds of the day. Moreover, there’s no guarantee that decision makers will act on the results of an evaluation.

    One raised eyebrow from the press or the Auditor General and that’s that. Sad but true.

  2. Dr. Judith Glennie

    While I agree with most of the points made in this article, I would argue that we are often trying to measure too much. Our focus on trying to measure “everything – including the kitchen sink” is a big reason for the delays, and has also probably created a bit of an evaluation cottage industry. Perhaps the challenge we (policy makers, stakeholders, etc.) need to take on is identifying the 3 to 5 most important parameters that need assessment and focusing efforts there. This would align well with the iterative approaches identified in the article.

    • Adam Smith

      You are spot on. As the article says, this is partly the researchers’ fault: evaluations are too time-consuming because they too often try to be academic studies that cover everything and the kitchen sink, controlling for all variables, collecting all kinds of information, and ultimately promoting the researchers’ careers through publication in some journal. No one seems interested in rough-and-ready, tentative research. It seems we always have to build a Bentley when a cheap Ford Focus would do.

  3. MC Lenard

    Completely agree.
    For all the rhetoric that we follow evidence-based medicine (I am biased towards science-based, but that is another discussion), we fail miserably to determine the outcomes of our programs.
    What was the business case for the program?
    How were the results to be measured?
    What are the funding consequences if you don’t follow through or the results are poor?
    Case in point: Ontario’s Shared Service Organizations (SSOs).
    After being formally in place for 7+ years, the Ministry now wants to measure how well they are collaborating: surveys that take significant man-hours to complete, ask for data the SSOs don’t have, and pull resources away from operations, eventually yielding poor data that will take months, if not years, to interpret, if it is interpreted at all.

  4. Boris Sobolev

    Why? Because evaluation is a thing that takes money, which takes time, which takes effort. What we see instead is basic scientists flooding the funders with an aggressive, take-all propaganda machine that essentially serves their personal and group idiosyncrasies, with no regard for serving the sick. In Canada, how many universities offer a degree in the science of health care delivery?

  5. Jim Dickinson

    The sad thing is that new programs are usually set up in places where things are going well, where there is a ceiling effect: it is difficult to improve further. More funding needs to be directed to the places where things are not going well, where much larger gains might be obtained and where patients need the help more.

  6. Civisisus

    Stuff isn’t measured because stuff might not succeed as well as promised, predicted, or hoped, and no one who proposes, operates, or backs a project wants that.

    As a matter of science, failure is practically expected.

    As a matter of invention, failure is an ‘investment return’ (Edison’s view was that he never failed; rather he discovered 10,000 methods that wouldn’t work).

    As a political matter, failure is toxic.
