
Missing the target on health care performance?

Setting targets has long been a mechanism in industrial psychology to motivate managers and workers to achieve specific organizational objectives. In the last decade, targets have become an important method of driving performance improvement in health care. However, deciding where and when to set targets is a challenge facing health care decision makers.

Politics of performance targets in Alberta

Alberta Health Services (AHS) has been releasing quarterly performance reports, which include updates on provincially-established targets, since September 2009. Chris Eagle, past AHS CEO, noted in 2011 that health care performance targets have been set to be “intentionally ambitious and aggressive”. However, the past two quarterly reports – for September and December 2013 – have not been posted.

Last month AHS made headlines after posting, and then within hours removing, an ‘Annual Report on Performance’ which included revised targets for 15 performance measures of the health care system, and fewer measures than previous quarterly performance reports.

The deleted report noted that “the targets associated with each measure represent a goal and standard to be achieved over time.” When pressed by media about why the report was taken down, AHS issued a statement noting that the targets were still in draft format.

Raj Sherman, Liberal opposition leader and emergency medicine physician, says that the targets have been “watered down” from their initial goals. For example, the initial emergency department wait time target for patients presenting with urgent needs was that they would be admitted or discharged within 8 hours of arrival. This target was revised to 19 hours in the most recent report.

Sherman said targets were scaled back to achieve political aims: “they want a political win by saying they’ve achieved their target.”

Establishing health care performance targets must be done carefully and cautiously. Targets play an important role in highlighting key policy goals and helping to motivate organizations and providers to achieve these goals. However, experts warn that “careless target setting” based on poor data or unrealistic expectations can be discouraging and stressful to the organizations and staff aiming to achieve them.

Bill Ghali, physician and director of the University of Calgary Institute for Population and Public Health, says “setting targets can be a challenge because there is always a question of feasibility within a context.” Ghali notes that for emergency department waits, “no one person in the system can in isolation tackle the target.” He emphasizes that reducing emergency department waits is often beyond the control of emergency departments and hospitals, and is connected to availability of primary care and community-based care after hospital discharge.

An overview of target setting in health care

The World Health Organization describes targets as incentive mechanisms where an “objective to be met in the future” is established. Incentives for organizations to meet targets can include carrots or sticks – for example, governments can provide organizations with extra funding to dedicate more resources, such as staff or operating room time, to meet targets. In contrast, if organizations don’t meet targets, money can be clawed back or senior executives can risk losing their jobs, or portions of their pay.

Another incentive can be reputational, where organizational achievement of targets is publicly reported, an approach described as “naming and shaming”. An often referenced example of this is the star rating system in the United Kingdom’s National Health Service (NHS). From 2001 to 2004 a system was put in place where organizations were rated on a scale of 0 to 3 stars based on their achievement of an identified set of ‘key targets’. Organizations with high scores received public praise as well as additional funds to target towards priorities determined internally.

The star rating system resulted in dramatic improvements in performance around key targets, and reduced emergency department and elective surgery wait times. However, experts have also pointed out that organizations’ focus on meeting targets came at a cost – as some falsified data, or neglected areas of patient care not associated with targets.

This led to public debate in the United Kingdom around whether targets did more harm than good in the NHS. Those who support targets say that they set forth patient and provider expectations around “what we ought to expect from a modern, well funded NHS”. However, those who are opposed say that a focus on meeting measures takes away from patient care, arguing “targets suit politicians, not patients.”

Target setting in Ontario to improve performance

One high profile example of using targets to achieve health system objectives in Ontario is the Wait Times Strategy. Starting in 2004, the Ministry of Health and Long-Term Care committed to reducing wait times for selected diagnostic procedures and surgeries. The Ontario Wait Times Strategy provided financial incentives for hospitals to help meet these targets.

Alan Hudson, a retired neurosurgeon and past Ontario Wait Time Strategy Lead, reflected on the use of targets to reduce wait times. “Targets are not perfect but worked perfectly well for the wait times project,” he says. Hudson notes, though, referring in part to the United Kingdom experience, that “the downside of targets is that people work to the targets and may neglect other areas where there are no targets or incentives.”

Since its inception, the Ontario Wait Time Strategy has expanded to include all surgeries done in the province, as well as wait times across Ontario’s emergency departments. This strategy includes public reporting of wait times by hospitals, and the establishment of targets for these waits.

In addition to targets set at the provincial level, the 2010 Excellent Care for All Act requires hospitals to develop annual Quality Improvement Plans (QIPs), which set forth indicators of organizational performance, as well as targets. While the Ontario Ministry of Health and Long-Term Care suggests that QIPs should include areas that are linked to provincial priorities, hospitals use their own discretion to choose indicators and develop targets. The QIP Guidance Document, available on the Ministry of Health and Long-Term Care website, notes that targets should “represent what the organization aspires to, first and foremost.”

Hospitals are required by the Excellent Care for All Act to make QIPs publicly available, and are accountable to their boards of directors to meet targets set forth in QIPs. However, QIPs are currently documents aimed at internal quality improvement and targets are not formally shared or compared between hospitals. There are also no Ministry of Health and Long-Term Care financial or policy incentives tied to these targets.

Jeremy Veillard, Vice President of Performance Research and Analysis at the Canadian Institute for Health Information, highlights inconsistencies across Ontario when it comes to QIP targets. “In some cases you’ll have leaders putting in place stretch targets, and using the target as a tool for improvement” says Veillard. However, he also points out that “strategies vary between hospitals with some not wanting to put in place stretch targets, and then be accountable for them.”

Target mania, indicator overload

Leslee Thompson, CEO of the Kingston General Hospital, argues that while setting targets is an important mechanism for accountability between provincial ministries of health and hospitals, sometimes “we get too focused on the numbers in isolation.”

Thompson says Ontario is deep in “target mania” where pressure to achieve targets can be discouraging for staff and organizations. She says that in her experience as a hospital leader “accountability systems do not rest only with performance measurement and target setting.”

Experts like Michael Schull, CEO of the Institute for Clinical Evaluative Sciences, agree with Thompson, saying “measurement is required, but not sufficient to bring about change to health care performance.” Schull says that while “you need to measure to know that you are improving,” more than measurement is needed to improve. Schull points to emergency department waits, which are impacted by many factors outside emergency departments, such as the availability of timely primary care. “Measurement won’t solve fundamental problems around health care integration,” he says.

Thompson also points out that many targets set for hospital performance are indicators impacted by factors beyond a hospital’s four walls. This includes indicators like readmission rates and alternate level of care. “As a hospital on your own you can’t influence all the things that you need to do. You can contribute, but you can’t move the needle on your own” she says.

Health Quality Ontario is the government agency to which hospitals submit QIPs. It is currently developing a strategy for performance measurement and target setting in the province, known as the ‘Common Quality Agenda’. The strategy’s stated goals are to focus on “a small number of priority areas” and to “improve quality through partnership.” Whether this initiative will help to reduce ‘target mania’ remains to be seen.

Bill Ghali emphasizes that target-setting and performance measurement are essential to ensuring accountability for health care. He says “it is untenable to have our system be in the dark about performance.” However, Ghali acknowledges that the “political climate around performance measurement” and target setting is a challenge.

Judging from the recent experiences in Alberta and Ontario, there is a need to balance politics and organizational performance requirements.

“Having the ability to locally set targets and performance priorities makes sense,” says Michael Schull, though he cautions that “at the same time, there is a need to measure and report at the health system level what constitutes high quality care.”


7 Comments
  • Peter G M Cox says:

    Excellent article and perceptive comments. Ms Kenefick’s point that reaching targets (or at least progress towards achieving them) cannot be made without improving the systemic causes of “shortfalls” is, in my view, “spot on”. Unfortunately, we do not seem to be doing this very successfully – the Health Council of Canada published a report last month based on the 2013 Commonwealth Fund’s international survey, summarizing that “… in the (Canadian) health care system, (over the past decade), access to care has not substantially improved … and … (patients’) care is (not) better integrated or more patient-centred. And we show … disappointing performance compared to other … countries … which have made impressive progress.” Equally discouraging – in other international studies (such as those done by the Euro-Canada Consumer Health Index, WHO and OECD), Canada has ranked very poorly, consistently for at least the last couple of decades, compared to other developed countries whose per capita spending is similar and often less, i.e. we have an unusually cost inefficient system.

    Perhaps, instead of setting our own targets (and “playing politics” with them) we should adopt targets based on what other countries with similar per capita spending ACHIEVE. (The idea of targeting 19-hour waiting times in emergency departments would be laughable if the consequences were not so potentially serious.)

    “Targets” aside (because it would probably take well over a decade to catch up with the Continental Europeans’ performance), what I think we need to focus on is the rate of improvement (or lack of it) in key “outcome” measures. This would tell us exactly what Ms Kenefick is referring to: how well are we addressing the causes of our poor overall performance?

  • Beverley says:

    This is a wonderfully informative and refreshingly unbiased article.

    I agree first of all, with the comment about numbers in isolation being meaningless. Related to that issue, the comment about emergency wait times being impacted by other factors outside the hospital walls is certainly a valid point. The access to primary care services needs to be sufficient to accommodate patient needs; it needs to be accessible where and when the public are most likely to use it. Therefore, one idea I am wondering about would be having interdisciplinary clinics connected physically to Emergency Departments (EDs), offering the same hours, and having the capability of addressing any of the non-emergency situations that present in an ED. Extensive patient information and education would be available at all of these sites, to inform patients as to which conditions would be best seen in these clinics, and which should proceed to the emergency department. The ED Triage would then further classify those patients who do ultimately present in the emergency department. Public healthcare funding would of course be allocated accordingly for these on-site clinics.

    Some of the obvious advantages of this idea would be that patient conditions would be more apt to be addressed by the right professional (and therefore the most cost-effective one), in the most appropriate and convenient location for the patient, and in a timely manner.

    I agree with the point made in the article that “careless target setting” based on poor data or unrealistic expectations can be discouraging and stressful to the organizations and staff aiming to achieve them. I believe this can ultimately erode positive outcomes, particularly if many staff perceive that they have little or no continuing input into the targets set, and/or minimal control over the variables.

  • Judy Birdsell says:

    Performance targets are complex and difficult, but what is the alternative? One beginning step may be to inform health system workers (and the public, if there is any benefit) of operational data; then at least there could be increased knowledge about system operations. My husband has worked in the oil patch for 40 years. When they were drilling an oil well, he knew every morning what the operations had cost the previous day, what had contributed to higher or lower than normal cost, and what the operation was costing if they were not ‘drilling ahead’. Do our unit managers in hospitals (or even a hospital CEO) know daily (or even weekly) costs with enough detail to adjust where adjustment is desirable or necessary?

    As a member of the public I would appreciate receiving knowledge about key indicators that may be helpful. Having had family members in hospital recently, I went looking for hospital infection rates and was able to find rates (in Alberta) by hospital (albeit from June, as I missed the short window for the December rates!). But as your article attests, the report is extremely long and complex, and one has to be more than casually interested to find an indicator of interest. It would be much more helpful if rates were posted at the entrance to hospitals (with comparators to other similar hospitals, or even the provincial average). And clear reminders about ways to reduce hospital infections, posted prominently nearby, may be helpful.

    Because it is difficult does not mean we should quit trying to find performance indicators that are helpful. I hope someday there are true public forums where citizens can comment and be involved in developing indicators and influencing how and where results are shared so that all citizens can be involved in improving the indicators. Patients and families and citizens without current involvement have a role to play too.

    Although imperfect as well, I would hope that one or two indicators that come straight from clients of the system (and hence, in my opinion, somewhat less open to gaming) are part of a publicly reported set of indicators. Perhaps in Alberta it would be helpful to have a group of interested citizens with varied backgrounds critically assess the quarterly reports with respect to meaning, usefulness and so on from a citizen point of view. This would add value, in my opinion, to the critical input of those working in the system. The bottom line is to have knowledgeable commentators who are not primarily concerned with political positioning.

    • Dr. Douglas Woodhouse says:

      Dear Judy,

      I appreciate your thoughtful comments and wholeheartedly agree that the Canadian public (patients and families) should be involved in both defining indicators and measuring healthcare performance.

      Patient Reported Outcome Measures (PROMs) are a formal component of measurement in the UK, and in some other European health systems (http://www.nhs.uk/NHSEngland/thenhs/records/proms/Pages/aboutproms.aspx). I believe that some hospitals in Canada may include some form of PROMs but I’m not aware of any province where this is standardized or mandatory.

      Proceeding with patient-reported outcome measures may not, in fact, have to wait for hospitals or governments to facilitate it. Groups of citizens could begin to independently measure and publish health system performance, and possibly may find sponsors in the health system as partners. The CBC recently took an initial stab at this, and I hope that initiatives such as this will continue, becoming more statistically valid and clinically useful over time.

      There are, of course, concerns about validity and potential negative repercussions of greater transparency but these could be overcome if the will to increase safety, decrease costs and improve performance is truly present. I work in the health system, and I know I am not the only one who would welcome additional, relevant feedback on my performance.

      I agree that we need to start somewhere, and I for one don’t mind receiving feedback from my patients, possibly in the form of PROMs, if the health system itself cannot or will not provide this information for me.

  • Anne Wojtak says:

    This is an excellent debate to have. The capacity to evaluate and report on quality is a critical foundation for system-wide improvement of health care delivery and patient outcomes. Reliable, comparative data on quality can motivate providers to improve the quality of care by tracking their performance against national and regional benchmarks and facilitating competition on quality. However, all of us in health care can relate to Don Berwick’s comments in his report last year on lessons learned in patient safety in England’s NHS, which cautioned against the over-use of quantitative targets and how this can take focus away from patient care.

    The most important question to ask about performance reporting and the use of targets is not whether it should be done, but rather how it can be done more effectively. Two areas we need to consider are setting a few strategic system-level goals (HQO’s Common Quality Agenda is a step in this direction) that focus the entire system on a few critical priorities, and then evaluating. We need to evaluate the impact of current reporting practices on driving improvement; only then will we understand whether our current approach to reporting and targets is helping us achieve our objectives for system improvement.

  • Bonnie Lynn Wright says:

    Measurements are a wonderful and challenging addition to the Health Care System (HCS). They serve a good purpose because they are very sensitive to a multifactorial context. Unfortunately, that is also their downfall. What they do is act as a ‘barometer’ or early warning system. Measurement results indicate when things need to be examined for improvements or lauded and shared for excellence – probably! A $10 bill is worth the same whether it is in a wallet or a pocket or the bank. It can be compared to every other $10 bill with confidence. Items measured in an open social system are not as concrete. Performance measurements are so sensitive to context that conclusions cannot be drawn from merely reviewing the numbers. Investigation is required into causality from multiple sources. The investigation should result in several strategies being implemented one at a time with small tests of change for each strategy to identify which ones are effective and which ones need modification. That’s the quality improvement process. Measurements then provide sensitivity while quality improvement provides specificity. Both are necessary to optimize the efficiency and effectiveness of the HCS. Above and beyond performance should be a shared vision that identifies the people being served as the sole purpose for the HCS to exist. When we lose that vision, we become the pile of ambiguous measures that we seem to have become.

  • Brenda Kenefick says:

    Very interesting article.
    I believe that targets are essential for providing focus to teams trying to improve the system. That said, targets are not enough. Teams also need to focus on the process to achieve targets. Healthcare teams need to be provided with a strong background in process management, problem solving and root cause analysis. Without this foundation, it is possible to generate a lot of activity without improving the patient’s experience.
    A key question for a team is, “How many times have you solved the same problem?” If the answer is multiple times, we need to provide help in understanding root causes so that targets can consistently be achieved.

Authors

Karen Born

Contributor

Karen is a PhD candidate at the University of Toronto and is currently on maternity leave from her role as a researcher/writer with healthydebate.ca.

Terrence Sullivan

Contributor

Terrence Sullivan is an editor of Healthy Debate, the former CEO of Cancer Care Ontario and the current Chair of the Board of Public Health Ontario.

Sachin Pendharkar

Contributor

Sachin Pendharkar is a respiratory and sleep doctor and an Assistant Professor of Medicine and Community Health Sciences at the University of Calgary.
