At Jack Kitts’s first performance review as CEO of The Ottawa Hospital in 2003, he was able to report that the budget was balanced and that he was “feeling good” about the hospital’s finances. He also had a plan in place to improve morale at the hospital. When the Chair of his board asked him whether the hospital was now providing quality care, Kitts replied “of course.”
But then his Chair asked him a question he couldn’t answer: “How do you know?”
Kitts realized he had no good answer: while he felt that The Ottawa Hospital was staffed by excellent, dedicated doctors and nurses, the hospital wasn’t systematically measuring its quality of care.
And The Ottawa Hospital was not alone – ten years ago, few Canadian hospitals measured quality of care, and there was limited measurement of health system performance (including primary care, home care or long term care).
A decade later, progress has been made. At the provincial level, health systems report publicly on wait times and some quality measures. Hospital quality is also measured and reported by the Canadian Institute for Health Information (CIHI). Yet there are still large gaps in what is measured in our health care system, and much of what is measured is only useful to top-level system managers, not to the front-line clinicians whose day-to-day work is so important to the overall quality of the system. This leads experts to question whether measurement is being used effectively to improve the quality of Canadian health care.
Little standardization in measurement across health systems
In Ontario, the Ministry of Health and Long-Term Care now publicly reports wait times for emergency departments, MRI/CT scans and some surgeries. However, the wait time picture is incomplete, since the data does not include waits to see specialists.
Health system performance is also monitored by Health Quality Ontario (HQO). HQO’s annual Quality Monitor reports on a range of measurements for hospitals, primary care, home care and long term care.
HQO reports dozens of metrics, including measures of wait times, adverse events and patient satisfaction. Some measures are reported every year, such as the proportion of home care patients whose pain is not well controlled; others vary from year to year, such as the rate of deep vein thrombosis after surgery, which was reported in 2010 but not in 2012.
While HQO’s Quality Monitor provides a snapshot of health system performance, the most recent report acknowledges it has “major gaps.” According to the report, “in some cases, the data [on quality] exist but are inaccurate or difficult to access, while in other cases, there are no data at all.”
Alberta Health Services (AHS), the authority responsible for administering Alberta’s health care system, publicly reports 55 performance metrics. These include measures as varied as life expectancy, childhood immunization rates, workforce absenteeism, wait times, adherence to budget and patient satisfaction.
AHS reports these performance metrics quarterly, and has been reporting on the same measures since 2010.
In addition to AHS’s reporting on the health care system, Alberta’s ministry of health (Alberta Health) also publicly reports on health care utilization and population health.
Transparency alone not enough to drive quality improvement
There is no doubt that reporting health system performance measurements on the web can make a health care system more transparent (assuming the measurements are accurate). However, there is limited evidence to date that public reporting – at least in its current form – is contributing to meaningful improvement.
Kitts believes strongly in the power of transparency. “Unless you can compare yourself to others and benchmark against best practice, quality improvement is very slow going,” he says. But he acknowledges that transparency alone is not enough to drive quality improvement.
Transparency may be ineffective at driving quality improvement if the information being publicly reported isn’t accurate. “With something like CIHI’s report on hospital quality, doctors and nurses are very concerned that the data is old and that the comparisons aren’t ‘apples to apples’ – because everyone is reporting the data differently,” Kitts says. This certainly appears to be true of some quality indicators, such as hand washing, where there are large discrepancies between the rates reported by some hospitals and the rates observed by researchers.
Kitts’s concern is that questions of accuracy can be used as an excuse not to focus on quality improvement. “We have to take this away,” he says. “We have to get health professionals to take quality data seriously.”
This means more effort must be made to ensure quality data is reported the same way by hospitals and other health care facilities.
Measuring what matters
Cy Frank, CEO of Alberta Innovates Health Solutions and Chief Medical Officer of the Alberta Bone and Joint Institute, believes part of the gap between measurement and quality improvement comes from relying too much on “administrative data” rather than doing the hard work of measuring quality directly. “You need good data to make good decisions,” says Frank. “If you use data that was generated for other purposes – to track billing, for example – you’re not getting good data about quality.”
Stafford Dean, Vice President of Data Integration Measurement and Reporting for AHS, agrees. “We’ve been really successful at making the system a lot more transparent – and that’s great. Now we need to focus on making sure that we’re measuring the right things to really drive quality improvement.”
Frank believes a key part of good quality measurement is not to rely on a single metric or focus on one part of a continuum of care. “Focusing on one thing can have perverse effects,” he says. “If you measure only one part of a continuum of care, the system will find ways of pushing patients out of that part of the continuum. You have to have continuum approaches, multiple data sources, multiple metrics and timely analysis.”
Tom Briggs, Vice President of Health System Priorities for AHS, has a similar perspective. “What you report publicly tends to determine what the system focuses on improving, and we want to focus on the real game-changers.”
For Briggs, many of these “game-changers” lie beyond the “big-dot” measures of health system performance.
Measurement from the “bottom up”
Briggs thinks one of the keys to using measurement to drive quality improvement is to provide clinical staff with data that is relevant to them. “There’s not much a front-line practitioner can do to move a ‘big-dot’ measure of health system performance,” he says.
Instead, he believes clinicians need a finer-grained level of data that helps them identify how they can improve their practices. “If clinicians throughout the system are using their own data to improve on the quality in their own practices, that’s what’s going to move the big measures of overall system performance.”
“Most of the measures we have in Canada right now are top down,” says Dean, “but to improve quality on the front lines, we also need measurement from the bottom up – we need a whole layer of clinically relevant measurement underneath the big health system performance measures.”
Alberta has already had some experience using data in this way. For the last eight years, the Alberta Bone and Joint Institute has collected information on quality of care, including patient-reported outcomes, and provided it directly to both providers and administrators.
“The key,” says Frank, “is packaging. We analyze and package the data in a way that is useful to clinicians and can help them improve their care.” He stresses that this information isn’t used for reward or punishment, but to help identify opportunities to improve outcomes.
Dean hopes to use the work of the institute as a model for the rest of Alberta. He believes clinicians are eager for this kind of information, saying “I’ve seen a shift in the attitude of doctors over the last ten years – they want to know their performance, they want to know things like whether their patients are winding up back in the emergency department after they’ve seen them.”
Dean doesn’t see “bottom-up” measurement as a replacement for public reporting of high-level system performance. Rather, he thinks that measuring at all levels of the health care system (not just the top) is what will allow measurement to drive quality improvement.
Frank, Dean and Briggs all acknowledge that it has yet to be proven that a “bottom-up” approach to measurement can work on a provincial scale. However, they’re hopeful that Alberta is on track to use measurement to drive performance at all levels of the health care system.
Comments
Bottom-up measurement is exciting, but how to make it truly effective is very important: standards vary across different hospitals and institutions, and the first and most important step is to adopt the same standard – otherwise it makes no sense.
Standardized clinical questions need to be embedded into the assessments completed by clinicians, and this information needs to be available to clinicians in real time so that they can use it to understand which practices lead to improved outcomes – hence the need for EHRs. HOBIC in Ontario is starting to do this with a limited suite of clinical outcomes that have been approved by OHISC.
The problem with “outcomes measurement” is that, in terms of medical endpoints, some are inevitable, some are subjectively defined, and some require patient compliance. These factors would probably limit the usefulness of any such data collection.
As for “patient satisfaction”, “staff politeness” and other customer-service gobbledegook, I say we dispose of it ASAP. Medicine is not a fast-food business where the “customer is always right”; in other words, what a patient wants might not be what they need, and vice versa. The USA is already inundated with vapid, worthless “customer service” metrics like Press-Ganey surveys, which serve as nothing but punitive tools against physicians.
Wait times can be reduced by more public investment into infrastructure, or by permitting physicians to own and operate their own hospitals, surgicenters, and the like. Wasting money on a “bottom up” approach is good PR, but solves nothing.
Accountability is still not shared. Physicians bear the brunt of it, as they should. But CEOs and administrators bear none. Poor management leading to outbreaks of C.diff, lab problems etc. roll off the backs of the administrators onto the lowly ward managers or lab rats.
Instead of having this fairly mundane discussion, we should be empowering health care providers to provide health care rather than figuring out new ways to count beans. That means more physicians in leadership roles, and fewer MBA executives. Simple as that.
I am amazed that anyone thinks that staff politeness is “gobbledegook”.
What irks me is when patients rank the quality of the care they are receiving based on the perceived politeness of those providing it. Such measures can only be used to punitive ends. You clearly misunderstood the meaning of my post.
Sometimes we don’t have the luxury of being polite. Sometimes the effort must instead go toward coordinating care and saving the patient. Sometimes we just have a bad day. Sometimes a patient can be vindictive because I did not provide them with what they wanted, even if it is completely contraindicated. Does that make me a bad doctor? They’d say so.
I am not suggesting that we as physicians be mean to patients. No. I am saying that to use “politeness” as a metric for care quality is downright foolish, and removes a small piece of the already scavenged husk we call “physician autonomy”.
In passing, your passive-aggressive retort was not appreciated, but unlike you, I am not amazed by it, nor am I surprised by your complete failure to address my other salient points.
Still no reply, Dr. Laupacis? I am waiting for your thoughts.
I think the physician who we all want to care for us is up-to-date, highly technically competent, has excellent clinical judgment, and is polite and empathetic. I think these are all important. I agree that one shouldn’t make judgments on the basis of isolated clinical encounters.
I’m wondering if a provincial level of clinical evaluation is appropriate. Hospitals, clinics and the like could do the evaluation (preferably the patients/clients would complete a report on a computer and submit it directly to the provincial authority for analysis). My additional concern relates to what is evaluated. It seems that the main focus is on how the person was treated by staff, how long they had to wait, and so on. I would prefer to see the patient/client given a form with which they evaluate the encounter with the health care professional, including the reason for the visit/appointment/surgery; whether they received a full and comprehensible explanation of the advice/treatment/prescription/surgery; and the possible effects of that advice. A recent example in which the above would have been helpful: a son of a friend had a vasectomy 2 days ago. He was given no post-operative information and subsequently spent 5 hours in the E.R. with a hematoma – a waste of health care dollars and his time. Had he known what might happen and how to deal with it, the outcome would have been much more reassuring and cost-free.
Leslie, your statement:
‘Had he known what might happen and how to deal with it, the outcome would have been much more reassuring and cost-free.’
You are too polite (this is not Longwoods stroking the Chiefs). It is perfectly OK to say:
“You know how to do the job right, you failed to follow through, and you compromised the health of my friend. What the hell is going on?”
This is what makes input more real. Patients are not peons or lesser people. They are central. Go ahead and ask someone to “prove it”.
In some areas, bottom-up measurements are alive and well. It takes time to implement such a system, engage the front line staff and teach them how to use the information to create change that improves quality. However, the results are amazing and the energy and focus in the providers and staff is exactly what will turn the system around. People benefit from more timely screening for preventable conditions, fewer health emergencies for chronic conditions and a higher level of satisfaction with the health care system.
I have been working as an Evaluator in Alberta’s Primary Care Network (PCN) system for 6 years. Not all PCNs are at the same level at this point but most are making great strides. In the Calgary Zone, we have begun to produce reports from all seven PCNs on three critical frontline indicators across the zone. Right now, we are making baby steps but the Zone is committed to seeing this blossom into a sustainable and growing activity.
Generally, the disconnect is in sharing information broadly: the health care system as a whole does not, or cannot, share information among its many parts. Fixing this involves three things: improved IT capability that everyone in the system can access as needed; effort to help front-line providers see themselves as equal and contributing parts of the whole system; and the resources to actually carry out measure-and-improve activities in a meaningful and non-threatening way, so that improvement happens as a natural evolutionary result.
Jeremy, Palmer, Tierney…
This excellent article both assuages and impassions, while demonstrating that APA format no longer reigns as the absolute :)
Moving past the constraints means understanding that:
- MBO and MBR have been replaced by the payor, who uses Strategic Management and knows that the same inputs do not necessarily give predictable outcomes
- the organization of care is now horizontal, not vertical, making “everyone” accountable AND removing the words top and bottom (which must also be followed into action)
Forward into …
1. how to introduce everyone on that horizontal line into meaningful dialogue for innovation (versus alternatives or equivalents)
2. how to create a PDAG (Program Design Advisory Group) [informal] that complements every phase of Master Plan Development [formal]
Will inclusion of the client at the LHIN level mean much if the 60/40 or 70/30 power ratio is the key indicator, or if the approach is tokenism?
Does this beg the question that the initiative should be more clearly defined in terms of a sliding scale of “involvement” versus “agreement” (remember Watson, IBM)?
Time is wasting… bemoaning the missing parts needed for traditional measurement is not presently relevant.
In real life, health planners must follow the steps of those who have moved mentally into the concept of community leadership – accommodating client needs AND requests – and who have physically entered the community (not vice-versa).
They must think more like the courts, where “facts” matter little if the “issues” have not been properly identified.