The controversy around estimating deaths from medical error
In May 2016, the British Medical Journal (BMJ) published an article with the headline: Medical error—the third leading cause of death in the U.S. The article estimated that as many as 250,000 deaths per year in the United States were caused by medical error. This was a sharp increase from the first major report on error-related deaths, the Institute of Medicine’s 1999 report To Err Is Human, which estimated 44,000 to 98,000 such deaths per year.
Not surprisingly, this garnered instant and worldwide attention from news outlets including CNN, the Washington Post, CBC, and the Guardian, and focused the attention of physicians, hospitals, and patients yet again on medical error. The “third leading cause of death” statistic has since been cited by organizations like the Leapfrog Group, whose business includes a focus on medical errors. It has also been used by others, such as pro-gun activists, to attack the credibility of doctors and the health care system.
More quietly, and starting almost immediately after the publication, some in the health safety and quality improvement field have expressed concern that the BMJ article grossly inflated the magnitude of deaths from medical error. They are concerned that the article is inaccurate and might lead many to question the scientific credibility of the field of health safety and quality improvement.
This article explores the controversy.
What are medical errors?
The term medical error typically refers to a preventable adverse event (negative outcome) caused by an error, such as the administration of the wrong medication. However, some use the term more broadly to include all adverse events, not just those caused by a health worker’s error; an allergic reaction to a medication, for example. While some adverse events are clearly preventable errors, for others it is less clear: a complication such as bleeding after surgery could be due to a surgeon’s error or because the patient was prone to bleeding. While the vast majority of errors are not fatal, deaths caused by error have been used as a proxy for the magnitude of the problem of medical error.
The field of health safety aims to minimize medical errors. Because most medical errors should, at least in theory, be preventable, it is understandable why the patient safety movement has grown in recent years.
Measuring medical error
The BMJ article arrived at the number of 250,000 deaths per year in the United States by averaging the rate of preventable deaths from medical error from four previously published studies. The authors called for a better system of tracking deaths from medical error, and suggested that medical error be included on death certificates.
Many experts in the field of health safety were surprised by the high number of deaths attributed to medical error in the BMJ publication. Kaveh Shojania, a quality improvement researcher in Toronto and editor of the journal BMJ Quality and Safety, was one of them. He thinks a more accurate number is in the neighbourhood of two percent of in-hospital deaths, based on studies estimating that deaths due to medical error make up 5.2 percent, 3.6 percent, and less than one percent of in-hospital deaths. This would correspond to 15,000–35,000 deaths per year in the U.S., an order of magnitude lower than the BMJ estimate.
Shojania and co-author Mary Dixon-Woods were concerned enough to publish an opinion piece shortly after the BMJ article was published, outlining the problems with the 250,000 figure. They pointed out that the four studies selected by the authors were not designed to measure deaths from medical error and did not accurately determine which deaths occurred because of medical error. The numbers of deaths included in three of the four studies were small (only 14, 12, and nine deaths, respectively), yet were extrapolated by the authors of the BMJ paper to much larger populations, leaving room for considerable error. In addition, the extrapolations were sometimes done incorrectly: the patient populations used to measure the rates of medical error and death excluded those admitted for childbirth or mental health, and yet those rates were extrapolated to every hospitalization in the U.S. (of which childbirth is the most common).
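The scale of the uncertainty Shojania and Dixon-Woods describe can be sketched with a rough calculation. The sample size and extrapolation target below are hypothetical, chosen only to illustrate the arithmetic rather than taken from the actual studies: with a death count as small as 14, even a simple approximate 95 percent confidence interval spans roughly a threefold range, and extrapolating to the national population multiplies that uncertainty along with the point estimate.

```python
import math

# Hypothetical numbers for illustration only (not from the actual studies):
deaths_observed = 14          # deaths attributed to error in a study sample
sample_admissions = 4_000     # assumed size of the study sample
us_admissions = 35_000_000    # assumed annual U.S. hospital admissions

# Crude approximate 95% interval for a Poisson count: k +/- 1.96 * sqrt(k)
margin = 1.96 * math.sqrt(deaths_observed)
low, high = deaths_observed - margin, deaths_observed + margin

# Extrapolate the observed rate (and its interval) to the national population
scale = us_admissions / sample_admissions
estimate = deaths_observed * scale

print(f"point estimate: {estimate:,.0f} deaths")
print(f"95% interval: {low * scale:,.0f} to {high * scale:,.0f} deaths")
print(f"high/low ratio: {high / low:.1f}")
```

Even before asking whether each death was correctly classified as preventable, the sketch shows a national estimate built on 14 deaths could plausibly be off by a factor of three or more purely from sampling noise, which is the statistical sense in which small counts leave “room for considerable error.”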
The criticisms of the BMJ study included an issue that is common to all studies of death from medical error, which is that there is often a high degree of subjectivity in determining how much an error contributed to a patient’s death and to what extent that death was preventable. “If a frail older person comes to a hospital with pneumonia, gets antibiotics, and as a result gets C. difficile colitis (an infectious diarrhea that is a known complication of antibiotics) and dies, the question is: What caused that person’s death?” asks Alan Forster, vice-president of Innovation and Quality at the Ottawa Hospital. “Was it the antibiotic-associated diarrhea? Was it their frailty? Was it the pneumonia? And secondly, even if the death is classified as due to the antibiotic, was it really preventable, or not?” Many people wouldn’t consider this death preventable because C. difficile is a known complication of antibiotics, and the antibiotics were needed to treat the pneumonia. In the studies used to arrive at the 250,000 value, this death would have been considered preventable.
The authors of the BMJ article responded to some of the initial criticisms, and continued to stand by their estimate. We reached out to them for comment but did not receive a response.
Fiona Godlee, editor of the BMJ, acknowledges that there are flaws with the methods used in the 2016 article. “The article itself perhaps doesn’t strongly enough state that the methodologies for trying to do this are problematic,” she says. But she believes readers of the BMJ will interpret the estimate with appropriate caution. “The research community recognizes [this article] is for debate,” she says. “The media may have picked it up and gone with it as a statement of fact, and the headline didn’t help with that.”
Forster speculated about why the BMJ study may have been published despite the flawed methodology. “Some people in the patient safety world feel like they have to generate these statistics to help wake people up to the scale of the problem,” he says. The argument goes that as long as people recognize the flaws, the attention the statistic generates is a good thing. Godlee seems to agree. “The methodology is a little bit misguided, and it’s hard to agree on the right methodology, but that shouldn’t stop us from trying,” she says. “It is a good thing that there is awareness and that this is raised.”
The risks of inaccurate estimates of deaths caused by medical error
While few would argue about the importance of drawing attention to the field of patient safety, some members of the health safety research community are concerned that there are repercussions from being inaccurate.
Over the past three years, the value generated by the 2016 paper has been used by some groups to undermine physicians and their credibility. In a recent online dispute between the National Rifle Association and physicians regarding gun control, supporters of the NRA used this statistic to argue that doctors are more harmful than guns, and to de-legitimize the concerns doctors were raising about gun safety. A similar argument has been made by some naturopathic organizations and reported by alternative news sites to “warn” readers about the dangers of the health care system.
The Leapfrog Group acknowledged that they are “aware that there is great debate over the true number of deaths due to medical errors.” However, they continue to feature the numbers from the BMJ article on their website. In their response to questions from Healthy Debate they said, “What we can all agree on is that any death from preventable medical harm is one too many.” They did not comment when asked about any potential harms from using contested values.
Some worry that an inaccurate and inflated estimate of preventable deaths risks reducing the credibility of the field of health safety within the health care and research community. “[It] can cause a backlash from providers who read it and think ‘it doesn’t make sense,’” says Forster. Rather than bolstering enthusiasm for tackling medical error, a grossly inflated statistic may hurt the field instead. “If you keep shouting inaccurate statistics, then maybe people will become increasingly skeptical and not take the field seriously,” says Shojania.
In-hospital deaths from medical error are a small subset of all medical errors, and non-fatal errors cause considerable harm to patients. Considering that most health care occurs in the ambulatory setting, there is an even larger potential for error to cause harm outside of hospitals. The potential problem with focusing too much on in-hospital deaths from error is that hospitals may move resources away from other areas of quality improvement that deserve attention. Common problems, including diagnostic delay, medication errors, under-treatment, and over-treatment, run the risk of being neglected because they often do not lead to death. “When you keep trumpeting these deaths due to medical error you force the hospitals to take the precious time, human, and financial resources they have for doing any sort of quality improvement work to looking just at deaths,” says Shojania.
Despite the controversy about the number of deaths from in-hospital error, all would agree that medical errors occur more frequently than they should and that their prevention is important. How errors are measured and tracked over time, and how much attention should be focused on deaths, continue to be discussed. What appears clear is that deaths from medical error provide only part of the picture.