A dispute between some researchers and the Canadian Institutes of Health Research (CIHR – Canada’s largest health research granting agency) about how many large clinical trials the CIHR should fund has recently gone public. The scientists believe that more large clinical trials should be funded in Canada, and that they should decide which ones.
I agree with them on the first point but not the second. The purpose of this opinion piece is to argue that the public should have a major say in which large clinical trials we invest in.
What are large clinical trials, why are they important and why should Canadians care? The short answer is that they change health care for the better. For a description of large clinical trials and how they benefit people, please click here.
I totally agree that Canada needs to fund more large clinical studies from the public purse – these studies change the way patients are managed for the better, and we currently fund too few. The pharmaceutical industry conducts excellent studies that have appropriately changed practice, but their financial interest means we can’t leave the playing field solely to them.
However, the number of really important large studies that could be done will always be greater than the resources available to pay for them, even if there is a considerable increase in funding from the federal government or the provinces. Who should make the decision about which studies should get preference?
The current answer to that question is “we the researchers”. The argument is that designing and conducting studies that will give answers that can be trusted is complicated, and only researchers can choose the best ones from among different studies. They are also experts in the disease being studied, and some are doctors or nurses who treat patients with the disease. Therefore, the argument goes, they know what is important to these patients.
However, I am not so sure. In addition to the upsides mentioned above, there are downsides to leaving this decision with researchers.
Many researchers are likely to consider the disease they are interested in as the most important. This isn’t a knock on researchers; it is a fact of life that all of us value most highly the things we are closest to. Researchers can sometimes get so focused on their research that they fail to recognize that it isn’t as important as they think it is. For example, one of the reasons some studies are very large is that researchers are trying to detect very small differences between two treatments, sometimes for outcomes that aren’t nearly as important as, say, stroke. Those differences are sometimes so small that one can legitimately ask whether they are really worth detecting. Also, some large research studies don’t end up changing practice nearly as much as the researchers expected – could this be because they are too close to their particular scientific question to design their studies in ways that will maximally influence health care delivery and health policy?
An important part of deciding which large studies to fund is weighing the likely importance of the study results to patients and the health care system. This is incredibly difficult to do, but once studies are above an acceptable bar of scientific excellence, I would argue that this choice has little to do with science and much to do with values, fairness and legitimacy. The decision requires consideration of the current impact of the disease on patients, how many effective treatments for that disease already exist, the likely impact of the new treatment (if it works and doesn’t have many side effects) upon the lives of patients and their families, whether the disease has been “neglected” or studied a great deal, whether the treatment will help decrease the existing socioeconomic disparities in health, whether the health care system will be able to afford and accommodate the treatment, and many other factors. (By “treatment” I include preventive, health system and social interventions, not simply drugs and devices).
I think it is time to change the way we decide which large studies to fund. We should separate the assessment of scientific excellence from the assessment of the research’s importance to patients and the health care system. Scientists should continue to play the lead role in the former, and the public should take the lead in the latter.
I would suggest that all large studies go through a two-stage evaluation process. They would first be evaluated by independent scientists to make sure they exceed a high bar of scientific excellence. Those that pass would then be evaluated by a committee consisting largely of members of the public, whose main role would be to rank the various studies in order of importance to patients and the health care system. They would have access to independent scientific and clinical expertise if they had questions about the science of the studies and the diseases being studied.
This is a lot to ask of members of the public. However, they provide the funds for the CIHR through their taxes. They, their families and their friends use the health care system. And if carefully selected, they would provide a broad and fair perspective on what is important to the public.
Lots of details would need to be sorted out for this idea to become reality. How would the public members be selected? Should they be supplemented by some clinicians (nurses, doctors, other professionals) and individuals who manage our health systems (I think yes)? Should the committee be able to question the researchers applying for the funds in a face-to-face meeting (I think it should)? And there are many other questions.
There will be many critics of this proposal. Some will argue the science is just too complicated for members of the public to understand. However, by ensuring that all research proposals considered by the committee are above a high bar of scientific excellence, assessing excellence will be less of an issue (although there will definitely be a learning curve for committee members). Others will argue that no matter how carefully they are chosen, committee members will always have biases in favour of or against certain disorders. This is true of any member of any committee, and I am confident that fair-minded members of the public can be found who will evaluate the proposals as impartially as possible. Some will worry that this is a huge burden to put on members of the public. While this is true, I am sure some members of the public will see this as an important contribution to society and one they are keen to take on. And some will argue this proposal will be expensive and will add yet another layer of review to an already onerous process. But given the expense of each large study and the importance of the decision, I think this extra step is worth it.
The CIHR is committed to a Strategy for Patient-Oriented Research that “…is about ensuring that the right patient receives the right intervention at the right time.” What better way to demonstrate its commitment than to involve members of the public in some of its most important decisions? It may be that the public should be involved in other decisions, such as the proportion of research funding that should be devoted to discovery or basic science research versus health services research versus the large clinical studies I have discussed here. Or, how much of the provincial and federal health care budgets in Canada should be allocated towards research to make the system better. However, because large clinical studies are meant to be of direct relevance to patients and the health care system, I see this as a very good place to start.
May I also add, as a former health professional and current medical malpractice lawyer, that once research is funded by taxpayer dollars, the raw data should be readily available to anyone who requests it.
I have a number of cases where my clients have been injured as a result of chiropractic neck manipulations. A study done out of Toronto Western Hospital and funded entirely by taxpayer dollars (including funds from CIHR) is used by the defence in all of these cases to attempt to “prove” that chiropractors don’t cause strokes. The study was done using OHIP data.
I have spent a significant amount of resources and time attempting to obtain the raw data underlying this study, so that other epidemiologists can test it (as is the norm with good science). I have been stymied in every direction, including by the Ministry of Health itself, which apparently has a copy of the raw data.
I will eventually obtain this data because the courts will not permit the use of the results of a disputed study if the opposing side has not been given the raw data in order to test it and to enable proper cross-examination on it. However, not every lawyer will go to the lengths I have gone to get this information, since “on its face” the study seems credible enough (unless you have the scientific background and experience in this area of law to spot the flaws). Therefore, I have significant concern that legitimately injured patients will have lawyers turn away their cases as unprovable because of a scientific study being shielded from proper scrutiny.
If a study is funded by the taxpayers, there is no excuse at all for the data gathered and the information obtained not being fully available to the public at large. All good science should be, anyway, but what possible excuse could there be, when the money for the study comes from you and me, that you and I aren’t allowed to see the data?
Hello Dr. Laupacis,
I very much enjoyed reading this timely article; I agree that research prioritization needs a shift in thinking to incorporate patient and caregiver perspectives. I understand there are quite a number of methods in the field, and that there is now a Cochrane Methods Group dedicated to this area (http://capsmg.cochrane.org/).
To me the big question is how we define outcomes and how we measure them: in other words, how do we know that we have done it “right” when we try to incorporate patient values and preferences? Would it be through the implementation of research findings by end-user groups? As you noted, a big question is whether the research questions generated through an iterative, rigorous process involving 60-90 disease group representatives are representative of the community as a whole.
Thank you for this thought-provoking piece!
Hi. Thanks for your comment.
The James Lind Alliance in the UK (http://www.lindalliance.org) works with patients and caregivers to identify research topics that are a high priority to them. I have been involved with a Canadian group that used their methodology to identify the research priorities of patients on dialysis and their carers – see: http://www.ncbi.nlm.nih.gov/pubmed/24832095
The JLA has also identified, as you point out, that sometimes the outcomes that are important to patients differ from the outcomes commonly used in clinical trials – see this for a somewhat dated discussion of the issue: http://www.lindalliance.org/pdfs/Publications/2008_Outcomes%20in%20Clinical%20Research_HSCNews_JLA%20Seminar.pdf
I think the issue of how to make our research relevant to the needs of patients with a particular disorder is somewhat different from making the tough decision about which large clinical trials should be funded from a limited budget. In the former, one is focused WITHIN a disorder (e.g. dialysis), while in the latter one is focused ACROSS disorders (e.g. should we fund this trial in dialysis or that one in childbirth?). For the latter, I think we need input from thoughtful citizens who aren’t aligned with any particular disease or disorder.
How do we know that we have got it right? Good question! I doubt that we’ll ever have a “gold standard” against which our grant funding process can be measured. We certainly change our grant review processes (which to date have almost exclusively involved scientists) all the time (note the large change currently under way at the CIHR), and to the best of my knowledge we don’t have a gold standard for that. I suspect the process will always be iterative.
Thank you Andreas for (re-)opening this debate. I agree there should be a public voice here, but would perhaps reverse the decision-making sequence. Rather than judge the scientific merit first – which means the full-scale proposal has to be prepared and submitted before relevance is considered – I would tend to assess the relevance and then invite those ideas that seem promising to go through the arduous task of preparing the full-scale proposal. That’s what Letter of Intent and similar processes do, and by and large I think it works fairly well.
More fundamentally, while I on principle support the SPOR initiatives and a more central role for the public and patients in the research enterprise, I think it would matter (somewhat) less who is making the decisions if there were clear and transparent criteria for assessing relevance. That usually turns out to be a contentious issue: my idea of relevance may be different from yours and both may be defensible. What seem like intuitively reasonable criteria – e.g., epidemiological burden, absence or existence of effective treatments, whether the improvement to be tested promises modestly incremental or breakthrough improvement, the hypothetical cost-benefit scenarios, the degree of intrusion and discomfort inherent in the novel agent to be tested, whether the trial would be unique in the world or a replication, etc. – are all contestable. In theory the solution is a balanced scorecard incorporating the main dimensions of relevance, but as with all such scorecards, it all comes down to how one assigns values to each of the dimensions, and how the dimensions are weighted against each other.
These matters are usually resolved by a “wisdom of crowds” approach, and since relevance is invariably subjective and even somewhat arbitrary, if consensus is impossible, we should at least strive for transparency. And of course, as Andreas notes, it’s a question of which crowd. The difficulty with mixed crowds is that the non-expert members will tend to defer to the cognoscenti, which is not inherently wrong and may indeed be prudent. It’s not that any rank-ordering of relevance will be impregnably sound; what we want, in a democracy where public dollars fund large trials, is a transparent reasoning process so we can all understand not just what was deemed relevant, but why. Broadening the decision-making base makes it less likely that a purely insiders’ game will produce an agenda that ill reflects health and health care realities. Insisting on, developing, and testing a set of decision criteria on relevance is essential to a process that is not only inclusive, but rigorous and aligned with that other elusive concept – the public interest.