This week, I’m attending the annual meeting of the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) in Baltimore. Yesterday evening, I attended an interesting “smack-down” presentation between advocates for Randomized Clinical Trials (RCTs) and advocates for Observational Research. The session was packed with hundreds of people. In one corner was Michael Lauer, MD, the director of Cardiovascular Sciences at the National Institutes of Health (NIH). In the other corner was Marc Berger, MD, the Senior Scientist of Ingenix’s Life Sciences Division. Each gave examples of cases where his own research framework got it right and the other got it wrong. Both acknowledged that RCTs are essential and that observational studies will play some role. It was an interesting session, but I was generally disappointed that neither side recognized the importance of the uninvited model builders. I’ve been involved in trials, observational research, and modeling over the years. But, in the tongue-in-cheek debate about which methodologists should be kings of the hill, I’m with the modelers. Let me explain.
How Trialists See the World
The religion of the medical profession includes some strongly-held core beliefs. One of those beliefs is that the profession should be “evidence based.” On one level, the premise of Evidence-Based Medicine (EBM) is indisputable. We should be a profession that makes decisions based on facts, rather than emotions, myths, traditions, self-interest, or whatever else would take the place of facts. But, the EBM belief system, particularly as preached by the clinical trialist clergy, goes beyond that to imply a decision-making framework that should define the way we determine which treatments to offer or not offer to patients. The implied EBM decision-making process can be summarized in the following diagram.
When one reads the abstracts of scientific manuscripts describing RCTs published in all the best medical journals, this implied decision-making framework is clearly visible in the conclusions that are drawn and the logical basis supporting those conclusions.
It is important to note the implicit assumptions that underlie this traditional EBM decision-making framework:
- There are no important differences between the ideal conditions in a tightly controlled clinical trial vs. real-world use
- Health outcomes other than the one selected as the primary endpoint are not important
- A 95% certainty threshold (the conventional p < 0.05) correctly reflects decision-makers’ values regarding the trade-off between benefit and certainty
- Resources are unlimited, so costs need not be considered
Advocates of observational studies often start by questioning one or more of these assumptions. They point out that the way surgical procedures are carried out, or a patient’s adherence to prescribed medication, can be different in community settings. They point out that side effects and complications need to be considered when deciding whether a treatment should be offered. They question the evidence threshold, particularly for potentially life-saving treatments for patients not expected to survive long enough for the definitive RCT to be published. And they point out that our health care system costs far more than those of other countries without a corresponding improvement in life expectancy or other measured health outcomes, and question whether high-tech new drugs and biotechnology are worth their high cost.
But, how do modelers fit in?
Modelers are those who build computer-based models that use explicitly stated, quantitative assumptions as the basis for mathematical calculations to estimate outcomes thought to be relevant to decision-making. Models come in many forms, including decision-analytic models, Markov models, discrete event simulation (DES), and agent-based models (ABM). The assumptions used as inputs to such models can be supported with data, based on expert opinion, or a mixture of both.
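To make this concrete, here is a minimal sketch of one of those forms, a Markov cohort model, in Python. Everything about it (the three health states, the transition probabilities, the cohort size) is a hypothetical illustration, not data from any study.

```python
# A minimal Markov cohort model sketch: three health states
# (Well, Sick, Dead) advanced over annual cycles. All transition
# probabilities below are hypothetical illustrations.

# For each current state, the probability of moving to each
# state (Well, Sick, Dead) during one annual cycle.
TRANSITIONS = {
    "Well": {"Well": 0.90, "Sick": 0.08, "Dead": 0.02},
    "Sick": {"Well": 0.10, "Sick": 0.70, "Dead": 0.20},
    "Dead": {"Well": 0.00, "Sick": 0.00, "Dead": 1.00},
}

def run_cohort(start, years):
    """Advance a cohort distribution through `years` annual cycles."""
    cohort = dict(start)
    for _ in range(years):
        nxt = {state: 0.0 for state in cohort}
        for state, count in cohort.items():
            for target, p in TRANSITIONS[state].items():
                nxt[target] += count * p
        cohort = nxt
    return cohort

# Example: 1,000 well patients followed for 10 years.
print(run_cohort({"Well": 1000.0, "Sick": 0.0, "Dead": 0.0}, 10))
```

The explicit transition table is the whole point: every assumption is stated, in the open, where it can be challenged.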
In my experience, both clinical trial and observational study enthusiasts sometimes dismiss computer-based models as unrigorous. They point out that models are based on assumptions, as if that were incriminating. But when you take a closer look at the relationships between models, RCTs, and observational studies, you notice that modeling logically comes first. And modeling also logically comes last. These relationships are illustrated in the following diagram.
Clearly, an RCT is a waste of resources if it can be shown that it is implausible that the proposed treatment would add value. Conversely, it is wrong to do an RCT if it can be shown that it is implausible that the treatment would fail to add value. Dr. Lauer explained this point last night in an entertaining way with a reference to a sarcastic paper (Smith and Pell, BMJ 2003) pointing out that parachutes have never been proven in an RCT to be effective in treating “gravitational challenge.” But how does one rigorously assess plausibility other than by offering a model with explicit assumptions that represent the boundaries of what knowledgeable people consider to be plausible?
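One way to make that plausibility check concrete is to run a simple value model at the optimistic and pessimistic boundaries of expert opinion. The sketch below does exactly that; every input (baseline mortality, relative risk reduction, treatment harm) is an invented placeholder, not an estimate from any real trial.

```python
# A hypothetical plausibility check: evaluate a simple value model
# at the optimistic and pessimistic boundaries of expert opinion.
# All inputs below are invented for illustration.

def net_lives_saved_per_1000(baseline_mortality, relative_risk_reduction,
                             harm_deaths_per_1000):
    """Net lives saved per 1,000 treated under the stated assumptions."""
    lives_saved = 1000 * baseline_mortality * relative_risk_reduction
    return lives_saved - harm_deaths_per_1000

# Boundaries of what knowledgeable people consider plausible:
optimistic = net_lives_saved_per_1000(0.20, 0.30, 1.0)   # best case
pessimistic = net_lives_saved_per_1000(0.10, 0.05, 6.0)  # worst case

print(f"Optimistic boundary:  {optimistic:+.1f} net lives saved per 1,000")
print(f"Pessimistic boundary: {pessimistic:+.1f} net lives saved per 1,000")
# If both boundaries fall on the same side of zero, the answer is
# already settled and the RCT adds little; if they straddle zero,
# as here, the trial is worth running.
```

For parachutes, even the most pessimistic assumptions anyone would defend put the net benefit far above zero, which is exactly why no trial is needed.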
When the clinical trialist is designing the RCT, they must decide on the sample size. To do this, they do power calculations. The basis of power calculations is an assumption about the effect size that would be considered “clinically significant.” But if you acknowledge that resources are not unlimited, “clinically significant” is a synonym for “worthwhile,” which is a synonym for “cost-effective.” But how can one assess “worthwhile-ness” or “cost-effectiveness” in a rigorous way, in advance of an RCT, without constructing a model based on explicitly stated assumptions?
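For readers who have not run one, here is the standard normal-approximation power calculation for a two-arm trial comparing means, sketched in Python. The specific values assumed for the “clinically significant” difference and the standard deviation are hypothetical.

```python
# Standard normal-approximation sample-size calculation for a
# two-arm trial comparing means. The assumed "clinically significant"
# difference (delta) and standard deviation (sd) are hypothetical.
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, sd, alpha=0.05, power=0.80):
    """Patients per arm needed to detect a mean difference `delta`
    with two-sided significance level `alpha` and the given power."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # e.g., 1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # e.g., 0.84 for 80% power
    return ceil(2 * (sd * (z_alpha + z_beta) / delta) ** 2)

# Halving the assumed "clinically significant" effect roughly
# quadruples the required sample size:
print(n_per_arm(delta=5.0, sd=10.0))   # 63 per arm
print(n_per_arm(delta=2.5, sd=10.0))   # 252 per arm
```

The point of the sketch is that the entire calculation hinges on delta, an effect size someone has already judged, implicitly or explicitly, to be worthwhile.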
Once the trial is done, the results must be incorporated into practice guidelines. Again, if you accept that health outcomes other than the primary endpoint of the trial might be important, or that resources are not unlimited, one needs to use a model to interpret the trial results and determine for which subsets of patients the treatment is worth doing.
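As an illustration of what such a model boils down to, here is a hypothetical sketch that computes an incremental cost-effectiveness ratio (ICER) for each patient subgroup and compares it with an assumed willingness-to-pay threshold. Every figure, including the subgroup names and the $100,000-per-QALY threshold, is invented.

```python
# A hypothetical sketch of using a model to decide which patient
# subsets a treatment is worth offering to: compute the incremental
# cost-effectiveness ratio (ICER) per subgroup and compare it with
# a willingness-to-pay threshold. All figures are invented.

WTP_PER_QALY = 100_000  # assumed willingness-to-pay threshold ($/QALY)

# (subgroup, incremental cost in $, incremental QALYs gained)
subgroups = [
    ("high-risk patients", 40_000, 0.80),
    ("moderate-risk patients", 40_000, 0.30),
    ("low-risk patients", 40_000, 0.10),
]

for name, d_cost, d_qaly in subgroups:
    icer = d_cost / d_qaly
    verdict = "recommend" if icer <= WTP_PER_QALY else "do not recommend"
    print(f"{name}: ICER = ${icer:,.0f}/QALY -> {verdict}")
```

Note that the trial itself supplies only the incremental QALYs; the costs, the subgroup structure, and the threshold are all modeling assumptions that someone must state explicitly.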
Then, if observational studies are done to assess the outcomes of the treatment in real-world use, described as “effectiveness studies,” one needs to again use a model to interpret and determine the implications of the data obtained from such studies.
So, if we really want to be logical and rigorous, models must precede RCTs. And models must also be built after RCTs and observational studies to properly and rigorously determine the implications of the data for practice guidelines.
For next year’s ISPOR annual meeting, I propose that modelers be included in the methodology smack-down session.