Three Recent Reversals Highlight the Challenges of COVID Science
Science is hard. It's harder when politics gets in the way.
The COVID-19 pandemic continues to remind us just how messy and maddening science can be. At times, it seems that we’re making real headway, and then abruptly, the progress is stalled. Or worse. Three recent examples of high-profile reversals highlight the challenges of working on urgent problems at the cutting edge of medical science.
The most recent, and perhaps most conspicuous, reversal is the FDA’s revocation of the Emergency Use Authorization (EUA) it had originally issued for hydroxychloroquine (and chloroquine) on March 28.
These drugs were developed decades ago for the treatment of malaria. Hydroxychloroquine is also used to treat some autoimmune conditions, such as rheumatoid arthritis and lupus. The EUA for the treatment of COVID-19 was granted on the basis of embarrassingly little scientific evidence, but in the presence of exceptional political pressure from President Trump.
Encouraged by aides, including his trade adviser Peter Navarro, the president believed there was real promise in the medication, and comparatively little risk, given that the drug had been around for so long and its common side effects had been well described.
When data, including a gold-standard randomized controlled trial, revealed that the drug apparently offered no benefit to COVID-19 patients, the EUA was withdrawn.
While most scientists agree with the latest decision, there is almost universal consensus that the original approval represented a political, rather than a scientific, decision.
As Dr. Ashish Jha, Faculty Director at the Harvard Global Health Institute, told the Washington Post, “Until six months ago, I never worried that the FDA decision-making was being driven primarily by political considerations, even when I disagreed with some of that.” In the context of the hydroxychloroquine decision, he says, the FDA must “try and do something to restore faith in their agency.”
One particularly worrisome consequence of the apparent political meddling is that future medicines—especially vaccines—are less likely to be trusted. Before COVID-19, there was already a distressing amount of vaccine hesitancy, and this despite the robust data supporting their use. Add to this Trump’s fervent desire to announce a vaccine success prior to the election. (Vice President Mike Pence suggested in a recent Wall Street Journal op-ed that “we are well on the way to having a viable vaccine by fall.”)
The obvious concern is that under intense pressure from the administration, a “viable vaccine” could get the green light from the FDA based on inadequate data around safety and efficacy, potentially placing healthy future recipients at risk.
And an equally worrisome concern is that a truly safe and effective vaccine could be heroically developed, but then struggle for adoption given the widespread distrust of Trump, and the ever-expanding number of processes and agencies over which he is now thought to exert undue influence.
While an excessively permissive FDA fails to serve the public good, an overly rigid and conservative FDA isn’t what we need either—especially during a pandemic.
The FDA by its very nature exists between two opposing forces: one encouraging it to expedite approvals of good drugs for patients, the other urging it to protect patients from bad drugs.
Yet these tensions tend to manifest asymmetrically, since the failures—products that were approved but then discovered to be dangerously flawed—are by nature more visible than the products never developed because of an excessively cautious review process.
You might think of the FDA as a place constantly torn between committing sins of commission and sins of omission. So we ought to have sympathy for how hard the institution’s job is.
Some critics on the libertarian side of the spectrum have wondered whether the country might do better by skipping most of the review processes, letting more products reach consumers, and essentially letting the market determine quality.
In the context of COVID-19, we were briefly treated to a preview of what this parallel libertarian universe might have looked like, when the agency initially allowed a number of coronavirus antibody tests onto the market without the customary quality check—essentially relying on manufacturers to evaluate their own tests and mandating only a disclaimer.
The result was a mess.
Many tests didn’t work as advertised. The New York Health Department had to warn doctors not to use them, Politico reported. Former FDA Commissioner Dr. David Kessler commented, “If the FDA is not looking at validation studies, then FDA’s not doing its job.” The agency soon reversed course: on May 4 it announced that such tests must be submitted to the FDA and must meet pre-specified performance criteria.
As Scott Becker, CEO of the Association of Public Health Laboratories, told Politico, “We’ve long been concerned that allowing tests on the market that have not been approved and authorized for use is a recipe for disaster.” He added that the “revised policy makes a lot of sense and should have been in place over the last six weeks.”
This makes sense, given the indefensibly poor quality of some of the antibody tests hitting the market.
But the flip side of this coin is that, had this process worked—had the tests been solid—then right now we would be crediting the FDA for thinking creatively and for cultivating the sort of innovation that some crises require.
Certainly, this is how the FDA sees its initial failure. On a recent panel discussion I moderated at this year's (virtual) BIO2020 meeting, FDA Principal Deputy Commissioner Dr. Amy Abernethy told me that in the context of COVID-19, “We have all had to figure it out really fast.” Abernethy noted that companies have recognized that,
. . . it’s really important to take a few risks, especially if those risks are calculated and you have understood how to best manage them. Meanwhile, on the FDA side, we’ve realized that we need to have regulatory flexibility in many different directions, and then learn from that regulatory flexibility on the fly and actually be clear about that.
Added Abernethy, “We have issued a number of EUAs that have taught us what is okay to do. And sometimes it doesn’t work, so sometimes we have to pull back.”
She cites the change of policy around COVID-19 antibody tests as a prime example: “As FDA, we’ve been right-sizing right along with companies and I think COVID-19 gives us this really important opportunity to try and ultimately figure out how do we innovate better, especially in areas that are historically incredibly risk-averse.”
The challenge facing regulators is in a sense similar to that facing other large legacy organizations, as Safi Bahcall detailed in his recent book, Loonshots. The dilemma is that innovation requires risk, and even calculated risk includes the possibility of being wrong.
If we insist on punishing all mistakes, we’ll also disincentivize the risk-taking that innovation and change require.
A third recent scientific reversal involves not the FDA but prestigious medical journals: The Lancet and the New England Journal of Medicine. Each journal recently retracted a COVID-related paper that relied on a questionable dataset aggregated by a company no one had heard of ten minutes ago.
The Lancet paper purported to examine the outcome of hospitalized patients treated with hydroxychloroquine or chloroquine (or neither), and concluded that these medicines didn’t help patients and seemed to increase the risk of cardiac arrhythmias and death.
The NEJM paper, meanwhile, found that hospitalized patients with underlying heart problems were more likely to die from COVID-19 than similar patients without heart problems—but the risk didn’t seem to be aggravated by several categories of hypertension medicines that had previously raised concern.
In some ways, this reversal is perhaps the most worrisome of all, in that it threatens an emerging and important category of information known as real-world data.
Real-world data basically means data from all the patients routinely engaged with the healthcare system. It encompasses everything except for typical clinical trials, which generally exist outside the system of routine care.
Most randomized clinical trials—and certainly those involved in the initial evaluation of novel therapeutics—involve very deliberate and systematic approaches to the evaluation and treatment of patients.
A patient enrolled in a clinical trial is studied according to a highly detailed, meticulously crafted, rigorously reviewed, and (hopefully) stringently followed protocol. The goal is to conduct a high-quality experiment that seeks to establish whether a defined intervention can impact a specific disease, and do so in a way that is relatively safe—that does appreciably more good than harm.
The randomization and the rigor are required to maximize the chance of detecting a meaningful efficacy signal.
But here’s where things get weird: Once an approach is established—whether a new drug that’s approved, or an innovation to an existing therapy that’s adopted—the approach is essentially out there, to be used or not, as providers or hospital systems see fit.
In other words, you have remarkably little visibility into how patients are faring in the real world. Even though this is what matters most.
Is a medicine working as intended? Are there patients who are doing better than expected? Or worse? Are there tweaks introduced by front-line providers that may help a medicine work better, or be better tolerated? Most of the time, the providers have only their inevitably limited experience (plus what they may have heard anecdotally) to go on as they try to assess whether a particular approach is actually effective in their hands.
Even most hospital systems lack the ability to assess how well a particular approach may be working. (Though they can usually determine exactly how many of a particular procedure were done, and calculate how much revenue was generated.)
There are reporting requirements for severe adverse effects, and there are often follow-up clinical studies performed. But the vast, vast majority of clinical experience is not meaningfully incorporated into our collective understanding.
While isolated aspects of an individual’s interaction with the healthcare system are captured (through a combination of insurance data, lab reports, and physician notes), it is often prohibitively difficult to track the trajectory of most individual patients longitudinally, through health and disease.
This is rarely done effectively even at individual hospitals, and the task is complicated by patients receiving care at multiple institutions (and by changing insurers relatively frequently). Information in electronic medical records—much of it entered in service of billing—is notoriously inaccurate and often consists of prepopulated templates (such as physical exams) or sections of previous notes simply cut and pasted.
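To make the linkage problem concrete, here is a minimal, purely illustrative sketch (in Python, using pandas) of what assembling a per-patient timeline might look like if the fragmented sources above shared a reliable patient identifier, which in practice they usually do not. Every table, column name, and value below is invented for illustration.

```python
# Illustrative sketch only: building a per-patient timeline from fragmented sources.
# All identifiers, columns, and values are hypothetical.
import pandas as pd

claims = pd.DataFrame({
    "patient_id": [101, 101, 102],
    "event_date": ["2020-03-02", "2020-04-10", "2020-03-15"],
    "detail": ["ER visit", "cardiology follow-up", "hospital admission"],
})
labs = pd.DataFrame({
    "patient_id": [101, 102],
    "event_date": ["2020-03-03", "2020-03-16"],
    "detail": ["SARS-CoV-2 PCR positive", "troponin elevated"],
})

# Stack the sources, tag where each record came from, and sort by patient and date
# to produce a longitudinal view of each patient's trajectory.
timeline = (
    pd.concat([claims.assign(source="claims"), labs.assign(source="lab")])
      .assign(event_date=lambda d: pd.to_datetime(d["event_date"]))
      .sort_values(["patient_id", "event_date"])
)
print(timeline)
```

The hard part in the real world is not the few lines of joining and sorting; it is getting a trustworthy identifier and accurate records in the first place.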
On the other hand, if you had a high-quality, comprehensive, up-to-date dataset, imagine how useful it would be—especially in a crisis like COVID-19.
This is exactly what the FDA itself is seeking to facilitate through the COVID-19 Evidence Accelerator. With this sort of information at your fingertips, you could rapidly evaluate whether categories of commonly administered medicines—such as hydroxychloroquine—were effective, and whether other categories of medicines—like some anti-hypertensives—made things worse.
In other words, we would be able to routinely evaluate exactly the sort of questions tackled by the recent Lancet and NEJM papers.
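To see why such a dataset would be so powerful, consider a deliberately oversimplified sketch of the kind of first-pass question it would let you ask: say, crude in-hospital mortality by treatment group. The field names and figures here are invented, and any credible analysis would also need to adjust for confounding (disease severity, comorbidities, and so on), which is precisely where observational studies like the retracted papers can go astray.

```python
# Illustrative sketch only: a crude, unadjusted comparison of outcomes by exposure.
# Field names and values are hypothetical; real analyses require confounding adjustment.
import pandas as pd

records = pd.DataFrame({
    "received_hydroxychloroquine": [True, True, False, False, False, True],
    "died_in_hospital":            [False, True, False, False, True, False],
})

# Crude in-hospital mortality rate within each treatment group.
mortality_by_group = (
    records.groupby("received_hydroxychloroquine")["died_in_hospital"].mean()
)
print(mortality_by_group)
```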
Real-world evidence, in other words, represents the critical part of the feedback loop required for learning in medicine.
It’s what’s required to help healthcare systems iteratively optimize and figure out how to offer better treatments. It could also help tune AI algorithms seeking to identify subtle clinical patterns.
But the full promise of real-world data—and the dramatic improvements in care it might enable—may be prohibitively difficult to realize, given the challenges of pulling together all the required information, and ensuring the quality and accuracy of what’s assembled. In fact, it was the unusual completeness of the dataset used for the Lancet and NEJM papers (along with other methodological and data integrity concerns) that roused the suspicion of outside experts, who called on the journals to investigate.
The company providing the relevant data was Surgisphere, an obscure Chicago-based outfit whose “handful of employees,” according to the Guardian, “appear to include a science fiction writer and an adult-content model.” The company, the Guardian continues, “has provided data for multiple studies on COVID-19 co-authored by its chief executive, but has so far failed to adequately explain its data or methodology.” Surgisphere, for its part, says “we stand behind the integrity of our studies and our scientific researchers, clinical partners, and data analysts.” But they have reportedly refused to cooperate with independent investigations.
Over time, we’re sure to learn more about Surgisphere, as well as the review processes that led two top journals to overlook such critically flawed papers. It’s possible that the editorial appeal of the conclusions—indictment of Trump’s favorite drug, acquittal of two common blood pressure medicines—helped speed things along.
Because it turns out that Donald Trump is instinctively right about one thing: Science, as practiced, can be as political, as personal, and as petty as every other area of human endeavor.
But science—unlike our chief executive—also tends to be self-correcting. And for all its flaws, science still remains our best hope to defeat this deadly virus.