COVID-19 and the Limits of Science
If you want to use science wisely, you must understand its limits.
As attractive as it is to think that science exists on a distinctive, untarnished, untrammeled plane, this idealization is dangerously misleading. Science is carried out by real people, within complex social organizations. Debates among scientists—often civil, occasionally acrimonious—about methods and meaning are the rule. Which is as it should be: That’s how knowledge advances.
Ignoring what scientists have to teach us about COVID-19 would be a mistake. The virus is not a “hoax.”
But it’s also a mistake to default to the idea that we must “listen to Science,” as if there’s an unambiguous perspective that all researchers share and that all scientific data are established with an equal degree of certainty. This isn’t how science views itself. So we shouldn’t view it that way, either.
Within the universe of the present pandemic, some information seems very well established—the identification of the virus responsible for the condition, for example. Other data, including some essential facts, aren’t as clear. We need to recognize and acknowledge these limitations.
Some people see these limitations as reason not to trust any of the information that comes out of the scientific establishment. For them, the failure of models to perfectly predict the trajectory of the pandemic was enough. What we’ve learned, said New York Post columnist Miranda Devine, is that “computer models are unreliable when it comes to predicting the future.” Instead of relying on supposed experts and their supposed models, she says, we should “trust the innate common sense of the American people.”
But there are more responsible ways to understand the problems with modeling.
Respected biostatistician Ruth Etzioni, at the Fred Hutch Cancer Research Center in Seattle, recently wrote that the latest version of the Institute for Health Metrics and Evaluation (IHME) model from the University of Washington “makes me cringe.” The update revised the death projections significantly higher, and Etzioni argues that the modelers got there by making a number of obscure changes to the model, then presenting these updates as simply the consequences of reduced social distancing.
“It makes a nice story,” Etzioni says, “to tell the world that the reason your model’s predictions have changed is because the population’s behavior has changed. The implication is it’s not the model’s fault, it’s the politicians and the people’s shifting behavior.”
The problem, Etzioni says, isn’t the modeling team—which she believes is doing its best to “deliver results that match the data and produce ever-more-complex predictions about the future.” Rather, it’s the way the information is communicated: “acknowledging your model’s limitations and uncertainties,” she says, “is never front and center in the IHME’s updates.”
The kicker here is that Etzioni strongly agrees with the public policy message many have taken from the latest IHME update: She worries about states “opening prematurely,” and says “if some change their minds as a result of this update, it will be a good thing.”
But it’s not clear that anyone will pay attention to that last part. Because while it can be sub-optimal when science pushes its way into the world of politics, the reverse is also true.
When the brilliant Stanford statistician John Ioannidis pointed to problems with existing data around COVID-19, he was so vilified that two academic physicians who are often at odds with each other—Oregon oncologist Vinay Prasad and former Harvard Medical School Dean Jeffrey Flier—united to write a commentary headlined “Scientists who express different views on COVID-19 should be heard, not demonized.”
That’s because scientists—unlike most of our contemporary political actors—believe that debate is a helpful part of their process and that insisting on a sanitized story for the sake of the public good is less helpful than acknowledging areas of ambiguity and uncertainty.
Not that the scientific world is perfect.
If medicine has hurt itself by overstating the certainty of what it knows, it’s also hurt itself by failing to recognize the boundaries of knowledge and the tendency to offer narrowly defined solutions for what are often complicated problems. Such medical advice may be accurate as far as it goes, but lacks what John Podhoretz describes as “real world logic.”
For example, Podhoretz notes, a doctor might advise an overweight patient to eat less. In theory, this is correct. In practice, it might well be ineffectual, because the advice is nearly impossible to durably implement. (There is a reason that long-term weight loss has been famously difficult for most patients to achieve, and for most doctors to successfully deliver.)
Well-intended medical advice focused relentlessly on the utility of quarantine risks being perceived the same way: scientifically “correct”—if everyone quarantined perfectly, it would staunch the spread of the virus—but impossible to maintain.
That said, I’m not sure I’ve heard any physician advocate for perpetual quarantine, because doctors see every day the secondary health consequences of the crisis: the acutely ill patients who are apparently now dying at home because they’re too afraid of visiting a clinic; the cancer patients who are missing their chemotherapy; the children who are missing their vaccinations—to say nothing about the many patients with mental health conditions aggravated by the pandemic.
The broader issue—highlighted in this classic dialog between psychologists Daniel Kahneman and Gary Klein—is that experts may have phenomenal intuition in certain domains (generally considered “kind,” or predictable) yet average intuition in others (“wicked”). Some professional domains lend themselves to such intuition, others don’t; medicine turns out to be the canonical example of a domain characterized by “fractionated expertise,” involving areas where intuition and experience can be extremely powerful guides, as well as areas where a doctor might be profoundly overconfident. Worse: These boundaries can be as abrupt as they are indistinct.
A recent New Yorker piece argued that New York went wrong in its COVID-19 response because, unlike Seattle, it didn’t immediately place a scientist in charge. Epidemiologists, after all, are experts in the management of pandemics, the article argued.
But is epidemiology the only expertise required in dealing with a monumental public health crisis? There’s no question epidemiologists should be a critical part of the leadership team, but managing a pandemic involves a good deal more than the biology.
Think of it this way: We don’t let generals make every decision about war. We have civilian control of the military because the use of force is a political question that, in order to be effective, must be married to statecraft, economics, domestic concerns, and a dozen other considerations.
The same is true for the management of a pandemic in a liberal democracy. What we need is not scientific control over the whole of public policy, but gifted leaders who can integrate and orchestrate—who are able to hear what multiple experts from a range of domains are saying, and then deliver a message that citizens are willing to follow. So far, Governors Andrew Cuomo and Gavin Newsom seem to have been extremely effective at this task.
President Trump, not so much.
In a perfect world, society would not be forced to make a binary choice on “science.” We’d be able to engage the expertise of scientists while acknowledging and embracing—as scientists themselves do—the limitations and uncertainties of their work.
I’ve been struck by how seriously even famously independent scientists and thinkers such as Etzioni and Nassim Taleb take the risk of the coronavirus. As Etzioni writes, “I am in total agreement that the premature relaxation of social distancing will lead to an explosion of new cases and deaths, you don’t need a model to know that.”
Not a lot of hedging there. Which, coming from Etzioni, ought to tell us something.
I heard something similar when I spoke with Nassim Taleb last weekend to discuss a recent paper of his arguing that we may not be able to predict the details of a pandemic, but we should be sure not to underestimate the potential for harm.
I was intrigued by Taleb’s argument. I was even more struck by the protective gear he told me he was wearing as he prepared to head into Costco: double mask, gloves, sanitizer, bleach at the ready.
As Taleb told me, it’s a situation where you can’t afford to be wrong even once.
To me, this seems to be the right message: Yes, there is a great deal scientists still don’t know about the coronavirus and much that is hard to predict. Scientists should be unafraid to acknowledge these uncertainties and recognize the boundaries of their expertise. They should aspire to offer advice grounded in real-world considerations—as the best physicians often do.
But even given these limitations, there is also a set of shared facts:
The virus is contagious, dangerous, and in some cases fatal.
Social distancing plus basic measures (such as wearing a mask) reduces the chance of spread—particularly the “superspreader” events Taleb told me he was especially worried about.
Our ability to treat the disease is likely to improve incrementally as we understand the illness better, and as more medicines come to market.
Hopefully, a safe and effective vaccine will eventually be developed.
But, despite optimistic estimates, huge resource investment, and the best of intentions, that could still be a long way off.
In the meantime, we’ll need to find a way to live in the presence of COVID-19.
Developing a healthier and more grounded relationship with science could make this fraught period considerably safer.