Back in 2005, a humble and highly accomplished Stanford University biostatistician became a media sensation. He had written a refereed article called “Why Most Published Research Findings Are False.” John Ioannidis had taken a careful look at the reproducibility of research findings, only to discover that most fail to replicate. He wanted to know why, and delineated the factors that make published findings more likely to be false. The media briefly celebrated him as a debunker of fake science. However, the professor is not some kind of iconoclast. He is just careful, believes in facts, and believes that science should be rooted in integrity and truth as first principles.
Fifteen years later, the coronavirus was making its way to the United States. There was talk in the air of millions upon millions of dead, and the need for universal quarantines and shutting down economies and lives. Ioannidis saw the coming storm and wrote an epic article, one which will for many years be considered prophetic: “A fiasco in the making? As the coronavirus pandemic takes hold, we are making decisions without reliable data.”
“The data collected so far on how many people are infected and how the epidemic is evolving are utterly unreliable,” he warned on March 17, several days after predictive models based on unreliable data were forecasting millions dead in the U.S.
Draconian countermeasures have been adopted in many countries. If the pandemic dissipates — either on its own or because of these measures — short-term extreme social distancing and lockdowns may be bearable. How long, though, should measures like these be continued if the pandemic churns across the globe unabated? How can policymakers tell if they are doing more good than harm?
Vaccines or affordable treatments take many months (or even years) to develop and test properly. Given such timelines, the consequences of long-term lockdowns are entirely unknown.
The data collected so far on how many people are infected and how the epidemic is evolving are utterly unreliable. Given the limited testing to date, some deaths and probably the vast majority of infections due to SARS-CoV-2 are being missed. We don’t know if we are failing to capture infections by a factor of three or 300. Three months after the outbreak emerged, most countries, including the U.S., lack the ability to test a large number of people and no countries have reliable data on the prevalence of the virus in a representative random sample of the general population.
This evidence fiasco creates tremendous uncertainty about the risk of dying from Covid-19. Reported case fatality rates, like the official 3.4% rate from the World Health Organization, cause horror — and are meaningless. Patients who have been tested for SARS-CoV-2 are disproportionately those with severe symptoms and bad outcomes. As most health systems have limited testing capacity, selection bias may even worsen in the near future….
In the absence of data, prepare-for-the-worst reasoning leads to extreme measures of social distancing and lockdowns. Unfortunately, we do not know if these measures work. School closures, for example, may reduce transmission rates. But they may also backfire if children socialize anyhow, if school closure leads children to spend more time with susceptible elderly family members, if children at home disrupt their parents’ ability to work, and more. School closures may also diminish the chances of developing herd immunity in an age group that is spared serious disease….
One of the bottom lines is that we don’t know how long social distancing measures and lockdowns can be maintained without major consequences to the economy, society, and mental health. Unpredictable evolutions may ensue, including financial crisis, unrest, civil strife, war, and a meltdown of the social fabric. At a minimum, we need unbiased prevalence and incidence data for the evolving infectious load to guide decision-making.
It was a powerful warning that should have been heeded. But it was not. So he started doing interviews, some of which have gotten millions of views. But did governments consult him? I do not have the answer to that. Regardless, they should have.
As an economist trained in various modeling techniques, I know, as do my many colleagues, of the grave dangers of enacting policies based on models with unreliable data, too many unknown assumptions, and the presumption that people will obey like mechanical objects. The real world rarely complies. Economists know this. It is Ioannidis’s view that models in the world of biostatistics and epidemiology often suffer from the same problems.
I feel sure that we will look back on these days with shock at what our leaders have done in the name of disease control. Many reputations will suffer. Respect for the intelligence and moral courage of John Ioannidis will rise.