On the ghoulishly appropriate date of 31 October – Halloween – Boris Johnson announced new lockdown measures. “These measures above all will be time-limited,” the Prime Minister assured the people. “They will end on Wednesday 2 December.” Only a day later, however, Michael Gove told Sky News that the lockdown could be extended. “We’re going to review it on the 2nd of December, but we’re always driven by what the data says.” In Parliament on Monday, Johnson insisted again that “These measures are time limited. They elapse on the 2nd of December.” That sounded definitive until he added, “Our intention is to use this period to get it [the infection rate] down to one.” Good intentions are unobjectionable, but not very reassuring.
Also well-intentioned, but less than reassuring, was the PM’s Saturday meeting with MP Steve Baker. Given his very public concerns over lockdown, we can guess that Baker was called in to argue against the new measures. While it is nice to see such a display of evenhandedness, the meeting seems to have been called too late to have any substantial effect on Number 10’s policy. Baker was asked on his way in “whether it’s going to be lockdown.” He replied, “We’re here to talk about it.” But the government’s lockdown plans had already been leaked to the press. At that point, what was there to talk about?
Like governments across Europe, Number 10 is toggling lockdowns on and off like a faulty light switch. This toggling creates uncertainty, which is bad for investment and job creation, without providing a clear health benefit. Number 10 must be afraid of the terrible cost of doing nothing and afraid of making things even worse with bad policy. What would you do? You would probably do what they seem to be doing: consulting the scientific experts. Ironically, that is the problem. The minutes of the SAGE meeting of 21 September let slip the not-so-secret secret of pandemic policy prescription: the experts do not know what to do. “The existing evidence base for the effectiveness and harms of individual interventions is generally weak.” As the generally weak evidence shifts, so does expert opinion, and their advice toggles back and forth in the process.
When political leaders get scientific advice that flips, flops, and flounders, policy too will flip, flop, and flounder. Expert failure is at the root of the problem. Whether lockdowns are salvation or damnation, expert failure harms Covid policymaking. Whether Number 10 deserves exoneration or excoriation, they need better scientific advice.
It would be a good start if we could all remember what we were taught as children: there are at least two sides to every story. The current organisation of scientific advice gives SAGE a disproportionate voice. The government has sought out multiple voices, as illustrated by Steve Baker’s Halloween meeting. But, as that same meeting illustrates, SAGE has a disproportionate influence. Though not a strict monopoly, they have what economists call “monopoly power.” Number 10 is getting mostly just one side of the story, which renders their advice correspondingly less useful. We should want the government to be more fully exposed to multiple points of view. We should want them to have a well-rounded understanding of the problems they face.
SAGE should be reorganised to simulate a market for expert advice using competing experts. In particular, there should be three independent opinions on critical policies. SAGE should bring multiple expert areas together, thus breaking down expert silos. And it should use “red teams” or devil’s advocates to challenge prevailing expert opinion.
The British government might have got better advice from the start if SAGE had organised three Covid teams, rather than just one. Each team would have had multiple expert areas represented, including epidemiology, economics, psychology, and social work. With multiple areas represented on each team, they would have been forced to deal with the complex interactions linking infection rates to other things that matter, such as joblessness, substance abuse, and suicide rates. And the multiple teams might have drawn out differences within areas of expertise as well as across them. For example, policy makers might have been made more fully aware of the differences between the epidemiological models coming from Oxford and Imperial College. They might have been made to better understand that “The existing evidence base for the effectiveness and harms of individual interventions is generally weak.”
Without competition among experts, we can expect more expert failure in the future. It is time to reform SAGE and reduce the monopoly power of its one, largely unchallenged, team of experts.