Our world is awash in information. New tools have appeared to make sense of the quantitative deluge. The acquisition of data has produced new insights which have promoted both productive and efficient practices among private enterprises and public sector organizations. But as innocuous data is grouped into all-encompassing metrics, there is a mounting peril: over-reliance upon them can foster perverse incentives.
The lure of quantification is strong; it lends an air of sophistication to a problem and can provide incentives for certain types of decisions. Individuals respond to incentives, of course, and the incentives posed by a handful of metrics – and, in particular, a single metric – can create unintended consequences.
The romance with data dates back many decades, and so do the warning signs.
The McNamara Case
After becoming the youngest professor at Harvard in 1940, Robert McNamara spent World War II in the Pentagon’s Office of Statistical Control, after which he and his former military colleagues were hired as civilians to turn around the faltering Ford Motor Company. Applying the nascent discipline of statistical rigor to business problems, he and his team – the “Whiz Kids” – cut losses, introduced new efficiencies, and quickly revived the company. McNamara became President of Ford shortly before accepting the position of Secretary of Defense in the Kennedy Administration.
In both roles, McNamara and his systems analysts were removed from the object of their work: they had never been in combat, nor did they know anything about actually building cars. Collecting data and evaluating it against metrics, it seemed, permitted a remoteness that fostered objective, disinterested decision-making. Unsurprisingly, then, the same hyper-rationalist approach was applied to the escalating U.S. involvement in Vietnam as the 1960s wore on.
Searching for a key metric to evaluate the course of the war and assess progress, McNamara and his number-crunchers settled on...body counts. Field commanders were ordered to collect numbers of enemy dead after battles and bombing campaigns – estimating if necessary – and send them up the chain of command to the Pentagon for McNamara and his team to analyze.
Three fairly predictable outcomes followed. First, soldiers on the ground in Vietnam counted the “dead” liberally; one can imagine the grisly circumstances in which, considering the superior firepower of the U.S., body counts would invariably be inflated. Second, it is conceivable that the existence of such metrics encouraged the use of excessive firepower whether it was tactically justified or not. And third, the U.S. lost tens of thousands of lives and countless resources fighting a war which, for some period of time, was supported and encouraged by a deeply flawed metric.
In his 1995 book “In Retrospect”, McNamara commented that it is “true enough that not every conceivable complex human situation can be fully reduced to lines on a graph,” but remained unrepentant about the body count metric. Indeed, the practice is enduring.
Years later it was discovered that while McNamara and the Whiz Kids’ tenure at Ford was successful, underlings occasionally found themselves flummoxed by the unfettered metricization of the workplace. By one account, when managers were told that a certain number of parts of an older model car had to be used before new parts could be used, rather than deal with the complications of storage and management, they simply threw the excess parts into the Detroit River.
The Financial Markets
Just over ten years ago, most of the major investment banks and hedge funds relied on a single metric – the “4:15 number,” from the daily “4:15 report” – which used Value-at-Risk calculations to estimate the potential losses a firm faced over the next 24 hours.
Knowing that the 4:15 number was the standard by which they’d be judged, traders would sometimes unwind or hedge positions before the number was calculated, then re-establish them or lift the hedges after the report was generated. Over time, a single enterprise-wide risk number also led senior executives to think in terms of combined effects rather than where – in which departments or asset classes – losses were actually coming from. And a one-day metric led managers to focus on extremely short-term goals and risk exposure.
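To make the mechanics concrete, here is a minimal sketch of a one-day Value-at-Risk calculation by historical simulation – one common way such a number is produced. The profit-and-loss figures and the 95% confidence level are illustrative assumptions, not drawn from any actual bank’s report.

```python
def historical_var(daily_pnl, confidence=0.95):
    """One-day Value-at-Risk via historical simulation: the loss
    threshold exceeded on roughly (1 - confidence) of past days."""
    # Negate P&L so that positive numbers represent losses, then sort ascending.
    losses = sorted(-p for p in daily_pnl)
    # Pick the loss at the requested quantile (clamped for small samples).
    index = min(int(confidence * len(losses)), len(losses) - 1)
    return losses[index]

# Hypothetical daily P&L in millions; the worst day was a 200M loss.
pnl = [120, -80, 35, -150, 60, -40, 90, -200, 10, -5]
var_95 = historical_var(pnl, 0.95)
```

The sketch also hints at why the number is gameable: it depends only on the positions held at the moment of measurement, so a trader who flattens risk just before the snapshot produces a reassuring figure regardless of intraday exposure.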
The economist Charles Goodhart observed that a measure – and by extension a metric – ceases to be a good measure when it becomes a target. Indeed, a number of the banks that employed value-at-risk methods are now gone, victims of the mindset that arises when attention and practices are increasingly based upon a one-dimensional number – however complicated its derivation.
The effects of the fetishization of metrics are everywhere: obsessing over digital engagement metrics can easily lead to distraction from content production. An emphasis on standardized test scores leads teachers to focus more on test-taking techniques than course materials. Making positive patient outcomes the exclusive evaluation criteria for medical practices invariably leads to the rejection of sicker patients.
Recognition of the limitations of data, and of the seductive, misleading role that metrics – particularly single metrics – can take on, is rapidly catching on. Wielded in conjunction with qualitative measures, metrics can add value to decision-making processes.
Quantitative tools do have a role in predictive and analytical frameworks. But remember: numerical measures are by their very nature retrospective. Epistemologically, information derived exclusively from data analysis will be both backward-looking and prone to the shortcomings associated with the snapshot fallacy. Which is to say, it is essentially without context.
Alongside judgment and experience, metrics can aid in navigating the uncertain future of a business, project or policy implementation. Unquestioned, or deprived of updated context, metrics become at best unhelpful and at worst pernicious – grist for fools.