We live in a world inundated with, and crafted in turn by, data. It’s gathered (or created), used, stored, and often used again, alone or in conjunction with other data elements, as input into simple decisions and exceptionally complex models. We accept it as is. What else can we do?
And yet quantitative undertakings, despite their nominal exactness, carry all the same failings as the individuals and organizations that create, gather, and employ them. This goes for healthcare, the media, sports, finance, economics, gaming, and beyond.
To understand the inaccuracies and their implications, consider the case of baseball.
Yogi Berra might have said: “For a simple game, baseball is complicated.”
And he’d have been right: baseball is deceptively straightforward, a handful of positions lending themselves to a wide variety of strategies, along with both archaic and analytical approaches to player evaluation. At the professional level, the rewards are considerable for players, teams, and the cities that host them. And for the season champions (and even near-champions), those already-considerable rewards are compounded.
For these and other reasons, there is always some temptation to cheat. The most recent accusations (they seem to surface almost annually) involve alleged activities on the part of the Houston Astros – stealing the signs exchanged between opposing catchers and pitchers and relaying them to batters – during 2017, a point in time which coincides with their sudden rise to the top of certain rankings. Additionally, there are suspicions that the alleged activity continued until recently. Although ultimately bested by the Washington Nationals in this year’s World Series, the Astros have been among the winningest teams in baseball over the last half decade.
But as innocuous as it may seem, cheating casts a long shadow. Walking through the chain of causality, making batters more effective immediately inflates OBP (on-base percentage), for the players individually and for the team overall. (Whether or not the alleged cheating is the cause, the Houston Astros had the highest OBP this year.) Houston also had the highest team batting average, slugging percentage, and OPS, and was in the top three in runs, hits, home runs, total bases, and RBIs.
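These statistics follow mechanically from the standard sabermetric definitions, which is why an advantage at the plate propagates so directly. A minimal sketch (the batter line below is hypothetical, chosen only to show how a modest number of extra hits moves the headline numbers):

```python
# Standard sabermetric formulas; the player line is invented for illustration.

def obp(h, bb, hbp, ab, sf):
    """On-base percentage: (H + BB + HBP) / (AB + BB + HBP + SF)."""
    return (h + bb + hbp) / (ab + bb + hbp + sf)

def slg(singles, doubles, triples, hr, ab):
    """Slugging percentage: total bases per at-bat."""
    total_bases = singles + 2 * doubles + 3 * triples + 4 * hr
    return total_bases / ab

# Hypothetical season line, before and after 15 extra sign-aided hits.
before = obp(h=150, bb=60, hbp=5, ab=550, sf=5)
after = obp(h=165, bb=60, hbp=5, ab=550, sf=5)
print(f"OBP before: {before:.3f}, after: {after:.3f}")
```

OPS is simply OBP plus SLG, so any effect on hitting flows into it twice over.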
Conversely, advantaged batters drag down the statistics of the pitchers they face in several ways: with around 1,200 major league players in any given season, and another 5,000 to 6,000 on minor league rosters, a handful of bad games can change a player’s career trajectory. And that’s just cheating. The broader issue of inaccuracies and errors in baseball has, in recent years, come under scrutiny, and by dint of one of today’s most common agents: technology.
By one estimate, umpires make almost 35,000 erroneous calls in a single season (an average of 14 mistakes per game) which, like the alleged cheating described earlier, immediately flow through performance statistics to skew game outcomes, players’ careers, fan experiences, wagers, and nearly every other aspect of the game. (Foreseeably, the umpires’ union, which has itself come under scrutiny at times, is not especially welcoming of the robo-umpires.)
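The per-game average is straightforward arithmetic against the standard MLB schedule, and checks out against the cited estimate:

```python
# Back-of-the-envelope check: 30 teams each play 162 games, and every
# game involves two teams, so a full season has 2,430 games.
games_per_season = 30 * 162 // 2
errors_per_season = 35_000  # the estimated bad calls cited above
errors_per_game = errors_per_season / games_per_season
print(f"{errors_per_game:.1f} errors per game")  # ~14.4, matching the cited average
```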
That same research shows that at certain ball/strike counts umpires tend to favor pitchers over batters; that there are regions of the strike zone umpires miscall more often than others (between 2008 and 2018, pitches to the top right and left portions of the zone were called incorrectly 27% of the time); and, stranger still, that younger, less experienced umpires make better calls than older, more experienced ones. By other accounts, umpires often make calls that are decidedly biased in favor of the home team.
But this is not where it ends. Beyond the earnings of teams and the career prospects of players, the influence of sabermetrics on scouting decisions – and the millions of bettors placing billions of dollars’ worth of real and fantasy sports wagers – will also be negatively impacted. Sports agents, endorsement deals, merchandisers: all will feel, to one extent or another, the adverse effects of decisions resting on bad data.
To be sure, the Moneyball revolution is now the way the business of baseball is done:
In 2003, there were maybe four to five clubs that had analytics-dedicated people on their payroll, and typically they were in an office down the hall working on recommendations to people who may or may not pay any attention. I think what’s changed today is that every general manager has some background or interest in analytics, and the typical size of the group in the front office is probably somewhere between 12 and 15 full-time people who all have advanced degrees, whether it’s computer science or physics or mathematics or some other discipline. Along with that, there are data departments in organizations. Most organizations now have database folks and data scientists that are on their payroll and that are helping them not only store the information and organize it properly but also evaluate what it means.
For this reason, bad data – whether born of cheating, erroneous calls, or subjective judgments – has both short- and long-term consequences of a systemic nature. It is not unduly dramatic to suggest that somewhere, each year, a handful (or more) of great players who might have been drafted, or moved from the minors to the majors, are overlooked due to the direct or indirect effects of inaccurate data or faulty calculations. A veritable cascade of invisible consequences thus propagates throughout the entire baseball ecosystem for years to come, the mistakes perpetuated through historical databases and the memories of decision-makers.
Now: does anyone doubt that the US economy, indeed any economy (even a virtual economy), is vastly more complex than baseball?
In 1963 Oskar Morgenstern published the second edition of “On the Accuracy of Economic Observations” (vastly expanded from the 1950 original), in which he took to task the growing complacency with which economic data – in particular the data tracked, compiled, analyzed, and released by governments – was increasingly regarded. Noting that the growth of computing and the spread of econometrics were rapidly lending a physical-science sheen to what is fundamentally a social science, he outlined the common problems with economic statistics (purposeful misrepresentation, errors, lack of definition, problems of classification, an inability to design experiments, bespoke interpretations, inappropriate aggregations, and so on) as they manifest in several particular areas: foreign trade, prices, unemployment, national income, and growth rates.
The very least that could and should be done, Morgenstern concluded, is (somewhat ironically) to emulate the physical sciences, albeit in one of its decidedly less glamorous habits: prominently displaying error estimates alongside all economic data releases.
Perhaps the greatest step forward that can be taken, even at short notice, is to insist that economic statistics be only published together with an estimate of their error. Even if roughly estimated this would produce a wholesome effect. Makers and users of economic statistics must both refrain from making claims and demands that cannot be supported scientifically. The publication of error estimates would have a profound influence upon the whole situation … [Economists and students] must be brought up in an atmosphere of healthy distrust of the “facts” put before them. They must also learn how terribly hard it is to get good data and found [sic] out what really is “a fact.”
Big data and the consequent analytical methods have certainly found their way into the planning and designs of the Federal Reserve System, the United States Department of the Treasury, the Bureau of Labor Statistics, and other repositories of economic statistics. And while the patina of state expertise often gives the reporting body the “benefit of the doubt” with respect to data integrity and quality, a bit more skepticism is in order. There is plenty of reason to doubt the quality of government data, whether of its own production, compiled from outside sources, or poorly handled.
Consider that in December of 2018, the Federal Reserve raised the target range of its benchmark (Fed Funds) rate to 2.25%–2.50%.
On July 31, the Federal Reserve cut the federal funds rate by 25 basis points to a range of 2% to 2¼%, and announced that it ended “quantitative tightening” immediately, two months before the scheduled end. The FOMC gave fuzzy guidance on future cuts, which sets up stock market uncertainties … The Fed under the orchestration of Fed Chair Jerome Powell has been flip-flopping, which has questioned FOMC independence and transparency.
And more recently, at least according to the President, discussions have included negative interest rates. If the Fed – with its massive troves of economic and financial data, untold sums of computing power, and veritable hordes of Ph.D. economists, statisticians, and computer scientists – finds reason to completely reverse direction, whether driven by or in spite of the data, any and all users of that information should take pause.
Oskar Morgenstern summarized his findings: “[T]he statistical problems of the social sciences cannot possibly be less serious than those of the natural sciences [and thus] the situation is far more complicated and therefore requires the most detailed scrutiny.”
At the very least, some humility is in order. We could do worse than Yogi Berra recommended when he said, “If you ask me something I don’t know, I’m not going to answer.”