May 28, 2019

My name is Max Gulker. I’m an economist, and I’m biased. I bring my own unique combination of experience, temperament, and training to bear on any economic or policy question. That alone might lead me to a different conclusion than another capable economist operating in good faith but with different biases. I also like winning, being right, and gaining the approval of my colleagues. As an economist, it’s on me to be constantly vigilant about biases stemming especially from these less noble sources, but I’ll never come close to doing a perfect job.

I’m not sheepish about admitting that I’m biased because I believe the last paragraph could have been written about virtually any economist working today. Perhaps no other discipline has as fraught a relationship with the idea of bias as economics. Because economics is more complex than most of what’s studied in the natural sciences, no set of rules of inquiry will ever fully eliminate the need for judgment calls by the researcher. And judgment calls are where our biases truly have teeth.

Because a degree of bias is inevitable, we should be extremely skeptical of self-anointed “revolutions” in economics that promise to make bias a thing of the past. One of those “revolutions” is gaining steam right now.

The “Credibility Revolution”

A recent vox.com article titled “The radical plan to change how Harvard teaches economics” heralds the coming sea change with an alternative introductory economics course taught by Professor Raj Chetty. The course stays away from the canonical supply and demand curves, or any economic theory for that matter, and goes right to the data. Chetty and his course are products of a trend in economics often called the “new empiricism” or “credibility revolution.”

To be fair, it’s not clear whether actually replacing the standard Econ 101 course is Chetty’s idea or the fantasy of Vox writer Dylan Matthews. But it’s clear Chetty sees the standard theory as out of touch:

“I felt increasingly what we’re doing in our offices and our research is just totally detached from what we’re teaching in the intro classes,” Chetty says. “I think for many students, it’s like, ‘Why do I want to learn about this? What’s the point?’”

Chetty’s class is called “Using Big Data to Solve Economic and Social Problems,” and focuses on the empirical techniques forming the foundation of the so-called credibility revolution. Practitioners combine econometric estimation with an emphasis on “experiment design” to try to get as close as they can to science experiments in determining causes and effects of economic phenomena.

Much of the new empiricists’ work utilizes randomized field experiments: for example, development economists might collaborate with an NGO to randomly select villages to receive cash aid versus food aid. New empiricists also exploit “natural experiments,” such as using life events like maternity leave to assess the direct impact of teachers on classroom outcomes. The idea in either case is to drown out the complexity and noise of the real world to simulate the clean, cut-and-dried world of a science lab.
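To make the logic of such a randomized design concrete, here is a minimal sketch in Python using entirely invented numbers (the village count, outcome variable, and assumed effect size are illustrative, not drawn from any actual study). With random assignment, a simple difference in means between the cash and food groups estimates the average effect of the aid type.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 200 villages, each randomly assigned to cash aid (1)
# or food aid (0). The outcome (say, household consumption) is simulated
# with an assumed +5 effect of cash aid, purely for illustration.
n = 200
cash = rng.binomial(1, 0.5, size=n)
outcome = 100 + 5 * cash + rng.normal(0, 20, size=n)

# With randomized assignment, the difference in group means estimates
# the average treatment effect.
ate = outcome[cash == 1].mean() - outcome[cash == 0].mean()
se = np.sqrt(outcome[cash == 1].var(ddof=1) / (cash == 1).sum()
             + outcome[cash == 0].var(ddof=1) / (cash == 0).sum())
print(f"Estimated effect of cash vs. food aid: {ate:.1f} (SE {se:.1f})")
```

The arithmetic is trivial by design; the credibility claim rests on the randomization itself, which is exactly where the environmental and external-validity questions discussed below come in.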

Much of the backlash against the new empiricism within the economics profession has rightly focused on the virtual neglect of economic theory in favor of looking “at what the data tell us.” Many writers have exposed the flaws in this approach, most notably economist Russ Roberts. But beyond this age-old debate, we should be equally concerned about the suggestion by new empiricists that their process is immune to their own biases.

Chetty said as much in a 2013 New York Times op-ed when he wrote that “as the availability of data increases, economics will continue to become a more empirical, scientific field.” He frequently makes comparisons between economics and medicine, arguing that improving techniques will objectively solve vexing questions.

A 2010 article in the Journal of Economic Perspectives by Angrist and Pischke popularized the term “credibility revolution” and serves as a manifesto of sorts for new empiricists:

Empirical microeconomics has experienced a credibility revolution, with a consequent increase in policy relevance and scientific impact. Sensitivity analysis played a role in this, but as we see it, the primary engine driving improvement has been a focus on the quality of empirical research designs.

Design-based revolutionaries have notched many successes, putting hard numbers on key parameters of interest to both policymakers and economic theorists.

Research stemming from the new empiricism has no doubt made important contributions to the field. However, its claim of reaching a scientific ideal, untethering economics from human opinion, is deeply problematic.

Judgment and Bias

All research involves some amount of judgment calls, and judgment calls are the breeding ground for human biases. This problem no doubt affects even chemists in the most clinical laboratory, but the scope for bias multiplies when analyzing economic data from the real world, no matter how “well-designed” one’s experiment.

One issue in the new empiricism that leaves the door wide open for researcher bias is the environment in which one conducts an experiment. Look no further than the lengths to which natural scientists go to make laboratories sterile and ordered. Compare that to where the new empiricists purport to get “scientific” results: randomized experiments and natural variation in data from the real world, where complexity along thousands of dimensions is unavoidable.

Consider the hypothetical randomized development experiment proposed above, where villages are randomly selected to receive cash or food aid. If researchers run this experiment in Nigeria, should policymakers in Laos view the results as scientific? In attempting to deflect this problem, Angrist and Pischke only underscore its importance, writing that “anyone who makes a living out of data analysis probably believes that heterogeneity is limited enough that the well-understood past can be informative about the future.”

Whether a certain result can inform development policy globally, or is instead peculiar to the circumstances of one country, is a judgment call. Economists biased toward wanting the result to hold globally will argue one way; those invested in it not holding may argue the other.

Judgment calls are also rampant in the presentation and analysis of empirical results. Empirical research in economics typically involves a deluge of numbers stemming from scores of empirical specifications attempted by the researcher. As Roberts notes, simply choosing which numbers to present, especially when turning one’s back on theory, will inevitably invite bias:

“Young economists are enthusiastic about these quasi-experiments. As one economist once told me — I don’t rely on theory, I just listen to what the data tells me. But numbers don’t speak on their own. There are too many of them. We need some kind of theory to help us decide which numbers to listen to. Inevitably, our biases and incentives influence which numbers we think speak the loudest.”
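Roberts’s point about the deluge of numbers is easy to see in a small simulation. The sketch below uses hypothetical data (no published study is involved) and estimates the same coefficient of interest under every possible combination of six candidate control variables, producing dozens of equally “data-driven” estimates; deciding which of them to report is precisely the kind of judgment call where bias enters.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Invented dataset: outcome y, variable of interest x, and six candidate
# controls loosely correlated with both. All numbers are illustrative.
n = 500
controls = rng.normal(size=(n, 6))
x = controls @ rng.normal(size=6) * 0.3 + rng.normal(size=n)
y = 0.5 * x + controls @ rng.normal(size=6) * 0.5 + rng.normal(size=n)

# Re-estimate the coefficient on x under every subset of the controls
# (2^6 = 64 specifications) via ordinary least squares.
estimates = []
for k in range(7):
    for subset in itertools.combinations(range(6), k):
        X = np.column_stack([np.ones(n), x] + [controls[:, j] for j in subset])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        estimates.append(beta[1])

print(f"{len(estimates)} specifications; coefficient on x ranges "
      f"from {min(estimates):.2f} to {max(estimates):.2f}")
```

None of these 64 regressions is obviously wrong; without theory to discipline the choice, which number “the data says” is up to the researcher.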

Living With Bias

The idea that economists, and all human beings, will exploit any ambiguity in research results to further their own worldview sounds quite cynical. But look at how discussion of any economic (or scientific) issue that informs political debate invariably splits along prior political lines. This is just something people do.

When highly talented researchers like Chetty and other new empiricists see their goal as overcoming such biases, they only end up reinforcing their own. We need to learn how to live with the wide diversity of prior views we as economists bring to any question, and even make those biases work to our advantage.

I learn a great deal from economists with different toolkits or political views than my own. My research on the job guarantee proposed by economists on the left let me see up close their thinking about how such a massive government program could be beneficially structured. Thinking through why they were wrong greatly enhanced my own understanding of the pitfalls of such top-down interventions.

Economics as a field does not lend itself to single watershed results forcing researchers to immediately rethink their prior views. But an approach that lives with bias rather than trying to extinguish it allows researchers to gradually evolve over time.

Too often researchers in all fields define “unbiased” as “agreeing with me.” Bias, whether rooted in deeply held principles or in less noble aspects of human nature, isn’t going anywhere. Economics is at its best when we approach others’ biases with intellectual curiosity, and our own biases with intellectual honesty.

Max Gulker

Max Gulker is a former Senior Research Fellow at the American Institute for Economic Research. He is currently a Senior Fellow with the Reason Foundation. At AIER his research focused on two main areas: policy and technology. On the policy side, Gulker looked at how issues like poverty and access to education can be addressed with voluntary, decentralized approaches that don’t interfere with free markets. On technology, Gulker was interested in emerging fields like blockchain and cryptocurrencies, competitive issues raised by tech giants such as Facebook and Google, and the sharing economy.

Gulker frequently appears at conferences, on podcasts, and on television. Gulker holds a PhD in economics from Stanford University and a BA in economics from the University of Michigan. Prior to AIER, Max spent time in the private sector, consulting with large technology and financial firms on antitrust and other litigation. Follow @maxg_econ.
