July 18, 2022

Economics focuses on how changes in the incentives facing self-interested individuals (not necessarily selfish ones, but people desiring command over more resources to advance whatever they care about) lead them to alter their behavior over an almost unlimited range of choices, reflecting the fact that the range of choices in which scarcity is a factor is similarly vast. Understanding such choices requires “thinking straight,” which means avoiding the many logical errors that could lead our reasoning astray. The economics principles text I currently use calls such errors “pitfalls to avoid in economic thinking” in its very first chapter.

One of those pitfalls is to equate association with causation. As my text puts it, “In economics, identifying cause-and-effect relationships is very important. But statistical association alone cannot establish this causation.”  

In my class, I sometimes use a silly example to illustrate the problems caused by interpreting association or correlation as implying causation. Say that ice cream sales are positively associated (or correlated) with the level of property crime. That does not mean one caused the other. Higher ice cream sales could possibly have caused higher property crime. Higher property crime could possibly have caused higher ice cream sales. Some other variable or set of variables could possibly have caused both. Or the association could be a random result (with a key question being whether we can reject randomness).

Students are quick to see that neither direction of causation between ice cream sales and property crime seems plausible (although there are a surprising number of cases in which crucial underlying incentives are not obvious on the surface, so that standard is not determinative). Then it doesn’t take long for someone to suggest that summer is the underlying cause of both. Warmer weather would be expected to increase ice cream sales, and summer means school is not in session and more time is spent “out and about,” providing more opportunity for property crimes.
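The summer story can be made concrete with a small simulation. The sketch below uses entirely made-up numbers: a “temperature” variable drives both ice cream sales and property crime, while neither series affects the other. The raw correlation between the two series comes out clearly positive, yet once we strip out the influence of temperature from each series, the correlation between the residuals is near zero. All coefficients and noise levels are illustrative assumptions, not real data.

```python
import random

random.seed(42)


def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5


def ols_residuals(ys, xs):
    """Residuals of ys after a simple least-squares fit on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    a = my - b * mx
    return [y - (a + b * x) for x, y in zip(xs, ys)]


# Hypothetical daily data: temperature is the common cause of both series;
# neither series has any direct effect on the other.
temps = [random.gauss(70, 15) for _ in range(1000)]
ice_cream = [0.8 * t + random.gauss(0, 10) for t in temps]
crime = [0.5 * t + random.gauss(0, 10) for t in temps]

# Raw correlation looks like a real relationship...
r = pearson(ice_cream, crime)

# ...but controlling for the confounder makes it vanish.
r_partial = pearson(ols_residuals(ice_cream, temps), ols_residuals(crime, temps))

print(f"raw correlation:                 {r:.3f}")
print(f"correlation net of temperature:  {r_partial:.3f}")
```

The lesson matches the classroom example: restricting ice cream sales would do nothing to crime, because the observed association runs entirely through the shared cause.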

Recognizing that allows me to address the key policy-related conclusion I wish students to recognize from that section. I simply ask, “How effective would our policy choices be if we believed causation ran in either direction between ice cream sales and property crimes?” In both cases, it could lead to very ineffective policies. If I restricted ice cream sales because I thought that would reduce property crime, I would waste a great deal of resources and achieve nothing I intended to. If I reduced enforcement of laws against property crime as a means of increasing ice cream sales, it would be similarly ineffective. And any other erroneous attribution of causation could have similarly adverse effects.

Further, our complicated world often takes us beyond the “other things equal” assumptions that made it possible to learn the mechanisms of economic relationships one at a time. When we must weigh multiple, often conflicting incentive stories bearing on a particular situation, the number of such possibilities is very large.

That is one great advantage of market systems in a complicated world. Anyone who thinks a cause-and-effect relationship exists between two variables, and that he could make a profit by utilizing that relationship, can put that belief to the market test, and what works better can be revealed by that process. But government agencies are typically monopolies, neither subject to the market test of profitability nor facing the potential of bankruptcy (other than in the moral sense). This opens up a far greater possibility of public policies being implemented without a reliable understanding of the cause-and-effect mechanisms in play. Thomas Sowell characterized the difference as “replacing what worked with what sounded good,” as illustrated by the fact that “In area after area–crime, education, housing, race relations–the situation has gotten worse after the bright new theories were put into operation. The amazing thing is that this history of failure and disaster has neither discouraged the social engineers nor discredited them.”

Such potential confusion, often with very large stakes, is one of many reasons that good intentions often produce ineffective results, creating many Pathways to Policy Failure. It also requires us to think more carefully about causation if we are to implement more effective policies.

One obvious avenue of approach is to ask if one variable changed before the other. But should we conclude that if one variable (A) changed first and another variable (B) changed afterward, that the first caused the second? No. That is a famous enough fallacy that it has a name–the Post Hoc Ergo Propter Hoc fallacy (often shortened to the Post Hoc fallacy).  

While we cannot logically conclude that what happened first caused what happened afterward, that does not rule out that A could have caused B. It is still possible. In fact, such correlation is often the trigger for studies looking for a causation mechanism (or mechanisms) that could explain it.  

But as long as time cannot run backwards, that result would rule out that B caused A, which can substantially narrow down the “list of suspects” for causation. One description of Isaac Newton’s approach to physics illustrates this. “Newton thought of cause and effect as sequential…Since motion takes place in time, cause and effect must be temporally ordered. An effect can happen before the cause only in science fiction stories involving time machines–which is to say, it can’t happen in reality (as far as we know).” In other words, what happens after can’t cause what happened before.  

Unfortunately, what may be true in physics need not be true in the same way with people. There is an important way in which time can effectively run backwards in changing people’s behavior. 

That is because people often change their behavior when they first begin to expect something will happen, which could be substantially beforehand, not after it happens, as illustrated by the downward movement of stock prices in 1930 that occurred as the likelihood that the protectionist Smoot-Hawley tariff bill would pass increased. 

Other examples include expectations of future tax changes. If I begin to expect taxes on a particular class of assets will rise in the not-too-distant future, which would lower their after-tax value to me, I may sell such assets before that happens, and the effect would precede the ultimate cause. Such causes can even be in the relatively distant future. Say I was majoring in a field I thought would be relatively uninteresting but highly remunerative. An expected increase in marginal tax rates on higher income earners that would take place before I hit my peak earning years would reduce the remunerative side of the comparison, relative to the interesting but less-heavily taxed alternative, and could even change my major today, decades beforehand.
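The change-your-major example is, at bottom, a present-value comparison, and it can be sketched with hypothetical numbers. In the toy calculation below, all salaries, tax rates, and the dollar value placed on enjoying one’s work are invented assumptions for illustration; the point is only the mechanism, that a tax change expected years from now can flip a decision made today.

```python
# Hypothetical choice of major: compare the discounted after-tax value of a
# lucrative-but-dull career against an interesting-but-lower-paid one.
# Every number here is an illustrative assumption, not from the article.

RATE = 0.03   # assumed discount rate
YEARS = 30    # assumed career length


def present_value(annual, years=YEARS, rate=RATE):
    """Discounted value of a constant annual amount received for `years`."""
    return sum(annual / (1 + rate) ** t for t in range(1, years + 1))


def payoff(pretax, tax_rate, psychic=0.0):
    """Career value: after-tax pay plus a dollar value on enjoying the work."""
    return present_value(pretax * (1 - tax_rate) + psychic)


# Dull field: $150k/yr pre-tax. Interesting field: $90k/yr pre-tax,
# plus enjoyment valued at $25k/yr (untaxed, by assumption).
dull_now = payoff(150_000, 0.30)
fun_now = payoff(90_000, 0.30, psychic=25_000)

# Expected future law: a 45% marginal rate on high earners only.
dull_later = payoff(150_000, 0.45)
fun_later = payoff(90_000, 0.30, psychic=25_000)

print(dull_now > fun_now)      # True: dull field wins under current taxes
print(dull_later > fun_later)  # False: the expected hike flips the choice today
```

Nothing has happened yet to tax rates when the student changes majors; the behavioral effect precedes its cause because the expectation arrived first.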

How such expectations affect policy choices is illustrated in introductory macroeconomics courses. Initially, in building the Aggregate Supply and Demand model, texts often just postulate that a change in Aggregate Demand is unanticipated, without specifying how we know it was unanticipated or when that would actually be the case in the real world. That postulate leads to an easily understood story about how people would be predictably fooled in such a situation, and what they would do in response, until given adequate time to more fully respond. That story becomes the basic macroeconomic storyline illustrated by the model.

At some point later, however, how expectations are formed and how that might change the basic storyline arises, usually under the rubric of rational expectations theory. In a nutshell, what happens as a result is that what was one basic story, in which whatever macroeconomic change planners decide fools people in the direction of the policy change, leading to the same response story each time, becomes any of four different stories (with variants), where we don’t know which one will actually take place. 

Assume the government tries to stimulate the economy with fiscal and/or monetary policy. The unexpected-change story assumes people are completely fooled by what the government does. So such policy attempts, at least at first, move economic output in the desired direction, and by the intended amount (if government planners got a host of other things right as well).

But it may also be that people “see” some of the policy changes coming in advance but underestimate their magnitudes. If that is the case, you would get the same qualitative story (which directions the variables change), but the size of the effects would change, depending on how completely people anticipated the policy alteration. Unfortunately, policy makers need to get magnitudes and timing, as well as directions, right to effectively stimulate or stabilize the economy. What might have been the right policy can become the wrong one, at least in degree.

Further, people might correctly anticipate the policy changes and their magnitudes. In such cases, those people would not be fooled, and the intended effects on people’s behavior would disappear, as people’s responses offset policy changes. Policy in that case would not achieve its desired ends. It would be futile.

And it is also possible that people will not just see the stimulus coming, but overestimate its magnitude. In that case, output and related variables would move in the opposite direction from that intended. Such a stimulus can lead to economic contraction rather than expansion, as illustrated by the stagflation of the 1970s, which we are hearing so many warnings against repeating today.
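The four stories share one stylized mechanism, familiar from rational-expectations treatments: only the surprise component of a stimulus moves output. The sketch below is a deliberately bare-bones version of that idea, with a made-up multiplier and stimulus size; it is not a forecast, just the four cases laid side by side.

```python
# Stylized "policy surprise" model: output responds only to the
# unanticipated part of a stimulus. The multiplier K and the stimulus
# size are hypothetical numbers chosen purely for illustration.

K = 2.0          # assumed multiplier on the policy surprise
ACTUAL = 100.0   # assumed size of the actual stimulus


def output_effect(expected, actual=ACTUAL, k=K):
    """Effect on output from the gap between actual and expected stimulus."""
    return k * (actual - expected)


cases = [
    ("completely fooled (expected 0)", 0.0),
    ("underestimated (expected 60)", 60.0),
    ("fully anticipated (expected 100)", 100.0),
    ("overestimated (expected 140)", 140.0),
]

for label, expected in cases:
    print(f"{label}: effect on output = {output_effect(expected):+.0f}")
```

The same $100 stimulus produces a full effect, a partial effect, no effect, or a contractionary effect, depending entirely on what people expected, which is exactly why the policy maker cannot know the outcome without knowing expectations.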

This multiple-pronged set of possibilities, none of which can be relied on with any degree of certainty, is very different from a theoretical world in which a policy change is simply defined as unexpected. Yet that is where we find ourselves now in macroeconomic policy. Many voices claim to have THE ANSWER about what fiscal and monetary policy should be, both now and into the future, expressed with a great show of assurance. But the truth is that nobody knows for sure exactly what will happen when macroeconomic policy changes, unless they know how people’s expectations will respond. And if someone’s confident self-appraisal on that score is wrong, the results could be very different from those envisioned, including a substantial possibility of making things even worse. So we must recognize that in our current circumstances, honesty requires any serious answer, at least in part, to involve “it depends,” not the plethora of “trust me; follow my plan, because I know what to do” from Beltway snake oil salesmen. It would seem that we need a new Latin phrase to protect ourselves–caveat civis (let the citizen beware).

Gary M. Galles


Dr. Gary Galles is a Professor of Economics at Pepperdine.

His research focuses on public finance, public choice, the theory of the firm, the organization of industry, and the role of liberty, including the views of many classical liberals and America’s founders.

His books include Pathways to Policy Failure, Faulty Premises, Faulty Policies, Apostle of Peace, and Lines of Liberty.

