One must be a diehard ideologue to believe that centrally planning an economy can work for more than a few years, and then only if the people within the planned economy are willing to sacrifice for a cause, like winning a war. But few people seem to understand that planning also fails at the organizational and even the personal level. If they embraced real-world selection rules instead, they could reduce the high costs associated with planning.
Planning fails at all levels for two primary reasons: insufficient information and insufficient incentive. The former is often called the knowledge problem, and the latter can be understood as a time-consistency problem. Planners, like central bankers, can claim that they will behave in a predetermined manner in the future, but they may renege and, for example, allow a high level of inflation instead of risking recession, even though they promised to keep inflation low. Similarly, people know that eating like a pig will create health problems down the road, but an extra serving of slop would feel good right now, so they renege on their promise to themselves to “eat right.”
Both scenarios can be viewed as principal-agent problems, as costs arising from a disjunction between the incentives of agents and those of principals. In the first, the central bankers are agents willing and able to injure the interests of the principals, market participants, because they will not be punished for it. In the second, the current self is the agent willing and able to injure the future self, the principal.
Either way, planning fails where a rule may work more consistently by reducing agency costs. In monetary policy, for example, the Taylor Rule would reduce agency costs by decreasing policymakers’ discretion. Similarly, somewhat like swear jars, various self-imposed diet fine programs create monetary disincentives to overeat.
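The Taylor Rule’s appeal is precisely that it is mechanical: given observable inputs, the policy rate follows by arithmetic, leaving no room for discretionary reneging. A minimal sketch of the rule’s standard textbook form, using Taylor’s original 1993 coefficients (the parameter values here are the conventional defaults, not anything argued for in this essay):

```python
def taylor_rule(inflation, output_gap,
                neutral_real_rate=2.0, target_inflation=2.0):
    """Taylor's (1993) rule: the nominal policy rate implied by current
    conditions. All arguments and the result are in percentage points."""
    return (neutral_real_rate
            + inflation
            + 0.5 * (inflation - target_inflation)  # lean against high inflation
            + 0.5 * output_gap)                     # lean against overheating

# Inflation at 4% with output 1% above potential:
# 2 + 4 + 0.5*(4 - 2) + 0.5*1 = 7.5
print(taylor_rule(inflation=4.0, output_gap=1.0))  # 7.5
```

Because every term is observable or pre-committed, market participants can verify compliance, which is exactly what shrinks the agency cost.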
Couching planning in terms of agency costs makes it easier to see why organizational-level planning often fails too. Sure, there is a knowledge problem component even within organizations. An organization’s management, much like Stalin’s bean counters, may not have enough information of the right sort at the right time to create a plan superior to actions that would have emerged spontaneously anyway. The presence of agency costs seals the deal by raising the very real possibility that self-serving managers and/or employees may thwart any top-down plan for their own gain.
In simpler terms, one might know what to do but have no reason to do it, or one might have all the reason in the world but know not what to do.
Let’s be clear. Organizations should try to forecast future conditions and prepare contingencies, but it is a category error to conflate those activities, sometimes colloquially called planning, with the sort of central planning critiqued by Austrian economists that failed in the USSR, Cuba, and North Korea. That sort of planning tries to move an economy, organization, or person from point A to point Z by a predetermined path, one often spelled out in detailed five-year planning documents that come to have the force of law, HR mandate, or habit.
Planning seems really logical, and even necessary, until one realizes that the real world doesn’t give a whit about planners’ plans. The real world is ruled by rules, by which I mean selection algorithms. The clearest of those algorithms is evolution by means of natural selection. Organisms compete in a real world test, the quest to reproduce, with each generation becoming more and more like the best competitors in that test until conditions change. Then, competitors with a different set of characteristics better matched to the new conditions start to thrive in the reproduction game.
Armen Alchian noted in a 1950 paper that a similar selection algorithm culls for-profit organizations, though on the basis of profitability instead of reproductive success. In Alchian’s words, “those who realize positive profits are the survivors; those who suffer losses disappear.” In that view of the world, the success of an organization depends not on planning or even keen forecasting but is simply a stochastic function of matching the environment, which is to say consumer needs. Few business historians like Alchian’s paper because the first section replaces stories of the “great business leader” with his simple selection rule and random outcomes.
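Alchian’s selection rule can be made concrete with a toy simulation. This is an illustration, not his model: the “ideal” strategy, the profit function, the imitation noise, and all parameter values below are assumptions chosen for clarity. No firm plans or even knows what the environment rewards; the survive-or-disappear rule alone concentrates the population near the best-adapted strategies.

```python
import random

random.seed(0)

# Each firm's "strategy" is a number in [0, 1]; the environment rewards
# strategies close to an ideal that no firm knows. All values are illustrative.
ideal = 0.7
firms = [random.random() for _ in range(1000)]  # random initial strategies

for generation in range(5):
    # Profit is higher the closer a strategy sits to what consumers demand.
    profits = [0.2 - abs(s - ideal) for s in firms]
    # Alchian's rule: positive profits survive; losses disappear...
    survivors = [s for s, p in zip(firms, profits) if p > 0]
    # ...and new entrants imitate survivors, imperfectly.
    firms = [random.choice(survivors) + random.gauss(0, 0.05)
             for _ in range(1000)]

average_strategy = sum(firms) / len(firms)
print(round(average_strategy, 2))  # clusters near the environment's ideal
```

Note that even “stupid” firms, which never deliberately improve, end up looking well adapted in aggregate, which is exactly Alchian’s point about profits existing without foresight.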
The second section of Alchian’s paper, though, suggests that businesses are not simple organisms whose success is bound entirely to how well their DNA matches their environment. Organizations can change their behaviors. But how can an organization know that it is changing for the better, that its changes will increase, let alone maximize, the probability of achieving a positive profit (or other goals in nonprofit and governmental settings)? That is especially hard given that profitability depends upon efficiency relative to competitors, whose costs and strategies may be unknown and even unknowable. “Even in a world of stupid men,” Alchian wrote, “there would still be profits.”
Managers and their gurus, like Alfred Chandler and Peter Drucker, suggest that organization owners should put “the best” available leader (a fraught subject itself) “in charge” and let him or her be “the decider.” The “management science” of when, how, and to whom to delegate decision-making authority then kicks in.
Generally speaking, the most successful organizations push decision-making down to the level where information is best and incentives are strongest. The U.S. Marines, for example, employ four-person fire teams that are ordered to complete a concrete mission objective but not told how to achieve it. Those fire teams are parts of larger units similarly tasked from above but not micro-managed.
As David H. Freedman writes in Corps Business, “the Marine Corps management principles are built around simple truths about human nature and the uncertainties of dynamic environments.” Even with modern comms working flawlessly, there is no way some “leader” in the rear can know at any given moment whether a particular squad should move forward or sit tight. Moreover, he has no significant skin in the game in that particular part of the battle. So the squad commander gets to make the decision, or to delegate it down, depending on the tactical situation.
While organizations should not all immediately adopt Marines-style management from top to bottom, they do need to think about delegating decision-making to those with the best information and the strongest incentive to behave in ways that, in a world of uncertainty, optimize the distribution of potential profits or the probability of achieving other goals, like winning battles, minimizing casualties, or increasing donations.
The type of organization and worker demands special attention. Drucker popularized the notion that knowledge workers, the biggest part of the workforce today, are particularly difficult to micro-manage. They should “manage themselves,” he argued, especially because they often know more, and may even be more intelligent, than their bosses. Think of Dilbert and his Pointy-Haired Boss, for example. But even the world’s smartest boss can’t know everything and will often be rationally ignorant of seemingly innocuous but ultimately important details. Nobody is perfect, in part because nobody has a stake in every part of every game.
What, then, are organizational leaders to do? Think like classical liberals and align incentives carefully! Give workers autonomy by allowing them to make decisions where they have superior information. And give them a stake in the outcome by paying efficiency wages, preferably committing to maintain real wages with a COLA, or, better still, by tying compensation to goal achievement. (Leaders need to be careful, though, because employees will usually produce precisely what they are incentivized to produce.) Then they should let their workers do battle with the enemy and adapt spontaneously to changing competitive situations. The organization might still fail, but it will be less likely to do so, all else equal, than one with a rigid top-down plan.