July 29, 2020 Reading Time: 6 minutes

On the 31st of July, just a few short days away, the six-month anniversary of the WHO declaration of a global health emergency will arrive. And while many dates could serve as the starting point of the crisis, this one seems particularly appropriate: it is shortly thereafter that epidemiologists began plying the quantitative tools of their trade in force, and just after which the conversion of model outputs into policy schemes began. 

We know now how dreadfully awry much of that planning went, not only in terms of harmless (if shockingly wrong) predictions, but in real terms: lost lives, wealth, and opportunities. To the extent that epidemiological models attempt to predict or ascertain the infectious rate of a given pathogen, there is reason to believe they can be helpful. But to the extent that those predictions hinge, in turn, upon decisions made by human beings (often by endowing model agents with a mechanism meant to replicate human choice), the effort is doomed to fail and, moreover, is an exercise in uncommendable audacity. 

An overview of the taxonomy of such problems, and of the methods used to deal with them, underscores their incredible complexity. 

But first, a caveat: I’m an economist, not a mathematician, computer scientist, or theoretical physicist. Accordingly, I have decided not to introduce such concepts as polynomial time, nondeterministic machines, and the like, which are typically part of this discussion. I believe that, nevertheless, the deeply complicated nature of human thought and action in the face of uncertainty or impediments, and the futility of attempting to approximate those in models or simulations, will be made apparent. 


There are some problems which, on the surface, appear simple: packing a large number of irregularly shaped objects into a large box, for example. Obviously no two objects can occupy the same space at the same time. And while it may be easy to come close to an optimal arrangement of the irregular objects inside the box, it may be exceedingly difficult to determine the truly optimal arrangement. Further still, the difference between a close-to-optimal arrangement and the actually optimal one may be small but nontrivial: closing that gap may require unpacking every one of the irregular objects and starting over again. 

It may be helpful to think of the solutions to a problem of this sort as a three-dimensional surface. If there are 500 irregular objects that we are required to place in the box (let’s say the requirement is to arrange all of the objects so that the box can be closed), and there are 50 candidate spaces within the box with 10 possible orientations for each object, then representing every possible arrangement requires 250,000 binary variables (500 times 50 times 10), one for each object-space-orientation combination. 

Now imagine that each of these possible arrangements is evaluated by the amount of space left in the box, with a negative number representing a failure to close the box and a positive number indicating a successful organization of objects within it. Plotted as a surface, this evaluation traces a landscape: “sinkholes” or “valleys” in places, “hills” and “mountains” in others. Each “hill” represents a successful organization of objects such that the box closes properly, and the optimal solution would be the highest “mountain” on our solution surface. 
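To make the metaphor concrete, here is a minimal sketch in Python of such an evaluation, under the simplifying assumption that each object is reduced to a single size and an arrangement is scored by the space it leaves over; the capacity and sizes are invented for illustration, not drawn from the example above.

```python
import random

# Toy stand-in for the packing evaluation: each object is reduced to a
# single size, and an arrangement is scored by the space left in the
# box. A positive score means the box closes; a negative score means
# it cannot. (All numbers are illustrative assumptions.)

BOX_CAPACITY = 100.0

def score(arrangement, capacity=BOX_CAPACITY):
    """Leftover space if the objects fit; negative overflow if not."""
    return capacity - sum(arrangement)

random.seed(0)
objects = [random.uniform(1.0, 15.0) for _ in range(12)]

# Sampling random arrangements sketches out the "landscape": some
# points sit on hills (positive), others in sinkholes (negative).
samples = [score(random.sample(objects, 8)) for _ in range(5)]
print(samples)
```

Each sampled point is one location on the solution surface; scanning enough of them would, in principle, reveal the hills and valleys described above.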

There are various algorithms that can be employed to solve such problems, and some are better than others. (In computational science, the term “efficiency” describes the time, memory, or number of steps required to complete a computational task.) Within this solution-surface metaphor, such algorithms may start at a point, iteratively discard certain areas of the surface, and quickly bring us toward the area of the landscape that contains good solutions and, ultimately, the optimal one. 
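One of the simplest such algorithms is hill climbing: start somewhere on the surface, examine neighboring points, keep any move that improves the evaluation, and stop at a peak. The sketch below uses an invented one-dimensional landscape; it illustrates both the idea and its weakness, namely that the peak it finds may be a local hill rather than the highest mountain.

```python
# Hedged sketch of hill climbing (greedy local search) on a made-up
# one-dimensional landscape; the function and step size here are
# illustrative assumptions.

def hill_climb(f, x, step=1.0, max_steps=1000):
    """Repeatedly move to the better neighbor until none improves."""
    for _ in range(max_steps):
        best = max((x - step, x + step), key=f)
        if f(best) <= f(x):
            break  # local peak: neither neighbor is higher
        x = best
    return x

# A landscape with a single "mountain" peaking at x = 3.
landscape = lambda x: -(x - 3.0) ** 2

print(hill_climb(landscape, x=-10.0))  # climbs from -10 toward 3
```

On a landscape with many hills, the same procedure halts on whichever hill it starts nearest, which is why practical solvers add restarts or randomness.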

This is because this type of problem has an intrinsic structure to it. There is an ordering which may be simple or complex, but which nevertheless exists; and given that it exists, it can be worked out and solved with enough computational power. We can not only find the best solution to the problem, but also verify a proposed solution far more quickly than we could find it in the first place. 
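That asymmetry between finding and checking can be shown with subset sum, a standard structured problem chosen here purely as an illustration: exhaustive search over subsets grows exponentially with the number of items, while verifying a proposed subset is a single addition.

```python
from itertools import combinations

# Illustrative contrast between solving and verifying, using subset
# sum as a stand-in structured problem (an assumed example).

def find_subset(values, target):
    """Exhaustive search: the number of subsets doubles per item."""
    for r in range(len(values) + 1):
        for combo in combinations(values, r):
            if sum(combo) == target:
                return combo
    return None

def check_subset(candidate, target):
    """Verification is one cheap pass: add the members and compare."""
    return sum(candidate) == target

answer = find_subset([3, 9, 8, 4, 5, 7], target=15)
print(answer, check_subset(answer, target=15))
```

For problems with this kind of structure, the check stays cheap no matter how long the search took; it is precisely this property that the next class of problems lacks.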


An NP-Hard problem either has no such structure, or its structure is so complicated as to be fundamentally impenetrable. In contrast to problems within the NP-Complete class, whose proposed solutions can at least be verified efficiently, such a problem is not only difficult to solve: it is extremely difficult even to determine whether a chosen solution is the best one. 

And the coup de grâce of the NP-Hard class of problems? Proving that a problem is actually NP-Hard is itself an NP-Hard problem.

Computational Complexity in Everyday Life

Most of the time, and for good reason, the topic of computational complexity is relegated to master’s- and doctoral-level courses in computer science (and sometimes physics). The problems those courses reference go vastly beyond putting irregularly shaped objects in a box: they are more along the lines of determining whether an optimal strategy exists in a game of generalized chess. And where the difficulty of finding a solution is concerned, they deal with solutions that take not hours or days but, at times, multiples of the theorized lifetime of the universe to compute. There is some debate over whether even quantum computers will be of use in solving the hardest of the NP-Hard problems. 

But this class of problems, difficult (often nearly impossible) to solve and even more difficult (perhaps effectively impossible) to check solutions to, isn’t consigned to scientific esoterica: human beings face such problems daily. Starting a new job, making decisions about saving versus consumption, and a vast variety of other choices are, technically, NP-Hard: even when binary decisions are involved, the considerations that determine the perfect course of action are shrouded by uncertainty and changing circumstances, and that same array of conditions makes looking back to determine whether the correct option was selected an exercise in futility. Although we often give ourselves the benefit of the doubt, determining that in a past situation we picked the indisputably best alternative among what could be thousands of choices is probably not possible, other than by sheer randomness. 

Many NP-Hard problems, both in theory and in human life, involve optimization. 

And I say “probably not possible” because, as mentioned previously, proving that a problem falls within the NP-Hard category is itself NP-Hard. I can’t prove it, and even if I could, it would be impracticable for me or anyone else to verify.

If so much is unsolvable, how do we solve anything?

If this framing sounds nihilistic, it isn’t meant to be; nor is the epistemological concept itself. Few would be surprised to learn that there are some questions for which there will never be clear answers, or that looking back upon past decisions is at times maddeningly inconclusive. (And again, even when we tell ourselves we made the perfect choice at the perfect time and place, we are more likely engaging in harmless self-aggrandizement.) 

So how do human beings solve problems? First, some NP-Hard problems are manageable in practice using simple but powerful tools. While many of the decisions and challenges we face may technically be NP-Hard (“What degree program should I enter?”), we don’t allow them to flourish in all of their intractable glory. While the definition of intelligence remains unsettled, millions of years of evolution have left the human mind ‘pre-loaded’ with heuristics: we accumulate rules of thumb, assemble past experiences into educated guesses, and are born with the ability to conduct basic empirical analysis. All of these, furthermore, are added to and honed with time and experience. 

In short, human minds automatically jump over, cut through, and steer around uncountable solutions which are either obviously incorrect or fail to meet other qualifications of our solution-seeking process, and focus only upon the set of solutions which appear to meet them. And all along, time preference and social and cultural factors weigh upon the selection of our final choice. Moreover, the success or failure of chosen heuristics influences both which ones, and in what order, are brought to bear on future problems, which in turn shapes our problem-solving ability. 
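A rule of thumb of exactly this kind can be written down for the packing example from earlier: the classic first-fit-decreasing heuristic places the largest items first and never revisits a decision. It does not guarantee the optimum, but it discards almost the entire solution space in one cheap pass; the sizes and capacity below are invented for illustration.

```python
def first_fit_decreasing(sizes, capacity):
    """Place each item, largest first, into the first bin with room."""
    bins = []  # each bin is a list of item sizes
    for size in sorted(sizes, reverse=True):
        for contents in bins:
            if sum(contents) + size <= capacity:
                contents.append(size)
                break
        else:
            bins.append([size])  # nothing fits: open a new bin
    return bins

packed = first_fit_decreasing([7, 5, 4, 4, 3, 2, 2], capacity=10)
print(len(packed), packed)
```

The heuristic considers each item once instead of enumerating every arrangement, which is the same trade the mind makes: a good-enough answer now rather than a provably perfect one never.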

What’s the point of all this?

We have been watching, for months, as high-powered quantitative methods have been applied to interpret and predict the rapidly changing circumstances of a pandemic. Deeply embedded within that environment are a variety of social science issues. But despite occasional acknowledgments of the challenges of boiling down the actions of hundreds of millions or billions of people to agents in an agent-based model or to nodes and arcs in a network model, that method of informing policy seems to continue unabated. 

Inside the mind of every individual, scores of NP-Hard problems are being worked out: some so quickly as to be instantaneous, and others in a stepwise fashion of days, years, or decades. In the aggregate, deciphering the ways in which enormous masses of individuals will move or act is, similarly, an NP-Hard problem. The outcome of the collisions of individual responses to NP-Hard problems (and, to be sure, all of the other problems in other categories that people face) is utterly and wholly unpredictable; as intractable in the aggregate as in each of hundreds of millions or billions of minds.

Life is NP-Hard, and attempting to express that mentation in a few lines of code is imprudent, yet in and of itself probably harmless. But bringing those projected outcomes to politicians, who have neither the inclination to be skeptical nor incentives to act cautiously, is dangerous. 

Peter C. Earle


Peter C. Earle, Ph.D., is a Senior Research Fellow who joined AIER in 2018. He holds a Ph.D. in Economics from l’Université d’Angers, an MA in Applied Economics from American University, an MBA (Finance), and a BS in Engineering from the United States Military Academy at West Point.

Prior to joining AIER, Dr. Earle spent over 20 years as a trader and analyst at a number of securities firms and hedge funds in the New York metropolitan area as well as engaging in extensive consulting within the cryptocurrency and gaming sectors. His research focuses on financial markets, monetary policy, macroeconomic forecasting, and problems in economic measurement. He has been quoted by the Wall Street Journal, the Financial Times, Barron’s, Bloomberg, Reuters, CNBC, Grant’s Interest Rate Observer, NPR, and in numerous other media outlets and publications.
