November 12, 2020

I am more than a little chagrined to have to discuss this matter, in 2020 no less, but here we are, as a society, running around like frightened children, believing all sorts of claims that we shouldn’t, and disbelieving all sorts of claims that we should at least consider more carefully.

“What should a rational adult believe or disbelieve?” is an unfruitful question because it is too broad. If pressed to answer it, I would respond “very little, either way.” We know astonishingly little that is worth knowing, especially of the positive variety.

Here is an example of what I mean: If asked, most Americans “know” that the Declaration of Independence was signed on 4 July 1776. They can state that from memory or at least choose it from a list, meaning they know that it was not signed on, say, Christmas Day 1792, or April Fool’s Day 1020. It turns out they are wrong: the document was actually signed on 2 August 1776. America did not even declare independence on 4 July, but rather did so two days earlier. The nation’s “birthday” celebration takes place on the anniversary of Congressional approval of the famous final text of the Declaration.

We get this wrong because of confusion over the verb and, frankly, because it doesn’t matter to how we live our lives today, independent of Great Britain, but still not independent of author-itarian influences.

Yes, author-itarian. We bow way too much to the author-ity of authors, especially those who purport to be scientists or “elite” journalists or academics.

Too many Americans believe X (some claim about policy or with clear policy implications) for some reason, like it sounds “right” or “woke” or PC or whatever. They then search out authorities who argue X and block out, vilify, or misconstrue anyone who asserts ~X. (This is called confirmation bias.) They drown those who support X in superlatives while implying, or sometimes screaming, that those who argue ~X must be dumb, paid shills, or downright evil.

The fact is, 99.9 percent of the population can form no more than a mere opinion about most Xs of real policy import. Some do not have the educational background to understand X, while others simply do not have the time to look into the matter deeply.

In other words, while they can opine about X, their views should have no more weight than if they asserted that “Y is the best color.” It doesn’t matter if 99 percent of the population agrees; there is no objective basis for the claim, which is why we call it an opinion.

For any given policy, about 0.1% of the population can go beyond mere opinion to give an informed judgment based on their experience, acumen, or what was once called discernment. Everyone should give due consideration to their “expert opinions” (which we really should call “informed judgments” or “expert discernment” to avoid confusion with general opinions) but we all have to keep in mind that experts, as fallible beings, can be wrong and that experts in X may be part of the 99.9% when it comes to issues Y or Z.

The ability to discern real experts from those claiming expert status is difficult to develop. The key is to understand precisely what is under discussion. If the issue is the transmission of a pathogen, then an epidemiologist or allied specialist might be an expert. If the issue is how policymakers should respond to a pathogen, then lots of other types of specialists may be experts too, including economists, who specialize in remembering to look for unseen costs.

The more complex or “wicked” the policy problem, the more experts will disagree on the proper policy path and the less non-experts should jump on the bandwagon of one type of specialist or another, as specialists most likely see the world from too narrow a view, through a pinhole as I recently put it. That is especially the case when the specialist comes from one of the more arrogant specialties, like clinical practice or epidemiology. The former convince themselves that they “save people,” while the latter use a lot of math, which makes their work seem more precise or “scientific” than it actually is.

Increasingly, author-ities in some specialties deliberately distort studies in order to provide “evidence” for their favorite policy outcomes. Some outright admit that their goal is “social justice.” Consider, for example, this recent job advertisement for a scholar to lead Arizona State University’s “School of Social Transformation,” which “focuses on transformational knowledge, including creative research approaches to themes and questions embedded in broader historical, social and cultural contexts.” 

The goal, apparently, is not to increase knowledge to develop the most efficient approaches to social problems; it is to publish whatever is necessary to convince people that society needs to be transformed, undoubtedly in a very specific way because the researchers are “to be accountable to the communities with which they engage; and to foreground social transformation on local, national, and global levels.” 

In short, expert authorities can be wrong because they are fallible humans, because they have a policy or political agenda, and/or because they aren’t truly expert at the topic at hand or are viewing a broad problem from a narrow specialist perspective.

If you are thinking that maybe it isn’t such a good idea to blindly follow expert author-ity, no matter how good it might sound, great job at following my train of thought! But next you wonder who or what, if anything, you can believe.

I am no postmodernist who thinks that no truth exists and that all that passes for knowledge is merely an expression of power. Like the writers of The X-Files, I believe “the truth is out there” but it is bloody difficult to understand, even for relatively smart apes like us. Ultimately, falsification is key: we can sometimes tell when some assertion is false but we can never be entirely sure that some assertion is true.

Science is therefore a process for discovering that which is false. It begins with a model or theory that makes claims that can be tested or falsified in the real world and moves on to better models but it never ends. Properly understood, science is a quest for understanding, not a body of facts.

The best way to test a claim is to run an experiment with two identical test subjects exposed to the same world save for one variable. Maybe you have a model that predicts that rats need water in order to live. You randomly separate 100 closely related rats into two groups, give one group water and deny water to the other and observe the results over time. If the 50 water-deprived rats all die and the 50 in the watered control group live, your crude theory is confirmed but not proven. (If all 100 rats die, though, you do not jettison your theory; you try again with safer water.) 
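The random assignment step described above can be sketched in a few lines of code. This is a minimal illustration, not from the original; the function name `assign_groups`, the subject IDs, and the fixed seed are all hypothetical:

```python
import random

def assign_groups(subject_ids, seed=0):
    """Randomly partition subjects into treatment and control groups.

    Randomization ensures the two groups differ, on average, only by
    chance, so any systematic difference in outcomes can be attributed
    to the one variable we manipulate (here, access to water).
    """
    ids = list(subject_ids)
    random.Random(seed).shuffle(ids)  # fixed seed for reproducibility
    half = len(ids) // 2
    return ids[:half], ids[half:]  # (treatment, control)

# 100 rats: 50 randomly chosen to be deprived of water, 50 given water
deprived, watered = assign_groups(range(100))
```

Observing outcomes in the two groups then confirms or falsifies the model, but, as the text notes, never proves it.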

Over time, you work to refine your theory, maybe by trying to ascertain the amount of water most rats need to live at different temperatures. A simple reduced form model becomes more complex, more of a structural model that tries to predict entire causal chains and in the process explains exactly when, where, how, and why rats need water.

Social scientists cannot ethically run experiments on people, so they must employ a variety of other techniques instead. All are fraught, and some, heavily reliant on statistical controls applied to large populations, quack like ducks. The best social scientists can hope for is a “natural” experiment, where outside forces assign people into treatment and control groups.

In one recent study, for example, researchers used a natural experiment, the effectively random distribution of Nazi raids on Italian villages in 1943-44, to show that in utero stress can negatively affect the later job market performance of the unborn (Vincenzo Atella et al., “Maternal Stress and Offspring Lifelong Labor Market Outcomes,” Sept. 2020, IZA DP No. 13744). The result seems intuitive for all but a hard-core Nietzschean (that which doesn’t kill us makes us stronger, even in the womb!) and is supported by a structural endocrine model tested by lab experiments. So it seems like a claim we should provisionally accept until another test calls it into question.

If you ever wonder why classical liberal scholars get hot under the collar about socialism, it is because not one but at least three major natural experiments show that it “sucks” as Ben Powell and Robert Lawson put it. East and West Germany, North and South Korea, and the Three Chinas (mainland, Taiwan, and Hong Kong) were divided as they were due to military outcomes, not economic ideologies. The parts that became socialist remained relatively poor and backwards while the more market-oriented parts thrived economically, to the point that North Korea and East Germany had to build walls to keep their comrades from fleeing and communist China eventually embraced market reforms.

Most natural experiments are not so neat, though, so in those cases we have to remain content with simply rejecting claims that clearly do not hold up. South Dakota and Sweden may have survived the Covid epidemic and economic crisis without imposing strict lockdowns because their citizens are smarter than people in, say, New York and New Jersey. So we can’t argue that not locking down causes epidemic and economic success because maybe it is average intelligence that does so. (Hey, they were smart enough to live in places that didn’t lock down!)

We can argue, however, that lockdowns “suck” because everywhere they have been associated with a) Covid and general mortality outcomes no better than those of places that didn’t lock down and b) economic outcomes worse than those of places that didn’t lock down. In other words, no clear health benefit arises from lockdowns, but clear costs, economic and non-Covid health costs, are palpable.

Some politicians and others have argued that they have to do something. That, in fact, is a logical fallacy, one of scores of commonly made reasoning mistakes, ranging from “ad hominem” (e.g., attacking Phil Magness instead of his argument) to “the strawman” (e.g., attacking the Great Barrington Declaration’s nonexistent herd immunity “strategy”) to “appeal to ignorance” (e.g., constantly asserting that a virus is “novel,” so we have to lock down).

Great primers on informal fallacies like those described above are easy to find. If you familiarize yourself with them and then read the work of pro-lockdowners you will be amazed to find the many fallacies they employ, wittingly or not.

Formal fallacies are even more problematic; all are non sequiturs, which literally means “does not follow.” (There is even a fallacy fallacy, the argument that just because an argument is fallacious its conclusion is false. It might be true, just not for the illogical reasons adduced.) Syllogistic fallacies are the worst of all, yet they commonly appear in print. The most infamous is probably the “fallacy of the undistributed middle”:

All men are people.

All women are people.

Therefore, all women are men.

That sounds silly with those premises but consider the same flawed logical form with different premises:

All Covid deaths are bad.

All ideas that aren’t from Dr. Fauci are bad.

Therefore, all ideas that aren’t from Dr. Fauci cause Covid deaths.

The second conclusion is as utterly logically flawed as the first yet you will see it assumed or implied in scores of major media articles and I daresay millions of social media posts.
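The invalidity of this syllogistic form can even be checked mechanically: a form is invalid if some model makes both premises true while the conclusion is false. A minimal sketch (the function names and the tiny two-element universe are illustrative assumptions, not from the original) that searches small set-theoretic models for such a counterexample:

```python
from itertools import combinations

def powerset(universe):
    """All subsets of a finite universe, as Python sets."""
    items = list(universe)
    return [set(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

def find_counterexample(universe=(0, 1)):
    """Search for a model where 'all A are B' and 'all C are B' both
    hold, yet 'all C are A' fails -- showing the form is invalid."""
    for A in powerset(universe):
        for B in powerset(universe):
            for C in powerset(universe):
                premises = A <= B and C <= B  # subset = 'all ... are ...'
                conclusion = C <= A
                if premises and not conclusion:
                    return A, B, C  # counterexample found
    return None  # no counterexample: form would be valid

print(find_counterexample())  # a counterexample exists: non sequitur
```

Because a counterexample turns up even in a two-element universe, no amount of persuasive premise-swapping can rescue the form.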

Worst of all, fallacies grow stronger through repetition, as not one person in a thousand can, or even thinks to, dissect the underlying logic to expose the error.

Ergo, everybody should just shut up and listen to me! LOL, JK, that would be a non sequitur. What we do need is more intellectual humility. The real world is complex and most of us do not have the information, or the information processing capabilities, to make sense out of much of it. So instead of jumping on intellectual bandwagons and opining that Y is the best color, or masks work because … they do!, or somebody on TV says so, try just watching, listening, thinking, and asking questions, especially about alternatives. 

Why would another wave of lockdowns help with the pandemic when the first wave didn’t? Should we strive to save an 80-year-old from Covid if that means that a 20-year-old will take his own life? Or that a poor child will not receive enough nourishment for her body or brain? If masks are effective, and most people are wearing masks, why are Covid cases “spiking”? And on, and on, and on.

And, yes, I do try to practice what I preach. I am not screaming that Biden or Trump won the election because I do not know for sure either way. I can tell you that some states appear to have engaged in unconstitutional practices by allowing governors or judges to make election decisions instead of state legislatures as outlined in Article I, Section 4, Clause 1 of the US Constitution. But without accurate sources of information, all I can do is to prepare for the riots that may be coming and wait for SCOTUS to decide the matter.

Robert E. Wright


Robert E. Wright is the (co)author or (co)editor of over two dozen major books, book series, and edited collections, including AIER’s The Best of Thomas Paine (2021) and Financial Exclusion (2019). He has also (co)authored numerous articles for important journals, including the American Economic Review, Business History Review, Independent Review, Journal of Private Enterprise, Review of Finance, and Southern Economic Review. Robert has taught business, economics, and policy courses at Augustana University, NYU’s Stern School of Business, Temple University, the University of Virginia, and elsewhere since taking his Ph.D. in History from SUNY Buffalo in 1997. Robert E. Wright was formerly a Senior Research Faculty at the American Institute for Economic Research.

Find Robert on Twitter, Gettr, and Parler: @robertewright