June 25, 2010

On Thursday, June 17, I attended a lecture by Dr. Benjamin Powell of Suffolk University. He spoke at the Charles G. Koch Foundation, giving one of his excellent lectures on Somalia and its stateless society. While the lecture itself is well worth hearing, one of its most fundamental statements came almost as an aside. Speaking in the context of public versus private provision of roads, he made the point that just because something is done a certain way now does not mean that it must be done that way, or even that it historically has been done that way. Government-provided roads, he pointed out, have not always been the norm; in fact, a number of private turnpikes existed before the State took over.

This point is often lost today. We become accustomed to a certain way of doing things and imagine it to be the only way, or the best way, of going about it. That assumption is flawed. Let us take a moment to look at the monetary system of the United States. Is this the way that money should be provided? What does history have to tell us?

We can begin with the Mengerian origin of money as described by Richard Ebeling. (Dr. Ebeling provides an excellent summary, but for those of you who have already heard this, feel free to skip ahead.)

In his Principles of Economics (1871) and in a monograph entitled “Money” (1892), Menger explained the origin of a medium of exchange. Often there are insurmountable difficulties preventing people from trading one good for another. One of the potential trading partners may not want the good the other possesses. Perhaps one of the goods offered in exchange cannot readily be divided into portions reflecting possible terms of trade. Therefore, the transaction cannot be consummated.

As a result, individuals try to find ways to achieve their desired goals through indirect methods. An individual may first trade away the good in his possession for some other commodity for which he has no particular use. But he may believe that it would be more readily accepted by a person who has a good he actually wants to acquire. He uses the commodity for which he has no direct use as a medium of exchange. He trades commodity A for commodity B and then turns around and exchanges commodity B for commodity C. In this sequence of transactions, commodity B has served as a medium of exchange for him.

Menger went on to explain that, over time, transactors discover that certain commodities have qualities or marketable attributes that make them especially serviceable as media of exchange. Some commodities are in greater general demand among a wide circle of potential transactors. Some commodities are more readily transportable and more easily divisible into convenient amounts to reflect agreed-upon terms of exchange. Some are relatively more durable and scarce and difficult to reproduce. The commodities that possess the right combinations of these attributes and characteristics tend to become, over a long period of time, the most widely used and readily accepted media of exchange in an expanding arena of trade and commerce.

Therefore, those commodities historically became the money-goods of the market because the very definition of a money is that commodity that is most widely used and generally accepted as a medium of exchange in a market.

As described above, the origin of money was not the Federal Reserve or any other central authority. Money emerged to deal with the costs of doing business: the market process at work. The market process did not stop there, however. Once coined money such as gold emerged as the medium of exchange, the need for banks also arose. As Vera C. Smith states,

The origins of banking in the modern sense are to be found in about the middle of the seventeenth century, when merchants took to depositing their balances of coin and bullion with the goldsmiths. The goldsmiths then began offering interest on deposits, since they could re-lend them at higher rates, and the receipts they gave in acknowledgment of the deposits began to circulate as money. There thus arose a number of small private firms, all having equal rights, and carrying on the issue of notes unrestricted and free from Government control.

Banks arose as warehouses and lending agencies, once again emerging from the market process rather than from the decree of a central authority. And it was not the failure of these institutions that led to the creation of a central bank, but rather a failure of government restraint. As Smith goes on to point out,

Charles II had had to rely to a very large extent for his financial needs on loans from the London bankers. He ran heavily into debt and in 1672 suspended Exchequer payments and therefore the repayment of bankers’ advances. The King’s credit was thereby ruined for several decades to come, and it was to provide a substitute for the sources of accommodation thus destroyed that William III and his Government fell in with the scheme of a financier by the name of Patterson for the foundation of an institution to be known as the Governor and Company of the Bank of England. Its establishment was described by the Tunnage Act, among the many clauses of which its incorporation looked an absurdly minor event, as being “for the better raising and paying into the Exchequer of the sum of £1,200,000.”

Overspending by the king and his refusal to repay his debts led to the creation of a special bank, granted privileges so that kings might continue such tactics without facing the consequences. Such was the beginning of the central banking authority. It was designed not to correct bad banking practices that had arisen in the absence of regulation, but to accommodate bad government practices. According to Smith,

After 1875 the central banking systems of those countries which already had them were accepted without further discussion, and the practical choice of the one system in preference to the alternative was never again questioned. Moreover, the declared superiority of central banking became nothing less than a dogma without any very clear understanding of the exact nature of the advantages, but there remained one among the chief commercial countries of the world which still lacked a central banking organisation: this was the United States of America.

The United States held out longer than most other large nations before giving in to the idea of a central banking authority. The argument often runs that the pre-Fed period in the United States was one of instability and crises. As George Selgin puts it,

…financial crises have not been limited to those nations in which currency-issuing privileges are concentrated in a single bank. The United States, in particular, endured a series of severe crises—in 1873, 1884, 1893, and 1907—prior to its decision to embrace central banking in the form of the Federal Reserve System, which was created in 1913. The U.S. case therefore appears to contradict my claim that central banks are properly regarded as destabilizing rather than stabilizing institutions.

Selgin answers this critique by stating,

…the pre-Fed crises can themselves be shown to have been exacerbated, if not caused, by regulations originally aimed at easing the Union government’s fiscal burden. The U.S. case therefore represents a special instance of the general pattern according to which central banking emerged as an unintended by-product of fiscally motivated government interference with the free development of national financial institutions.

The interference in the U.S. case consisted, in part, of Civil War legislation that limited commercial banks’ ability to issue their own banknotes. National banks were allowed to issue their own notes only if every dollar of such notes was backed by $1.10 in federal government bonds, and state-chartered banks were forced to withdraw altogether from the currency business by a prohibitive tax assessed against their outstanding circulation beginning in August 1866. The result of these combined regulations was an aggregate stock of paper currency geared to the available supply of government securities. From the late 1870s onward, as the government took advantage of regular budget surpluses to reduce its outstanding debt, the supply of eligible backing for national banknotes dwindled, and the total stock of such notes also dwindled until, by 1891, the latter stock was only half as great in value terms as it had been a decade earlier. Regulations also prevented the stock of currency from adjusting along with seasonal increases in currency demand. Yet the U.S. economy was growing, and the seasonal demand for currency tended to rise sharply during the harvest season—that is, between August and November of each year. In the circumstances, it is hardly surprising that the United States endured frequent crises and that they all involved more or less severe shortages of paper currency.
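
To make the arithmetic of that backing rule concrete, here is a minimal sketch (the dollar figures are hypothetical; only the $1.10-per-$1.00 ratio comes from the passage above). It simply shows that when note issue is capped at the value of eligible bonds divided by 1.10, a shrinking supply of government securities mechanically shrinks the maximum stock of national banknotes, no matter how strong the seasonal demand for cash.

```python
# Hypothetical illustration of the bond-backing rule Selgin describes:
# every $1.00 of national banknotes had to be backed by $1.10 in
# federal government bonds, so the note ceiling is bonds / 1.10.

BACKING_RATIO = 1.10  # dollars of eligible bonds required per dollar of notes

def max_note_issue(eligible_bonds: float) -> float:
    """Upper bound on note issue given a stock of eligible government bonds."""
    return eligible_bonds / BACKING_RATIO

# As the government retired debt from the late 1870s onward, the pool of
# eligible bonds shrank, and the note ceiling shrank with it.
# The dollar figures below are invented purely to show the direction of the effect.
for bonds in (1_000_000, 750_000, 500_000):
    print(f"Eligible bonds: ${bonds:>9,} -> note ceiling: ${max_note_issue(bonds):>9,.0f}")
```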

The alleged instability of this period is the standard excuse for the authority of the Federal Reserve. Once again, a central bank was created not because the market was failing, but because of government spending and regulatory failure (too much regulation, that is, rather than too little). Yet the stability that the Fed supposedly provides is difficult to demonstrate. As Selgin points out,

…by almost any measure, the major financial crises of the Federal Reserve era—those of 1920–21, 1929–33, 1937–38, 1980–82, and 2007–2009 most recently—have been more rather than less severe than those experienced between the Civil War and World War I, even overlooking outbreaks of relatively severe inflation from 1917 to 1920 and from 1973 to 1980.

Many arguments point to the central bank itself as one of the reasons crises have become more severe. In another piece, Richard Ebeling states,

…in the market for borrowing and lending the Federal Reserve pushes interest rates below the point at which the market would have set them by increasing the supply of money on the loan market. Even though savers are not willing to supply more of their income for investors to borrow, the central bank provides the required funds by creating them out of thin air and making them available to banks for loans to investors. Investment spending now exceeds the amount of savings available to support the projects undertaken.

Investors who borrow the newly created money spend it to hire or purchase more resources, and their extra spending eventually starts putting upward pressure on prices. At the same time, more resources and workers are attracted to these new investment projects and away from other market activities.

The twin result of the Federal Reserve’s increase in the money supply, which pushes interest rates below that market-balancing point, is an emerging price inflation and an initial investment boom, both of which are unsustainable in the long run. Price inflation is unsustainable because it inescapably reduces the value of the money in everyone’s pockets, and threatens over time to undermine trust in the monetary system.

The boom is unsustainable because the imbalance between savings and investment will eventually necessitate a market correction when it is discovered that the resources available are not enough to produce all the consumer goods people want to buy, as well as all the investment projects borrowers have begun.
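
A rough way to see the imbalance Ebeling describes is to compare the funds households actually save with the funds investors borrow once newly created money is added to the loan market. The sketch below uses invented linear supply and demand schedules (none of the numbers come from the article); it only illustrates that, at an interest rate pushed below the market-clearing level, investment borrowing exceeds real savings, with the gap financed by money created out of thin air.

```python
# Hypothetical loanable-funds sketch: the linear schedules below are invented
# for illustration and carry no empirical content.

def savings_supply(rate: float) -> float:
    """Funds households willingly save at a given interest rate (hypothetical)."""
    return 1000 * rate

def investment_demand(rate: float) -> float:
    """Funds investors wish to borrow at a given interest rate (hypothetical)."""
    return 100 - 1000 * rate

# At the market-clearing rate, savings and investment match; at the
# suppressed rate, borrowing outruns savings and the gap must be
# covered by newly created money.
for label, rate in (("market-clearing rate", 0.05), ("suppressed rate", 0.03)):
    saved = savings_supply(rate)
    borrowed = investment_demand(rate)
    print(f"{label} ({rate:.0%}): savings = {saved:.0f}, "
          f"investment borrowing = {borrowed:.0f}, gap = {borrowed - saved:.0f}")
```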

Yet people have become accustomed to a central bank and have allowed themselves to believe that the Federal Reserve and other central authorities are better left in place. The great experiment of central banking has had nearly a hundred years in the U.S. alone, nearly a century to perfect its craft. And it has shown time and again that it cannot prevent crises any better than the free market can; in many cases it has done worse. Why should we not reflect honestly on the way things are done now and determine whether it truly is the best course?

Tom Duncan
Sound Money Fellow
Atlas Economic Research Foundation

