

In 1558, Sir Thomas Gresham wrote a letter to Queen Elizabeth of England that would forever preserve his name for those of us studying money. What became known as Gresham’s law is popularly invoked to describe monetary competition, frequently stated and remembered as “Bad money drives out good money” — and it’s entirely wrong.
The namesake of the economic law is almost entirely innocent of its creation; in his letter, informing the newly ascended queen about the Crown’s foreign debt in Antwerp, he mentions that after Elizabeth’s father, Henry VIII, had debased the currency in the Great Debasement of the 1540s and 1550s, “all your fine goold was convayd ought of this your realme.”
That’s it: the apocryphal basis for one of economics’ first and perhaps most sound laws. It does require, wrote the British monetary economist Frank W. Fetter in the 1930s,
considerable ingenuity to draw from Gresham’s modest statement about debasement and the exchanges, with its historical inaccuracies, a universal law that “bad money drives out good money.” Most certainly the passage gives no warrant for the assertion by many writers that Gresham expressed an economic law with great exactness.
Instead, the originator of what we call Gresham’s law was Henry Dunning MacLeod, writing a mere three centuries later. Joseph Schumpeter, in his History of Economic Analysis, calls it both “trivial” and a “not quite correct phrase.” Robert Greenfield and Hugh Rockoff describe the law as “one of money’s governing principles.” Adding proper modifications, the Nobel laureate Robert Mundell describes it as “a theorem of the general law of economy, a consequence of the theory of rational economic behavior.” Now, why do we care about this seemingly commonsensical piece of obscure monetary economics?
In so many aspects of economics, there is a mismatch between what the economists are really saying and what the general public believes. The economic law to which Gresham has lent his name is a typical victim, as illustrated spectacularly this week by the Financial Times’ chief U.S. commentator, Edward Luce. In a piece titled “The Gresham’s Law of Our Democracies,” Luce invokes Gresham to explain Hayek’s famous point: in politics, the worst get on top.
It’s about as strange as it sounds.
A Run-Down of Gresham’s Law
The “venerable principle” that we know as Gresham’s law has had only the first part of its true meaning conveyed to the public. The idea isn’t simply that when two or more currencies circulate together, the money of superior quality (“good money”) will be hoarded while the money of inferior quality (“bad money”) dominates trade. That would merely be the outcome of monetary competition in which users preferred some of the available moneys’ attributes for different purposes.
Rather, Gresham’s law only causes inferior-quality money to dominate an economy’s transactions when there is some institutional rule (fixed mint price or legal tender laws) that forces market participants to accept two differing monies at a fixed ratio — and, importantly, this ratio must be different from current market prices. Mundell writes that “Gresham’s Law comes into play only if the ‘good’ and ‘bad’ exchange for the same price.” In this sense we can understand why Schumpeter called it “trivial.” This is nothing more than a special case of the law of one price — or the common sense notion that consumers, in most places at most times, will choose the cheaper version of two identical products offered to them. If one of the monies we hold is officially overvalued — that is, we get higher purchasing power for it than we should — it doesn’t take strict rationality postulates to figure out that consumers will spend their overvalued money.
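The arithmetic behind this special case of the law of one price can be sketched with hypothetical numbers (the 15:1 legal ratio and 15.5:1 market ratio below are illustrative, not historical):

```python
# A sketch of why consumers spend the officially overvalued coin.
# Suppose the mint fixes 1 gold coin = 15 silver coins (legal ratio),
# while in the open market 1 gold coin trades for 15.5 silver coins.
LEGAL_RATIO = 15.0   # silver coins per gold coin under legal tender
MARKET_RATIO = 15.5  # silver coins per gold coin in actual trade

def cost_in_silver(debt_in_gold_coins, pay_with):
    """Market value (in silver-coin equivalents) given up to settle
    a debt denominated in gold coins at the legal ratio."""
    if pay_with == "gold":
        # hand over scarce gold worth 15.5 silver per coin
        return debt_in_gold_coins * MARKET_RATIO
    elif pay_with == "silver":
        # legal tender lets silver settle the debt at the 15:1 ratio
        return debt_in_gold_coins * LEGAL_RATIO
    raise ValueError(pay_with)

debt = 10  # a debt with a face value of 10 gold coins
print(cost_in_silver(debt, "gold"))    # 155.0
print(cost_in_silver(debt, "silver"))  # 150.0 -- cheaper, so the
# overvalued ("bad") silver circulates while gold is hoarded
```

Because settling in silver sacrifices less market value, silver does the spending and gold stays in the vault, exactly the hoarding pattern the law describes.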
To make matters slightly more complicated, Arthur Rolnick and Warren Weber wrote a much-read article in the 1980s where they found historical instances that contradicted this “trivial” law. After all, we have many historical instances — the French Revolution or the American Civil War, for example — where the conditions for the law did not result in the undervalued money disappearing from circulation. They write that “both bad and good money appear to have been current” at the same time. Another is 18th-century Sweden, whose peculiar seven-currency monetary system looks rather chaotic to a modern-day observer.
Rolnick and Weber’s version of the law specifies that bad money drives good money from circulation “only when use of the good money at its market (nonpar) price is too expensive.” Indeed, presuming — as Gresham’s law does — that the use of a competing money will simply cease is somewhat strange. Whenever two economically different goods circulate in the economy (Toyotas or BMWs, almond milk or cow milk), the “good” good doesn’t disappear from circulation; its price simply adjusts. Why couldn’t different officially mispriced monies (say gold and silver) fetch different prices in actual trade? Even if governments mandated legal tender laws, argued the authors, merchants could quote prices in the inferior money but provide discounts to consumers using the good money. In other words (barring limitless arbitrage profits until the mint runs out of resources), there is no way a government can fix two moneys’ prices in decentralized transactions.
George Selgin objected to Rolnick and Weber’s version by showing that preventing actors from communicating their preference over currency traps individuals in a prisoner’s dilemma. By imposing sufficiently strict penalties on breaches of legal tender, actors incorporate those additional negative payoffs and ultimately favor the bad money. In general, legal tender laws are rarely that strict, but they are possible.
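Selgin’s game-theoretic point can be illustrated with a toy 2×2 payoff matrix (the payoff numbers and penalty below are purely hypothetical, not taken from Selgin’s paper):

```python
# Toy illustration of how a legal-tender penalty traps traders into
# using the bad money. Two traders each choose which money to
# transact in; all numbers are made up for illustration.
PENALTY = 4  # fine for openly trading the good money above its legal price

# payoffs[(my_choice, other_choice)] = my payoff
payoffs = {
    ("good", "good"): 3 - PENALTY,  # both benefit, but both risk the fine
    ("good", "bad"):  1 - PENALTY,  # I risk the fine with no willing partner
    ("bad",  "good"): 1,
    ("bad",  "bad"):  2,
}

def best_response(other_choice):
    """The money choice that maximizes my payoff given the other's choice."""
    return max(["good", "bad"], key=lambda mine: payoffs[(mine, other_choice)])

# With the fine in place, "bad" is each trader's best response to either
# choice by the other, so the economy settles on (bad, bad) -- even though
# coordinating on good money (payoff 3 each, absent the fine) would have
# left everyone better off.
print(best_response("good"), best_response("bad"))  # bad bad
```

Remove the penalty and “good” becomes the best response to “good”; it is the sanction, not any preference for bad money, that pins the economy to the inferior currency.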
Economists may quibble among themselves about the precise specification for a correct Gresham’s law and the conditions for “bad money” to drive out “good money.” What is clear is that some institutional feature must operate on market-established relative prices between good and bad money, restricting actors’ payoffs such that the economy centers on a “bad” money. It’s not enough that one money is — even universally — perceived to be better than another.
The Audacious Adventures of Mr. Luce
With that in mind, let’s see how much sense it makes to think about “bad” politicians in a Gresham’s-law way. Luce writes:
Where have all the good people gone? Gresham’s Law holds that the bad drives out the good. He was talking about debased currency. But it applies equally to the quality of public life. I have no doubt that there are just as many talented people, and individuals of high moral character, in our time as in earlier ones. Our average IQ apparently keeps rising. The intelligent ones obviously have enough sense to avoid politics.
The question itself has bugged political scientists and economists for centuries and is well worth pondering: is there something in the malicious field of politics that attracts particularly awful people?
On a superficial level, it also makes sense for Luce to invoke Gresham here. Luce wants to make the reasonable case that politics recently has been an arena where “bad” conquers “good,” and in the popular (and wrong) version of Gresham’s law, “bad money” similarly drives out “good money.” Now that we know a lot more about Gresham’s law, however, we can see how this makes very little sense.
First, Gresham never talked about the law that carries his name. Second, for Gresham’s law to drive “good” out of circulation, there must be some institutional (non-price) rule that holds the relative prices of “good” and “bad” in place, preventing them from adjusting. We quibble over exactly which such feature (mint price, currency traders, legal tender laws) matters and how much is required, but there must be one.
Apparently, then, it’s too expensive for “good” politicians to circulate. I don’t know what that means. I can’t think of a legal restriction that makes us unable to trade “good” politicians at a premium price or that imposes official sanctions on us for doing so. I don’t even know what it means to say that a politician has a market price!
Mr. Luce has made the reasonable observation that politics seems less sane than we would expect it to be. He need not invoke a mistaken version of Gresham’s law to make that case.
Gas and Apparel Pull Everyday Prices Down in November


AIER’s Everyday Price Index fell 0.1 percent in November after posting a 0.4 percent increase in October. The Everyday Price Index has fallen in four of the last six months. The index measures price changes people see in everyday purchases such as groceries, restaurant meals, gasoline, and utilities. It excludes prices of infrequently purchased, big-ticket items (such as cars, appliances, and furniture) and prices contractually fixed for prolonged periods (such as housing).
The Everyday Price Index including apparel, a broader measure that adds clothing and shoes, decreased 0.3 percent in November after a 0.3 percent rise in October. It has fallen in three of the past six months. Apparel prices fell 2.5 percent on a not-seasonally-adjusted basis in November and are down 1.6 percent over the past year. Apparel prices tend to be volatile, registering sporadic large gains or declines between stretches of relatively steady prices.
The Consumer Price Index, which includes everyday purchases as well as infrequently purchased, big-ticket items and contractually fixed items, fell 0.1 percent in November, matching the decline in the Everyday Price Index. The Everyday Price Index is not seasonally adjusted, so we compare it with the unadjusted Consumer Price Index.
Over the past year, the Consumer Price Index is up 2.1 percent. Over the same period, the Everyday Price Index has risen 1.2 percent while the Everyday Price Index including apparel is up 0.9 percent. The modest increases in both indexes over the past year are largely due to weak energy and grocery store prices.
Motor-fuel prices fell 1.1 percent for the month on a not-seasonally-adjusted basis. Over the past year, motor-fuel prices are off 1.3 percent. Motor-fuel prices are largely a function of crude oil prices. West Texas Intermediate crude oil prices fluctuated dramatically from mid-2017 through mid-2019, rising to a peak above $75 per barrel in October 2018 before plunging to less than $45 by December 2018. Crude prices have been relatively stable since May, bouncing around in a range of $50 to $60.
Grocery prices fell 0.3 percent in November and are up just 1.0 percent from a year ago. Over the last five years, grocery prices are essentially unchanged.
The components with the largest weights in the Everyday Price Index are food at home (weighted 20.8 percent and declining 0.3 percent in November), food away from home (17.6 percent and a 0.2 percent rise), household fuels and utilities (13.3 percent and a 0.3 percent drop), and motor fuel (11.8 percent with a 1.1 percent decrease). Together, these four categories account for 63.5 percent of the Everyday Price Index.
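As a quick check on the figures above, each heavyweight component’s rough contribution to the monthly index change is its weight times its own price change:

```python
# Back-of-the-envelope arithmetic for the four largest Everyday Price
# Index components, using the weights and November changes cited above.
components = {
    # name: (weight in percent, November price change in percent)
    "food at home":                 (20.8, -0.3),
    "food away from home":          (17.6, +0.2),
    "household fuels and utilities": (13.3, -0.3),
    "motor fuel":                   (11.8, -1.1),
}

# The four categories' combined share of the index
total_weight = sum(weight for weight, _ in components.values())
print(f"combined weight: {total_weight:.1f} percent")  # 63.5

# Approximate contribution of each component to the monthly change,
# in percentage points of the overall index
for name, (weight, change) in components.items():
    contribution = weight / 100 * change
    print(f"{name}: {contribution:+.3f} percentage points")
```

The four contributions sum to roughly −0.2 percentage points, with the remaining 36.5 percent of the index offsetting part of that decline to yield the overall 0.1 percent drop.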
Overall, net changes in the Everyday Price Index remain modest. Energy prices are the most volatile component and have been, on balance, a negative contributor in recent months. Grocery prices (food at home) have also been rising at a slow pace, in sharp contrast to restaurant prices (food away from home), which have been rising more quickly and persistently. Apparel prices also remain volatile but in general have been a negative contributor. Other smaller components have had significant but largely offsetting trends. Notably, gardening and lawncare services prices are up 8.4 percent from a year ago, tobacco products have risen 5.5 percent, postage and delivery services are up 5.4 percent, recreational reading materials are up 4.9 percent, and pets and pet products are up 3.3 percent. Partially offsetting these were audio discs and tapes, down 2.6 percent, and video discs, down 2.5 percent. Several other smaller components have increases close to zero.


Manual Labor Will Be Revived


In a previous column, I looked at the way automation and AI are likely to transform the world of work and employment. There is a lot of discussion about this, most of which focuses on the kinds of paid work that will disappear. There is much less discussion of the new kinds of paid work that will come into being.
If the result of automation is to create more jobs than it destroys, what kinds of work are likely to expand in the future? This question is related to but distinct from the first. In one way, it is very hard to answer. Many of the new kinds of employment that will appear are literally unimaginable: if we could imagine them, they would already exist.
Back in the 1980s, nobody could have told people worrying about the decline of jobs in the steel industry that there would be work designing apps for mobile phones, for example. So we can be confident that new kinds of work will appear but have no idea what they will be; it is for entrepreneurs to invent and discover them.
However, we can do some thinking about it, because while the details may not be clear, there are cases where we can have a strong notion of what will appear. In the early 1900s, for example, there were a large number of jobs associated with horses, at that time still the main source of power for transport. Almost all of them were gone by 1930, but people could guess that a lot of new work would be created servicing and supporting (as well as producing) the motor vehicles that were replacing horses. Thinking like this about the present situation leads to a number of conclusions, and to one in particular that many will find both surprising and heartening.
At-Risk Jobs
If you read the various studies that have been done over the last five years, there is widespread agreement about the kinds of jobs that are “at risk.” A recent study by the Brookings Institution estimated that 25 percent of current US jobs are at greater than 50 percent risk of automation. Some are not surprising. Any job that is both boring and repetitive is likely to be at risk. You might suppose that this would mean low-paid manual occupations would be at high risk, and indeed many are — shelf stacking, waitressing, and data entry are all at high risk.
On the other hand, many better-paying jobs are at considerable risk of disappearing. A range of jobs in the transport industry, from truck and taxi drivers to train and bus drivers, are likely to go in the medium term because of the rise of autonomous vehicles (most new metros around the world already have driverless trains). A wide range of clerical and administrative tasks are also likely to be handed over to algorithms, from financial services to company administration and financial advice.
The last example brings up another point. A recent study by the OECD argued that jobs involving face-to-face contact were more likely to survive, which would suggest a rosier future for financial advisors. Experience indicates, however, that this is not true. When the ATM was introduced, some argued that it would not catch on because customers preferred the human interaction with a teller. In fact, the opposite proved to be the case. The same is likely to be true in a range of occupations, not just financial advice and wealth management.
The common factor is that these are activities that can be readily reduced to a tick list of standard questions and hence an algorithm. Routine medical care and diagnosis is one; another is most standard legal work. This suggests that the risk of automation is actually high for many professional occupations such as medical general practice and routine attorney work. In the future, we will probably consult an algorithm rather than a human doctor or lawyer or accountant. However, surgery and nursing are still almost certain to be done by flesh and blood humans.
That particular example can lead us to the surprising and heartening conclusion mentioned earlier. Much of the commentary argues that we are moving into a world where the labor market will be dominated by two kinds of employment. There will be creative jobs that are open to highly educated people and which pay very well, and there will be unskilled and low-productivity jobs (hence low paying), but there will not be a range of middle-skill jobs that pay a decent or even high wage. The view is captured in the title of Tyler Cowen’s work Average Is Over. This has a number of alarming implications, most notably that access to high-paying work is going to become even more dependent than it already is on higher-education qualifications. We should be more sanguine, however.
A Heartening Conclusion
Economic theory, confirmed by empirical research, tells us that people will in general only adopt a new technology when the expected gain from doing so is greater than the cost (technically, when the marginal gain exceeds the marginal cost). This means there are many things that are technologically feasible that do not happen because they do not pass the test of their benefit being greater than their cost.
One example is supersonic passenger flight. This is certainly technically feasible — we know this because two such aircraft were in commercial service for some time. However, there are none now and no prospects for any. The reason, as Boeing discovered while trying to develop a supersonic transport, is that the benefit (getting from London to New York in three hours rather than seven, for example) is not valuable enough to consumers to exceed the costs of such an aircraft, which are due to the technology available and the unavoidable challenges of traveling at such a speed in the Earth’s atmosphere in a way that will meet standards of comfort and safety for passengers. (Military personnel have a different set of criteria, which is why supersonic combat aircraft are commonplace; plus, the buyers of such aircraft, namely governments, are not as price sensitive as airlines.)
Another classic example is the artificial reproduction of manual dexterity or, to put it another way, of the combination of the human hand with the human brain and the complex feedback and control system (touch and sight) that connects the two. Despite much effort and research, this has proven incredibly difficult to reproduce artificially. Consequently, for most tasks involving manual dexterity and manipulation, it is still cheaper to use a human than a robot, and this seems likely to be the case for a long time. It explains why, in every estimation, nursing and surgery are both at very low risk of being automated, despite the fact that most surgical procedures and nursing tasks are standard and routine in many ways.
Manual Trades and Personal Services
So, there is a wide range of tasks and work that will not be automated because of this. However, the response might be that the kinds of jobs this applies to are precisely the low-productivity and low-skill jobs mentioned earlier, such as cleaning. Certainly, this is true, but it is not the whole truth. There is also a wide range of work involving manual dexterity that is skilled and highly paid, and the likely impact of AI will actually be to make that kind of work more productive and hence higher paid. Meanwhile, other foreseeable changes will increase the demand for these kinds of work and hence the number of employment opportunities, even allowing for the increase in productivity of individual workers.
One such kind of work is the skilled manual trades: plumbing, painting and decorating, electrical work, and construction of all kinds. Another is personal-service work such as that of personal trainers or coaches. Teaching and research are further examples (at the moment, these are thought of as jobs that require a degree, but that is more about rationing access than reality). The manual trades are very difficult to automate because of the need for close-up manipulation; a robot that could do an electrician’s or plumber’s job would be seriously expensive.
At this point, another feature of innovation comes into play. What much innovation does is not so much replace human labor as enhance it and make it more productive. AI and associated control systems are a classic example of this. You will still need the manual dexterity of the surgeon or plumber, but the AI and associated technology will increase the range of things that they can do and make them much more effective. In other words, it will increase the value of the service they provide as well as the quantity per unit of time worked — which is the real definition of increased productivity. This translates into higher incomes for people delivering this kind of service.
Moreover, the demand for this kind of labor and service (as well as the others mentioned) is almost certain to increase. For one thing, the people earning very high incomes doing creative knowledge work will want to employ those providing these services in very large numbers (not least because the principle of comparative advantage means it makes sense for them to do this so they can concentrate on their own work). Another feature of AI is that it will make it much cheaper to personalize skilled work and services and so make it more valuable to the end consumer.
What we are likely to see, in fact, as well as the disappearance of a range of familiar jobs, is a revival in the value (and maybe the status?) of manual trades and personal services of all kinds. These will also become higher paying than many are at present (some, of course, already pay well). To give just one example, nursing and personal care are going to have their productivity significantly increased, while the demand for such services is going to rise organically because of the rise in the average age.
This will also mean an increase in the kind of work that requires a trade education and a relative decline in the need and demand for academic higher education. That sector will have to find another market to replace or supplement people looking for certification to have a shot at knowledge or creative work — the business of providing education and tutoring as a leisure and consumption good is one possibility.
In fact, one outcome of AI and automation may well be a revival of manual labor and of the traditional working class — maybe becoming more like an artisan class again. It is actually the credentialed and salaried white collar middle class that is more at risk in the years to come.
The overall effect will be massively positive, as economics leads us to expect. A recent study by PwC predicts that automation and AI will contribute an additional $15.7 trillion to the global economy by 2030, with a boost to local GDP of up to 26 percent by the same date. We should be sanguine about the impact of this latest wave of innovation, not just in terms of its overall effect on the wealth of the world but also in terms of its likely sociological impact.