We can think of history as a succession of rare big events in which a story goes viral, often (but not always) with the help of an attractive celebrity (even a minor celebrity or fictional stock figure) whose attachment to the narrative adds human interest.
Contagion is strongest when people feel a personal tie to an individual in or at the root of the story, whether a stock personality type or a real celebrity.
Narrative economics demonstrates how popular stories change through time to affect economic outcomes, including not only recessions and depressions, but also other important economic phenomena. The idea that house prices can only go up attaches to the stories of rich house flippers seen on television. The idea that gold is the safest investment attaches to stories of war and depression. These narratives have a contagious element, even if their attachment to any given celebrity is tenuous.
The overriding theme is that most people have little or nothing to say if you ask them to explain their objectives or philosophy of life, but they brighten at the opportunity to tell personal stories, which then reveal their values.
Economic narratives follow the same pattern as the spread of disease: a rising number of infected people who spread the narrative for a while, followed by a period of forgetting and falling interest in talking about the narrative.4 In both medical and narrative epidemics, we see the same basic principle at work: the contagion rate must exceed the recovery rate for an epidemic to get started.
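The threshold principle can be made concrete with a minimal Kermack-McKendrick SIR simulation. This is an illustrative sketch, not a model from the book; the initial shares and rate values are assumed for demonstration. An epidemic takes off only when the contagion rate exceeds the recovery (forgetting) rate; otherwise the story dies out from its tiny initial seed.

```python
def sir_step(s, i, r, beta, gamma, dt=0.1):
    """One Euler step of the Kermack-McKendrick SIR model (population fractions)."""
    new_infections = beta * s * i * dt  # contagion: susceptibles meet active spreaders
    new_recoveries = gamma * i * dt     # recovery: people stop talking about the story
    return s - new_infections, i + new_infections - new_recoveries, r + new_recoveries

def peak_infected(beta, gamma, steps=5000):
    """Largest share actively spreading the narrative over the whole run."""
    s, i, r = 0.999, 0.001, 0.0
    peak = i
    for _ in range(steps):
        s, i, r = sir_step(s, i, r, beta, gamma)
        peak = max(peak, i)
    return peak
```

With contagion 0.5 and recovery 0.1, the story sweeps through the population; with the rates reversed, it never grows beyond its initial 0.1 percent.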
Just as the world experiences co-epidemics of diseases, where two or more diseases interact positively with each other, we also see co-epidemics of narratives in which the narratives are perceived as sharing a common theme, such as case studies that illuminate a political argument, creating a picture in the mind that is hard to see if one focuses on just one of the narratives. In other words, large-scale economic narratives are often composed of a constellation of many smaller narratives. Each smaller narrative may suggest a part of a larger story, but we need to see the full constellation to discern the full theme.
Hence narratives that seem contrary to prevailing thought may have contagion rates too low to produce epidemics.
In addition to a constellation of narratives, there is a confluence of narratives that may help drive economic events. By a confluence, I mean a group of narratives that are not viewed as particularly associated with one another but that have similar economic effects at a point in time and so may explain an exceptionally large economic event. For example, in my 2000 book Irrational Exuberance, I listed a dozen precipitating factors, or narratives, that happened to occur together around 2000 to create the most elevated stock market in the United States ever, soon to be followed by a crash. The list, in brief, comprised the World Wide Web, the triumph of capitalism, business success stories, Republican dominance, baby boomers retiring, business media expansion, optimistic analysts, new retirement plans, mutual funds, decline of inflation, expanding volume of trade, and rising culture of gambling. If we want to know why an unusually large economic event happened, we need to list the seemingly unrelated narratives that all happened to be going viral at around the same time and affecting the economy in the same direction. However, it is important to recognize that big economic events usually can’t be described as caused by just a single constellation of narratives. It is far more likely that big economic events are not explainable in such satisfying terms. Instead, explaining those events requires making a list of economic narratives that itself cannot be described as a simple story or a contagious narrative.
In 1938 the existentialist philosopher Jean-Paul Sartre wrote, “A man is always a teller of tales, he lives surrounded by his stories and the stories of others, he sees everything that happens to him through them; and he tries to live his life as if he were recounting it.”1
Ultimately, the story’s rich visual imagery helped it evolve from an economic anecdote into a long-term memory. The visual detail of the napkin may have lowered the speed at which people forgot the narrative, which could have helped the epidemic penetrate a large fraction of the population. There is a lesson to be learned here for those who want their stories to go viral: when authors want their audience to remember a story, they should suggest striking visual images. In ancient Rome, the senator Cicero advocated the use of this strategy, quoting the scholar Simonides: “For Simonides, or whoever else invented the art, wisely saw, that those things are the most strongly fixed in our minds, which are communicated to them, and imprinted upon them, by the senses; that of all the senses that of seeing is the most acute; and that, accordingly, those things are most easily retained in our minds which we have received from the hearing or the understanding, if they are also recommended to the imagination by means of the mental eye.”
People tended to share content that enhances self-related thoughts—that is, information that “engages neural activity in regions related to such processes [self-presentation or mental concept], especially in medial prefrontal cortex,” and that “involves cognitions or forecasts about the mental states of others.”3 In other words, these people are more willing to share their health information in the form of stories about themselves and others.
The polymath David Hume (1711–76) wrote in 1742: “When any causes beget a particular inclination or passion, at a certain time and among a certain people, though many individuals may escape the contagion, and be ruled by passions peculiar to themselves; yet the multitude will certainly be seized by the common affection, and be governed by it in all their actions.”9
Gustave Le Bon said in his book Psychologie des foules (The Crowd, 1895), “Ideas, sentiments, emotions, and beliefs possess in crowds a contagious power as intense as that of microbes.”
In a competitive market in which competitors manipulate customers, and in which profit margins are competed away to normal levels, no one company can choose not to engage in similar manipulations. If they tried, they might be forced into bankruptcy. A phishing equilibrium with a certain acceptable level of dishonesty in narrative is therefore established.14 Phishing equilibria may not be all that bad. In the case of the book cover, there has developed an art of book jackets that sometimes have significant value.
All these examples illustrate a fundamental error that people tend to make: phools think that the popularity of a story or of a brand is evidence of its quality and deep importance, when in fact it rarely is. On the contrary, growing evidence in recent years has shown that many consumers detest logos and aggressive marketing.15 Narrative contagion is often the result of arbitrary details, such as the frequency of meetings among people (many people see a logo on a shirt) and natural links to other contagious narratives (Lacoste’s onetime fame as a tennis player).
Framing is related to Kahneman and Tversky’s (1973) representativeness heuristic, whereby people form their expectations based on some idealized story or model, judging likelihood by the prominence of that idealized story rather than by estimated probabilities. For example, we may judge the danger of an emerging economic crisis by its similarity to a remembered story of a previous crisis, rather than by any logic.
Psychologists have also noted an affect heuristic, whereby people who are experiencing strong emotions, such as fear, tend to extend those feelings to unrelated events.26 Sometimes people note strong emotions or fears about possibilities that they know logically are not real, suggesting that the brain has multiple systems for assessing risk. This “risk as feelings” hypothesis holds that some primitive brain system more connected to palpable emotions has its own heuristic for assessing risk.
Though modern economists tend to be very attentive to causality, as a general rule they do not attach any causal significance to the invention of new narratives. I want to argue here not only that causality exists, but also that it goes both ways: new contagious narratives cause economic events, and economic events cause changed narratives.
Psychologists have studied how the brain chooses which memories to give flashbulb status, analogous to choosing which photos to put in a family album. It turns out that flashbulb memories are connected not only to the emotions attached to the remembered event but also to social psychological factors. Memories that involve a shared identity with others, or that are rehearsed with others, are more likely to achieve flashbulb status.14 Thus flashbulb memories are selected in a way that gives them a better chance to be involved in the formation of contagious narratives.
In attempting to be vivid, storytellers often resort to fiction or fake news, thereby providing amplified tales. The history of narratives shows that “fake news” is not new. People have always liked amusing stories, and they spread stories that they suspect are not true, as in urban legends. Indeed, people often spread titillating stories without making any clear moral decision about whether they are spreading falsehoods.
Ultimately, the mass of people whose consumption and investment decisions cause economic fluctuations are not very well informed. Most of them do not view or read the news carefully, and they rarely get the facts in any discernible order. And yet their decisions drive aggregate economic activity. It must be the case, then, that attention-getting narratives drive those decisions, often with an assist from celebrities or trusted figures.
Proposition 1: Epidemics Can Be Fast or Slow, Big or Small
The contagion rate also varies greatly from one narrative epidemic to another. One example of a narrative epidemic with very high contagion might be that of a national emergency, like the start of a war. With such narratives, people feel that the story is so important that they have license to interrupt any other conversation with the news, or to speak with people with whom they do not normally communicate. An example of a successful narrative with a very low contagion rate might be a patriotic story illustrating a country’s national greatness, a story that is brought up only at appropriate times at home, in the classroom, or at events sponsored by civic organizations. Such a narrative can develop (slowly) into a huge epidemic if the forgetting rate is low enough.
A high contagion parameter and a low recovery rate mean that almost the whole population eventually hears the narrative, sometimes very quickly. But the same narrative can reach most of the population rather slowly if the contagion parameter is low but the recovery rate is even lower.
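This claim can be checked numerically with a simple SIR-style simulation (an illustrative sketch with assumed parameter values, not a calculation from the book). In the SIR model, the final share of the population ever reached depends only on the ratio of contagion to recovery, so scaling both rates down together yields the same eventual reach on a much slower timetable.

```python
def final_size_and_halfway_time(beta, gamma, dt=0.05, steps=60000):
    """Integrate the SIR model with Euler steps; return the share that ever
    hears the narrative and the time by which half of that share was reached."""
    s, i, r = 0.999, 0.001, 0.0
    history = []
    for step in range(steps):
        new_inf = beta * s * i * dt
        new_rec = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        history.append(((step + 1) * dt, 1.0 - s))  # 1 - s = everyone ever infected
    final = 1.0 - s
    t_half = next(t for t, reached in history if reached >= final / 2)
    return final, t_half
```

Dividing both parameters by ten (keeping the ratio at 5) leaves the final reach essentially unchanged while stretching the epidemic's timetable roughly tenfold.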
Proposition 2: Important Economic Narratives May Comprise a Very Small Percentage of Popular Talk
On its own, any individual vague narrative might not have determined behavior, but a constellation of such narratives may have.
Proposition 3: Narrative Constellations Have More Impact Than Any One Narrative
Proposition 4: The Economic Impact of Narratives May Change Through Time
We must pay attention to the names that people attach to their narratives. Seemingly minor changes in the name of a narrative can matter a lot, especially if the new name attaches to a different constellation of narratives. In linguistics, synonyms never have exactly the same meaning. If pressed, people can state complex thoughts about the slightly different connotations of synonyms. In neurolinguistics, synonyms have different connections in the neural network. Some of those connections can matter a lot in terms of the economic ideas they support.
Proposition 5: Truth Is Not Enough to Stop False Narratives
According to political scientist Stephen Van Evera (1984), World War I started at least partly because a false narrative, which he calls “the Cult of the Offensive,” went viral. This narrative was a theory that the country that moves first to attack another country will generally have the advantage. The idea was supported by some historical narratives and illustrated by simplistic psychological, mathematical, and bandwagon arguments. Ultimately, Van Evera argues, this theory led to instability: everyone wanted to attack first. Germany thought it had a “window of opportunity” to successfully pursue a “preventive war” against Russia. But the narrative was wrong. It had economic consequences—a huge arms race—and resulted in a war that was disastrous for both the offense and the defense. Norman Angell called the narrative “The Great Illusion” in a 1911 book with that title. Angell’s ideas were convincing to many (and he later won the Nobel Peace Prize for his work), but they did not go viral fast enough to prevent the war. The illusion won out even after it had been decisively disproven, because the proof did not spread as fast as the illusion did.
Ultimately, a story’s contagion rate is unaffected by its underlying truth. A contagious story is one that quickly grabs the attention of and makes an impression on another person, whether that story is true or not.
False stories had six times the retweeting rate on Twitter of true stories. The researchers did not interpret that finding as specific to Twitter, and the result may be specific to the time of the study, a time when mistrust of conventional media sources was higher than usual. Rather, these authors interpreted their results as confirming that people are “more likely to share novel information.” In other words, contagion reflects the urge to titillate and surprise others. We can add another twist to that conclusion: a new story correcting a false story may not be as contagious as the false story, which means that the false narrative may have a major impact on economic activity long after it is corrected.
Proposition 6: Contagion of Economic Narratives Builds on Opportunities for Repetition
Proposition 7: Narratives Thrive on Attachment: Human Interest, Identity, and Patriotism
1. Epidemics can be fast or slow, big or small. The timetable and magnitude of epidemics can vary widely.
2. Important economic narratives may comprise a very small percentage of popular talk. Narratives may be rarely heard and still economically important.
3. Narrative constellations have more impact than any one narrative. Constellations matter.
4. The economic impact of narratives may change through time. Changing details matter as narratives evolve over time.
5. Truth is not enough to stop false narratives. Truth matters, but only if it is in-your-face obvious.
6. Contagion of economic narratives builds on opportunities for repetition. Reinforcement matters.
7. Economic narratives thrive on human interest, identity, and patriotism. Human interest, identity, and patriotism matter.
These perennial narratives exert an overarching and ever-shifting influence on society today: many of the challenges that we tend to attribute to discrete contemporary forces are in fact influenced profoundly by narratives—stories that took root generations and even centuries ago but that reappear in newly configured expressions.
Typically, when a narrative reappears, say in another country or a few decades later, the mutated narrative tends to have features different from those of the original narrative—a different celebrity, different visual images, a different punch line.
Mutations in a narrative or in the environment surrounding the narrative may cause it to become an economic narrative by tying it better to economic decisions. A mutation may also occur that increases contagion but twists the story so that it ceases to be the same economic narrative. It may then morph into some different moral or lesson afterward. For example, as we shall see below, a narrative about labor-saving machines replacing jobs (chapter 13) created a sense of fear during the Great Depression of the 1930s, but the same narrative mutated (chapter 14) to create a sense of opportunity during the dot-com boom of the 1990s.
Narratives may be relevant to economic events even if the timing of the narrative’s appearance does not coincide with the event. When it goes epidemic, a narrative may inspire a latent fear, such as a fear that technology will someday replace one’s job, which may result eventually in changes in economic behavior years later when some other narrative or news creates a sense that the feared replacement is imminent.
The first step in our task is organizing and classifying some of the major economic narratives and the mutations that allowed them to recur over long intervals of time. The remaining chapters in this part describe nine perennial economic narratives, along with some of their mutations and recurrences. Most readers will recognize these narratives in their most recent forms but not in their older forms:
1. Panic versus confidence
2. Frugality versus conspicuous consumption
3. Gold standard versus bimetallism
4. Labor-saving machines replace many jobs
5. Automation and artificial intelligence replace almost all jobs
6. Real estate booms and busts
7. Stock market bubbles
8. Boycotts, profiteers, and evil business
9. The wage-price spiral and evil labor unions
Several classes of confidence narratives have characterized the history of the industrialized economies. The first class is a financial panic narrative that reflects psychologically based stories about banking crises. The second class is a business confidence narrative that attributes slow economic activity not so much to financial crises as to a sort of general pessimism and unwillingness to expand business or to hire. The third is a consumer confidence narrative that attributes slow sales to the fears of individual consumers, whose sudden lack of spending can bring about a recession. Figure 10.1 plots the succession of these narratives since 1800. All of these slow-moving narratives have shown growth paths that span lifetimes. Financial panic came first, followed by narratives about crisis in business confidence, followed by narratives of a crisis in consumer confidence.
Increasing self-censorship of narratives may, and sometimes does, encourage panic. Because people are aware that others self-censor, they increasingly try to read between the lines of public pronouncements to determine the “truth.”
In the eighteenth and nineteenth centuries, most people did not save at all, except maybe for some coins hidden under a mattress or in a crack in a wall. In economic terms, the Keynesian marginal propensity to consume out of additional income was close to 100%. That is, most people, except for people with high incomes, spent their entire income. So, to the spinners of narratives of these past centuries, there would have been no point in surveying ordinary people about their consumer confidence. Most people then had no concept of retirement or sending their children to college, so they had no motivation to save toward these goals.3 If they became bedridden in old age, they expected to be cared for by family or by a local church or charity. Life expectancy was short, and medical care was not expensive. People tended to see poverty as a symptom of moral degradation and drunkenness or dipsomania (now called alcoholism), not as a condition related to the strength of the economy. So there was practically no thought that consumer confidence should be bolstered. The people saw the authorities as responsible for instilling moral virtues rather than building consumer confidence. The idea that the poor should be taught to save grew gradually over the nineteenth century, the result of propaganda from the savings bank movement. But contemporary thought was miles away from the idea that a depression might be caused by ordinary people heeding the propaganda and trying to save too much.
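The near-100% marginal propensity to consume has a textbook implication worth spelling out (this is the standard Keynesian multiplier arithmetic, added here for illustration; it is not a calculation from the book). Each unit of income that is spent becomes someone else's income, of which the fraction mpc is spent again, so total spending converges to the geometric-series limit of the initial outlay divided by (1 - mpc), which grows without bound as mpc approaches 1.

```python
def spending_rounds(initial, mpc, rounds):
    """Cumulative spending after `rounds` of re-spending: in each round,
    recipients spend the fraction `mpc` of what they just received."""
    total, injection = 0.0, initial
    for _ in range(rounds):
        total += injection
        injection *= mpc
    return total

# Geometric-series limit, i.e. the Keynesian multiplier: initial / (1 - mpc)
```

With mpc = 0.8, a 100-unit outlay generates 500 units of total spending; with mpc = 0.95, it generates 2,000.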
Closely related to the idea of crowd psychology is suggestibility, which refers to the idea that individual human behavior is subconsciously imitative of and reactive to others. The word, first seen in the late nineteenth century, appears to be pivotal in narrative constellations and in popular understandings of crowd psychology.
Suggestibility implies that oftentimes we are acting blind or as in a dream. By 1920, the concept of suggestibility was widely known, indicating that people of that era may have felt that other people are easily influenced by abstract or subtle examples, and are therefore more likely to conduct their economic behavior expecting a highly unstable world. The narrative would lead them to expect herdlike behavior and perhaps to contribute to such behavior. If you think that other people are members of an impressionable herd, you may be more likely to try to anticipate the herd’s movements and try to get ahead of them.
Frugality and an impulse to maintain a modest lifestyle have roots going back to ancient times. Sumptuary laws in ancient Greece and Rome, as well as China, Japan, and other countries, forbade excess ostentation. Stories about the disgusting flaunting of wealth are one of the longest-running perennial narratives, in many countries and religions. Opposing these frugality narratives are conspicuous consumption narratives: to succeed in life, one must display one’s success as an indication of achievement and power. The two narratives are at constant war, with modesty relatively strong during some periods and conspicuous consumption dominant at other times. Both are important economic narratives because they affect how people spend or save, and hence they influence the overall state of the economy.
“The family is the unit upon which our whole American system of living is built…. Any collapse now of its morale or loss of its solvency will have a disastrous effect on posterity.”4 This narrative justified postponing unnecessary expenditures while maintaining an attitude of normalcy, but in doing so it contributed to prolonging the economic depression. It also offered a reason for families not affected by the Depression to avoid conspicuous consumption, in deference to the perceived suffering of other families and the outlook for more of the same.
The Great Depression became a time of reflection about what is important in life beyond spending money. Writing in the United Kingdom in 1931, columnist Winifred Holtby asked: In other words, can we not use this period to get rid of a little snobbery and bunkum and live lives dictated by our own tastes instead of our neighbours’ supposed notions of “what is done”? With so much to do, and a world so rich in experience, must we shut ourselves up into little genteel compartments in which we all adopt the same arbitrary standards, wear the same things, eat the same things, and produce the same sad monotony of “appearances”? … Can we not remember the wisdom of Marie Lloyd’s old song, “It’s a little of what you fancy does you good!”?—not a little of what you fancy your neighbours will fancy that you ought to fancy. Can we not dare to be poor?
The modest economic recovery that started at the bottom of the Great Depression in 1933 occurred, at least in part, because people were spending more: poverty was no longer so chic! All of these narratives imply that the causes and effects of the Great Depression extend beyond economists’ simple story of multiple rounds of expenditure and the effects of interest rates on rational investing behavior.
People seem to have a natural respect for ideas that they perceive as coming from the wisdom of the past and that reflect true or important values.
The new narrative about the gold standard in the 1930s differed from that of earlier years. The difference was partly a matter of new words. Sullivan quotes Talleyrand, Napoleon’s chief diplomat, that “the business of statesmanship is to invent new terms for institutions which under their old names have become odious to the public.”31 The supporters of the devaluation apparently understood this. By the 1930s, the new word devaluation had massively replaced the negative-sounding debasement and inflation. Devaluation refers to a constructive action of enlightened governments, while debasement and inflation connote a moral failing.
The mutation that renewed the old narrative and made it so virulent in 1811 was a new kind of power loom that was eliminating weavers’ jobs. The word Luddite continued to appear regularly in newspapers in following years and today remains a synonym for a person who resists technological progress.
In the depression of 1873–79, a particularly strong depression in the United States and Europe, concern that labor-saving inventions were at least partly to blame for high unemployment took center stage in the popular consciousness, likely worsening the depression. In the United States, this depression is typically attributed to financial speculation leading to the banking panic of 1873, but the fear-inducing narrative about a long-term loss of jobs and job prospects due to labor-saving inventions may help to explain why the depression went global. Certainly the depression of the 1870s was accompanied by farmers’ accelerated adoption of labor-saving machinery, along with more workers destroying machines and hired farm laborers threatening violence.3 Underneath the violence was widespread concern about the outlook for the common laborer.
However, by 1879, a counternarrative had already developed: labor-saving processes will increase the number of jobs, not decrease them. One editorial in the Daily American, dismissing the worries about replacement of labor by machines, noted, “The whole tendency of labor-saving processes is towards the elevation of the laboring classes, and if the change is accompanied by some hardship, so is every step in the progress of the human race.”8
An 1894 editorial in the Los Angeles Times blamed the severity of the 1890s depression on labor-saving inventions: “There is no doubt that the introduction of labor-saving machinery and the consequent increase of production has had more than a little to do with the present depression in business…. It is true that during the past few years the increase in the invention and adoption of labor-saving machinery has been so great that the community has scarcely been able to keep up with it.”11
“The reason we have this unemployment is because we are eliminating jobs through labor-saving methods faster than we are creating them.”20 These words, alongside the new official reporting of unemployment statistics, created a contagion of the idea that a new era of technological unemployment had arrived, and the Luddites’ fears were renewed. The earlier agricultural depression, with its associated fears of labor-saving machinery, began to look like a model for an industrial depression to follow.
Underconsumption narratives appeared five times as often in ProQuest News & Newspapers in the 1930s as compared with any other decade. The narrative has virtually disappeared from public discourse, and the topic now appears largely in articles about the history of economic thought. But it is worth considering why it had such a strong hold on the popular imagination during the Great Depression, why the narrative epidemic could recur, and what mutations or environmental changes would increase its contagion. Today, underconsumption sounds like a bland technical phrase, but it had considerable emotional charge during the Great Depression, as it symbolized a deep injustice and collective folly. At the time, it was mostly a popular theory, not an academic theory.
For example, the US Senate in Washington, DC, replaced its non-dial phones with dial telephones in 1930, the first year of the Great Depression. Three weeks after their installation, Senator Carter Glass introduced a resolution to have them torn out and replaced with the older phones. Noting that operators’ jobs would be lost, he expressed true moral indignation against the new phones: “I ask unanimous consent to take from the table Senate resolution 74 directing the sergeant at arms to have these abominable dial telephones taken out on the Senate side … I object to being transformed into one of the employes of the telephone company without compensation.”32 His resolution passed, and the dial phones were removed. It is hard to imagine that such a resolution would have passed if the nation had not been experiencing high unemployment. This story fed a contagious economic narrative that helped augment the atmosphere of fear associated with the contraction in aggregate demand during the Great Depression.
Albert Einstein, the world’s most celebrated physicist, believed this narrative in 1933, at the very bottom of the Great Depression, attributing the slump to technical progress: “According to my conviction it cannot be doubted that the severe economic depression is to be traced back for the most part to internal economic causes; the improvement in the apparatus of production through technical invention and organization has decreased the need for human labor, and thereby caused the elimination of a part of labor from the economic circuit, and thereby caused a progressive decrease in the purchasing power of the consumers.”
The same “zero hour” for the labor-saving machinery economic narrative that appeared in 1929 reappeared late in the second half of the twentieth century, but in mutated forms. The term singularity began to be used after Einstein published his general theory of relativity in 1915. The word denotes a situation in which some terms in the equations become infinite, and it was used to describe the astronomical phenomenon of what came to be called the black hole: a “singularity in space-time.” But later the glamorous term singularity came to be defined as the time when machines are finally smarter than people in all dimensions.
This new twist in the fear-of-automation narrative around 1995 did not immediately produce a recession. Most people were not moved to curtail spending because of it, and the world economy boomed. The dominant narratives in the 1990s seemed to be focused on the wonderful business opportunities brought by the coming new millennium. The automation narratives trailed off again in the 2000s, with the distractions of the dot-com boom, the real estate boom, and the world financial crisis of 2007–9. But the automation narratives are still with us, described by new catchphrases.
Recent talk has stressed machine learning, in which computers are designed to learn for themselves rather than be programmed using human intelligence. A Google Trends search for Web searches for machine learning reveals a strong uptrend since 2012, with the Google search index more than quadrupling between 2004 and 2019. The narrative is propelled by recent stories. The highly successful chess computer program AlphaZero is described as working purely through machine learning—that is, without use of any human ideas about how to play chess. This narrative describes a tabula rasa program that plays vast numbers of chess games against itself, given no more information than the rules of the game, and learns from its mistakes.22 In some ways, the machine learning narrative is more troubling than computers running human-generated programs. Historian Yuval Noah Harari describes this narrative as leading toward a “growing fear of irrelevance” of ourselves and worries about falling into a “new useless class.”23 If they grow into a sizable epidemic, such existential fears certainly have the potential to affect economic confidence and thus the economy.
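The tabula rasa self-play idea can be illustrated at toy scale. The sketch below is a hypothetical miniature, a tabular Monte Carlo self-play learner for the game of Nim; it is emphatically not the AlphaZero algorithm, which combines deep networks with tree search. The point it shares with the narrative is that the program is given nothing but the rules, plays against itself, and learns winning moves purely from game outcomes.

```python
import random

def train_selfplay(heap=10, episodes=20000, alpha=0.3, eps=0.2, seed=0):
    """Tabula rasa self-play for Nim (take 1-3 stones; whoever takes the last
    stone wins). The learner knows only the rules and learns from outcomes."""
    rng = random.Random(seed)
    # Q[(stones_left, action)] = estimated value for the player about to move.
    Q = {(s, a): 0.0 for s in range(1, heap + 1) for a in (1, 2, 3) if a <= s}
    for _ in range(episodes):
        s, moves = heap, []
        while s > 0:
            acts = [a for a in (1, 2, 3) if a <= s]
            # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
            a = rng.choice(acts) if rng.random() < eps else max(acts, key=lambda x: Q[(s, x)])
            moves.append((s, a))
            s -= a
        # The player who took the last stone won; credit alternates backward.
        outcome = 1.0
        for (s, a) in reversed(moves):
            Q[(s, a)] += alpha * (outcome - Q[(s, a)])
            outcome = -outcome
    return Q

def best_move(Q, stones):
    return max((a for a in (1, 2, 3) if a <= stones), key=lambda a: Q[(stones, a)])
```

After training, the greedy policy discovers Nim's winning strategy, leaving the opponent a multiple of four stones, even though nothing about that strategy was programmed in.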
Henry George’s solution to the labor-saving machines problem—and the defining proposal of his book Progress and Poverty, published during the depression of the 1870s—was to impose a single tax on land, to tax away the labor-saving inventions’ benefits to landowners. George’s proposal assumed that the sole purpose of the new machines was to work the land, which might be the case if the economy is purely agricultural. This proposal is analogous to the much-discussed “robot tax” that appeared in public discussion during the Great Depression and has reappeared in the last few years. Taxing companies that use robots, the argument goes, will provide revenue to help the government deal with the unemployment consequences of robotics.25 George proposed to distribute part of the tax proceeds as a “public benefit.”26 His proposal is essentially the same universal basic income proposal that is talked about so often today: “In this all would share equally—the weak with the strong, young children and decrepit old men, the maimed, the halt, and the blind, as well as the vigorous.”
Traditionally, prices of new homes were widely thought to be dominated by construction costs.6 Indeed, it used to be conventional wisdom that home prices closely tracked construction costs. A 1956 National Bureau of Economic Research study noted some short-term movements in US home prices not explained by construction costs between 1890 and 1934, but it concluded: With regard to long-term movements, however, the construction cost index conforms closely to the price index, corrected for depreciation.… For long-term analysis the margin of error involved in using the cost index as an approximation of a price index cannot be great.7 Because their construction cost index included only wages and materials prices, not the price of land, the NBER analysts were viewing investments in homes as nothing more than holdings of depreciating structures, wearing out through time and tending to go out of fashion. With such a narrative, housing bubbles have little chance of getting started.
Social psychologist Leon Festinger described a “social comparison process”10 as a human universal. People everywhere compare themselves with others of similar social rank, paying much less attention to those who are either far above them or far below them on the social ladder. They want a big house so that they can look like a member of the successful crowd that they see regularly. They stretch when they pick the size of their house because they know the narrative that others are stretching. McGinn’s “You Are Where You Live” effect confirms the power of the real estate comparison narrative. As of the early 2000s, when the housing boom was at its peak, there was no other comparable success measure that one could just look up on the Internet.
When a city’s population is expanding, even if the city is not particularly attractive and has no particularly favorable narratives, there will be some people who want to move there. For example, there are always potential immigrants, often from poor or unstable countries, seeking a foothold in advanced countries, and they may choose cities based on arbitrary factors such as proximity to their home country or the existence of a subpopulation speaking their language in the destination city. If land is readily available for purchase there, new houses will be built, and the immigrants’ demand for housing may have minimal impact on prices. But if such land has run out, these immigrants will have to outbid others for existing houses, and home prices will rise. In that case, only the wealthier buyers will be able to live in that city. People who are already living in the city but have no special interest in it have an incentive to sell their houses and take the proceeds to another more affordable house in another city. The supply constraint thus results in higher prices and a wealthier population in that city.
Then, in the early 2000s, during the enormous home price boom, the term flipper became attached to people who bought homes, fixed them up a little or a lot, and sold them quickly. Once again admiring stories were told of their successes. While most people were not enthusiastic enough to actually flip houses, they may have imagined that they were engaged in “long-term flipping” simply by purchasing a primary residence as a long-term investment. Thus they engaged the speculation narrative.
In this present season, on the contrary, conservative opinion has frankly and emphatically expressed the unfavorable view. In a succession of utterances by individual financers [sic] and at bankers’ conferences, the prediction has been publicly made that the end of the speculative infatuation cannot be far off and that an inflated market is riding for a fall.4 Clearly, evidence of speculation was available to the public, which read about it in the news and talked about it on train cars. For example, in the year before its 1929 peak, the US stock market’s actual volatility was relatively low. But the implied volatility, reflecting interest rates and initial margin demanded by brokers on stock market margin loans, was exceptionally high, suggesting that the brokers who offered margin loans were worried about a big decline in the stock market.5 So the evidence of danger was there in 1929 before the market peak, but it was controversial and inconclusive. A high price-earnings ratio for the stock market can predict a higher risk of stock market declines, but it is not like a professional weather forecast that indicates a dangerous storm is coming in a matter of hours. Most people will heed that kind of storm warning. However, in 1929 a great many people did not heed the warning communicated by the high price-earnings ratio. After the crash, many of them must have remembered the warnings and wondered why they had not listened.
The 1987 epidemic drew much of its strength from memories of 1929. Suicides were attributed to the 1987 crash too, but these stories do not seem to have formed long-term memories: no strong narrative developed, and there was no reinforcing story of depression after 1987. A 50% margin requirement, in force in 1987 but not in 1929, meant that in the United States many fewer people were “wiped out” or “ruined” by the 1987 crash than by the 1929 crash.
Policymakers might take a lesson from both the real estate bubble narratives and the stock market crash narratives: during economic inflections, there is real analytical value to looking beyond the headlines and statistics. We should also consider that certain stories that recur with mutations play a significant role in our lives. Stories and legends from the past are scripts for the next boom or crash.
Anger at business varies through time. People may start thinking business is evil when prices of consumer goods increase substantially. Narratives blame business aggressiveness for rising prices, and public anger may continue after the inflation stops, if the public believes that prices are still too high. Anger can also become inflamed when businesses cut wages. Such anger may induce organized boycotts or disorganized decisions to postpone spending until prices are lower. In such cases, people view their buying decisions in moral terms, not just as satisfying their wants.
The boycott narrative and others in its constellation tend to recur when there is a broad-based undercurrent of social opprobrium, and they are economically important because they affect people’s willingness to spend and willingness to compromise.
By the middle of the depression of the 1890s, the narrative began to change, and the public was becoming fed up with a constant succession of boycotts. The moral authority of boycotts disappears when most people begin to express suspicion and annoyance with them. As Wolman notes: The influence of the American Federation of Labor has been exerted in inducing in its members a greater conservatism in the employment of the boycott. Practically the great majority of its legislative acts from 1893 to 1908 have been designed to control the too frequent use of the boycott. At the convention of 1894 the executive council remarked “the impracticability of indorsement of too many applications of this sort. There is too much diffusion of effort which fails to accomplish the best results.” Thereafter, every few years saw the adoption of new rules restricting the endorsement of boycotts.
After World War I, with immediate postwar inflation totaling 100%, a deflation narrative developed by 1920. The story that consumer prices would fall dramatically was strongly contagious owing to its association with the profiteer narrative. Indeed, during the 1920–21 depression, thousands of newspaper articles noted that certain individual prices had fallen to their prewar 1913 or 1914 levels. The newspapers’ writers and editors knew that readers would respond well to such stories because, to most people, it seemed natural that once the war was over, prices would return to their old levels: a very important perceived “return to normalcy” that might eventually encourage consumers to buy a new house or a new car, but only after prices came down fully.
As one observer wrote in 1920: The buying public knows that the war is over and has reached the point where it refuses to pay war prices for articles. Goods do not move, for people simply will not buy.6 Populist anger grew, along with protests against profiteering manufacturers and retailers. The protests sought to take advantage of a basic economic principle: If people determine to buy foodstuffs or anything else only what they actually cannot do without, the working of the inexorable law of supply and demand will operate automatically to bring conditions to a more normal state.7 Thus thrift became a new virtue as people waited for the return of the “normal” prices of 1913.
The economic narrative of the 1920s created an emotionally rich atmosphere of expectations about falling prices. The narrative was not only that it was smart to postpone purchases, but also that it was moral and responsible to do so.
The profiteer narratives did not stop with the end of the war in 1918. During the postwar inflation, in 1920 and 1921, narratives spread of customers angry at high prices chastising their milkman and telling their butcher they would stop eating meat altogether to spite them. Economists understood why wartime inflation continued until 1920 (heavily indebted governments faced troubles from a war-disrupted economy and did not want to raise taxes or raise interest rates, which would add to their deficit), but the public at large did not. The public began to view the wartime experience and the immediate postwar experience in terms of a battle between good and evil. The popular author Henry Hazlitt wrote in 1920: Hence we have self-righteous individuals on every corner denouncing the outrages and robberies committed by a sordid world. The butcher is amazed at the profiteering of the man who sells him shoes; the shoe salesman is astounded at the effrontery of the theatre ticket speculator; the theatre ticket speculator is staggered at the highhandedness of his landlord; the landlord raises his hands to high heaven at the demands of his coal man, and the coal man collapses at the prices of the butcher.13 We might ask: Did these people deserve to be called profiteers? It seems that their only crime was selling at higher prices in an inflationary period.
In 1917, during World War I, the United States imposed a 60% excess profits tax on profits above the prewar 1911–13 level. The excess profits tax was not revoked until October 1921, because anger at corporations lingered long after the war was over. The tax contributed to the 1920–21 depression by encouraging companies to postpone profits until after the tax was revoked. Meanwhile, people held off buying, not only because of their anger at selfish profiteers but also because of the perceived opportunity to profit from postponing their purchases during a time of falling prices.
Many of the narratives surrounding the recession of 1973–75 had a source in human anger. The most cited cause of this recession—the oil crisis generated by OPEC angrily protesting US support of Israel in the 1973 Yom Kippur War—was only part of the story. The price of oil suddenly quadrupled to unheard-of levels, generating anger among consumers and stories of difficulties dealing with oil rationing in the United States, such as odd-even rationing of gasoline. (Consumers could buy gasoline only on odd-numbered days if their license plate ended with an odd number, and only on even-numbered days if their license plate ended with an even number.) Higher oil prices caused higher electric bills, and anger at the perceived injustice was one of the reasons many people started keeping much of their homes in darkness, as a sort of protest.37 In the period of runaway US inflation of the 1970s, when many viewed inflation as the nation’s most important problem, one observer wrote in July 1974, “Fighting inflation is like fighting a forest fire, it requires courage, team play, and coordinated sacrifice.”38 At the time, US annual inflation was 12%, which was a record high excluding periods surrounding the world wars.
The wage-price spiral narrative took hold in the United States and many other countries around the middle of the twentieth century. It describes a labor movement, led by strong labor unions, that demands higher wages, which management accommodates without losing profits by pushing up the prices of final goods sold to consumers. Labor then uses the higher prices to justify even higher wage demands, and the process repeats itself again and again, leading to out-of-control inflation. The blame for inflation thus falls on both labor and management, and some may blame the monetary authority, which tolerates the inflation. This narrative is associated with the term cost-push inflation, where cost refers to the cost of labor and inputs to production. It contrasts with a different popular narrative, demand-pull inflation, a theory that blames inflation on consumers who demand more goods than can be produced.
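The feedback loop at the heart of the spiral can be sketched as a toy calculation. The numbers here are purely hypothetical and are not from the text: assume that each round labor demands a fixed real wage push on top of catching up with the previous round’s price inflation, and that firms pass wage growth fully through to prices.

```python
# Toy sketch of the wage-price spiral narrative. All numbers are
# hypothetical; this is an illustration, not a model from the text.

def wage_price_spiral(rounds, wage_push=0.05):
    """Return the path of price inflation over successive bargaining rounds.

    Each round: wage growth = real wage push + catch-up with last
    round's inflation. With full pass-through of labor costs to
    prices, inflation equals wage growth, so it ratchets up each round.
    """
    inflation = 0.0
    path = []
    for _ in range(rounds):
        wage_growth = wage_push + inflation  # catch-up plus fresh demand
        inflation = wage_growth              # full pass-through to prices
        path.append(round(inflation, 4))
    return path

print(wage_price_spiral(4))  # inflation escalates: 5%, 10%, 15%, 20%
```

With no outside anchor—for instance, a monetary authority that refuses to accommodate—the sketch never settles down: each round’s catch-up becomes the floor for the next demand, which is exactly the “out-of-control” quality the narrative emphasizes.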
In contrast to the 1920s, described in the preceding chapter, there were now multiple possible sources of evil behind inflation: not only evil businesses of various kinds, but now evil labor as well.
Social media and search engines have the potential to alter the fundamentals of contagion. In the past, ideas spread in a random, nonsystematic way. Social media platforms make it possible for like-minded people with extremist views to find one another and further reinforce their unusual beliefs, and this contagion is not slowed down by fact-checkers. In contrast to the randomness of the past, the Internet and social media allow ideas to be spread with central control that is nonetheless barely visible. Designers of social media and search engines have the ability to alter the nature of contagion, and society is increasingly demanding that they do so to prevent devious use of the Internet and the spread of fake news.