The Rise and Fall of the Neoliberal Order: America and the World in the Free Market Era

The battle between America’s left and right reached a deeply polarizing moment with the election of President Trump. The left wants the U.S. government to spend more on education, healthcare, and efforts to reduce social inequality, while the right wants to cut regulation and push free market ideology to its extreme. Going back to the period following World War II, Gary Gerstle traces the rise and fall of the New Deal order between the 1930s and the 1970s, followed by the (now dying) neoliberal order we know today. The book offers a balanced view of the need to foster innovation by maintaining a liberal order and free markets, while allowing government intervention to solve the problems the free market cannot fix – a lesson for our own moment.

In the last hundred years, America has had two political orders: the New Deal order that arose in the 1930s and 1940s, crested in the 1950s and 1960s, and fell in the 1970s; and the neoliberal order that arose in the 1970s and 1980s, crested in the 1990s and 2000s, and fell in the 2010s.

The New Deal order was founded on the conviction that capitalism left to its own devices spelled economic disaster. It had to be managed by a strong central state able to govern the economic system in the public interest. The neoliberal order, by contrast, was grounded in the belief that market forces had to be liberated from government regulatory controls that were stymieing growth, innovation, and freedom.

Political orders, in other words, are complex projects that require advances across a broad front. New ones do not arise very often; usually they appear when an older order founders amid an economic crisis that then precipitates a governing crisis. “Stagflation” precipitated the fall of the New Deal order in the 1970s; the Great Recession of 2008–2009 triggered the fracturing of the neoliberal order in the 2010s.

The fear of communism made possible the class compromise between capital and labor that underwrote the New Deal order. It made possible similar class compromises in many social democracies in Europe after the Second World War.

Reformers risked being tagged with the kiss-of-death label, “soft on communism.” But the threat of communism, I argue, actually worked in a quite different direction: It inclined capitalist elites to compromise so as to avert the worst. American labor was strongest when the threat of communism was greatest. The apogee of America’s welfare state, with all its limitations, was coterminous with the height of the Cold War. The dismantling of the welfare state and the labor movement, meanwhile, marched in tandem with communism’s collapse.

The New Deal order gained its power not just from dependable electoral and business constituencies but also from its ability to implant its core ideological principles on the political landscape. One article of faith was that unfettered capitalism had become a destructive force, generating economic instability and inequalities too great for American society to tolerate. The lack of jobs was calamitous; across a decade, the United States struggled with unemployment rates that hovered around and often exceeded 20 percent. This level of market breakdown drove a stake through the ideology of laissez-faire, a shibboleth of American economic life in the late nineteenth and early twentieth centuries. Large majorities of Americans now agreed that some force was necessary to counterbalance the destructive chaos of markets and to manage capitalism’s growth in the public interest. Those pushing for change looked to the federal government as the one institution with the size, resources, and will to perform that role.

If the New Deal order rested on durable electoral constituencies, a class compromise between capital and labor, and a hegemonic belief in the value of government constraining markets, it also brought into politics a distinct moral perspective: first, that public good ought to take precedence over private right; second, that government was the instrument through which public good would be pursued and achieved; and third, that the goal of government action—and a central part of the pursuit of the public good—ought to be to enhance every individual’s opportunities for personal fulfillment.

A commitment to the public good over private right; happiness and expressiveness through consumption; the capacity of the marketplace to deliver on America’s egalitarian promise; and a faith in the ability of expertise to nurture individuality: These were the components of the New Deal order’s moral perspective.

To ensure success in the fight against communism, those in the mainstream of the Republican Party actually acquiesced to the core principles of the New Deal, thereby facilitating the New Deal’s transition from political movement to political order.47 The capitulation of Republicans to the New Deal can best be grasped by a glance at the contrasting fortunes of the party’s two leaders during this time. The first was Robert Taft, senator from Ohio, who desperately wanted to restore laissez-faire and small government to America but could find no way, in a world organized around the Cold War, to persuade enough Republicans and Americans to go along with him. The second was Dwight D. Eisenhower, whose willingness to support the New Deal as president ensured the ascendance of his star as Taft’s faded.

Too many Americans had seen their livelihoods destroyed in the 1930s, the longest period of market failure in American history. National security now required a managed capitalist system; it demanded that the New Deal be maintained, even expanded. Social programs once anathema to Republicans were now legitimate, for they would help to contain the Soviet threat—both at home, so that Americans would have no cause to find communism appealing, and abroad, by demonstrating the success of the American system to the emerging nations of Asia and Africa.

Under Cold War pressure, Eisenhower made the Republican Party a supporter of Democratic Party programs. This was the moment when the New Deal transitioned from political movement to political order, when all meaningful players in the political arena felt compelled to abide by its principles.

The 1960s and 1970s were the New Deal order’s moment of reckoning. As race and Vietnam became the two most important issues in American politics, they created divides among Democratic Party constituents that the New Deal order could not bridge. These divisions were followed by the long economic recession of the 1970s, a recession whose consequences endured because they were associated with underlying changes in the world economy. These three forces—race, Vietnam, and economic decline—battered the New Deal order in the 1960s and 1970s beyond a point where it could repair itself.

The rivalry between the United States and the Soviet Union for the loyalty of what was coming to be called the Third World was fierce. Both sides understood how much the outcome could turn on race. Already in the 1940s the Soviet Union was taking delight in embarrassing the United States internationally, by showing how the institutions of white supremacy in the southern states contradicted America’s professed commitment to the proposition that all men are created equal. The Soviet press was disseminating in Africa and Asia stories about black children in the South being denied adequate schooling, black accident victims dying because no white hospital in the South would admit them, and African diplomats being refused access to white restaurants and washrooms while traveling south of the Mason-Dixon line. The State Department worked hard to counter this image but its efforts would count for little as long as the American South remained segregated. The United States, its foreign policy establishment concluded, had to demonstrate through deeds a commitment to dismantling segregation to achieve racial equality. The Cold War was helping to make civil rights a paramount issue in America.

Prior to 1954, the US Justice Department had begun filing amicus curiae briefs in support of NAACP lawsuits challenging the legality of segregation in the South. In these briefs, the government repeatedly stressed the embarrassment that race discrimination was causing America abroad and the damage it was doing to national security. In the amicus brief filed in Brown, the Justice Department reproduced a statement from former secretary of state Acheson declaring that “hostile reaction [to American racial practices] among normally friendly peoples . . . is growing in alarming proportions” and jeopardizing the “effective maintenance of our moral leadership of the free and democratic nations of the world.”

It compelled members of the American establishment to realize that they had to do something on the American race question, not so much because African American citizens deserved equal rights but because the failure to act would harm the country in its life-and-death international struggle with the Soviet Union.

The anti-government dimensions of Carter’s incipient neoliberalism dominated his presidential rhetoric. “There is a limit to the role and the function of government,” he declared in his 1978 State of the Union address. “Government cannot solve our problems, it can’t set our goals, it cannot define our vision. Government cannot eliminate poverty or provide a bountiful economy or reduce inflation or save our cities or cure illiteracy or provide energy. And government cannot mandate goodness.”41 These declarations constituted a quite extraordinary rejection of the pro-government creed that lay at the heart of the New Deal order. Carter was giving voice to neoliberal stirrings. In concrete terms, they manifested themselves in the deregulatory legislation aimed at the airline, trucking, and railroad industries that Carter pushed through Congress. These laws removed restrictions that had hampered the entry of new providers into these sectors, thereby intensifying competition and stimulating innovation. The changes were most immediately visible in the airline industry, where Congress phased out the lumbering Civil Aeronautics Board (CAB), stripping away its control over routes and fares and opening the industry to new carriers, lower costs, and expanded service.

Politically, liberty signified a determination to limit government and thereby to maximize individual freedom. In the twenty years following 1776, it became fundamental to American political thought and integral to the system of government set up under the US Constitution, ratified in 1789. That system fragmented the authority of the central government into three branches—the executive, the legislative, and the judicial—so as to prevent any one of them from accumulating sufficient power to reproduce the tyranny of George III. The first ten amendments to the Constitution, adopted in 1791, further strengthened the liberal character of American governance by elaborating a set of individual rights—to freedom of speech, assembly, and religion; to petition the government without fear of reprisal; to a swift and fair trial if charged with a crime—that could not be abrogated except under extreme circumstances. The rights enumerated in these ten amendments came to be known as the Bill of Rights and arguably they constitute the greatest liberal document ever produced by the United States. This bill identified a core area of human freedom, asserted its inviolability, and protected it from the exercise of arbitrary government power. This emphasis on human freedom and its protection was foundational to classical liberalism—and to the American republic.

Surveying the dreams unleashed by liberalism in the nineteenth century, the prominent twentieth-century journalist and cultural critic Walter Lippmann, writing from the perspective of the 1930s, declared that liberalism had been nothing short of revolutionary. Liberalism changed “the condition in which men lived,” Lippmann declared. “It was no accident that the century which followed” liberalism’s appearance, Lippmann argued, “was the great century of human emancipation. In that period, chattel slavery and serfdom, the subjection of women, the patriarchal domination of children, caste and legalized class privileges, the exploitation of backwards peoples, autocracy in government, the disfranchisement of the masses and their compulsory illiteracy . . . were outlawed in the human conscience, and in a very substantial degree they were abolished in fact.” Lippmann followed Smith in arguing that this march of human freedom was grounded, first and foremost, in the progress of economic freedom. The advance of economic freedom was rooted, in turn, in liberalism’s success in lifting barriers to individual initiative and in unshackling Smith’s division of labor dynamic, triggering both economic growth and new confidence in the scope of human individuality.10

As they turned toward order, some liberals also began to suggest that not all peoples were equipped with the tools necessary to cultivate their individuality, and thus, that they were not ready for full participation in liberal projects. Modern peoples, the prominent English liberal, Herbert Spencer, argued, were engaged in a Darwinian struggle for freedom in which only the fittest would survive. Not every nation and race, therefore, would succeed in constructing a liberal polity and in handling freedom responsibly. Those who were deemed unfit for such freedom were to be excluded from polities or denied full participation in them.

Both Roosevelt and Wilson rejected the notion that the free market constituted a natural order whose energies were beyond the capacity of humans to manage or redirect. They believed that unregulated markets had produced an intolerable imbalance in power and wealth between employers and employees. The human casualties of a capitalist system had grown too numerous, their injuries too disabling. The time had come for a strong central state to intervene in economic processes, to create a level playing field in which workers and employers could engage each other on more or less equal terms, and to provide a cushion for those who, through injury, unemployment, and poverty, had been cast aside. The Progressive era (1900–1920) was defined by efforts to curb corporate power, to grant labor unions rights to collective bargaining, to inaugurate schemes of social insurance, and to establish a welfare state.

What drew Rougier to The Good Society was Lippmann’s condemnation of laissez-faire, which Lippmann and many others held responsible for liberalism’s early twentieth-century decline. In Lippmann’s eyes, the fatal mistake made by laissez-faire advocates was to think that markets were founded in nature and needed no superintending. To the contrary, Lippmann insisted, markets had never existed in the natural world. They had to be built by human hands and actively maintained. Market malfunctions would inevitably occur and require repair. By insisting that markets were natural creations and in need of no superintendence, laissez-faire dogmatists, Lippmann charged, had unwittingly freed up political space for the superficially attractive but actually wicked collectivisms of the right and left to take hold.

In fact, we can discern in American neoliberalism three quite distinct strategies of reform, or clusters of policy initiatives, each crystallizing in the twenty years between the first meeting of the Mont Pelerin Society and the late 1960s, when the crackup of the New Deal order gave neoliberal reformers their opportunity to gain influence in American politics. The first strategy of reform was to encase free markets in rules governing property and exchange and the circulation of money and credit. Encasement required strong government interventions in economic life, both domestically and internationally. The second strategy was to apply market principles not just to those two areas traditionally identified as market driven (work and production, on the one hand, and income and consumption, on the other) but to all areas of human endeavor. Some neoliberals began extending market analysis into the private realm of family and morality, reimagining these spheres not, in Smithian terms, as preserves of human association and sensibility located at a safe distance from market forces, but as behavior best understood in economistic terms of inputs and outputs, investments and returns. The third strategy sought to recuperate the utopian promise of personal freedom embedded in classical liberalism. This strand privileged not order and control and the analysis of inputs and outputs, as did the first two strategies, but the thrill and adventure of throwing off constraints from one’s person and one’s work. Foucault had argued in his 1978–1979 Collège de France lectures on neoliberalism that this last strategy became particularly influential in America, drawing support from the left as well as the right, its “utopian focus . . . always being revived.”

Neoliberals readily justified giving the government powers it had formerly lacked if such power could be demonstrated to ensure the smooth operation of markets. This strategy was built on a paradox: namely, that government intervention was necessary to free individuals from the encroachments of government. Another way of framing the paradox is to note that the establishment of economic order, or what Hayek called a “constitution of liberty,” was a prerequisite for making possible liberty of individuals from government.

The second neoliberal strategy was more genuinely innovative: expand the terrain of human activities subject to market principles. In classical liberalism, homo economicus, “economic man,” was defined as a man of exchange, who swapped his labor for a wage. The resulting income he then exchanged for goods in the marketplace that he either needed or desired. Much economic analysis revolved around how the two economic spheres of “production” and “consumption” were to be structured on both a micro and macro scale. Other realms of human existence—the family, religion, and politics—were thought to stand apart from these two major arenas of economic exchange and outside the realm of activity encompassed by homo economicus. Neoliberals, however, began to argue that economic man could not be comprehended in such narrow terms. Rather, economic man was himself a repository of capital. He was the producer of his own wants and needs; he was what Foucault would later call “an entrepreneur of himself,” “being . . . his own capital.”34 The concept of homo economicus, therefore, had to be expanded to include the various investments that such a man and others had made in his personhood. A capitalist of the self, homo economicus was always being called on to make decisions about how to deploy his capital so as to satisfy his needs and wants.

Terminology will be something of a problem as we go forward, because so many of the advocates of free markets in America would end up calling themselves conservatives, largely a result of the theft of the liberal label by the Rooseveltian New Dealers fifty years earlier. That theft qualifies as one of history’s great terminological heists.70 Many of the principals in the story of neoliberalism’s rise, as a result of the heist, ended up identifying themselves as conservatives. But conservative is not a good descriptor of the commitment to free market capitalism that lay at the heart of their worldview. Free market capitalism connotes dynamism, creative destruction, irreverence toward institutions, and the complex web of relations that embed individuals in those institutions. This sort of capitalism, in other words, is the enemy of what conservatives in the classical sense value: order, hierarchy, tradition, embeddedness, continuity.

Powered to the presidency in 1980 by constituencies he had peeled away from the Democrats (whites in the South and white urban ethnics in the North), Reagan began to implement his neoliberal vision for American life across a broad front: deregulating the economy; stripping the government of power and resources; reshaping the courts and their jurisprudence; establishing new rules to “free” political conversation from the grip of establishment, and allegedly, New Deal–oriented, media; and cultivating a neo-Victorian moral code to gird Americans against the temptations of excess that were forever present in an economy given over to market freedom. Reagan’s two terms were not sufficient to transform everything. But by the time he left office in 1989, he had profoundly altered the landscape of American politics. Money, votes, policies, jurisprudence, media influence, and a strong moral stance: These were all part of the architecture of an ascending neoliberal order. They came together under the presidency of Ronald Reagan.

Corporate political action committees (PACs) constituted a second new form of business mobilization. Campaign finance law changes in the 1970s had made it possible for individual corporations to ask their employees to contribute to a company’s PAC, with decisions about which policies and political campaigns to support left in the hands of firm owners and executives. This rule change dramatically increased the potential for corporate influence, as the ceiling on the size of PAC contributions permitted by law was much higher than the amount a single wealthy individual could donate to a candidate.

Many businessmen who were intrigued with Reagan were not as radical as Simon and not bent, initially at least, on upending the New Deal order. Quite a number of them had long benefited from the rich set of government-business relations that the New Deal order had engendered. What, then, made them willing to consider the more radical path advocated by Reagan? Three factors stand out. First, the American economy performed poorly for much of the 1970s, its reputation for global preeminence now tarnished, the instruments in the Keynesian toolkit stiff and rusty. Second, the sharp escalation of foreign goods invading the US marketplace made many in corporate ranks less willing to tolerate the power of organized labor. Signing on to high wages and benefits as a matter of course, the precedent set by the 1950 Treaty of Detroit, had become a more costly venture in the age of renewed international competition and minuscule productivity increases.12 Growing suspicions that the Soviet economy was in decline, and thus that the communist system perhaps posed less of a threat, may also have signaled to the corporate titans of America that they were now freer to adopt a more antagonistic stance toward labor unions. Indeed, the Business Roundtable’s mobilization to defeat the labor reform act of 1977 was a novel, even shocking, move by corporate America in an industrial relations system still ostensibly governed by the rules hammered out in the Treaty of Detroit thirty years earlier.

Reagan immediately targeted two pillars of the New Deal for deregulatory treatment: federal government support for collective bargaining and progressive taxation. In 1981, he fired more than 10,000 air traffic controllers who had gone on strike for better pay and improved working conditions. Reagan’s bold move stunned the union, the Professional Air Traffic Controllers Organization, which had endorsed Reagan in the 1980 election. It signaled to all public and private employers that he would support a tougher stance toward unions than any administration had since the 1920s. In symbolic terms, his act carried as much significance as the refusal of the Democratic governor of Michigan and President Franklin Roosevelt in 1937 to send National Guard or federal troops to Flint to oust the autoworkers occupying the plants of General Motors. This 1930s refusal signaled that a president and his party were serious about compelling corporations to reach fair agreements with unions that had organized their workers. Similarly, Reagan’s firing of an entire workforce for going on strike was the equivalent of a president sending in the troops to break a strike. It served notice that the president and the dominant party were now ready to eviscerate labor’s power. From that time forward, American workers who went on strike knew that they might well pay for their actions with the loss of their jobs. The American labor movement hemorrhaged members in the 1980s. The decline in labor membership is often not included in accounts of deregulation and neoliberal success. But, in fact, there is no more powerful form of market deregulation than stripping government of its ability to strengthen workers in their negotiations with employers.

Putin desperately wanted the Soviet Union to fight for its life at all costs. Across history, this is what most empires in decline had done. Some went to war. Others tried to save themselves through internal reform, and then repressed their subjects when the reforms unleashed forces of change that no one had anticipated. The Soviet Union could have survived in this manner for decades beyond 1991, especially since no one, outside the mujahideen in Afghanistan, really wanted to fight it. With 500,000 troops in Eastern Europe as late as 1989, and tens of thousands of nuclear warheads spread throughout the country and capable of reaching any destination on earth in an interval ranging from minutes to hours, the USSR still possessed formidable capacity to rain mayhem and destruction on its enemies.

Another consequence of communism’s fall may be less obvious but is of equal importance: It removed what remained in America of the imperative for class compromise. A compromise between capital and labor had been foundational to the New Deal order. Labor had gained progressive taxation, social security, unemployment insurance, the right to organize, a national commitment to full employment, government backing for collective bargaining, and limits on the inequality between rich and poor. Capital had gained assurances that government would act to smooth out the business cycle, maintain a fiscal and monetary environment that would assure reasonable profits, and contain labor’s power. In the 1990s, capital still wanted the US government’s assistance in ordering markets. But in a world cleared of communism, long its most ardent opponent, it felt the need to compromise with labor less and less.

Bush shared Thomas Friedman’s faith in the power of pluralism. And he believed, with Friedman, that pluralism and the free movement of people were the keys to innovation, economic growth, and dynamic capitalism. Freedom in all its forms (to move, to mingle, to communicate, to innovate), Bush remarked in 2003, “unleashes human creativity—and creativity determines the strength and wealth of nations.”43 But what to do in situations where dictators or backward-looking elites had shut off their societies from these neoliberal forces? For Bush, as for Friedman, Arab and Muslim societies in the Middle East had posed this challenge in acute form. In such situations, Friedman had argued, pluralism might have to be imposed on a society, perhaps via American bombs and tanks. Friedman had been an early and ardent supporter of Bush’s war in Iraq. Arab societies, he believed, should not be allowed to opt out of one-worldism. The American invasion, he argued, was designed in part to blow apart the institutions and elites that had closed Muslim societies to the open circulation of ideas, to pluralism, and to opportunity and innovation.

Lawrence Summers, a prominent economist in the Clinton administration soon to play an important role in Barack Obama’s first presidential term, gave voice years later to what may have been on Greenspan’s mind in 2005 and 2006. In a 2013 speech to the International Monetary Fund, Summers seemed to be suggesting that “even a great [financial] bubble” of the sort that had enveloped America in the new century’s first decade had not been enough to stimulate full employment and significant inflation. Why not? Was it possible, Summers wondered, that aggregate global demand, even with easy money, was too weak? Had bubbles become necessary to stimulate demand by making it possible for consumers, now encouraged to deepen the debt side of their household ledger sheets, to increase their purchasing power well beyond what their incomes and savings otherwise would have allowed? Did it follow from Summers’s hypothesis that the global economy may have “needed” Americans to spend beyond their means in order to generate sufficient demand to sustain a global production system with enormous capacity?

Obama’s approach to economic recovery damaged that hegemony further by undercutting one of neoliberalism’s core propositions: namely, that “freeing markets” from government oversight would lead to opportunity and prosperity for all. Freeing the banks from government regulation had produced first the housing bubble and then financial and economic collapse. In the aftermath of the crash, almost no one believed that the failing financial markets could repair themselves. Markets, it turned out, required government intervention and regulation. But of what sort? Was it right for the Obama administration to have privileged the banks in its recovery plans? Was it appropriate for it to have given private insurance companies a central role in its design of health care reform? Or did these decisions simply demonstrate that government was deepening rather than easing the rigging of American life in favor of large institutions that were already dominant? On the left, socialists began to make themselves heard for the first time in decades, arguing that government had to bring back the robust regulatory apparatus of the New Deal order. On the right, pundits would soon begin to talk about a “deep state” that engineered outcomes for the rich and powerful and that was impervious to popular and democratic control. The political repercussions of the Great Recession of 2008 were about to explode on America, and they would rock the neoliberal order to its core.

When neoliberalism was hegemonic, the virtues of free trade and globalization were unassailable. To attack these policies during the neoliberal heyday was to mark oneself as marginal and irrelevant at best and as dangerously delusional at worst. That the two most dynamic contestants for president in 2016, one on the left and the other on the right, were both mounting direct challenges to neoliberal orthodoxies reveals the degree to which the neoliberal order itself was coming under challenge and, perhaps, would soon be knocked from its perch.

The ethnonationalist leaders, Trump included, also possessed authoritarian tendencies. They were impatient with parliaments, independent judiciaries, and other aspects of liberal democracy often associated with systems of neoliberal rule. They saw themselves and each other as strongmen determined to take care of the right people and able to make the tough decisions necessary to do so. They wanted the planet to be governed by force rather than by international law and multinational nongovernmental organizations. They sought a world divided into blocs—East Asia, North America, northern Eurasia, the Middle East, and South Asia among them—each controlled by a regional hegemon. This world would be far from flat; rather, tall, jagged, and hard-to-scale walls would separate blocs (and often the nations within them) from each other.

A political order must have the ability to shape the core ideas of political life. It must be able to do so not just for one political party’s most ardent supporters but for people located across the political spectrum. The New Deal order sold a large majority of Americans on the proposition that a strong central state could manage a dynamic but dangerous capitalist economy in the public interest. The neoliberal order persuaded a large majority of Americans that free markets would unleash capitalism from unnecessary state controls and spread prosperity and personal freedom throughout the ranks of Americans and then throughout the world. Neither of these propositions today commands the support or authority that it once possessed.


Prisoners of Geography: Ten Maps that Explain Everything About the World

The war in Ukraine, the decades-long tension between India and Pakistan, and the melting ice cap in the Arctic: these are all issues discussed by Tim Marshall in his 2015 book, Prisoners of Geography. I picked up this book on the recommendation of my girlfriend’s friend living in Hong Kong in the days following Russia’s invasion of Ukraine this year, and found the author’s analysis of Russia’s need to keep Ukraine in its orbit to be prescient. Of course, this does not justify an unprovoked attack on a sovereign country, but it does help in understanding the geopolitical landscape of the region. Regardless of where you live and your political views, this book provides fact-based analysis of how the world’s great geopolitical chess game could unfold in the coming decades.

When writers seek to get to the heart of the bear they often use Winston Churchill’s famous observation of Russia, made in 1939: “It is a riddle wrapped in a mystery inside an enigma,” but few go on to complete the sentence, which ends “but perhaps there is a key. That key is Russian national interest.” Seven years later he used that key to unlock his version of the answer to the riddle, asserting, “I am convinced that there is nothing they admire so much as strength, and there is nothing for which they have less respect than for weakness, especially military weakness.”

At the end of the Second World War in 1945, the Russians occupied the territory conquered from Germany in Central and Eastern Europe, some of which then became part of the USSR, as it increasingly began to resemble the old Russian empire. In 1949, the North Atlantic Treaty Organization (NATO) was formed by an association of European and North American states, for the defense of Europe and the North Atlantic against the danger of Soviet aggression. In response, most of the Communist states of Europe—under Russian leadership—formed the Warsaw Pact in 1955, a treaty for military defense and mutual aid. The pact was supposed to be made of iron, but with hindsight, by the early 1980s it was rusting, and after the fall of the Berlin Wall in 1989 it crumbled to dust. President Putin is no fan of the last Soviet president, Mikhail Gorbachev. He blames him for undermining Russian security and has referred to the breakup of the former Soviet Union during the 1990s as a “major geopolitical disaster of the century.”

Russia as a concept dates back to the ninth century and a loose federation of East Slavic tribes known as Kievan Rus, which was based in Kiev and other towns along the Dnieper River, in what is now Ukraine. The Mongols, expanding their empire, continually attacked the region from the south and east, eventually overrunning it in the thirteenth century. The fledgling Russia then relocated northeast in and around the city of Moscow. This early Russia, known as the Grand Principality of Muscovy, was indefensible. There were no mountains, no deserts, and few rivers. In all directions lay flatland, and across the steppe to the south and east were the Mongols. The invader could advance at a place of his choosing, and there were few natural defensive positions to occupy.

Whatever its European credentials, Russia is not an Asian power for many reasons. Although 75 percent of its territory is in Asia, only 22 percent of its population lives there. Siberia may be Russia’s “treasure chest,” containing the majority of the mineral wealth, oil, and gas, but it is a harsh land, freezing for months on end, with vast forests (taiga), poor soil for farming, and large stretches of swampland. Only two railway networks run west to east—the Trans-Siberian and the Baikal-Amur Mainline. There are few transport routes leading north to south and so no easy way for Russia to project power southward into modern Mongolia or China: it lacks the manpower and supply lines to do so.

When the Soviet Union broke apart, it split into fifteen countries. Geography had its revenge on the ideology of the Soviets, and a more logical picture reappeared on the map, one where mountains, rivers, lakes, and seas delineate where people live, how they are separated from each other and, thus, how they developed different languages and customs. The exceptions to this rule are the “stans,” such as Tajikistan, whose borders were deliberately drawn by Stalin so as to weaken each state by ensuring it had large minorities of people from other states.

For the Russian foreign policy elite, membership in the EU is simply a stalking horse for membership in NATO, and for Russia, Ukrainian membership in NATO is a red line. Putin piled the pressure on Yanukovych, made him an offer he chose not to refuse, and the Ukrainian president scrambled out of the EU deal and made a pact with Moscow, thus sparking the protests that were eventually to overthrow him. The Germans and Americans had backed the opposition parties, with Berlin in particular seeing former world boxing champion turned politician Vitali Klitschko as their man. The West was pulling Ukraine intellectually and economically toward it while helping pro-Western Ukrainians push it westward by training and funding some of the democratic opposition groups. Street fighting erupted in Kiev and demonstrations across the country grew. In the east, crowds came out in support of the president. In the west of the country, in cities such as L’viv, which used to be in Poland, they were busy trying to rid themselves of any pro-Russian influence. By mid-February 2014, L’viv, and other urban areas, were no longer controlled by the government. Then on February 22, after dozens of deaths in Kiev, the president, fearing for his life, fled. Anti-Russian factions, some of which were pro-Western and some pro-fascist, took over the government. From that moment the die was cast. President Putin did not have much of a choice—he had to annex Crimea, which contained not only many Russian-speaking Ukrainians but most important the port of Sevastopol. This geographic imperative and the whole eastward movement of NATO is exactly what Putin had in mind when, in a speech about the annexation, he said “Russia found itself in a position it could not retreat from. If you compress the spring all the way to its limit, it will snap back hard. You must always remember this.”

Crimea was part of Russia for two centuries before being granted to the Soviet Republic of Ukraine in 1954 by Soviet leader Nikita Khrushchev at a time when it was envisaged that Soviet man would live forever and so Crimea would be controlled by Moscow forever. Now that Ukraine was no longer Soviet, nor even pro-Russian, Putin knew the situation had to change. Did the Western diplomats know? If they didn’t, then they were unaware of rule A, lesson one, in “Diplomacy for Beginners”: When faced with what is considered an existential threat, a great power will use force. If they were aware, then they must have considered Putin’s annexation of Crimea a price worth paying for pulling Ukraine into modern Europe and the Western sphere of influence.

You could make the argument that President Putin did have a choice: he could have respected the territorial integrity of Ukraine. But, given that he was dealing with the geographic hand God has dealt Russia, this was never really an option. He would not be the man who “lost Crimea” and with it the only proper warm-water port his country had access to. No one rode to the rescue of Ukraine as it lost territory equivalent to the size of Belgium, or the state of Maryland. Ukraine and its neighbors knew a geographic truth: that unless you are in NATO, Moscow is near, and Washington, DC, is far away. For Russia this was an existential matter: they could not cope with losing Crimea, but the West could.

President Putin is a student of history. He appears to have learned the lessons of the Soviet years, in which Russia overstretched itself and was forced to contract. An overt assault on the Baltic States would likewise be overstretching and is unlikely, especially if NATO and its political masters ensure that Putin understands their signals. At the beginning of 2016, the Russian president sent his own signal. He changed the wording of Russia’s overall military strategy document and went further than the naval strategy paper of 2015. For the first time the US was named as an “external threat” to Russia.

Why would the Russians want Moldova? Because as the Carpathian Mountains curve around southwest to become the Transylvanian Alps, to the southeast is a plain leading down to the Black Sea. That plain can also be thought of as a flat corridor into Russia, and just as the Russians would prefer to control the North European Plain at its narrow point in Poland, so they would like to control the plain by the Black Sea—also known as Moldova—in the region formerly known as Bessarabia.

Russia’s most powerful weapons now, leaving to one side nuclear missiles, are not the Russian army and air force, but gas and oil. Russia is second only to the United States as the world’s biggest supplier of natural gas, and of course it uses this power to its advantage. The better your relations with Russia, the less you pay for energy; for example, Finland gets a better deal than the Baltic States. This policy has been used so aggressively, and Russia has such a hold over Europe’s energy needs that moves are afoot to blunt its impact. Many countries in Europe are attempting to wean themselves off their dependency on Russian energy, not via alternative pipelines from less aggressive countries but by building ports.

Washington is already approving licenses for export facilities, and Europe is beginning a long-term project to build more LNG terminals. Poland and Lithuania are constructing LNG terminals; other countries such as the Czech Republic want to build pipelines connecting to those terminals, knowing they could then benefit not just from American liquefied gas, but also supplies from North Africa and the Middle East. The Kremlin would no longer be able to turn the taps off. The Russians, seeing the long-term danger, point out that piped gas is cheaper than LNG, and President Putin, with a What did I ever do wrong? expression on his face, says that Europe already has a reliable and cheaper source of gas coming from his country. LNG is unlikely to completely replace Russian gas, but it will strengthen what is a weak European hand in both price negotiation and foreign policy. To prepare for a potential reduction in revenue, Russia is planning pipelines heading southeast and hopes to increase sales to China.

If China did not control Tibet, it would always be possible that India might attempt to do so. This would give India the commanding heights of the Tibetan Plateau and a base from which to push into the Chinese heartland, as well as control of the Tibetan sources of three of China’s great rivers, the Yellow, Yangtze, and Mekong, which is why Tibet is known as “China’s Water Tower.” China, a country with approximately the same volume of water usage as the United States, but with a population five times as large, will clearly not allow that. It matters not whether India wants to cut off China’s river supply, only that it would have the power to do so. For centuries China has tried to ensure that it could never happen. The actor Richard Gere and the Free Tibet movement will continue to speak out against the injustices of the occupation, and now settlement, of Tibet by Han Chinese; but in a battle between the Dalai Lama, the Tibetan independence movement, Hollywood stars, and the Chinese Communist Party—which rules the world’s second-largest economy—there is going to be only one winner. When Westerners, be they Mr. Gere or President Obama, talk about Tibet, the Chinese find it deeply irritating. Not dangerous, not subversive—just irritating. They see it not through the prism of human rights, but that of geopolitical security, and can only believe that the Westerners are trying to undermine their security. However, Chinese security has not been undermined and it will not be, even if there are further uprisings against the Han. Demographics and geopolitics oppose Tibetan independence.

There was, is, and always will be trouble in Xinjiang. The Uighurs have twice declared an independent state of “East Turkestan,” in the 1930s and 1940s. They watched the collapse of the Soviet Union result in their former Soviet neighbors in the stans becoming sovereign states, were inspired by the Tibetan independence movement, and many are now again calling to break away from China. Interethnic rioting erupted in 2009, leading to more than two hundred deaths. Beijing responded in three ways: it ruthlessly suppressed dissent, it poured money into the region, and it continued to pour in Han Chinese workers. For China, Xinjiang is too strategically important to allow an independence movement to get off the ground: it not only borders eight countries, thus buffering the heartland, but it also has oil, and is home to China’s nuclear weapons testing sites. The territory is also key to the Chinese economic strategy of “One Belt, One Road.”

There are similar reasons for the party’s resistance to democracy and individual rights. If the population were to be given a free vote, the unity of the Han might begin to crack or, more likely, the countryside and urban areas would come into conflict. That in turn would embolden the people of the buffer zones, further weakening China. It is only a century since the most recent humiliation of the rape of China by foreign powers; for Beijing, unity and economic progress are priorities well ahead of democratic principles. The Chinese look at society very differently from the West. Western thought is infused with the rights of the individual; Chinese thought prizes the collective above the individual. What the West thinks of as the rights of man, the Chinese leadership thinks of as dangerous theories endangering the majority, and much of the population accepts, at the least, that the extended family comes before the individual.

America is committed to defending Taiwan in the event of a Chinese invasion under the Taiwan Relations Act of 1979. However, if Taiwan declares full independence from China, which China would consider an act of war, the United States is not to come to its rescue, as the declaration would be considered provocative.

There are 1.4 billion reasons why China may succeed, and 1.4 billion reasons why it may not surpass America as the greatest power in the world. A great depression such as in the 1930s could set it back decades. China has locked itself into the global economy. If we don’t buy, they don’t make. And if they don’t make, there will be mass unemployment. If there is mass and long-term unemployment, in an age when the Chinese are a people packed into urban areas, the inevitable social unrest could be—like everything else in modern China—on a scale hitherto unseen.

In the twenty-first century, Mexico poses no territorial threat to the United States, although its proximity causes America problems, as it feeds its northern neighbor’s appetite for illegal labor and drugs. In 1821 that was different. Mexico controlled land all the way up to Northern California, which the United States could live with, but it also stretched out east, including what is now Texas, which, then as now, borders Louisiana. Mexico’s population at the time was 6.2 million, the United States’s 9.6 million. The US army may have been able to see off the mighty British, but they had been fighting three thousand miles from home with supply lines across an ocean. The Mexicans were next door. Quietly, Washington, DC, encouraged Americans, and new arrivals, to begin to settle on both sides of the US–Mexican border. Waves of immigrants came and spread west and southwest. There was little chance of them putting down roots in the region we now know as modern Mexico, thus assimilating, and boosting, the population numbers there. Mexico is not blessed in the American way. It has poor-quality agricultural land, no river system to use for transport, and was wholly undemocratic, with new arrivals having little chance of ever being granted land. While the infiltration of Texas was going on, Washington, DC, issued the Monroe Doctrine (named after President James Monroe) in 1823, which boiled down to warning the European powers that they could no longer seek land in the Western Hemisphere, and that if they lost any parts of their existing territory they could not reclaim them. Or else.

Hence, we will see the United States increasingly investing time and money in East Asia to establish its presence and intentions in the region. For example, in Northern Australia the Americans have set up a base for the US Marine Corps. But in order to exert real influence they may also have to invest in limited military action to reassure their allies that they will come to their rescue in the event of hostilities. For example, if China begins shelling a Japanese destroyer and it looks as if they might take further military action, the US Navy may have to fire warning shots toward the Chinese navy, or even fire directly, to signal that it is willing to go to war over the incident. Equally, when North Korea fires at South Korea, the South fires back, but currently the United States does not. Instead, it puts forces on alert in a public manner to send a signal. If the situation escalated it would then fire warning shots at a North Korean target, and finally, direct shots. It’s a way of escalating without declaring war—and this is when things get dangerous.

The German nation state, despite being less than 150 years old, is now Europe’s indispensable power. In economic affairs it is unrivaled; it speaks quietly but carries a large euro-shaped stick, and the Continent listens. However, on global foreign policy it simply speaks quietly, sometimes not at all, and has an aversion to sticks. The shadow of the Second World War still hangs over Germany. The Americans, and eventually the West Europeans, were willing to accept German rearmament due to the Soviet threat, but Germany rearmed almost reluctantly and has been loath to use its military strength. It played a walk-on part in Kosovo and Afghanistan, but chose to sit out the Libya conflict. Its most serious diplomatic foray into a noneconomic crisis has been in Ukraine, which tells us a lot about where Germany is now looking. The Germans were involved in the machinations that overthrew Ukraine’s President Yanukovych in 2014 and they were sharply critical of Russia’s subsequent annexation of Crimea. However, mindful of the gas pipelines, Berlin was noticeably more restrained in its criticism and support for sanctions than, for example, the UK, which is far less reliant on Russian energy. Through the EU and NATO, Germany is anchored in Western Europe, but in stormy weather anchors can slip, and Berlin is geographically situated to shift the focus of its attention east if required and forge much closer ties with Moscow.

Europe’s traditional white population is graying. However, population projections, of an inverted pyramid, with older people at the top and few people to look after them or pay taxes, have not made a dent in the strength of anti-immigrant feeling in what was previously the indigenous population as it sees the world in which it grew up change rapidly. This demographic change is in turn having an effect on the foreign policy of nation states, particularly toward the Middle East. On issues such as the Iraq War, or the Israeli-Palestinian conflict, for example, many European governments must, at the very least, take into account the feelings of their Muslim citizens when formulating policy. The characters and domestic social norms of the European countries are also impacted. Debates about women’s rights and the veiling of women, blasphemy laws, freedom of speech, and many other issues have all been influenced by the presence of large numbers of Muslims in Europe’s urban areas. Voltaire’s maxim that he would defend to the death the right of a person to say something, even if he found it offensive, was once taken as a given. Now, despite many people having been killed because what they said was insulting, the debate has shifted. It is not uncommon to hear the idea that perhaps insulting religion should be beyond the pale, possibly even made illegal.

Africa’s head start in our mutual story did allow it more time to develop something else that to this day holds it back: a virulent set of diseases, such as malaria and yellow fever, brought on by the heat and now complicated by crowded living conditions and poor health-care infrastructure. This is true of other regions—the subcontinent and South America, for example—but sub-Saharan Africa has been especially hard-hit, for example by HIV, and has a particular problem because of the prevalence of the mosquito and the tsetse fly.

Despite having fought five wars with Israel, the country Egypt is most likely to come into conflict with next is Ethiopia, and the issue is the Nile. Two of the continent’s oldest countries, with the largest armies, may come to blows over the region’s major source of water. The Blue Nile, which begins in Ethiopia, and the White Nile meet in the Sudanese capital, Khartoum, before flowing through the Nubian Desert and into Egypt. By this point the majority of the water is from the Blue Nile. Ethiopia is sometimes called Africa’s water tower, due to its high elevation, and has more than twenty dams fed by the rainfall in its highlands. In 2011, Addis Ababa announced a joint project with China to build a massive hydroelectric project on the Blue Nile near the Sudanese border called the Grand Ethiopian Renaissance Dam, scheduled to be finished by 2020. The dam will be used to create electricity, and the flow to Egypt should continue; but in theory the dam could also hold a year’s worth of water, and completion of the project would give Ethiopia the potential to hold the water for its own use, thus drastically reducing the flow into Egypt. As things stand, Egypt has a more powerful military, but that is slowly changing, and Ethiopia, a country of 96 million people, is a growing power. Cairo knows this, and also that, once the dam is built, destroying it would create a flooding catastrophe in both Ethiopia and Sudan. However, at the moment it does not have a casus belli to strike before completion, and despite the fact that a cabinet minister was recently caught on microphone recommending bombing, the next few years are more likely to see intense negotiations, with Egypt wanting cast-iron guarantees that the flow will never be stopped. Water wars are considered to be among the imminent conflicts this century, and this is one to watch.

There is a new scramble for Africa in this century, but this time it is two-pronged. There are the well-publicized outside interests, and meddling, in the competition for resources, but there is also the “scramble within” and South Africa intends to scramble fastest and farthest.

A dusty little town called Amman became the capital of Transjordan, and when the British went home in 1948 the country’s name changed to Jordan. But the Hashemites were not from the Amman area: they were originally part of the powerful Qureshi tribe from the Mecca region, and the original inhabitants were mostly Bedouin. The majority of the population is now Palestinian: when the Israelis occupied the West Bank in 1967, many Palestinians fled to Jordan, which was the only Arab state to grant them citizenship. We now have a situation where the majority of Jordan’s 6.5 million citizens are Palestinian, many of whom do not regard themselves as loyal subjects of the current Hashemite ruler, King Abdullah. Added to this problem are the one million Iraqi and Syrian refugees the country has also taken in who are putting a huge strain on its extremely limited resources.

Groups such as al-Qaeda and, more recently, the Islamic State have garnered what support they have partially because of the humiliation caused by colonialism and then the failure of pan-Arab nationalism—and to an extent the Arab nation state. Arab leaders have failed to deliver prosperity or freedom, and the siren call of Islamism, which promises to solve all problems, has proved attractive to many in a region marked by a toxic mix of piety, unemployment, and repression. The Islamists hark back to a golden age when Islam ruled an empire and was at the cutting edge of technology, art, medicine, and government. They have helped bring to the surface the ancient suspicions of “the other” throughout the Middle East.

For millennia the Jews had lived in what used to be called Israel, but the ravages of history had dispersed them across the globe. Israel remained for them the “promised land,” and Jerusalem, in particular, was sacred ground. However, by 1948 Arab Muslims and Christians had been a clear majority in the land for more than a thousand years. In the twentieth century, with the introduction of the Mandate for Palestine, the Jewish movement to join their minority co-religionists grew, and, propelled by the pogroms in Eastern Europe, more and more Jews began to settle there. The British looked favorably on the creation of a “Jewish homeland” in Palestine and allowed Jews to move there and buy land from the Arabs. After the Second World War and the Holocaust, Jews tried to get to Palestine in even greater numbers. Tensions between Jews and non-Jews reached the boiling point, and an exhausted Britain handed over the problem to the United Nations in 1948, which voted to partition the region into two countries. The Jews agreed, the Arabs said no. The outcome was war, which created the first wave of Palestinian refugees fleeing the area and Jewish refugees coming in from across the Middle East.

The mountainous terrain of Iran means that it is difficult to create an interconnected economy and that it has many minority groups each with keenly defined characteristics. Khuzestan, for example, is ethnically majority Arab, and elsewhere there are Kurds, Azeri, Turkmen, and Georgians, among others. At most, 60 percent of the country speaks Farsi, the language of the dominant Persian majority. As a result of this diversity, Iran has traditionally centralized power and used force and a fearsome intelligence network to maintain internal stability. Tehran knows that no one is about to invade Iran, but also that hostile powers can use its minorities to try and stir dissent and thus endanger its Islamic revolution.

Baluchistan is of crucial importance: while it may contain only a small minority of Pakistan’s population, without it there is no Pakistan. It comprises almost 45 percent of the country and holds much of its natural gas and mineral wealth. Another source of income beckons with the proposed overland routes to bring Iranian and Caspian Sea oil up through Pakistan to China. The jewel in this particular crown is the coastal city of Gwadar. Many analysts believe this strategic asset was the Soviet Union’s long-term target when it invaded Afghanistan in 1979: Gwadar would have fulfilled Moscow’s long-held dream of a warm-water port. The Chinese have also been attracted by this jewel and invested billions of dollars in the region. A deep-water port was inaugurated in 2007 and the two countries are now working to link it to China. In the long run, China would like to use Pakistan as a land route for its energy needs. This would allow it to bypass the Strait of Malacca, which as we saw in chapter two is a choke point that could strangle Chinese economic growth.

If Pakistan had full control of Kashmir it would strengthen Islamabad’s foreign policy options and deny India opportunities. It would also help Pakistan’s water security. The Indus River originates in Himalayan Tibet, but passes through the Indian-controlled part of Kashmir before entering Pakistan and then running the length of the country and emptying into the Arabian Sea at Karachi. The Indus and its tributaries provide water to two-thirds of the country: without it the cotton industry and many other mainstays of Pakistan’s struggling economy would collapse. By a treaty that has been honored through all of their wars, India and Pakistan agreed to share the waters; but both populations are growing at an alarming rate, and global warming could diminish the water flow. Annexing all of Kashmir would secure Pakistan’s water supply. Given the stakes, neither side will let go; and until they agree on Kashmir the key to unlocking the hostility between them cannot be found. Kashmir looks destined to remain a place where a sporadic proxy war between Pakistani-trained fighters and the Indian army is conducted—a conflict that threatens to spill over into full-scale war with the inherent danger of the use of nuclear weapons.

Hence, geography has dictated that Pakistan will involve itself in Afghanistan, as will India. To thwart each other, each side seeks to mold the government of Afghanistan to its liking—or, to put it another way, each side wants Kabul to be an enemy of its enemy. When the Soviets invaded Afghanistan in 1979, India gave diplomatic support to Moscow, but Pakistan was quick to help the Americans and Saudis to arm, train, and pay for the mujahideen to fight the Red Army. Once the Soviets were beaten, Pakistan’s intelligence service, the ISI, helped to create, and then back, the Afghan Taliban, which duly took over the country.

So the Taliban bled the British, bled the Americans, bled NATO, waited NATO out, and after thirteen years NATO went away. During this whole period, members of the highest levels of Pakistan’s establishment were playing a double game. America might have its strategy, but Pakistan knew what the Taliban knew: that one day the Americans would go away, and when they left, Pakistan’s foreign policy would still require a Pakistan-friendly government in Afghanistan. Factions within the Pakistan military and government had continued to give help to the Taliban, gambling that after NATO’s retreat the southern half of Afghanistan at the very least would revert to Taliban dominance, thus ensuring that Kabul would need to talk to Islamabad.

The problems that would be created by Korea imploding or exploding would be multiplied if it happened as a result of warfare. Many countries would be affected and they would have decisions to make. Even if China did not want to intervene during the fighting, it might decide it had to cross the border and secure the North to retain the buffer zone between it and the US forces. It might decide that a unified Korea, allied to the United States, which is allied to Japan, would be too much of a potential threat to allow.

Latin America is very fond of the word “hope.” We like to be called the “continent of hope” . . . This hope is like a promise of heaven, an IOU whose payment is always put off. It is put off until the next legislative campaign, until next year, until the next century. —Pablo Neruda, Chilean poet and Nobel Laureate

The limitations of Latin America’s geography were compounded right from the beginning in the formation of its nation states. In the United States, once the land had been taken from its original inhabitants, much of it was sold or given away to small landholders; by contrast, in Latin America the Old World culture of powerful landowners and serfs was imposed, which led to inequality. On top of this, the European settlers introduced another geographical problem that to this day holds many countries back from developing their full potential: they stayed near the coasts, especially (as we saw in Africa) in regions where the interior was infested by mosquitoes and disease. Most of the countries’ biggest cities, often the capitals, were therefore near the coasts, and all roads from the interior were developed to connect to the capitals but not to one another.

The effects of the melting ice won’t just be felt in the Arctic: countries as far away as the Maldives, Bangladesh, and the Netherlands are at risk of increased flooding as the ice melts and sea levels rise. These ramifications are why the Arctic is a global, not just a regional, issue. As the ice melts and the tundra is exposed, two things are likely to happen to accelerate the process of the graying of the ice cap. Residue from the industrial work destined to take place will land on the snow and ice, further reducing the amount of heat-reflecting territory. The darker-colored land and open water will then absorb more heat than the ice and snow they replace, thus increasing the size of the darker territory. This is known as the albedo effect, and although there are negative aspects to it there are also positive ones: the warming tundra will allow significantly more natural-plant growth and agricultural crops to flourish, helping local populations as they seek new food sources.

The melting of the ice cap already allows cargo ships to make the journey through the Northwest Passage in the Canadian archipelago for several summer weeks a year, thus cutting at least a week from the transit time from Europe to China. The first cargo ship not to be escorted by an icebreaker went through in 2014. The Nunavik carried twenty-three thousand tons of nickel ore from Canada to China. The polar route was 40 percent shorter and used deeper waters than if it had gone through the Panama Canal. This allowed the ship to carry more cargo, save tens of thousands of dollars in fuel costs, and reduced the ship’s greenhouse emissions by 1,300 metric tons. By 2040, the route is expected to be open for up to two months each year, transforming trade links across the High North and causing knock-on effects as far away as Egypt and Panama in terms of the revenues they enjoy from the Suez and Panama Canals.


The Future of Money: How the Digital Revolution is Transforming Currencies and Finance

The rise in popularity of cryptocurrencies and fintech companies in both the developed and developing worlds has changed the way money is perceived and transacted. Eswar S. Prasad, Tolani Senior Professor of International Trade Policy at Cornell University and a senior fellow at the Brookings Institution, discusses in his book the impact of these twenty-first-century inventions and their future in a world where capital moves easily across borders. Below are some highlights taken from the book.

Facebook now portrays Diem as a set of digital coins limited to serving as a means of payment fully backed by a reserve constituted by major hard currencies such as the US dollar and the euro. A digital Diem dollar coin will be issued only when, for example, an actual US dollar is deposited into the Diem reserve. The full backing Diem enjoys suggests that it will provide a stable store of value—hence the moniker stablecoin—and will have no monetary policy implications because it will not involve the creation of any new money. Central bankers remain concerned, however, that Facebook could one day deploy its massive financial clout to issue units of Diem backed by its own resources rather than by reserves of fiat currencies. It is an intriguing, and in some ways disturbing, prospect that major multinational social media companies as well as commercial platforms such as Amazon could become important players in financial markets by issuing their own tokens or currencies. Amazon Coins can already be used to buy games and apps on Amazon’s platform; it is conceivable that such tokens could eventually be used for trading a broader range of goods on the platform. The backing of a behemoth company could ensure the stability of the value of its coins and make them a viable medium of exchange, reducing demand for central bank money for commercial transactions.
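To make the fully backed design concrete, here is a minimal, purely illustrative sketch (not Diem’s actual architecture) of a reserve-backed coin: units are minted only when fiat is deposited into the reserve and destroyed when coins are redeemed, so coins outstanding can never exceed the reserve.

```python
# Toy model of a fully reserved stablecoin: one coin is issued per unit of
# fiat deposited, and the reserve shrinks one-for-one on redemption.
class FullyBackedStablecoin:
    def __init__(self):
        self.reserve_usd = 0.0        # fiat held in the reserve
        self.coins_outstanding = 0.0  # coins in circulation

    def mint(self, usd_deposited):
        self.reserve_usd += usd_deposited
        self.coins_outstanding += usd_deposited   # 1 coin per dollar deposited

    def redeem(self, coins):
        assert coins <= self.coins_outstanding
        self.coins_outstanding -= coins
        self.reserve_usd -= coins                 # reserve falls one-for-one

coin = FullyBackedStablecoin()
coin.mint(1_000)
coin.redeem(250)
print(coin.coins_outstanding, coin.reserve_usd)   # 750.0 750.0 -- always equal
```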

Several factors make EMEs and developing economies fertile ground for Fintech innovations. First, as these economies become richer, there is enormous latent demand for higher-quality financial services (for example, wealth management, retirement planning) and products (such as mutual funds, stock options, automobile and mortgage loans) from their fast-expanding middle-class populations. The size of some of these economies also allows innovations to be scaled up quickly to reduce per-unit or per-transaction costs. Second, financial regulators in these countries seem to be more willing to take chances on such advances. In China, payment providers such as Alipay met little resistance from financial regulators in their early days. This enabled them to experiment and innovate, quickly moving from just providing payment apps to offering other financial products, with few constraints. Third, these countries often do not have large, powerful incumbents that thwart progress and block the entry of new firms. Fourth, some of the technologies that are powering financial innovations—especially mobile phone–based technologies—are widely available and do not need massive infrastructure investments.

Modern urban societies are more complex. There remain corners of the world in which the local pub or coffee shop allows regulars to keep a running tab that can be settled at the end of the month. But this is the exception. Most purchases of goods and services have to be paid for before or soon after the nonfinancial part of the transaction is completed. When you buy a new iPhone, paying with a credit card ensures the finality of that payment even though it puts off the day of fiscal reckoning—for a price, of course. The credit card company guarantees that Apple will get its money. After all, that company has ways of imposing a cost on you for defaulting on payments, including by reporting such behavior to a credit scoring agency and hurting your credit score. Thus, the need to establish mutual trust between two parties to an economic transaction can sometimes be circumvented by trust in a third party.

While new technologies hold out the promise of democratizing and decentralizing finance—eroding the advantages of larger institutions and countries and thereby leveling the playing field—they could just as well end up having the opposite effect. Consider network effects, the phenomenon that adoption of a technology or service by more people increases its value, causing even more people to use it and creating a feedback loop that makes it dominant and less vulnerable to competition (think Facebook and Google). Despite the lower barriers to entry, the power of technology could lead to further concentration of market power among some payment systems and financial services providers. Existing financial institutions could co-opt new technologies to their own benefit, deterring new entrants. Even currency dominance could become entrenched, with the currencies of some major economies or stablecoins issued by prominent corporations rivaling national currencies of smaller economies, as well as those with less credible central banks and profligate governments.

Additionally, Fintech and CBDC have social implications. Consider two integral precepts of a free and open society—anonymity (wherein the identities of the parties to a transaction can be concealed even if the transaction itself is not) and privacy (an individual’s control over the collection, dissemination, and use of their personal and transactional data). If cash gave way to CBDC and payment systems were overwhelmingly digital, any notion of maintaining anonymity and privacy in financial matters would be severely compromised. Central banks are, of course, under no obligation, legal or moral, to provide anonymous means of payment such as cash. Still, changing the form of central bank money risks pulling these institutions into debates about social and ethical norms, especially if a CBDC is perceived as a tool enabling the implementation of various government economic and social policies. Such a perception could compromise the independence and credibility of central banks, rendering them less effective in their core functions. In authoritarian societies, central bank money in digital form could become an additional instrument of government control over citizens rather than just a convenient, safe, and stable medium of exchange.

There is a key difference between inside money and outside money that may look like a simple matter of accounting but has important consequences. Inside money is an asset that is in zero net supply in the private sector. That is, if one were to look at the private sector as a whole—individuals, corporations, banks—inside money is, at any given time, entered on the asset side of some balance sheets, and exactly the same total amount is listed on the liability side of other balance sheets. To take one example, a mortgage loan would be a liability to a household that uses it to finance the purchase of a house; that amount would appear as an asset (in the form of a bank deposit) on the balance sheet of the entity that sold that property. The assets and liabilities generated by the creation of inside money exactly offset each other, leaving a zero net position on the overall private-sector balance sheet. Outside money, on the other hand, is a liability on the central bank’s balance sheet but an asset on the overall private-sector balance sheet. Why does inside money matter at all if it just cancels out on the private sector’s balance sheet? It is the creation of inside money by banks that facilitates economic activity. By providing credit to households and businesses, banks enable them to finance purchases of goods and services and undertake investments, thereby increasing economic activity. When a loan is paid back by the household or business that took it out, the corresponding deposit is extinguished, and inside money is destroyed.
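A toy set of balance sheets, with invented figures, can illustrate the netting argument: the mortgage in the example above is a liability for the household, an asset (a deposit) for the seller, and both an asset (the loan) and a liability (the deposit) for the bank, so inside money sums to zero across the private sector, while central bank money does not.

```python
# Illustrative private-sector balance sheets for a single mortgage transaction.
private_sector = [
    {"agent": "household",   "asset": 0,       "liability": 300_000},  # owes the mortgage
    {"agent": "home seller", "asset": 300_000, "liability": 0},        # holds the deposit
    {"agent": "bank",        "asset": 300_000, "liability": 300_000},  # loan vs. deposit created
]
inside_money_net = sum(x["asset"] - x["liability"] for x in private_sector)
print(inside_money_net)   # 0 -- inside money nets out within the private sector

outside_money = 50_000    # banknotes held by the public: a central bank liability,
print(outside_money)      # but a net asset for the private sector as a whole
```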

In general, M0 characterizes currency in circulation (banknotes and coins). Certain types of bank deposits share some of the characteristics of cash—they are easily accessible and can be used to make payments. A measure of money that encompasses such deposits is M1. M1 typically includes M0, demand deposits, and checking deposits. A broader monetary aggregate, M2, is popular in academic and policy circles because it includes central bank money and various short-term deposits, and most countries by and large define it similarly. Not surprisingly, M2 is sometimes referred to simply as broad money. In the United States, M2 is defined as “a measure of the U.S. money stock that includes M1 (currency and coins held by the non-bank public, checkable deposits, and traveler’s checks) plus savings deposits (including money market deposit accounts), small time deposits under $100,000, and shares in retail money market mutual funds.”
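The nesting of these aggregates can be written out as simple sums; the figures below are invented, purely to show how each measure builds on the narrower one.

```python
# Illustrative monetary aggregates (all figures invented, in billions of dollars).
currency_in_circulation = 2_000            # banknotes and coins: M0
demand_and_checking_deposits = 4_500
travelers_checks = 1
savings_deposits = 9_000                   # including money market deposit accounts
small_time_deposits = 600                  # time deposits under $100,000
retail_money_market_funds = 1_000

M0 = currency_in_circulation
M1 = M0 + demand_and_checking_deposits + travelers_checks
M2 = M1 + savings_deposits + small_time_deposits + retail_money_market_funds

print(M0, M1, M2)   # each aggregate nests inside the broader one
```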

The Chinese government has long recognized the risks of shadow finance and occasionally tightens the screws on this sector by subjecting shadow banks to greater control, but it has not quashed the sector completely. It turns out that, as they do in many other economies, shadow banks serve a useful function in China. For one thing, the government has been unwilling to crack down on the nexus between state-owned enterprises and state-owned banks, both of which are politically powerful. At the same time, the government recognizes that it needs the private sector to generate employment growth and contribute to the economy’s dynamism. The private sector cannot function without funds, so the government has allowed the shadow banking system to continue.

A key attribute of Chinese digital payments, in addition to their ease of use and high reliability, is their low cost. This renders such payments viable even for microscale transactions—purchasing a piece of fruit or an order of dumplings from a street vendor. The fee paid by merchants on Alipay and WeChat Pay is nominally 0.6 percent of the transaction amount. Both platforms refund the fees if a merchant’s monthly volume is below a certain threshold. And discounts on large volumes imply that the actual fees average out to about 0.4 percent of transaction amounts. This is in stark contrast to the high costs of retail payments in the United States, where credit cards dominate digital payments. Mobile credit card readers have become increasingly popular among small businesses in the United States, but payment processors usually charge 2.5 to 3 percent of the transaction amount plus a monthly fee, which is used to pay interchange fees to credit card companies and assessment fees to credit card networks. Why do these cost differences persist? For one thing, credit card issuers in the United States have effectively co-opted customers to advocate on their behalf. Virtually every major US credit card offers cash back or other types of rewards, making customers eager to use credit cards and forcing merchants to accept them for fear of alienating customers and losing business. Alipay and WeChat Pay, by contrast, do not have any regular rewards programs because their margin on each transaction is already wafer thin.
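The cost gap is easy to see with a back-of-the-envelope comparison using the rates cited above; the monthly sales volume and the fixed processor fee below are hypothetical placeholders, not figures from the book.

```python
# Rough merchant-cost comparison using the passage's rates: ~0.4% effective
# for Alipay/WeChat Pay vs. 2.5 to 3 percent plus a monthly fee for US card
# processing. Volume and the fixed fee are assumptions for illustration.
monthly_volume = 10_000.00      # dollars of digital sales per month (assumed)

china_effective_rate = 0.004    # ~0.4% after volume discounts
us_rate = 0.027                 # midpoint of the 2.5-3% range
us_monthly_fee = 25.00          # assumed fixed fee, for illustration only

china_cost = monthly_volume * china_effective_rate
us_cost = monthly_volume * us_rate + us_monthly_fee

print(f"Alipay/WeChat Pay cost: ${china_cost:.2f}")   # $40.00
print(f"US card processing cost: ${us_cost:.2f}")     # $295.00
```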

In some advanced countries, including the United States, regulation has tended to protect incumbents and limit competition in various parts of the economy. Network effects and outdated antitrust regulations enabled the ascendancy of the Big Tech firms—Amazon, Apple, Facebook, Google—that dominate their respective spaces and gobble up any competitors they cannot squash. The US financial sector does not suffer from such extreme concentration, although the United States certainly has a handful of major banks and payment providers. They do not exert the same degree of dominance as the Big Tech firms; still, stringent regulatory requirements have created barriers to entry in financial markets and kept competition in check.

Moreover, the rewards for validating a block are hardwired to fall over time as more bitcoins get mined. In this process, referred to as Bitcoin halving, the reward generated per block is periodically divided by two to keep the total supply of bitcoins, which will never exceed twenty-one million, from growing too fast. This process of controlling supply is also seen as essential to preserving Bitcoin’s value. Bitcoin halving happens every 210,000 blocks and reduces the reward by 50 percent each time in a geometric progression. The latest Bitcoin halving took place in May 2020, when the reward fell to 6.25 bitcoins for each block mined. The initial block reward was 50, so this means that about 18.4 million bitcoins had been mined by the time this halving took place. The process is expected to end in 2140 with all Bitcoin having been issued.
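The geometric schedule described here can be reproduced with a few lines of arithmetic: summing 210,000 blocks per era at a reward that starts at 50 and halves each era gives roughly 18.4 million coins after the first three eras and an asymptotic cap of about 21 million. A rough sketch:

```python
# Bitcoin's issuance schedule as described above: the block reward starts at
# 50 BTC and halves every 210,000 blocks.
def total_supply(eras=None):
    reward, blocks_per_era, supply, era = 50.0, 210_000, 0.0, 0
    while reward >= 1e-8 and (eras is None or era < eras):  # stop below 1 satoshi
        supply += reward * blocks_per_era
        reward /= 2
        era += 1
    return supply

print(total_supply(eras=3))   # 18,375,000 -- mined before the May 2020 halving
print(total_supply())         # ~21,000,000 -- the asymptotic supply cap
```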

Blockchain technology gets around the verifiability problem through its transparency and also ensures the finality of transactions. Once a block of transactions is validated and added to the blockchain, the transactions can easily be confirmed by anyone with an internet connection who knows where to look. After a transaction is validated through the consensus protocol, there is no going back to erase or modify the record. Given that copies of the blockchain exist on multiple nodes, attempts by one or a few nodes to tamper with the record of transactions would be noticed and rejected by the rest of the network.
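A minimal hash-chained ledger, written here in Python purely for illustration (real blockchains are far more elaborate), shows why tampering with an old record is easy to detect: changing any past transaction changes its block’s hash and breaks every link that follows.

```python
import hashlib
import json

def block_hash(prev_hash, transactions):
    # Hash the block's contents together with the previous block's hash.
    payload = json.dumps({"prev_hash": prev_hash, "transactions": transactions},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def add_block(chain, transactions):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"prev_hash": prev_hash,
                  "transactions": transactions,
                  "hash": block_hash(prev_hash, transactions)})

def verify(chain):
    prev_hash = "0" * 64
    for block in chain:
        # Re-derive each hash; any edit to past data breaks the chain here.
        if block["prev_hash"] != prev_hash or \
           block["hash"] != block_hash(block["prev_hash"], block["transactions"]):
            return False
        prev_hash = block["hash"]
    return True

chain = []
add_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
add_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
print(verify(chain))                            # True
chain[0]["transactions"][0]["amount"] = 500     # tamper with history
print(verify(chain))                            # False
```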

The genius of Bitcoin is its simultaneous creation, out of thin air, of a digital asset that can serve as both a medium of exchange and a store of value. This duality of purpose distinguishes Bitcoin from other payment innovations. Debit and credit cards created a payment technology that makes transactions easier to execute, but they do not fundamentally alter the concept of money. These systems do not create new money but essentially charge a fee for serving as trusted intermediaries facilitating transactions between parties that do not know each other and have no particular reason to trust each other. Bitcoin’s innovations enabling secure transactions between such parties without the intervention of a trusted third party, and through this very process generating the medium of exchange that can be used for more such transactions, are truly ingenious and groundbreaking. For all the marvels of its technology, in practice Bitcoin has proven to be patently ineffective as a medium of exchange. This leaves open the question of whether scarcity by itself is enough for Bitcoin to create and maintain its value. On this point, it must be acknowledged that Bitcoin has (so far) worked better in practice than in theory. As will be discussed in Chapter 5, the values of some newer cryptocurrencies are backed by reserves of a fiat currency or linked to the prices of specific commodities. Such cryptocurrencies are also in effect just payment systems that do not constitute the creation of new money. Bitcoin is thus different in important ways from such cryptocurrencies as it has no backing of any sort, although it is no longer unique, as some cryptocurrencies such as Ether share similar features.

In Proof of Stake, the nodes engaged in validation are referred to as forgers or minters (or, more generically, as validators) because they forge or mint new blocks to be added to the blockchain. This process is less computationally demanding than mining under Proof of Work, and there is no block reward. While Bitcoin awards both a block reward and a transaction fee every time a new block is validated, anyone who contributes to the Proof of Stake system typically earns only a transaction fee. Proof of Stake typically takes on a linear structure, with the percentage of blocks a forger can validate rising as a constant ratio of that forger’s stake in the cryptocurrency. If Bitcoin used this protocol, a node that staked 1 percent of the total amount of staked Bitcoins would be able to validate 1 percent of new transactions that use that cryptocurrency, while another that staked 10 percent of the total would be able to validate 10 percent of new transactions.
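Under the linear rule described above, selection can be thought of as a stake-weighted lottery. The sketch below, with made-up node names and stake sizes, shows a node holding 10 percent of the staked coins validating roughly 10 percent of blocks over many draws.

```python
import random

# Stake-weighted selection of the next forger, assuming the linear rule
# described above. Node names and stake sizes are invented.
stakes = {"node_a": 890, "node_b": 100, "node_c": 10}   # staked coins

def pick_forger(stakes):
    total = sum(stakes.values())
    nodes, amounts = zip(*stakes.items())
    return random.choices(nodes, weights=[s / total for s in amounts])[0]

draws = [pick_forger(stakes) for _ in range(100_000)]
shares = {node: round(draws.count(node) / len(draws), 3) for node in stakes}
print(shares)   # node_b, with 10% of the stake, forges roughly 10% of blocks
```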

The more you stake, the more you earn. At the same time, though, the more you lose if you go against the system. This model also prevents groups of nodes from joining forces to dominate the network just to make a profit. Instead, those who contribute to the network by freezing their coins are rewarded proportionately to the amount they have invested. When using a Proof of Stake consensus mechanism, it would not make financial sense to attempt a 51 percent attack. A malicious node would need to acquire a majority of the coins in circulation, which would lead to a rise in the price for the coins that might ultimately end up being worth less if trust in the network were damaged. Given all these advantages, the world’s second most valuable cryptocurrency, Ether, which runs on the Ethereum blockchain, is in the process of moving from Proof of Work to Proof of Stake. This process, which was slated to happen in early 2020, was pushed back to an indeterminate date that, as of May 2021, had not yet been finalized. When this eventually happens (probably in 2022), the number of Ether transactions that can be processed is expected to increase to thousands per second.

Smart contracts are self-executing computer programs that perform predefined tasks based on a predetermined set of criteria or conditions. These programs cannot be altered once deployed—their integrity is protected by the public and transparent nature of the blockchain. This ensures the faithful completion of contractual terms agreed to by the relevant parties. A smart contract in effect plays the role of the trusted third party normally invoked to complete such transactions. Instead of a middleman who holds the relevant assets (or asset and corresponding payment) in escrow to make sure both parties fulfill their commitments, the escrow account is operated autonomously via a smart contract with predefined rules. Smart contracts can include deadlines that make them useful for time-sensitive transactions and also reduce counterparty risk. Smart contracts are usually set up such that the entire transaction will fail if any of the multiple steps involved in it cannot be executed, a feature referred to as atomicity.
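The escrow logic and the all-or-nothing (atomic) settlement can be sketched in ordinary code. The class below is an off-chain, illustrative stand-in for a smart contract, not Solidity or any real contract platform; the asset name, amounts, and deadline are invented.

```python
# Toy escrow: settlement happens only if every predefined condition holds;
# otherwise nothing changes hands (atomicity).
class EscrowContract:
    def __init__(self, seller_asset, price, deadline):
        self.seller_asset, self.price, self.deadline = seller_asset, price, deadline
        self.asset_deposited = False
        self.payment_deposited = 0

    def deposit_asset(self):
        self.asset_deposited = True

    def deposit_payment(self, amount):
        self.payment_deposited = amount

    def settle(self, now):
        # Predefined conditions: both deposits made before the deadline.
        if now <= self.deadline and self.asset_deposited and self.payment_deposited >= self.price:
            return "swap executed: asset to buyer, payment to seller"
        # Atomic failure: neither leg executes, deposits are returned.
        return "swap cancelled: all deposits refunded"

escrow = EscrowContract(seller_asset="token_123", price=100, deadline=20)  # time in abstract units
escrow.deposit_asset()
escrow.deposit_payment(100)
print(escrow.settle(now=15))   # swap executed
```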

Some ICOs take the form of Equity Token Offerings (ETOs). A company conducting an ETO adds shares to its capital. These shares, which are recorded on a blockchain, grant investors a percentage of voting rights as well as titles of ownership within the company. This differentiates ETOs from normal ICOs, which do not involve any transfer of ownership stakes.

The outcry from central bankers and financial market regulators around the world was strident and predictable, although it seemed to have caught Facebook by surprise. The thrust of the criticisms was that if Libra were to gain traction, in light of the enormous international network of Facebook members, there would be scope for the cryptocurrency to be delinked from the reserve and for Facebook to become an unregulated creator of money, with implications for both monetary policy in individual countries and cross-border financial flows. Global central bankers led the charge, warning of the dangers posed by Libra. Their remarks were uncharacteristically sharp and forceful, departing from their normal understated style of commentary. At a congressional committee hearing a few weeks after the Libra announcement, Fed chair Jerome Powell stated that “Libra raises a lot of serious concerns, and those would include around [sic] privacy, money laundering, consumer protection, financial stability.” Soon thereafter, then-European Central Bank president Mario Draghi laid out a menu of concerns about Libra, including cybersecurity, money laundering, terrorism financing, privacy, monetary policy transmission, and financial stability. Mark Carney, who was then the Bank of England governor, defended the objectives of Libra but cautioned that Facebook could not expect a free pass from regulators: “In terms of how this will proceed or not going forward, this will not be like social media. This will not be a case where something gets up and starts running and the system tries to work out after the fact how it’s regulated. It’s either going to be regulated properly, overseen properly, or it’s not going to happen.” In September 2019, the French and German governments issued a joint statement announcing their intention to block Libra, noting that “no private entity can claim monetary power, which is inherent to the sovereignty of nations.”

To sum up, Libra is envisioned as a set of stablecoins that will be limited in function to serving as mediums of exchange. The coins will be fully backed by fiat currency reserves, and the issuance of the coins will not represent the creation of new, unbacked money. They will have many of the desirable properties of cryptocurrencies: the ability to send money quickly, the security of cryptography, and the freedom to easily transmit funds across borders. One crucial difference lies in the trust model: unlike “open,” decentralized cryptocurrencies such as Bitcoin and Ethereum, Libra limits network participation, making it “permissioned” (a restriction that applies to validator nodes, which must be approved, rather than to users of Libra).

Cryptocurrencies might ultimately turn out to be nothing more than sophisticated and convoluted pyramid schemes that one day result in significant economic pain for cryptocurrency enthusiasts. When such schemes unravel, they can have a disproportionate impact on gullible and vulnerable investors who can least afford such losses.

The proliferation of cryptocurrencies and their relationship to fiat currencies, whether physical or digital, is likely ultimately to hinge on how effectively each currency delivers on its intended functions. In this sense, by parceling out the various functions, cryptocurrencies have already changed the nature of money. Fiat money bundles together multiple functions as it serves as a unit of account, medium of exchange, and store of value. Now, with the advent of various forms of digital currencies, these functions can be separated conceptually.

Thus, even if a CBDC was managed using blockchain or any form of DLT, it would be a permissioned blockchain in contrast to the decentralized, permissionless one of the sort used by Bitcoin. There are in fact a couple of government-issued digital currencies being designed to operate on permissioned blockchains. This group, which I will refer to as official cryptocurrencies, constitutes a third and somewhat peculiar conception of CBDC, which ostensibly provides greater user anonymity. Such a cryptocurrency is issued and managed by a government agency or a private agency explicitly designated for the purpose. The validation of transactions is done in a decentralized manner (usually through a Proof of Stake consensus mechanism) but only by approved entities rather than through an open decentralized mechanism.

The Riksbank notes that an e-krona could alleviate the problem of concentration in the payment infrastructure and also its potential vulnerability to loss of confidence. The digital currency would be based on a separate infrastructure that would also be open to private agents willing to offer payment services linked to the e-krona. The general public would have access to the e-krona, with both suppliers of payment services and Fintech companies allowed to operate on the central bank’s network. Thus, an e-krona system would be designed to promote competition and innovation rather than displace private payment systems.

The Bank of Canada, for instance, has indicated that it is conducting contingency planning for launching a CBDC, with two scenarios seen as triggers for a launch. First, the use of banknotes could decline to a point where Canadians could no longer use them for transactions. Second, one or more private-sector digital currencies could become widely used as an alternative to the Canadian dollar as a method of payment, store of value, and unit of account. Under either of these scenarios, “a CBDC could be one way of preserving desirable features of the current payment ecosystem, such as universal access to secure payments, an acceptable degree of privacy, competition, and resilience. The second scenario in particular would constitute a significant challenge to Canada’s monetary sovereignty—our ability to control monetary policy and provide services as lender of last resort.”

An account-based CBDC that replaced cash would free up monetary policy in a way that turns out to be quite important for economies facing severe recessions related to financial market meltdowns, as happened in 2008–2009, or other major adverse events such as the worldwide coronavirus outbreak in 2020. With an account-based CBDC, the central bank would find it easier to impose a negative nominal interest rate. In the absence of cash, the zero lower bound would no longer be a constraint on pushing down nominal interest rates. Even in an economy facing deflation, this would make it feasible to drive the real (inflation-adjusted) interest rate low or even negative.
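A stylized calculation, using the standard approximation that the real rate is the nominal rate minus inflation (all numbers hypothetical), shows why removing the zero lower bound matters during deflation.

```python
# Real rate ~= nominal rate - inflation (a standard approximation).
inflation = -0.01                 # 1% deflation
nominal_with_cash_floor = 0.00    # zero lower bound while cash exists
nominal_with_cbdc = -0.02         # negative rate feasible on an account-based CBDC

print(nominal_with_cash_floor - inflation)   # +0.01: real rate stays positive, policy stays tight
print(nominal_with_cbdc - inflation)         # -0.01: a negative real rate becomes achievable
```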

A money-financed fiscal stimulus is sometimes more effective than having the government finance its deficit expenditures by issuing more debt that is sold to private investors. The debt-financed approach can lead to higher interest rates, defeating the purpose of the stimulus. But even a money-financed fiscal stimulus could be less efficient than direct helicopter money drops to households and would also run into political complications about who benefits from the government’s largesse and who does not. Moreover, there is some wastage inherent to government spending and some types of spending might prop up economic activity but not afford direct benefits to those most in economic need. In the past, there was no channel through which the central bank could hand out money directly. That could soon change.

Statistics provided by the Riksbank show that in Sweden crimes linked to cash declined sharply as the use of cash plummeted. Reported bank robberies fell from seventy-seven in 2009 to eleven in 2018. Over this period, robberies of cash-in-transit operations fell from fifty-eight to just one, while taxi robberies fell to one-third and shop robberies to less than one-half of their previous levels. In 2013, a local newspaper reported a foiled bank robbery in central Stockholm. The robber left empty-handed because the bank branch did not deal with cash.

The size of the shadow economy is not an innocuous matter. Unpaid taxes mean lower government revenues that could have been used for social expenditures, infrastructure investment, and other productive government spending. This reduces a country’s economic growth and the welfare of its citizens. When the average Greek worker sees highly paid professionals blatantly cheating on taxes, it erodes trust in the tax system and the social norms supporting voluntary compliance as well as in the government as a whole. Moreover, the shadow economy can disadvantage honest businesses, lead to worker exploitation, and fuel illegal activities and illicit commerce. The shadow economy can thus undermine state institutions, encouraging crime and reducing support for institutions and ultimately threatening economic and political stability.

To sum up, a CBDC would discourage illicit activity and rein in the shadow economy by reducing the anonymity and nontraceability of transactions now provided by the use of banknotes. This point has been made forcefully by Kenneth Rogoff of Harvard University, especially in the context of high-denomination banknotes. A CBDC would also affect tax revenues, both by bringing more activities out of the shadows and into the tax net and also by enhancing the government’s ability to collect tax revenues more efficiently.

In principle, a central bank can even have a balance sheet that looks insolvent. This, too, does not matter since the central bank can print money and continue to function even if its liabilities exceed its assets such that its net worth is negative. Behind every central bank stands a government that has the authority to levy taxes, thereby generating revenues that can over time help the central bank bring its balance sheet back into shape. Thus, in the long run the central bank is intrinsically safer than any private financial institution, no matter how large that institution or how strong its balance sheet.

This semiapocalyptic vision of the postcash world runs counter to the notion that digital money would help the poor, deter tax evasion and certain forms of crime, and facilitate more efficient economic interactions. The irony in the libertarian position is that it calls for a central bank to provide the instrument (cash) that will, in effect, undermine the government’s ability to enforce its laws and regulations. This is akin to asking the government to build roads and then leave it up to drivers to make up their own laws rather than enforcing speed limits or other rules of the road since that would, presumably, impinge on individual liberties.

One major consequence of a CBDC is likely to be the loss of privacy in commercial transactions. Notwithstanding any protestations to the contrary by governments and central banks contemplating the issuance of CBDC, the traceability of all digital transactions effectively eliminates the possibility of using central bank money for anonymous transactions. Admittedly, there is little reason why a central bank should feel obliged to provide an anonymous payment mechanism. This is certainly not part of any central bank’s legal mandate. One could make the argument that easier monitoring of its citizens’ activities would make the state more effective in reducing illicit commerce and other illegal activities. And that is precisely what creates a risk. An authoritarian government could easily use this heightened surveillance of its citizens to smother dissent and protest. Worse, it could even enable a democratic government that takes an autocratic turn to tighten its control and attempt to subvert the very institutions that have traditionally served as checks and balances on such concentration of power. Fundamental rights such as free speech, free assembly, and peaceful dissent could be threatened.

Holders of the e-CNY receive no interest from the central bank unless the money is deposited into a bank account, where it earns the normal rate. Thus, the e-CNY does not compete with commercial bank deposits, reducing the risk of disintermediation of the banking system. In more technical terms, the e-CNY constitutes “a full reserve system with no derivative deposits or money multiplier effects.”

All merchants in China who accept digital payments such as Alipay and WeChat Pay are required to accept the e-CNY because it is legal tender. Moreover, the e-CNY can be used across apps, which is not the case with the two major private payment platforms that do not support each other. The e-CNY will have near field communication (NFC)–based payment options. This means that two persons with phones that hold e-CNY digital wallets can exchange money by bringing their phones into proximity, even if those phones temporarily lack internet or wireless coverage. Any risks of double-spending in the absence of immediate centralized verification by a payment platform or a bank can be overcome by the electronic traceability of all transactions. Thus, the e-CNY provides the important cash-like feature of portability and at least partial confidentiality for small-scale transactions.

The idea behind some government-issued cryptocurrencies appeared to be that the underlying cryptographic technology would sufficiently obscure the identities of those using the digital currencies, allowing foreign individuals and institutions to conduct transactions with the issuing country without falling afoul of US sanctions. If this logic does not appear sound, there is a reason—it is not. Not only would foreign financial institutions be unwilling to use such currencies that their home country regulators would frown upon, but the fact that even official cryptocurrencies would eventually have to be converted into more reliable currencies could vitiate any attempt to escape the dollar-centric international financial system.

When the Fed hikes rates, as noted earlier, money tends to flow out of EMEs as investors opt for a decent rate of return in a safe investment rather than a higher-return but riskier investment. Such “risk-on” and “risk-off” investor behavior leads to erratic swings in capital flows to EMEs. To the exasperation of policymakers in these countries, they end up being exposed to such volatility even when their policies are disciplined, and their economies are doing perfectly well. In other words, they suffer collateral damage when the Fed uses monetary policy levers to achieve its own (domestic) ends, with minimal regard for the effects of those policies on other economies.

One important requirement of a store of value currency is depth. That is, there should be a large quantity of financial assets denominated in that currency so that both official investors such as central banks and private investors can easily acquire those assets. There is a vast amount of US Treasury securities, not to mention other dollar-denominated assets, that foreign investors can easily acquire. Another characteristic that is important for a store of value, and one that is very much related to its depth, is its liquidity. That is, it should be possible to easily trade the asset even in large quantities. An investor should be able to count on there being sufficient numbers of buyers and sellers to facilitate such trading, even in difficult circumstances. This is certainly true of US Treasuries, which are traded in large volumes. For an aspiring safe haven currency, depth and liquidity in the relevant financial instruments denominated in that currency are indispensable. More importantly, both domestic and foreign investors tend to place their trust in such currencies during financial crises because they are backed by a powerful institutional framework. The elements of such a framework include an institutionalized system of checks and balances, the rule of law, and a trusted central bank. These elements provide a security blanket to investors, assuring them that the value of those investments will be protected and that investors, both domestic and foreign, will be treated fairly and not subject to risk of expropriation.

Having one global currency accepted for transactions in all countries would have some salutary effects. It would eliminate exchange rate volatility, for the simple reason that there would no longer be any national currencies and no currency exchange rates to speak of. There would be no incentive to undertake (or possibility of undertaking) competitive currency devaluations to promote a country’s exports. This disruptive and zero-sum game of currency wars could no longer be used to stimulate economic recoveries. A single and stable currency serving all the functions of money would reduce the need for hedging foreign exchange risk and also eliminate the volatility of import and export prices resulting from exchange rate fluctuations. More importantly, the United States and its central bank would no longer have such a massive impact on global financial markets. A single global currency would, however, impose considerable costs and constraints on national policymakers. It would mean the abandonment of monetary policy autonomy and the elimination of an adjustment mechanism for changing relative prices when a country finds itself hit with an adverse shock specific to it (as distinct from a global shock that has a common effect across all countries). Why, then, do countries voluntarily give up monetary independence and join currency unions such as the eurozone? For one thing, common currency zones bind economies together more tightly, increasing trade and investment flows between them. Eliminating exchange rate volatility within the zone essentially removes one source of uncertainty that affects trade and investment transactions. The second motivation is that, especially for countries with reputations for spendthrift governments and undisciplined central banks, a fixed exchange rate is one way of buying credibility by tying the central bank’s hands on monetary policy.

One major difference between the SDR and a national currency is that the SDR has no real backing. True, the IMF holds some gold and also has money on deposit from its member countries. But unlike a central bank–issued fiat currency that has a national government’s authority to levy taxes behind it, the IMF has no such power. The IMF functions more like a credit union where the shareholders keep deposits that can then be lent out to members in need of short-term loans.

The IMF declares that “the SDR is neither a currency nor a claim on the IMF. Rather, it is a potential claim on the freely usable currencies of IMF members. SDRs can be exchanged for these currencies.” In other words, the IMF guarantees that it will arrange for conversion of a country’s SDR balances, upon request from that country’s government, into any of the currencies that make up the SDR basket (at the relevant SDR exchange rate for each of those currencies). One could argue that this guarantee, which is based on rules that have been agreed to by all IMF members, constitutes a form of backing for SDRs.

Issuing SDRs increases global “liquidity” since they are tradable for currencies in the SDR basket (the central banks issuing those currencies would create the required amounts of money). It is not, however, the most efficient way to channel money to countries that need it the most given the rules that govern how SDRs are distributed. Moreover, making the SDR an international medium of exchange would require substantial changes to its design. Still, the SDR has its advantages. The IMF can essentially create any amount of SDRs out of thin air, which in principle makes it a pliable reserve asset, the supply of which can be increased whenever the need arises. All it takes is agreement among a majority of the IMF’s members. This requirement, however, complicates matters.

The reality has fallen short of these promises and is likely to continue to do so. The Chinese government has shown that when pressures build up for significant currency appreciations or depreciations as capital flow pressures shift, it is prepared to tighten capital controls and exchange rate management to offset those pressures and reduce volatility. It is hard to envision a government that has a command-and-control mentality leaving highly visible and consequential economic variables, such as the renminbi-dollar exchange rate, entirely to market forces. All told, it remains unlikely that the Chinese government will permit a truly open capital account, although it has certainly allowed the exchange rate to move more freely in both directions—appreciation and depreciation—since 2019.

The consensus has by now decisively shifted toward the view that central banks should in fact care explicitly about both key macroeconomic outcomes and financial stability. After all, goes this counterargument to the pure inflation-targeting view, central banks in fact have two tools at their disposal. The first is monetary policy, which comprises such instruments as interest rates, lines of credit to commercial banks, and direct purchases and sales of government securities and other assets. The second tool is the capacity to implement regulatory policies, either at the level of the entire financial system or applied to specific financial institutions. These policies can take a variety of forms. Banks can be instructed to hold more money in reserve in their accounts at the central bank, issue more equity capital that could help absorb any losses they incur, or require larger down payments on mortgage loans they provide. The two objectives—low and stable inflation (along with low unemployment) and financial stability—and the tools to achieve them have come to be seen as inextricably linked. For instance, financial instability can lead to gyrations in economic activity that make it harder to maintain stable inflation. But the lines between the two policies on occasion blur and get tangled up, making policy decisions less straightforward. There are periods of low inflation and decent growth when the stock market might show signs of rising too fast. In such cases, monetary policy might seem on track to hit the inflation mandate, but if it ignored frothy stock prices, it could forgo the opportunity to let some air out of the stock market rather than standing by while it soars and perhaps ultimately crashes. Tightening monetary policy by raising interest rates would cool off the stock market, but this could, on the other hand, come at the price of restraining growth.

The BoC also makes the broader point that the country’s monetary sovereignty would be threatened if a private digital currency not denominated in Canadian dollars were to assume major roles as a unit of account and means of payment in Canada. Such a development would threaten the central bank’s ability to achieve price and financial stability. Households’ spending power would depend on the value of a digital currency over which the BoC would have no influence. Moreover, the BoC notes that its policies related to the role of lender of last resort can be enacted only in the currency supplied by the central bank. The implication is that if an alternative currency were to establish a major foothold in the Canadian economy, the central bank’s firefighting tools would be rendered less potent amid a financial crisis.

One unresolved question is whether nonbank and informal financial institutions are more or less sensitive than traditional commercial banks to changes in policy interest rates. The available evidence on this subject is limited and rather mixed. It is unlikely that such institutions will be entirely isolated from changes in interest rates in the formal banking sector. Yet the sensitivity of these institutions to policy rate changes could be lower than that of commercial banks, especially if they do not rely on wholesale funding—funding from other financial institutions rather than through deposits—and have other ways of intermediating between savers and borrowers. In fact, there is accumulating evidence that both in China and the United States shadow banking interferes with the transmission of monetary policy—for instance, credit growth in this sector tends to rise during periods of monetary tightening, when the central bank is trying to reduce credit growth and cool down economic activity.

Apprehension about Fintech’s impact on systemic financial stability stems mainly from innovations that could displace existing financial institutions, lead to concentration of payment systems, and accentuate technological vulnerabilities. For EMEs, the expansion of conduits for cross-border financial flows with greater efficiency and lower costs could be a double-edged sword, making it easier for these countries to integrate into global financial markets but at the risk of higher capital flow and exchange rate volatility. Such volatility has often caused marked stresses for corporate and sovereign balance sheets in these economies, especially when many of their loans are denominated in foreign currencies.

The costs and benefits of a CBDC are inextricably tied to the reputation of the central bank issuing it. The value and acceptability of any form of central bank money is the product of the institution’s credibility, which in turn depends on its independence and the quality of a government’s fiscal and other economic policies. In other words, absent any other changes, the digital version of a central bank’s fiat currency is likely to fare no better or worse than cash in terms of its acceptability as a medium of exchange and stable source of value. Nevertheless, from other perspectives, such as those of increasing financial inclusion and improving payment systems, there might be advantages to issuing CBDCs even in countries that have macroeconomic problems such as high and volatile inflation and weak policy institutions. There are looming challenges on the external front. EMEs will have to manage new cross-border payment systems and other developments that facilitate easier, cheaper, and quicker international flows of capital. These changes will bring many benefits but also exacerbate capital flow and exchange rate volatility while making capital controls less potent. By promoting digital payments, CBDCs might hasten developments in domestic and cross-border payments and other financial technologies that come back to haunt EME central banks.

For it is hard to speak properly upon a subject where it is even difficult to convince your hearers that you are speaking the truth. On the one hand, the friend who is familiar with every fact of the story may think that some point has not been set forth with that fullness which he wishes and knows it to deserve; on the other, he who is a stranger to the matter may be led by envy to suspect exaggeration if he hears anything above his own nature. —Thucydides, “Pericles’s Funeral Oration,” The Peloponnesian War

Financial innovations will generate new and as yet unknown risks, especially if financial market participants and regulators put undue faith in technology and let down their guard. Decentralization and fragmentation cut both ways. They can promote financial stability by reducing centralized points of failure and increasing resilience through greater redundancy. Distributed ledger technologies (DLTs), for instance, are in many ways more secure and failproof than their centralized counterparts. On the other hand, while fragmented systems can work well in good times, confidence in them could prove fragile in difficult circumstances. If the financial system were to be dominated by decentralized mechanisms that are not directly backed (as commercial banks are) by a central bank or other government agency, confidence could easily evaporate. Thus, fragmentation might yield efficiency in good times and rapid destabilization when economies struggle.

Another irony is that the origin of cryptocurrencies can be traced to a desire to demonstrate that a trusted authority is not needed to accomplish payment clearing and settlement and also to limit government intrusion into private transactions. Instead, the proliferation of these currencies is goading central banks into issuing digital versions of their own currencies, which might end up putting the privacy of even basic transactions all the more at risk of government surveillance.


Key Points from Book: Trillions

TRILLIONS: HOW A BAND OF WALL STREET RENEGADES INVENTED THE INDEX FUND AND CHANGED FINANCE FOREVER

by Robin Wigglesworth

Only 10 to 20 percent of active funds beat their benchmarks over any rolling ten-year period. In other words, investing is a rare walk in life where it generally pays to be lazy and choose a cheap passive fund.

However, in retrospect Seides does make one damning concession: If he was a young man today, he would not choose a career in investing. The profession has become increasingly competitive and difficult, and judging whether someone’s results are due to luck or skill is almost impossible. Moreover, it is a rare career path where experience does not necessarily make you more proficient, and being mediocre is of no value. “Your average doctor can still save lives. But your average investor detracts value from society,” Seides admits.

In other words, while a clever buyer might think he may be landing a bargain, a presumably similarly intelligent seller must be assuming he is getting a good price. Otherwise no deal would be struck. Therefore, at any given moment in time financial securities are priced at the level that investors as a whole and on average consider fair. This was a groundbreaking realization. And that was not all. Bachelier showed that financial securities appeared to follow what scientists call a “stochastic,” or random, movement. The most famous form of random movement was discovered by the Scottish botanist Robert Brown. While examining grains of pollen under a microscope in 1827, Brown saw tiny particles ejected by the pollen that moved around willy-nilly with no discernible pattern, a phenomenon that subsequently became known as Brownian motion.
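To make the random-walk idea concrete, here is a minimal Python sketch (mine, not from the book) that simulates a price series with independent, zero-mean daily moves in the spirit of Bachelier's observation; the starting price and volatility are arbitrary assumptions.

```python
import numpy as np

# A minimal sketch of Bachelier's idea: a security price following a random
# (stochastic) walk. All figures below are illustrative assumptions, not data.
rng = np.random.default_rng(seed=42)

start_price = 100.0
daily_volatility = 1.0          # assumed standard deviation of daily moves
n_days = 252                    # roughly one trading year

# Each day's move is an independent draw with zero expected value, so the
# best forecast of tomorrow's price is simply today's price.
daily_moves = rng.normal(loc=0.0, scale=daily_volatility, size=n_days)
price_path = start_price + np.cumsum(daily_moves)

print(f"End-of-year price in this simulation: {price_path[-1]:.2f}")
```

Because each increment has zero expected value, the series wanders without any discernible pattern, which is exactly the Brownian-motion-like behavior Bachelier saw in security prices.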

In fact, Markowitz suggested that all investors should really care about was how the entire portfolio acted, rather than obsess about each individual security it contained. As long as a stock moved somewhat independently of the others, whatever its other virtues, the overall risk of the portfolio—or at least its volatility—would be reduced. Diversification, such as can be achieved through a broad, passive portfolio of the entire stock market, is the only “free lunch” available to investors, Markowitz argued.
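A small illustrative calculation (mine, not Markowitz's) of that "free lunch": with an assumed 30 percent volatility per stock and a pairwise correlation of 0.3, the volatility of an equally weighted portfolio falls quickly as holdings are added, then levels off at the floor set by the common correlation.

```python
import numpy as np

# Illustrative sketch of Markowitz's point: portfolio risk depends on how
# securities move together, not just on each one's own risk.
single_stock_vol = 0.30   # each stock's annual volatility (assumed)
correlation = 0.3         # assumed pairwise correlation between stocks

def equal_weight_portfolio_vol(n_stocks, vol, corr):
    # Variance of an equally weighted portfolio of identical stocks:
    # var = vol^2 / n + (1 - 1/n) * corr * vol^2
    variance = vol**2 / n_stocks + (1 - 1 / n_stocks) * corr * vol**2
    return np.sqrt(variance)

for n in (1, 5, 20, 100, 500):
    vol = equal_weight_portfolio_vol(n, single_stock_vol, correlation)
    print(f"{n:>3} stocks -> portfolio volatility {vol:.1%}")
```

The diversification benefit is front-loaded: most of the risk reduction comes from the first few dozen names, which is why a broad, cheap passive portfolio captures nearly all of it.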

Yet the best argument for the enduring value of the efficient-markets hypothesis comes from the eminent twentieth-century British statistician George Box, who is said to have quipped that “all models are wrong, but some are useful.” The efficient-markets hypothesis may not be entirely correct. After all, markets are shaped by humans, and humans are prone to all sorts of behavioral biases and irrationality. But the hypothesis is at the very least a decent approximation for how markets work—and helps explain just why they have in practice proven so hard to beat. Even Benjamin Graham, the doyen of many investors, later in his career became a de facto believer in the efficient-markets hypothesis. Fama later presented an apt, if lewd, metaphor to tweak the noses of investors who disagreed with his ideas, likening traditional money management to pornography: “Some people like it but they’re not really getting better than real sex. If you’re willing to pay for it, that’s fine. But don’t pay too much.”

The investment manager turned historian Peter Bernstein recounts that at the time one former colleague sputtered that he wouldn’t buy the S&P 500 even for his mother-in-law.30 The Leuthold Group, a Minneapolis-based financial research group, famously distributed a poster where Uncle Sam declared, “Help stamp out index funds. Index funds are un-American!” Copies continue to float around the offices of index fund managers as mementos of the hostility they initially faced. Of course, as the writer Upton Sinclair once observed, it is difficult to get someone to understand something when their salary depends on them not understanding it.

Direct indexing takes this to the natural next level. Rather than buy an index fund or ETF, an investor would buy all (or nearly all) the individual securities in a benchmark—allowing them total freedom to create their own flavor of investment portfolio, and, at least in the United States, more efficiently harvest any losses on individual securities. Imagine it being like having all the stocks of the S&P 500 or FTSE 100 as the default option, and then simply ticking off companies that don’t appeal. Hey presto, a bespoke index fund tailored perfectly to the customer’s sensibilities, which they can tweak when and in what ways they see fit. Direct indexing is not entirely new, but three recent developments have transformed its prospects. First, technological advances mean that it is now much easier to implement in practice. What was once a computer processing sinkhole is now more straightforward. Second, trading costs have plummeted in recent years, and are in some cases free, making the cost more competitive versus buying a cheap, simple index fund. Third, the emergence of “fractional” shares—the ability to buy part of a share of a stock if it is too expensive—has helped make direct indexing possible for a broader range of investors.
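Mechanically, direct indexing can be thought of as "benchmark weights minus exclusions, renormalized." The toy sketch below uses made-up tickers and market caps purely to show the bookkeeping.

```python
# A toy sketch of direct indexing: start from a benchmark's cap weights,
# drop the names the investor doesn't want, and renormalize what remains.
# Tickers and market caps are hypothetical placeholders, not real index data.
benchmark_caps = {"AAA": 2000.0, "BBB": 1500.0, "CCC": 800.0,
                  "DDD": 500.0, "EEE": 200.0}
excluded = {"CCC"}   # the companies the investor "ticks off"

remaining = {t: cap for t, cap in benchmark_caps.items() if t not in excluded}
total = sum(remaining.values())
custom_weights = {t: cap / total for t, cap in remaining.items()}

for ticker, weight in custom_weights.items():
    print(f"{ticker}: {weight:.1%}")
```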

Bond indices are funny beasts. It makes perfect sense to set the relative weightings of companies in the big stock market indices according to their overall value. So Apple has a bigger weighting than Under Armour. But bond market benchmarks are weighted according to the value of debt issued. So, perversely, the more indebted a country or company, the more heft it should have in an index. Moreover, the greater the price a bond is trading at, the greater its weighting, even if that means it in practice offers a negative interest rate—a phenomenon that has become increasingly common given the vast monetary stimulus unleashed by central banks in recent years. In other words, the peculiarities of bond indices mean that passive bond funds are compelled to buy negative-yielding debt, in practice locking in a guaranteed loss if the debt is held until it matures.
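A quick worked example of that locked-in loss, using a hypothetical five-year zero-coupon bond bought at a yield of minus 0.5 percent (figures assumed for illustration):

```python
# Why a negative yield locks in a loss when a bond is held to maturity.
# Hypothetical 5-year zero-coupon bond; all figures are assumed.
face_value = 100.0
yield_to_maturity = -0.005      # -0.5% per year
years = 5

# Price today is the face value discounted at the (negative) yield, which
# pushes the purchase price above what the bond will repay at maturity.
price_today = face_value / (1 + yield_to_maturity) ** years
locked_in_loss = price_today - face_value

print(f"Price paid today:          {price_today:.2f}")
print(f"Repaid at maturity:        {face_value:.2f}")
print(f"Locked-in loss if held:    {locked_in_loss:.2f}")
```

The buyer pays roughly 102.5 today for a certain 100 in five years; a passive bond fund tracking a benchmark that includes such debt has no choice but to hold it.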

As the IMF noted, the impact on the bonds of emerging economies is starting to become particularly noticeable. A 2018 paper by Tomas Williams, Nathan Converse, and Eduardo Levy Yeyati found that the “growing role of ETFs as a channel for international capital flows has amplified the transmission of global financial shocks to emerging economies.”18 In other words, while ETFs are helping funnel money to the developing world, their tradability makes countries more susceptible to sudden shifts in global investor sentiment, irrespective of domestic factors.

“Indices were designed as measures, but once you begin investing in them you actually distort them,” Green argues. “The moment they became participants and began to grow, they affected markets.”

Given that most index funds are capitalization-weighted, that means that most of the money they take in goes into the biggest stocks (or the largest debtors). Critically, and contrary to popular conception, an index fund does not automatically buy more of a security simply because it has gone up in price, given that it already holds that security. But if the fund takes in new money, then that will go into securities according to their shifting size, and that can in theory disproportionately benefit stocks that are already on the up. For instance, over the past four decades, on average 14 cents of every new dollar put into the Vanguard 500 fund or State Street’s SPDR would have gone into the five biggest companies. A decade ago it was closer to 10 cents. Today, it is over 20 cents—the highest on record.4 Although those bigger companies are, well, bigger, those extra cents can have a disproportional market impact, according to a 2020 study.5 In other words, size can beget size, a dynamic that could contribute to the tendency of financial markets toward bubbles, according to critics.
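The flow mechanics are simple to sketch: new money is split in proportion to current weights, so the largest constituents absorb the largest share of every fresh dollar. The market caps below are invented purely to echo, roughly, the "20 cents" figure cited above.

```python
# Sketch of how a capitalization-weighted index fund allocates new inflows:
# each new dollar is split in proportion to current index weights, so the
# biggest constituents absorb the most. Market caps are hypothetical.
market_caps = {"MegaCap1": 3000, "MegaCap2": 2800, "MegaCap3": 2500,
               "MegaCap4": 1800, "MegaCap5": 1700, "RestOfIndex": 47000}

total_cap = sum(market_caps.values())
new_inflow = 1.00   # one new dollar into the fund

top_five = [name for name in market_caps if name != "RestOfIndex"]
share_to_top_five = sum(market_caps[name] for name in top_five) / total_cap

print(f"Of each new dollar, about {share_to_top_five * new_inflow * 100:.0f} "
      f"cents go to the five biggest holdings")
```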

Moreover, Green argues that index funds are contributing to a secular increase in average stock market valuations seen since the financial crisis of 2008—but at the same time making markets more fragile in a downturn.

Yet in Green’s view, the biggest effect comes from how index-tracking strategies have now vacuumed up so much of the stock market. They have been the dominant “bid”—Wall Street parlance for the buyer—for stocks over the past decade. That leaves fewer shares for everyone else, even though their holdings aren’t excluded from index calculations. This is an issue because most big benchmarks like the S&P 500 are nowadays “float”-adjusted rather than purely value-weighted. In other words, how much space a company has in an index is determined by the value of shares that are actually freely available to trade, rather than its total value. Imagine a $10 million public company whose founder owns half of its 1 million shares. That means 500,000 shares still trade freely on the stock market, and their $5 million value determines its weighting in indices—not $10 million. But index funds might now own another 20 percent, which they never sell unless they suffer investor withdrawals. That means other investors are in practice buying and selling just 300,000 shares worth $3 million, even though the value used to calculate the company’s index weighting is $5 million. Incremental buying—from active managers or index funds seeing further inflows—can then push the price up more aggressively, simply because there are fewer sellers around.
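The arithmetic of that hypothetical company can be replayed in a few lines; the numbers below are the same ones used in the passage.

```python
# Replaying the float-adjustment example from the text. The company, share
# counts, and index fund ownership are the hypothetical figures given above.
share_price = 10.0                  # $10 million company, 1 million shares
total_shares = 1_000_000
founder_shares = 500_000            # founder owns half, not freely traded
index_fund_ownership = 0.20         # index funds hold another 20% of the company

free_float_shares = total_shares - founder_shares
float_value_for_index = free_float_shares * share_price        # drives the index weight

index_fund_shares = int(total_shares * index_fund_ownership)   # rarely trade
effectively_tradable = free_float_shares - index_fund_shares

print(f"Value used for index weighting: ${float_value_for_index:,.0f}")
print(f"Shares actually changing hands: {effectively_tradable:,} "
      f"(worth ${effectively_tradable * share_price:,.0f})")
```

The index still sees $5 million of float, but only $3 million of stock is genuinely for sale, so incremental demand meets a thinner market than the weighting implies.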

“The stock market is supposed to be a capital allocation machine. But by investing passively you are just putting money into the past winners, rather than the future winners,” she argues. In other words, beyond the impact on markets or other investors, is the growth of index investing having a deleterious impact on economic dynamism?

Although the framing was deliberately provocative, it is undeniably true that index funds are free riders on the work done by active managers, which has an aggregate societal value—something even Jack Bogle admitted. If everyone merely invested passively, the outcome would be “chaos, catastrophe,” Bogle noted a few years before passing away. “There would be no trading. There would be no way to turn a stream of income into a pile of capital or a pile of capital into a stream of income,” Vanguard’s founder observed in 2017.20

There is a conundrum at the heart of the efficient-markets hypothesis, often called the Grossman-Stiglitz Paradox after a seminal 1980 paper written by hedge fund manager Sanford Grossman and the Nobel laureate economist Joseph Stiglitz.22 “On the Impossibility of Informationally Efficient Markets” was a frontal assault on Eugene Fama’s theory, pointing out that if market prices truly perfectly reflected all relevant information—such as corporate data, economic news, or industry trends—then no one would be incentivized to collect the information needed to trade. After all, doing so is a costly pursuit. But then markets would no longer be efficient. In other words, someone has to make markets efficient, and somehow they have to be compensated for the work involved.

Michael Mauboussin, one of Wall Street’s most pedigreed analysts and an adjunct professor at Columbia Business School, has an apt metaphor to show how the hope among many active managers that index funds will eventually become so big that markets become easier to beat is likely in vain: Imagine that investing is akin to a poker game between a bunch of friends of varying skill. In all likelihood, the dimmer players will be the first to be forced out of the game and head home to nurse their losses. But that doesn’t mean that the game then becomes easier for the remaining cardsharps. In fact, it becomes harder, as the players still in the game are the best ones.24 Although financial markets are a wildly more dynamic game, with infinitely more permutations and without the fixed rules of poker, the metaphor is a compelling explanation for why markets actually appear to be becoming harder to beat even as the tide of passive investing continues to rise. Mediocre fund managers are simply being gradually squeezed out of the industry. At the same time, the number of individual investors—the proverbial doctors and dentists getting stock tips on the golf course and taking a bet—has gradually declined, depriving Wall Street of the steady stream of “dumb money” that provided suckers for the “smart money” of professional fund managers to take advantage of. Perhaps there may be an element of the distortionary effects fingered by the likes of Green. But most fund managers willingly admit that the average skill and training of the industry keeps getting higher, requiring constant reinvention, retraining, and brain-achingly hard work. The old days of “have a hunch, buy a bunch, go to lunch” are long gone. Once upon a time, simply having an MBA or a CFA might be considered an edge in the investment industry. Add in the effort to actually read quarterly financial reports from companies and you had at least a good shot at excelling. Nowadays, MBAs and CFAs are rife in the finance industry, and algorithms can read thousands of quarterly financial reports in the time it takes a human to switch on their computer.


Key Points from Book: Principles for Dealing with the Changing World Order

Ray Dalio is perhaps the most successful global macro investor of the 21st century, and his fund, Bridgewater, is among the most closely watched investment houses in the world. In this book, he synthesizes the factors that turn nations into global superpowers and the signposts that mark their rise and decline. With the rivalry between the U.S. and China on the front pages today, and the various wars – technological, trade, geopolitical – being waged, this book provides a unique perspective on how that rivalry could evolve in the future. Some of my favorite excerpts, shown below, have served me well as a reminder of the book’s core content.

This Big Cycle produces swings between 1) peaceful and prosperous periods of great creativity and productivity that raise living standards a lot and 2) depression, revolution, and war periods when there is a lot of fighting over wealth and power and a lot of destruction of wealth, life, and other things we cherish. I saw that the peaceful/creative periods lasted much longer than the depression/revolution/war periods, typically by a ratio of about 5:1, so one could say that the depression/revolution/war periods were transition periods between the normally peaceful/creative periods.

Yet, most people throughout history have thought (and still think today) that the future will look like a slightly modified version of the recent past. That is because the really big boom periods and the really big bust periods, like many things, come along about once in a lifetime and so they are surprising unless one has studied the patterns of history over many generations. Because the swings between great and terrible times tend to be far apart the future we encounter is likely to be very different from what most people expect.

I learned that the biggest thing affecting most people in most countries through time is the struggle to make, take, and distribute wealth and power, though they also have struggled over other things too, most importantly ideology and religion.

throughout time and in all countries, the people who have the wealth are the people who own the means of wealth production. In order to maintain or increase their wealth, they work with the people who have the political power, who are in a symbiotic relationship with them, to set and enforce the rules. I saw how this happened similarly across countries and across time.

over time, this dynamic leads to a very small percentage of the population gaining and controlling exceptionally large percentages of the total wealth and power, then becoming overextended, and then encountering bad times, which hurt those least wealthy and least powerful the hardest, which then leads to conflicts that produce revolutions and/or civil wars. When these conflicts are over, a new world order is created, and the cycle begins again.

Human productivity is the most important force in causing the world’s total wealth, power, and living standards to rise over time. Productivity—i.e., the output per person, driven by learning, building, and inventiveness—has steadily improved over time. However, it has risen at different rates for different people, though always for the same reasons—because of the quality of people’s education, inventiveness, work ethic, and economic systems to turn ideas into output. These reasons are important for policy makers to understand in order to achieve the best possible outcomes for their countries, and for investors and companies to understand in order to determine where the best long-term investments are.

Countries with large savings, low debts, and a strong reserve currency can withstand economic and credit collapses better than countries that don’t have much savings, have a lot of debt, and don’t have a strong reserve currency.

Briefly, a credit collapse happens because there is too much debt. Typically, the central government has to spend a lot of money it doesn’t have and make it easier for debtors to pay their debts and the central bank always has to print money and liberally provide credit—like they did in response to the economic plunge driven by the COVID pandemic and a lot of debt. The 1930s debt bust was the natural extension of the Roaring ’20s boom that became a debt-financed bubble that popped in 1929. That produced a depression that led to big central government spending and borrowing financed by big money and credit creation by the central bank.

The quicker the printing of money to fill the debt holes, the quicker the closing of the deflationary depression and the sooner the worrying about the value of money began. In the 1930s US case, the stock market and the economy bottomed the day that the newly elected president, Franklin D. Roosevelt, announced that he would default on the government’s promise to let people turn in their money for gold, and that the government would create enough money and credit so that people could get their money out of the banks and others could get money and credit to buy things and invest. That took three-and-a-half years from the initial stock market crash in October 1929.

Most cycles in history happen for basically the same reasons. For example, the 1907–19 period began with the Panic of 1907 in the US, which, like the 1929–32 money and credit crisis following the Roaring ’20s, was the result of a boom period (the Gilded Age in the US, which was the same time as the Belle Époque in continental Europe and the Victorian Era in Great Britain) becoming a debt-financed bubble that led to economic and market declines. These declines also happened when there were large wealth gaps that led to big wealth redistributions and contributed to a world war. The wealth redistributions, like those in the 1930–45 period, came about through large increases in taxes and government spending, big deficits, and big changes in monetary policies that monetized the deficits.

rising education leads to increased innovation and technology, which leads to an increased share of world trade and military strength, stronger economic output, the building of the world’s leading financial center, and, with a lag, the establishment of the currency as a reserve currency. And you can see how for an extended period most of these factors stayed strong together and then declined in a similar order. The common reserve currency, just like the world’s common language, tends to stick around after an empire has begun its decline because the habit of usage lasts longer than the strengths that made it so commonly used.

One timeless and universal truth that I saw go back as far as I studied history, since before Confucius, who lived around 500 BCE, is that those societies that draw on the widest range of people and give them responsibilities based on their merits rather than privileges are the most sustainably successful because 1) they find the best talent to do their jobs well, 2) they have diversity of perspectives, and 3) they are perceived as the fairest, which fosters social stability.

since one entity’s spending is another’s income, when one entity cuts its expenses, that will hurt not just that entity, but it will also hurt others who depend on that spending to earn income. Similarly, since one entity’s debts are another’s assets, an entity that defaults reduces other entities’ assets, which requires them to cut their spending. This dynamic produces a self-reinforcing downward debt and economic contraction that becomes a political issue as people argue over how to divide the shrunken pie.

The biggest problem that we now collectively face is that for many people, companies, nonprofit organizations, and governments, their incomes are low in relation to their expenses, and their debts and other liabilities (such as those for pensions, healthcare, and insurance) are very large relative to the value of their assets. It may not seem that way—in fact, it often seems the opposite—because there are many people, companies, nonprofit organizations, and governments that look rich even while they are in the process of going broke. They look rich because they spend a lot, have plenty of assets, and even have plenty of cash. However, if you look carefully, you will be able to identify those that look rich but are in financial trouble because they have incomes that are below their expenses and/or liabilities that are greater than their assets, so if you project what will likely happen to their finances in the future, you will see that they will have to cut their expenses and sell their assets in painful ways that will leave them broke.

In the real economy, supply and demand are driven by the amount of goods and services produced and the number of buyers who want them. When the level of goods and services demanded is strong and rising and there is not enough capacity to produce the things demanded, the real economy’s capacity to grow is limited. If demand keeps rising faster than the capacity to produce, prices go up and inflation rises. That’s where the financial economy comes in. Facing inflation, central banks normally tighten money and credit to slow demand in the real economy; when there is too little demand, they do the opposite by providing money and credit to stimulate demand. By raising and lowering supplies of money and credit, central banks are able to raise and lower the demand and production of financial assets, goods, and services. But they’re unable to do this perfectly, so we have the short-term debt cycle, which we experience as alternating periods of growth and recession.

Related to this confusion between the financial economy and the real economy is the relationship between the prices of things and the value of things. Because they tend to go together, they can be confused as being the same thing. They tend to go together because when people have more money and credit, they are more inclined to spend more and can spend more. To the extent that spending increases economic production and raises the prices of goods, services, and financial assets, it can be said to increase wealth because the people who already own those assets become “richer” when measured by the way we account for wealth. However, that increase in wealth is more an illusion than a reality for two reasons: 1) the increased credit that pushes prices and production up has to be paid back, which, all things being equal, will have the opposite effect when the bill comes due and 2) the intrinsic value of a thing doesn’t increase just because its price goes up. Think about it this way: if you own a house and the government creates a lot of money and credit, there might be many eager buyers who would push the price of your house up. But it’s still the same house; your actual wealth hasn’t increased, just your calculated wealth. It’s the same with any other investment asset you own that goes up in price when the government creates money—stocks, bonds, etc. The amount of calculated wealth goes up but the amount of actual wealth hasn’t gone up because you own the exact same thing you did before it was considered to be worth more. In other words, using the market values of what one owns to measure one’s wealth gives an illusion of changes in wealth that don’t really exist. As far as understanding how the economic machine works, the important thing to understand is that money and credit are stimulative when they’re given out and depressing when they have to be paid back. That’s what normally makes money, credit, and economic growth so cyclical.
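A back-of-the-envelope illustration (mine, with invented numbers) of the "calculated wealth" illusion described above: if money creation bids up both your house and the general price level by 20 percent, your measured wealth rises but your purchasing power does not.

```python
# Toy arithmetic for the calculated-vs-actual-wealth point above.
# All figures are invented for illustration.
house_price_before = 300_000
asset_price_inflation = 0.20     # house price bid up 20% by new money and credit
general_inflation = 0.20         # assume prices of everything else also rise 20%

house_price_after = house_price_before * (1 + asset_price_inflation)
real_value_after = house_price_after / (1 + general_inflation)

print(f"Calculated wealth (new market price): ${house_price_after:,.0f}")   # $360,000
print(f"Same house in pre-expansion dollars:  ${real_value_after:,.0f}")    # back to $300,000
```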

when the central bank loses its ability to produce money and credit growth that passes through the economic system to produce real economic growth. Throughout history, central governments and central banks have created money and credit, which weakened their own currencies and raised their levels of monetary inflation to offset the deflation that comes from deflationary credit and economic contractions. This typically happens when debt levels are high, interest rates can’t be adequately lowered, and the creation of money and credit increases financial asset prices more than it increases actual economic activity. At such times those who are holding the debt (which is someone else’s promise to give them currency) typically want to exchange the debt they are holding for other storeholds of wealth. Once it is widely perceived that money and debt assets are no longer good storeholds of wealth, the long-term debt cycle is at its end, and a restructuring of the monetary system has to occur.

Debt assets (e.g., bonds) are held by investors who believe they are storeholds of wealth that can be sold to get money, which can be used to buy things. When holders of debt assets try to make the conversion to real money and real goods and services and find out that they can’t, a “run” occurs, by which I mean that lots of holders of that debt try to make the conversion to money, goods, services, and other financial assets. The bank, regardless of whether it is a private bank or a central bank, is then faced with the choice of allowing that flow of money out of the debt asset, which will raise interest rates and cause the debt and economic problems to worsen, or of printing money, in the form of issuing bonds and buying enough of the bonds to prevent interest rates from rising and hopefully reverse the run out of them. Inevitably the central bank breaks the link, prints the money, and devalues it because not doing that causes an intolerable deflationary depression. The key at this stage is to create enough money and devaluation to offset the deflationary depression but not so much as to produce an inflationary spiral. When this is done well, I call it a “beautiful deleveraging,” which I describe more completely in my book Principles for Navigating Big Debt Crises. Sometimes that buying works temporarily; however, if the ratio of claims on money (debt assets) to the amount of “hard” money there is and the quantity of goods and services there is to buy are too high, the bank is in a bind that it can’t get out of. It simply doesn’t have enough “hard” money to meet the claims. When that happens to a central bank it has the choice either to default or to break the link to the hard money, print the money, and devalue it. Inevitably the central bank devalues. When these debt restructurings and currency devaluations are too big, they lead to the breakdown and possible destruction of the monetary system. The more debt (i.e., claims on money and claims on goods and services) there is, the more it will be necessary to devalue the money.

The shift from a system in which the debt notes are convertible to a tangible asset (e.g., gold and silver) at a fixed rate to a fiat monetary system in which there is no such convertibility last happened in the US on the evening of August 15, 1971. As I mentioned earlier, I was watching on TV when President Nixon told the world that the dollar would no longer be tied to gold. I thought there would be pandemonium with stocks falling. Instead, they rose. Because I had never seen a devaluation before, I didn’t understand how it works. In the years leading up to 1971 the US government had spent a lot of money on military and social programs, then referred to as “guns and butter” policy, that it paid for by borrowing money, which created debt. The debt was a claim on money that could be exchanged for gold. Investors treated this debt as an asset because they got paid interest on it and because the US government promised that it would allow the holders of those notes to exchange them for the gold that was held in US vaults. As the spending and budget deficits grew, the US had to issue much more debt—i.e., create many more claims on gold—even though the amount of gold in the bank didn’t increase. Investors who were astute enough to notice could see that the amount of outstanding claims on gold was much larger than the amount of gold in the bank. They realized that if this continued the US would have to default, so they turned in their claims. Of course, the idea that the US government, the richest and most powerful government in the world, would default on its promise to give gold to those who had claims on it seemed implausible at the time. So, while most people were surprised by Nixon’s announcement and the effects on the markets, those who understood the mechanics of how money and credit work were not.

History has shown that we shouldn’t rely on governments to protect us financially. On the contrary, we should expect most governments to abuse their privileged positions as the creators and users of money and credit for the same reasons that you might commit those abuses if you were in their shoes. That is because no one policy maker owns the whole cycle. Each comes in at one or another part of it and does what is in their interest to do given their circumstances at the time and what they believe is best

When one can manufacture money and credit and pass them out to everyone to make them happy, it is very hard to resist the temptation to do so.5 It is a classic financial move. Throughout history, rulers have run up debts that won’t come due until long after their own reigns are over, leaving it to their successors to pay the bill. Printing money and buying financial assets (mostly bonds) holds interest rates down, which stimulates borrowing and buying. Those investors holding bonds are encouraged to sell them. The low interest rates also encourage investors, businesses, and individuals to borrow and invest in higher-returning assets, getting what they want through monthly payments they can afford.

The Fed announced that plan on April 9, 2020. That approach of printing money to buy debt (called “debt monetization”) is vastly more politically palatable as a way of shifting wealth from those who have it to those who need it than imposing taxes because those who are taxed get angry. That is why central banks always end up printing money and devaluing. When governments print a lot of money and buy a lot of debt, they cheapen both, which essentially taxes those who own it, making it easier for debtors and borrowers. When this happens to the point that the holders of money and debt assets realize what is going on, they seek to sell their debt assets and/or borrow money to get into debt they can pay back with cheap money. They also often move their wealth into better storeholds, such as gold and certain types of stocks, or to another country not having these problems. At such times central banks have typically continued to print money and buy debt directly or indirectly (e.g., by having banks do the buying for them) while outlawing the flow of money into inflation-hedge assets, alternative currencies, and alternative places.

While people tend to believe that a currency is pretty much a permanent thing and that “cash” is the safest asset to hold, that’s not true. All currencies devalue or die, and when they do, cash and bonds (which are promises to receive currency) are devalued or wiped out. That is because printing a lot of currency and devaluing debt is the most expedient way of reducing or wiping out debt burdens.

printing money is the most expedient, least well-understood, and most common big way of restructuring debts. In fact, it seems good rather than bad to most people because: It helps to relieve debt squeezes. It’s tough to identify any harmed parties that the wealth was taken away from to provide this financial wealth (though they are the holders of money and debt assets). In most cases it causes assets to go up in the depreciating currency that people measure their wealth in, so it appears that people are getting richer.

holding debt as an asset that provides interest is typically rewarding early in the long-term debt cycle when there isn’t a lot of debt outstanding, but holding debt late in the cycle, when there is a lot of debt outstanding and it is closer to being defaulted on or devalued, is risky relative to the interest rate being offered. So, holding debt is a bit like holding a ticking time bomb that rewards you while it is still ticking and blows you up when it stops. And as we’ve seen, that big blowup (i.e., big default or big devaluation) happens something like once every 50 to 75 years.

The goal of printing money is to reduce debt burdens, so the most important thing for currencies to devalue against is debt (i.e., increase the amount of money relative to the amount of debt, to make it easier for debtors to repay). Debt is a promise to deliver money, so giving more money to those who need it lessens their debt burden. Where this newly created money and credit then flow determines what happens next. In cases in which debt relief facilitates the flow of this money and credit into productivity and profits for companies, real stock prices (i.e., the value of stocks after adjusting for inflation) rise. When the creation of money sufficiently hurts the actual and prospective returns of cash and debt assets, it drives flows out of those assets and into inflation-hedge assets like gold, commodities, inflation-indexed bonds, and other currencies (including digital). This leads to a self-reinforcing decline in the value of money. At times when the central bank faces the choice between allowing real interest rates (i.e., the rate of interest minus the rate of inflation) to rise to the detriment of the economy (and the anger of most of the public) or preventing real interest rates from rising by printing money and buying those cash and debt assets, they will choose the second path. This reinforces the bad returns of holding cash and those debt assets.

Typically, a country loses its reserve currency status when there is an already established loss of economic and political primacy to a rising rival, which creates a vulnerability (e.g., the Netherlands falling behind the UK, or the UK falling behind the US), and there are large and growing debts monetized by the central bank printing money and buying government debt. This leads to a weakening of the currency in a self-reinforcing run that can’t be stopped because the fiscal and balance of payments deficits are too great for any cutbacks to close.

To be successful the system has to produce prosperity for most people, especially the large middle class. As Aristotle conveyed in Politics: “Those states are likely to be well-administered in which the middle class is large, and stronger if possible than both the other classes… where the middle class is large, there are least likely to be factions and dissensions… For when there is no middle class, and the poor are excessive in number, troubles arise, and the state soon comes to an end.”

There are rapidly increasing debt-financed purchases of goods, services, and investment assets, so debt growth outpaces the capacity of future cash flows to service the debts, and bubbles are created. These debt-financed purchases emerge because investors, business leaders, financial intermediaries, individuals, and policy makers tend to assume that the future will be like the past, so they bet heavily on the trends continuing. They mistakenly believe that investments that have gone up a lot are good rather than expensive, so they borrow money to buy them, which drives up their prices, which reinforces this bubble process. That is because as their assets go up in value their net worth and spending-to-income level rise, which increases their borrowing capacity, which supports the leveraging-up process, and so the spiral goes until the bubbles burst. Japan in 1988–90, the US in 1929, the US in 2006–07, and Brazil and most other Latin American commodity producers in 1977–79 are classic examples.

There is a shift in the spending of money and time toward consumption and luxury goods and away from profitable investments. The reduced level of investment in infrastructure, capital goods, and R&D slows the country’s productivity gains and leads its cities and infrastructure to become older and less efficient. There is a lot of spending on the military at this stage to expand and protect global interests, especially if the country is a leading global power. The country’s balance of payments position deteriorates, reflecting its increased borrowing and reduced competitiveness. If the country is a reserve currency country, this borrowing is made easy because non-reserve currency savers prefer to save in and lend to the reserve currency. Wealth and opportunity gaps are large, and resentments between classes emerge.

From studying 50-plus civil wars and revolutions, it became clear that the single most reliable leading indicator of civil war or revolution is bankrupt government finances combined with big wealth gaps. That is because when the government lacks financial power, it can’t financially save those entities in the private sector that the government needs to save to keep the system running

when the government runs out of money (by running a big deficit, having large debts, and not having access to adequate credit), it has limited options. It can either raise taxes and cut spending a lot or print a lot of money, which depreciates its value. Those governments that have the option to print money always do so because that is the much less painful path, but it leads investors to run out of the money and debt that is being printed. Those governments that can’t print money have to raise taxes and cut spending, which drives those with money to run out of the country (or state or city) because paying more taxes and losing services is intolerable. If these entities that can’t print money have large wealth gaps among their constituents, these moves typically lead to some form of civil war/revolution.

History shows that raising taxes and cutting spending when there are large wealth gaps and bad economic conditions, more than anything else, has been a leading indicator of civil wars or revolutions of some type.

History shows that lending and spending on items that produce broad-based productivity gains and returns on investment that exceed the borrowing costs result in living standards rising with debts being paid off, so these are good policies. If the amount of money being lent to finance the debt is inadequate, it is perfectly fine for the central bank to print the money and be the lender of last resort as long as the money is invested to have a return that is large enough to service the debt. History shows and logic dictates that investing well in education at all levels (including job training), infrastructure, and research that yields productive discoveries works very well.

When the causes that people are passionately behind are more important to them than the system for making decisions, the system is in jeopardy. Rules and laws work only when they are crystal clear and most people value working within them enough that they are willing to compromise in order to make them work well. If both of these are less than excellent, the legal system is in jeopardy. If the competing parties are unwilling to try to be reasonable with each other and to make decisions civilly in pursuit of the well-being of the whole, which will require them to give up things that they want and might win in a fight, there will be a sort of civil war that will test the relative powers of the relevant parties. In this stage, winning at all costs is the game and playing dirty is the norm.

History has shown that when things get bad, the doors typically close for people who want to leave. The same is true for investments and money as countries introduce capital controls and other measures during such times.

the biggest question is how much the system will bend before it breaks. The democratic system, which allows the population to do pretty much whatever it decides to do, produces more bending because the people can make leadership changes and only have themselves to blame. In this system regime changes can more easily happen in a peaceful way. However, the “one person, one vote” democratic process has the drawback of having leaders selected via popularity contests by people who are largely not doing the sort of thoughtful review of capabilities that most organizations would do when trying to find the right person for an important job. Democracy has also been shown to break down in times of great conflict.

To make matters even worse, when there was internal disorder, foreign enemies were more likely to challenge the country. This happens because domestic conflict causes vulnerabilities that make external wars more likely. Internal conflict splits the people within a country, is financially taxing on them, and demands attention that leaves less time for the leaders to tend to other issues—all things that create vulnerabilities for foreign powers to take advantage of. That is the main reason why internal wars and external wars tend to come close together. Other reasons include: emotions and tempers are heightened; strong populist leaders who tend to come to power at such times are fighters by nature; when there are internal conflicts leaders find that a perceived threat from an external enemy can bring the country together in support of the leader so they tend to encourage the conflict; and being deprived leads people/countries to be more willing to fight for what they need, including resources that other countries have.

While attempts have been made to make the external order more rule-abiding (e.g., via the League of Nations and the United Nations), by and large they have failed because these organizations have not had more wealth and power than the most powerful countries. When individual countries have more power than the collectives of countries, the more powerful individual countries rule. For example, if the US, China, or other countries have more power than the United Nations, then the US, China, or other countries will determine how things go rather than the United Nations. That is because power prevails, and wealth and power among equals is rarely given up without a fight. When powerful countries have disputes, they don’t get their lawyers to plead their cases to judges. Instead, they threaten each other and either reach agreements or fight. The international order follows the law of the jungle much more than it follows international law.

the two things about war that one can be most confident in are 1) that it won’t go as planned and 2) that it will be far worse than imagined. It is for those reasons that so many of the principles that follow are about ways to avoid shooting wars. Still, whether they are fought for good reasons or bad, shooting wars happen. To be clear, while I believe most are tragic and fought for nonsensical reasons, some are worth fighting because the consequences of not fighting them (e.g., the loss of freedom) would be intolerable.

Seeing things through your adversary’s eyes and clearly identifying and communicating your red lines to them (i.e., what cannot be compromised) are the keys to doing this well. Winning means getting the things that are most important without losing the things that are most important, so wars that cost much more in lives and money than they provide in benefits are stupid. But “stupid” wars still happen all the time for reasons that I will explain. It is far too easy to slip into stupid wars because of a) the prisoner’s dilemma, b) a tit-for-tat escalation process, c) the perceived costs of backing down for the declining power, and d) misunderstandings existing when decision making has to be fast. Rival great powers typically find themselves in the prisoner’s dilemma; they need to have ways of assuring the other that they won’t try to kill them lest the other tries to kill them first. Tit-for-tat escalations are dangerous in that they require each side to escalate or lose what the enemy captured in the last move; it is like a game of chicken—push it too far and there is a head-on crash. Untruthful and emotional appeals that rile people up increase the dangers of stupid wars, so it is better for leaders to be truthful and thoughtful in explaining the situation and how they are dealing with it (this is especially essential in a democracy, in which the opinions of the population matter).

When thinking about how to use power wisely, it’s also important to decide when to reach an agreement and when to fight. To do that, a party must imagine how its power will change over time. It is desirable to use one’s power to negotiate an agreement, enforce an agreement, or fight a war when one’s power is greatest. That means that it pays to fight early if one’s relative power is declining and fight later if it’s rising.

Deflationary depressions are debt crises caused by there not being enough money in the hands of debtors to service their debts. They inevitably lead to the printing of money, debt restructurings, and government spending programs that increase the supply of, and reduce the value of, money and credit. The only question is how long it takes for government officials to make this move. In the case of the US, it took three and a half years from the crash in October 1929 until President Franklin D. Roosevelt’s March 1933 actions. In Roosevelt’s first 100 days in office, he created several massive government spending programs that were paid for by big tax increases and big budget deficits financed by debt that the Federal Reserve monetized. He instituted jobs programs, unemployment insurance, Social Security supports, and labor- and union-friendly programs. After his 1935 tax bill, then popularly called the “Soak the Rich Tax,” the top marginal income tax rate for individuals rose to 75 percent (versus as low as 25 percent in 1930). By 1941, the top personal tax rate was 81 percent, and the top corporate tax rate was 31 percent, having started at 12 percent in 1930. Roosevelt also imposed a number of other taxes. Despite all of these taxes and the pickup in the economy that helped raise tax revenue, budget deficits increased from around 1 percent of GDP to about 4 percent of GDP because the spending increases were so large.5 From 1933 until the end of 1936 the stock market returned over 200 percent, and the economy grew at a blistering average real rate of about 9 percent. In 1936, the Federal Reserve tightened money and credit to fight inflation and slow an overheating economy, which caused the fragile US economy to fall back into recession and the other major economies to weaken with it, further raising tensions within and between countries.

Before going on to describe the hot war, I want to elaborate on the common tactics used when economic and capital tools are weaponized. They have been and still are:

1. Asset freezes/seizures: Preventing an enemy/rival from using or selling foreign assets they rely on. These measures can range from asset freezes for targeted groups in a country (e.g., the current US sanctions of the Iranian Revolutionary Guard or the initial US asset freeze against Japan in World War II) to more severe measures like unilateral debt repudiation or outright seizures of a country’s assets (e.g., some top US policy makers have been talking about not paying our debts to China).

2. Blocking capital markets access: Preventing a country from accessing their own or another country’s capital markets (e.g., in 1887 Germany banned the purchase of Russian securities and debt to impede Russia’s military buildup; the US is now threatening to do this to China).

3. Embargoes/blockades: Blocking trade in goods and/or services in one’s own country and in some cases with neutral third parties for the purpose of weakening the targeted country or preventing it from getting essential items (e.g., the US’s oil embargo on Japan and cutting off its ships’ access to the Panama Canal in World War II) or blocking exports from the targeted country to other countries, thus cutting off their income (e.g., France’s blockade of the UK in the Napoleonic Wars).
