Six Years After: A Personal Reflection

Random thoughts on a Friday night, to those who are lost and full of doubts.

Today is a special day for me. If I remember correctly, today marks the sixth anniversary of the day I officially quit medical school, an event that not only changed the perspective I have of myself but also altered the trajectory of my career, my relationships, and my life path in general.

I still vividly remember the feeling of being an outcast – somebody who does not belong – after handing in my resignation letter to the program director and walking out for the last time from the hospital I had practically been living in almost every day for the previous four months or so. It was an expression of guilt (for wasting my parents’ money on a relatively expensive medical school), failure (at pushing myself through the program after four years of study), and shame (peer pressure is more intense when you are young).

Thinking back now, I have wondered from time to time whether my decision was the correct one – whether I should have finished my studies to earn my doctor title, as many rational people urged me to, or simply dropped out as I did.

I wish I could see a parallel universe where I did the former and still turned out well. But of course, no such thing exists, and for the first few years I lived in a state of anxiety, afraid that whatever I did next would be a mistake. Eventually the feeling numbed, and I got carried away with my routine and forgot about the issue.

——————————————————————

Yesterday morning, I woke to the warmth of the sun shining into my bedroom, its rays resting on my feet. It has been a wonderful summer here in Montreal, and despite the city bustling back to life from the pandemic, there was a quiet, peaceful moment that day.

As usual, I checked my phone for important emails and made myself a large jug of English breakfast tea before grabbing a chocolatine I had bought a few days earlier. My boss had sent me some jokes by email, part of an exchange we had been trading the night before.

I was excited to start the day, working on a project I had been assigned to earlier. As a recently promoted analyst, I have to keep delivering to move ahead in the firm. But mind you, I love the work I do now, and I am all the more excited to do it because it can translate into monetary benefit for me.

Life is good. My family is relatively healthy and economically secure. I earn enough to support my lifestyle, and I have a decent relationship with my girlfriend and a few other close friends.

——————————————————————

When you are hiking, there are often multiple paths you could take to reach the top. There is a short path with a steep climb, there is a longer path with an easier grade, and there is also a long path that winds around the mountain before rejoining the trail that actually leads to the peak, your destination.

Sometimes I like to think of myself as taking the latter. It could be categorized as a time-wasting and unnecessarily long hike. In terms of efficiency, it is definitely not the best way to reach the peak, but sometimes it routes us past beautiful landscapes along the way and teaches us skills we may need later on. More importantly, when we hike, is reaching the destination really what matters, or the experience along the way? Do we not eat delicious food for the pleasure of the tongue rather than the stomach?

——————————————————————

My life was not always this easy. Learning something completely new can be a difficult and stressful endeavour, although having a passion for the subject really helps.

In the three years after quitting medical school (June 2015 – June 2018), I was busy like a madman. During that period, I finished two Master’s degrees (one at a local university and one at a top global university), passed all three levels of the CFA exams and both levels of the FRM exams, immigrated to Canada (passing a B2-level French exam), and found a permanent job there.

If there is anything I am thankful for (and proud of) in myself, it is having resilience – some of my colleagues and friends with Bachelor’s degrees in finance have told me I have robot-like discipline – and clear goals in life. And guess where I got those traits? You are right: medical school, where most students are forced to cram impossible amounts of material in short periods of time, which forces you to sustain an almost inhuman pace of absorbing knowledge and thinking it through in clinical cases.

I also think that having your personal goals aligned with whatever you are doing is important if you want to excel in life. It is as if we are sailing with a tailwind instead of constantly making excuses for ourselves.

——————————————————————

Life does not always go the way we want. Sometimes we are lost, and more surprisingly, many people do not actually know where they want to go (in terms of life goals and career interests). I was lost, but now I am not, and in between I tried to catch the tailwind to go faster.

I arrived at the first checkpoint three years ago. Last month, I passed the second one. In the meantime, I will keep sailing toward the next one while deliberately deviating from the path from time to time.

——————————————————————

Two parting quotes to end the piece. Many MBA graduates would agree with the first:

You can’t connect the dots looking forward; you can only connect them looking backwards. So you have to trust that the dots will somehow connect in your future. You have to trust in something — your gut, destiny, life, karma, whatever. – Steve Jobs

And the second is from one of my favourite songs, “My Way”, which most of you will know:

But through it all, when there was doubt

I ate it up and spit it out

I faced it all and I stood tall and did it my way


Key Points from Book: The Price of Peace

THE PRICE OF PEACE: MONEY, DEMOCRACY, AND THE LIFE OF JOHN MAYNARD KEYNES

by Zachary D. Carter

Published in December 1923, A Tract on Monetary Reform was, like its predecessor, a deceptively technical title filled with shocking ideas.19 It was not merely the sanctity of international debt contracts that must be abandoned, Keynes informed his readers, but the entire global financial system that had established the foundation of free exchange between nations. The gold standard, the benchmark of economic sanity for as long as anyone could remember, had become a barrier to peace and prosperity—a “barbarous relic” incompatible with “the spirit and the requirements of the age.”20 One by one, Keynes was taking aim at the sacred tenets of nineteenth-century capitalism. The world was about to change.

The new financial reality had spawned its own political ideology. In 1910, the British journalist Norman Angell published The Great Illusion, a book claiming to demonstrate that the international commercial entanglements of the twentieth century had made war economically irrational. No nation, Angell argued, could profit by subjugating another through military conquest. Even the victors would suffer financial harm, whatever the spoils might be.

The more currency a country circulated, the more economic activity it could support—so long as there was a corresponding amount of gold stored away in bank vaults to back up its bills. The financial thinkers of the day believed that without gold to give money some value independent of a government’s say-so, issuing fresh currency could not ultimately boost the economy. Instead, it would cause inflation, an overall increase in prices that would devalue the savings people had previously accumulated and eat away at the purchasing power of their paychecks.

The experience left a deep impression on Keynes. Financial markets, he had discovered, were very different from the clean, ordered entities economists presented in textbooks. The fluctuations of market prices did not express the accumulated wisdom of rational actors pursuing their own self-interest but the judgments of flawed men attempting to navigate an uncertain future. Market stability depended not so much on supply and demand finding an equilibrium as it did on political power maintaining order, legitimacy, and confidence. Twenty-two years later, those observations would become central tenets of the economic theory presented in Keynes’ magnum opus, The General Theory of Employment, Interest and Money: A large proportion of our positive activities depend on spontaneous optimism rather than on a mathematical expectation, whether moral or hedonistic or economic. Most, probably, of our decisions to do something positive…can only be taken as a result of animal spirits—of a spontaneous urge to action rather than inaction, and not as the outcome of a weighted average of quantitative benefits multiplied by quantitative probabilities. Enterprise only pretends to itself to be mainly actuated by the statements in its own prospectus….Only a little more than an expedition to the South Pole, is it based on an exact calculation of benefits to come. Thus if the animal spirits are dimmed and the spontaneous optimism falters, leaving us to depend on nothing but a mathematical expectation, enterprise will fade and die;—though fears of loss may have a basis no more reasonable than hope of profit had before.

Markets, Keynes concluded, were social, not mathematical, phenomena. Their study—economics—was not a hard science bound by iron laws, like physics, but a flexible field of custom, rule of thumb, and adjustment, like politics. Market signals—the price of a good or the interest rate on a security—were not a reliable guide to consumer preferences or corporate risks in the real world. At best, they were approximations, always subject to change based on new attitudes about an uncertain future.

Principia Ethica was a sophisticated attack on the moral and political philosophy that had dominated English thought since the late eighteenth century—a doctrine that went by the name “utilitarianism.” Developed by Jeremy Bentham and John Stuart Mill, utilitarianism declared that pleasure was the basis of all morality. A good or right action would produce pleasure. The more pleasure a good deed produced for the more people, the more righteous it was. And so the aim of all government was to produce more pleasure. The best society was the happiest society.

Moore and the Apostles hoped to overturn utilitarianism without reverting back to the moral authority of the Church, which was quickly falling out of fashion in English culture. Things were not good because they produced pleasure, Moore argued. They were good because they were good. Pleasure itself could be either good or bad. People enjoyed all kinds of terrible things, and the pleasure they derived from them was not good but perverse. A good horse, a good piece of music, and a good person, meanwhile, all had something ineffable but vitally important in common: they were all good. But you could not find this goodness under a microscope. It could not be measured or derived from some set of facts about the natural world; it was a fundamental property, “simple, indefinable, unanalysable,”18 that could only be intuited directly by human reason. There were objective facts about value just as there were facts about colors; it wasn’t a matter of opinion whether the sky was blue or Goethe was a great poet. But good things could be understood solely in their “organic unity”; they could not be intellectually broken up into smaller components.

Utilitarianism and classical economics had developed alongside each other in English-language thought and shared important conceptual foundations. Both were concerned with efficiency. Economists following Adam Smith focused on the efficiency of agricultural and industrial production; utilitarian philosophers mused about the efficient production of pleasure. Both utilitarianism and the economics discipline were oriented around simple mathematical conceptual schemes: more was better and getting more with less better still. But after reading Principia Ethica, Keynes rejected the idea that efficiency could be the central organizing principle of a good society. No simple equation could approximate the best way to live.

He supported modest expansions of British social welfare programs, but his speeches to the Union from 1903 reflect a preoccupation with the Church—which he considered a source of sexual and intellectual tyranny—and unfettered trade. “I hate all priests and protectionists,” he declared in December 1903. “Free trade and free thought! Down with pontiffs and tariffs. Down with those who declare we are dumped and damned. Away with all schemes of redemption or retaliation!”

Tariffs projected a “Spirit of Nationalism,” which was “one of the most considerable hindrances to the progress of Civilisation”—“a feeling that anyone else’s prosperity is your damage, a feeling of jealousy, of hatred.”

Under traditional economic theory, markets were supposed to clear these problems by themselves. Prices would rise and fall according to supply and demand, encouraging goods to flow to where they were most needed. A country that produced too much iron could trade it to a country that produced too much wheat and vice versa. Keynes didn’t dispute the idea in principle, but he and other Allied policy makers recognized that battalions could run out of ammunition and cities could starve while everyone waited for markets to adjust. Free markets were a luxury that a nation at war could not afford.

Economists had long been aware that inflation was a common problem during wartime. When cash-strapped governments printed money to pay their bills, prices rose, reflecting, according to the theory, the higher quantity of money in circulation. In a nationally self-sufficient economy like that of Germany, Keynes argued, inflation functioned as “a concealed tax.” Wages couldn’t increase evenly with the prices of goods, because the German government had frozen workers’ pay rates for the duration of the war. So although the German people were taking home the same paychecks they had received in 1913, those paychecks didn’t have the same purchasing power they had once carried. Printing notes gave governments more money to spend on the war as it reduced the standard of living for the citizenry—transferring wealth from the public to the government, just as taxation might have done. That system might be attacked on grounds of “social justice”—why, after all, were “the working classes” being required to pay for the war instead of the very rich?—but there was no risk in Germany that inflation would lead to a runaway disaster during the war. When the German government stopped printing extra currency to pay its military bills, the price increases would stop.
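
To make the arithmetic of that “concealed tax” concrete, here is a minimal sketch with purely illustrative numbers (the wage level and the size of the price rise are my own assumptions, not figures from the book):

```python
# Purely illustrative numbers (not from the book): with wages frozen,
# inflation quietly transfers purchasing power from wage earners to the
# government that prints the extra currency.
weekly_wage = 100.0         # paycheck, frozen at its 1913 level
price_level_1913 = 1.00
price_level_wartime = 1.25  # assume prices have risen 25% during the war

real_wage = weekly_wage * price_level_1913 / price_level_wartime
concealed_tax = weekly_wage - real_wage  # purchasing power lost per paycheck

print(f"Real value of the frozen paycheck: {real_wage:.2f}")      # 80.00
print(f"Implicit 'tax' per paycheck:       {concealed_tax:.2f}")  # 20.00
```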

But inflation would function much differently in the British economy. Because Britain relied so heavily on international trade, Keynes argued, inflation could serve only as a very temporary expedient. When British prices increased, it affected not only household budgets but also the prices the British paid for imports. At the same time, the prices British producers received for their exports did not increase; the amount they could fetch in foreign markets depended on the prevailing market prices abroad, not on the going rate at home. As a result, inflation had the effect of exacerbating the British trade deficit—the British were paying more to consume goods from abroad than they received from the sale of exports. And since foreign suppliers wanted to be paid in either foreign currency or gold, the British could inflate themselves into bankruptcy. A sustained trade deficit would deplete Great Britain’s gold reserves.

This was an important theoretical point in Keynes’ intellectual development. Money wasn’t just a passive force that people used to keep track of the value of goods and services; it was an active power in its own right. A problem in the monetary system could create unexpected trouble in the realm of what Keynes called “real resources”—the equipment, commercial products, and savings of a community.

Keynes and McKenna believed that the strongest weapon in the British arsenal was its economy. Great Britain was the richest nation in the conflict, providing money to Russia, France, Italy, and everyone else on the Allied side. The ultimate source of wealth in this war chest was the country’s formidable industrial sector, fueled by the resources of its vast global empire and its dominating navy. If Britain was to support its own soldiers, much less the entire Allied project, it would also need men on the home front running machines, harvesting fields, and performing essential economic work. A surge of troops would deplete essential manpower at home. It was a matter of both production and payment. The British needed men in factories to manufacture the weapons used on the front lines. But they also needed men to produce exports that could be sold abroad, particularly in the United States. When the British bought supplies from America, their U.S. trading partners had to be paid in dollars. And the most reliable way for the British to get dollars was to sell products to Americans. The government could sell off imperial assets for dollars—stocks, bonds, royal treasure—but a fire sale during wartime would probably yield disappointing prices and would permanently reduce the wealth of the empire.

And as with the financial crisis of August 1914, Keynes believed the question of money had become a question of power. Much of the British Empire’s economic might over the previous half century had been derived from its status as a creditor nation. When other countries needed funds, they turned to London, which gave the British a unique ability to influence how that money was spent and whom it would benefit. But the war had forced Great Britain to look abroad for its own financing needs, and Keynes recognized that as the empire became increasingly dependent on foreign help, it ceded geopolitical influence.

Even before hostilities had formally ended, the Treasury had asked Keynes to calculate precisely the amount Germany could afford to pay. Keynes identified a maximum of £2 billion—half paid up front, the other half spread out over the next three decades.21 The actual costs of the war, of course, had been vastly higher, but a more exacting indemnity would prove counterproductive. To generate the wealth needed to make reparation payments, Germany would have to boost its exports, taking international market share from British producers and thus ultimately undercutting British wealth. If the Allies tried instead to seize German gold, German mines, or German factories, they would only undermine Germany’s ability to generate future wealth that could be devoted to tribute. “If Germany is to be milked,” Keynes wrote in a report for the British delegation, “she must not first of all be ruined.”

Like everyone else at Paris, Wilson blamed Germany for the war. Unlike many, however, he did not blame the German people. Indeed, to Wilson’s mind, German citizens had been victims of the kaiser’s autocratic excess before the peoples of Belgium, France, Russia, and Great Britain had been, as he told Congress in April 1917: “We have no quarrel with the German people. We have no feeling towards them but one of sympathy and friendship. It was not upon their impulse that their Government acted in entering this war. It was not with their previous knowledge or approval. It was a war determined upon as wars used to be determined upon in the old, unhappy days when peoples were nowhere consulted by their rulers and wars were provoked and waged in the interest of dynasties or of little groups of ambitious men who were accustomed to use their fellow men as pawns and tools.”27 In short, he believed that the war had been caused by autocracy—an idea bound up inexorably with empire, since conquered peoples were denied their own government. Its solution was democracy—and by implication, the end of imperialism. “A steadfast concert for peace can never be maintained except by a partnership of democratic nations. No autocratic government could be trusted to keep faith within it or observe its covenants….Only free peoples can hold their purpose and their honour steady to a common end and prefer the interests of mankind to any narrow interest of their own.”

The war debts of Allies and enemies alike were so massive that they would be stirring up social turmoil for years to come. Governments would have to curb services to their citizens in order to meet foreign interest payments. Taxes would need to be raised in order to ship money overseas. The notion that this was a fair return for America’s help in the war might resonate with financiers and government officials, but it would make little sense to citizens. A farmer who had lost his son and half his acreage would not feel a rush of gratitude at the prospect of diverting a huge portion of his labor to the enrichment of American bankers.

His battle over reparations and inter-ally debt had made him a lifelong enemy of austerity—the doctrine that governments can best heal troubled economies by slashing government spending and paying down debt. When a government was burdened with too much debt, Keynes had come to believe, it was generally better to swear off the debt than to pay it off by burdening the public with a lower standard of living.

The product of his labors, Economic Consequences, still stands today as both a landmark of political theory and one of the most emotionally compelling works of economic literature ever written. Like all of Keynes’ best work, it is not fundamentally a work of economics at all, but a treatment of the great political problem of the twentieth century—a furious tirade against autocracy, war, and weak politicians. It is at once a howl of rage directed against the most powerful men in the world and an ominous prophecy of the violence that would again sweep the continent in the years to come. The book opens with a sunny portrait of the global financial order that persisted between the close of the Franco-Prussian War and the summer of 1914, describing the free international trade system as an engine of prosperity unparalleled in human history. Economic inequality had been the essential ingredient of that social progress, creating large personal fortunes that the rich could invest in new enterprises that addressed society’s needs and advanced the progress of “civilisation.” Though the mechanisms of growth were inherently unfair, with capitalists at the top reaping far more economic fruit than workers at the bottom, the gains improved the lives of all who participated: better food, nicer fineries, all the extravagances of La Belle Époque that could be purchased at ever-declining prices by an ever-expanding middle class. “Society was working not for the small pleasures of to-day but for the future security and improvement of the race,—in fact for ‘progress.’”

The steady piling up of material riches over the decades had created the impression of a strong and resilient system. But Keynes believed the arrangement was a fragile historical anomaly. It depended on “a double bluff or deception”: the system would only work if workers believed in it, and workers would not believe in it unless it worked. Break the collective faith in a better tomorrow, and workers would walk off the job, riot in protest, or worse.

One of the great rhetorical tricks of Economic Consequences is the ease with which Keynes moves from images of “terrible exhaustion” in Austria and Germany to the prospect of continent-wide economic crisis. The “oppressive interest payments to England and America” still on the books would soon reduce France, Italy, and Belgium to the same condition as Germany. The economic fate of Europe, Keynes insisted, was already indivisible, and that economic union would write its political future. Governments burdened with heavy debts, Keynes predicted, would resort to inflation to ease the burden, just as they had during the war—a situation that would quickly prove politically destabilizing. Inflation had unequal effects. People with substantial savings—a small minority of the population in 1919—were hit hardest, as the value of their nest egg was eroded; it was a “hidden tax” on a particular economic demographic. Such a morally arbitrary “rearrangement of riches” would fuel anger at the “capitalist classes.” “Lenin was certainly right,” Keynes wrote. “There is no subtler, no surer means of overturning the existing basis of society than to debauch the currency.”18 (Though this has become one of the Marxist leader’s most popular aphorisms over the years, the prose is pure Keynes; he was paraphrasing an interview Lenin had given to a New York newspaper.)

He agreed with Burke that governments were justified not by inalienable individual rights but by their results—their ability to achieve social stability and public happiness—and he shared with his predecessor a profound fear of social upheaval. But though he agreed with Burke’s aims and his mode of analysis, he rejected many of his methods. Burke, like the population theorist Thomas Malthus, had seen economic scarcity as an inescapable fact of human life. There just wasn’t enough wealth to go around, and if humanity was to realize any abiding cultural achievements, mitigating inequality could not be a function of government. Democracy, to Burke, would lead to collective poverty and the end of all fine living. A monarchy that protected the rights of private property was the only way to secure a decent society.

The warnings Keynes issued in the pages of The Economic Consequences of the Peace would reverberate through European history as militant demagogues rose across Europe, exploiting inequality, austerity budgeting, inflation, and uncertainty to take power by preaching vengeance and hate. Benito Mussolini would march on Rome in three years’ time. In Germany, hyperinflation and Adolf Hitler’s Beer Hall Putsch would follow soon after, the rise of Josef Stalin a short time after that. Keynes’ slim masterpiece remains essential today not because of its statistical prowess or its analytical detail but because the mass psychology he presented would prove so integral to the great tragedies of the twentieth century. And the explanatory power of his narrative can be applied with only modest revisions to the great problems of the twenty-first century. Substitute the financial crisis of 2008 for the Great War, swap European austerity budgets and the American foreclosure crisis for war debts and reparations, and the result is a modern recipe for militant far-right nationalism.

“The moderate people can do good and perhaps the extremist can also do good; but it is no use for a member of the latter class to pretend that he belongs to the former,” Keynes wrote to Arthur Salter, who had been secretary at the Supreme Economic Council in Paris. “Besides, it is much a hopeless business trying to calculate the psychological effect of one’s actions; and I have come to feel that the best thing in all the circumstances is to speak the truth as bluntly as one can.”

Keynes argued that there is a difference between probabilities and statistical frequencies. To say that some state of affairs is probable, according to Keynes, is not to simply state that mathematically, it will occur a certain percentage of times in a simulation (that is, if fifty of the one hundred coins in a bag are quarters, I have a 50 percent probability of drawing a quarter every time I reach in). Mathematical data might be useful in a person’s assessment of probability, but it cannot be probability itself.
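
Keynes’ coin-bag example can be restated as a simple frequency calculation; the sketch below (my own illustration, not from the book) computes that frequency and simulates repeated draws, while the closing comment notes the distinction Keynes is drawing:

```python
import random

# The bag from the passage: one hundred coins, fifty of them quarters.
bag = ["quarter"] * 50 + ["other"] * 50

# The statistical frequency of drawing a quarter: 50 / 100 = 0.5.
frequency = bag.count("quarter") / len(bag)

# Repeated draws converge on that frequency...
draws = 100_000
hits = sum(random.choice(bag) == "quarter" for _ in range(draws))
print(frequency, hits / draws)  # 0.5, and roughly 0.5

# ...but Keynes' point is that many judgments of probability (say, the odds
# that a new venture succeeds) rest on no repeatable trial at all, so
# frequencies can inform a probability judgment without being that judgment itself.
```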

That doctrine—that managing the overall supply of money was the best way for governments to achieve economic growth and stability—became known as monetarism. It was a radical rethinking of the way central banks should operate.63 The Bank of England typically managed its gold reserves with an eye to fluctuations in international trade, ensuring that Great Britain didn’t run out of gold due to too many imports or a shortage of exports. If Britain was running a trade deficit, then money—gold—would be flowing out of the country, because Britain was effectively purchasing more goods from abroad than it was selling to foreigners. In that situation, the Bank would raise interest rates, effectively lowering the price of British goods on the international market until trade levels were balanced. The idea was to have the real terms of trade determining the price level. Keynes was suggesting the opposite, regulating prices to ensure stability—a strategy that would have implications for the course of trade. It was a step away from the laissez-faire doctrine that public officials should not meddle in economic affairs. Governments would find themselves forced to choose between maintaining a stable exchange rate and a stable price level. When the choice came, Keynes argued, there should be no hesitation: Keep prices stable, and adjust exchange rates. It might be true that “over the long run,” rashes of inflation and deflation would burn themselves out. “But this long run is a misleading guide to current affairs,” Keynes observed. “In the long run, we are all dead.”

There is an unresolved tension running throughout Keynes’ work between his desire to democratize the trappings of ruling-class life and his own reverence for that same ruling class. “The great trouble with Keynes was that he was an idealist,” his colleague and collaborator Joan Robinson once wrote.7 His faith that “an intelligent theory would prevail over a stupid one”8 was hard to square with a world in which “vested interests” often rejected reforms that carried broad benefits for all, preferring even a dysfunctional status quo as long as it maintained their place at the top of the social pecking order.

By the time he presented The End of Laissez-Faire to a lecture audience at Oxford in November 1924, British unemployment had been in double digits for nearly five consecutive years. Instead of creating equality and harmony, laissez-faire had generated vast inequality and social unrest, so much of each that all the splendid things liberal individualism was supposed to foster—fresh thinking, great art, fine wine, exciting conversation—were now threatened by social instability. It was time to move on.

“One of the most interesting and unnoticed developments of recent decades,” he wrote, “has been the tendency of big enterprise to socialise itself”19 by responding to public need rather than private profit.

“The political problem of mankind is to combine three things: economic efficiency, social justice, and individual liberty,” Keynes wrote.

Lower wages were in a very real sense the point of deflationary policy; the idea was to bring down the price of everything, including labor. Under classical economic theory, this cost cutting did not have to result in mass layoffs. “Unemployment is a problem of wages, not of work,” Keynes’ Austrian contemporary Ludwig von Mises wrote in 1927.50 As high interest rates imposed higher costs of credit on employers—or reduced demand for their goods—companies could reduce labor costs by cutting pay all around. Lower wages wouldn’t really hurt workers, the thinking went, because with the price of goods falling, workers wouldn’t need as much money as they had before. Based on this reasoning, Conservatives, bankers, and even Liberal politicians blamed the British jobs crisis on trade unions. People had to be laid off, these critics insisted, because companies had signed collective bargaining contracts that required them to keep wages artificially high. Since wages couldn’t be lowered, firms had no other choice but to fire people to bring down their costs. Firms that couldn’t fire people had to close. Keynes lampooned what he called the “orthodox” explanation: “Blame it on the working man for working too little and getting too much.”51 All of that might make sense on paper, Keynes argued, but it was totally divorced from what happened in the real world. “Deflation does not reduce wages ‘automatically,’ ” he observed in the Evening Standard. “It reduces them by causing unemployment.”52 Keynes had little enthusiasm for unions, but by 1925 he believed that steep deflation could never be accomplished without mass layoffs unless the government became deeply involved in managing the affairs of the business world. It was not only collective bargaining that stood in the way of uniform wage reductions; it was human psychology. No sane worker negotiating with his boss would accept a pay cut in the name of broader social welfare without some guarantee that other workers would take the same deal. He could easily find himself shortchanged for nothing. “Those who are attacked first are faced with a depression of their standard of life, because the cost of living will not fall until all the others have been successfully attacked too,” Keynes wrote. “Nor can the classes, which are first subjected to a reduction of money wages, be guaranteed that this will be compensated later by a corresponding fall in the cost of living, and will not accrue to the benefit of some other class. Therefore they are bound to resist so long as they can; and it must be a war, until those who are economically weakest are beaten to the ground.”53 Contrary to the conventional wisdom, then, it was not the departure from gold that was causing Great Britain’s economic malaise, it was the country’s enthusiasm to return to gold at the exchange rates that had prevailed before the war.

The collective faith of the citizenry in the ability of the nation’s economic system to deliver steady, predictable gains had collapsed. Millions of British workers had joined together in an attempt to shut down the entirety of the nation’s commercial life. People—most people—had actively harmed their own society in order to make a political point. The unrest had extended well beyond the ranks of the unemployed; only people who had jobs could go on strike, after all. There was clearly no sense among the public that their welfare rested on secure foundations. It was as if the “double bluff” of the prewar years had been reversed, creating a downward spiral of doubt and decay. People had once accepted an unequal system because it had improved their lives; because they had embraced it, the system had been able to generate prosperity. Now everyone from the coal miner to the investment house magnate had come to believe in a bleak, limited future (whatever the bankers said about the virtues of the gold standard, the paucity of actual investment in the economy was a more telling measure of their true feelings). That collective doom and gloom could not be broken by individual acts of courage.

Money, he argued, was an inherently political tool. It was the state that determined what substance—gold, paper, whatever—actually counted as money—what “thing” people and the government would accept as valid payment. The state thus created money and had always regulated its value. “This right is claimed by all modern states and has been so claimed for some four thousand years at least.”45 The significance of gold to economic history was both relatively recent—it had only really mattered in the past few decades—and arbitrary. The true source of monetary stability was the public legitimacy of the political authority that happened to choose gold as its preferred medium of exchange. Money had no meaning absent political authority.

The Treatise, then, was an all-out assault on the intellectual foundations of laissez-faire. There was no such thing as a free market devoid of government interference. The very idea of capitalism required active state economic management—the regulation of money and debt.

Ideally, according to Keynes, the savings of the people would equal the investments of the business world. But things could go haywire; there was no process by which savings were automatically converted into investment. The impetus to save and the impetus to build were different motivations. “It has been usual to think of the accumulated wealth of the world as having been painfully built up out of that voluntary abstinence of individuals from the immediate enjoyment of consumption which we call thrift,” he wrote in the Treatise. “But it should be obvious that mere abstinence is not enough by itself to build cities….It is enterprise which builds and improves the world’s possessions….If enterprise is afoot, wealth accumulates whatever may be happening to thrift.”54 The role of the banking system was to ensure that the savings of society were perfectly tuned to society’s capacity for investment. If the interest rates lenders offered were set correctly, savings would equal investment and society would operate happily at full employment. But if total investment exceeded the total amount that a society wanted to save, the result would be inflation. And if the reverse occurred—if a society saved more than it invested—the result would be a “slump.”
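
Read this way, the Treatise’s accounting reduces to three cases. A minimal sketch follows, with the classification logic written out only to make those cases explicit (the numbers in the usage lines are mine):

```python
def treatise_outcome(planned_savings: float, planned_investment: float) -> str:
    """Classify an economy along the lines of the passage's reading of the Treatise."""
    if planned_investment > planned_savings:
        return "inflation"        # spending outruns what society wants to save
    if planned_investment < planned_savings:
        return "slump"            # saving outruns enterprise's will to build
    return "full employment"      # the banking system has tuned interest rates correctly

# Illustrative values only:
print(treatise_outcome(planned_savings=100, planned_investment=120))  # inflation
print(treatise_outcome(planned_savings=100, planned_investment=80))   # slump
print(treatise_outcome(planned_savings=100, planned_investment=100))  # full employment
```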

Keynes and Marx also shared the unfortunate fate of being right about the revolutions to come and wrong about their social implications. As Marx predicted, Communists overthrew capitalists all over the world in the twentieth century. Keynes, for his part, got his math about right. If anything, he was overly pessimistic about the economic potential about to be unleashed. By 2008, the Nobel laureate economist Joseph Stiglitz has noted, global economic output reached a level sufficient to raise every man, woman, and child on the face of the earth above the U.S. poverty line—a very great improvement for the domestic poor and an astounding achievement for the global poor.81 According to a recent analysis by Harvard University economist Benjamin M. Friedman, we are, moreover, on track for an eightfold increase in the standard of living in the United States by 2029—if standard of living is taken to mean the total economic output per person.82 “The numbers hang together,” observed another Nobel laureate, Robert Solow83—even though the world did not, in fact, escape several catastrophic wars in the decades since Keynes’ essay. But the age of farmer-critic-fishermen is not yet upon us. We do not live in a utopia where all people work fifteen hours a week, reserving the rest of their time for painting, literature, and walks in the park. What went wrong? In his essay, Keynes distinguished between human needs essential to survival and semi-needs whose “satisfaction lifts us above, makes us feel superior to, our fellows. Needs of the second class, those which satisfy the desire for superiority, may indeed be insatiable.”84 This effort to keep up with the Joneses has no doubt played a role in lengthening the workweek. But the primary culprit is simple inequality. The tremendous expansion of output and productivity over the past ninety years has been harvested for the most part by a very small section of society. For everyone else, economic prospects are roughly where they were in the mid-1920s (although a decline in the overall workweek from 1930 to 1970 suggests very clearly that people are not really eager to work the hours they do). As any working family can attest, they work because they have to.
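
A quick check of why “the numbers hang together”: an eightfold rise in output per person over roughly the century Keynes was looking ahead to implies only a modest compound annual growth rate. The hundred-year horizon below is my own assumption for the arithmetic:

```python
# Eightfold growth over roughly 100 years implies a compound annual rate of
# 8 ** (1 / 100) - 1, i.e. about 2.1 percent per year.
horizon_years = 100   # assumed horizon, roughly 1930 to 2029/2030
growth_factor = 8     # the eightfold increase cited in the passage

annual_rate = growth_factor ** (1 / horizon_years) - 1
print(f"Implied annual growth per person: {annual_rate:.2%}")  # about 2.10%
```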

Both tariffs and monetary adjustments were efforts to alter the flow of trade, thus expanding domestic production and employment. One functioned by changing the price of goods, the other by changing the price of money, but the effect was the same.

Free trade, Ricardo had explained, allowed countries to specialize in what they did best, enabling the world economy to produce more than if each individual country tried to supply itself with homegrown goods. But technology had eliminated many of the advantages of national specialization. International trade was dominated by heavy manufacturing products that could now be made for the same price just about anywhere.

“The class-war faction believe that it is well known what ought to be done; that we are divided between the poor and good who would like to do it, and the rich and wicked who, for reasons of self-interest, wish to prevent it; that the wicked have power; and that a revolution is required to depose them from their seats. I view the matter otherwise. I think it extremely difficult to know what ought to be done, and extremely difficult for those who know (or think they know) to persuade others that they are right—though theories, which are difficult and obscure when they are new and undigested, grow easier by the mere passage of time.” Compared to the persuasive power of good ideas, he insisted, “the power of self-interested capitalists to stand in their way is negligible.”

FDR had a remarkable capacity to show different faces of his political persona to different audiences when it suited him, and his rhetoric didn’t always match his policy agenda. But over the coming years, he would show that he meant what he said on his first day in office. The abandonment of laissez-faire in banking didn’t happen all at once, but it proved to be extremely thorough. Roosevelt would leave the gold standard, socialize the deposit system, nationalize the Federal Reserve System, synchronize monetary policy with fiscal policy by placing the Fed under Treasury oversight, and force the nation’s biggest banks to break up into smaller institutions with narrower lines of business. In sum, he broke the political back of the American financial sector and began using it as an instrument of economic recovery, directed by the federal government. It would prove a triumph of Keynesian policy more comprehensive than Keynes had ever imagined possible in the United States—a fundamental change in the relationship among the state, society, and money.

During the Great Depression, more than half of the country’s population still lived on farms or in the small towns that served as local hubs for agricultural trade (today, about 80 percent of Americans live in cities). And a staggering one-half of all farm loans were in default when FDR came into office.38 The crushing deflation of the Depression had done what it always did to farmers: Though the prices for their produce fell, the loan balances farmers had taken on to seed and harvest their fields remained high. When farmers were forced to sell their crops for less, their debts became overwhelming. FDR established an array of programs to get farmers more attractive loans. But lower rates on mortgages could help only at the margins if the president couldn’t stop the relentless decline of commodity prices. In the summer of 1933, he dispatched his economic adviser, George Warren, to Europe to survey monetary strategies abroad. Warren returned with a grim political assessment. “Hitler is a product of deflation,” he wrote to Roosevelt. “It seems to be a choice between a rise in prices or a rise in dictators.”39 Events at home, meanwhile, had already convinced Roosevelt of the need to take drastic measures. Three weeks after the president had ordered citizens to turn over their gold coins, Judge Charles C. Bradley had taken up a slate of foreclosure cases in Le Mars, Iowa. A total of fifteen farms were at risk of being repossessed when 250 angry farmers descended on Bradley’s courtroom and demanded that he impose a countywide moratorium on foreclosures. The agitators stormed the bench, threw a rope around Bradley’s neck, and dragged him to a country crossroads, where they “nearly lynched him.”40 Roosevelt had prevented a financial collapse on inauguration day, but rural America remained on the verge of revolution. With half of the country living off the land, somewhat higher grocery bills resulting from higher crop prices would have been worth the sacrifice. But Roosevelt decided to bring crop prices up primarily by bringing the value of the dollar down. If it worked, the price of everything, including wages, would effectively go up, easing the effect of higher food costs on household budgets. “It is simply inevitable that we must inflate,” FDR wrote to Woodrow Wilson’s old aide, Colonel Edward M. House. “Though my banker friends may be horrified.”

The government, he argued, should act directly to expand economic “output” and consumer “purchasing power” through deficit-financed expansion. Whatever else FDR might do in office, the fundamental imperative was to spend, spend, spend: “I lay overwhelming emphasis on the increase of national purchasing power resulting from governmental expenditure which is financed by loans and is not merely a transfer through taxation, from existing incomes. Nothing else counts in comparison with this.”

Cheap credit and an expanded money supply were not enough. The government would have to actually spend that new money it created in order to get the economy moving again. Relying on monetary policy alone, Keynes argued, was “like trying to get fat by buying a larger belt. In the United States today your belt is plenty big enough for your belly. It is a most misleading thing to stress the quantity of money, which is only a limiting factor, rather than the volume of expenditure, which is the operative factor.”

This remains the popular understanding of Keynesian economics to this day: in a slump, governments should borrow money and spend it on useful projects to kick-start a recovery. When the government spends this money, it goes into the pockets of its citizens, who in turn can spend it on other wants and needs, expanding the total size of the economy and ensuring a prosperous recovery rather than a downward spiral in which retrenched spending feeds unemployment and further reductions in spending.
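
The chain described here, in which each round of spending becomes someone else’s income and is partly spent again, is the familiar textbook spending multiplier, a geometric series. A minimal sketch; the propensity-to-consume figure is illustrative and not taken from the book:

```python
# If each recipient re-spends a fraction mpc of new income, an initial outlay G
# generates total spending of G * (1 + mpc + mpc**2 + ...) = G / (1 - mpc).
def total_spending(initial_outlay: float, mpc: float, rounds: int = 1000) -> float:
    total, injection = 0.0, initial_outlay
    for _ in range(rounds):
        total += injection
        injection *= mpc  # the portion re-spent in the next round
    return total

G, mpc = 100.0, 0.8                      # illustrative numbers only
print(round(total_spending(G, mpc), 2))  # about 500.0
print(round(G / (1 - mpc), 2))           # closed form: 500.0
```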

Under the competitive market paradigm, economists had been able to argue that workers were paid a wage equal to the true value they added to the business. With competition whittling away waste and excess, workers would end up receiving what economists called the “marginal productivity” of their work. Each worker would be paid an amount exactly equal to how much more productive he or she made the operation. That meant, particularly for the Austrian economists Hayek and Mises, that complaints about low wages were really complaints about worker productivity. If workers wanted better pay, the only sustainable way to get it was by working harder. But that argument would fall apart if it could be shown that labor markets were not perfectly competitive—if, instead, they exhibited some of the features of monopoly. If the only jobs in town were at the coal mine, then the mine owners wouldn’t have to compete with other employers by offering better pay. When Robinson showed that markets were almost always at least somewhat anticompetitive, she believed she had “hacked through” a “prop to laisser-faire ideology.”11 Capitalists, according to Robinson, were chronically underpaying their staff.

The economic system was understood to be apolitical and self-correcting, akin to population dynamics in the natural world. Everything—wages, commodity prices, interest rates, profits—responded automatically to any unexpected change in other areas, quickly bringing the system to an equilibrium in which the maximum amount of goods was being produced and consumed, so that social needs were met to the greatest extent possible.

The material abundance of the Gilded Age had sown doubts in Keynes about the supposed scarcity of resources, but it was the ravages of the Depression that made him certain the old order had it wrong. Clearly the trouble was not a shortage of production. Crops were rotting in the fields while children went hungry in the streets. Producers were not cutting back because they couldn’t afford to meet the high wage demands of workers; laborers were roaming from town to town, desperate for any work at all. As he wrote in the opening chapter, “It is not very plausible to assert that unemployment in the United States in 1932 was due either to labour obstinately refusing to accept a reduction of money-wages or to its obstinately demanding a real wage beyond what the productivity of the economic machine was capable of furnishing.”32 For Keynes, the empirical fact of the Depression proved that the classical theory was wrong. The economy was not self-correcting. Even if politicians were messing things up with bad policy, the system should at some point between 1919 and 1936 have been able to sort itself out. A bad level for gold in 1925 or a wrong-headed tariff in 1931 should have been no different than a bad harvest or a fire, something quickly remedied by the automated magic of supply, demand, and the price mechanism.

Say’s Law meant that there could not be unspent income in a society. Because the supply of new products created its own demand for them, increased production automatically brought the economic system of payment and consumption into equilibrium at a higher level of activity. When the producer of a good accepted its purchase price and passed that income on to workers in the form of wages (enjoying some himself in the form of profits), he created a new source of demand in society exactly equal to the value of what he had produced. That money would be spent on other goods, ensuring that there could be no deficiency of total demand in the economy. Even the money that people set aside as savings was just another form of spending: spending on the future. Say acknowledged that overproduction might occasionally arise in particular industries but insisted that such problems were “only a passing evil” that couldn’t apply to the economy as a whole for any meaningful period of time.

The possibility of excessive savings carried tremendous consequences. Capitalism would be in a state of overproduction. The supply of goods and services would exceed the demand for those goods and services because money—savings—was not being spent. Producers would respond by cutting production and laying people off. That would bring supply and demand into equilibrium, but it would be a bad equilibrium in which nobody made the investments necessary to hire people and expand production. Unemployment could creep in as a permanent part of a low-functioning economy.

Keynes recognized that money was not only a mechanism for transmitting information about the relative values of different goods; it was also a store of value, which enabled people to make and express judgments about their own material security through time.

“Consumption,” he wrote, “is the sole end and object of all economic activity.”38 But money enables us to put off consumption to another day and another day and another indefinitely without losing our ability to consume at some point. We may substitute holding money for realizing actual material satisfaction not out of vice or confusion but out of simple fear for our future prospects. But when we refuse to consume, we deny others their income. This not only forces society to live with less—it risks making our fear into a contagion, realized in the form of decreased production, layoffs, and suffering amid surplus.

People didn’t actually bet on the value of different enterprises; they bet on the judgments of other speculators. As Keynes put it in one of the few accessible passages from The General Theory: “Professional investment may be likened to those newspaper competitions in which the competitors have to pick out the six prettiest faces from a hundred photographs, the prize being awarded to the competitor whose choice most nearly corresponds to the average preferences of the competitors as a whole; so that each competitor has to pick, not those faces which he himself finds prettiest, but those which he thinks likeliest to catch the fancy of the other competitors, all of whom are looking at the problem from the same point of view. It is not a case of choosing those which, to the best of one’s judgment, are really the prettiest, nor even those which average opinion genuinely thinks the prettiest.”39 This didn’t just mean that financial markets were prone to panic and instability, as excitement and emotion overtook cool reasoning; it meant there was no reason to believe that markets ever accurately gauged the value of various investments. Wall Street and the City were perfectly capable of turning extraordinary profits for themselves without doing much for the greater good—indeed, they could do active social harm without intending to. “There is no clear evidence from experience that the investment policy which is socially advantageous coincides with that which is most profitable.”

The chief economic question facing each society, Keynes believed, was no longer what it could afford but how its members would like to live. A titan of industry could not shrug off poverty as an inevitable element of every society. Democracies could choose different paths. Keynes was no longer telling a story about adjusting a machine that generally tended toward a functional, prosperous equilibrium. The General Theory did not prove that governments may need to intervene in the operations of a free market from time to time to correct excesses or imbalances. It showed, instead, that the very idea of a free market independent of government structure and supervision was incoherent. For markets to function, governments had to provide demand. Eras of laissez-faire prosperity like the British golden age before the war were very rare—a “special case” resulting from unique psychological and material circumstances that were impossible to replicate with any regularity through speculative financial markets, in which “the capital development of a country becomes a by-product of the activities of a casino.”

Keynes had, he believed, destroyed “one of the chief social justifications of great inequality of wealth.”48 In his youth, he had understood saving as a virtue that benefited society at large. The fortunes of the rich, accumulated over generations, created a source of investment capital that could be deployed for the benefit of all. With The General Theory, Keynes demonstrated that capital growth was not the result of virtuous saving by the affluent; it was a by-product of the income growth of the masses. Creating large amounts of savings at the top of society did not bring about higher levels of investment. The causal arrow pointed the other way: Creating large amounts of investment caused higher levels of savings. And so “the removal of very great disparities of wealth and income” would improve social harmony and economic functionality.

Keynes argued that the utilitarian moral philosophers of the eighteenth and nineteenth centuries had popularized “a perverted theory of the state” guided by “business arithmetic” in which the final judgment on the social value of any activity was to be found in whether it turned a profit.51 But the market, he argued, was not a reliable statement of society’s preferences, and it could not invisibly guide a polity to salvation. The market simply failed to deliver a host of real social goods that the public enjoyed, particularly art. The things that make life meaningful—beauty, community, a vibrant and multifaceted culture—all required collective, coordinated action. “Our experience has demonstrated plainly that these things cannot be successfully carried on if they depend on the motive of profit and financial success. The exploitation and incidental destruction of the divine gift of the public entertainer by prostituting it to the purposes of financial gain is one of the worser crimes of present-day capitalism.”

Do not discount the power of ideas to triumph over the economic interests of the ruling class. The vested interests of the capitalists, he argued, did not reign sovereign over the great gears of human history; the beliefs and ideas of the people did. They could choose to shrug off the suffering and dysfunction of the past two decades without resorting to violent revolutionary upheaval. All they needed was to be convinced by an idea. Is the fulfillment of these ideas a visionary hope?…The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influences, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back. I am sure that the power of vested interests is vastly exaggerated compared with the gradual encroachment of ideas. Not, indeed, immediately, but after a certain interval; for in the field of economic and political philosophy there are not many who are influenced by new theories after they are twenty-five or thirty years of age, so that the ideas which civil servants and politicians and even agitators apply to current events are not likely to be the newest. But, soon or late, it is ideas, not vested interests, which are dangerous for good or evil.

“When F.D.R. came to office in March 1933, so desperate was the economic position that for the business and financial community he was an angel of rescue,” he later wrote. “By 1934, things were enough better so that his efforts on behalf of farmers and the unemployed, his tendency to make light of economic orthodoxy, could be disliked and even feared. Roosevelt had become ‘that man in the White House’ and ‘the traitor to his class.’ ”34 The ill will between Roosevelt and the rich was a matter of power, not results. No peacetime U.S. president in the years since has matched the economic growth achieved during the first three full years of FDR’s administration. Adjusted for inflation, the economy grew by a monumental 10.8 percent, 8.9 percent, and 12.9 percent during 1934, 1935, and 1936, respectively.35 Over the course of his first term, the unemployment rate plunged from over 20 percent to less than 10 percent, as the ranks of the unemployed were thinned by more than half, from roughly 11.5 million to 4.9 million (there were about 1.4 million unemployed prior to the stock market crash).36 Only once has a U.S. wartime economy matched Roosevelt’s initial economic miracle—a few years later, during the mobilization for World War II.

Though FDR had to wrestle with Congress, the Supreme Court, and even himself over spending, taxes, regulations, budget deficits, and everything else that made up the New Deal, he was in fact spending a lot of money, nearly doubling the expenditures of the federal government from $4.6 billion to $8.2 billion as the deficit surged from $2.6 billion to $4.3 billion—though he offset some of the deficit impact of his new programs by increasing taxes on the wealthy. Those figures were modest compared to what Keynes had advocated and indeed compared to what was to come. During his 1934 trip to the United States, Keynes had advocated annual deficits of $4.8 billion to members of the administration. In 1936, federal outlays still accounted for less than one-tenth of the total U.S. economy. By the end of the war, government projects would total $92.7 billion a year and account for more than 40 percent of all U.S. economic activity (since the beginning of Ronald Reagan’s presidency, spending has fluctuated by a few percentage points around 20 percent of gross domestic product).37

All of this offended the policy sensibilities of the elite, who hated progressive taxation, deficits, and devaluation as much as the British banking establishment did. But there was more at stake than Wall Street’s bottom line. The New Deal did not, in fact, crimp legitimate business on Wall Street; Roosevelt just reorganized it. In 1935, with the United States off gold and onto Glass-Steagall, and with the SEC policing traders and the federal government incurring unheard-of deficits, the amount of securities offerings underwritten by investment banks expanded to four times the level of the previous year.38 With the economy growing rapidly, brokers and traders had more work to do. Everybody did. But the rich, as a group of Harvard economists observed, continued to “complain bitterly” of their tax burden, which they perceived as a violation of “divine right”—even though “the additions to their incomes, resulting from the government’s activities, are far greater in amount than the additional taxes they pay.”39 Jack Morgan, according to one chronicler of the family, viewed the New Deal “less as a set of economic reforms than as a direct, malicious assault on the social order.”40 Which, of course, it was.
Morgan was only the most obvious, iconic embodiment of what was quickly becoming a hereditary American nobility. Close friends with King George V, adored by the king’s infant granddaughter who would one day become Queen Elizabeth II, Jack enjoyed traditional aristocratic recreations, shooting pheasant when the affairs of his firm overtaxed his nerves. But whereas the landed European gentry of the nineteenth century had understood themselves as a chosen elect, Morgan and his elite countrymen believed they had won their place in society through business acumen and the sound stewardship of a grateful society. That was an incredible idea for a man who had been handed the most powerful post in American finance from his father, who in turn had inherited the banking house from his father before him. It was nevertheless sincere. Even the great scourge of Wall Street, Ferdinand Pecora, commended Morgan for his “deeply genuine” testimony before his Senate committee, in which Jack stated it was impossible for a “private banker” to “become too powerful,” because such status was attained “not from the possession of large means, but from the confidence of the people” and the “respect and esteem of the community.”41 This self-conception was fed by the energy both Jack and his father had devoted to philanthropy, paying hundreds of thousands of dollars a year in salaries for Episcopalian clergy and underwriting social services offered by the church. Jack even opened his father’s study and art collection to the public as a museum. That was standard social stewardship for the Carnegies, Mellons, and Fricks who dominated the U.S. economy.

The New Deal dynamited the whole worldview. Not only had FDR shackled families like the Morgans with new taxes, regulations, auditors, and overlords, his system actually worked. It was not the great genius of financial patricians that made the economy grow at unheard-of rates; it was, as Keynes had argued, the purchasing power of the masses. It sent Morgan into paroxysms of fury. Even the mention of Teddy Roosevelt prompted him to scream “God damn all Roosevelts!”42 As his sense of self-worth and place in society collapsed, he retreated to the safety of his banking fief, discarding his former sense of noblesse oblige. “I just want you to know,” he shouted to Dawes Plan architect Owen Young, “that I don’t care a damn what happens to you or anybody else. I don’t care what happens to the country….All I care about is this business! If I could help it by going out of this country and establishing myself somewhere else I’d do it—I’d do anything.”43

“Regardless of party and regardless of region, today, with few exceptions,” wrote Time, “members of the so-called Upper Class frankly hate Franklin Roosevelt.”44 The president returned the favor. Subjected to relentless attacks from “the Wall Street bankers” throughout his first term, he denounced them as “economic royalists” in a fiery speech to the Democratic National Convention in 1936. “They had begun to consider the Government of the United States as a mere appendage to their own affairs,” he roared from the podium. “We know now that Government by organized money is just as dangerous as Government by organized mob. Never before in all our history have these forces been so united against one candidate as they stand today. They are unanimous in their hate for me—and I welcome their hatred!”45 There was at least as much political calculation in FDR’s posture as genuine outrage.
His inner circle still included a few baffled but pragmatic bankers, typically from outsider firms or those allied with new industries. Sidney Weinberg, head of the then-minor investment bank Goldman Sachs, was an FDR confidant from the 1932 campaign until the president’s death.46 And FDR studiously courted advice from and sought avenues for agreement with Morgan partner Owen D. Young. A conservative Democrat, Young tried his best to cooperate, though in moments of weakness he wondered if a “totalitarian state” might not be better equipped than Roosevelt’s version of democracy to administer “economically desirable” “self-discipline”—particularly corporate tax cuts.47 But Roosevelt’s counterpunches against the elite had a powerful effect on public opinion. The financiers who denounced him were not going to vote Democrat, but attacks raining down onto Roosevelt from such prestigious men could erode support among voters who were genuinely on the fence. Roosevelt called into question the legitimacy of his opponents and rallied his own supporters against them. Anti-FDR fervor was no longer a reasoned critique from learned men but merely the kind of thing you could expect from people who didn’t like democracy. “When Roosevelt countered, a whole generation joined on his side,” Galbraith observed. “If the privileged were against Roosevelt, we obviously must be against privilege. If Roosevelt found the moral posture of big business unconvincing or fraudulent, it must be so.”

Consciously or not, FDR was taking the ideas of The Economic Consequences of the Peace and expanding them into a foreign policy doctrine of breathtaking ambition: “In the future days, which we seek to make secure, we look forward to a world founded upon four essential human freedoms. The first is freedom of speech and expression—everywhere in the world. The second is freedom of every person to worship God in his own way—everywhere in the world. The third is freedom from want—which, translated into world terms, means economic understandings which will secure to every nation a healthy peacetime life for its inhabitants—everywhere in the world. The fourth is freedom from fear—which, translated into world terms, means a world-wide reduction of armaments to such a point and in such a thorough fashion that no nation will be in a position to commit an act of physical aggression against any neighbor—anywhere in the world. That is no vision of a distant millennium. It is a definite basis for a kind of world attainable in our own time and generation. That kind of world is the very antithesis of the so-called new order of tyranny which the dictators seek to create with the crash of a bomb.”

Subsequent American war advocates have invariably cited the protection of human rights abroad as an overriding moral concern, often attesting to high ideals to divert attention from less benign motivations: claims on resources, imperial strategy, or simple belligerence. The pattern began in World War II. While FDR pitched the conflict to Americans as a fight for human rights “anywhere in the world,” the U.S. State Department—the chief organ of American diplomacy—repeatedly refused aid to Jewish refugees. On the West Coast, more than 100,000 Japanese Americans were forced from their homes and ordered to report to internment camps, a policy that originated in Roosevelt’s War Department.

But with the war orders coming in on a massive scale, Keynes insisted that it was only a matter of time until rapid price increases took hold. Americans would need to have a battle plan ready when they did. First, he said, speculators anticipating an increase in production from the war would bid up the prices of key commodities—everything from cotton for uniforms to iron, coal, and cement. Next, as workers joined the military or filled positions in military manufacturing, employers would begin offering higher salaries to attract and retain talent. After that, labor unions, correctly perceiving their greater leverage with employers, would begin to demand—and receive—higher pay under collective bargaining contracts. All of this would have an impact on prices. Commodity speculation would raise the cost of raw goods for manufacturers and force them to charge retailers more, while retailers would sense the better purchasing position of their customers and raise prices themselves. The entire phenomenon would be exacerbated by the fact that enormous segments of the economy, though operating at full tilt and with essentially no unemployment, would be producing war materiel for use overseas rather than consumer goods to be purchased at home. The purchasing power created by widespread availability of good-paying jobs would face a shortage of products it could actually buy. Demand would rage far ahead of supply. Without “heavy taxation, a high pressure savings campaign or rationing on a wide scale,” the United States was in store for an inflationary explosion.

During World War I, rising prices had accrued to industrialists in the form of higher profits, which were then taxed away by the government, borrowed by the government, or spent on consumer goods, further driving up their prices. When those profits were borrowed, the industrialists received an asset—bonds—that their workers did not. Workers benefited only in the form of higher pay—and that was cold comfort, since the value of their paychecks was steadily being inflated away. The most egalitarian method, of course, would have been to tax profits to the hilt—but there was a limit to how much governments could actually tax. In the United States, for instance, the tax rate on the highest incomes would eventually reach 94 percent during the war. For taxes to really do the trick, they would ultimately have to hit working people of more modest means. By forcing workers to accept a program of “deferred pay,” Keynes was attempting to redistribute postwar wealth from the investor class to the working class.

The title of the piece—How to Pay for the War—is misleading. Compulsory savings wouldn’t really “pay” for anything. By hook or by crook, the British government was going to maximize war production. When it wanted bombs, it would make them, and, since the gold standard was long gone, it could print the money to pay for them without having to yoke its printing presses to the amount of gold at the Bank of England. Mandatory savings were a way of managing inflation. By pulling demand out of the economy—reducing the purchasing power of ordinary people—Keynes wanted to limit their ability to bid up retail prices.

This was a critical observation about the way money, debt, and even taxes functioned in a post–gold standard world. In 1931, it had been possible for the British government to spend so much money that it could not meet its debt obligations, because it could print only so much money; its debts were written in pounds tied to a certain amount of gold. Under the gold standard, it was possible for a government to run out of money; there was only so much gold in the vaults. But a government that controlled its own currency, Keynes observed, could not go bankrupt. Under the fiat currency that had prevailed in Great Britain since 1931, the government could easily print its way out of excessive debt. Taken to extremes, the consequence of that strategy would be inflation, of course. And so the purpose of taxes—or deferred savings or any similar instrument—was not to “pay” for government services but to regulate the value of money.

By declaring “freedom from want” a human right, FDR had presented the social reforms of the New Deal as a moral imperative every bit as pressing as the military defeat of Nazism. By including it in the Atlantic Charter, he and Churchill had declared personal economic security a defining characteristic of any democracy, a bedrock guarantee that distinguished a free society from tyranny. Hayek turned this argument on its head—a daring maneuver at the height of the war that had transformed FDR and Churchill into figures of public adulation. The very idea of “economic freedom,” Hayek argued, was antithetical to what true advocates of political freedom had championed for centuries. “Freedom from necessity,” he claimed, was an inherently “socialist” idea. It was not a bulwark for the democracies against Nazism but an ingredient of Nazism and Soviet communism alike, which could only be effectively implemented by a violent dictatorship that crushed other political rights.

The antigovernment refrain of The Road to Serfdom was perfectly in key with Mises’ uncompromising libertarian tract Bureaucracy, published in the same year, in which Hayek’s mentor forcefully declared New Deal liberalism a variant of authoritarian communism. “Capitalism means free enterprise, sovereignty of the consumers in economic matters, and sovereignty of the voters in political matters,” he wrote. “Socialism means full government control of every sphere of the individual’s life….There is no compromise possible between these two systems.”12 You could have laissez-faire, or you could have Soviet Russia; there was no middle ground. Hayek recognized that the all-or-nothing severity of his old instructor was a political dead end in an era in which every government seemed to be pursuing new Keynesian reforms. And so, like Lippmann before him, Hayek attempted to graft his laissez-faire conception of liberty onto something compatible with the emerging modern nation-state. The government might be allowed to maintain some basic minimum standard of living for everyone, after all. He drew a distinction between “regulation”—which was merely designed to solve obvious problems—and dangerous “planning”—which could only be achieved by a dictator orchestrating the lives and limiting the choices of free individuals. The size and scope of corporate enterprises, he argued, should be closely limited and monitored to prevent big firms from interfering with free competition in the marketplace.

In The End of Laissez-Faire, Keynes had argued that liberalism could not stand on abstract principles alone; it had to actually deliver the goods for the people who lived under it. Laissez-faire had led to vast inequality and grinding depression, failing a basic test for democratic legitimacy. By shrugging off the practical shortcomings of laissez-faire, Keynes argued, Hayek had deluded himself about the causes of dictatorship in Germany. The economic fuel for the rise of Hitler had been the suffering and despair generated by deflation—not the social welfare policies Hayek decried as “socialism.” The democracies of the world could not turn their backs on the economic strategies that had rejuvenated them in the late 1930s and 1940s; doing so would only unleash a new wave of political uncertainty, encouraging new authoritarian social movements. Hayek’s call to abandon the New Deal and Keynesian economic management was a recipe for more strongmen. “What we need therefore, in my opinion, is not a change in our economic programmes, which would only lead in practice to disillusion with the results of your philosophy,” he wrote, “but perhaps even the contrary, namely, an enlargement of them.”

Hayek and Keynes agreed that democracy was not the fundamental organizing principle of society; it was a tool for achieving more important goals. They even agreed that the most critical function of democracy was its ability to produce a vibrant, elite culture. The value Keynes placed on Bloomsbury was in some respects very similar to Hayek’s appreciation for the old Viennese aristocracy. But to Keynes, nothing was lost in guiding all the world to Bloomsbury, while for Hayek, aristocracy was inherently exclusive; the whole point was that not everyone could be an aristocrat.

Unemployment was a breeding ground for fascism. It created dangerous political instability and a source of anger that could easily be weaponized. The terms of trade might help or hurt efforts to establish international goodwill, but tariffs or no tariffs, the legitimacy of an international economic order depended entirely on whether it did, in fact, provide for mutual prosperity.

The gold standard, he maintained, had broken down because it forced countries into deflationary corners. Countries that ran trade deficits became entirely responsible for restoring trade balance, Keynes believed, and they would eventually be placed in a position where they could only achieve competitive prices for their goods abroad by forcing down domestic wages, causing mass domestic unemployment. If Britain, for instance, ran a trade deficit with the United States by importing more than it exported, it would result in a balance-of-payments problem: Britain would be paying out more money to the United States than it was taking in. If the situation persisted long enough, Britain would run out of money with which to pay for American goods. This problem could in theory be resolved by international lending. If Americans, flush with money earned by exporting so many goods, made loans on reasonable terms to Great Britain, then the British would have the money needed to keep buying American goods.

Under the ethical norms of the gold standard, the resulting suffering was the price a country had to pay for being weak or lazy. Keynes readily accepted that many countries had ineffective economic infrastructure. But nations often ran trade deficits because they had to, not because they were any more or less reckless than countries running trade surpluses. What’s more, governments that ran surpluses weren’t in fact being injured by countries that ran deficits. Though the deficit country would run up large financial debts, the surplus country enjoyed a fat export trade that employed its workers and raised its standard of living. The gold standard ethic heaped shame upon countries for piling up large debts, but it was the surplus countries that benefited most from those debts—and benefited at the expense of the debtor country’s employment. Keynes recognized that in the international order, as in ordinary life, the real villains were rarely beggars.

In Samuelson’s hands, human behavior and the economy more broadly were best understood as rational, profit-maximizing endeavors. Markets would clear themselves, and supply and demand would find their own rational equilibrium, just as David Ricardo and Adam Smith had posited long ago. But they would only do so, according to Samuelson, when the economy was operating at something close to full employment. By deploying Keynesian deficit spending or providing Keynesian tax cuts, policy makers could keep the economy from slipping “into a topsy-turvy wonderland where right seems left and left is right; up seems down; and black, white.”11 So long as unemployment did not spin out of control, the rational, profit-maximizing behavior of human beings would allow statistics to reliably predict when and where economic forces would reach equilibrium—if the data were sufficiently accurate.

With the age of economic scarcity ended, Galbraith believed that many of the objections economists had raised about economic organization in the past were no longer significant. Corporate monopolies might well be wasteful, but waste was not very important. What mattered was power. And even tremendous concentrations of power such as those of the modern corporation were not necessarily a problem so long as they were “countervailed” by other great powers—other large corporations in the supply chain or distribution scheme or, more important, powerful labor unions and a powerful government.

The whole point of The General Theory, she believed, was to show that economic production could not be understood as a self-sustaining set of processes independent from social norms and political realities.

American Capitalism had celebrated the end of scarcity. Now The Affluent Society decried the country’s increasing dependence on unnecessary production to establish the financial security of most families. The relentless postwar reliance on boosting economic output as the chief, if not only, means of improving the American standard of living had subjugated the work of democracy to the mechanics of the market. Nobody in her right mind would choose to work longer hours for dirty public parks. But that was what the logic of the market was dictating, because the market could only reward ideas that turned a profit. Nobody stood to profit from clean parks; they were just nicer to live with than dirty parks. But if nobody made the political judgment that clean parks were better, a society organized around profit incentives from production alone would almost automatically end up with dirty parks. The market was not an impartial guide to the beliefs of the public, and some of its verdicts were crazy.

Under the leadership of Marriner Eccles in the 1930s, the Fed board of governors in Washington had effectively fused with the Treasury Department, allowing the United States to pursue a unified fiscal and monetary agenda. Under the arrangement, the Fed pursued a monetary policy that kept interest rates low and money cheap for both banks and the federal government. Inflation and unemployment were managed not by interest rate adjustments but by fiscal policy—government spending and taxation—and, during the war, price controls. From 1937 to 1947, the Fed kept the discount rate at 1 percent, and beginning in 1942, it publicly coordinated monetary policy with the Treasury Department to keep down the interest rate on World War II bonds. Even after the war, when inflation briefly shot up after price controls were eliminated, the United States didn’t battle rising prices with high interest rates and the unemployment high interest rates created. As late as 1951, the discount rate was still just 1.75 percent, and the Fed remained formally committed to guaranteeing a specific, predictable interest rate on U.S. government debt.

Helping the rich get richer, Kennedy had argued, was the surest way to help the country. “I am not sure,” Galbraith had previously told Kennedy, “what the advantage is in having a few more dollars to spend if the air is too dirty to breathe, the water too polluted to drink, the commuters are losing out on the struggle to get in and out of the cities, the streets are filthy, and the schools so bad that the young, perhaps wisely, stay away.”

According to Friedman, there was a “natural rate” of unemployment below which no policy maker, fiscal or monetary, could push the economy without causing inflation. It was hard to pinpoint just what this “natural rate” was; it depended on technology, productivity, unionization rates, and regulatory policies. But tinkering with fiscal or monetary policy to boost employment was a fool’s errand.

Two weeks after the president told Connally to go off on Galbraith, the British Treasury informed the Nixon administration that it was about to shore up the pound by redeeming $3 billion in U.S. assets—dollars and Treasury securities—for gold. It was, in essence, a vote of no confidence against American inflation management. The United States was the only country in the Bretton Woods system with a currency convertible into gold. For the British, there was no difference between holding a dollar bill and holding the dollar’s exchange weight in gold—unless they expected the value of those dollars to decline. The United States had been leaking gold for years thanks to inflationary pressures and the new phenomenon of an American trade deficit. And so U.S. trading partners increasingly preferred holding gold to holding dollars. Great Britain’s decision was sure to rattle financial markets all over the world. A bold, multibillion-dollar gesture from a close diplomatic ally might even spark a run on the dollar.

Academic economics became dominated by conservative ideas. Monetarism quickly faded once Volcker found he couldn’t accurately or effectively target the precise supply of money in the economy. It was replaced by the rational expectations hypothesis, formulated by future Nobel laureate Robert Lucas. The rational expectations school essentially took Friedman’s ideas about price expectations and applied them to government policy. Rational people, according to Lucas, would factor the future effects of any change in tax rates or regulatory arrangements into their economic decisions. Increasing government spending to boost the economy was futile, according to this thinking, because people would recognize that the resulting budget deficit would eventually have to be cured through higher taxes and would therefore save any money they received in anticipation of future tax bills. As a result, it was impossible for policy makers to make any lasting improvement in the lives of citizens through macroeconomic management; the market would quickly adjust and subsequently overrule the government meddlers. It was as if Keynes had never existed; uncertainty had given way to hyperrationality and the ability to see the future. Lucas even went so far as to claim that his work had rendered the entire field of macroeconomics superfluous.

At its core, The General Theory of Employment, Interest and Money was a book about the dangers and limitations of financial markets. Given uncertainty about the future, it was impossible for markets to accurately price the full slate of risks attached to any financial asset. Investors were constantly processing new, unexpected information and attitudes, including their own. If a society relied excessively on financial markets to allocate resources, develop research, and improve industry, Keynes believed, it was destined for underperformance, instability, and unemployment. He had designed a theory and a policy agenda in which financial markets were subjugated to the authority of the state, believing the coordinated action of a government was capable of meeting the investment needs of society which financial markets could only secure through fleeting accidents. The Clinton administration was doing the opposite of what Keynes had prescribed: subjugating both the governing agenda of American democracy and the direction of global economic development to the currents of international capital markets.

As Joseph Stiglitz concluded in 2017, globalization “was an agenda that was driven by large corporations at the expense of workers, consumers, [and] citizens in both the developed and developing world.”39 The social milieus of citizens and shareholders became increasingly divergent, leading to disparities not only in wealth but in education and physical health, with those further down the income ladder registering lower test scores and shorter life expectancies, according to the OECD.40 The result has been heightened political tension not only between different countries but within individual nation-states as economically insecure populations question whether they do in fact belong to the same political project as their more affluent neighbors. “I think globalization has contributed to tearing societies apart,” argues economist Dani Rodrik.

The central problems of the twentieth century, Keynes argued, were best solved by alleviating inequality. Enterprise and economic growth were driven not by the unique genius and vast fortunes of the very rich but by the purchasing power of the masses, which created markets for new ideas. To put people to work, governments needed to create systems of support for the poor and the middle class, not new favors for the rich.


Key Points from Think Again: The Power of Knowing What You Don’t Know

by Adam Grant

When people reflect on what it takes to be mentally fit, the first idea that comes to mind is usually intelligence. The smarter you are, the more complex the problems you can solve—and the faster you can solve them. Intelligence is traditionally viewed as the ability to think and learn. Yet in a turbulent world, there’s another set of cognitive skills that might matter more: the ability to rethink and unlearn. Imagine that you’ve just finished taking a multiple-choice test, and you start to second-guess one of your answers. You have some extra time—should you stick with your first instinct or change it? About three quarters of students are convinced that revising their answer will hurt their score. Kaplan, the big test-prep company, once warned students to “exercise great caution if you decide to change an answer. Experience indicates that many students who change answers change to the wrong answer.” With all due respect to the lessons of experience, I prefer the rigor of evidence. When a trio of psychologists conducted a comprehensive review of thirty-three studies, they found that in every one, the majority of answer revisions were from wrong to right. This phenomenon is known as the first-instinct fallacy.

We don’t just hesitate to rethink our answers. We hesitate at the very idea of rethinking.

Part of the problem is cognitive laziness. Some psychologists point out that we’re mental misers: we often prefer the ease of hanging on to old views over the difficulty of grappling with new ones. Yet there are also deeper forces behind our resistance to rethinking. Questioning ourselves makes the world more unpredictable. It requires us to admit that the facts may have changed, that what was once right may now be wrong. Reconsidering something we believe deeply can threaten our identities, making it feel as if we’re losing a part of ourselves.

We favor the comfort of conviction over the discomfort of doubt, and we let our beliefs get brittle long before our bones. We laugh at people who still use Windows 95, yet we still cling to opinions that we formed in 1995. We listen to views that make us feel good, instead of ideas that make us think hard.

A hallmark of wisdom is knowing when it’s time to abandon some of your most treasured tools—and some of the most cherished parts of your identity.

We’re swift to recognize when other people need to think again. We question the judgment of experts whenever we seek out a second opinion on a medical diagnosis. Unfortunately, when it comes to our own knowledge and opinions, we often favor feeling right over being right. In everyday life, we make many diagnoses of our own, ranging from whom we hire to whom we marry. We need to develop the habit of forming our own second opinions.

Two decades ago my colleague Phil Tetlock discovered something peculiar. As we think and talk, we often slip into the mindsets of three different professions: preachers, prosecutors, and politicians. In each of these modes, we take on a particular identity and use a distinct set of tools. We go into preacher mode when our sacred beliefs are in jeopardy: we deliver sermons to protect and promote our ideals. We enter prosecutor mode when we recognize flaws in other people’s reasoning: we marshal arguments to prove them wrong and win our case. We shift into politician mode when we’re seeking to win over an audience: we campaign and lobby for the approval of our constituents. The risk is that we become so wrapped up in preaching that we’re right, prosecuting others who are wrong, and politicking for support that we don’t bother to rethink our own views.

If you’re a scientist by trade, rethinking is fundamental to your profession. You’re paid to be constantly aware of the limits of your understanding. You’re expected to doubt what you know, be curious about what you don’t know, and update your views based on new data.

Research reveals that the higher you score on an IQ test, the more likely you are to fall for stereotypes, because you’re faster at recognizing patterns. And recent experiments suggest that the smarter you are, the more you might struggle to update your beliefs.

In psychology there are at least two biases that drive this pattern. One is confirmation bias: seeing what we expect to see. The other is desirability bias: seeing what we want to see. These biases don’t just prevent us from applying our intelligence. They can actually contort our intelligence into a weapon against the truth. We find reasons to preach our faith more deeply, prosecute our case more passionately, and ride the tidal wave of our political party. The tragedy is that we’re usually unaware of the resulting flaws in our thinking.

Research shows that when people are resistant to change, it helps to reinforce what will stay the same. Visions for change are more compelling when they include visions of continuity. Although our strategy might evolve, our identity will endure.

In theory, confidence and competence go hand in hand. In practice, they often diverge. You can see it when people rate their own leadership skills and are also evaluated by their colleagues, supervisors, or subordinates. In a meta-analysis of ninety-five studies involving over a hundred thousand people, women typically underestimated their leadership skills, while men overestimated their skills.

David Dunning and Justin Kruger found that in many situations, those who can’t . . . don’t know they can’t. According to what’s now known as the Dunning-Kruger effect, it’s when we lack competence that we’re most likely to be brimming with overconfidence.

As Dunning quips, “The first rule of the Dunning-Kruger club is you don’t know you’re a member of the Dunning-Kruger club.”

If we’re certain that we know something, we have no reason to look for gaps and flaws in our knowledge—let alone fill or correct them. In one study, the people who scored the lowest on an emotional intelligence test weren’t just the most likely to overestimate their skills. They were also the most likely to dismiss their scores as inaccurate or irrelevant—and the least likely to invest in coaching or self-improvement.

It’s when we progress from novice to amateur that we become overconfident. A bit of knowledge can be a dangerous thing. In too many domains of our lives, we never gain enough expertise to question our opinions or discover what we don’t know. We have just enough information to feel self-assured about making pronouncements and passing judgment, failing to realize that we’ve climbed to the top of Mount Stupid without making it over to the other side.

What he lacked was a crucial nutrient for the mind: humility. The antidote to getting stuck on Mount Stupid is taking a regular dose of it. “Arrogance is ignorance plus conviction,” blogger Tim Urban explains. “While humility is a permeable filter that absorbs life experience and converts it into knowledge and wisdom, arrogance is a rubber shield that life experience simply bounces off of.”

What we want to attain is confident humility: having faith in our capability while appreciating that we may not have the right solution or even be addressing the right problem. That gives us enough doubt to reexamine our old knowledge and enough confidence to pursue new insights.

From time to time, though, a less crippling sense of doubt waltzes into many of our minds. Some surveys suggest that more than half the people you know have felt like impostors at some point in their careers. It’s thought to be especially common among women and marginalized groups. Strangely, it also seems to be particularly pronounced among high achievers.

Plenty of evidence suggests that confidence is just as often the result of progress as the cause of it. We don’t have to wait for our confidence to rise to achieve challenging goals. We can build it through achieving challenging goals. “I have come to welcome impostor syndrome as a good thing: it’s fuel to do more, try more,” Halla says. “I’ve learned to use it to my advantage. I actually thrive on the growth that comes from the self-doubt.”

In a classic paper, sociologist Murray Davis argued that when ideas survive, it’s not because they’re true—it’s because they’re interesting. What makes an idea interesting is that it challenges our weakly held opinions.

“Presented with someone else’s argument, we’re quite adept at spotting the weaknesses,” journalist Elizabeth Kolbert writes, but “the positions we’re blind about are our own.”

He genuinely enjoys discovering that he was wrong, because it means he is now less wrong than before.

He’s a scientist devoted to the truth. When I asked him how he stays in that mode, he said he refuses to let his beliefs become part of his identity. “I change my mind at a speed that drives my collaborators crazy,” he explained. “My attachment to my ideas is provisional. There’s no unconditional love for them.”

Most of us are accustomed to defining ourselves in terms of our beliefs, ideas, and ideologies. This can become a problem when it prevents us from changing our minds as the world changes and knowledge evolves. Our opinions can become so sacred that we grow hostile to the mere thought of being wrong, and the totalitarian ego leaps in to silence counterarguments, squash contrary evidence, and close the door on learning. Who you are should be a question of what you value, not what you believe. Values are your core principles in life—they might be excellence and generosity, freedom and fairness, or security and integrity. Basing your identity on these kinds of principles enables you to remain open-minded about the best ways to advance them. You want the doctor whose identity is protecting health, the teacher whose identity is helping students learn, and the police chief whose identity is promoting safety and justice. When they define themselves by values rather than opinions, they buy themselves the flexibility to update their practices in light of new evidence.

The single most important driver of forecasters’ success was how often they updated their beliefs. The best forecasters went through more rethinking cycles. They had the confident humility to doubt their judgments and the curiosity to discover new information that led them to revise their predictions.

That was a common mistake in 2016. Countless experts, pollsters, and pundits underestimated Trump—and Brexit—because they were too emotionally invested in their past predictions and identities. If you want to be a better forecaster today, it helps to let go of your commitment to the opinions you held yesterday. Just wake up in the morning, snap your fingers, and decide you don’t care. It doesn’t matter who’s president or what happens to your country. The world is unjust and the expertise you spent decades developing is obsolete! It’s a piece of cake, right? About as easy as willing yourself to fall out of love. Somehow, Jean-Pierre Beugoms managed to pull it off.

If we’re insecure, we make fun of others. If we’re comfortable being wrong, we’re not afraid to poke fun at ourselves. Laughing at ourselves reminds us that although we might take our decisions seriously, we don’t have to take ourselves too seriously. Research suggests that the more frequently we make fun of ourselves, the happier we tend to be.

What forecasters do in tournaments is good practice in life. When you form an opinion, ask yourself what would have to happen to prove it false. Then keep track of your views so you can see when you were right, when you were wrong, and how your thinking has evolved.

Andrew Lyne is not alone. Psychologists find that admitting we were wrong doesn’t make us look less competent. It’s a display of honesty and a willingness to learn. Although scientists believe it will damage their reputation to admit that their studies failed to replicate, the reverse is true: they’re judged more favorably if they acknowledge the new data rather than deny them. After all, it doesn’t matter “whose fault it is that something is broken if it’s your responsibility to fix it,” actor Will Smith has said. “Taking responsibility is taking your power back.”

Relationship conflict is destructive in part because it stands in the way of rethinking. When a clash gets personal and emotional, we become self-righteous preachers of our own views, spiteful prosecutors of the other side, or single-minded politicians who dismiss opinions that don’t come from our side. Task conflict can be constructive when it brings diversity of thought, preventing us from getting trapped in overconfidence cycles. It can help us stay humble, surface doubts, and make us curious about what we might be missing. That can lead us to think again, moving us closer to the truth without damaging our relationships.

We learn more from people who challenge our thought process than those who affirm our conclusions. Strong leaders engage their critics and make themselves stronger. Weak leaders silence their critics and make themselves weaker. This reaction isn’t limited to people in power. Although we might be on board with the principle, in practice we often miss out on the value of a challenge network.

Agreeableness is about seeking social harmony, not cognitive consensus. It’s possible to disagree without being disagreeable. Although I’m terrified of hurting other people’s feelings, when it comes to challenging their thoughts, I have no fear. In fact, when I argue with someone, it’s not a display of disrespect—it’s a sign of respect. It means I value their views enough to contest them. If their opinions didn’t matter to me, I wouldn’t bother. I know I have chemistry with someone when we find it delightful to prove each other wrong.

Experiments show that simply framing a dispute as a debate rather than as a disagreement signals that you’re receptive to considering dissenting opinions and changing your mind, which in turn motivates the other person to share more information with you. A disagreement feels personal and potentially hostile; we expect a debate to be about ideas, not emotions. Starting a disagreement by asking, “Can we debate?” sends a message that you want to think like a scientist, not a preacher or a prosecutor—and encourages the other person to think that way, too.

A good debate is not a war. It’s not even a tug-of-war, where you can drag your opponent to your side if you pull hard enough on the rope. It’s more like a dance that hasn’t been choreographed, negotiated with a partner who has a different set of steps in mind. If you try too hard to lead, your partner will resist. If you can adapt your moves to hers, and get her to do the same, you’re more likely to end up in rhythm.

In a war, our goal is to gain ground rather than lose it, so we’re often afraid to surrender a few battles. In a negotiation, agreeing with someone else’s argument is disarming. The experts recognized that in their dance they couldn’t stand still and expect the other person to make all the moves. To get in harmony, they needed to step back from time to time.

Most people think of arguments as being like a pair of scales: the more reasons we can pile up on our side, the more it will tip the balance in our favor. Yet the experts did the exact opposite: They actually presented fewer reasons to support their case. They didn’t want to water down their best points. As Rackham put it, “A weak argument generally dilutes a strong one.”

We won’t have much luck changing other people’s minds if we refuse to change ours. We can demonstrate openness by acknowledging where we agree with our critics and even what we’ve learned from them. Then, when we ask what views they might be willing to revise, we’re not hypocrites.

Research suggests that the effectiveness of these approaches hinges on three key factors: how much people care about the issue, how open they are to our particular argument, and how strong-willed they are in general. If they’re not invested in the issue or they’re receptive to our perspective, more reasons can help: people tend to see quantity as a sign of quality. The more the topic matters to them, the more the quality of reasons matters. It’s when audiences are skeptical of our view, have a stake in the issue, and tend to be stubborn that piling on justifications is most likely to backfire. If they’re resistant to rethinking, more reasons simply give them more ammunition to shoot our views down.

When someone becomes hostile, if you respond by viewing the argument as a war, you can either attack or retreat. If instead you treat it as a dance, you have another option—you can sidestep. Having a conversation about the conversation shifts attention away from the substance of the disagreement and toward the process for having a dialogue. The more anger and hostility the other person expresses, the more curiosity and interest you show. When someone is losing control, your tranquility is a sign of strength. It takes the wind out of their emotional sails. It’s pretty rare for someone to respond by screaming “SCREAMING IS MY PREFERRED MODE OF COMMUNICATION!”

Research shows that in courtrooms, expert witnesses and deliberating jurors are more credible and more persuasive when they express moderate confidence, rather than high or low confidence.

There’s evidence that people are more interested in hiring candidates who acknowledge legitimate weaknesses as opposed to bragging or humblebragging.

We might as well get credit for having the humility to look for them, the foresight to spot them, and the integrity to acknowledge them. By emphasizing a small number of core strengths, Michele avoided argument dilution, focusing attention on her strongest points. And by showing curiosity about times the team had been wrong, she may have motivated them to rethink their criteria. They realized that they weren’t looking for a set of skills and credentials—they were looking to hire a human being with the motivation and ability to learn.

In every human society, people are motivated to seek belonging and status. Identifying with a group checks both boxes at the same time: we become part of a tribe, and we take pride when our tribe wins. In classic studies on college campuses, psychologists found that after their team won a football game, students were more likely to walk around wearing school swag. From Arizona State to Notre Dame to USC, students basked in the reflected glory of Saturday victories, donning team shirts and hats and jackets on Sunday. If their team lost, they shunned school apparel, and distanced themselves by saying “they lost” instead of “we lost.” Some economists and finance experts have even found that the stock market rises if a country’s soccer team wins World Cup matches and falls if they lose.

Once we’ve formed those kinds of stereotypes, for both mental and social reasons it’s hard to undo them. Psychologist George Kelly observed that our beliefs are like pairs of reality goggles. We use them to make sense of the world and navigate our surroundings. A threat to our opinions cracks our goggles, leaving our vision blurred. It’s only natural to put up our guard in response—and Kelly noticed that we become especially hostile when trying to defend opinions that we know, deep down, are false. Rather than trying on a different pair of goggles, we become mental contortionists, twisting and turning until we find an angle of vision that keeps our current views intact.

In an ideal world, learning about individual group members will humanize the group, but often getting to know a person better just establishes her as different from the rest of her group. When we meet group members who defy a stereotype, our first instinct isn’t to see them as exemplars and rethink the stereotype. It’s to see them as exceptions and cling to our existing beliefs.

In ancient Greece, Plutarch wrote of a wooden ship that Theseus sailed from Crete to Athens. To preserve the ship, as its old planks decayed, Athenians would replace them with new wood. Eventually all the planks had been replaced. It looked like the same ship, but none of its parts was the same. Was it still the same ship? Later, philosophers added a wrinkle: if you collected all the original planks and fashioned them into a ship, would that be the same ship?

We found that it was thinking about the arbitrariness of their animosity—not the positive qualities of their rival—that mattered. Regardless of whether they generated reasons to like their rivals, fans showed less hostility when they reflected on how silly the rivalry was. Knowing what it felt like to be disliked for ridiculous reasons helped them see that this conflict had real implications, that hatred for opposing fans isn’t all fun and games.

In psychology, counterfactual thinking involves imagining how the circumstances of our lives could have unfolded differently. When we realize how easily we could have held different stereotypes, we might be more willing to update our views.* To activate counterfactual thinking, you might ask people questions like: How would your stereotypes be different if you’d been born Black, Hispanic, Asian, or Native American? What opinions would you hold if you’d been raised on a farm versus in a city, or in a culture on the other side of the world? What beliefs would you cling to if you lived in the 1700s?

Psychologists find that many of our beliefs are cultural truisms: widely shared, but rarely questioned. If we take a closer look at them, we often discover that they rest on shaky foundations. Stereotypes don’t have the structural integrity of a carefully built ship. They’re more like a tower in the game of Jenga—teetering on a small number of blocks, with some key supports missing. To knock it over, sometimes all we need to do is give it a poke. The hope is that people will rise to the occasion and build new beliefs on a stronger foundation.

Motivational interviewing starts with an attitude of humility and curiosity. We don’t know what might motivate someone else to change, but we’re genuinely eager to find out. The goal isn’t to tell people what to do; it’s to help them break out of overconfidence cycles and see new possibilities. Our role is to hold up a mirror so they can see themselves more clearly, and then empower them to examine their beliefs and behaviors.

The process of motivational interviewing involves three key techniques: asking open-ended questions, engaging in reflective listening, and affirming the person’s desire and ability to change.

Listening well is more than a matter of talking less. It’s a set of skills in asking and responding. It starts with showing more interest in other people’s interests rather than trying to judge their status or prove our own. We can all get better at asking “truly curious questions that don’t have the hidden agenda of fixing, saving, advising, convincing or correcting,” journalist Kate Murphy writes, and helping to “facilitate the clear expression of another person’s thoughts.”*

New research suggests that when journalists acknowledge the uncertainties around facts on complex issues like climate change and immigration, it doesn’t undermine their readers’ trust. And multiple experiments have shown that when experts express doubt, they become more persuasive. When someone knowledgeable admits uncertainty, it surprises people, and they end up paying more attention to the substance of the argument.

Evidence shows that if false scientific beliefs aren’t addressed in elementary school, they become harder to change later. “Learning counterintuitive scientific ideas [is] akin to becoming a fluent speaker of a second language,” psychologist Deborah Kelemen writes. It’s “a task that becomes increasingly difficult the longer it is delayed, and one that is almost never achieved with only piecemeal instruction and infrequent practice.” That’s what kids really need: frequent practice at unlearning, especially when it comes to the mechanisms of how cause and effect work.

Lectures aren’t designed to accommodate dialogue or disagreement; they turn students into passive receivers of information rather than active thinkers. In one meta-analysis, lecturing was especially ineffective in debunking known misconceptions—in leading students to think again. And experiments have shown that when a speaker delivers an inspiring message, the audience scrutinizes the material less carefully and forgets more of the content—even while claiming to remember more of it. Social scientists have called this phenomenon the awestruck effect, but I think it’s better described as the dumbstruck effect. The sage-on-the-stage often preaches new thoughts, but rarely teaches us how to think for ourselves. Thoughtful lecturers might prosecute inaccurate arguments and tell us what to think instead, but they don’t necessarily show us how to rethink moving forward.

I was teaching a semester-long class on organizational behavior for juniors and seniors. When I introduced evidence, I wasn’t giving them the space to rethink it. After years of wrestling with this problem, it dawned on me that I could create a new assignment to teach rethinking. I assigned students to work in small groups to record their own mini-podcasts or mini–TED talks. Their charge was to question a popular practice, to champion an idea that went against the grain of conventional wisdom, or to challenge principles covered in class. As they started working on the project, I noticed a surprising pattern. The students who struggled the most were the straight-A students—the perfectionists. It turns out that although perfectionists are more likely than their peers to ace school, they don’t perform any better than their colleagues at work. This tracks with evidence that, across a wide range of industries, grades are not a strong predictor of job performance.

I believe that good teachers introduce new thoughts, but great teachers introduce new ways of thinking. Collecting a teacher’s knowledge may help us solve the challenges of the day, but understanding how a teacher thinks can help us navigate the challenges of a lifetime. Ultimately, education is more than the information we accumulate in our heads.

Rethinking is more likely to happen in a learning culture, where growth is the core value and rethinking cycles are routine. In learning cultures, the norm is for people to know what they don’t know, doubt their existing practices, and stay curious about new routines to try out. Evidence shows that in learning cultures, organizations innovate more and make fewer mistakes.

Over the past few years, psychological safety has become a buzzword in many workplaces. Although leaders might understand its significance, they often misunderstand exactly what it is and how to create it. Edmondson is quick to point out that psychological safety is not a matter of relaxing standards, making people comfortable, being nice and agreeable, or giving unconditional praise. It’s fostering a climate of respect, trust, and openness in which people can raise concerns and suggestions without fear of reprisal. It’s the foundation of a learning culture. In performance cultures, the emphasis on results often undermines psychological safety. When we see people get punished for failures and mistakes, we become worried about proving our competence and protecting our careers. We learn to engage in self-limiting behavior, biting our tongues rather than voicing questions and concerns. Sometimes that’s due to power distance: we’re afraid of challenging the big boss at the top. The pressure to conform to authority is real, and those who dare to deviate run the risk of backlash.

How do you know? It’s a question we need to ask more often, both of ourselves and of others. The power lies in its frankness. It’s nonjudgmental—a straightforward expression of doubt and curiosity that doesn’t put people on the defensive.

It takes confident humility to admit that we’re a work in progress. It shows that we care more about improving ourselves than proving ourselves.* If that mindset spreads far enough within an organization, it can give people the freedom and courage to speak up.

Research shows that when we have to explain the procedures behind our decisions in real time, we think more critically and process the possibilities more thoroughly.

When we dedicate ourselves to a plan and it isn’t going as we hoped, our first instinct isn’t usually to rethink it. Instead, we tend to double down and sink more resources in the plan. This pattern is called escalation of commitment. Evidence shows that entrepreneurs persist with failing strategies when they should pivot, NBA general managers and coaches keep investing in new contracts and more playing time for draft busts, and politicians continue sending soldiers to wars that didn’t need to be fought in the first place. Sunk costs are a factor, but the most important causes appear to be psychological rather than economic. Escalation of commitment happens because we’re rationalizing creatures, constantly searching for self-justifications for our prior beliefs as a way to soothe our egos, shield our images, and validate our past decisions.

In some ways, identity foreclosure is the opposite of an identity crisis: instead of accepting uncertainty about who we want to become, we develop compensatory conviction and plunge head over heels into a career path. I’ve noticed that the students who are the most certain about their career plans at twenty are often the ones with the deepest regrets by thirty. They haven’t done enough rethinking along the way.*

Psychologists find that the more people value happiness, the less happy they often become with their lives. It’s true for people who naturally care about happiness and for people who are randomly assigned to reflect on why happiness matters. There’s even evidence that placing a great deal of importance on happiness is a risk factor for depression. Why? One possibility is that when we’re searching for happiness, we get too busy evaluating life to actually experience it. Instead of savoring our moments of joy, we ruminate about why our lives aren’t more joyful. A second likely culprit is that we spend too much time striving for peak happiness, overlooking the fact that happiness depends more on the frequency of positive emotions than their intensity. A third potential factor is that when we hunt for happiness, we overemphasize pleasure at the expense of purpose. This theory is consistent with data suggesting that meaning is healthier than happiness, and that people who look for purpose in their work are more successful in pursuing their passions—and less likely to quit their jobs—than those who look for joy. While enjoyment waxes and wanes, meaning tends to last. A fourth explanation is that Western conceptions of happiness as an individual state leave us feeling lonely. In more collectivistic Eastern cultures, that pattern is reversed: pursuing happiness predicts higher well-being, because people prioritize social engagement over independent activities.

When it comes to careers, instead of searching for the job where we’ll be happiest, we might be better off pursuing the job where we expect to learn and contribute the most. Psychologists find that passions are often developed, not discovered.

When my students talk about the evolution of self-esteem in their careers, the progression often goes something like this:

Phase 1: I’m not important
Phase 2: I’m important
Phase 3: I want to contribute to something important

I’ve noticed that the sooner they get to phase 3, the more impact they have and the more happiness they experience. It’s left me thinking about happiness less as a goal and more as a by-product of mastery and meaning. “Those only are happy,” philosopher John Stuart Mill wrote, “who have their minds fixed on some object other than their own happiness; on the happiness of others, on the improvement of mankind, even on some art or pursuit, followed not as a means, but as itself an ideal end. Aiming thus at something else, they find happiness by the way.”

At work and in life, the best we can do is plan for what we want to learn and contribute over the next year or two, and stay open to what might come next. To adapt an analogy from E. L. Doctorow, writing out a plan for your life “is like driving at night in the fog. You can only see as far as your headlights, but you can make the whole trip that way.”


Deng Xiaoping and the Transformation of China

Recently, in an effort to better understand Chinese culture and history, I decided to read a widely acclaimed book on Chinese history from the 1960s to the early 1990s, centered on the leadership of Deng Xiaoping, arguably one of the most respected leaders of the era. In the book, Ezra F. Vogel manages to provide relevant background for the policies chosen by the Chinese government, which would often puzzle Western readers unaccustomed to the local culture. The quotations below serve as a personal reminder of the important policy decisions the CCP and the Chinese government made, decisions that have translated into the country’s magnificent growth in the 21st century.

He realized what some free-market economists did not, that one could not solve problems simply by opening markets; one had to build institutions gradually.

In short, Deng faced a tall order, and an unprecedented one: at the time, no other Communist country had succeeded in reforming its economic system and bringing sustained rapid growth, let alone one with one billion people in a state of disorder.

He had disciplined himself not to display raw anger and frustration and not to base his decisions on feelings but on careful analysis of what the party and country needed.

And although he welcomed what he considered constructive suggestions to resolve problems, he bristled when foreigners and political dissidents criticized the party. He vividly remembered the chaos of the civil war and the Cultural Revolution and believed that social order in China was fragile; when he judged that it was at risk, he would respond forcefully.

In 1978, because of the Soviet Union’s aggressive behavior following the American withdrawal from Vietnam, Western countries were receptive to helping China loosen its ties with the Soviet Union. With the global expansion of trade that followed, China had access to new markets and advanced technologies—Japan, Taiwan, South Korea, Hong Kong, and Singapore—and nearby examples for how latecomers to the international scene could modernize quickly.

While Deng was studying in Moscow, the Soviet Union had not yet built its socialist structure; it was still operating under the New Economic Policy (NEP). Under the NEP, independent farmers, small businesspeople, and even larger businesses were encouraged to prosper while the socialist economy was beginning to develop heavy industry. Foreigners, too, were invited to invest in the Soviet Union. Deng believed, as did others at that time, that such an economic structure—whereby private enterprise was allowed and foreign investment was encouraged, all under Communist Party leadership—promoted faster economic growth than could be achieved in capitalist economies.17 The fundamentals of the NEP, a market economy under Communist leadership, were similar to those of the economic policies that Deng would carry out when he was in charge of China’s Southwest Bureau in 1949–1952 and those that he would reintroduce in the 1980s.

Deng’s speech to the United Nations was received with an unusually long period of applause. Because of its size and potential, China was seen as a rallying force among the developing countries. The delegates of the developing countries were especially pleased with Deng’s statement that China would never become a tyrant and that if it were to ever oppress or exploit others, then the rest of the world, especially the developing countries, should expose China as a “social imperialist” country and, in cooperation with the Chinese people, overthrow the government.

This was vintage Deng. Paint the broad picture, tell why something needed to be done, focus on the task, cover the ideological bases, and seek public support for replacing officials who were not doing their jobs.

If people did not perform their jobs, they were to be fired. They should “shit or get off the pot” (buyao zhan maokeng bu lashi).58

Deng went on to explain that it would not do to take what Mao did on one occasion and to make that the explanation for something Mao did in a different place and time. Mao himself admitted he made errors; anyone who does things makes mistakes. If what a person did was 70 percent correct, that is very good. If after my death people say that what I did was 70 percent correct, Deng said, that would be quite good.

Deng thought it was a terrible waste to send young intellectuals off to do physical labor when they should be advancing Chinese science. Although he did not use the term, in fact he believed in a meritocratic elite. He sought to attract the best and the brightest and to provide the conditions that would allow them to achieve the most for China.

Wang Dongxing had exploded just a week earlier when an article entitled “Pay According to the Work Performed” (anlao fenpei) had appeared, demanding to know which Central Committee had authorized that article (only later did he find out that Deng Xiaoping and his staff had supported it).

Marshal Ye believed deeply that the errors of the Great Leap Forward and the Cultural Revolution had been caused by the excessive concentration of power in the hands of one person. He urged both Hua Guofeng and Deng to work together in leading the party and the country. When Ye met with Deng, Deng agreed that they should strengthen the collective leadership and limit the publicity given to a single person.

People, he said, must be allowed to express their views about the real situation. “Centralism can be correct only when there is a full measure of democracy. At present, we must lay particular stress on democracy, because for quite some time . . . there was too little democracy. . . . The masses should be encouraged to offer criticisms. . . . There is nothing to worry about even if a few malcontents take advantage of democracy to make trouble . . . the thing to be feared most is silence.” Deng did not then or at any other time advocate unlimited free speech. In fact, by November 29, a few days after some people began posting their views on a wall not far from Tiananmen Square, Deng had already stated that some opinions posted on “Democracy Wall” were incorrect.

Deng expressed the prevailing view at high levels that China’s two huge disasters, the Great Leap and the Cultural Revolution, were caused by a system that allows one person to dominate without any input from other voices. China therefore needed to develop a legal system so that a single individual, no matter how able, will not dominate. If laws are initially imperfect and incomplete, they can be made fair and just, step by step, over time.

Deng declared that the theory of collective responsibility had meant, in practice, “that no one is responsible.” He advocated assigning responsibilities to individuals and acknowledged that to do so, one must also give individuals power.

On November 26, the day after Hua Guofeng addressed the work conference and publicly backed away from the “two whatevers,” Deng Xiaoping told Sasaki Ryosaku, the head of the Japanese Democratic Socialist Party, “The writing of big-character posters is permitted by our constitution. We have no right to negate or criticize the masses for promoting democracy and putting up big-character posters. The masses should be allowed to vent their grievances.”5 He rhetorically asked, “What is wrong with allowing people to express their views?”6 In addition, Marshal Ye and Hu Yaobang both expressed support for the people posting their opinions.

During his visit, Deng not only saw things that previously he had only read about; he wanted to study how Japanese organized workers to maximize their dedication and efficiency, which he summed up as “management.” From his trip he concluded, “We must firmly grasp management. Just making things isn’t enough. We need to raise the quality.”36 A century earlier, Chinese patriots had insisted on retaining the “Chinese spirit” while adopting Western technology. By using the neutral term “management” to refer to studying Western ways, and by keeping his unwavering commitment to socialism and the Communist Party, Deng allowed the introduction of far more than technology while reducing the resistance of Chinese conservatives. Indeed, Deng argued that socialism could also use modern management, and the Communist Party could champion it.

Deng asserted that the United States and Japan could make a contribution to world peace if they urged Taiwan to negotiate with Beijing and if the United States reduced arms sales to Taiwan. He told Carter that Beijing would go to war over Taiwan only if, over a long period of time, Taiwan refused to talk with Beijing, or if the Soviets became involved in Taiwan.77

Mao had talked of how a single spark could set off a prairie fire of revolution, but China after 1979 underwent a revolution far greater and longer lasting than the one Mao began. This massive revolution ignited from many sources, but no single spark spread more rapidly than the one resulting from Deng’s visit to the United States.

When Fallaci asked about the mistakes of the Great Leap Forward, Deng replied that they were not Mao’s alone; rather, they were mistakes for which all those who had worked with Mao must share the blame.45 When she inquired about Mao’s selection of Lin Biao, Deng said that it was feudalistic for a leader to choose his own successor. Deng’s implication was unmistakable: it was also wrong for Mao to have chosen Hua Guofeng as successor. And when asked how experiences like the Cultural Revolution could be avoided in the future, Deng explained that party leaders were looking into restructuring China’s institutions in order to achieve socialist democracy and uphold socialist law.

Deng took no notes when he read. Documents were to be delivered to his office before 10 a.m., and he returned them the same day. He left no papers around his office, which was always clean and neat.

Deng did reserve the right to make final decisions, but he was ordinarily not a micromanager; rather he set the agenda and let Hu and Zhao carry out his directives as they thought best. In making the final decisions, Deng did consider the overall political atmosphere and the views of other key leaders. He was authoritarian and bold but in fact he was constrained by the overall atmosphere among Politburo members.

Deng embraced the notion of “inner-party democracy,” by which he meant that leaders would listen to “constructive opinions” to reduce the danger of making serious errors. But once a decision was made, party members, following “democratic centralism,” had to implement it.

The “cat theory”—“it doesn’t matter if the cat is black or white as long as it catches the mouse”—was a creative way of winning further support for diminishing the importance of Mao’s ideology; it suggested that doing what worked was more important than following a particular ideology. If Deng had simply said “ideology is unimportant,” he would have provoked enormous controversy, but his “cat theory” made people smile (in fact, some entrepreneurs even made and sold decorations with the cat theme). Another saying, “some people can get rich first,” helped lower the expectations of many who hoped to get rich quickly after the reforms, and helped disarm those who might feel envious of those who prospered before the benefits of reform had reached everyone.

For Deng, being a successful leader meant not just determining the correct strategic direction for the long run, but also knowing how to shape the atmosphere and how to time his bold steps so that they occurred when other officials and the public were ready to jump on board.

As Guangdong officials put it, “Beijing has its policies and we have our counter-policies” (shang you zhengce, xia you duice).

Guangdong’s progress cannot be explained simply by “opening markets,” for many countries with open markets did not achieve the progress that Guangdong made. Instead, in Guangdong, a Communist organization that less than a decade earlier had engaged in class warfare became an effective vehicle to promote modernization. The party provided overall discipline and encouraged study and competition, and Hong Kong and Japanese enterprises were quick to offer assistance. The special policy for Guangdong and Fujian and the unique leeway given to the SEZs made these areas into incubators for developing people who would be able to function well in modern factories, stores, and offices in cosmopolitan settings. Many of the lessons learned from these enterprises spread quickly from Guangdong to other places.

By the fall of 1978, officials in Anhui, cheered by the successful midyear harvests produced by the smaller work groups, reported their successes, setting off arguments with those who supported large-scale cooperatives. At a meeting of the National Agricultural Economic Association held in Suzhou in the fall of 1978, an official from the Anhui Agricultural Policy Research Office had the courage to say that one should not blindly follow the Dazhai model and that the government should not launch so many political movements that interfered with local economic initiatives.55 But on the other side, Chen Yonggui, still vice premier in charge of agricultural affairs, accused Wan Li of secretly promoting individual household farming. Newspaper articles, too, denounced Wan Li for opposing Dazhai and for restoring capitalism. But Wan Li had gained confidence from the successful harvests in the areas that had tried decentralized work assignments and he was rapidly winning support within the party. In November 1978, when criticized by Chen Yonggui, Wan Li, living up to his reputation for bravery, replied: “You say you are speaking from the Dazhai experience; I say Dazhai is an ultra-leftist model. . . . You go your way and I’ll go mine. . . . Don’t impose your views on me and I won’t impose mine on you. As for who is right and who is wrong, let’s see which way works best.”

After household farming was introduced, grain production continued to rise rapidly. Indeed, as early as 1984 grain production surpassed 400 million tons, compared to 300 million tons in 1977. After 1981, the growth in the grain supply led the government to encourage farmers to diversify into vegetables, fruits, and industrial crops. Official estimates of per capita grain consumption rose from 195 kilograms in 1977 to 250 kilograms in 1984, and consumption of pork, beef, poultry, and eggs increased even more sharply.72

The government had been completely unprepared for the huge grain harvest of 1984. As a result, there was not enough warehouse space to store the grain, and some local governments, lacking sufficient funds to purchase all the grain that had been produced, had to give the farmers paper IOUs. Before then, the government, fearing urban unrest, had not passed on to the urban consumer the increases in prices it had been paying the farmers for rice since 1978. This subsidy was a strain on the government budget, and after 1984 the costs were passed on to the urban consumer. On January 1, 1985, the government announced that it was no longer obligated to buy grain produced by the farmers. Because farmers planting their fields in 1985 worried that they might not get full payment for rice, they planted smaller rice crops and grain production consequently dropped 28 million tons, or about 7 percent (which was still 60 million tons more than that produced in 1980, when household farming first began to take hold). It took several years after the 1985 adjustments for grain production to recover to the 1984 levels and to put rural production on an even keel, but by 1989 grain output had surpassed the 1984 peak, and it continued at high levels thereafter.73 By then, there was sufficient rice production so that the government abolished grain rationing and consumers could buy all the rice they needed.

Deng had scored another victory by using his basic approach to reform: Don’t argue; try it. If it works, let it spread.

The conference conclusions supported a dual-price system—that is, one set of prices for items on the state plan and another set of prices that would be more responsive to market changes. State-owned enterprises that met their quotas would be allowed to sell whatever other products they could make at market prices. As a result, many enterprises would likely orient their practices to the market, while still relying on set prices to provide some stability during the transition to increased use of markets. Some World Bank officials criticized the dual-price system because it created opportunities for officials at state companies to purchase goods at state prices and then to make a quick profit by selling them in the market at higher prices. Higher-level Chinese officials, however, felt confident that they could keep the corruption under control with administrative punishments.

The cumulative effect of the new machinery and the new systems introduced by firms based in Japan, Europe, Hong Kong, and (beginning in the late 1980s) Taiwan had at least as much of an influence on economic growth as the system reforms introduced by Beijing officials. The new opening had, in effect, brought about an imported industrial revolution, information revolution, and consumer revolution.

Deng advanced step by step, rather than with a “big bang.” After 1991, Russia had followed the advice of economists who recommended opening markets suddenly, with a “big bang.” In contrast, Deng, with the advice of experts brought in by the World Bank, accepted the view that a sudden opening of markets would lead to chaos. He understood what many Western economists who took institutions for granted did not: that it was vitally important to take the time to build national institutions with structures, rules, laws, and trained personnel adapted to the local culture and local conditions. China did not have the experience, rules, knowledgeable entrepreneurs, or private capital needed to convert suddenly to a market economy.

Deng knew China would face huge adjustment problems from changes wrought by outsiders and from returning students, but he firmly believed that nations grow best when they remain open. Unlike some of his colleagues who feared that China would be overwhelmed by foreigners and foreign practices, Deng was confident that the Communist Party was strong enough to control them. He strongly supported sending officials and students abroad, translating foreign books and articles, and welcoming foreign advisers and businessmen to China. He was prepared to face criticism from those who feared that Chinese lifestyles and interests would be adversely affected by foreign competition. He believed competition from foreign companies would not destroy the Chinese economy but rather stimulate Chinese businesses to become stronger. He also did not worry if a substantial percentage of those who went abroad did not return, for he believed that they too would continue to help their motherland.

In reports to Beijing, all these organizations exaggerated the support for communism in Hong Kong, thus causing Deng and other officials to underestimate the extent to which ethnic Chinese residents in Hong Kong were in fact content with British rule. In fact, most residents feared what China, having just undergone the Cultural Revolution, might do to Hong Kong.

In talks with British officials, Deng vowed that political power after 1997 would be in the hands of the people of Hong Kong. Always focused on training successors, Deng requested that during the remaining fifteen years, Hong Kong leaders in business, education, and culture suggest the names of promising “patriotic” Hong Kong young people who could begin immediately preparing for responsible positions in various fields after 1997, thereby ensuring a smooth handover and continued stability and prosperity.

In a BBC interview before leaving China, she said, “If one party to a treaty or a contract says, ‘I cannot agree to it, I am going to break it,’ you cannot really have a great deal of confidence that any new treaty they make will be honored.” China specialists in the British Foreign Office cringed when she repeated these comments at a press conference in Hong Kong, for they knew that these words would dampen the goodwill with China that they had been working to build. As they expected, China complained, strongly. In the week after the Thatcher visit, the Hong Kong stock market fell 25 percent, and by the end of October, the Hang Seng Stock Index, which had registered 1,300 in June, had fallen to 772.66

The local Communists in Hong Kong, long accustomed to passing on what Beijing wanted to hear, had been repeating the mantra that the residents of Hong Kong were opposed to the imperialists and were eagerly awaiting liberation by the mainland. Even Hong Kong businesspeople, who were always eager to win Beijing’s favor, would report how enthusiastic the people of Hong Kong were about the prospect of Communist leadership. Xu, however, bravely relayed the unpleasant truth: he reported that the people of Hong Kong had a deep mistrust of the Communist Party and sometimes felt doomed.74 He also described the dominant view of Chinese businesspeople in Hong Kong, which was that they respected British administration and the rule of law and doubted that Beijing would be able to provide good leadership. Moreover, many businesspeople in Hong Kong who had fled the mainland soon after 1949 felt they could never again trust the Communists. They had seen how the Communists in the 1950s had betrayed their promises to work with businesspeople who had cooperated with them, by attacking them and appropriating their businesses.75 Disturbed by Xu’s reports, Li Xiannian responded by saying that Beijing’s top priority should be to win over the Hong Kong public.

At the meeting, when Sze-yuen Chung, head of the Hong Kong Executive Council, expressed doubts about the capacity of lower-level Communist officials to manage the complex problems of Hong Kong, Deng snapped back that this view amounted to saying that only foreigners can govern Hong Kong. Such an attitude reflects, he said, the influence of colonial mentality. Deng continued by telling the group that they should seek a better understanding of the Chinese people and of the People’s Republic of China. He assured them that Hong Kong’s capitalist system would be in place for fifty years, and he added that a patriot is one who respects the Chinese nation, supports China’s resumption of sovereignty, and does not want to hurt prosperity and stability in Hong Kong.

In Hong Kong, the basic political and administrative policies would not change for fifty years. He added that Hong Kong had been operating under a system different from that of Britain and the United States, so it would not be appropriate to adopt a fully Western system with three separate branches of government. He then articulated the kind of personal freedoms the public should expect: After 1997, China would still allow people in Hong Kong to criticize the Communist Party but if they should turn their words into action, opposing the mainland under the pretext of democracy, then Beijing would have to intervene. Troops, however, would be used only if there were serious disturbances.92 Deng’s speech provided the kind of straight talk that the people of Hong Kong were hoping for. It eased their concerns, even as it effectively ended all discussion of establishing three separate branches of government.

After the Basic Law was announced, it was received warmly in both China and Hong Kong.

Only four months after the signing, however, the optimism in Hong Kong was destroyed by the news of the tragedy in Tiananmen Square. To Hong Kong people, the specter that they would soon be ruled by a regime that could shoot its own people on the streets was terrifying. On June 4, 1989, out of sympathy for the students protesting for freedom in Beijing and out of concern for their own future, an estimated one million of Hong Kong’s five million people took to the streets. The demonstrations were far larger than any in the history of Hong Kong. After June 4, thousands of Hong Kong people who could afford it purchased foreign property, sent their children abroad to study, and took out foreign citizenship. Sino-British relations, which had been proceeding smoothly prior to June 4, deteriorated rapidly.

China’s problems with Tibetans erupted after 1955 when provincial leaders throughout China were told to accelerate the collectivization of agriculture. Mao said that “democratic reforms,” including collectivization, would be implemented among minority peoples if conditions seemed right, but they were not yet to be implemented in Tibet itself. The two million Tibetans outside Tibet proper were largely living in Sichuan, Yunnan, Qinghai, and Gansu. The leaders of Sichuan put together a plan not only to collectivize agriculture rapidly, but also to start “democratic reforms” in Sichuan’s Tibetan and other minority areas. Collectivization that was launched in the Tibetan areas in Sichuan at the beginning of 1956, including the taking over of some monasteries, quickly precipitated a serious and bloody uprising in Sichuan’s Tibetan areas, especially among the Khampa Tibetans, who constituted a large portion of Tibetans in Sichuan. The uprising was bloody because virtually every family in the Khampa Tibetan areas in Sichuan, where blood vengeance and raiding were endemic, had modern firearms and knew how to use them. After initial successes, the Khampas were overwhelmed by the much stronger PLA; in 1957–1958, they fled to Tibet proper with their guns. In 1957 at the height of the Cold War, the CIA began to train a small number of Khampas in Colorado and then dropped them back into Tibet to collect intelligence.106 Beijing directed the Dalai Lama to send the Khampas back to Sichuan, but the Dalai Lama refused. India had earlier invited the Dalai Lama to settle in India, and in March 1959 he led many of the most militant Tibetans across the mountains into India. Other Tibetans followed over the next two to three years.

Chinese leaders, frustrated by the growing resistance of Tibetan monks as a result of the Dalai Lama’s success abroad, have used whatever leverage they have with foreign groups to isolate the Dalai Lama. Some foreigners have yielded to Chinese pressures, but overall, Chinese efforts have increased foreign attention to the Dalai Lama and strengthened foreign criticism of China. In Tibet, the growing resistance of monks caused Chinese officials to fortify their security forces and to exercise stricter control over monasteries.

By the middle of the 1980s a tragic cycle had emerged that continues to this day: The Dalai Lama’s popularity abroad emboldens local Tibetans to resist, leading to a crackdown by Beijing. When foreigners learn of the crackdown, they complain, emboldening Tibetans to resist, and the cycle continues. But the Tibetans and Han Chinese both recognize there is a long-term change that began with the opening of Tibet to outside markets in the mid-1980s and the input of economic aid to Tibet: an improvement in the standard of living and a decline of economic autonomy. In the 1950s outsiders settling in Tibet were mostly Han party officials and troops sent in by Beijing. After the mid-1980s settlers from the outside were overwhelmingly merchants who went to take advantage of economic opportunities generated by inputs of Chinese economic assistance to Tibet; many were members of Hui or other minorities from nearby poor provinces. Almost no outsiders settled in Tibetan villages, but by the late 1990s outsiders were already threatening to outnumber Tibetans in Lhasa.116 With more Tibetan youth learning Mandarin and receiving a Chinese education to further their careers, both Tibetans and Chinese see that the long-term trend is toward Tibetans absorbing many aspects of Chinese culture, and becoming integrated into the outside economy, while not giving up their Tibetan identity and loyalty.

Cambodian leader Pol Pot, who by the summer of 1978 had begun to realize the seriousness of the Vietnamese threat, asked Deng to send Chinese “volunteers” to Cambodia to resist the invasion of the Vietnamese, as Mao had done in Korea to resist the invasion of the South Koreans and the Americans. Deng was ready to cooperate with Pol Pot despite the atrocities he had committed against his own people and the vehement opposition these acts had caused in the West because Deng judged him to be the only Cambodian leader capable of offering significant resistance to Vietnam.

Meanwhile, China used these continuing border skirmishes—and occasional larger conflicts involving entire Chinese divisions—to train its troops. By the 1980s units from most of the infantry armies in China had been rotated to the Vietnam border to take part in the border skirmishes. As military analysts noted, assigning Chinese troops to fight against some of the most experienced ground troops in the world provided excellent combat training. The presence of large numbers of Chinese troops also made the Soviets cautious about sending additional aid to the Vietnamese.

In addition to negotiating with the Soviet Union, Deng also sought to reduce the risk of Soviet and Vietnamese advances by involving the United States. Deng knew that the United States was then in no mood to engage in a land war in Asia; what better way to ensure that the Soviets would not dominate the seas near Vietnam than to have a large American oil company conduct oil explorations there?

Deng’s reactions to the 1980 Polish strikes resembled Mao’s reactions to the Hungarian and Polish uprisings in 1956. First, allow more open criticism to help correct some of the worst features of the bureaucracy and to win over those critics who felt some changes were needed. But if hostility to the party threatened party control, clamp down. Having noted how Mao’s virulent anti-rightist campaign in 1957 had destroyed the support of intellectuals, Deng in 1980 tried to walk a fine line between curbing expressions of freedom and retaining intellectuals’ active support for modernization.

In March 1985, during this atmosphere of greater freedom, leading investigative journalist Liu Binyan, who three decades earlier had been labeled a rightist, published The Second Kind of Loyalty, in which he contrasted party members who automatically accept orders from higher party officials with party members with a conscience who serve the ideals of the party. Liu Binyan’s book hit a deep nerve among those who had agonized about whether to carry out party policy during the Great Leap and the Cultural Revolution. It also had a tremendous influence on idealistic Chinese youth who sought independence from the party. Deng, who always believed in the importance of party discipline, regarded Liu’s message as a challenge to party leadership, and as a result in 1987 Liu was expelled from the party.

In dealing with protests, Deng, like other Chinese Communist leaders, tried to maintain tight control while alleviating the cause of the complaints. As news of demonstrations spread abroad, Deng continued to explain to the Chinese public that the socialist system of public ownership was superior to bourgeois democracy; he pointed to the capitalists’ exploitation of workers and to the difficulties of making timely decisions in countries where there was a separation of powers among the executive, legislative, and judicial branches. But Deng also was determined to stay ahead of the popular movements by introducing timely political reform. He therefore directed that China undertake serious study of political systems to determine which systems endure for the long term, which systems collapse, and why.

The 1986 demonstrations were the first large student demonstrations in China since April 1976, when students had taken to the streets to honor Zhou Enlai and support Deng Xiaoping. On May 29, 1987, some weeks after these Chinese student demonstrations subsided, Zhao Ziyang explained to Singapore’s Deputy Prime Minister Goh Chok Tong that when China opened up, its students, who had had no previous contact with the outside world, could not judge what was good or bad. When they saw that the United States and Japan were more advanced, some came to the wrong conclusion, advocating total Westernization for China, without understanding that this was not possible in China where conditions were so different. Zhao admitted that it was not surprising some students had come to this conclusion, because the socialist system before 1978 did have its failures. But Zhao blamed the loosening of party controls for the demonstrations.74 He did not mention the name of the official who was considered responsible for this loosening: Hu Yaobang.

When launching his four modernizations, Deng had warned that some would get rich first, but in the view of most students, the people getting rich first were the least deserving—greedy individual entrepreneurs and corrupt officials—not the morally upright government employees working in the national interest after years of hard study. Students often lived in poor conditions, crowded eight to a small room. Able students who had sacrificed for years to be among the very small percentage to pass the examinations to enter good schools were outraged that the children of high officials received better opportunities and lived in a grander style because of their connections.75 Furthermore, university graduates were then not yet free to choose jobs; they were assigned jobs by the state based in part on reports compiled by the political guides who lived with the students. Many students felt they had no choice but to ingratiate themselves to these political guides, who often appeared to them to be arbitrary, arrogant, and poorly educated.76

On December 30, 1986, Deng summoned Hu Yaobang, Zhao Ziyang, Wan Li, Hu Qili, Li Peng, and others and announced to them that it was necessary to end the permissiveness toward the student movement. He told them, “When a disturbance breaks out in a place, it is because the leaders there didn’t take a firm clear-cut stand. . . . It is essential to adhere firmly to the Four Cardinal Principles; otherwise bourgeois liberalization will spread unchecked.” Hu Yaobang, aware that he was being held responsible for this lack of a “clear-cut stand,” knew that it was time to submit his resignation.

Deng also aimed to counter the broader appeal of Western ideals such as humanism, freedom, and democracy that in his view were being used to challenge the ultimate authority of the party.

The student movements that senior leaders had taken part in before 1949 were well organized, with thought-through plans and agenda, and by 1949, the student leaders had worked together for many years. Students in the late 1960s had experience as Red Guards. But the tight controls in the decade before 1989 had prevented the growth of an independent organized student movement. In 1989 the students who came together did not have any experience in organizing. Articulate orators emerged as leaders, but, lacking organization, an agenda, and procedures for ensuring compliance, they had no basis for negotiating with political leaders on behalf of other students.

Urban residents did not join in restraining the demonstrators, for they sympathized with their complaints. Even some older intellectuals who tried to keep the students from taking radical actions in fact admired the students for boldly expressing views that they themselves, beaten down by years of political pressures, were afraid to express. What began as an unplanned peaceful outpouring of mourning for Hu Yaobang was transformed into parades, political forums, campouts, angry protests, hunger strikes, and clashes that spiraled out of control.

Student demonstrators wanted improvements in their living conditions and they were upset that they were receiving fewer economic rewards for their ability and hard work than were uneducated entrepreneurs.

Chinese leaders, for their part, could see that foreign attention and support encouraged the protestors. They found it difficult to believe that Chinese citizens could be that angry at the leadership and found it easy to believe that the protests were being controlled behind the scenes by domestic and foreign “black hands.” Stories and rumors of such “black hands” circulated widely among high officials and were used by the conservatives to push Deng to take stronger action.

As students invoked the memory of Hu Yaobang to advance the cause of freedom and democracy, the parallels between the April 5, 1976, demonstrations (to mourn Zhou Enlai) and the April 1989 demonstrations (to mourn Hu Yaobang) were striking enough to inspire the demonstrators and to worry the Chinese leaders. The demonstrations in 1989 were taking place in the very same place as the April 1976 “Tiananmen Incident.” Like Zhou Enlai, Hu Yaobang had fought to protect the people and had died a tragic death. In both 1976 and 1989, the public was outraged that a man whom they revered had not been treated with more respect. In 1976 the demonstrators had taken advantage of the occasion to attack the Gang of Four. Now, was it not possible to use the occasion to criticize Deng Xiaoping and Premier Li Peng? By the fall of 1978, too, those arrested in the spring of 1976 had been rehabilitated and called patriotic. In the same way, was it not possible that the demonstrators in 1989 would later be called patriotic as well? Among those Chinese who hoped for a more humane government, Hu Yaobang had replaced Zhou as the great hero of the time.

Most Chinese students in the late 1980s were less concerned about political freedoms than about their personal freedoms, such as the ability to choose their own jobs and to escape from their “political guides.” After already having proved their talent and dedication by preparing for the difficult university entrance examinations, they felt entitled to pursue whatever jobs they wanted. But in 1989, with a shortage of trained graduates in key industries and government offices, government policy still mandated that graduates be assigned their jobs. Since one’s job assignment was based in part on what the political guides who lived with the students wrote in the “little reports” in each student’s secret records, the political guides became the symbol of government surveillance. The political guides were rarely as well educated as the students on whom they were reporting; some were suspected of favoritism and flaunted their authority to influence a student’s future. Many cosmopolitan, independent-minded students detested the constant worry about pleasing them. “Freedom,” to them, meant eliminating these political guides and being able to choose their jobs and careers on their own.

Party and government workers, state enterprise employees, and others with fixed salaries were furious to see rich private businesspeople flaunting their material wealth and driving market prices higher, threatening the ability of salaried workers to pay for their basic food and clothing needs. The problem was exacerbated by corruption: township and village enterprise workers were enriching themselves by siphoning off needed materials and funds from state and public enterprises; independent entrepreneurs were making fortunes, in part due to government loopholes; and “profiteering officials” were finding ways to use society’s goods to line their own pockets as the incomes of law-abiding officials stagnated.

Protesting students, furious at “profiteering” officials, demanded that these officials’ incomes and expenses be revealed, along with the number of villas they owned and the sources of their children’s money.7 In 1966 many children of high officials had joined the Red Guards against those who had “taken the capitalist road,” but in 1989 few children of high officials joined the protestors. Instead they were under attack, along with their parents, for the privileges they enjoyed as a result of turning their powerful positions into sources of profit in the new market economy.

Sympathy for the students was so widespread that Li Peng had difficulty retaining support of lower-level officials for the crackdown. Hu Qili, the Politburo Standing Committee member who supervised propaganda work, explained to his fellow officials that many newspaper reporters were upset because their articles about what was actually happening in the square were not being published. University officials who were told to quiet down the demonstrations dutifully passed along the message to the students, but for many their hearts were not in it.23 Li Peng could not even count on the official media to support him. For several days no newspapers of any kind appeared. On one national television station, reporters describing what was taking place in the square were interrupted, and for a brief time the picture went dark and the voiceover simply stopped. One day, an announcer said, “There is no news today.”24 After June 4, the head of the Propaganda Department and the editor of People’s Daily, who were considered too sympathetic to the students, were both removed from their positions.

The world press, assembled in Beijing to cover the reconciliation between China and the Soviet Union, found the student movement spellbinding; indeed, the dramatic events on the square quickly eclipsed the Gorbachev visit as the center of media attention. For foreign reporters, it was impossible not to get caught up in the idealism and enthusiasm of the students, who were far more open than Chinese had previously dared to be. With a vast international audience watching, the students grew even more confident that the PLA would not attack. Some, recognizing an opportunity to present their case to the world, assigned English-speaking demonstrators to the outside columns of the marchers, so they could tell the world about their desire for freedom and democracy and the need to end high-level corruption. A few persistent foreign reporters, trying to maintain balance, reported that most students in fact knew little about democracy and freedom and had little idea about how to achieve such goals.

Outside the Politburo, a group of retired liberal officials on the Central Advisory Commission—including Li Chang, Li Rui, Yu Guangyuan, and Du Runsheng—gathered to make final arrangements for releasing a declaration that the student movement should be declared patriotic. And early the next morning, with his back to the wall, Zhao called Deng’s office, hoping that if he could meet privately with Deng, he might be able to persuade him not to bring in the troops.

In Hungary, for example, national leaders had made concessions that had only led to further demands. If Chinese leaders were to yield again, China would be finished. In Shanghai, Deng added, Jiang Zemin had successfully restored order in 1986 by taking a tough, top-down approach, closing down the World Economic Herald for failing to follow directions (which had helped calm student demonstrations there). Deng believed that a similar steely resolve was needed now. But at present, Deng concluded, the police in Beijing were insufficient to restore order: troops were needed. These troops would have to be moved in quickly and decisively, and for the time being, plans for their deployment needed to remain secret.4 When some in the room expressed worries that foreigners would react negatively to any use of force, Deng replied that swift action was required and the “Westerners would forget.”5

Li Peng and Yao Yilin immediately supported Deng’s views, and although Hu Qili raised some concerns, only Zhao Ziyang clearly disagreed. When Zhao spoke up, he was reminded that the minority must follow the lead of the majority. Zhao replied that as a party member he accepted this, but he still had some personal reservations.6 As general secretary, Zhao realized that he would be expected to announce the imposition of martial law and then to oversee its implementation. He feared that the decision to bring in the military, even if unarmed, would only inflame the conflict.

Immediately after the meeting with Deng, Zhao asked his assistant, Bao Tong, to prepare his letter of resignation. Zhao knew that he could not bring himself to implement martial law and that this decision would mean the end of his career, but he also was confident that his decision would place him on the right side of history.

Neither the students in the square nor the high officials anticipated what happened next: the people of Beijing overwhelmed and completely stalled the 50,000 troops coming in from the north, east, south, and west, on six major and several minor routes. In his May 20 diary entry Li Peng simply noted: “We had not expected great resistance” and he then went on to record that troops everywhere had been stopped. Some troops had tried to enter Tiananmen Square by subway, but the subway entrances were blocked. Some had attempted to come in by suburban trains, but people lay on the tracks. In one instance, two thousand troops coming from some distance managed to arrive at the train station, but as soon as they got off the train, they were surrounded and unable to move.15 Cell phones were not yet available, but people used regular phones to call acquaintances, and those with walkie-talkies set themselves up at key crossings to warn of the arrival of troops so that people could swarm to attempt to stop them. People organized motorcycle corps to speed ahead and carry news of the troops’ movements as they entered Beijing. Some officials blame Zhao Ziyang’s assistant Bao Tong for leaking to the student protesters the plans for how and where the troops would arrive, but even if Bao Tong were a brilliant organizer, he could not have been able to alert or organize the vast throngs that took to the streets.

In his diary entry of May 22, Li Peng acknowledged that the troops were unable to move for fifty hours. He also reports that Deng was worried that the “soldiers’ hearts may not be steady” (junxin buwen). For Deng, this became the crucial issue. Would the soldiers maintain order when so many young people opposed them? Might the soldiers be influenced by the students and lose their determination to impose discipline? Some soldiers appeared weary and hungry.18

Melanie Manion, a perceptive Western scholar who was there at the time, explained Deng’s rationale. In her view, it was “highly probable that even had riot control measures cleared the streets on June 3, they would not have ended the protest movement. . . . The protestors would have retreated only temporarily, to rally in even greater force at a later date . . . the force used on June 4 promised to end the movement immediately, certainly, and once and for all.”34 Deng’s family reported that despite all the criticism he received, he never once doubted that he had made the right decision.35 Many observers who saw the dwindling numbers in Tiananmen Square toward the end of May believe it may have been possible to clear it without violence. But Deng was concerned not only about the students in the square but also about the general loosening of authority throughout the country, and he concluded that strong action was necessary to restore the government’s authority.

In explaining his rationale for sending in the troops, Deng acknowledged that political reform was needed, but he was firm about maintaining the four cardinal principles: upholding the socialist path, supporting the people’s democratic dictatorship, maintaining the leadership of the Communist Party, and upholding Marxist–Leninist–Mao Zedong Thought.

The leaders had expected some resistance from demonstrators on June 2, but they had underestimated the strength of the opposition: Chen Xitong reported that people “surrounded and beat soldiers. . . . Some of the rioters even seized munitions and military provisions. Offices of the Central Government and other major organs came under siege.” Li Peng was so distraught at the scale and determination of the resistance that for the first time he used the term “counterrevolutionary riot,” indicating that those resisting would be treated like enemies.

Students of this generation, as well as the following generations, took away from their tragic experience the lesson that direct confrontation with the leadership would likely cause a reaction so forceful that it was not worth the costs.

The tragedy in Tiananmen Square evoked a massive outcry in the West, far greater than previous tragedies in Asia of comparable scale elicited.59 For instance, on February 28, 1947, as the Guomindang took over Taiwan, the Guomindang general Chen Yi killed off thousands of the most prominent local leaders so as to eliminate any local leader who might have resisted the Guomindang. In Taiwan the incident embittered relations between “locals” and “outsiders” for decades, but it received little attention abroad. In 1980, too, Korean president Chun Doo Hwan led a bloody crackdown during which he slaughtered far more people than were killed in Beijing in 1989 in order to eliminate local resistance in Kwangju. Yet the Kwangju events were not covered by Western television, and global condemnation of the South Korean leaders did not compare with the condemnation of the Chinese leaders after the Tiananmen tragedy.

In the aftermath of the Tiananmen incident, businesspeople, scholars, and U.S. government officials who believed that U.S. national interests required working with the Chinese government were vulnerable to criticism for cooperating with the “evil dictators” in Beijing. As the Cold War was coming to a close, many outspoken U.S. liberals were arguing that our policies should reflect our values, that we should not coddle dictators but instead should stand on the side of democracy and human rights. And what better way to display Western commitments to these ideals than to condemn those responsible for the Tiananmen crackdown? After June 4, then, Deng Xiaoping was confronted not only by disaffected youth and urban residents in China, but also by Western officials who espoused the same values as the Chinese demonstrators.

For the Westerners, the killing of innocent students protesting for freedom and democracy in Beijing was a far worse crime than the decisions of their countries that had brought about the deaths of many more civilians in Vietnam, Cambodia, and elsewhere. Western human rights groups began lecturing Chinese about freedom and regard for human life. High-level Western officials stopped visiting China, and restrictions were placed on the export of technology, especially military technology. Foreign trade and tourism suffered.

Deng began by expressing his sorrow over the deaths of the soldiers and police who had died while heroically defending the interests of the party and the people during the struggle. He said that given the global atmosphere and the environment in China, such conflict was inevitable. It was fortunate, Deng said, that the conflict had occurred when many experienced senior military leaders—men who had the strength and courage to resolve the issue—were still around. He acknowledged that some comrades did not understand the need for their action, but he expressed confidence that eventually they would come to support the effort. Difficulties arose, Deng claimed, because some bad people who had mixed with students and onlookers had the ultimate goal of overthrowing the Communist Party, demolishing the socialist system, and establishing a bourgeois republic that would be the vassal of the West.

Deng began the meeting by reminding them that, as he had often declared in the past, one of his final responsibilities would be to establish a mandatory retirement system, so that aged officials would automatically pass on their responsibilities to younger leaders. Deng expressed the view to his assembled colleagues that the lack of a mandatory retirement age had been a critical weakness in the system, not only in Mao’s later years but in imperial days as well.

Particularly devastating to the Chinese and to Deng personally was the growing mass movement in Romania against China’s friend Nicolae Ceaușescu and his wife that culminated on December 25, 1989, with their execution. Ceaușescu was the only Eastern European leader to order troops to fire on civilians, and no Chinese leader could avoid seeing the parallels with the recent military action in Beijing just seven months earlier. Indeed, the sudden turn of events in Romania that led to his execution caused Chinese leaders to wonder if they were immune to the fate of Ceaușescu, who had earlier expressed approval of Beijing’s June 4 crackdown.

Like Deng in 1957, Jiang affirmed that democracy is a worthy goal and that the amount of democracy achieved will depend on the stability of the political situation in China.

In February 1990, as the Soviet party plenum discussed giving up the party’s monopoly over political power, the People’s Daily printed nothing. Instead, on the day the plenum ended, without mentioning the Soviet Union, the People’s Daily announced, “In China, without the strong leadership of the Chinese Communist Party, new turmoil and wars would surely arise, the nation would be split, and the people, not to mention state construction, would suffer.” The following day the paper carried the news that the Moscow plenum had agreed to give up the party’s monopoly of power.54 As the Soviet Union was falling apart, some Chinese intellectuals were as joyful as many Westerners. Some even repeated to trusted friends one of the great Chinese slogans of the 1950s when China was introducing Soviet-style industrialization, now used with a very different connotation: “The Soviet Union’s today is our tomorrow.”

The collapse of communism in Eastern Europe and the USSR had revealed that youth in the Communist world had lost faith in Marxism-Leninism, the socialist economy, and Communist orthodoxy. Deng and his fellow party elders realized that political training in Marxism-Leninism or even Maoism could no longer be expected to appeal to the sensibilities of Chinese youth. Nor, even if Deng had personally supported it, would class struggle against the landlord and bourgeois classes resonate with the youth as it had at the height of the Mao era.

What should replace Marxism-Leninism and Maoist ideology to win the hearts and minds of China’s youth? The answer seemed obvious: patriotism.67 Patriotic education that emphasized the history of the century of humiliation by foreign imperialists had been the main theme of propaganda in the 1940s, and it had never disappeared. It had, however, played only a secondary role as China had built up socialism beginning in the 1950s, and it had languished in the 1980s as Deng tried to build closer relations with the West. Yet after 1989, when Western countries were imposing sanctions, there was a widespread patriotic reaction against foreign sanctions. To many Westerners, sanctions on China were a way of attacking Chinese leaders who used force on June 4, but to Chinese people the sanctions hurt all Chinese. Patriotic “education” linked nationalism to the Communist Party, as the Communists in World War II appealed to patriotism and nationalism to rally support against the Japanese. Conversely, criticism of the Communist Party was ipso facto unpatriotic.

The sanctions imposed by foreign countries and the criticism of foreigners that followed June 4 provided Deng and his colleagues with a useful vehicle for enhancing this patriotism. Within weeks after the Tiananmen tragedy, Deng began emphasizing his patriotic message. The Propaganda Department skillfully publicized anti-Chinese statements by foreigners that caused many Chinese, even students who advocated democracy, to feel outraged. The efforts by foreign countries to keep China out of the GATT (General Agreement on Tariffs and Trade, which in 1994 was replaced by the World Trade Organization) were publicized so as to focus Chinese anger on the prejudices of foreigners toward China. The refusal by foreign countries to supply modern technology was framed as an effort to unfairly prevent the Chinese from sharing in the fruits of modernization. Foreign criticism of China for its treatment of Tibetans, Uighurs, and other minority groups was presented to the Chinese public as part of an organized effort by foreign powers to weaken China. The West’s support for Taiwan and resistance to China’s claims to the islands in the South China Sea and the East China Sea were also offered up to the public as examples of efforts to keep China down. These stories and others had their intended effect. In the years after 1989, students who had shouted slogans against the government for corruption and for not granting more democracy and freedom began supporting the government and the party by shouting slogans against foreigners, who they felt were unfairly criticizing China.

He cautioned: “China should maintain vigilance against the right but primarily against the left.”29 In frank talks with local officials, Deng countered his critics who said that the SEZs were capitalistic and controlled by foreigners by saying that only a quarter of the investment came from foreigners. Moreover, Deng said, China had political control over all foreign-owned firms, so it could be certain that they served Chinese interests. Instead of worrying about the current level of foreign involvement, Deng advised, China should increase foreign investment and form more joint ventures: foreign firms pay Chinese taxes and provide local workers with jobs and wages.

He praised local leaders’ success in using markets to further the cause of socialism, and credited socialism in turn for aiding in that success: he said that capitalism could not match the socialist system in terms of focusing on talent to make things happen quickly.

On the issue of governance and freedom, Deng said that the concept of “democratic centralism” was still the “most rational system” and should remain the country’s basic governing principle. Leaders should find ways to encourage people to express their opinions, but once a decision is made, people should follow the collective decision.

Deng’s successors are under pressure for not being more successful in stopping China’s widespread corruption and for not doing more to resolve the problems of inequality. And it may be even harder in the future to combat these problems: given global economic fluctuations, China faces the potential of an economic slowdown before a substantial portion of the population has had the chance to enjoy the benefits of the earlier rapid growth period. To prepare for this possibility, Chinese leaders will have to look beyond fast economic growth for legitimacy and accelerate progress on some of the issues that the public is most concerned about: reducing corruption and inequality, providing a reasonable level of universal medical care and welfare, and finding a way to show that public opinion is being respected in the selection of officials.

Deng, as the first Chinese leader to address the UN General Assembly in 1974, said that China would never become a tyrant and that if it ever oppressed and exploited other nations, the world, and especially the developing countries, should expose China as a “social imperialist” country and, in cooperation with the Chinese people, overthrow the Chinese government. In August 1991, upon receiving the news that Soviet leader Gennady Yanayev had staged a coup against Gorbachev, Wang Zhen sent a telegram to the party center proposing that they lend support for Yanayev’s coup. Deng replied “taoguang yanghui, juebu dangtou, yousuo zuowei.”14 (Incorrectly translated by some Westerners as “avoid the limelight, don’t take the lead, bide your time.” What it means is “avoid the limelight, never take the lead, and try to accomplish something.”) In Deng’s view, China should not get involved in other countries’ domestic affairs.


Key Points from The Cost of Free Money

Another well-written and provocative economics book I read last month was by Paola Subacchi, Professor of International Economics at Queen Mary University of London and a thought leader on global capital flows. In her book, published last year, she argues that unfettered capital flows have caused disruption in emerging markets over the past few decades, and that there is a need for multilateral institutions similar to those of the post-WWII Bretton Woods era. The book touches on inequality, geopolitics and the role of China in the global order. The key points below serve as a personal note for me and my friends to come back to in the future.

As money moves around, it binds the global economy together. Developing countries have enjoyed strong economic growth in the last thirty years by becoming more integrated in the world economy through trade and investment – a process that is referred to as globalisation. We, as consumers, have all benefited from lower prices – but only if we don’t include environmental and human costs in our calculations. But what happens, then, when the world enters into a phase of transition where economic growth no longer lifts all boats,4 where rules become confused, confidence evaporates and politics becomes conflictual?

The Bretton Woods monetary system was designed to limit the scope for domestic policies that are detrimental to other countries, indeed ‘beggar-thy-neighbour’ policies – such as securing an unfair advantage for one’s own country through a competitive devaluation of the exchange rate, at the expense of another country. It also aimed to establish a level playing field for international trade, while at the same time providing the flexibility to pursue domestic interests such as full employment. Under these arrangements, the dollar came to replace sterling as the key international currency and the United States came to replace Great Britain as the leading global power, rendering it responsible for the provision of public goods: finance for development and the global financial safety net.

In desperate need of exchange rate stability, many developing countries have anchored their currencies to the dollar, but in doing so they have tied themselves to the monetary and policy decisions of the United States and consequently run into a whole heap of problems.

Until the end of the 1970s, living standards in the United States grew in line with the growth of the economy, but between 1980 and 2016, 90 per cent of the population’s income grew at a pace that was slower than the national average, with that of workers in the bottom 40 per cent of the income distribution growing by 0.3 per cent a year. Over more than three decades, pre-school teachers and carers saw their annual income grow from $26,400 to just $29,800. On the other hand, those in the top 0.1 per cent of the income distribution – for example, an investment banker or a corporate lawyer – almost quadrupled their post-tax income.

The post-Second World War golden age was the result of extraordinary economic and political conditions. These conditions supported the expectations that progress could only be linear and ascendant – that is, that social, economic and physical conditions could only improve. In the opening of this chapter I identified four trends – demographics, health conditions, economic growth and geopolitics – that, through their interplay, came to shape the golden age.

The postwar years were blessed with exceptional economic activity that resulted in strong and sustained economic growth. In the two decades after the war, the advanced economies, including Western Europe and the United States, saw their annual real gross domestic product (GDP) grow at a steady average rate of 5 per cent.11 Unemployment was low. By the mid-1960s, the average unemployment rate across Europe was 1.5 per cent – effectively a situation of full employment. The United States also saw a significant improvement, with the unemployment rate dropping to around 4 per cent at the same time.

The overlapping of these trends meant that, for the first time ever in human history, people who were not endowed with wealth and capital, exceptional talent or even just sheer luck, could aspire to a decent life for themselves and their children. For many people from blue-collar backgrounds, a white-collar middle-class future was no longer a wild dream.

Brands have become global because the world economy has become global. The world has changed beyond recognition in the last forty years; it has become larger, ‘flatter’19 and more connected. The reduction or removal of trade barriers (as in the case of Europe’s single market), the opening of new markets and the integration of transnational supply chains have become the defining stories of our time.

Barriers to mobility started to crumble around the time of the fall of the Berlin Wall in 1989, which marked the end of the Cold War and the beginning of an intense period of trade and financial integration. With the barriers to mobility removed or reduced, markets opened up, fostering innovation in technology, information, ideas, governance and institutions. This in turn created the conditions for more cross-border business, shaping the development path of many countries and underpinning the transformation of the world economy. These dynamics are reflected in the dramatic increase in international trade flows over the last three decades.

What really differentiates the later phase of globalisation, however, is not the speed of integration within a relatively short space of time, nor the rate at which international trade grew. It instead lies in the fact that the countries that had, for the last seventy years, largely remained at the periphery of the world economy, are now key components. These include China, India, Russia and Brazil, with South Africa added at a later stage – the BRICS as they have become known (as I will discuss in chapter 7) – but also Mexico, Indonesia, Thailand, Vietnam, Nigeria and Turkey. All of these countries have come to epitomise the broadening of the world economy both in terms of the increase in the share of the world GDP they currently produce (about 40 per cent at current market prices) and their contribution to global growth (approximately 60 per cent).

Capital flows, even more than trade, have grown robustly since 1990. This is true of both portfolio investments and foreign direct investments.

In the mid-1990s, gross cross-border capital flows accounted for approximately 5 per cent of world GDP; at their pre-crisis peak in 2007 they were about 20 per cent. Capital flows, then, increased at a pace approximately three times faster than that of world trade flows.39 Nowadays, foreign direct investment accounts for around 1.4 per cent of the world’s total GDP.40 (Foreign direct investment is a type of investment that reflects lasting interest and control by a foreign investor, such as when an investor who resides in one country buys or establishes a firm in another country.) In 1990 this figure was much lower, at approximately 0.9 per cent. Portfolio investments – such as when an investor buys shares in a foreign company or a portion of a country’s or a company’s debt, i.e., instruments such as stocks and bonds – make up by far the bulk of the overall investment activity. They account for approximately $58.7 trillion – approximately 68 per cent of global GDP.41 Since 2001, when the data series began, they have increased to four times their initial level.

Capital flows are a positive force for the economy as they support economic activity and growth. If they are directed towards activities that increase productivity and add value, they can – directly or indirectly – create new jobs and have a long-term impact. However, when international capital flows are directed towards speculative activities without any intrinsic value creation, they can often end up feeding speculative bubbles or excessive and unstable credit growth, generating significant risks for financial stability. When this happens the recipient countries can make themselves hostages to fortune and vulnerable to external shocks. Not only do these developments make it difficult to manage the domestic economy – for instance, by creating inflationary pressures – but they make countries vulnerable if money inflows suddenly reverse. The Organisation for Economic Co-operation and Development (OECD) has estimated that after large capital inflows, the probability of a banking crisis or a sudden stop increases by a factor of four.

The outbreak of the First World War, however, showed that geopolitical rivalries and even personal antagonism could overcome commercial and economic considerations.55 Despite this, the call for a sound international economic order that promoted cooperation and minimised ‘beggar-thy-neighbours’ actions remained embedded in the intellectual debate of the interwar years.

In The Economic Consequences of the Peace (published in 1919), Keynes identified trade as the key driver of prosperity, which, in turn, was believed to promote domestic order and moderation, resulting in international stability and peace. He further argued that obstructions to trade lead to impoverishment, which then fosters domestic extremism and disorder, and eventually international conflict. By the same token, those who identify their interests with trade are more likely to pursue peace than those who do not, and those who recognise that their well-being depends on trade will be much more likely to pursue policies of international ‘peace and amity’.

Playing by the rules seems to work better when economic conditions are symmetric, such as when the countries in Western Europe were rebuilding their economies at the end of the Second World War. Asymmetric economic conditions – in terms of economic growth and domestic welfare, for example – prevail when some countries experience stronger activity and better labour market conditions than others, as is currently the case in Europe. Even within Europe’s monetary union, divergent economic and financial conditions indicate that some countries (such as Italy) find it more difficult than others (such as Germany) to play by the rules and reduce public spending in order to rein in the public debt.

Three supranational institutions – the European Commission, the ECB and the IMF, otherwise known as the Troika – played critical roles during the resolution of the sovereign debt crisis that affected Greece between 2010 and 2015. As I will discuss in chapter 5, the urgency of the crisis and the need to minimise the risk of financial contagion to other economies in Europe – especially Italy with a public debt far larger than that of Greece – put the Troika, i.e., unelected public servants, in charge of crisis resolution. This raised the question of whether the management of global capitalism – of which crisis resolution and crisis prevention are important components – transcends the traditional model of democracy, possibly even making it impracticable.

In a document that the British government had published in 1940 in response to Nazi Germany’s plan for a ‘New Order’ in international economic relations,6 Keynes made it clear that the mistakes of 1919 could not be repeated; a return to the Gold Standard – the monetary arrangements in place until the First World War and reinstated from 1925 to the early 1930s – was not a viable option.

The experiences of the interwar period – during which unemployment rates peaked at roughly 20 per cent in Great Britain and roughly 25 per cent in the United States – had also shown the need for a system that could accommodate domestic policy objectives, such as full employment, as well as the objective of maintaining the external balance. The risk, otherwise, was to again undergo a competitive struggle for markets – the key economic cause of war.

The system worked as long as wages and prices were flexible enough to allow adjustments and maintain the external balance. In the event of a deficit in the trade balance, for instance, domestic prices and wages would drop, making exports more competitive and imports more expensive relative to domestic prices. Restoring the trade balance thus ensured that gold reserves were maintained to back national currencies – having too low gold reserves increased the risk of a convertibility crisis, i.e., when countries were unable to convert their outstanding liabilities into gold at the fixed parity.22 The system, however, was not suitable for an expanding world economy. There simply wasn’t enough gold to support the economic expansion of the late 1890s and early 1900s, for example, and it was difficult for domestic prices to adjust during this period.

Reinstated in 1925 as the Gold Exchange Standard,23 the system came under pressure during the Great Depression that followed the Wall Street Crash in 1929, when the commitment to gold convertibility proved to be an impossible stranglehold for the countries that adhered to it. As countries in the euro area know all too well (I’ll discuss this in Chapter 5), attempts to restore the external balance in a situation where the exchange rate is fixed lead to an undesirable dilemma. In these circumstances, labour productivity – that is, the output produced given a certain amount of labour force – needs to increase for the balance to be restored. As productivity increases, the cost per unit goes down; exports then become cheaper and therefore more competitive, helping to restore the required balance. Output per worker can be increased through improvements in skills – due to education and training – and in technology and innovation. But both these routes take time to deliver the desired effect and so are of little use when the situation requires urgent action. In this case, there are two options. The first one is to decrease wages, either by paying less for the hours worked or getting people to work for longer for the same pay; the second is to decrease the number of workers, leaving the remainder to pick up the slack. The internal adjustment therefore requires an internal devaluation that in turn is conditioned on an increase in unemployment and/or a cut in wages; neither option is fair or politically feasible.
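To make the internal-devaluation arithmetic above concrete, here is a minimal worked sketch in terms of unit labour costs; the numbers are my own illustration, not figures from the book.

```latex
% Unit labour cost (ULC): the wage bill per unit of output
% w = wage per worker, y = output per worker (labour productivity)
\[
\mathrm{ULC} = \frac{w}{y}
\]
% With the nominal exchange rate fixed, export prices broadly track ULC,
% so regaining competitiveness means pushing ULC down.
% Illustrative numbers: w = 100, y = 50  =>  ULC = 2.0
% To cut ULC by 10% (to 1.8) one needs either
%   y : 50 -> 55.6  (higher productivity, via skills and technology: slow), or
%   w : 100 -> 90   (lower wages, or fewer workers producing the same output: fast but painful).
```

Because the exchange rate cannot absorb any of the adjustment, the whole burden falls on wages, employment or productivity, which is exactly the dilemma described above.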

To combine rules with flexibility, the new system had to focus on achieving the two objectives of internal and external balance – rather than subordinating one to the other as was the case within the Gold Standard. Fiscal policy and adjusting exchange rates when they came to differ from their ‘fundamental equilibrium exchange rate’ values became the key policies of the new system.26 Fiscal policy would be used to manage domestic demand to ensure full employment and contain unemployment and the consequent downward impact on wages. At the same time, the cooperative approach to the exchange rate adjustment would restrain governments’ discretionary power of pursuing competitive devaluations and their potential ‘beggar-thy-neighbour’ impact. The way to achieve this was through an adjustable peg system of fixed parities that could be changed only under exceptional circumstances.

Put otherwise, the IMF was designed as a financial safety net – an insurance policy that member countries could turn to when they could no longer control the exchange rate. Its goals were to help countries maintain full employment, support rapid economic growth, keep exchange rates stable and avoid competitive devaluations. In addition, the IMF was tasked with creating a multilateral payments system, eliminating exchange restrictions and supplying funds to backstop and contain balance of payments disequilibria.

Thus the whole international monetary system relied on the ability of the United States to maintain liquidity in the system – that is, its ability to supply dollars ‘on demand’. As the economy of Western Europe was expanding on the back of the postwar reconstruction, the private and official demand for dollars was growing too. Dollars were supplied through private and official long-term capital outflows in excess of a current account surplus.

There were three issues that were undermining the Bretton Woods system and would ultimately result in the American decision to unilaterally unravel it. The first problem concerned the adjustment of countries with a persistent deficit in the balance of payments, as was the case for Britain. As in the 1920s, deficit countries bore the burden of adjustment.

The second problem consisted in an increased risk of a run on the dollar. Although dollar convertibility outdid the Gold Standard in creating liquidity, the strain on the link between the dollar and gold had been exacerbated by the perceived gold shortage – even more so than had been the case in the past. As the United States provided dollars to the fastest growing economies, such as those of continental Europe and Japan, it ran persistent deficits in the balance of payments. The result was that the outstanding dollar holdings increased relative to the US monetary gold stock, ever widening the gap. The third problem, related to the second one, was brought to light by economist Robert Triffin in his 1960 testimony before the US Congress. He warned that the ‘dollar overhang’65 was growing larger than America’s gold stock. The persistent deterioration of the United States’ net reserve position would ultimately undermine confidence in the value of the dollar. Lacking this confidence, the dollar would lose its standing as the world’s leading reserve currency. The so-called Triffin dilemma came to express the choice that the United States faced. On the one hand, it could improve confidence in the dollar by embracing contractionary policy that would have a deflationary impact on international liquidity. Alternatively, it could support liquidity by embracing expansionary policy, but in doing so it would risk undermining dollar-holders’ confidence.66 Put otherwise, the United States could retain confidence in the dollar by reducing the deficit, but only at the cost of reducing liquidity in the global system and constraining the growth of the domestic economy.

The deterioration of confidence in the United States vis-à-vis the dollar indicates the fundamental problem within the Bretton Woods system that eventually resulted in its demise. Indeed, once confidence was damaged, governments and central banks began to question what they should do with their large dollar holdings and whether they would be better off converting them into gold before the dollar was devalued. If the Bretton Woods countries had requested to convert just a quarter of their dollar holdings at the same time, the United States would not have been able to meet its obligations.

As public opinion in the United States was becoming more hostile towards the war in Vietnam, it was also becoming increasingly aware and less tolerant of the cost of putting the needs of foreign allies before domestic policy objectives. In particular, there was a growing concern over the commercial threat posed by Europe and Japan. Governments and central banks in Europe and Japan, in turn, were growing increasingly uneasy about holding dollars. The Europeans and Japanese had just one critical tool to hand that could rein in the monetary policy autonomy of the United States – the right to demand to convert their dollar holdings into gold. As each side accused the other of being uncooperative, the cracks in the Bretton Woods system deepened even further.

Thus the end of Bretton Woods was more than just the end of dollar convertibility; it ushered in a new light-touch policy framework focused on deregulating domestic financial systems and limiting fiscal policy so as to minimise political discretion and attempts to manipulate the economy. As a result, monetary policy became the main policy tool for managing domestic demand. With floating exchange rates, monetary policy can induce movements in the exchange rate that bring about the required changes in demand. Indeed, demand can be stimulated by cutting interest rates, which will in turn support growth and boost employment. Part of this stimulus happens through the depreciation of the exchange rate, as a reduction in the interest rate curbs demand for financial assets denominated in the domestic currency. A weaker exchange rate helps increase exports while making imports more expensive. As such, it was concluded that all that was needed in terms of active policies was flexible exchange rates, disciplined fiscal policies and government budget deficits that are balanced in the longer term.

Starting in the 1980s, the advanced economies and many developing countries removed capital controls to stimulate international trade and expand output. It was the beginning of the financialisation of the world economy. Thatcher liberalised Britain’s capital movements in 1979 – one of her government’s first moves. This was followed by the French socialist government’s ‘tournant de la rigueur’ (‘turn to austerity’) in 1983, which paved the way for more financial liberalisation within Europe. As a result, capital has accumulated globally and the global economy has developed strong interdependencies, trans-border linkages and global networks; compared with earlier versions, its scale and scope are both wider and deeper. This has translated into bigger markets, lower labour costs, tax cuts, less regulation and new opportunities for accumulating wealth through intangible assets such as information and knowledge.77 This process of financial integration has considerably delinked money and finance from territorial space.

The liberalisation of capital movements that followed the breakup of Bretton Woods, coupled with the belief that markets should be left to their own devices with little intervention and limited regulation, resulted in recurrent episodes of financial instability. These, in turn, have eroded the rules-based international order.

Grappling with capital flows becomes even more daunting when foreign capital is underpinning the domestic banking sector; the necessary adjustment of the exchange rate to support the real economy can trigger speculation and eventually capital outflows. This can result in the collapse of domestic banks, as happened in Asia in 1997 and then in Argentina in 2001. The British and Italian ‘Black Wednesday’ of 1992, which I also discuss, seems like an odd choice, but it is here to show the risks of embracing a monetary straitjacket while keeping free movements of capital – exactly the opposite of Bretton Woods.

After the 1990–91 recession in the United States, the Fed responded to high and rising unemployment rates by reducing the federal funds rate from 6 per cent in mid-1991 to 3 per cent by October 1992, where it stayed until February 1994. These Mexican and US policy decisions resulted in large amounts of capital flowing into Mexico in the early 1990s, as many mutual funds and other financial intermediaries were chasing higher returns. As a result, portfolio capital inflows became a critical source of foreign savings for Mexico.

The large amounts of capital that were flowing into Mexico did not help. The appreciation of the exchange rate worsened even further as the supply of domestic non-tradables – for example, services such as hairdressing that need to be performed locally – came under pressure, undermining the policy of containing inflation. This caused actual inflation to again be higher than the target, even though the Mexican government’s reforms had overall contributed to reducing it. Savings fell while investments increased, widening the savings-investment gap. Private investments increased on the back of positive expectations about Mexico’s future economic performance; the Mexican government’s market-oriented reforms (which began in the mid-1980s) complemented its stabilisation efforts and turned investors’ sentiment.

The real exchange rate appreciation penalised exports and encouraged imports, resulting in a widening gap between what was produced and what was consumed, and a growing disequilibrium in the current account. There is an important point here that is worth stressing. The fixed exchange rate system embraced by Mexico could be sustainable, but only if the growth in productivity was fast enough to impact on the real exchange rate. Otherwise, the economy would experience sluggish growth or – at worst – if imports continued to be stronger than exports, Mexico would fall into a balance of payments crisis yet again. Such a crisis could be held off for as long as the current account deficit was financed by foreign capital inflows.
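The accounting behind the savings-investment gap and the current account deficit mentioned above is the standard national-accounts identity; the figures below are made up purely for illustration, not taken from the book.

```latex
% Current account (CA) as the gap between national saving S and investment I:
\[
CA = S - I
\]
% Illustrative (made-up) figures: S = 18\% of GDP and I = 24\% of GDP
% imply CA = -6\% of GDP, i.e. a current account deficit of about 6\% of GDP
% that has to be financed by foreign capital inflows, which is exactly the
% situation that left Mexico dependent on international investors' sentiment.
```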

For a while, Mexico’s current account deficit was ‘overfinanced’; there was so much capital flowing into the country, that its central bank was able to build up its reserves. This strategy, however, was not sustainable. The low domestic savings rate meant that Mexico was vulnerable to problems in debt servicing and subsequent swings in international investors’ sentiment.

There are three important lessons to be learned from the Mexican crisis.14 The first is that countries that rely on foreign capital instead of domestic savings do so at their peril. However, as we will see with subsequent crises, relying on domestic savings is easier said than done. This is because financial globalisation has made it much easier for developing countries to achieve high rates of GDP growth by attracting foreign capital than it is for them to do so by developing their economies slowly and sustainably. The second point, related to the first, is that policy measures to deter speculative capital flows should be applied – even if that implies reducing such flows in the short term. The final point is that exchange rate policies need to be flexible. For if policymakers can’t make adjustments without losing credibility, damaging capital flight is certain to ensue. A corollary is that when capital movements are unrestrained, a flexible exchange rate allows adjustments that a fixed exchange rate system doesn’t. The debacle of the European Monetary System (EMS) shows the problems with rigidity and lack of international cooperation.

Like the United States’ decision to end the convertibility of the dollar into gold in 1971, the suspension of the EMS was a key moment in the development of the postwar economic order. The relationship between Germany and France, and Germany and the United Kingdom, was put under strain – with consequences that last to this day. The French felt that they were paying for the cost of German reunification, just like in the 1960s when they felt that the United States was using the dollar to fund their budget deficit.20 The British shared this sentiment. As Prime Minister John Major wrote to German Chancellor Helmut Kohl: ‘German reunification is at heart of these problems [. . .] Britain strongly supported [this] but many in Britain believe that we are now having to pay a high price.’21 The British and Italians felt that the speculative attacks on sterling and the lira could have been mitigated, or even avoided, if Germany had been willing to actively drive the value of the deutsche mark down, allowing the other currencies to adjust. Recall that equilibrium in the ERM was achieved by participating nations selling the strongest currencies for the weakest ones – and the mark was the strongest currency at the time. There wasn’t a technical obstacle preventing the Bundesbank from purchasing sufficient quantities of British pounds and Italian lira to avert the precipitous depreciation of sterling and the lira against the mark. But the Germans were persuaded that their economy was booming and monetary policy should be steered towards avoiding inflation regardless of the impact on other countries in Europe.

Prior to the Asian crisis, the broad exchange rate stability and rapid credit growth muted investors’ and banks’ capacity to adequately assess risk. As such, currency mismatches on corporate balance sheets and the highly leveraged positions of the borrowers went under the radar. Banks were also increasingly exposed to maturity mismatches, with foreign borrowing short-term and domestic lending long-term. Lax prudential regulatory and supervisory practices also contributed to the problem. Indeed, many non-bank financial institutions had emerged in the region in the runup to 1997. This was because the licensing requirements were much lighter in places such as Thailand, and regulations in South Korea and the Philippines – including lower capital requirements – were much less stringent than those applied to commercial banks. As long as money was flowing in, however, the system was kept in equilibrium. In 1996, Thailand amassed inflows equal to 14 per cent of its GDP. As domestic borrowers were tapping into cheap foreign-currency loans, the situation was becoming clearly unsustainable.

The Asian crisis, once again, put the problem of unfettered capital flows under the spotlight – especially for developing countries. There are four critical lessons to be learnt here. First of all, financial globalisation and the liberalisation of capital movements without an appropriate regulatory framework put the financial stability of many countries at risk. The second point, linked to the first, is that the speed and impact of financial contagion among economies interconnected through capital flows can generate a vicious cycle of debt, and hit the real economy. Third, rebuilding and expanding foreign exchange reserves in countries that were affected by the crisis is seen as a form of self-insurance against further crises. But the final and most important point is that the Asian financial crisis brought to light the need for greater financial cooperation within the region in face of a common crisis. Indeed, in May 2000, finance ministers from ASEAN+3, the Association of Southeast Asian Nations plus China, Japan and South Korea, met in Chiang Mai, Thailand and agreed on a series of bilateral swap arrangements.

The problem here is that often the monetary policy and political decisions that will benefit the United States domestically are exactly those that will cause havoc for the countries that rely on the dollar system. This highlights the broad and deep contradiction in having an international monetary system that has retained the dollar standard of Bretton Woods bar gold.

China epitomises the constraints of immature lenders13 – another is Singapore with a current account surplus in excess of 16 per cent of GDP. With a surplus in its current account, even if it has significantly narrowed over the years, China tends to offset this excess by investing abroad. In one sense, this is not dissimilar from Britain when it invested sterling all over the world in the nineteenth century, or Germany nowadays, which lends heavily to other countries in the euro area. But in another sense, it is radically dissimilar because China invests abroad in dollars – not in renminbi. Over the years, China has offset its trade surpluses by accumulating dollars and financial assets denominated in dollars and, increasingly, by expanding its financial diplomacy in Asia, Africa and Latin America. It is indeed the renminbi’s limited international circulation and liquidity, as I will discuss in Chapter 7, that restricts its usefulness for international lending, dictating that China’s external claims must be made in dollars. In doing so, however, China continues to take on the exchange rate risk.

Rapid credit growth underpins the overheating of domestic economies and the buildup of vulnerabilities, and it results in growth that is not sustainable and often leads to bubbles and subsequent bursts. Between 2010 and 2013, issuance of below-investment-grade debt in developing countries, especially India and Turkey, rose from 15 to 35 per cent of total debt issuance.20 When inflows turn – and they always turn when economic growth eventually slows, financial conditions become tighter and exchange rates depreciate – then a financial and banking crisis may be just around the corner. For developing countries with weak economic fundamentals and unsustainable exposure to capital inflows, this has always been the case.

The remedy devised by the Fed did kickstart the US economy and therefore the world economy after the global financial crisis, but it simultaneously landed the developing countries in a difficult position. At the time QE was presented as an almost inevitable measure, but this was not the case. Given that the downturn in demand was large enough to require exceptionally loose monetary policy, fiscal policy should have been used to sustain demand, but that was politically unfeasible.

The portfolio flows mainly fed into local currency sovereign and corporate bond markets. In 2013, corporate bond issuance in developing countries reached $630 billion – in 2000, it had amounted to just $13 billion. During the same period, the share of debt issued in local currency expanded from close to zero to over 50 per cent.25 We have seen in the previous chapter, with examples such as Mexico and Argentina, that mixing heavy capital inflows with sovereign debt in a developing country tends to create a breeding ground for crises. Indeed, all crises since the 1980s have been triggered by excessive indebtedness and excessive reliance on portfolio inflows, and banking crises are frequently born out of debt crises. In addition, a surge in capital inflows is conducive to stimulating strong credit growth. In some countries, as I will discuss in the next section, some of these inflows were channelled into the shadow banking sector.

So, for China, a stronger renminbi equated to a drastic reduction in the value of its dollar reserves – a lessening of the ‘wealth of the nation’. But, to some extent, China could only blame itself and its exchange rate policy. As the renminbi was appreciating against the dollar, the central bank was intervening in the foreign exchange market to curb the strength of the Chinese currency and as a result continued to expand its dollar holding. China’s official reserves would go on to peak at just over $4 trillion in June 2014.29 This story illustrates the risks of sitting on a large pile of dollars. Dollar accumulation means taking on a large exchange rate risk and also being exposed to the domestic politics of the United States as well as vulnerable to swings in US foreign policy.

But holding reserves has a cost for countries that are still developing their economies, where a significant share of the population is poor and where the income per head is relatively low. Indeed reserves are capital that is accumulated instead of being used for domestic development.

China’s central bank, the PBoC, holds onto the dollars that are earned through trade, stashes them in its foreign exchange reserves and gives exporters renminbi in return. In other words, the PBoC buys dollars in exchange for renminbi, which changes the dynamic between supply and demand of the two currencies and prevents the renminbi from appreciating against the dollar. In practice, this equates to injecting a lot of renminbi liquidity into the banking system that, in turn, feeds domestic demand and pushes up both consumer and asset prices. Coupled with capital markets’ limited diversification, this has the potential to result in the creation of asset bubbles in markets such as real estate,

To avoid any undesired effects on consumer prices, the Chinese authorities need to control monetary expansion in the domestic market. They have an array of policy tools at their disposal to mop up the excess liquidity – or sterilise it. One option is to increase the reserve requirement ratio for large domestic banks, which has at times been raised to around 20 per cent of banks’ deposits. Another option is for the central bank to sell financial securities such as bonds. Between 1999 and 2005, the PBoC bought nearly all of the foreign currencies that came into the country, invested them and then sterilised the inflows by issuing local-currency bills to take the newly created renminbi out of circulation. Around 90 per cent of China’s reserves have accumulated from the joint process of foreign exchange intervention and sterilisation.
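As a rough sketch of the intervention-plus-sterilisation mechanics described above, the following toy calculation uses entirely made-up numbers and variable names of my own; it only shows how bill issuance and a higher reserve requirement can offset the renminbi created when the central bank buys dollars.

```python
# Illustrative sterilisation arithmetic (made-up numbers, not actual PBoC data).

fx_inflow_usd = 100e9          # dollars earned by exporters and sold to the central bank
usd_cny_rate = 7.0             # assumed exchange rate, renminbi per dollar

# Step 1: the central bank buys the dollars, paying with newly created renminbi.
rmb_created = fx_inflow_usd * usd_cny_rate

# Step 2: sterilisation. It sells central-bank bills to commercial banks, withdrawing
# renminbi, and/or raises the reserve requirement ratio so banks must park more
# funds at the central bank instead of lending them out.
bills_issued_rmb = 500e9
rrr_increase = 0.005            # +0.5 percentage points of the deposit base
deposit_base_rmb = 20e12        # assumed size of the banking system's deposits
reserves_locked_rmb = rrr_increase * deposit_base_rmb

net_liquidity_rmb = rmb_created - bills_issued_rmb - reserves_locked_rmb
print(f"Renminbi created by FX purchase: {rmb_created / 1e9:.0f}bn")
print(f"Withdrawn via bills and reserves: {(bills_issued_rmb + reserves_locked_rmb) / 1e9:.0f}bn")
print(f"Net liquidity added:              {net_liquidity_rmb / 1e9:.0f}bn")
```

With these made-up figures only a fraction of the renminbi created by the intervention is left circulating, which is the whole point of sterilisation.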

It is true that China’s development over the last three decades has been facilitated by the dollar. But ‘free-riding’ on the dollar – an endless source of annoyance for the United States, as I’ll discuss later – has not come without its own set of burdens. Although it may have suited the Chinese leadership’s goals in the heyday of reforms and opening up, there is no doubt that it has become a constraint. Having the dollar at the core of China’s financial and monetary system is a constant reminder of the limitations of such a system. For, despite the size of its economy, financially China remains a developing country. Further to this, there are costs implied in using the dollar such as mismatched assets and liabilities on firms’ balance sheets, and exchange rate and liquidity crunch risks. Another facet of China’s dependency on the dollar is that the renminbi is an ‘immature’ international currency with limited convertibility outside of designated markets, restricted payment facilities and so constrained international circulation. As for the United States, China’s (and other countries’) large dollar holdings highlight that the dollar can be a powerful weapon in the event of geopolitical tensions.

Being offered the possibility to rely on a dollar liquidity line from the Fed is a highly effective way to stabilise a country’s banking and financial system. The Fed indeed agrees to accept some countries’ currencies in return for a loan in dollars, acting as de facto lender of last resort.

Unlike in Japan, where well over half of the debt is public, in China it is mainly private. The rate at which China’s debt has grown is a cause for concern and this is what prompted the monetary authorities to keep a tight grasp on capital movements. As I will discuss in Chapter 7, capital controls have ensured that individuals’ and families’ savings remain in the country and are channelled back into the banking and shadow banking sectors. This means that China’s highly indebted companies and the banks that lend to them are never without financial resources. As long as China keeps control on capital movements, then, its current account surplus and large holding of foreign exchange reserves should keep its debt sustainable and ward off financial instability.

The IMF has warned that high corporate debt levels will amplify stresses and put financial stability at risk52 – and there are many potential sources of stress in the global financial system. Trade tensions could worsen the outlook and cause investor sentiment to sour, resulting in stress from an increase in risk premiums, for example. A shock of this type would result in increased interest rates, corrections in stretched asset valuations (such as properties), exchange rate volatility and sudden international capital flow reversals. Such developments would bring leveraged companies, households and sovereigns under strain, deteriorate banks’ balance sheets and cause damage to public finances, especially those of emerging market economies.

Austerity programmes and constraints on fiscal policy pushed the burden of the post-crisis adjustments on deficit countries – like Greece – while no adjustment was required from surplus countries – like Germany – which were allowed to accumulate surplus and push onto others the impact of their deflationary policies.

Europe’s monetary union has been built around the idea that the integrity of the union needs to be protected from the fiscal profligacy of its member states. This idea is solidified in the Maastricht Treaty fiscal rules, which – in addition to the sovereign debt and the annual budget deficit thresholds – state that no member country should have an inflation rate more than 1.5 percentage points above that of the best-performing member states, and that the union will not assume or be liable for the commitments of member states’ central governments. The bottom line of this is that the union will not bail out its members if they have not respected the rules.

Playing the role of lender of last resort means that central banks provide liquidity to the commercial banking system in times of stress by lending against good collateral and charging interest rates. This function is now critical to our modern economy. Without the assurance that a central bank is both willing and able to provide liquidity, problems specific to one single bank will likely cause savers to withdraw their funds from other banks too. What this means is that a solvency crisis that begins in a specific part of the system can quickly evolve into a systemic liquidity crisis that has the potential to bring down the entire system. Unconstrained capital flows mean that a lender of last resort cannot limit its action to commercial banks. As it became evident throughout the unfolding of the Greek crisis, the lender of last resort should be prepared to provide liquidity to the bond market by actively intervening if a lack of liquidity and rising interest rates threaten to trigger a sovereign debt default.

The issue of how to shape a balanced relationship between democratically elected bodies and delegated bodies lies at the core of organisations like the WTO, the IMF, the multilateral development banks, and the OECD. By providing the infrastructure (including data and analysis) critical for the implementation and monitoring of the rules that inform the international economic order, these institutions increasingly play a decisive role in shaping and influencing the economic policies of sovereign states that are part of such order.

Looking at Italy and Greece, the moral of the tale is clear: dealing with debt can disrupt domestic politics, especially when the adjustment required as part of the debt-management strategy disproportionately hits the most vulnerable in society. For debt-afflicted countries, either they play by the rules – and those in the single currency fit in the monetary straitjacket – or they risk losing market confidence and so undermining or even stopping the capital flows that are necessary to refinance the existing debt. This, as we have seen in Greece, can bring havoc or simply push up the cost of servicing the debt, as in the case of Italy.

To borrow the words of Jean-Claude Juncker who, having served as prime minister of Luxembourg between 1995 and 2013, and president of the European Commission between 2014 and 2019, is easily one of Europe’s most seasoned politicians: ‘Politicians are vote maximisers [. . .] for the politician, the Euro can render vote-maximising more difficult, as a smooth and frictionless participation in the monetary union sometimes entails that difficult decisions have to be undertaken or that unpopular reforms have to be initiated.’36 Juncker clearly refers to Europe’s monetary union, the most constraining framework that countries can sign up to. But this is true for all countries that are exposed to capital markets, for debt is in principle at odds with democracy if debt management imposes policy choices that are difficult to reconcile with voters’ preferences.

Close international cooperation is critical for keeping the economic order in equilibrium and containing instability when crisis hits. International cooperation is difficult to maintain, because it often conflicts with national interests. It is impossible, however, when countries start from an uncompromising nationalistic position where domestic interests are too strong or the domestic political costs are too high.

In a world where capital moves in and out of markets freely, sovereignty – and especially fiscal sovereignty – is a fluid concept. International investors have not been shy to voice their concerns regarding Italy and, as we have seen repeatedly throughout this book, governments that struggle to maintain credibility in the eyes of international investors find it difficult – if not impossible – to refinance their debt when the money eventually leaves the country. Italy has all the ingredients for this to happen: stagnant growth, delays in the improvement of its debt position, a weak banking sector, falling saving rates – and no perceived urgency, let alone a clear plan to address structural reforms.

Putin declared that the liberal order had ‘outlived its purpose’ and become ‘obsolete’ as the political balance of power is shifting from ‘traditional western liberalism to national populism’. The reason? ‘Public resentment about immigration, multiculturalism and secular values at the expense of religion.’52 The interview – Putin’s first with a major international newspaper in sixteen years – leaves many questions unanswered. Why now? Was this a way to tell the world that Russia is back at the top table with history on its side, as the Financial Times argues? And the idea that liberalism has run its course and has ‘come into conflict with the interests of the overwhelming majority of the population’, is it what ‘our Western partners’ believe or what they are led to believe?

China’s outward direct investment really took off in the mid-2000s; by 2018 China’s direct investment outward stock had reached almost $2 trillion, making it a net exporter of capital for the first time, and the second largest outbound investor in the world.20 By investing abroad, China has managed to connect itself to a variety of resources that have been invaluable for its development, including commodities such as oil, iron ore and copper.

Thus, the large pool of savings that has been accruing in China’s domestic banks for several decades has been instrumental for the country’s rapid economic growth. Indeed, the Chinese authorities have channelled these savings towards the country’s industrial transformation. By tightly controlling the maximum deposit rate (capping savers’ returns), the minimum lending rate (and so keeping the cost of borrowing low) and credit quotas, the monetary authorities have adequately managed the allocation of these savings from the banking system into state-owned enterprises.

Such a system has created some serious distortions and vulnerabilities. Banks are saddled with too much low-quality lending and non-performing loans, and they are vulnerable to insolvency that can in turn trigger a liquidity crisis. Savers, on the other hand, feel the pressure as they do not get much for their money, and so they turn to non-bank financial institutions instead in the hope of getting better returns. Within the so-called ‘shadow’ banking sector, savers can invest in unregulated short-term instruments such as commercial paper that pays high interest over three months but is riskier than bank deposits – with the offer of increased returns, of course, comes increased risk. Low interest rates can create problems for borrowers too. Chinese firms often borrow more than they need, and give little consideration to efficiency or profitability. After all, if they were to incur any financial losses, these would simply be covered by state subsidies.

The so-called ‘policy banks’ – the Agricultural Development Bank of China (ADBC), China Development Bank (CDB) and Export–Import Bank of China (Exim Bank) – are the other pillars of China’s banking system. They were all established in 1994 to finance trade, development and state-led projects. Holding no deposits, they raise their capital by issuing bonds on the domestic capital market, where they are dominant. Indeed, approximately three-quarters of the total bonds on the Chinese market are issued by the policy banks.

By keeping controls on capital movements, the Chinese monetary authorities are able to maintain domestic financial stability. Given China’s combination of high savings and financial repression, unrestricted capital movements would pose a threat. If the authorities were to loosen controls, Chinese people could choose to invest their savings abroad for better returns. China’s domestic banks would then be left with two options – compete or collapse. A similar threat could also arise from a change in external conditions, such as a shift in US monetary policy that could trigger domestic financial instability in China. Managing capital movements to avoid sudden shifts in the demand for renminbi and renminbi-denominated assets ultimately feeds back into the Chinese monetary authorities’ control of the exchange rate. For example, in the years after the global financial crisis, when the Chinese economy was growing strongly, interest rates in the United States were zero and the dollar was weak, controls on capital movements served to restrain inflows, helping to maintain financial stability and keep the exchange rate at a level consistent with China’s economic objectives.

So, if it is true that ‘great nations have great currencies’,48 then not having a great currency is preventing China from achieving its ambition of being a great nation. But what does this obstruction actually consist of? I have already discussed the limitations that face countries lacking an international currency when they have to issue debt (the original sinners) or lend (the immature creditors) on international markets, using currencies that are not their own. With an excess of savings, China is a lender rather than a borrower and so suffers from the latter of these issues, with all of its related costs and risks.
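A toy example of the immature-creditor problem – again with made-up numbers – helps show why lending abroad in a currency that is not your own is costly and risky: the creditor, not the borrower, carries the currency mismatch.

```python
# Toy illustration (hypothetical numbers) of the 'immature creditor' problem:
# a creditor that must lend in a foreign currency bears the exchange rate risk.
loan_usd = 100_000_000          # loan extended in dollars
rate = 0.05                     # interest on the loan (5%, assumed)

fx_at_lending = 7.0             # yuan per dollar when the loan is made (assumed)
fx_at_repayment = 6.3           # yuan per dollar at repayment: the renminbi has appreciated (assumed)

cost_in_rmb = loan_usd * fx_at_lending
repaid_usd = loan_usd * (1 + rate)
repaid_in_rmb = repaid_usd * fx_at_repayment

print(f"Lent (renminbi terms):   {cost_in_rmb:,.0f}")
print(f"Repaid (renminbi terms): {repaid_in_rmb:,.0f}")
print(f"Gain/loss in renminbi:   {repaid_in_rmb - cost_in_rmb:,.0f}")
# Despite earning 5% in dollars, appreciation of the home currency can wipe
# out the return once it is measured in renminbi.
```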

‘Made in China 2025’ should help to push the country to the second stage of such industrialisation and achieve 70 per cent ‘self-sufficiency’ in high-tech industries by 2025. The aim is not only to reach a point similar to that of advanced economies in the earlier stage of their industrialisation, but also to respond to competition from other developing countries. Without an ad hoc industrial strategy, the authorities felt that the competitive advantage acquired over the first stage of industrialisation would be dissipated, and China would be squeezed between advanced economies and developing countries with a lower cost base.

It is the relative competitive position of different countries in international trade as a result of the rise of China – and other developing countries – that bothers the United States. Through its process of ‘opening up’, China has challenged the multilateral trade system to accommodate the increase in cheap exports that has resulted from its export-led growth model. The world is still grappling with the impact of this on labour market arrangements, consumption patterns and environmental sustainability.1 In addition, China’s large accumulation of savings, a consequence of its export-led growth model, has exacerbated the imbalances between creditor countries (i.e., China itself) and debtor countries (such as the United States). And controls on capital movements in and out of China constrain the adjustment through the exchange rate.

Thus, despite the catchy soundbite, the call for a ‘new Bretton Woods’ was not answered. There are a number of reasons for this. First – as I have already stressed – Bretton Woods did not come out of the blue; it was the result of years of intellectual debate and policy discussions. Second – again, a point already mentioned – the post-2008 initiatives were driven by the urgent need to reset the system, so there was simply not enough time – and possibly not enough inclination – to consider any alternative. The third reason was that the global financial crisis, despite its devastating effects and long-term impact, did not compare to the knock-on effect of two world wars. The state of affairs was undeniably severe, just not severe enough to trigger the rebuilding of the international economic order – as had happened in 1944.

As in the 1930s, and at any point of crisis in the international order, trade is the lightning rod for geopolitical tensions and rivalries, even if the lightning comes from monetary imbalances. The constrained monetary system of the 1930s led to a rise in protectionism, which in turn resulted in political catastrophe, global conflict and the collapse of international cooperation.

There are three broad lines along which the governance of the Bretton Woods institutions needs to be reformed: the developing countries need to be better represented, they need to be given an adequate voice in decision-making processes and the leadership needs to be appointed on the basis of merit and through a transparent process.

While China is happily heading down the path of development finance through the AIIB and, to a lesser extent, the NDB, it lacks the financial capacity and the political clout to mitigate and absorb the risks that are implicit in providing the global financial safety net. While the dollar remains the key international currency, the risk of supplying liquidity in dollars is far too great even for a country with deep financial resources like China. China is fully aware of this fact. And in any case, seriously challenging the long-standing Bretton Woods institutions doesn’t seem to be the ultimate goal of the AIIB, and even less so of the NDB. Instead, these institutions appear to be a step towards the creation of a long-needed economic and financial regional network to embed and support Asia’s economic and financial integration, or regionalisation. Nonetheless, the United States’ approach to China has changed from an ambiguous ‘engage and contain’ to open hostility.

The bilateral swap agreements arranged by the PBoC are fundamentally different from the CMIM and the BRICS Contingent Reserve Arrangement because they are not designed to create a liquidity line and make dollars available between central banks. Instead, their purpose is to establish a domestic currency liquidity channel between these banks to support bilateral trade and investment relations and feed, if necessary, the offshore renminbi markets. As domestic currencies are increasingly used in bilateral trade – as China has been pushing for some time – it becomes even more important that liquidity in those currencies is assured.

Ultimately, bilateral swaps are an instrument of financial diplomacy. They are flexible, quick to deploy, easy to manage and suitable for winning friends. As a result they are becoming an increasingly important tool in China’s financial diplomatic toolbox.

Arguably the AIIB and the NDB are China’s own contributions to the global financial architecture, based on forty years of reform and development experience. China is also taking greater responsibility in providing financial support to countries that are experiencing short-term liquidity problems. Here it has chosen to work collaboratively with ASEAN. There are, however, two operational issues that mean, at least for the foreseeable future, China is both unwilling and unable to take up the mantle of leadership and create a system parallel to, or in competition with, the existing one. First of all, as long as the renminbi remains an immature international currency with limited circulation and liquidity, China fundamentally lacks the financial capacity to take up this role. Finance for development and stability are the critical goods that the country at the helm of the international order must provide, but China would have to do so in dollars, which would be costly and risky. It would have to tap into its dollar reserves which, admittedly, are large, but ultimately finite. China would emerge as a lender of last resort with limited capacity, essentially a contradiction in terms.

‘The 1929 depression was so wide, so deep and so long because the international economic system was rendered unstable by British inability and the United States unwillingness to assume responsibility for stabilizing it’, writes Charles Kindleberger in the last chapter of his magisterial book The World in Depression.3 According to Kindleberger, when the Wall Street stock market crashed, the United States should have stepped up and taken responsibility for: ‘(a) maintaining a relatively open market for distress goods; (b) providing countercyclical long-term lending; and (c) discounting the crisis’.4 Put otherwise, the United States should have provided liquidity and ensured policy coordination among the main economies. Instead, it chose to raise import duties by implementing the Smoot–Hawley Tariff in 1930, opening the door for other countries to embrace ‘beggar-thy-neighbour’ trade policies and exchange rate depreciation in order to protect their domestic markets. ‘When every country turned to protect its national private interest, the world public interest went down the drain, and with it the private interests of all.’

As a ‘new Bretton Woods’ is unlikely to emerge, the best option is to create resilience to respond to economic and financial instability. Regional arrangements and regional currencies will lend resilience to this framework. If the large economies can agree to blend their rivalries within existing, but reformed, international institutions and new multilateral regional ones, then the economic and monetary system will accommodate regional arrangements and regional currencies so as to balance the fading US leadership.

If the United States begins to use the dollar to confront its adversaries against an increasingly challenging geopolitical background, and to supply dollars selectively in line with its foreign policy objectives, then this situation could easily materialise. These questions inevitably lead to the key issue of how long the dollar should remain at the helm of the international monetary system – is it finally time to take suggestions of an alternative seriously? Inevitably, there will come a time when dollar-holders lose confidence in the value of the American money, taking it to be less predictable and less secure. Geopolitical reasons have driven recent shifts in the use and holding of dollars, but these have been on the margin. The most notable case is that of Russia, where the central bank’s dollar holdings have been halved in favour of euros, renminbi and yen; 32 per cent of Russian reserves are now in euros and 22 per cent in dollars.

There is no indication that the Chinese leadership’s plan is for the renminbi to replace the dollar as the pivotal currency in the international monetary system, and the authorities have actively sought to play down such expectations since the launch of the renminbi strategy in 2009.

The development of the renminbi as the reference currency for Asia is consistent with China’s strong trade ties in the region. As the renminbi becomes more frequently used to settle trade transactions between China and its neighbours, China’s persistent trade deficits with other Asian countries – it is a net importer in the region – suggest that renminbi are expected to accumulate in their reserves.39 Initiatives such as the BRI are likely to drive the use of the renminbi for payments in the region as the Chinese companies involved in the construction of these projects will offer renminbi, or even demand them, for settling payments.40 In addition, closer links to China mean that movements in the renminbi should become more relevant for Asian exchange rate markets. For example, the currencies of Asian economies in the same production chain as China and those of large commodity exporters are likely to respond to global demand shocks in the same way as the renminbi. By the same token, China’s neighbouring countries will have an incentive to stabilise their currencies against the renminbi and hold more foreign exchange reserves denominated in renminbi.
