THE PRICE OF PEACE: MONEY, DEMOCRACY, AND THE LIFE OF JOHN MAYNARD KEYNES
by Zachary D. Carter
Published in December 1923, A Tract on Monetary Reform was, like its predecessor, a deceptively technical title filled with shocking ideas.19 It was not merely the sanctity of international debt contracts that must be abandoned, Keynes informed his readers, but the entire global financial system that had established the foundation of free exchange between nations. The gold standard, the benchmark of economic sanity for as long as anyone could remember, had become a barrier to peace and prosperity—a “barbarous relic” incompatible with “the spirit and the requirements of the age.”20 One by one, Keynes was taking aim at the sacred tenets of nineteenth-century capitalism. The world was about to change.
The new financial reality had spawned its own political ideology. In 1910, the British journalist Norman Angell published The Great Illusion, a book claiming to demonstrate that the international commercial entanglements of the twentieth century had made war economically irrational. No nation, Angell argued, could profit by subjugating another through military conquest. Even the victors would suffer financial harm, whatever the spoils might be.
The more currency a country circulated, the more economic activity it could support—so long as there was a corresponding amount of gold stored away in bank vaults to back up its bills. The financial thinkers of the day believed that without gold to give money some value independent of a government’s say-so, issuing fresh currency could not ultimately boost the economy. Instead, it would cause inflation, an overall increase in prices that would devalue the savings people had previously accumulated and eat away at the purchasing power of their paychecks.
The experience left a deep impression on Keynes. Financial markets, he had discovered, were very different from the clean, ordered entities economists presented in textbooks. The fluctuations of market prices did not express the accumulated wisdom of rational actors pursuing their own self-interest but the judgments of flawed men attempting to navigate an uncertain future. Market stability depended not so much on supply and demand finding an equilibrium as it did on political power maintaining order, legitimacy, and confidence. Twenty-two years later, those observations would become central tenets of the economic theory presented in Keynes’ magnum opus, The General Theory of Employment, Interest and Money:

“A large proportion of our positive activities depend on spontaneous optimism rather than on a mathematical expectation, whether moral or hedonistic or economic. Most, probably, of our decisions to do something positive…can only be taken as a result of animal spirits—of a spontaneous urge to action rather than inaction, and not as the outcome of a weighted average of quantitative benefits multiplied by quantitative probabilities. Enterprise only pretends to itself to be mainly actuated by the statements in its own prospectus….Only a little more than an expedition to the South Pole, is it based on an exact calculation of benefits to come. Thus if the animal spirits are dimmed and the spontaneous optimism falters, leaving us to depend on nothing but a mathematical expectation, enterprise will fade and die;—though fears of loss may have a basis no more reasonable than hope of profit had before.”
Markets, Keynes concluded, were social, not mathematical, phenomena. Their study—economics—was not a hard science bound by iron laws, like physics, but a flexible field of custom, rule of thumb, and adjustment, like politics. Market signals—the price of a good or the interest rate on a security—were not a reliable guide to consumer preferences or corporate risks in the real world. At best, they were approximations, always subject to change based on new attitudes about an uncertain future.
Principia Ethica was a sophisticated attack on the moral and political philosophy that had dominated English thought since the late eighteenth century—a doctrine that went by the name “utilitarianism.” Developed by Jeremy Bentham and John Stuart Mill, utilitarianism declared that pleasure was the basis of all morality. A good or right action would produce pleasure. The more pleasure a good deed produced for the more people, the more righteous it was. And so the aim of all government was to produce more pleasure. The best society was the happiest society.
Moore and the Apostles hoped to overturn utilitarianism without reverting to the moral authority of the Church, which was quickly falling out of fashion in English culture. Things were not good because they produced pleasure, Moore argued. They were good because they were good. Pleasure itself could be either good or bad. People enjoyed all kinds of terrible things, and the pleasure they derived from them was not good but perverse. A good horse, a good piece of music, and a good person, meanwhile, all had something ineffable but vitally important in common: they were all good. But you could not find this goodness under a microscope. It could not be measured or derived from some set of facts about the natural world; it was a fundamental property, “simple, indefinable, unanalysable,”18 that could only be intuited directly by human reason. There were objective facts about value just as there were facts about colors; it wasn’t a matter of opinion whether the sky was blue or Goethe was a great poet. But good things could be understood solely in their “organic unity”; they could not be intellectually broken up into smaller components.
Utilitarianism and classical economics had developed alongside each other in English-language thought and shared important conceptual foundations. Both were concerned with efficiency. Economists following Adam Smith focused on the efficiency of agricultural and industrial production; utilitarian philosophers mused about the efficient production of pleasure. Both utilitarianism and the economics discipline were oriented around simple mathematical conceptual schemes: more was better and getting more with less better still. But after reading Principia Ethica, Keynes rejected the idea that efficiency could be the central organizing principle of a good society. No simple equation could approximate the best way to live.
He supported modest expansions of British social welfare programs, but his speeches to the Union from 1903 reflect a preoccupation with the Church—which he considered a source of sexual and intellectual tyranny—and unfettered trade. “I hate all priests and protectionists,” he declared in December 1903. “Free trade and free thought! Down with pontiffs and tariffs. Down with those who declare we are dumped and damned. Away with all schemes of redemption or retaliation!”
Tariffs projected a “Spirit of Nationalism,” which was “one of the most considerable hindrances to the progress of Civilisation”—“a feeling that anyone else’s prosperity is your damage, a feeling of jealousy, of hatred.”
Under traditional economic theory, markets were supposed to clear these problems by themselves. Prices would rise and fall according to supply and demand, encouraging goods to flow to where they were most needed. A country that produced too much iron could trade it to a country that produced too much wheat and vice versa. Keynes didn’t dispute the idea in principle, but he and other Allied policy makers recognized that battalions could run out of ammunition and cities could starve while everyone waited for markets to adjust. Free markets were a luxury that a nation at war could not afford.
Economists had long been aware that inflation was a common problem during wartime. When cash-strapped governments printed money to pay their bills, prices rose, reflecting, according to the theory, the higher quantity of money in circulation. In a nationally self-sufficient economy like that of Germany, Keynes argued, inflation functioned as “a concealed tax.” Wages couldn’t increase evenly with the prices of goods, because the German government had frozen workers’ pay rates for the duration of the war. So although the German people were taking home the same paychecks they had received in 1913, those paychecks didn’t have the same purchasing power they had once carried. Printing notes gave governments more money to spend on the war as it reduced the standard of living for the citizenry—transferring wealth from the public to the government, just as taxation might have done. That system might be attacked on grounds of “social justice”—why, after all, were “the working classes” being required to pay for the war instead of the very rich?—but there was no risk in Germany that inflation would lead to a runaway disaster during the war. When the German government stopped printing extra currency to pay its military bills, the price increases would stop.
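The “concealed tax” mechanism can be put in numbers — a stylized sketch with illustrative figures of my own, not a calculation Keynes actually performed: with nominal wages frozen, a rise in the price level shrinks the purchasing power of every paycheck, transferring the difference to the note-printing government as surely as an explicit levy would.

```python
# Stylized "inflation as a concealed tax" arithmetic.
# All figures are illustrative, chosen for this sketch.
wage = 100.0             # frozen nominal weekly wage, in marks
prices_1913 = 1.0        # prewar price level (index)
prices_wartime = 2.0     # price level after wartime note printing

real_wage_before = wage / prices_1913    # 100.0 units of goods
real_wage_after = wage / prices_wartime  # 50.0 units of goods

# The lost purchasing power is the "tax" the wage earner pays
# without any statute ever being passed.
concealed_tax = real_wage_before - real_wage_after
print(concealed_tax)  # prints 50.0
```

The sketch also captures Keynes’ closing point: once the presses stop and `prices_wartime` stops rising, the concealed tax stops growing.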
But inflation would function much differently in the British economy. Because Britain relied so heavily on international trade, Keynes argued, inflation could serve only as a very temporary expedient. When British prices increased, it affected not only household budgets but also the prices the British paid for imports. At the same time, the prices British producers received for their exports did not increase; the amount they could fetch in foreign markets depended on the prevailing market prices abroad, not on the going rate at home. As a result, inflation had the effect of exacerbating the British trade deficit—the British were paying more to consume goods from abroad than they received from the sale of exports. And since foreign suppliers wanted to be paid in either foreign currency or gold, the British could inflate themselves into bankruptcy. A sustained trade deficit would deplete Great Britain’s gold reserves.
This was an important theoretical point in Keynes’ intellectual development. Money wasn’t just a passive force that people used to keep track of the value of goods and services; it was an active power in its own right. A problem in the monetary system could create unexpected trouble in the realm of what Keynes called “real resources”—the equipment, commercial products, and savings of a community.
Keynes and McKenna believed that the strongest weapon in the British arsenal was its economy. Great Britain was the richest nation in the conflict, providing money to Russia, France, Italy, and everyone else on the Allied side. The ultimate source of wealth in this war chest was the country’s formidable industrial sector, fueled by the resources of its vast global empire and its dominating navy. If Britain was to support its own soldiers, much less the entire Allied project, it would also need men on the home front running machines, harvesting fields, and performing essential economic work. A surge of troops would deplete essential manpower at home. It was a matter of both production and payment. The British needed men in factories to manufacture the weapons used on the front lines. But they also needed men to produce exports that could be sold abroad, particularly in the United States. When the British bought supplies from America, their U.S. trading partners had to be paid in dollars. And the most reliable way for the British to get dollars was to sell products to Americans. The government could sell off imperial assets for dollars—stocks, bonds, royal treasure—but a fire sale during wartime would probably yield disappointing prices and would permanently reduce the wealth of the empire.
And as with the financial crisis of August 1914, Keynes believed the question of money had become a question of power. Much of the British Empire’s economic might over the previous half century had been derived from its status as a creditor nation. When other countries needed funds, they turned to London, which gave the British a unique ability to influence how that money was spent and whom it would benefit. But the war had forced Great Britain to look abroad for its own financing needs, and Keynes recognized that as the empire became increasingly dependent on foreign help, it ceded geopolitical influence.
Even before hostilities had formally ended, the Treasury had asked Keynes to calculate precisely the amount Germany could afford to pay. Keynes identified a maximum of £2 billion—half paid up front, the other half spread out over the next three decades.21 The actual costs of the war, of course, had been vastly higher, but a more exacting indemnity would prove counterproductive. To generate the wealth needed to make reparation payments, Germany would have to boost its exports, taking international market share from British producers and thus ultimately undercutting British wealth. If the Allies tried instead to seize German gold, German mines, or German factories, they would only undermine Germany’s ability to generate future wealth that could be devoted to tribute. “If Germany is to be milked,” Keynes wrote in a report for the British delegation, “she must not first of all be ruined.”
Like everyone else at Paris, Wilson blamed Germany for the war. Unlike many, however, he did not blame the German people. Indeed, to Wilson’s mind, German citizens had been victims of the kaiser’s autocratic excess before the peoples of Belgium, France, Russia, and Great Britain had been, as he told Congress in April 1917: “We have no quarrel with the German people. We have no feeling towards them but one of sympathy and friendship. It was not upon their impulse that their Government acted in entering this war. It was not with their previous knowledge or approval. It was a war determined upon as wars used to be determined upon in the old, unhappy days when peoples were nowhere consulted by their rulers and wars were provoked and waged in the interest of dynasties or of little groups of ambitious men who were accustomed to use their fellow men as pawns and tools.”27 In short, he believed that the war had been caused by autocracy—an idea bound up inexorably with empire, since conquered peoples were denied their own government. Its solution was democracy—and by implication, the end of imperialism. “A steadfast concert for peace can never be maintained except by a partnership of democratic nations. No autocratic government could be trusted to keep faith within it or observe its covenants….Only free peoples can hold their purpose and their honour steady to a common end and prefer the interests of mankind to any narrow interest of their own.”
The war debts of Allies and enemies alike were so massive that they would be stirring up social turmoil for years to come. Governments would have to curb services to their citizens in order to meet foreign interest payments. Taxes would need to be raised in order to ship money overseas. The notion that this was a fair return for America’s help in the war might resonate with financiers and government officials, but it would make little sense to citizens. A farmer who had lost his son and half his acreage would not feel a rush of gratitude at the prospect of diverting a huge portion of his labor to the enrichment of American bankers.
His battle over reparations and inter-ally debt had made him a lifelong enemy of austerity—the doctrine that governments can best heal troubled economies by slashing government spending and paying down debt. When a government was burdened with too much debt, Keynes had come to believe, it was generally better to swear off the debt than to pay it off by burdening the public with a lower standard of living.
The product of his labors, Economic Consequences, still stands today as both a landmark of political theory and one of the most emotionally compelling works of economic literature ever written. Like all of Keynes’ best work, it is not fundamentally a work of economics at all, but a treatment of the great political problem of the twentieth century—a furious tirade against autocracy, war, and weak politicians. It is at once a howl of rage directed against the most powerful men in the world and an ominous prophecy of the violence that would again sweep the continent in the years to come. The book opens with a sunny portrait of the global financial order that persisted between the close of the Franco-Prussian War and the summer of 1914, describing the free international trade system as an engine of prosperity unparalleled in human history. Economic inequality had been the essential ingredient of that social progress, creating large personal fortunes that the rich could invest in new enterprises that addressed society’s needs and advanced the progress of “civilisation.” Though the mechanisms of growth were inherently unfair, with capitalists at the top reaping far more economic fruit than workers at the bottom, the gains improved the lives of all who participated: better food, nicer fineries, all the extravagances of La Belle Époque that could be purchased at ever-declining prices by an ever-expanding middle class. “Society was working not for the small pleasures of to-day but for the future security and improvement of the race,—in fact for ‘progress.’”
The steady piling up of material riches over the decades had created the impression of a strong and resilient system. But Keynes believed the arrangement was a fragile historical anomaly. It depended on “a double bluff or deception”: the system would only work if workers believed in it, and workers would not believe in it unless it worked. Break the collective faith in a better tomorrow, and workers would walk off the job, riot in protest, or worse.
One of the great rhetorical tricks of Economic Consequences is the ease with which Keynes moves from images of “terrible exhaustion” in Austria and Germany to the prospect of continent-wide economic crisis. The “oppressive interest payments to England and America” still on the books would soon reduce France, Italy, and Belgium to the same condition as Germany. The economic fate of Europe, Keynes insisted, was already indivisible, and that economic union would write its political future. Governments burdened with heavy debts, Keynes predicted, would resort to inflation to ease the burden, just as they had during the war—a situation that would quickly prove politically destabilizing. Inflation had unequal effects. People with substantial savings—a small minority of the population in 1919—were hit hardest, as the value of their nest egg was eroded; it was a “hidden tax” on a particular economic demographic. Such a morally arbitrary “rearrangement of riches” would fuel anger at the “capitalist classes.” “Lenin was certainly right,” Keynes wrote. “There is no subtler, no surer means of overturning the existing basis of society than to debauch the currency.”18 (Though this has become one of the Marxist leader’s most popular aphorisms over the years, the prose is pure Keynes; he was paraphrasing an interview Lenin had given to a New York newspaper.)
He agreed with Burke that governments were justified not by inalienable individual rights but by their results—their ability to achieve social stability and public happiness—and he shared with his predecessor a profound fear of social upheaval. But though he agreed with Burke’s aims and his mode of analysis, he rejected many of his methods. Burke, like the population theorist Thomas Malthus, had seen economic scarcity as an inescapable fact of human life. There just wasn’t enough wealth to go around, and if humanity was to realize any abiding cultural achievements, mitigating inequality could not be a function of government. Democracy, to Burke, would lead to collective poverty and the end of all fine living. A monarchy that protected the rights of private property was the only way to secure a decent society.
The warnings Keynes issued in the pages of The Economic Consequences of the Peace would reverberate through European history as militant demagogues rose across Europe, exploiting inequality, austerity budgeting, inflation, and uncertainty to take power by preaching vengeance and hate. Benito Mussolini would march on Rome in three years’ time. In Germany, hyperinflation and Adolf Hitler’s Beer Hall Putsch would follow soon after, the rise of Josef Stalin a short time after that. Keynes’ slim masterpiece remains essential today not because of its statistical prowess or its analytical detail but because the mass psychology he presented would prove so integral to the great tragedies of the twentieth century. And the explanatory power of his narrative can be applied with only modest revisions to the great problems of the twenty-first century. Substitute the financial crisis of 2008 for the Great War, swap European austerity budgets and the American foreclosure crisis for war debts and reparations, and the result is a modern recipe for militant far-right nationalism.
“The moderate people can do good and perhaps the extremist can also do good; but it is no use for a member of the latter class to pretend that he belongs to the former,” Keynes wrote to Arthur Salter, who had been secretary at the Supreme Economic Council in Paris. “Besides, it is much a hopeless business trying to calculate the psychological effect of one’s actions; and I have come to feel that the best thing in all the circumstances is to speak the truth as bluntly as one can.”
Keynes argued that there is a difference between probabilities and statistical frequencies. To say that some state of affairs is probable, according to Keynes, is not to simply state that mathematically, it will occur a certain percentage of times in a simulation (that is, if fifty of the one hundred coins in a bag are quarters, I have a 50 percent probability of drawing a quarter every time I reach in). Mathematical data might be useful in a person’s assessment of probability, but it cannot be probability itself.
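The distinction can be made concrete with a toy simulation — an illustration of the idea, not anything drawn from Keynes’ own Treatise on Probability: repeated draws from the bag described above settle into a stable statistical frequency near 50 percent, but on Keynes’ account that frequency is evidence bearing on a rational degree of belief, not the probability itself.

```python
import random

def quarter_frequency(quarters=50, total=100, draws=100_000, seed=1):
    """Simulate draws (with replacement) from a bag of coins and
    return the observed frequency of quarters."""
    rng = random.Random(seed)
    bag = ["quarter"] * quarters + ["other"] * (total - quarters)
    hits = sum(1 for _ in range(draws) if rng.choice(bag) == "quarter")
    return hits / draws

# The frequency settles near 0.5 -- a fact about the simulation.
# The *probability*, for Keynes, is the degree of belief this
# evidence rationally supports; the frequency informs that
# judgment but does not constitute it.
print(round(quarter_frequency(), 2))
```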
That doctrine—that managing the overall supply of money was the best way for governments to achieve economic growth and stability—became known as monetarism. It was a radical rethinking of the way central banks should operate.63 The Bank of England typically managed its gold reserves with an eye to fluctuations in international trade, ensuring that Great Britain didn’t run out of gold due to too many imports or a shortage of exports. If Britain was running a trade deficit, then money—gold—would be flowing out of the country, because Britain was effectively purchasing more goods from abroad than it was selling to foreigners. In that situation, the Bank would raise interest rates, effectively lowering the price of British goods on the international market until trade levels were balanced. The idea was to have the real terms of trade determine the price level. Keynes was suggesting the opposite, regulating prices to ensure stability—a strategy that would have implications for the course of trade. It was a step away from the laissez-faire doctrine that public officials should not meddle in economic affairs. Governments would find themselves forced to choose between maintaining a stable exchange rate and a stable price level. When the choice came, Keynes argued, there should be no hesitation: Keep prices stable, and adjust exchange rates. It might be true that “over the long run,” rashes of inflation and deflation would burn themselves out. “But this long run is a misleading guide to current affairs,” Keynes observed. “In the long run, we are all dead.”
There is an unresolved tension running throughout Keynes’ work between his desire to democratize the trappings of ruling-class life and his own reverence for that same ruling class. “The great trouble with Keynes was that he was an idealist,” his colleague and collaborator Joan Robinson once wrote.7 His faith that “an intelligent theory would prevail over a stupid one”8 was hard to square with a world in which “vested interests” often rejected reforms that carried broad benefits for all, preferring even a dysfunctional status quo as long as it maintained their place at the top of the social pecking order.
By the time he presented The End of Laissez-Faire to a lecture audience at Oxford in November 1924, British unemployment had been in double digits for nearly five consecutive years. Instead of creating equality and harmony, laissez-faire had generated vast inequality and social unrest, so much of each that all the splendid things liberal individualism was supposed to foster—fresh thinking, great art, fine wine, exciting conversation—were now threatened by social instability. It was time to move on.
“One of the most interesting and unnoticed developments of recent decades,” he wrote, “has been the tendency of big enterprise to socialise itself”19 by responding to public need rather than private profit.
“The political problem of mankind is to combine three things: economic efficiency, social justice, and individual liberty,” Keynes wrote.
Lower wages were in a very real sense the point of deflationary policy; the idea was to bring down the price of everything, including labor. Under classical economic theory, this cost cutting did not have to result in mass layoffs. “Unemployment is a problem of wages, not of work,” Keynes’ Austrian contemporary Ludwig von Mises wrote in 1927.50 As high interest rates imposed higher costs of credit on employers—or reduced demand for their goods—companies could reduce labor costs by cutting pay all around. Lower wages wouldn’t really hurt workers, the thinking went, because with the price of goods falling, workers wouldn’t need as much money as they had before. Based on this reasoning, Conservatives, bankers, and even Liberal politicians blamed the British jobs crisis on trade unions. People had to be laid off, these critics insisted, because companies had signed collective bargaining contracts that required them to keep wages artificially high. Since wages couldn’t be lowered, firms had no other choice but to fire people to bring down their costs. Firms that couldn’t fire people had to close. Keynes lampooned what he called the “orthodox” explanation: “Blame it on the working man for working too little and getting too much.”51 All of that might make sense on paper, Keynes argued, but it was totally divorced from what happened in the real world. “Deflation does not reduce wages ‘automatically,’ ” he observed in the Evening Standard. “It reduces them by causing unemployment.”52 Keynes had little enthusiasm for unions, but by 1925 he believed that steep deflation could never be accomplished without mass layoffs unless the government became deeply involved in managing the affairs of the business world. It was not only collective bargaining that stood in the way of uniform wage reductions; it was human psychology. 
No sane worker negotiating with his boss would accept a pay cut in the name of broader social welfare without some guarantee that other workers would take the same deal. He could easily find himself shortchanged for nothing. “Those who are attacked first are faced with a depression of their standard of life, because the cost of living will not fall until all the others have been successfully attacked too,” Keynes wrote. “Nor can the classes, which are first subjected to a reduction of money wages, be guaranteed that this will be compensated later by a corresponding fall in the cost of living, and will not accrue to the benefit of some other class. Therefore they are bound to resist so long as they can; and it must be a war, until those who are economically weakest are beaten to the ground.”53 Contrary to the conventional wisdom, then, it was not the departure from gold that was causing Great Britain’s economic malaise; it was the country’s enthusiasm to return to gold at the exchange rates that had prevailed before the war.
The collective faith of the citizenry in the ability of the nation’s economic system to deliver steady, predictable gains had collapsed. Millions of British workers had joined together in an attempt to shut down the entirety of the nation’s commercial life. People—most people—had actively harmed their own society in order to make a political point. The unrest had extended well beyond the ranks of the unemployed; only people who had jobs could go on strike, after all. There was clearly no sense among the public that their welfare rested on secure foundations. It was as if the “double bluff” of the prewar years had been reversed, creating a downward spiral of doubt and decay. People had once accepted an unequal system because it had improved their lives; because they had embraced it, the system had been able to generate prosperity. Now everyone from the coal miner to the investment house magnate had come to believe in a bleak, limited future (whatever the bankers said about the virtues of the gold standard, the paucity of actual investment in the economy was a more telling measure of their true feelings). That collective doom and gloom could not be broken by individual acts of courage.
Money, he argued, was an inherently political tool. It was the state that determined what substance—gold, paper, whatever—actually counted as money—what “thing” people and the government would accept as valid payment. The state thus created money and had always regulated its value. “This right is claimed by all modern states and has been so claimed for some four thousand years at least.”45 The significance of gold to economic history was both relatively recent—it had only really mattered in the past few decades—and arbitrary. The true source of monetary stability was the public legitimacy of the political authority that happened to choose gold as its preferred medium of exchange. Money had no meaning absent political authority.
The Treatise, then, was an all-out assault on the intellectual foundations of laissez-faire. There was no such thing as a free market devoid of government interference. The very idea of capitalism required active state economic management—the regulation of money and debt.
Ideally, according to Keynes, the savings of the people would equal the investments of the business world. But things could go haywire; there was no process by which savings were automatically converted into investment. The impetus to save and the impetus to build were different motivations. “It has been usual to think of the accumulated wealth of the world as having been painfully built up out of that voluntary abstinence of individuals from the immediate enjoyment of consumption which we call thrift,” he wrote in the Treatise. “But it should be obvious that mere abstinence is not enough by itself to build cities….It is enterprise which builds and improves the world’s possessions….If enterprise is afoot, wealth accumulates whatever may be happening to thrift.”54 The role of the banking system was to ensure that the savings of society were perfectly tuned to society’s capacity for investment. If the interest rates lenders offered were set correctly, savings would equal investment and society would operate happily at full employment. But if total investment exceeded the total amount that a society wanted to save, the result would be inflation. And if the reverse occurred—if a society saved more than it invested—the result would be a “slump.”
Keynes and Marx also shared the unfortunate fate of being right about the revolutions to come and wrong about their social implications. As Marx predicted, Communists overthrew capitalists all over the world in the twentieth century. Keynes, for his part, got his math about right. If anything, he was overly pessimistic about the economic potential about to be unleashed. By 2008, the Nobel laureate economist Joseph Stiglitz has noted, global economic output reached a level sufficient to raise every man, woman, and child on the face of the earth above the U.S. poverty line—a very great improvement for the domestic poor and an astounding achievement for the global poor.81 According to a recent analysis by Harvard University economist Benjamin M. Friedman, we are, moreover, on track for an eightfold increase in the standard of living in the United States by 2029—if standard of living is taken to mean the total economic output per person.82 “The numbers hang together,” observed another Nobel laureate, Robert Solow83—even though the world did not, in fact, escape several catastrophic wars in the decades since Keynes’ essay. But the age of farmer-critic-fishermen is not yet upon us. We do not live in a utopia where all people work fifteen hours a week, reserving the rest of their time for painting, literature, and walks in the park. What went wrong? In his essay, Keynes distinguished between human needs essential to survival and semi-needs whose “satisfaction lifts us above, makes us feel superior to, our fellows. Needs of the second class, those which satisfy the desire for superiority, may indeed be insatiable.”84 This effort to keep up with the Joneses has no doubt played a role in lengthening the workweek. But the primary culprit is simple inequality. The tremendous expansion of output and productivity over the past ninety years has been harvested for the most part by a very small section of society. 
For everyone else, economic prospects are roughly where they were in the mid-1920s (although a decline in the overall workweek from 1930 to 1970 suggests very clearly that people are not really eager to work the hours they do). As any working family can attest, people work because they have to.
Both tariffs and monetary adjustments were efforts to alter the flow of trade, thus expanding domestic production and employment. One functioned by changing the price of goods, the other by changing the price of money, but the effect was the same.
Free trade, Ricardo had explained, allowed countries to specialize in what they did best, enabling the world economy to produce more than if each individual country tried to supply itself with homegrown goods. But technology had eliminated many of the advantages of national specialization. International trade was dominated by heavy manufacturing products that could now be made for the same price just about anywhere.
“The class-war faction believe that it is well known what ought to be done; that we are divided between the poor and good who would like to do it, and the rich and wicked who, for reasons of self-interest, wish to prevent it; that the wicked have power; and that a revolution is required to depose them from their seats. I view the matter otherwise. I think it extremely difficult to know what ought to be done, and extremely difficult for those who know (or think they know) to persuade others that they are right—though theories, which are difficult and obscure when they are new and undigested, grow easier by the mere passage of time.” Compared to the persuasive power of good ideas, he insisted, “the power of self-interested capitalists to stand in their way is negligible.”
FDR had a remarkable capacity to show different faces of his political persona to different audiences when it suited him, and his rhetoric didn’t always match his policy agenda. But over the coming years, he would show that he meant what he said on his first day in office. The abandonment of laissez-faire in banking didn’t happen all at once, but it proved to be extremely thorough. Roosevelt would leave the gold standard, socialize the deposit system, nationalize the Federal Reserve System, synchronize monetary policy with fiscal policy by placing the Fed under Treasury oversight, and force the nation’s biggest banks to break up into smaller institutions with narrower lines of business. In sum, he broke the political back of the American financial sector and began using it as an instrument of economic recovery, directed by the federal government. It would prove a triumph of Keynesian policy more comprehensive than Keynes had ever imagined possible in the United States—a fundamental change in the relationship among the state, society, and money.
During the Great Depression, more than half of the country’s population still lived on farms or in the small towns that served as local hubs for agricultural trade (today, about 80 percent of Americans live in cities). And a staggering one-half of all farm loans were in default when FDR came into office.38 The crushing deflation of the Depression had done what it always did to farmers: Though the prices for their produce fell, the balances on the loans farmers had taken out to seed and harvest their fields remained high. When farmers were forced to sell their crops for less, their debts became overwhelming. FDR established an array of programs to get farmers more attractive loans. But lower rates on mortgages could help only at the margins if the president couldn’t stop the relentless decline of commodity prices. In the summer of 1933, he dispatched his economic adviser, George Warren, to Europe to survey monetary strategies abroad. Warren returned with a grim political assessment. “Hitler is a product of deflation,” he wrote to Roosevelt. “It seems to be a choice between a rise in prices or a rise in dictators.”39 Events at home, meanwhile, had already convinced Roosevelt of the need to take drastic measures. Three weeks after the president had ordered citizens to turn over their gold coins, Judge Charles C. Bradley had taken up a slate of foreclosure cases in Le Mars, Iowa. A total of fifteen farms were at risk of being repossessed when 250 angry farmers descended on Bradley’s courtroom and demanded that he impose a countywide moratorium on foreclosures. The agitators stormed the bench, threw a rope around Bradley’s neck, and dragged him to a country crossroads, where they “nearly lynched him.”40 Roosevelt had prevented a financial collapse on inauguration day, but rural America remained on the verge of revolution. With half of the country living off the land, somewhat higher grocery bills resulting from higher crop prices would have been worth the sacrifice.
But Roosevelt decided to bring crop prices up primarily by bringing the value of the dollar down. If it worked, the price of everything, including wages, would effectively go up, easing the effect of higher food costs on household budgets. “It is simply inevitable that we must inflate,” FDR wrote to Woodrow Wilson’s old aide, Colonel Edward M. House. “Though my banker friends may be horrified.”
The government, he argued, should act directly to expand economic “output” and consumer “purchasing power” through deficit-financed expansion. Whatever else FDR might do in office, the fundamental imperative was to spend, spend, spend: “I lay overwhelming emphasis on the increase of national purchasing power resulting from governmental expenditure which is financed by loans and is not merely a transfer through taxation, from existing incomes. Nothing else counts in comparison with this.”
Cheap credit and an expanded money supply were not enough. The government would have to actually spend that new money it created in order to get the economy moving again. Relying on monetary policy alone, Keynes argued, was “like trying to get fat by buying a larger belt. In the United States today your belt is plenty big enough for your belly. It is a most misleading thing to stress the quantity of money, which is only a limiting factor, rather than the volume of expenditure, which is the operative factor.”
This remains the popular understanding of Keynesian economics to this day: in a slump, governments should borrow money and spend it on useful projects to kick-start a recovery. When the government spends this money, it goes into the pockets of its citizens, who in turn can spend it on other wants and needs, expanding the total size of the economy and ensuring a prosperous recovery rather than a downward spiral in which retrenched spending feeds unemployment and further reductions in spending.
Under the competitive market paradigm, economists had been able to argue that workers were paid a wage equal to the true value they added to the business. With competition whittling away waste and excess, workers would end up receiving what economists called the “marginal productivity” of their work. Each worker would be paid an amount exactly equal to how much more productive he or she made the operation. That meant, particularly for the Austrian economists Hayek and Mises, that complaints about low wages were really complaints about worker productivity. If workers wanted better pay, the only sustainable way to get it was by working harder. But that argument would fall apart if it could be shown that labor markets were not perfectly competitive—if, instead, they exhibited some of the features of monopoly. If the only jobs in town were at the coal mine, then the mine owners wouldn’t have to compete with other employers by offering better pay. When Robinson showed that markets were almost always at least somewhat anticompetitive, she believed she had “hacked through” a “prop to laisser-faire ideology.”11 Capitalists, according to Robinson, were chronically underpaying their staff.
The economic system was understood to be apolitical and self-correcting, akin to population dynamics in the natural world. Everything—wages, commodity prices, interest rates, profits—responded automatically to any unexpected change in other areas, quickly bringing the system to an equilibrium in which the maximum amount of goods was being produced and consumed, so that social needs were met to the greatest extent possible.
The material abundance of the Gilded Age had sown doubts in Keynes about the supposed scarcity of resources, but it was the ravages of the Depression that made him certain the old order had it wrong. Clearly the trouble was not a shortage of production. Crops were rotting in the fields while children went hungry in the streets. Producers were not cutting back because they couldn’t afford to meet the high wage demands of workers; laborers were roaming from town to town, desperate for any work at all. As he wrote in the opening chapter, “It is not very plausible to assert that unemployment in the United States in 1932 was due either to labour obstinately refusing to accept a reduction of money-wages or to its obstinately demanding a real wage beyond what the productivity of the economic machine was capable of furnishing.”32 For Keynes, the empirical fact of the Depression proved that the classical theory was wrong. The economy was not self-correcting. Even if politicians were messing things up with bad policy, the system should at some point between 1919 and 1936 have been able to sort itself out. A bad level for gold in 1925 or a wrong-headed tariff in 1931 should have been no different than a bad harvest or a fire, something quickly remedied by the automated magic of supply, demand, and the price mechanism.
Say’s Law meant that there could not be unspent income in a society. Because the supply of new products created its own demand for them, increased production automatically brought the economic system of payment and consumption into equilibrium at a higher level of activity. When the producer of a good accepted its purchase price and passed that income on to workers in the form of wages (enjoying some himself in the form of profits), he created a new source of demand in society exactly equal to the value of what he had produced. That money would be spent on other goods, ensuring that there could be no deficiency of total demand in the economy. Even the money that people set aside as savings was just another form of spending: spending on the future. Say acknowledged that overproduction might occasionally arise in particular industries but insisted that such problems were “only a passing evil” that couldn’t apply to the economy as a whole for any meaningful period of time.
The possibility of excessive savings carried tremendous consequences. Capitalism would be in a state of overproduction. The supply of goods and services would exceed the demand for those goods and services because money—savings—was not being spent. Producers would respond by cutting production and laying people off. That would bring supply and demand into equilibrium, but it would be a bad equilibrium in which nobody made the investments necessary to hire people and expand production. Unemployment could creep in as a permanent part of a low-functioning economy.
Keynes recognized that money was not only a mechanism for transmitting information about the relative values of different goods; it was also a store of value, which enabled people to make and express judgments about their own material security through time.
“Consumption,” he wrote, “is the sole end and object of all economic activity.”38 But money enables us to put off consumption to another day, and another, and another, indefinitely, without losing our ability to consume at some point. We may substitute holding money for realizing actual material satisfaction not out of vice or confusion but out of simple fear for our future prospects. But when we refuse to consume, we deny others their income. This not only forces society to live with less—it risks making our fear into a contagion, realized in the form of decreased production, layoffs, and suffering amid surplus.
People didn’t actually bet on the value of different enterprises; they bet on the judgments of other speculators. As Keynes put it in one of the few accessible passages from The General Theory: “Professional investment may be likened to those newspaper competitions in which the competitors have to pick out the six prettiest faces from a hundred photographs, the prize being awarded to the competitor whose choice most nearly corresponds to the average preferences of the competitors as a whole; so that each competitor has to pick, not those faces which he himself finds prettiest, but those which he thinks likeliest to catch the fancy of the other competitors, all of whom are looking at the problem from the same point of view. It is not a case of choosing those which, to the best of one’s judgment, are really the prettiest, nor even those which average opinion genuinely thinks the prettiest.”39 This didn’t just mean that financial markets were prone to panic and instability, as excitement and emotion overtook cool reasoning; it meant there was no reason to believe that markets ever accurately gauged the value of various investments. Wall Street and the City were perfectly capable of turning extraordinary profits for themselves without doing much for the greater good—indeed, they could do active social harm without intending to. “There is no clear evidence from experience that the investment policy which is socially advantageous coincides with that which is most profitable.”
The chief economic question facing each society, Keynes believed, was no longer what it could afford but how its members would like to live. A titan of industry could not shrug off poverty as an inevitable element of every society. Democracies could choose different paths. Keynes was no longer telling a story about adjusting a machine that generally tended toward a functional, prosperous equilibrium. The General Theory did not prove that governments may need to intervene in the operations of a free market from time to time to correct excesses or imbalances. It showed, instead, that the very idea of a free market independent of government structure and supervision was incoherent. For markets to function, governments had to provide demand. Eras of laissez-faire prosperity like the British golden age before the war were very rare—a “special case” resulting from unique psychological and material circumstances that were impossible to replicate with any regularity through speculative financial markets, in which “the capital development of a country becomes a by-product of the activities of a casino.”
Keynes had, he believed, destroyed “one of the chief social justifications of great inequality of wealth.”48 In his youth, he had understood saving as a virtue that benefited society at large. The fortunes of the rich, accumulated over generations, created a source of investment capital that could be deployed for the benefit of all. With The General Theory, Keynes demonstrated that capital growth was not the result of virtuous saving by the affluent; it was a by-product of the income growth of the masses. Creating large amounts of savings at the top of society did not bring about higher levels of investment. The causal arrow pointed the other way: Creating large amounts of investment caused higher levels of savings. And so “the removal of very great disparities of wealth and income” would improve social harmony and economic functionality.
Keynes argued that the utilitarian moral philosophers of the eighteenth and nineteenth centuries had popularized “a perverted theory of the state” guided by “business arithmetic” in which the final judgment on the social value of any activity was to be found in whether it turned a profit.51 But the market, he argued, was not a reliable statement of society’s preferences, and it could not invisibly guide a polity to salvation. The market simply failed to deliver a host of real social goods that the public enjoyed, particularly art. The things that make life meaningful—beauty, community, a vibrant and multifaceted culture—all required collective, coordinated action. “Our experience has demonstrated plainly that these things cannot be successfully carried on if they depend on the motive of profit and financial success. The exploitation and incidental destruction of the divine gift of the public entertainer by prostituting it to the purposes of financial gain is one of the worser crimes of present-day capitalism.”
Do not discount the power of ideas to triumph over the economic interests of the ruling class. The vested interests of the capitalists, he argued, did not reign sovereign over the great gears of human history; the beliefs and ideas of the people did. They could choose to shrug off the suffering and dysfunction of the past two decades without resorting to violent revolutionary upheaval. All they needed was to be convinced by an idea.

Is the fulfillment of these ideas a visionary hope?…The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influences, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back. I am sure that the power of vested interests is vastly exaggerated compared with the gradual encroachment of ideas. Not, indeed, immediately, but after a certain interval; for in the field of economic and political philosophy there are not many who are influenced by new theories after they are twenty-five or thirty years of age, so that the ideas which civil servants and politicians and even agitators apply to current events are not likely to be the newest. But, soon or late, it is ideas, not vested interests, which are dangerous for good or evil.
“When F.D.R. came to office in March 1933, so desperate was the economic position that for the business and financial community he was an angel of rescue,” he later wrote. “By 1934, things were enough better so that his efforts on behalf of farmers and the unemployed, his tendency to make light of economic orthodoxy, could be disliked and even feared. Roosevelt had become ‘that man in the White House’ and ‘the traitor to his class.’ ”34 The ill will between Roosevelt and the rich was a matter of power, not results. No peacetime U.S. president in the years since has matched the economic growth achieved during the first three full years of FDR’s administration. Adjusted for inflation, the economy grew by a monumental 10.8 percent, 8.9 percent, and 12.9 percent during 1934, 1935, and 1936, respectively.35 Over the course of his first term, the unemployment rate plunged from over 20 percent to less than 10 percent, as the ranks of the unemployed were thinned by more than half, from roughly 11.5 million to 4.9 million (there were about 1.4 million unemployed prior to the stock market crash).36 Only once has a U.S. wartime economy matched Roosevelt’s initial economic miracle—a few years later, during the mobilization for World War II. Though FDR had to wrestle with Congress, the Supreme Court, and even himself over spending, taxes, regulations, budget deficits, and everything else that made up the New Deal, he was in fact spending a lot of money, nearly doubling the expenditures of the federal government from $4.6 billion to $8.2 billion as the deficit surged from $2.6 billion to $4.3 billion—though he offset some of the deficit impact of his new programs by increasing taxes on the wealthy. Those figures were modest compared to what Keynes had advocated and indeed compared to what was to come. On his 1934 trip to the United States, Keynes had advocated annual deficits of $4.8 billion to members of the administration.
In 1936, federal outlays still accounted for less than one-tenth of the total U.S. economy. By the end of the war, government projects would total $92.7 billion a year and account for more than 40 percent of all U.S. economic activity (since the beginning of Ronald Reagan’s presidency, spending has fluctuated by a few percentage points around 20 percent of gross domestic product).37 All of this offended the policy sensibilities of the elite, who hated progressive taxation, deficits, and devaluation as much as the British banking establishment did. But there was more at stake than Wall Street’s bottom line. The New Deal did not, in fact, crimp legitimate business on Wall Street; Roosevelt just reorganized it. In 1935, with the United States off gold and onto Glass-Steagall, and with the SEC policing traders and the federal government incurring unheard-of deficits, the amount of securities offerings underwritten by investment banks expanded to four times the level of the previous year.38 With the economy growing rapidly, brokers and traders had more work to do. Everybody did. But the rich, as a group of Harvard economists observed, continued to “complain bitterly” of their tax burden, which they perceived as a violation of “divine right”—even though “the additions to their incomes, resulting from the government’s activities, are far greater in amount than the additional taxes they pay.”39 Jack Morgan, according to one chronicler of the family, viewed the New Deal “less as a set of economic reforms than as a direct, malicious assault on the social order.”40 Which, of course, it was. Morgan was only the most obvious, iconic embodiment of what was quickly becoming a hereditary American nobility. Close friends with King George V, adored by the king’s infant granddaughter who would one day become the second Queen Elizabeth, Jack enjoyed traditional aristocratic recreations, shooting pheasant when the affairs of his firm overtaxed his nerves. 
But whereas the landed European gentry of the nineteenth century had understood themselves as a chosen elect, Morgan and his elite countrymen believed they had won their place in society through business acumen and the sound stewardship of a grateful society. That was an incredible idea for a man who had been handed the most powerful post in American finance from his father, who in turn had inherited the banking house from his father before him. It was nevertheless sincere. Even the great scourge of Wall Street, Ferdinand Pecora, commended Morgan for his “deeply genuine” testimony before his Senate committee, in which Jack stated it was impossible for a “private banker” to “become too powerful,” because such status was attained “not from the possession of large means, but from the confidence of the people” and the “respect and esteem of the community.”41 This self-conception was fed by the energy both Jack and his father had devoted to philanthropy, paying hundreds of thousands of dollars a year in salaries for Episcopalian clergy and underwriting social services offered by the church. Jack even opened his father’s study and art collection to the public as a museum. That was standard social stewardship for the Carnegies, Mellons, and Fricks who dominated the U.S. economy. The New Deal dynamited the whole worldview. Not only had FDR shackled families like the Morgans with new taxes, regulations, auditors, and overlords, his system actually worked. It was not the great genius of financial patricians that made the economy grow at unheard-of rates; it was, as Keynes had argued, the purchasing power of the masses. It sent Morgan into paroxysms of fury. Even the mention of Teddy Roosevelt prompted him to scream “God damn all Roosevelts!”42 As his sense of self-worth and place in society collapsed, he retreated to the safety of his banking fief, discarding his former sense of noblesse oblige. 
“I just want you to know,” he shouted to Dawes Plan architect Owen Young, “that I don’t care a damn what happens to you or anybody else. I don’t care what happens to the country….All I care about is this business! If I could help it by going out of this country and establishing myself somewhere else I’d do it—I’d do anything.”43 “Regardless of party and regardless of region, today, with few exceptions,” wrote Time, “members of the so-called Upper Class frankly hate Franklin Roosevelt.”44 The president returned the favor. Subjected to relentless attacks from “the Wall Street bankers” throughout his first term, he denounced them as “economic royalists” in a fiery speech to the Democratic National Convention in 1936. “They had begun to consider the Government of the United States as a mere appendage to their own affairs,” he roared from the podium. “We know now that Government by organized money is just as dangerous as Government by organized mob. Never before in all our history have these forces been so united against one candidate as they stand today. They are unanimous in their hate for me—and I welcome their hatred!”45 There was at least as much political calculation in FDR’s posture as genuine outrage. His inner circle still included a few baffled but pragmatic bankers, typically from outsider firms or those allied with new industries. Sidney Weinberg, head of the then-minor investment bank Goldman Sachs, was an FDR confidant from the 1932 campaign until the president’s death.46 And FDR studiously courted advice from and sought avenues for agreement with Morgan partner Owen D. Young. A conservative Democrat, Young tried his best to cooperate, though in moments of weakness he wondered if a “totalitarian state” might not be better equipped than Roosevelt’s version of democracy to administer “economically desirable” “self-discipline”—particularly corporate tax cuts.47 But Roosevelt’s counterpunches against the elite had a powerful effect on public opinion. 
The financiers who denounced him were not going to vote Democrat, but attacks raining down on Roosevelt from such prestigious men could erode support among voters who were genuinely on the fence. Roosevelt called into question the legitimacy of his opponents and rallied his own supporters against them. Anti-FDR fervor was no longer a reasoned critique from learned men but merely the kind of thing you could expect from people who didn’t like democracy. “When Roosevelt countered, a whole generation joined on his side,” Galbraith observed. “If the privileged were against Roosevelt, we obviously must be against privilege. If Roosevelt found the moral posture of big business unconvincing or fraudulent, it must be so.”
Consciously or not, FDR was taking the ideas of The Economic Consequences of the Peace and expanding them into a foreign policy doctrine of breathtaking ambition:

In the future days, which we seek to make secure, we look forward to a world founded upon four essential human freedoms. The first is freedom of speech and expression—everywhere in the world. The second is freedom of every person to worship God in his own way—everywhere in the world. The third is freedom from want—which, translated into world terms, means economic understandings which will secure to every nation a healthy peacetime life for its inhabitants—everywhere in the world. The fourth is freedom from fear—which, translated into world terms, means a world-wide reduction of armaments to such a point and in such a thorough fashion that no nation will be in a position to commit an act of physical aggression against any neighbor—anywhere in the world. That is no vision of a distant millennium. It is a definite basis for a kind of world attainable in our own time and generation. That kind of world is the very antithesis of the so-called new order of tyranny which the dictators seek to create with the crash of a bomb.
Subsequent American war advocates have invariably cited the protection of human rights abroad as an overriding moral concern, often attesting to high ideals to divert attention from less benign motivations: claims on resources, imperial strategy, or simple belligerence. The pattern began in World War II. While FDR pitched the conflict to Americans as a fight for human rights “anywhere in the world,” the U.S. State Department—the chief organ of American diplomacy—repeatedly refused aid to Jewish refugees. On the West Coast, more than 100,000 Japanese Americans were forced from their homes and ordered to report to internment camps, a policy that originated in Roosevelt’s War Department.
But with the war orders coming in on a massive scale, Keynes insisted that it was only a matter of time until rapid price increases set in. Americans would need to have a battle plan ready when they did. First, he said, speculators anticipating an increase in production from the war would bid up the prices of key commodities—everything from cotton for uniforms to iron, coal, and cement. Next, as workers joined the military or filled positions in military manufacturing, employers would begin offering higher salaries to attract and retain talent. After that, labor unions, correctly perceiving their greater leverage with employers, would begin to demand—and receive—higher pay under collective bargaining contracts. All of this would have an impact on prices. Commodity speculation would raise the cost of raw goods for manufacturers and force them to charge retailers more, while retailers would sense the better purchasing position of their customers and raise prices themselves. The entire phenomenon would be exacerbated by the fact that enormous segments of the economy, though operating at full tilt and with essentially no unemployment, would be producing war materiel for use overseas rather than consumer goods to be purchased at home. The purchasing power created by widespread availability of good-paying jobs would face a shortage of products it could actually buy. Demand would rage far ahead of supply. Without “heavy taxation, a high pressure savings campaign or rationing on a wide scale,” the United States was in for an inflationary explosion.
During World War I, rising prices had accrued to industrialists in the form of higher profits, which were then taxed away by the government, borrowed by the government, or spent on consumer goods, further driving up their prices. When those profits were borrowed, the industrialists received an asset—bonds—that their workers did not. Workers benefited only in the form of higher pay—and that was cold comfort, since the value of their paychecks was steadily being inflated away. The most egalitarian method, of course, would have been to tax profits to the hilt—but there was a limit to how much governments could actually tax. In the United States, for instance, the tax rate on the highest incomes would eventually reach 94 percent during the war. For taxes to really do the trick, they would ultimately have to hit working people of more modest means. By forcing workers to accept a program of “deferred pay,” Keynes was attempting to redistribute postwar wealth from the investor class to the working class. The title of the piece is misleading. Compulsory savings wouldn’t really “pay” for anything. By hook or by crook, the British government was going to maximize war production. When it wanted bombs, it would make them, and, since the gold standard was long gone, it could print the money to pay for them without having to yoke its printing presses to the amount of gold at the Bank of England. Mandatory savings were a way of managing inflation. By pulling demand out of the economy—reducing the purchasing power of ordinary people—Keynes wanted to limit their ability to bid up retail prices. This was a critical observation about the way money, debt, and even taxes functioned in a post–gold standard world. In 1931, it had been possible for the British government to spend so much money that it could not meet its debt obligations, because it could print only so much money; its debts were written in pounds tied to a certain amount of gold. 
Under the gold standard, it was possible for a government to run out of money; there was only so much gold in the vaults. But a government that controlled its own currency, Keynes observed, could not go bankrupt. Under the fiat currency that had prevailed in Great Britain since 1931, the government could easily print its way out of excessive debt. Taken to extremes, the consequence of that strategy would be inflation, of course. And so the purpose of taxes—or deferred savings or any similar instrument—was not to “pay” for government services but to regulate the value of money.
By declaring “freedom from want” a human right, FDR had presented the social reforms of the New Deal as a moral imperative every bit as pressing as the military defeat of Nazism. By including it in the Atlantic Charter, he and Churchill had declared personal economic security a defining characteristic of any democracy, a bedrock guarantee that distinguished a free society from tyranny. Hayek turned this argument on its head—a daring maneuver at the height of the war that had transformed FDR and Churchill into figures of public adulation. The very idea of “economic freedom,” Hayek argued, was antithetical to what true advocates of political freedom had championed for centuries. “Freedom from necessity,” he claimed, was an inherently “socialist” idea. It was not a bulwark for the democracies against Nazism but an ingredient of Nazism and Soviet communism alike, which could only be effectively implemented by a violent dictatorship that crushed other political rights.
The antigovernment refrain of The Road to Serfdom was perfectly in key with Mises’ uncompromising libertarian tract Bureaucracy, published in the same year, in which Hayek’s mentor forcefully declared New Deal liberalism a variant of authoritarian communism. “Capitalism means free enterprise, sovereignty of the consumers in economic matters, and sovereignty of the voters in political matters,” he wrote. “Socialism means full government control of every sphere of the individual’s life….There is no compromise possible between these two systems.”12 You could have laissez-faire, or you could have Soviet Russia; there was no middle ground. Hayek recognized that the all-or-nothing severity of his old instructor was a political dead end in an era in which every government seemed to be pursuing new Keynesian reforms. And so, like Lippmann before him, Hayek attempted to graft his laissez-faire conception of liberty onto something compatible with the emerging modern nation-state. The government might be allowed to maintain some basic minimum standard of living for everyone, after all. He drew a distinction between “regulation”—which was merely designed to solve obvious problems—and dangerous “planning”—which could only be achieved by a dictator orchestrating the lives and limiting the choices of free individuals. The size and scope of corporate enterprises, he argued, should be closely limited and monitored to prevent big firms from interfering with free competition in the marketplace.
In The End of Laissez-Faire, Keynes had argued that liberalism could not stand on abstract principles alone; it had to actually deliver the goods for the people who lived under it. Laissez-faire had led to vast inequality and grinding depression, failing a basic test for democratic legitimacy. By shrugging off the practical shortcomings of laissez-faire, Keynes argued, Hayek had deluded himself about the causes of dictatorship in Germany. The economic fuel for the rise of Hitler had been the suffering and despair generated by deflation—not the social welfare policies Hayek decried as “socialism.” The democracies of the world could not turn their backs on the economic strategies that had rejuvenated them in the late 1930s and 1940s; doing so would only unleash a new wave of political uncertainty, encouraging new authoritarian social movements. Hayek’s call to abandon the New Deal and Keynesian economic management was a recipe for more strongmen. “What we need therefore, in my opinion, is not a change in our economic programmes, which would only lead in practice to disillusion with the results of your philosophy,” he wrote, “but perhaps even the contrary, namely, an enlargement of them.”
Hayek and Keynes agreed that democracy was not the fundamental organizing principle of society; it was a tool for achieving more important goals. They even agreed that the most critical function of democracy was its ability to produce a vibrant, elite culture. The value Keynes placed on Bloomsbury was in some respects very similar to Hayek’s appreciation for the old Viennese aristocracy. But to Keynes, nothing was lost in guiding all the world to Bloomsbury, while for Hayek, aristocracy was inherently exclusive; the whole point was that not everyone could be an aristocrat.
Unemployment was a breeding ground for fascism. It created dangerous political instability and a source of anger that could easily be weaponized. The terms of trade might help or hurt efforts to establish international goodwill, but tariffs or no tariffs, the legitimacy of an international economic order depended entirely on whether it did, in fact, provide for mutual prosperity.
The gold standard, he maintained, had broken down because it forced countries into deflationary corners. Countries that ran trade deficits became entirely responsible for restoring trade balance, Keynes believed, and they would eventually be placed in a position where they could only achieve competitive prices for their goods abroad by forcing down domestic wages, causing mass domestic unemployment. If Britain, for instance, ran a trade deficit with the United States by importing more than it exported, it would result in a balance-of-payments problem: Britain would be paying out more money to the United States than it was taking in. If the situation persisted long enough, Britain would run out of money with which to pay for American goods. This problem could in theory be resolved by international lending. If Americans, flush with money earned by exporting so many goods, made loans on reasonable terms to Great Britain, then the British would have the money needed to keep buying American exports.
Under the ethical norms of the gold standard, the resulting suffering was the price a country had to pay for being weak or lazy. Keynes readily accepted that many countries had ineffective economic infrastructure. But nations often ran trade deficits because they had to, not because they were any more or less reckless than countries running trade surpluses. What’s more, governments that ran surpluses weren’t in fact being injured by countries that ran deficits. Though the deficit country would run up large financial debts, the surplus country enjoyed a fat export trade that employed its workers and raised its standard of living. The gold standard ethic heaped shame upon countries for piling up large debts, but it was the surplus countries that benefited most from those debts—and benefited at the expense of the debtor country’s employment. Keynes recognized that in the international order, as in ordinary life, the real villains were rarely beggars.
In Samuelson’s hands, human behavior and the economy more broadly were best understood as rational, profit-maximizing endeavors. Markets would clear themselves, and supply and demand would find their own rational equilibrium, just as David Ricardo and Adam Smith had posited long ago. But they would only do so, according to Samuelson, when the economy was operating at something close to full employment. By deploying Keynesian deficit spending or providing Keynesian tax cuts, policy makers could keep the economy from slipping “into a topsy-turvy wonderland where right seems left and left is right; up seems down; and black, white.”11 So long as unemployment did not spin out of control, the rational, profit-maximizing behavior of human beings would allow statistics to reliably predict when and where economic forces would reach equilibrium—if the data were sufficiently accurate.
With the age of economic scarcity ended, Galbraith believed that many of the objections economists had raised about economic organization in the past were no longer significant. Corporate monopolies might well be wasteful, but waste was not very important. What mattered was power. And even tremendous concentrations of power such as those of the modern corporation were not necessarily a problem so long as they were “countervailed” by other great powers—other large corporations in the supply chain or distribution scheme or, more important, powerful labor unions and a powerful government.
The whole point of The General Theory, she believed, was to show that economic production could not be understood as a self-sustaining set of processes independent from social norms and political realities.
American Capitalism had celebrated the end of scarcity. Now The Affluent Society decried the country’s increasing dependence on unnecessary production to establish the financial security of most families. The relentless postwar reliance on boosting economic output as the chief, if not only, means of improving the American standard of living had subjugated the work of democracy to the mechanics of the market. Nobody in her right mind would choose to work longer hours for dirty public parks. But that was what the logic of the market was dictating, because the market could only reward ideas that turned a profit. Nobody stood to profit from clean parks; they were just nicer to live with than dirty parks. But if nobody made the political judgment that clean parks were better, a society organized around profit incentives from production alone would almost automatically end up with dirty parks. The market was not an impartial guide to the beliefs of the public, and some of its verdicts were crazy.
Under the leadership of Marriner Eccles in the 1930s, the Fed board of governors in Washington had effectively fused with the Treasury Department, allowing the United States to pursue a unified fiscal and monetary agenda. Under the arrangement, the Fed pursued a monetary policy that kept interest rates low and money cheap for both banks and the federal government. Inflation and unemployment were managed not by interest rate adjustments but by fiscal policy—government spending and taxation—and, during the war, price controls. From 1937 to 1947, the Fed kept the discount rate at 1 percent, and beginning in 1942, it publicly coordinated monetary policy with the Treasury Department to keep down the interest rate on World War II bonds. Even after the war, when inflation briefly shot up after price controls were eliminated, the United States didn’t battle rising prices with high interest rates and the unemployment high interest rates created. As late as 1951, the discount rate was still just 1.75 percent, and the Fed remained formally committed to guaranteeing a specific, predictable interest rate on U.S. government debt.
Helping the rich get richer, Kennedy had argued, was the surest way to help the country. “I am not sure,” Galbraith had previously told Kennedy, “what the advantage is in having a few more dollars to spend if the air is too dirty to breathe, the water too polluted to drink, the commuters are losing out on the struggle to get in and out of the cities, the streets are filthy, and the schools so bad that the young, perhaps wisely, stay away.”
According to Friedman, there was a “natural rate” of unemployment below which no policy maker, fiscal or monetary, could push the economy without causing inflation. It was hard to pinpoint just what this “natural rate” was; it depended on technology, productivity, unionization rates, and regulatory policies. But tinkering with fiscal or monetary policy to boost employment was a fool’s errand.
Two weeks after the president told Connally to go off on Galbraith, the British Treasury informed the Nixon administration that it was about to shore up the pound by redeeming $3 billion in U.S. assets—dollars and Treasury securities—for gold. It was, in essence, a vote of no confidence against American inflation management. The United States was the only country in the Bretton Woods system with a currency convertible into gold. For the British, there was no difference between holding a dollar bill and holding the dollar’s exchange weight in gold—unless they expected the value of those dollars to decline. The United States had been leaking gold for years thanks to inflationary pressures and the new phenomenon of an American trade deficit. And so U.S. trading partners increasingly preferred holding gold to holding dollars. Great Britain’s decision was sure to rattle financial markets all over the world. A bold, multibillion-dollar gesture from a close diplomatic ally might even spark a run on the dollar.
Academic economics became dominated by conservative ideas. Monetarism quickly faded once Volcker found he couldn’t accurately or effectively target the precise supply of money in the economy. It was replaced by the rational expectations hypothesis, formulated by future Nobel laureate Robert Lucas. The rational expectations school essentially took Friedman’s ideas about price expectations and applied them to government policy. Rational people, according to Lucas, would factor the future effects of any change in tax rates or regulatory arrangements into their economic decisions. Increasing government spending to boost the economy was futile, according to this thinking, because people would recognize that the resulting budget deficit would eventually have to be cured through higher taxes and would therefore save any money they received in anticipation of future tax bills. As a result, it was impossible for policy makers to make any lasting improvement in the lives of citizens through macroeconomic management; the market would quickly adjust and subsequently overrule the government meddlers. It was as if Keynes had never existed; uncertainty had given way to hyperrationality and the ability to see the future. Lucas even went so far as to claim that his work had rendered the entire field of macroeconomics superfluous.
At its core, The General Theory of Employment, Interest and Money was a book about the dangers and limitations of financial markets. Given uncertainty about the future, it was impossible for markets to accurately price the full slate of risks attached to any financial asset. Investors were constantly processing new, unexpected information and attitudes, including their own. If a society relied excessively on financial markets to allocate resources, develop research, and improve industry, Keynes believed, it was destined for underperformance, instability, and unemployment. He had designed a theory and a policy agenda in which financial markets were subjugated to the authority of the state, believing the coordinated action of a government was capable of meeting the investment needs of society that financial markets could only secure through fleeting accidents. The Clinton administration was doing the opposite of what Keynes had prescribed: subjugating both the governing agenda of American democracy and the direction of global economic development to the currents of international capital markets.
As Joseph Stiglitz concluded in 2017, globalization “was an agenda that was driven by large corporations at the expense of workers, consumers, [and] citizens in both the developed and developing world.”39 The social milieus of citizens and shareholders became increasingly divergent, leading to disparities not only in wealth but in education and physical health, with those further down the income ladder registering lower test scores and shorter life expectancies, according to the OECD.40 The result has been heightened political tension not only between different countries but within individual nation-states as economically insecure populations question whether they do in fact belong to the same political project as their more affluent neighbors. “I think globalization has contributed to tearing societies apart,” argues economist Dani Rodrik.
The central problems of the twentieth century, Keynes argued, were best solved by alleviating inequality. Enterprise and economic growth were driven not by the unique genius and vast fortunes of the very rich but by the purchasing power of the masses, which created markets for new ideas. To put people to work, governments needed to create systems of support for the poor and the middle class, not new favors for the rich.