Key Points from the Book: Chip War

In today’s world, chips permeate every aspect of our lives, from smartphones, TVs, and laptops to ballistic missiles. War is no longer decided by which side has more tanks, fighter planes, or infantry. Drones and smart missiles that can lock onto targets with high precision are increasingly changing the balance of power on the battlefield. This has made access to advanced chips a matter of national security that neither the West nor China can take for granted. In October 2022, the U.S. announced new export controls that prohibit the sale to China of advanced chips and the equipment needed to make them, while also effectively banning U.S. citizens, residents, and green card holders from helping China develop its own semiconductor industry and catch up to the West. This matters greatly to China, which remains highly reliant on imports of advanced technology to power its industry, despite efforts to push for home-grown innovation. With a single company in Taiwan producing 92% of the world’s most advanced chips, the geopolitical stakes could not be higher.

In his book, Chris Miller beautifully outlines the history of how we got to where we are today and the various parties involved in developing the advanced chips that power our world. Most interesting to me is the parallel between China today and Japan in the 1980s-90s, when Japan was one of the U.S.’s main technology rivals.

The United States still has a stranglehold on the silicon chips that gave Silicon Valley its name, though its position has weakened dangerously. China now spends more money each year importing chips than it spends on oil. These semiconductors are plugged into all manner of devices, from smartphones to refrigerators, that China consumes at home or exports worldwide. Armchair strategists theorize about China’s “Malacca Dilemma”—a reference to the main shipping channel between the Pacific and Indian Oceans—and the country’s ability to access supplies of oil and other commodities amid a crisis. Beijing, however, is more worried about a blockade measured in bytes rather than barrels. China is devoting its best minds and billions of dollars to developing its own semiconductor technology in a bid to free itself from America’s chip choke.

Apple makes precisely none of these chips. It buys most off-the-shelf: memory chips from Japan’s Kioxia, radio frequency chips from California’s Skyworks, audio chips from Cirrus Logic, based in Austin, Texas. Apple designs in-house the ultra-complex processors that run an iPhone’s operating system. But the Cupertino, California, colossus can’t manufacture these chips. Nor can any company in the United States, Europe, Japan, or China. Today, Apple’s most advanced processors—which are arguably the world’s most advanced semiconductors—can only be produced by a single company in a single building, the most expensive factory in human history, which on the morning of August 18, 2020, was only a couple dozen miles off the USS Mustin’s starboard bow.

America’s vast reserve of scientific expertise, nurtured by government research funding and strengthened by the ability to poach the best scientists from other countries, has provided the core knowledge driving technological advances forward. The country’s network of venture capital firms and its stock markets have provided the startup capital new firms need to grow—and have ruthlessly forced out failing companies. Meanwhile, the world’s largest consumer market in the U.S. has driven the growth that’s funded decades of R&D on new types of chips. Other countries have found it impossible to keep up on their own but have succeeded when they’ve deeply integrated themselves into Silicon Valley’s supply chains. Europe has isolated islands of semiconductor expertise, notably in producing the machine tools needed to make chips and in designing chip architectures. Asian governments, in Taiwan, South Korea, and Japan, have elbowed their way into the chip industry by subsidizing firms, funding training programs, keeping their exchange rates undervalued, and imposing tariffs on imported chips. This strategy has yielded certain capabilities that no other countries can replicate—but they’ve achieved what they have in partnership with Silicon Valley, continuing to rely fundamentally on U.S. tools, software, and customers.

The concentration of advanced chip manufacturing in Taiwan, South Korea, and elsewhere in East Asia isn’t an accident. A series of deliberate decisions by government officials and corporate executives created the far-flung supply chains we rely on today. Asia’s vast pool of cheap labor attracted chipmakers looking for low-cost factory workers. The region’s governments and corporations used offshored chip assembly facilities to learn about, and eventually domesticate, more advanced technologies. Washington’s foreign policy strategists embraced complex semiconductor supply chains as a tool to bind Asia to an American-led world. Capitalism’s inexorable demand for economic efficiency drove a constant push for cost cuts and corporate consolidation. The steady tempo of technological innovation that underwrote Moore’s Law required ever more complex materials, machinery, and processes that could only be supplied or funded via global markets. And our gargantuan demand for computing power only continues to grow.

MIT considered the Apollo guidance computer one of its proudest accomplishments, but Bob Noyce knew that it was his chips that made the Apollo computer tick. By 1964, Noyce bragged, the integrated circuits in Apollo computers had run for 19 million hours with only two failures, one of which was caused by physical damage when a computer was being moved. Chip sales to the Apollo program transformed Fairchild from a small startup into a firm with one thousand employees. Sales ballooned from $500,000 in 1958 to $21 million two years later. As Noyce ramped up production for NASA, he slashed prices for other customers. An integrated circuit that sold for $120 in December 1961 was discounted to $15 by the following October. NASA’s trust in integrated circuits to guide astronauts to the moon was an important stamp of approval. Fairchild’s Micrologic chips were no longer an untested technology; they were used in the most unforgiving and rugged environment: outer space.

When U.S. defense secretary Robert McNamara reformed military procurement to cut costs in the early 1960s, causing what some in the electronics industry called the “McNamara Depression,” Fairchild’s vision of chips for civilians seemed prescient. The company was the first to offer a full product line of off-the-shelf integrated circuits for civilian customers. Noyce slashed prices, too, gambling that this would drastically expand the civilian market for chips. In the mid-1960s, Fairchild chips that previously sold for $20 were cut to $2. At times Fairchild even sold products below manufacturing cost, hoping to convince more customers to try them. Thanks to falling prices, Fairchild began winning major contracts in the private sector. Annual U.S. computer sales grew from 1,000 in 1957 to 18,700 a decade later. By the mid-1960s, almost all these computers relied on integrated circuits. In 1966, Burroughs, a computer firm, ordered 20 million chips from Fairchild—more than twenty times what the Apollo program consumed. By 1968, the computer industry was buying as many chips as the military. Fairchild chips served 80 percent of this computer market. Bob Noyce’s price cuts had paid off, opening a new market for civilian computers that would drive chip sales for decades to come. Moore later argued that Noyce’s price cuts were as big an innovation as the technology inside Fairchild’s integrated circuits.

California’s Santa Clara Valley had benefitted immensely from the space race, which provided a crucial early customer. Yet by the time of the first lunar landing, Silicon Valley’s engineers had become far less dependent on defense and space contracts. Now they were focused on more earthly concerns. The chip market was booming. Fairchild’s success had already inspired several top employees to defect to competing chipmakers. Venture capital funding was pouring into startups that focused not on rockets but on corporate computers.

By the mid-1960s, the earliest integrated circuits were old news, too big and power-hungry to be very valuable. Compared to almost any other type of technology, semiconductor technology was racing forward. The size of transistors and their energy consumption was shrinking, while the computing power that could be packed on a square inch of silicon roughly doubled every two years. No other technology moved so quickly—so there was no other sector in which stealing last year’s design was such a hopeless strategy.
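The pace described above can be made concrete with a quick back-of-the-envelope calculation. This is only an illustrative sketch: the two-year doubling period comes from the text, but the starting density is a made-up round number.

```python
# Illustrative sketch of Moore's Law-style growth: if on-chip density
# doubles every two years, how far ahead is the leader after a decade?
# The starting density (1,000) is a hypothetical round number.

def density_after(years, start=1_000, doubling_period=2):
    """Transistors per unit area after `years`, doubling every `doubling_period` years."""
    return start * 2 ** (years / doubling_period)

# Ten years = five doublings = 32x growth.
growth = density_after(10) / density_after(0)
print(growth)  # -> 32.0
```

Five doublings in a decade yield a 32x improvement, which is why copying last year’s design was a hopeless strategy: by the time a copy shipped, the original had already moved a generation or two ahead.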

Meanwhile, the “copy it” mentality meant, bizarrely, that the pathways of innovation in Soviet semiconductors were set by the United States. One of the most sensitive and secretive industries in the USSR therefore functioned like a poorly run outpost of Silicon Valley. Zelenograd was just another node in a globalizing network—with American chipmakers at the center.

Sony had the benefit of cheaper wages in Japan, but its business model was ultimately about innovation, product design, and marketing. Morita’s “license it” strategy couldn’t have been more different from the “copy it” tactics of Soviet Minister Shokin. Many Japanese companies had reputations for ruthless manufacturing efficiency. Sony excelled by identifying new markets and targeting them with impressive products using Silicon Valley’s newest circuitry technology. “Our plan is to lead the public with new products rather than ask them what kind of products they want,” Morita declared. “The public does not know what is possible, but we do.”

Interdependence wasn’t always easy. In 1959, the Electronics Industries Association appealed to the U.S. government for help lest Japanese imports undermine “national security”—and their own bottom line. But letting Japan build an electronics industry was part of U.S. Cold War strategy, so, during the 1960s, Washington never put much pressure on Tokyo over the issue. Trade publications like Electronics magazine—which might have been expected to take the side of U.S. companies—instead noted that “Japan is a keystone in America’s Pacific policy…. If she cannot enter into healthy commercial intercourse with the Western hemisphere and Europe, she will seek economic sustenance elsewhere,” like Communist China or the Soviet Union. U.S. strategy required letting Japan acquire advanced technology and build cutting-edge businesses. “A people with their history won’t be content to make transistor radios,” President Richard Nixon later observed. They had to be allowed, even encouraged, to develop more advanced technology.

Fairchild was the first semiconductor firm to offshore assembly in Asia, but Texas Instruments, Motorola, and others quickly followed. Within a decade, almost all U.S. chipmakers had foreign assembly facilities. Sporck began looking beyond Hong Kong. The city’s 25-cent hourly wages were only a tenth of American wages but were among the highest in Asia. In the mid-1960s, Taiwanese workers made 19 cents an hour, Malaysians 15 cents, Singaporeans 11 cents, and South Koreans only a dime. Sporck’s next stop was Singapore, a majority ethnic Chinese city-state whose leader, Lee Kuan Yew, had “pretty much outlawed” unions, as one Fairchild veteran remembered. Fairchild followed by opening a facility in the Malaysian city of Penang shortly thereafter. The semiconductor industry was globalizing decades before anyone had heard of the word, laying the groundwork for the Asia-centric supply chains we know today.

Taiwan and the U.S. had been treaty allies since 1955, but amid the defeat in Vietnam, America’s security promises were looking shaky. From South Korea to Taiwan, Malaysia to Singapore, anti-Communist governments were seeking assurance that America’s retreat from Vietnam wouldn’t leave them standing alone. They were also seeking jobs and investment that could address the economic dissatisfaction that drove some of their populations toward Communism. Minister Li realized that Texas Instruments could help Taiwan solve both problems at once.

After initially accusing Mark Shepherd of being an imperialist, Minister Li quickly changed his tune. He realized a relationship with Texas Instruments could transform Taiwan’s economy, building industry and transferring technological know-how. Electronics assembly, meanwhile, would catalyze other investments, helping Taiwan produce more higher-value goods. As Americans grew skeptical of military commitments in Asia, Taiwan desperately needed to diversify its connections with the United States. Americans who weren’t interested in defending Taiwan might be willing to defend Texas Instruments. The more semiconductor plants on the island, and the more economic ties with the United States, the safer Taiwan would be. In July 1968, having smoothed over relations with the Taiwanese government, TI’s board of directors approved construction of the new facility in Taiwan. By August 1969, this plant was assembling its first devices. By 1980, it had shipped its billionth unit.

Intel planned to dominate the business of DRAM chips. Memory chips don’t need to be specialized, so chips with the same design can be used in many different types of devices. This makes it possible to produce them in large volumes. By contrast, the other main type of chips—those tasked with “computing” rather than “remembering”—were specially designed for each device, because every computing problem was different. A calculator worked differently than a missile’s guidance computer, for example, so until the 1970s, they used different types of logic chips. This specialization drove up cost, so Intel decided to focus on memory chips, where mass production would produce economies of scale.
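The economies-of-scale logic behind Intel’s bet can be sketched numerically. All figures below are hypothetical; the point is only the shape of the arithmetic: fixed design and fab costs are amortized over volume, so a standardized memory chip sold into many devices is far cheaper per unit than a bespoke logic chip for one product.

```python
# Why mass production favored memory chips: fixed costs (design, fab)
# are spread across volume. All numbers here are hypothetical.

def unit_cost(fixed_cost, variable_cost, volume):
    """Per-chip cost when fixed costs are amortized across `volume` units."""
    return fixed_cost / volume + variable_cost

# A standardized DRAM design sold into many different devices...
dram = unit_cost(fixed_cost=10_000_000, variable_cost=2, volume=5_000_000)
# ...versus a bespoke logic design serving a single product line.
custom = unit_cost(fixed_cost=10_000_000, variable_cost=2, volume=50_000)

print(dram)    # 4.0   -> fixed costs add only $2 per chip
print(custom)  # 202.0 -> fixed costs dominate the price
```

With the same fixed outlay, the high-volume standardized part costs $4 a chip while the low-volume custom part costs $202, which is the specialization cost the passage describes.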

By the 1980s, consumer electronics had become a Japanese specialty, with Sony leading the way in launching new consumer goods, grabbing market share from American rivals. At first Japanese firms succeeded by replicating U.S. rivals’ products, manufacturing them at higher quality and lower price. Some Japanese played up the idea that they excelled at implementation, whereas America was better at innovation. “We have no Dr. Noyces or Dr. Shockleys,” one Japanese journalist wrote, though the country had begun to accumulate its share of Nobel Prize winners. Yet prominent Japanese continued to downplay their country’s scientific successes, especially when speaking to American audiences. Sony’s research director, the famed physicist Makoto Kikuchi, told an American journalist that Japan had fewer geniuses than America, a country with “outstanding elites.” But America also had “a long tail” of people “with less than normal intelligence,” Kikuchi argued, explaining why Japan was better at mass manufacturing.

The U.S. had supported Japan’s postwar transformation into a transistor salesman. U.S. occupation authorities transferred knowledge about the invention of the transistor to Japanese physicists, while policymakers in Washington ensured Japanese firms like Sony could easily sell into U.S. markets. The aim of turning Japan into a country of democratic capitalists had worked. Now some Americans were asking whether it had worked too well. The strategy of empowering Japanese businesses seemed to be undermining America’s economic and technological edge.

Sporck saw Silicon Valley’s internal battles as fair fights, but thought Japan’s DRAM firms benefitted from intellectual property theft, protected markets, government subsidies, and cheap capital.

Jerry Sanders saw Silicon Valley’s biggest disadvantage as its high cost of capital. The Japanese “pay 6 percent, maybe 7 percent, for capital. I pay 18 percent on a good day,” he complained. Building advanced manufacturing facilities was brutally expensive, so the cost of credit was hugely important. A next-generation chip emerged roughly once every two years, requiring new facilities and new machinery. In the 1980s, U.S. interest rates reached 21.5 percent as the Federal Reserve sought to fight inflation. By contrast, Japanese DRAM firms got access to far cheaper capital. Chipmakers like Hitachi and Mitsubishi were part of vast conglomerates with close links to banks that provided large, long-term loans. Even when Japanese companies were unprofitable, their banks kept them afloat by extending credit long after American lenders would have driven them to bankruptcy. Japanese society was structurally geared to produce massive savings, because its postwar baby boom and rapid shift to one-child households created a glut of middle-aged families focused on saving for retirement. Japan’s skimpy social safety net provided a further incentive for saving.
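Sanders’s complaint can be quantified with a rough sketch. Only the 6 percent and 18 percent rates come from the text; the fab cost and five-year horizon are hypothetical round numbers chosen to show how large the gap becomes once interest compounds.

```python
# Rough comparison of financing costs in the 1980s DRAM race.
# The $100M fab cost and 5-year horizon are hypothetical; only the
# 6% (Japan) and 18% (U.S.) rates come from Sanders's complaint.

def total_repayment(principal, annual_rate, years):
    """Total owed on a loan compounded annually, repaid as a lump sum."""
    return principal * (1 + annual_rate) ** years

fab_cost = 100_000_000  # hypothetical next-generation fab

japan = total_repayment(fab_cost, 0.06, 5)
usa = total_repayment(fab_cost, 0.18, 5)

print(f"Japan at 6%:  ${japan:,.0f}")
print(f"U.S. at 18%:  ${usa:,.0f}")
print(f"U.S. premium: {usa / japan:.2f}x")
```

Under these assumptions the American firm owes roughly 70 percent more for the same fab over five years, and since a new generation of fabs was needed every two years or so, the penalty compounded generation after generation.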

With this cheap capital, Japanese firms launched a relentless struggle for market share. Toshiba, Fujitsu, and others were just as ruthless in competing with each other, despite the cooperative image painted by some American analysts. Yet with practically unlimited bank loans available, they could sustain losses as they waited for competitors to go bankrupt. In the early 1980s, Japanese firms invested 60 percent more than their U.S. rivals in production equipment, even though everyone in the industry faced the same cutthroat competition, with hardly anyone making much profit. Japanese chipmakers kept investing and producing, grabbing more and more market share. Because of this, five years after the 64K DRAM chip was introduced, Intel—the company that had pioneered DRAM chips a decade earlier—was left with only 1.7 percent of the global DRAM market, while Japanese competitors’ market share soared.

In 1987, Nobel Prize-winning MIT economist Robert Solow, who pioneered the study of productivity and economic growth, argued that the chip industry suffered from an “unstable structure,” with employees job hopping between firms and companies declining to invest in their workers. Prominent economist Robert Reich lamented the “paper entrepreneurialism” in Silicon Valley, which he thought focused too much on the search for prestige and affluence rather than technical advances. At American universities, he declared, “science and engineering programs are foundering.” American chipmakers’ DRAM disaster was somewhat related to GCA’s collapsing market share. The Japanese DRAM firms that were outcompeting Silicon Valley preferred to buy from Japanese toolmakers, benefitting Nikon at the expense of GCA. However, most of GCA’s problems were homegrown, driven by unreliable equipment and bad customer service. Academics devised elaborate theories to explain how Japan’s huge conglomerates were better at manufacturing than America’s small startups. But the mundane reality was that GCA didn’t listen to its customers, while Nikon did. Chip firms that interacted with GCA found it “arrogant” and “not responsive.” No one said that about its Japanese rivals.

The oil embargoes of 1973 and 1979 had demonstrated to many Americans the risks of relying on foreign production. When Arab governments cut oil exports to punish America for supporting Israel, the U.S. economy plunged into a painful recession. A decade of stagflation and political crises followed. American foreign policy fixated on the Persian Gulf and securing its oil supplies. President Jimmy Carter declared the region one of “the vital interests of the United States of America.” Ronald Reagan deployed the U.S. Navy to escort oil tankers in and out of the Gulf. George H. W. Bush went to war with Iraq in part to liberate Kuwait’s oil fields. When America said that oil was a “strategic” commodity, it backed the claim with military force.

But in 1986, Japan had overtaken America in the number of chips produced. By the end of the 1980s, Japan was supplying 70 percent of the world’s lithography equipment. America’s share—in an industry invented by Jay Lathrop in a U.S. military lab—had fallen to 21 percent. Lithography is “simply something we can’t lose, or we will find ourselves completely dependent on overseas manufacturers to make our most sensitive stuff,” one Defense Department official told the New York Times. But if the trends of the mid-1980s continued, Japan would dominate the DRAM industry and drive major U.S. producers out of business. The U.S. might find itself even more reliant on foreign chips and semiconductor manufacturing equipment than it was on oil, even at the depths of the Arab embargo. Suddenly Japan’s subsidies for its chip industry, widely blamed for undermining American firms like Intel and GCA, seemed like a national security issue.

As America lurched from crisis to crisis, however, the aura around men like Henry Kissinger and Pete Peterson began to wane. Their country’s system wasn’t working—but Japan’s was. By the 1980s, Morita perceived deep problems in America’s economy and society. America had long seen itself as Japan’s teacher, but Morita thought America had lessons to learn as it struggled with a growing trade deficit and the crisis in its high-tech industries. “The United States has been busy creating lawyers,” Morita lectured, while Japan has “been busier creating engineers.” Moreover, American executives were too focused on “this year’s profit,” in contrast to Japanese management, which was “long range.” American labor relations were hierarchical and “old style,” without enough training or motivation for shop-floor employees. Americans should stop complaining about Japan’s success, Morita believed. It was time to tell his American friends: Japan’s system simply worked better.

What made The Japan That Can Say No truly frightening to Washington was not only that it articulated a zero-sum Japanese nationalism, but that Ishihara had identified a way to coerce America. Japan didn’t need to submit to U.S. demands, Ishihara argued, because America relied on Japanese semiconductors. American military strength, he noted, required Japanese chips. “Whether it be mid-range nuclear weapons or inter-continental ballistic missiles, what ensures the accuracy of weapons is none other than compact, high-precision computers,” he wrote. “If Japanese semiconductors are not used, this accuracy cannot be assured.” Ishihara speculated that Japan could even provide advanced semiconductors to the USSR, tipping the military balance in the Cold War.

For a professor-turned-entrepreneur like Irwin Jacobs, DARPA funding and Defense Department contracts were crucial in keeping his startups afloat. But only some government programs worked. Sematech’s effort to save America’s lithography leader was an abject failure, for example. Government efforts were effective not when they tried to resuscitate failing firms, but when they capitalized on pre-existing American strengths, providing funding to let researchers turn smart ideas into prototype products. Members of Congress would no doubt have been furious had they learned that DARPA—ostensibly a defense agency—was wining and dining professors of computer science as they theorized about chip design. But it was efforts like these that shrank transistors, discovered new uses for semiconductors, drove new customers to buy them, and funded the subsequent generation of smaller transistors.

The U.S., Europe, and Japan had booming consumer markets that drove chip demand. Civilian semiconductor markets helped fund the specialization of the semiconductor supply chain, creating companies with expertise in everything from ultra-pure silicon wafers to the advanced optics in lithography equipment. The Soviet Union barely had a consumer market, so it produced only a fraction of the chips built in the West. One Soviet source estimated that Japan alone spent eight times as much on capital investment in microelectronics as the USSR.

As Bill Perry watched the Persian Gulf War unfold, he knew laser-guided bombs were just one of dozens of military systems that had been revolutionized by integrated circuits, enabling better surveillance, communication, and computing power. The Persian Gulf War was the first major test of Perry’s “offset strategy,” which had been devised after the Vietnam War but never deployed in a sizeable battle.

Then in 1990 crisis hit. Japan’s financial markets crashed. The economy slumped into a deep recession. Soon the Tokyo stock market was trading at half its 1990 level. Real estate prices in Tokyo fell even further. Japan’s economic miracle seemed to screech to a halt. Meanwhile, America was resurgent, in business and in war. In just a few short years, “Japan as Number One” no longer seemed very accurate. The case study in Japan’s malaise was the industry that had been held up as exemplary of Japan’s industrial prowess: semiconductors. Morita, now sixty-nine years old, watched Japan’s fortunes decline alongside Sony’s slumping stock price. He knew his country’s problems cut deeper than its financial markets. Morita had spent the previous decade lecturing Americans about their need to improve production quality, not focus on “money games” in financial markets. But as Japan’s stock market crashed, the country’s vaunted long-term thinking no longer looked so visionary. Japan’s seeming dominance had been built on an unsustainable foundation of government-backed overinvestment. Cheap capital had underwritten the construction of new semiconductor fabs, but also encouraged chipmakers to think less about profit and more about output. Japan’s biggest semiconductor firms doubled down on DRAM production even as lower cost producers like Micron and South Korea’s Samsung undercut Japanese rivals.

Like the rest of the Soviet military leadership, Marshal Nikolai Ogarkov had grown more pessimistic over time. As early as 1983, Ogarkov had gone so far as to tell American journalist Les Gelb—off the record—that “the Cold War is over and you have won.” The Soviet Union’s rockets were as powerful as ever. It had the world’s largest nuclear arsenal. But its semiconductor production couldn’t keep up, its computer industry fell behind, its communications and surveillance technologies lagged, and the military consequences were disastrous. “All modern military capability is based on economic innovation, technology, and economic strength,” Ogarkov explained to Gelb. “Military technology is based on computers. You are far, far ahead of us with computers…. In your country, every little child has a computer from age 5.”

When Chang was hired by Taiwan’s government in 1985 to lead the country’s preeminent electronics research institute, Taiwan was one of Asia’s leaders in assembling semiconductor devices—taking chips made abroad, testing them, and attaching them to plastic or ceramic packages. Taiwan’s government had tried breaking into the chipmaking business by licensing semiconductor manufacturing technology from America’s RCA and founding a chipmaker called UMC in 1980, but the company’s capabilities lagged far behind the cutting edge. Taiwan boasted plenty of semiconductor industry jobs, but captured only a small share of the profit, since most money in the chip industry was made by firms designing and producing the most advanced chips. Officials like Minister Li knew the country’s economy would keep growing only if it advanced beyond simply assembling components designed and fabricated elsewhere.

As early as the mid-1970s, while still at TI, Chang had toyed with the idea of creating a semiconductor company that would manufacture chips designed by customers. At the time, chip firms like TI, Intel, and Motorola mostly manufactured chips they had designed in-house. Chang pitched this new business model to fellow TI executives in March 1976. “The low cost of computing power,” he explained to his TI colleagues, “will open up a wealth of applications that are not now served by semiconductors,” creating new sources of demand for chips, which would soon be used in everything from phones to cars to dishwashers. The firms that made these goods lacked the expertise to produce semiconductors, so they’d prefer to outsource fabrication to a specialist, he reasoned. Moreover, as technology advanced and transistors shrank, the cost of manufacturing equipment and R&D would rise. Only companies that produced large volumes of chips would be cost-competitive.

Before TSMC, a couple of small companies, mostly based in Silicon Valley, had tried building businesses around chip design, avoiding the cost of building their own fabs by outsourcing the manufacturing. These “fabless” firms were sometimes able to convince a bigger chipmaker with spare capacity to manufacture their chips. However, they always had second-class status behind the bigger chipmakers’ own production plans. Worse, they faced the constant risk that their manufacturing partners would steal their ideas. In addition, they had to navigate manufacturing processes that were slightly different at each big chipmaker. Not having to build fabs dramatically reduced startup costs, but counting on competitors to manufacture chips was always a risky business model.

However, Mao’s radicalism made it impossible to attract foreign investment or conduct serious science. The year after China produced its first integrated circuit, Mao plunged the country into the Cultural Revolution, arguing that expertise was a source of privilege that undermined socialist equality. Mao’s partisans waged war on the country’s educational system. Thousands of scientists and experts were sent to work as farmers in destitute villages. Many others were simply killed. Chairman Mao’s “Brilliant Directive issued on July 21, 1968” insisted that “it is essential to shorten the length of schooling, revolutionize education, put proletarian politics in command…. Students should be selected from among workers and peasants with practical experience, and they should return to production after a few years study.” The idea of building advanced industries with poorly educated employees was absurd. Even more so was Mao’s effort to keep out foreign technology and ideas. U.S. restrictions prevented China from buying advanced semiconductor equipment, but Mao added his own self-imposed embargo. He wanted complete self-reliance and accused his political rivals of trying to infect China’s chip industry with foreign parts, even though China couldn’t produce many advanced components itself.

The Cultural Revolution began to wane as Mao’s health declined in the early 1970s. Communist Party leaders eventually called scientists back from the countryside. They tried picking up the pieces in their labs. But China’s chip industry, which had lagged far behind Silicon Valley before the Cultural Revolution, was now far behind China’s neighbors, too. During the decade in which China had descended into revolutionary chaos, Intel had invented microprocessors, while Japan had grabbed a large share of the global DRAM market. China accomplished nothing beyond harassing its smartest citizens. By the mid-1970s, therefore, its chip industry was in a disastrous state. “Out of every 1,000 semiconductors we produce, only one is up to standard,” one party leader complained in 1975. “So much is being wasted.”

If anyone could build a chip industry in China, it was Richard Chang. He wouldn’t rely on nepotism or on foreign help. All the knowledge needed for a world-class fab was already in his head. While working at Texas Instruments, he’d opened new facilities for the company around the world. Why couldn’t he do the same in Shanghai? He founded the Semiconductor Manufacturing International Corporation (SMIC) in 2000, raising over $1.5 billion from international investors like Goldman Sachs, Motorola, and Toshiba. One analyst estimated that half of SMIC’s startup capital was provided by U.S. investors. Chang used these funds to hire hundreds of foreigners to operate SMIC’s fab, including at least four hundred from Taiwan.

When Dutch engineer Frits van Hout joined ASML in 1984 just after completing his master’s degree in physics, the company’s employees asked whether he’d joined voluntarily or was forced to take the job. Beyond its tie with Philips, “we had no facilities and no money,” van Hout remembered. Building vast in-house manufacturing processes for lithography tools would have been impossible. Instead, the company decided to assemble systems from components meticulously sourced from suppliers around the world. Relying on other companies for key components brought obvious risks, but ASML learned to manage them. Whereas Japanese competitors tried to build everything in-house, ASML could buy the best components on the market. As it began to focus on developing EUV tools, its ability to integrate components from different sources became its greatest strength. ASML’s second strength, unexpectedly, was its location in the Netherlands. In the 1980s and 1990s, the company was seen as neutral in the trade disputes between Japan and the United States. U.S. firms treated it like a trustworthy alternative to Nikon and Canon. For example, when Micron, the American DRAM startup, wanted to buy lithography tools, it turned to ASML rather than relying on one of the two main Japanese suppliers, each of which had deep ties with Micron’s DRAM competitors in Japan.

The computer industry was built around x86, and Intel dominated that ecosystem, so x86 defines most PC architectures to this day. Intel's x86 instruction set architecture also dominates the server business, which boomed as companies built ever larger data centers in the 2000s and then as businesses like Amazon Web Services, Microsoft Azure, and Google Cloud constructed the vast warehouses of servers that create "the cloud," on which individuals and companies store data and run programs. In the 1990s and early 2000s, Intel had only a small share of the business of providing chips for servers, behind companies like IBM and HP. But Intel used its ability to design and manufacture cutting-edge processor chips to win data center market share and establish x86 as the industry standard there, too. By the mid-2000s, just as cloud computing was emerging, Intel had won a near monopoly over data center chips, competing only with AMD. Today, nearly every major data center uses x86 chips from either Intel or AMD. The cloud can't function without their processors.

Shortly after the deal to put Intel’s chips in Mac computers, Jobs came back to Otellini with a new pitch. Would Intel build a chip for Apple’s newest product, a computerized phone? All cell phones used chips to run their operating systems and manage communication with cell phone networks, but Apple wanted its phone to function like a computer. It would need a powerful computer-style processor as a result. “They wanted to pay a certain price,” Otellini told journalist Alexis Madrigal after the fact, “and not a nickel more…. I couldn’t see it. It wasn’t one of these things you can make up on volume. And in hindsight, the forecasted cost was wrong and the volume was 100× what anyone thought.” Intel turned down the iPhone contract. Apple looked elsewhere for its phone chips. Jobs turned to Arm’s architecture, which unlike x86 was optimized for mobile devices that had to economize on power consumption. The early iPhone processors were produced by Samsung, which had followed TSMC into the foundry business. Otellini’s prediction that the iPhone would be a niche product proved horribly wrong. By the time he realized his mistake, however, it was too late. Intel would later scramble to win a share of the smartphone business. Despite eventually pouring billions of dollars into products for smartphones, Intel never had much to show for it. Apple dug a deep moat around its immensely profitable castle before Otellini and Intel realized what was happening.

Even within the semiconductor industry, it was easy to find counterpoints to Grove's pessimism about offshoring expertise. Compared to the situation in the late 1980s, when Japanese competitors were beating Silicon Valley in terms of DRAM design and manufacturing, America's chip ecosystem looked healthier. It wasn't only Intel that was printing immense profits. Many fabless chip designers were, too. Except for the loss of cutting-edge lithography, America's semiconductor manufacturing equipment firms generally thrived during the 2000s. Applied Materials remained the world's largest semiconductor toolmaking company, building equipment like the machines that deposited thin films of chemicals on top of silicon wafers as they were processed. Lam Research had world-beating expertise in etching circuits into silicon wafers. And KLA, also based in Silicon Valley, had the world's best tools for finding nanometer-sized errors on wafers and lithography masks. These three toolmakers were rolling out new generations of equipment that could deposit, etch, and measure features at the atomic scale, which would be crucial for making the next generation of chips. A couple of Japanese firms—notably, Tokyo Electron—had some comparable capabilities to America's equipment makers. Nevertheless, it was basically impossible to make a leading-edge chip without using some American tools.

The history of the semiconductor industry didn't suggest that U.S. leadership was guaranteed. America hadn't outrun the Japanese in the 1980s, though it did in the 1990s. GCA hadn't outrun Nikon or ASML in lithography. Micron was the only DRAM producer able to keep pace with East Asian rivals, while many other U.S. DRAM producers went bust. Through the end of the 2000s, Intel retained a lead over Samsung and TSMC in producing miniaturized transistors, but the gap had narrowed. Intel was running more slowly, though it still benefitted from its more advanced starting point. The U.S. was a leader in most types of chip design, though Taiwan's MediaTek was proving that other countries could design chips, too. Van Atta saw few reasons for confidence and none for complacency. "The U.S. leadership position," he warned in 2007, "will likely erode seriously over the next decade." No one was listening.

By the 2000s, it was common to split the semiconductor industry into three categories. “Logic” refers to the processors that run smartphones, computers, and servers. “Memory” refers to DRAM, which provides the short-term memory computers need to operate, and flash, also called NAND, which remembers data over time. The third category of chips is more diffuse, including analog chips like sensors that convert visual or audio signals into digital data, radio frequency chips that communicate with cell phone networks, and semiconductors that manage how devices use electricity.

Unlike Samsung and Hynix, which produce most of their DRAM in South Korea, Micron’s long string of acquisitions left it with DRAM fabs in Japan, Taiwan, and Singapore as well as in the United States. Government subsidies in countries like Singapore encouraged Micron to maintain and expand fab capacity there. So even though an American company is one of the world’s three biggest DRAM producers, most DRAM manufacturing is in East Asia.

Every PC maker, from IBM to Compaq, had to use an Intel or an AMD chip for their main processor, because these two firms had a de facto monopoly on the x86 instruction set that PCs required. There was a lot more competition in the market for chips that rendered images on screens. The emergence of semiconductor foundries, and the driving down of startup costs, meant that it wasn’t only Silicon Valley aristocracy that could compete to build the best graphics processors. The company that eventually came to dominate the market for graphics chips, Nvidia, had its humble beginnings not in a trendy Palo Alto coffeehouse but in a Denny’s in a rough part of San Jose.

Nvidia’s first set of customers—video and computer game companies—might not have seemed like the cutting edge, yet the firm wagered that the future of graphics would be in producing complex, 3D images. Early PCs were a dull, drab, 2D world, because the computation required to display 3D images was immense.

Jacobs, whose faith in Moore’s Law was as strong as ever, thought a more complicated system of frequency-hopping would work better. Rather than keeping a given phone call on a certain frequency, he proposed moving call data between different frequencies, letting him cram more calls into available spectrum space. Most people thought he was right in theory, but that such a system would never work in practice. Voice quality would be low, they argued, and calls would be dropped. The amount of processing needed to move call data between frequencies and have it interpreted by a phone on the other end seemed enormous. Jacobs disagreed, founding a company called Qualcomm—Quality Communications—in 1985 to prove the point. He built a small network with a couple cell towers to prove it would work. Soon the entire industry realized Qualcomm’s system would make it possible to fit far more cell phone calls into existing spectrum space by relying on Moore’s Law to run the algorithms that make sense of all the radio waves bouncing around. For each generation of cell phone technology after 2G, Qualcomm contributed key ideas about how to transmit more data via the radio spectrum and sold specialized chips with the computing power capable of deciphering this cacophony of signals. The company’s patents are so fundamental it’s impossible to make a cell phone without them. Qualcomm soon diversified into a new business line, designing not only the modem chips in a phone that communicate with a cell network, but also the application processors that run a smartphone’s core systems. These chip designs are monumental engineering accomplishments, each built on tens of millions of lines of code.
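The core of Jacobs's frequency-hopping idea can be sketched in a few lines (a hypothetical illustration, not Qualcomm's actual algorithm: the channel count, seeds, and function names are invented for the example). If handset and tower derive the same pseudorandom hop schedule from a shared seed, call data can jump between frequencies while remaining decodable on the other end, and calls with different seeds can share the same spectrum with few collisions:

```python
import random

# Eight available frequency slots (a made-up number for illustration).
CHANNELS = list(range(8))

def hop_sequence(seed, n_hops):
    """Derive a pseudorandom schedule of channels from a shared seed."""
    rng = random.Random(seed)
    return [rng.choice(CHANNELS) for _ in range(n_hops)]

# Two different calls hop independently across the same spectrum.
call_a = hop_sequence(seed=1, n_hops=10)
call_b = hop_sequence(seed=2, n_hops=10)

# Transmitter and receiver of the same call stay synchronized,
# because both compute the identical schedule from the shared seed.
assert hop_sequence(seed=1, n_hops=10) == call_a
```

The processing burden the skeptics worried about lives in the real version of that last step: the receiver must track the schedule and reassemble the signal in real time, which is exactly the workload Moore's Law made cheap.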

Fabless chip design firms were hungry for a credible competitor to TSMC, because the Taiwanese behemoth already had around half of the world's foundry market. The only other major competitor was Samsung, whose foundry business had technology that was roughly comparable to TSMC's, though the company possessed far less production capacity. Complications arose, though, because part of Samsung's operation involved building chips that it designed in-house. Whereas a company like TSMC builds chips for dozens of customers and focuses relentlessly on keeping them happy, Samsung had its own line of smartphones and other consumer electronics, so it was competing with many of its customers. Those firms worried that ideas shared with Samsung's chip foundry might end up in other Samsung products. TSMC and GlobalFoundries had no such conflicts of interest.

As Jobs introduced new versions of the iPhone, he began etching his vision for the smartphone into Apple’s own silicon chips. A year after launching the iPhone, Apple bought a small Silicon Valley chip design firm called PA Semi that had expertise in energy-efficient processing. Soon Apple began hiring some of the industry’s best chip designers. Two years later, the company announced it had designed its own application processor, the A4, which it used in the new iPad and the iPhone 4. Designing chips as complex as the processors that run smartphones is expensive, which is why most low- and midrange smartphone companies buy off-the-shelf chips from companies like Qualcomm. However, Apple has invested heavily in R&D and chip design facilities in Bavaria and Israel as well as Silicon Valley, where engineers design its newest chips. Now Apple not only designs the main processors for most of its devices but also ancillary chips that run accessories like AirPods. This investment in specialized silicon explains why Apple’s products work so smoothly. Within four years of the iPhone’s launch, Apple was making over 60 percent of all the world’s profits from smartphone sales, crushing rivals like Nokia and BlackBerry and leaving East Asian smartphone makers to compete in the low-margin market for cheap phones.

In the early 2010s, Nvidia—the designer of graphics chips—began hearing rumors of PhD students at Stanford using Nvidia's graphics processing units (GPUs) for something other than graphics. GPUs were designed to work differently from standard Intel or AMD CPUs, which are infinitely flexible but run all their calculations one after the other. GPUs, by contrast, are designed to run multiple iterations of the same calculation at once. This type of "parallel processing," it soon became clear, had uses beyond controlling pixels of images in computer games. It could also train AI systems efficiently. Where a CPU would feed an algorithm many pieces of data, one after the other, a GPU could process multiple pieces of data simultaneously. To learn to recognize images of cats, a CPU would process pixel after pixel, while a GPU could "look" at many pixels at once. So the time needed to train a computer to recognize cats decreased dramatically. Nvidia has since bet its future on artificial intelligence. From its founding, Nvidia outsourced its manufacturing, largely to TSMC, and focused relentlessly on designing new generations of GPUs and rolling out regular improvements to its special programming language called CUDA that makes it straightforward to devise programs that use Nvidia's chips. As investors bet that data centers will require ever more GPUs, Nvidia has become America's most valuable semiconductor company.
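The sequential-versus-parallel contrast can be made concrete with a minimal sketch (a hedged illustration only: NumPy's vectorized operations stand in for true GPU parallelism, and the "brightness" test and function names are invented for the example):

```python
import numpy as np

# A toy 64x64 "image" of brightness values in [0, 1).
rng = np.random.default_rng(0)
image = rng.random((64, 64))

def count_bright_sequential(img, threshold=0.5):
    """CPU-style: examine pixels one after the other."""
    count = 0
    for value in img.flat:
        if value > threshold:
            count += 1
    return count

def count_bright_parallel(img, threshold=0.5):
    """GPU-style: apply the same test to every pixel at once.
    (NumPy vectorization stands in for hardware parallelism here.)"""
    return int(np.sum(img > threshold))

# Both give the same answer; the second expresses the work as one
# operation over all pixels, which parallel hardware can exploit.
assert count_bright_sequential(image) == count_bright_parallel(image)
```

Training a neural network is dominated by exactly this kind of same-operation-over-many-values arithmetic, which is why hardware built for pixels turned out to suit AI.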

Whether it will be Nvidia or the big cloud companies doing the vanquishing, Intel’s near-monopoly in sales of processors for data centers is ending. Losing this dominant position would have been less problematic if Intel had found new markets. However, the company’s foray into the foundry business in the mid-2010s, where it tried to compete head-on with TSMC, was a flop. Intel tried opening its manufacturing lines to any customers looking for chipmaking services, quietly admitting that the model of integrated design and manufacturing wasn’t nearly as successful as Intel’s executives claimed. The company had all the ingredients to become a major foundry player, including advanced technology and massive production capacity, but succeeding would have required a major cultural change. TSMC was open with intellectual property, but Intel was closed off and secretive. TSMC was service-oriented, while Intel thought customers should follow its own rules. TSMC didn’t compete with its customers, since it didn’t design any chips. Intel was the industry giant whose chips competed with almost everyone.

Why, then, was Xi Jinping worried about digital security? The more China’s leaders studied their technological capabilities, the less important their internet companies seemed. China’s digital world runs on digits—1s and 0s—that are processed and stored mostly by imported semiconductors. China’s tech giants depend on data centers full of foreign, largely U.S.-produced, chips. The documents that Edward Snowden leaked in 2013 before fleeing to Russia demonstrated American network-tapping capabilities that surprised even the cyber sleuths in Beijing. Chinese firms had replicated Silicon Valley’s expertise in building software for e-commerce, online search, and digital payments. But all this software relies on foreign hardware. When it comes to the core technologies that undergird computing, China is staggeringly reliant on foreign products, many of which are designed in Silicon Valley and almost all of which are produced by firms based in the U.S. or one of its allies.

China’s problem isn’t only in chip fabrication. In nearly every step of the process of producing semiconductors, China is staggeringly dependent on foreign technology, almost all of which is controlled by China’s geopolitical rivals—Taiwan, Japan, South Korea, or the United States. The software tools used to design chips are dominated by U.S. firms, while China has less than 1 percent of the global software tool market, according to data aggregated by scholars at Georgetown University’s Center for Security and Emerging Technology. When it comes to core intellectual property, the building blocks of transistor patterns from which many chips are designed, China’s market share is 2 percent; most of the rest is American or British. China supplies 4 percent of the world’s silicon wafers and other chipmaking materials; 1 percent of the tools used to fabricate chips; 5 percent of the market for chip designs. It has only a 7 percent market share in the business of fabricating chips. None of this fabrication capacity involves high-value, leading-edge technology.

China was disadvantaged, however, by the government’s desire not to build connections with Silicon Valley, but to break free of it. Japan, South Korea, the Netherlands, and Taiwan had come to dominate important steps of the semiconductor production process by integrating deeply with the U.S. chip industry. Taiwan’s foundry industry only grew rich thanks to America’s fabless firms, while ASML’s most advanced lithography tools only work thanks to specialized light sources produced at the company’s San Diego subsidiary. Despite occasional tension over trade, these countries have similar interests and worldviews, so mutual reliance on each other for chip designs, tools, and fabrication services was seen as a reasonable price to pay for the efficiency of globalized production. If China only wanted a bigger part in this ecosystem, its ambitions could’ve been accommodated. However, Beijing wasn’t looking for a better position in a system dominated by America and its friends. Xi’s call to “assault the fortifications” wasn’t a request for slightly higher market share. It was about remaking the world’s semiconductor industry, not integrating with it. Some economic policymakers and semiconductor industry executives in China would have preferred a strategy of deeper integration, yet leaders in Beijing, who thought more about security than efficiency, saw interdependence as a threat. The Made in China 2025 plan didn’t advocate economic integration but the opposite. It called for slashing China’s dependence on imported chips. The primary target of the Made in China 2025 plan is to reduce the share of foreign chips used in China.

The most controversial example of technology transfer, however, was by Intel’s archrival, AMD. In the mid-2010s, the company was struggling financially, having lost PC and data center market share to Intel. AMD was never on the brink of bankruptcy, but it wasn’t far from it, either. The company was looking for cash to buy time as it brought new products to market. In 2013, it sold its corporate headquarters in Austin, Texas, to raise cash, for example. In 2016, it sold to a Chinese firm an 85 percent stake in its semiconductor assembly, testing, and packaging facilities in Penang, Malaysia, and Suzhou, China, for $371 million. AMD described these facilities as “world-class.” That same year, AMD cut a deal with a consortium of Chinese firms and government bodies to license the production of modified x86 chips for the Chinese market. The deal, which was deeply controversial within the industry and in Washington, was structured in a way that didn’t require the approval of CFIUS, the U.S. government committee that reviews foreign purchases of American assets. AMD took the transaction to the relevant authorities in the Commerce Department, who don’t “know anything about microprocessors, or semiconductors, or China,” as one industry insider put it. Intel reportedly warned the government about the deal, implying that it harmed U.S. interests and that it would threaten Intel’s business. Yet the government lacked a straightforward way to stop it, so the deal was ultimately waved through, sparking anger in Congress and in the Pentagon. Just as AMD finalized the deal, its new processor series, called “Zen,” began hitting the market, turning around the company’s fortunes, so AMD ended up not depending on the money from its licensing deal. However, the joint venture had already been signed and the technology was transferred. 
The Wall Street Journal ran multiple stories arguing that AMD had sold “crown jewels” and “the keys to the kingdom.” Other industry analysts suggested the transaction was designed to let Chinese firms claim to the Chinese government they were designing cutting-edge microprocessors in China, when in reality they were simply tweaking AMD designs. The transaction was portrayed in English-language media as a minor licensing deal, but leading Chinese experts told state-owned media the deal supported China’s effort to domesticate “core technologies” so that “we no longer can be pulled around by our noses.” Pentagon officials who opposed the deal agree that AMD scrupulously followed the letter of the law, but say they remain unconvinced the transaction was as innocuous as defenders claim. “I continue to be very skeptical we were getting the full story from AMD,” one former Pentagon official says. The Wall Street Journal reported that the joint venture involved Sugon, a Chinese supercomputer firm that has described “making contributions to China’s national defense and security” as its “fundamental mission.” AMD described Sugon as a “strategic partner” in press releases as recently as 2017, which was guaranteed to raise eyebrows in Washington.

Chipmakers jealously guard their critical technologies, of course. But almost every chip firm has non-core technology, in subsectors that they don't lead, that they'd be happy to share for a price. When companies are losing market share or in need of financing, moreover, they don't have the luxury of focusing on the long term. This gives China powerful levers to induce foreign chip firms to transfer technology, open production facilities, or license intellectual property, even when foreign companies realize they're helping develop competitors. For chip firms, it's often easier to raise funds in China than on Wall Street. Accepting Chinese capital can be an implicit requirement for doing business in the country.

The ties between Huawei and the Chinese state are well documented but explain little about how the company built a globe-spanning business. To understand the company’s expansion, it’s more helpful to compare Huawei’s trajectory to a different tech-focused conglomerate, South Korea’s Samsung. Ren was born a generation after Samsung’s Lee Byung-Chul, but the two moguls have a similar operating model. Lee built Samsung from a trader of dried fish into a tech company churning out some of the world’s most advanced processor and memory chips by relying on three strategies. First, assiduously cultivate political relationships to garner favorable regulation and cheap capital. Second, identify products pioneered in the West and Japan and learn to build them at equivalent quality and lower cost. Third, globalize relentlessly, not only to seek new customers but also to learn by competing with the world’s best companies. Executing these strategies made Samsung one of the world’s biggest companies, achieving revenues equivalent to 10 percent of South Korea’s entire GDP.

Huawei’s critics often allege that its success rests on a foundation of stolen intellectual property, though this is only partly true. The company has admitted to some prior intellectual property violations and has been accused of far more. In 2003, for example, Huawei acknowledged that 2 percent of the code in one of its routers was copied directly from Cisco, an American competitor. Canadian newspapers, meanwhile, have reported that the country’s spy agencies believe there was a Chinese-government-backed campaign of hacking and espionage against Canadian telecom giant Nortel in the 2000s, which allegedly benefitted Huawei. Theft of intellectual property may well have benefitted the company, but it can’t explain its success. No quantity of intellectual property or trade secrets is enough to build a business as big as Huawei. The company has developed efficient manufacturing processes that have driven down costs and built products that customers see as high-quality. Huawei’s spending on R&D, meanwhile, is world leading. The company spends several times more on R&D than other Chinese tech firms. Its roughly $15 billion annual R&D budget is paralleled by only a handful of firms, including tech companies like Google and Amazon, pharmaceutical companies like Merck, and carmakers like Daimler or Volkswagen. Even when weighing Huawei’s track record of intellectual property theft, the company’s multibillion-dollar R&D spending suggests a fundamentally different ethos than the “copy it” mentality of Soviet Zelenograd, or the many other Chinese firms that have tried to break into the chip industry on the cheap.

Beijing’s aim isn’t simply to match the U.S. system-by-system, but to develop capabilities that could “offset” American advantages, taking the Pentagon’s concept from the 1970s and turning it against the United States. China has fielded an array of weapons that systematically undermine U.S. advantages. China’s precision anti-ship missiles make it extremely dangerous for U.S. surface ships to transit the Taiwan Strait in a time of war, holding American naval power at bay. New air defense systems contest America’s ability to dominate the airspace in a conflict. Long-range land attack missiles threaten the network of American military bases from Japan to Guam. China’s anti-satellite weapons threaten to disable communications and GPS networks. China’s cyberwar capabilities haven’t been tested in wartime, but the Chinese would try to bring down entire U.S. military systems. Meanwhile, in the electromagnetic spectrum, China might try to jam American communications and blind surveillance systems, leaving the U.S. military unable to see enemies or communicate with allies.

Measured by the number of AI experts, China appears to have capabilities that are comparable to America’s. Researchers at MacroPolo, a China-focused think tank, found that 29 percent of the world’s leading researchers in artificial intelligence are from China, as opposed to 20 percent from the U.S. and 18 percent from Europe. However, a staggering share of these experts end up working in the U.S., which employs 59 percent of the world’s top AI researchers. The combination of new visa and travel restrictions plus China’s effort to retain more researchers at home may neutralize America’s historical skill at stripping geopolitical rivals of their smartest minds.

The battle for the electromagnetic spectrum will be an invisible struggle conducted by semiconductors. Radar, jamming, and communications are all managed by complex radio frequency chips and digital-analog converters, which modulate signals to take advantage of open spectrum space, send signals in a specific direction, and try to confuse adversaries' sensors. Simultaneously, powerful digital chips will run complex algorithms inside a radar or jammer that assess the signals received and decide what signals to send out in a matter of milliseconds. At stake is a military's ability to see and to communicate. Autonomous drones won't be worth much if the devices can't determine where they are or where they're heading.

DARPA’s budget is a couple billion dollars per year, less than the R&D budgets of most of the industry’s biggest firms. Of course, DARPA spends a lot more on far-out research ideas, whereas companies like Intel and Qualcomm spend most of their money on projects that are only a couple years from fruition. However, the U.S. government in general buys a smaller share of the world’s chips than ever before. The U.S. government bought almost all the early integrated circuits that Fairchild and Texas Instruments produced in the early 1960s. By the 1970s, that number had fallen to 10−15 percent. Now it’s around 2 percent of the U.S. chip market. As a buyer of chips, Apple CEO Tim Cook has more influence on the industry than any Pentagon official today.

Commerce Secretary Penny Pritzker gave a high-profile address in Washington on semiconductors, declaring it "imperative that semiconductor technology remains a central feature of American ingenuity and a driver of our economic growth. We cannot afford to cede our leadership." She identified China as the central challenge, condemning "unfair trade practices and massive, non-market-based state intervention" and citing "new attempts by China to acquire companies and technology based on their government's interest—not commercial objectives," an accusation driven by Tsinghua Unigroup's acquisition spree. With little time left in the Obama administration, however, there wasn't much Pritzker could do. Rather, the administration's modest goal was to start a discussion that—it hoped—the incoming Hillary Clinton administration would carry forward. Pritzker also ordered the Commerce Department to conduct a study of the semiconductor supply chain and promised to "make clear to China's leaders at every opportunity that we will not accept a $150 billion industrial policy designed to appropriate this industry." But it was easy to condemn China's subsidies. It was far harder to make them stop.

U.S. intelligence had voiced concerns about Huawei’s alleged links to the Chinese government for many years, though it was only in the mid-2010s that the company and its smaller peer, ZTE, started attracting public attention. Both companies sold competing telecom equipment; ZTE was state-owned, while Huawei was private but was alleged by U.S. officials to have close ties with the government. Both companies had spent decades fighting allegations that they’d bribed officials in multiple countries to win contracts. And in 2016, during the final year of the Obama administration, both were accused of violating U.S. sanctions by supplying goods to Iran and North Korea. The Obama administration considered imposing financial sanctions on ZTE, which would have severed the company’s access to the international banking system, but instead opted to punish the company in 2016 by restricting U.S. firms from selling to it. Export controls like this had previously been used mostly against military targets, to stop the transfer of technology to companies supplying components to Iran’s missile program, for example. But the Commerce Department had broad authority to prohibit the export of civilian technologies, too. ZTE was highly reliant on American components in its systems—above all, American chips. However, in March 2017, before the threatened restrictions were implemented, the company signed a plea deal with the U.S. government and paid a fine, so the export restrictions were removed before they’d taken force.

Publicly, semiconductor CEOs and their lobbyists urged the new administration to work with China and encourage it to comply with trade agreements. Privately, they admitted this strategy was hopeless and feared that state-supported Chinese competitors would grab market share at their expense. The entire chip industry depended on sales to China—be it chipmakers like Intel, fabless designers like Qualcomm, or equipment manufacturers like Applied Materials.

Three companies dominate the world's market for DRAM chips today: Micron and its two Korean rivals, Samsung and SK Hynix. Taiwanese firms spent billions trying to break into the DRAM business in the 1990s and 2000s but never managed to establish profitable businesses. The DRAM market requires economies of scale, so it's difficult for small producers to be price competitive. Though Taiwan never succeeded in building a sustainable memory chip industry, both Japan and South Korea had focused on DRAM chips when they first entered the chip industry in the 1970s and 1980s. DRAM requires specialized know-how, advanced equipment, and large quantities of capital investment. Advanced equipment can generally be purchased off-the-shelf from the big American, Japanese, and Dutch toolmakers. The know-how is the hard part. When Samsung entered the business in the late 1980s, it licensed technology from Micron, opened an R&D facility in Silicon Valley, and hired dozens of American-trained PhDs. Another, faster, method for acquiring know-how is to poach employees and steal files.

There’s a long history in the chip industry of acquiring rivals’ technology, dating back to the string of allegations about Japanese intellectual property theft in the 1980s. Jinhua’s technique, however, was closer to the KGB’s Directorate T. First, Jinhua cut a deal with Taiwan’s UMC, which fabricated logic chips (not memory chips), whereby UMC would receive around $700 million in exchange for providing expertise in producing DRAM. Licensing agreements are common in the semiconductor industry, but this agreement had a twist. UMC was promising to provide DRAM technology, but it wasn’t in the DRAM business. So in September 2015, UMC hired multiple employees from Micron’s facility in Taiwan, starting with the president, Steven Chen, who was put in charge of developing UMC’s DRAM technology and managing its relationship with Jinhua. The next month, UMC hired a process manager at Micron’s Taiwan facility named J. T. Ho. Over the subsequent year, Ho received a series of documents from his former Micron colleague, Kenny Wang, who was still working at the Idaho chipmaker’s facility in Taiwan. Eventually, Wang left Micron to move to UMC, bringing nine hundred files uploaded to Google Drive with him. Taiwanese prosecutors were notified by Micron of the conspiracy and started gathering evidence by tapping Wang’s phone. They soon accumulated enough evidence to bring charges against UMC, which had since filed for patents on some of the technology it stole from Micron. When Micron sued UMC and Jinhua for violating its patents, they countersued in China’s Fujian Province. A Fujian court ruled that Micron was responsible for violating UMC and Jinhua’s patents—patents that had been filed using material stolen from Micron. To “remedy” the situation, Fuzhou Intermediate People’s Court banned Micron from selling twenty-six products in China, the company’s biggest market. 
This was a perfect case study of the state-backed intellectual property theft foreign companies operating in China had long complained of. The Taiwanese, of course, understood why the Chinese preferred not to abide by intellectual property rules. When Texas Instruments first arrived in Taiwan in the 1960s, Minister K. T. Li had sneered that “intellectual property rights are how imperialists bully backward countries.” Yet Taiwan had concluded it was better to respect intellectual property norms, especially as its companies began developing their own technologies and had their own patents to defend.

In May 2020, the administration tightened restrictions on Huawei further. Now, the Commerce Department declared, it would “protect U.S. national security by restricting Huawei’s ability to use U.S. technology and software to design and manufacture its semiconductors abroad.” The new Commerce Department rules didn’t simply stop the sale of U.S.-produced goods to Huawei. They restricted any goods made with U.S.-produced technology from being sold to Huawei, too. In a chip industry full of choke points, this meant almost any chip. TSMC can’t fabricate advanced chips for Huawei without using U.S. manufacturing equipment. Huawei can’t design chips without U.S.-produced software. Even China’s most advanced foundry, SMIC, relies extensively on U.S. tools. Huawei was simply cut off from the world’s entire chipmaking infrastructure, except for chips that the U.S. Commerce Department deigned to give it a special license to buy.

Since then, Huawei’s been forced to divest part of its smartphone business and its server business, since it can’t get the necessary chips. China’s rollout of its own 5G telecoms network, which was once a high-profile government priority, has been delayed due to chip shortages. After the U.S. restrictions took effect, other countries, notably Britain, decided to ban Huawei, reasoning that in the absence of U.S. chips the company would struggle to service its products.

It’s commonly argued that the escalating tech competition with the United States is like a “Sputnik moment” for China’s government. The allusion is to the United States’ fear after the launch of Sputnik in 1957 that it was falling behind its rival, driving Washington to pour funding into science and technology. China certainly faced a Sputnik-scale shock after the U.S. banned sales of chips to firms like Huawei.

Samsung and its smaller Korean rival SK Hynix benefit from the support of the Korean government but are stuck between China and the U.S., with each country trying to cajole South Korea’s chip giants to build more manufacturing in their countries. Samsung recently announced plans to expand and upgrade its facility for producing advanced logic chips in Austin, Texas, for example, an investment estimated to cost $17 billion. Both companies face scrutiny from the U.S. over proposals to upgrade their facilities in China, however. U.S. pressure to restrict the transfer of EUV tools to SK Hynix’s facility in Wuxi, China, is reportedly delaying its modernization—and presumably imposing a substantial cost on the company. South Korea isn’t the only country where chip companies and the government work as a “team,” to use President Moon’s phrase. Taiwan’s government remains fiercely protective of its chip industry, which it recognizes as its greatest source of leverage on the international stage. Morris Chang, now ostensibly fully retired from TSMC, has served as a trade envoy for Taiwan. His primary interest—and Taiwan’s—remains ensuring that TSMC retains its central role in the world’s chip industry. The company itself plans to invest over $100 billion between 2022 and 2024 to upgrade its technology and expand chipmaking capacity. Most of this money will be invested in Taiwan, though the company plans to upgrade its facility in Nanjing, China, and to open a new fab in Arizona. Neither of these new fabs will produce the most cutting-edge chips, however, so TSMC’s most advanced technology will remain in Taiwan.

The primary hope for advanced manufacturing in the United States is Intel. After years of drift, the company named Pat Gelsinger as CEO in 2021. Born in small-town Pennsylvania, Gelsinger started his career at Intel and was mentored by Andy Grove. He eventually left to take on senior roles at two cloud computing companies before he was brought back to turn Intel around. He’s set out an ambitious and expensive strategy with three prongs. The first is to regain manufacturing leadership, overtaking Samsung and TSMC. To do this, Gelsinger has cut a deal with ASML to let Intel acquire the first next-generation EUV machine, which is expected to be ready in 2025. If Intel can learn how to use these new tools before rivals, it could provide a technological edge. The second prong of Gelsinger’s strategy is launching a foundry business that will compete directly with Samsung and TSMC, producing chips for fabless firms and helping Intel win more market share. Intel’s spending heavily on new facilities in the U.S. and Europe to build capacity that potential future foundry customers will require.

If TSMC’s fabs were to slip into the Chelungpu Fault, whose movement caused Taiwan’s last big earthquake in 1999, the reverberations would shake the global economy. It would only take a handful of explosions, deliberate or accidental, to cause comparable damage. Some back-of-the-envelope calculations illustrate what’s at stake. Taiwan produces 11 percent of the world’s memory chips. More important, it fabricates 37 percent of the world’s logic chips. Computers, phones, data centers, and most other electronic devices simply can’t work without them, so if Taiwan’s fabs were knocked offline, we’d produce 37 percent less computing power during the following year.

After a disaster in Taiwan, in other words, the total costs would be measured in the trillions. Losing 37 percent of our production of computing power each year could well be more costly than the COVID pandemic and its economically disastrous lockdowns. It would take at least half a decade to rebuild the lost chipmaking capacity. These days, when we look five years out we hope to be building 5G networks and metaverses, but if Taiwan were taken offline we might find ourselves struggling to acquire dishwashers.

Neil Thompson and Svenja Spanuth, two researchers, have gone so far as to argue that we’re seeing a “decline of computers as a general purpose technology.” They think the future of computing will be divided between “ ‘fast lane’ applications that get powerful customized chips and ‘slow lane’ applications that get stuck using general-purpose chips whose progress fades.” It’s undeniable that the microprocessor, the workhorse of modern computing, is being partially displaced by chips made for specific purposes. What’s less clear is whether this is a problem. Nvidia’s GPUs are not general purpose like an Intel microprocessor, in the sense that they’re designed specifically for graphics and, increasingly, AI. However, Nvidia and other companies offering chips that are optimized for AI have made artificial intelligence far cheaper to implement, and therefore more widely accessible. AI has become a lot more “general purpose” today than was conceivable a decade ago, largely thanks to new, more powerful chips. The recent trend of big tech firms like Amazon and Google designing their own chips marks another change from recent decades. Both Amazon and Google entered the chip design business to improve the efficiency of the servers that run their publicly available clouds. Anyone can access Google’s TPU chips on Google’s cloud for a fee. The pessimistic view is to see this as a bifurcation of computing into a “slow lane” and a “fast lane.” What’s surprising, though, is how easy it is for almost anyone to access the fast lane by buying an Nvidia chip or by renting access to an AI-optimized cloud.


Key Points from Book: Oceans of Grain – How American Wheat Remade the World


by Scott Reynolds Nelson

By the spring of 2011, we were already seeing some of the longer-lasting results of the 2008 downturn. For example, a surge in the price of grain had led Arab states—which import most of their food—to stop subsidizing the price of bread in cities. Bread riots followed in an “Arab Spring” that would soon topple governments in Libya, Egypt, Tunisia, and Syria.4 Newspaper reporters were flying to the Arab world because of protests there, but as a historian I was heading to Odessa. Egyptian protesters called for “bread, freedom, and social justice” in 2011. I was thinking about calls for bread, freedom, and justice in the French Revolution of 1789, the downfall of Sultan Selim III in 1807, the European Revolutions of 1848, the Young Turk Revolution in 1908, and the Russian Revolution in 1917. Wars and revolutions now, just as in the past, have much to do with wheat. That is the topic of this book.

After Napoleon’s defeat, these vast fields of Russian wheat did not delight European landlords. They faced what is called “Ricardo’s paradox,” in which rents drop when food gets cheap. For forty years, taxes on foreign grain slowed the flow of cheap sacks of Russian Azima and Ghirka wheat. But then a water mold, unknowingly carried in from America, killed potatoes and brought food insecurity that forced European states to open the trading floodgates to wheat again in 1846. A century-long contest emerged between the wheat fields of Russia and the wheat fields of America to feed Europe’s working class.

Russia’s boom went suddenly bust when larger boatloads of cheap American wheat burst across the ocean to European markets in the wake of the American Civil War. A group of US capitalists I call the boulevard barons helped break the power of southern enslavers and then stole a march on Russia’s grain trade. The boulevard barons who sold grain internationally had partnered with the Union Army to create a new financial instrument called the futures contract, which allowed a London merchant to buy ten thousand bushels of wheat in Chicago and sell it for future delivery on the same day in Liverpool, nearly eliminating the risk of price fluctuations. Other innovations cheapened the cost of delivering American wheat. An Atlantic telegraph allowed a futures contract to be purchased from an ocean away. Portable nitroglycerin widened American rivers and cut through the Appalachian Mountains that separated American prairies from the coast. Huge sailing ships that could never pass through the Suez Canal were forced onto the Atlantic. While Odessa at its peak could export a million tons of wheat each year, New York in 1871 was putting a million tons of grain afloat every week. As a result, European grain prices dropped nearly 50 percent between 1868 and 1872, and merchant fees fell along with them.
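The hedge described above, buying wheat in Chicago while simultaneously selling it for future delivery in Liverpool, can be sketched numerically. Here is a minimal illustration in Python; all prices and the freight figure are hypothetical:

```python
def hedged_margin(spot_chicago, futures_liverpool, freight_cost):
    """Per-bushel margin locked in the day both contracts are signed.

    spot_chicago:      price paid per bushel in Chicago today
    futures_liverpool: price agreed today for future delivery in Liverpool
    freight_cost:      shipping and handling per bushel

    Because the sale price is fixed in advance, the merchant's profit
    no longer depends on where prices move during the two-month
    Atlantic crossing.
    """
    return futures_liverpool - spot_chicago - freight_cost


# Hypothetical numbers: buy at $1.00, sell forward at $1.20, 12 cents freight.
margin = hedged_margin(1.00, 1.20, 0.12)
# Whether the Liverpool spot price ends at $0.90 or $1.50 on arrival,
# the merchant still nets the locked-in margin on every bushel.
```

The point of the sketch is only that the margin is fixed at signing, which is what "nearly eliminating the risk of price fluctuations" means in practice.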

By the middle of 1873, Ricardo’s paradox had done its work, not just in Russia but in much of Europe. The Bank of England, fearing that banks were using interbank lending credit to buy up real estate, raised interest rates in a series of shocks. A real estate bubble burst almost simultaneously in Odessa, Vienna, and Berlin. This so-called Agrarian Crisis set off a financial panic and then an economic downturn in agricultural Europe that was so severe, it was known, until the 1930s, as the Great Depression. In other words, oceans of grain had flooded Europe, and the flush times in Odessa and much of central Europe had ended, sending shock waves around the world.

By 1914 Russia’s anxiety that Turkey might halt Russian grain shipments on the Black Sea helped start World War I—a war over nothing less than foreign bread. Russia had lost a hundred thousand men in the Russo-Japanese War; it would soon lose millions more in a fight over oceans of grain. The loss of those men, who would never again harvest wheat, brought Russia again to the brink of revolution.

Parvus argued that trade was an active force of its own that “took on different forms and gained different meanings” in different societies, ancient, medieval, and modern. Trade, he thought, shaped the structure of a society in ways impossible to fully understand. Empires assembled themselves on paths of trade, he argued, but were vulnerable on the very lines that connected them to their inner and outer rings; they were thus prone to what he called Zusammenbruch: crash, breakdown, or collapse.

Historians, like geographers, have long treated grain ports, like those on the Black Sea, as the children of thalassic empires, with chumaki as their worker bees. The ancient Greek word for such a provisioning port is emporion, the source of the word “empire.” Port traders in these emporia specialized in gathering, drying, and storing food for shipment. Grain came as trade, tribute, and tax to the emporia to feed the arms of an empire, its armies. In the historian’s imagination the Roman Empire built trade in western Europe, for example, with Roman roads, mileposts, and armies. There was no China, the story goes, until Han canals fused the region into a single domain of trade. New archaeological evidence suggests that the folklorists have it right and that the black paths are prehistoric, nearly as old as bread itself. The proof that trade pathways were ancient is a tiny bacillus that traveled inside chumaki traders’ bodies: Yersinia pestis. This is the bacillus that causes what we now call plague but which Slavs called chuma. The chuma crossed these plains many times, each time riding on trading paths, each time decimating human populations in the towns where grain was gathered and stored. Chuma rode with the chumaki.

Empires, for their part, claimed to police and protect trade. Indeed, imperial origin stories often emphasize their capacity to drive out competing tax agents (commonly called robbers, highwaymen, or pirates). Thus Thomas Carlyle, in extolling the growing empire of Frederick the Great, argued that his greatness came from defeating the robbers that demanded tribute for trade over the Rhine River and were ruining Germany. “Such Princes, big and little, each wrenching off for himself what lay loosest and handiest to him, found [robbery] a stirring game, and not so much amiss.”11 The heroic Frederick the Great replaced local robbery with an even more stirring game: taxing robbers. For their own benefit emperors might cheapen trade by forcing imperial subjects to improve roads, build milestones and lighthouses, and deepen ports. In improving prehistoric trade routes between towns, an empire could decrease the price of what I will call, using an obsolete medieval term, “tollage,” a travel cost measured in pennies per ton per mile.12 This was simultaneously a measure of cost, weight, and distance. Absolutist states turned rivers into canals and built roads across rivers. Decreasing tollage centralized imperial authority and quickened trade.
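Because tollage is defined as a simple rate in pennies per ton per mile, the delivered cost of any cargo falls out of one multiplication, and halving the rate halves the cost. A tiny Python sketch with invented numbers:

```python
def haul_cost_pennies(tollage_rate, tons, miles):
    """Total transport cost, given a tollage rate in pennies per ton-mile."""
    return tollage_rate * tons * miles


# Hypothetical: 2 pennies per ton-mile, 50 tons of grain, 100 miles.
cost = haul_cost_pennies(2, 50, 100)   # 10,000 pennies
# An empire that improves roads and canals enough to halve tollage
# halves the delivered price of grain, which is how decreasing tollage
# "quickened trade."
```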

In AD 324, after the Roman caesar Constantine defeated his rivals and declared himself emperor, he relocated the Roman Empire’s imperial capital to Byzas’s hill, the safe and defensible pinch point that could command the fruits of Europe, Asia, and Africa. In AD 330, Constantine planted a column—Cemberlitas—at the forum in Byzantium, rededicated Byzantium as New Rome, and invited wealthy and well-connected families throughout the old Roman Empire to settle there. At some point later it became Constantinople, in Constantine’s honor. Traders from the Black and Aegean Seas delivered grain to horrea, massive grain banks large enough to feed citizens during long sieges by rival empires.6 These granaries of the Greek, Roman, and Byzantine empires were the predecessors of modern banks.7 Elite citizens made deposits and withdrawals of grain by wheelbarrow. Individual vaults in a horreum stored valuables, just as safety-deposit boxes do in many downtown banks today. A receipt for grain stored in the horreum could be bought or sold, used as collateral for contracts, or seized in cases of debt. These grain receipts collectively became what we now call money.

Grain pathways on the Black Sea and the Mediterranean fed Constantinople for over a thousand years, from before 300 to 1453. The imperial city’s wealth rose and fell as the black paths converging on the Bosporus expanded and contracted.

Medieval western Europe, often cut off from regular trade with the East after 542, changed drastically. Just as bread made prehistoric fables and fed ancient empires, it increasingly defined medieval serfdom as lordship over smaller communities by bread-making masters. Formal slavery declined in Eurasia after the Plague of Justinian, though historians hotly debate whether that was a result of the plague. Aristocratic landlords derived their control in part by monopolizing grain milling and distribution, just as the kings of ancient empires had done, but on a much smaller scale. In medieval England, for example, the word “lord” comes from hlāford, meaning “loaf-ward”: the person who guards the loaves distributed from the medieval mill and bakery. The word “lady” derives from hlǣfdīge, meaning “loaf-kneader”: a maker of loaves. In part because the communal bakery turned wheat or rye into bread, controlling the loaf meant controlling people.

In 1347, the four horsemen appeared again, heralding the return of Yersinia. The bacillus probably came from the eastern steppe over the Silk Roads that stretched across the Mongol World Empire into the khanate of the Golden Horde. Yersinia’s first documented arrival from this route was in the Black Sea emporium of Caffa. According to legend, Mongols besieging the emporium became infected with plague. They then allegedly used catapults to launch infected corpses over the city gates.20 While there are reasons to doubt the story, new genetic evidence suggests that the plague’s expansion from Central Asia onto the steppes as early as the 1200s may have helped the Mongol Empire’s expansion east and west from what is now Mongolia.21 The plague started overland but found access to water by 1340. Genoese and Venetian traders had by this time established long-distance sea routes from the Black Sea to the Mediterranean, as chartered agents of Constantinople. Along with grain and slaves, traders again brought plague through the gates of Constantinople to western Europe.

Historians have called these Genoese and Venetian traders the first capitalists.22 As authorized agents of the Byzantines between eastern and western ports, and as competitors with the Islamic empires in the south, they combined the technologies of both trading corridors. Early in the fourteenth century they blended ancient Roman and more recent Islamic traditions, including Arabic numerals and legal agreements, to craft private bills of exchange. Using advances from Islamic algebra, these capitalist traders helped to develop and define double-entry bookkeeping. The first European central bank, the Camera del Frumento in Venice, purchased grain from ports along the Black Sea, then resold it to cities on the Mediterranean. Merchants borrowed from local citizens by drafting bills of exchange in banks with a promise to pay in ninety or more days when the ships came in. These bills of exchange were private credit instruments, guaranteed by the name of the trader, which any citizen could buy. A bill of exchange increased in value between the time it was issued and when the ship came in, allowing it to act as a privately issued, appreciating currency.
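The appreciation described above, a bill worth more as the ship nears port, behaves like a simple discount instrument: it is issued below face value and converges to face value at maturity. A rough sketch in Python, using simple-interest discounting with purely illustrative rates and dates:

```python
def bill_value(face_value, annual_rate, days_to_maturity):
    """Present value of a bill of exchange payable when the ship comes in.

    Simple-interest discounting on a 360-day year, a common convention
    for short-dated paper; the numbers here are hypothetical, not drawn
    from Venetian records.
    """
    return face_value / (1 + annual_rate * days_to_maturity / 360)


# A 100-ducat bill due in 90 days, discounted at 10% per year:
issue_price = bill_value(100, 0.10, 90)   # ~97.6 at issue
at_sea = bill_value(100, 0.10, 30)        # ~99.2 a month before arrival
# The bill rises toward face value as maturity approaches, which is what
# let it circulate as a privately issued, appreciating currency.
```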

Between 541 and 1347, control of bread became baked into the laws of medieval European, North African, and Arabian empires that surrounded the Bosporus Strait. Kings, queens, aristocrats, sultans, and tsars built their power on grain, regulating the size of loaves and carefully controlling grain and the boundaries where it was grown. As late as 1835, bakers in Britain remained public employees paid by the state for each loaf of bread they produced. The same was true in Istanbul, where the nan-i ‘aziz, or standard loaf, weighed exactly 110 dirhem (just over thirteen ounces). If a baker’s loaf weighed less, a market inspector might instruct local police to parade him through the streets or, after multiple offenses, nail his ear to the door of his shop.24

When the Ottomans took over Constantinople in 1453, they had nominal control of Podolia but shortly lost it to Polish and Russian princes who warred over the bread lands on those rivers. While the princes put their names in chronicles, farmers north of the Black Sea did the more vital work over the centuries, seeking and recombining nearby strains of wheat to suit the weather. We have little of worth from the princes, but the thousands of unremembered farmers left us something much more vital for humanity’s long-term survival: dozens of varieties of wheat suited to dozens of microclimates and seasons. The later settlement of western Canada, the northern United States, Argentina, and Australia would have been impossible without the many landrace strains of wheat that developed over centuries in this region.

Empires survive only as long as they control the sources of food needed to feed soldiers and citizens; they fund themselves by taxing those who sell it. Before empires, ancestors of the chumaki traded food over long distances along with salt and leather. International trade shrank in periods when Yersinia pestis, in the bellies of rats, found a way to hitchhike on those same trade routes.

In 1768 Catherine relied on another note to trade for her army’s wheat: the assignat. The assignat, like the British pound, became an imperial currency and represented the tsarina’s future promise to pay for provisions. In the same period Catherine seized land previously owned by the Russian Orthodox Church inside Russia’s borders. Serf owners could buy this land with assignats. This made the assignat a particularly valuable form of currency.3 While Venetian bills of exchange represented grain in motion, Catherine’s assignats represented recently seized land and future land her empire would take by force.

The assignat was a bold move, one France soon adopted when French revolutionaries seized lands from the Roman Catholic Church.4 Catherine created a national debt in a strategy that would be embraced by an infant empire created at nearly the same moment: the United States. Indeed both Thomas Jefferson and Benjamin Franklin plunged into the same physiocratic waters that Catherine did. Physiocratic ideals shaped their vision of agricultural colonization of the West, investment in education, and plans for the export of grain. Those ideals would define their plans for independence.5 While French reformers defined American and Russian plans for expansion by wheat, the plan to turn national debt into currency likely came directly from capitalist traders in the Dutch Empire. Hope and Company in the Netherlands advised all three empires. A Hope representative would have pointed out how in the seventeenth century the Netherlands had established a national debt, created a national bank to issue that debt, and then used debt to expand its military empire around the world. Shortly after the Dutch expansion, Great Britain appropriated a deficit-based expansion strategy when English lords persuaded the Dutch prince William and his wife Mary to take the English throne. William formed the Bank of England in 1694. British consols and bonds helped fill the Atlantic with English ships. Historians have called this the financial revolution.

North America’s flour barrels occasionally made it all the way across the Atlantic, particularly during European wars. But the risk of selling flour was always great since the price and condition of flour barrels could change drastically during a stormy two-month journey across the Atlantic. Even so, the former colonies’ love affair with grain never faltered. From 1793 to 1815, continuing wars between Republican France and Europe provided opportunities for the Americans to provision the ships, as well as the tropical islands, of Britain, France, and Spain. In those years the country exported an average of a million barrels of flour a year at an average price of nearly $10 a barrel. “Our object is to feed and theirs to fight,” quipped Secretary of State Thomas Jefferson in 1793 after news emerged of France’s expanding war with the European powers. “We have only to pray that their souldiers may eat a great deal.” The French Revolutionary Wars drove wheat prices so high in Europe that American ships could occasionally feed ports in Spain, Italy, and Britain.

Because physiocrats viewed farmers as the primary creators of wealth, they saw landlords as a drain on the national balance sheet. As a result physiocrats argued that taxes should rest on landowners. In both the United States and Russia, powerful landed interests (enslavers and serf owners) strongly rejected this principle.23 Spending to boost agriculture, however, they strongly supported. As a result both landed empires imposed minor taxes on imported manufactures, because taxing goods at a few ports was simpler than collecting income or land taxes.

Provisioning flour to cities at war was as risky for the United States as it was for Russia. American shippers had to be especially inventive in defying British and French blockades. They introduced the concept of the “broken voyage” in which a shipper would bring sugar from a French colony, stop for a day in Baltimore to pick up grain, and then ship out again for France with a new ship manifest that declared all the goods to be American. Whether merchants practiced intentional physiocracy or not, selling grain past imperial blockades built merchant fortunes and expanded the international market for American flour, which fed the US Treasury before it had even erected a building.

In 1784, Britain imposed Foster’s Corn Law to ratchet up Irish grain imports over Russian and American grain. The danger, as the English Crown and Parliament saw it, was that spending foreign exchange on wheat would pull gold and silver out of Britain. Subsidizing grain fields just offshore was a classic imperial move, one that Julius Caesar would have applauded but that physiocrats abhorred. Britain expanded those corn laws in 1815 after Napoleon’s defeat. The United States responded with the American Navigation Acts of 1817 and 1818 to block selected British manufactures. Britain responded in turn with the Free Port Act of 1818, one of the most important and understudied acts in American history. Proclaimed on August 13, 1818, it blocked American ships from entering British ports, with the exception of the distant Canadian ports of Halifax, Nova Scotia, and St. John, New Brunswick. Once a Canadian buyer took grain and other provisions, that merchant could only use British ships to carry this American grain into the Caribbean.28 The result in America was a 50 percent drop in the price of flour and the American panic of 1819, perhaps the severest panic in nineteenth-century American history. Land prices dropped 40 percent, particularly along the Mississippi River. The US Land Office and the Second Bank of the United States seized thousands of acres for defaulted loans. In the sixteen years between 1803 and 1819, the total value of American wheat and flour exports had averaged $10 million, about the same as cotton exports. By 1820 American wheat and flour had shrunk to one-fifth the value of cotton exports. By the 1830s it had sunk to one-tenth. Cotton replaced wheat as America’s most valuable export. A Catherine-style policy of expanding by grain export appeared on the decline, at least relative to the growing empire of cotton.

Catherine was dead by the time Napoleon’s armies expanded east across Europe, adding up victories and bodies, crushing baronies and kingdoms. Understanding the physiocratic principle that food was power, he did his best to close off every European grain port to English commerce. He did so by seizing towns along the Baltic, North Atlantic, and Mediterranean coasts and inventing nearly a dozen tributary republics to control them, with every republic sworn to block British trade. Britain responded to this “Continental System” with orders in council that blockaded every one of Napoleon’s ports. No neutral state, according to the British orders in council, could use a port that blocked British commerce. This was warfare, in part, through bread: Britain couldn’t buy grain in European ports, but France couldn’t easily carry bread over water to feed armies. Bread brinksmanship of this kind would be repeated in World Wars I and II. By 1807, when it came to bread, the British and the French had checked each other. Britain could not buy flour abroad, and Napoleon’s armies would have to lug their food overland along vast army corridors with tree-lined vistas that are still visible in Europe today.

In roughly 7000 BC, South American hunter-gatherers began selecting and modifying nightshade plants for their fleshy roots. They created what we call the potato, though in the region that became the Inca Empire, hundreds of different varieties emerged, with dozens of different names. Phytophthora infestans coevolved as the potato’s parasite, an invasive water mold that colonizes, reproduces inside, and then devours potatoes, their living host.1 After Europeans encountered the Americas, they transplanted American potato tubers throughout western Europe, between roughly 1700 and 1840. The potato’s dry and dormant transmission over the cold Atlantic appears to have temporarily rescued the plant from its well-evolved, invisible freeloader. Though it took many decades for European growers to adjust to the food, the fleshy, bulbous, white potatoes soon became a crop for farmers, peasants, and their neighbors, most famously in Ireland but also throughout continental Europe. An underground crop that required work to bring up from the ground, potatoes were relatively safe from the depredations of a different kind of freeloader: imperial soldiers. Potatoes, unlike wheat, do not have a Persephone stage of dry safekeeping. Locking up a potato for long-range transmission to an imperial capital is difficult. Thus wheat-growing peasants began to grow potatoes for themselves and for those who lived near them. Because these vegetables traveled short distances on farmers’ trucks, Americans have called them “truck crops.” Within a few generations a social hierarchy emerged in Europe that resembled the Incan hierarchy. In the Inca Empire potatoes fed agricultural workers, while the dry, transportable starches—quinoa and other grains—were locked up and delivered to the elite.

Famine and revolution in the 1840s, though, burned new grain pathways from Russia into Europe. With infestans on the loose, bread increasingly replaced potatoes as poor people's food, most decidedly in European cities whose sizes had always been limited by the availability of food. By 1850, as many as four hundred ships per year carried grain directly from the Black Sea to European ports, providing food for Europe's urban workers.18 This cheap Black Sea wheat altered the quality of bread that Europeans ate and, with it, Europeans' sense of divisions among social classes.

Odessa's bounty allowed working families in the 1850s to buy their bread white, which most people preferred, not recognizing that the bran and germ in brown bread made it a healthier food because it supplied more protein and delivered indigestible bits (“roughage”) that scrubbed the sides of the intestines. Urban, working-class families got shorter over the mid-nineteenth century, probably in part as a perverse side effect of the white bread upgrade that the wheat fields around the Black Sea provided to working-class diets.

Thus emerged what Parvus called the European consumption-accumulation city. Labor and capital accumulated where food was cheapest. Cheap food arriving by water meant that cities with the deepest docks thrived. As emigrants and orphan children from nearby rural areas filled these dock cities, would-be manufacturers collected and deployed them. Reformers and capitalists huddled poor people fresh from the countryside into workhouses, where they assembled goods from foreign and domestic sources: matchsticks, pencils, hard candy, lead toys, wooden boxes, and combs, to name just a few. Near the docks, storage and further processing of food expanded. Far from the docks, inland processing of grain declined. Tens of thousands of inland wind- and watermills became historical relics within a generation. Dozens of inland towns competed for river, canal, and railroad access to these consumption-accumulation cities. The successful ones became cities by drawing cheap food from coastal ports and specializing in manufacturing. Capital increasingly accumulated at ports in the hands of middle- and upper-class families with too little land to spend it on.

Both the United States and the Russian Empire ended forced labor without the full cooperation of the owners of human flesh. The two empires disowned manorial lords and enslavers in a cataclysmic end to slavery and serfdom with effects that reverberate to this day in Russia, Poland, Ukraine, and the American South. Historians have tended to laud imperial and nationalist heroes for the end of slavery and serfdom: Catherine the Great, who professed to dislike the harsh punishment of serfs; “the liberator” Alexander II, who demanded that his Council of Ministers end bondage by shouting, “I desire, I demand, I command”; and Abraham Lincoln, who wrote in 1860 that he could compromise with slaveholders on other things but would “hold firm” against slavery’s extension “as with a chain of steel.” Certainly the language used to end slavery and serfdom dropped from their lips like sweet-smelling myrrh.6 As we shall see, however, the end of serfdom and slavery had little to do with the bold pronouncements of emperors and presidents. The shearing of bondage and wheat, of slavery and capitalism, was a complicated and bloody matter. It had much to do with how wheat grew, who harvested it, and how farmers expanded across the plains. The economics of railroad freight, the influence of foreign investors, and the impact of war contributed more to the rapid end of serfdom and slavery than liberal impulses and amber waves of grain.

Russia’s ninth invasion of Turkey ended rather differently than the previous eight. It started the first global war over bread since Napoleon, and its conclusion would, more than any other factor, contribute to the end of serfdom in Russia. In the previous eight wars between Russia and Turkey, the British and French monarchies occasionally defended the Ottoman Empire against Russian and Austrian aggression, but Russian physiocratic expansion had mostly benefited those empires as it provided them with cheap grain. In part because France and Britain had fostered Ottoman independence movements in what became Greece and Egypt, they mostly looked away as Russia and Austria alternated in carving off Ottoman-controlled regions along the Black Sea. But cheap bread, and Western European empires’ dependence on it after 1845, kept the attention of Britain and France on the region. Both states worried about a Russian grain monopoly. European observers of grain exports argued that Russia had been intentionally disabling its competitors on the inland ocean. The most glaring example was that the Russians had been entrusted with ensuring that the Danube exited freely into the Black Sea, but for decades they had allowed it to silt up, weakening the export prospects for the independent states of Wallachia and Moldavia.

Alexander, like nearly every tsar after Catherine, regretted serfdom, but the Russian failures in the Crimean War accelerated change. The empire’s key financial advisor, Julius Hagemeister, faced three related problems that became intimately connected after Nicholas’s abortive war for control of the Bosporus Strait. In the long term, according to Hagemeister, serfdom would always hold back full exploitation of the plains above the Black Sea. Having visited numerous farms and landed estates in that region in the 1830s, he felt that family ownership yielded more crops per acre and produced the cleanest, most sellable wheat. Serf estates, he had learned from grain traders, produced dirty wheat, filled with rocks and sand. As the wheat traveled inland, he continued, serfs had no desire to look after the grain and so often left it uncovered, causing it to spoil on its way to market and sell at a steep discount. In Russia, according to Hagemeister, wheat would always have a serfdom problem.

Unlike in the United States, the formerly bound population of Russia actually received land, even if it took small farmers more than a generation to pay it off. No such redistribution took place in the American South after emancipation, a result that has hobbled the family fortunes of the formerly enslaved to this day. As in Russia, wheat production relocated when slavery ended in the United States. After the American Civil War, grain came increasingly from the area around the Great Lakes rather than its former preserves in Virginia, Maryland, Missouri, and Kentucky.

Nicholas’s dream of capturing Istanbul with a serf army ended with a humbled, almost bankrupted empire in which his son, in a desperate attempt to stave off bankruptcy, abolished serfdom. The Peace of Paris in 1856 placed limits on Russia’s expansive power. The treaty created an independent international body that took control of the Danube’s grain route through the Black Sea. Within a few years, European powers would discover a way to blast out the Danube’s exit and allow a new grain state, Romania, to emerge as Russia’s miniature rival.31 Finally, the allies against Russia banned Russian warships from passing through the strait at Istanbul. With serfdom ending and Russian imperial expansion diminished, Britain and France felt they had tamed Russia.

As historian Laurence Evans has suggested, the railway company, as both a road and the monopoly agent on that road, defied the logic of traditional economic models of supply and demand curves: “What is [the economist] to make of a good [like a railroad] that cannot be stored; that is dissipated forever if not used when it is available; that cannot be removed from the market except at substantial cost to the supplier; and that must be operated at less than maximal efficiency if it is to be of the greatest benefit to the market and the economy as a whole?”46 In many countries, the government response to the difficulties posed by this kind of monopoly pathway was to nationalize railway companies. As we shall see, a decade after the American Civil War brought cheap grain to Europe, Prussia and Russia assumed control of most railway companies, producing interesting, perverse incentives. Adjusting railway rates could sharpen or dull the effects of tariffs, encourage fiscal overreach, and make state capture by political elites more appealing. Continued private ownership in the United States before the Civil War, however, produced a different set of incentives. Because of the intertwining of economic and political power, railroad trunk lines remained in the hands of the merchant princes, allowing them to multiply and diversify their assets. From the outset, these merchants’ obsessive attention to the wants of the railroad’s customers turned them into social engineers, for railroads could carefully calibrate the prices charged for every manner of good that passed back and forth. A minor change in railroad rates could promote or dampen incentives—crop by crop—for farmers, artisans, and manufacturers. For example, grain always traveled at the cheapest, fourth-class rate on a railroad through the midwestern plains, making grain growing an obvious first choice for farmers near its edge. 
Grain farmers thought twice about diversifying into crops that would be charged second- and first-class rates. Monopoly corridors, by favoring a single commodity with low shipping rates, helped strengthen monocultures: wheat in the Midwest and cotton in the South. Railway companies also operated coal and copper mines along their corridors and frequently charged higher rates to competing mines to strangle their earnings.47 High rates for shipping manufactured goods to the West led rural people to manufacture their own substitutes, but by suddenly dropping the rate for imported goods, a railway company could destroy an inland manufacturer. American farmers on these monoculture railway lines did not despise capitalism; they despised the publicly favored, privately owned railway companies that—once built—charged rent on their every interaction with the outside world.

The Republican merchant princes of New York, Boston, and Philadelphia understood intimately why a railroad through the slave state of Missouri would fail miserably. As we shall see, slavery helped produce a society with an insubstantial middle class of resellers and consumers of eastern goods. Impoverished enslaved people couldn’t buy cloth, razors, plate glass, or hard candies. Without a sturdy middle class of consumers, no one would erect stores to sell eastern goods in interior regions. While it seems ironic that New York millionaires would resent slaveholders for their inordinate wealth, this was precisely the boulevard baron’s problem with slavery. From the founding of the Free Soil Party and the Republican Party that followed, these important merchants hated slavery not on moral but on economic grounds, declaring that slavery degraded labor, slave and free, producing a society of extreme inequality.56 Nonslaveholding communities had “populous, thriving villages and cities,” according to Republican orator and Iowan James Grimes, but if southern congressmen forced slavery into productive territories like Nebraska, then the developmental possibilities of the West would be lost.

The tension between bondage and railroads, between slavery and capitalism, was more than just political. Most southern railroads faced a serious problem with backhaul: railway cars moved east with the slave-produced staples of cotton and tobacco, but the demand for hardware, dry goods, manufactures, and imports in slave states was minuscule. Railway cars returned from east to west mostly empty, effectively doubling the price of goods sent from west to east.

A conflict over inequality and forced labor in the Western Hemisphere would alter the world’s grain pathways. Once the South seceded and Confederate cotton was blockaded, the Union cabinet and Congress knew that they needed a new crop for foreign exchange in order to fight secession. And Americans in the War Department recognized that if the nation’s roads could be refashioned to transport wheat more efficiently than the Ukrainian chumaki, they might turn Lake Michigan into another Black Sea and Chicago into another Odessa. The pathways of the world’s grain might change again. In December 1863, Peter H. Watson and David Dows created a new technology that would alter the flow of grain: a futures market that could bring oats and grain to soldiers stationed a thousand miles away. A new kvassy empire, built on the export of wheat, was in the making.

Whereas Catherine the Great had successfully issued assignats to pay for her war against the Ottoman Empire, Union-issued paper money had given Watson nothing but headaches. After February 1862, the War Office was paying for supplies with the US Treasury’s legal tender notes—called greenbacks. Unlike dollars issued by prewar banks, these were not backed by gold or silver reserves. As a result they traded for as little as thirty-five cents to the gold dollar. Two prices were thus frequently quoted for commodities during the war: a low price for gold dollars and a higher one for greenbacks. The other problem for potential contractors was time: because the government paid its debts in greenbacks, signing a contract with the government could be risky if the value of the currency dropped by the time government auditors approved the purchase. Even worse, after mid-January 1863 the commissary-general’s office had been paying contractors not with checks but with “certificates of indebtedness” that would be payable at a later, unspecified date.10 Of course, if the Union won an important battle or two in the interim between billing and payment, the greenbacks would be worth more, sharply increasing the profits of a contractor.11 This made the intelligence gleaned by spies in the War Department valuable. The rapid fluctuations in both the price of oats and the value of the notes paid for them made contracting with the government risky. The number of contractors willing to assume the risk shrank through 1863, and the prices of provisions rose.
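The dual quotation described above is simple arithmetic. A minimal sketch, with hypothetical figures (the 35-cent rate comes from the passage; the 70-cent gold price is an assumption for illustration):

```python
# If greenbacks trade at 35 cents to the gold dollar, a commodity quoted in
# gold commands a proportionally higher quote in greenbacks. Figures other
# than the exchange rate are illustrative assumptions, not historical prices.

def greenback_quote(gold_price_cents: float, greenback_value_cents: float) -> float:
    """Convert a gold-dollar price into its greenback equivalent."""
    return gold_price_cents * 100 / greenback_value_cents

# A commodity at 70 cents gold, with greenbacks at 35 cents to the gold dollar:
print(greenback_quote(70, 35))  # 200.0 cents in greenbacks
```

The same arithmetic explains the contractors' windfall: a Union victory that lifted the greenback from 35 to 50 cents would cut the greenback quote, so anyone holding certificates payable in appreciating notes gained purchasing power between billing and payment.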

The futures contract was not entirely new. By the mid-nineteenth century, a forward contract for goods where parties agreed to future delivery, a fixed price, and a fixed quantity was well established and decades old. Parts of the process were centuries old. In 1859, a Baltimore commission merchant named Sackett with good references might enter into a contract with Mr. Tiller, a farm owner in Indiana, to take 253 bushels of his country wheat after harvest based on evidence of previous sales. Sackett would offer Tiller a cash advance for this business, which could be used to buy more land, pay for seed and provisions, or buy harvesting equipment. That contract might then be sold to a flour mill operator, to a broker who collected such receipts, or even to other brokers. A bank would certainly lend money to Sackett based on evidence of contracts in hand. Tiller and Sackett’s contract might pass through four or five hands, and a speculator who knew of a coming wheat shortage might pay more for it.

But the army’s futures contract had new features: a fixed month of delivery; a fixed percentage paid by each party to guarantee the contract (the “margin”); a standardized quality based on third-party inspection; a standardized (and smaller) quantity (one hundred or one thousand bushels), which was called the “contract”; a third-party arbiter (the Chicago Board of Trade) that collected the margins; and the arbiter’s legal authority to punish the buyer or seller for nondelivery. An Illinois state charter ensured the board’s arbitration committee had authority over these contracts. The harshest sanction was expulsion from the Board of Trade.
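The features enumerated above amount to a standardized instrument. A minimal sketch of that structure as a modern data type, with wholly hypothetical example values (the commodity grade, price, and margin percentage below are assumptions for illustration, not historical records):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FuturesContract:
    commodity: str          # standardized quality, certified by third-party inspection
    quantity_bushels: int   # standardized lot of 100 or 1,000 bushels ("the contract")
    delivery_month: str     # fixed month of delivery
    price_cents: int        # agreed price per bushel, in cents
    margin_pct: float       # fixed percentage each party deposits with the arbiter

    def margin_due(self) -> float:
        """Cents each side deposits with the arbiter (the Board of Trade)."""
        return self.quantity_bushels * self.price_cents * self.margin_pct

# Hypothetical example: 1,000 bushels of wheat for June delivery at 90 cents,
# with a 10 percent margin held by the Board of Trade.
contract = FuturesContract("No. 2 spring wheat", 1000, "June", 90, 0.10)
print(contract.margin_due())  # 9000.0 cents from each party
```

The design point is standardization: because quantity, quality, and delivery month are fixed fields rather than negotiated terms, any two contracts of the same month are interchangeable, which is what let them be bought and sold like the commodity itself.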

Using the force of nearly 275,000 atmospheres in nitroglycerin, humans could shatter molecular bonds in shale, limestone, or slate, bonds produced by planetary and interplanetary forces measured in millions of pounds per square inch. Civil engineers thought of it more viscerally: this new explosive could rip holes in the world’s mountains and blast passages in rock, allowing the construction of railway tunnels through mountains and turning inland river towns into ocean ports. Small cities like Antwerp, Rotterdam, and Amsterdam would become the planet’s grand gateways. Contractors exploded thousands of containers of nitroglycerin underwater in the five years after the accident in Colón. They had a dramatic effect on international trade by deepening ports, shrinking the distance between them, and allowing a radical realignment of grain pathways.

The merchants of Antwerp understood better than most how cheap grain could reshape Europe. After the American Civil War, the Antwerp Chamber of Commerce used Nobel’s new explosives to widen and canalize the Scheldt River, then tore down the historic city walls to erect a continuous wharf space nearly three miles long. Antwerp became an ocean port large enough to service deep-draft ocean vessels from anywhere in the world. “The big city,” to quote Parvus, “discards national egg shells and becomes the hub of the world market.”10 Antwerp became a consumption-accumulation city.

The new competition from Antwerp prompted the Dutch government to hatch its own Antwerp. It spent over three million Dutch guilders to blast through the “Hook of Holland,” to turn the inland town of Rotterdam into a seaport city for steamships. Once the route opened to steamship travel in 1871, Rotterdam—with easy access to hungry German cities along the Rhine—vied with Antwerp for the status of biggest grain port in continental Europe.

During the Middle Ages, the term “last mile” referred to the end of a journey or to death. Beginning in the 1970s, military suppliers and Bell Laboratories engineers redefined it. In their quest to minimize delivery costs, they identified the last mile as the longest and most expensive part of any delivery. Whether one delivers electricity, water, or bread, the last mile will consume up to 80 percent of the total cost of getting the product to the consumer. It includes things like storefront rent, hand delivery, physical connection, and billing to a house, all of which are distinct and particular. They require people, negotiation, and settlement of bills. Last-mile costs are the reason rural areas in the United States were the last to receive telephones in the nineteenth century, electricity in the twentieth century, and broadband internet in the twenty-first century.15 If we include grinding and baking in the last mile of grain’s delivery, a loaf of bread in your hand costs over one hundred times the price of the grain that goes into it. Yet, because the last mile was such a large part of the price, cheapening the long, narrow end of the supply chain had a profound effect: cheap grain made cheaper bread, especially in deepwater ports. A four-pound loaf of bread in the city of London cost an average of 8.5 pence in the 1850s but just over 5 pence by 1905.16 For new consumption-accumulation cities like London, Liverpool, Antwerp, and Rotterdam, consumers inside the last mile got the lowest prices. For wage workers, who for centuries paid half their wages just for food, port cities with their cheap food became irresistible magnets after 1868. Irish and Scottish families moved to Liverpool and London; Antwerp drew dockworkers from rural Belgium, the Netherlands, France, and Germany.
Just as American railroads from the 1830s to the 1850s allowed a surge in the size of the American port cities of New York, Philadelphia, and Baltimore compared to midsize American cities, so these European ports—favored by free trade and built by controlled explosions—began to grow more dramatically than other European cities. Antwerp’s population was just over 88,000 in 1846; by 1900 it was 273,000.17 American grain ships gave Antwerp international reach.

Industries emerged in the places where raw materials were abundant, food was cheap, and manufactured commodities could flow backward in the railway cars and ships that brought in food. Most disturbing to German and French landlords was that cheap American grain threatened to drive down rural rent in the European countryside. The explosion in 1866 was an accident, but the long-term effects of nitroglycerin on the black paths connecting growers and eaters of grain would, in just a few years, bring agrarian crisis to Europe.

Nitroglycerin helped speed the decline of the six-month bills that had once kept commodities afloat all over the world. The most memorable change wrought by nitroglycerin was the creation of the Suez Canal, which opened a route between the Indian Ocean and the Mediterranean that bypassed the Cape of Good Hope on the southern tip of Africa.28 After its completion in 1869, ship travel times from London to Calcutta dropped from six months to less than thirty days.29 A continuous journey could also supplant both the overland Silk Roads and the two-part passenger journey that required changing ships in Egypt.

Why did speed not matter to grain ships? The underwater telegraph, once completed and running reliably in 1866, perfectly complemented grain delivery by sail; combined with the futures contract, it simply changed the way that goods were ordered and paid for. As Walter Bagehot pointed out, “The telegraph enables dealers and consumers to regulate to a nicety the quantities of commodities to the varying demand.” A grain dealer could order grain in New York and either sell it before it arrived or have the skipper wait until the ship docked at the Isle of Man to determine if it would go to Hull, Liverpool, London, Antwerp, or Rotterdam. More disturbingly for London merchants, however, once grain was afloat, granaries became unnecessary in expensive cities of demand. If grain prices shrank, grain could wait in cities of supply like Chicago, Minneapolis, and Milwaukee. Thousands more bushels “on the float” at sea heading toward Europe could be ordered in transit. Just as wartime Cincinnati grain and oat dealers could be outfoxed by the Union Army’s use of futures markets and telegraphed orders, so English dealers were bypassed by a large-scale grain trader who could use the telegraph to order a hundred thousand bushels on the Chicago Exchange and—on the same day—sell it for future delivery in London or Liverpool. Buying and selling on the same day effectively eliminated the risk of a change in prices. Between 1866 and 1873, the “margin”—the difference between the buying and selling prices—for grain traders shrank from 20 percent to 1 or 2 percent for vastly larger quantities of grain. For a trader this meant that a loan for a six- or nine-month journey was unnecessary. Established grain traders who had already sold what grain they bought had less need to borrow.
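The same-day hedge described above is worth making concrete. A minimal sketch with hypothetical prices (the 100,000-bushel quantity comes from the passage; the cent prices are assumptions for illustration): once both legs are executed on the same day, the trader's profit is fixed no matter where prices move before delivery.

```python
# Buying spot in Chicago and selling for future delivery in Liverpool on the
# same day locks in the spread between the two prices. Later price movements
# affect both legs equally and cancel out.

def locked_in_profit(chicago_buy_cents: float, liverpool_sell_cents: float,
                     bushels: int) -> float:
    """Profit, in cents, fixed on the day both legs are executed."""
    return (liverpool_sell_cents - chicago_buy_cents) * bushels

# Hypothetical: buy 100,000 bushels in Chicago at 90 cents and sell futures
# in Liverpool the same day at 92 cents. The 2-cent spread is secured
# regardless of whether prices later rise or fall.
print(locked_in_profit(90.0, 92.0, 100_000))  # 200000.0 cents, i.e., $2,000
```

Because the telegraph made both quotes visible within hours, the only risk left was execution, which is why the margin traders needed could shrink from 20 percent to 1 or 2 percent while quantities grew.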

Prussia’s need for foreign grain to fight a war was visible to everyone, and that stung. Men without titles, otherwise unknown because they lacked a “von” in their name, knew the dispensation of Germany’s forces. No German officer could order these footloose grain traders in fashionable hotels to work any more quickly. With thousands of ships at sea carrying grain, war had changed. Supply lines often became external to empires, internal lines were no longer always the most efficient way to feed an army, and news of an army’s victory or loss determined the price it paid for its food. Grain at sea made it increasingly possible for French, British, Italian, German, and Belgian armies to invade other places without worrying overmuch about finding local supplies or using costly, fuel-inefficient battleships to supply food. European imperialism after 1866, thanks in part to American grain, became easier for European empires to imagine. If foreign grain helped make a saltwater invasion easier, it also made that invasion everyone’s business. “Newsrooms” in the spacious trading halls of grain-receiving ports—London’s Baltic Exchange, Liverpool’s Corn Exchange, the bourses in Le Havre and Marseilles—kept abreast of every army at war, becoming the information gatherers of Europe. Traders received telegrams with the freshest news, well before it reached newspapers or general staffs. The grain exchanges accumulated stories of storms, revolutions, delayed soldiers, failed campaigns, droughts, and the high prices that resulted from these events. These traders, while unknown to von Goltz, knew everything. They traded on armies’ successes or failures, buying and selling boatloads of grain before it arrived in port. Warfare summoned pulses of grain, and the lack of grain could halt it. European cities competed with armies on the same exchange. Cities relied on fresh news and international markets to ensure their food supply, empires and soldiers be damned. 
This was the world that grain traders knew and the Prussian army despised. The London Baltic Exchange, the Berlin Bourse, and the burzha in Odessa received many of the same newspapers and magazines. These had been the multilingual centers of the world’s news for centuries, the true centers of power, the nerves of the world. Within a few years the Prussians would desperately need men in the grain trade, though this irked military officers to their core.

Marxists rejected traditional family life for its enslavement of women. A diverse and constantly squabbling group, they made inroads into working-class communities, particularly among skilled workers and professionals. Marx, in his ambitious, world-spanning histories, hoped to establish a model for the entire world economy that explained multiple things at once: the alienation of workers, the tyranny of husbands, class hatred by elites, the failure of religion, the solution to poverty, the brutality of states and empires, the horrors of child labor, and the evils of slavery. Marx’s understanding of the world and the future flowed from his understanding of Ricardo’s paradox. David Ricardo, a classical economist and Whig, had marveled at improvements in grain production. In the 1820s Ricardo sought to establish a mathematical formula to explain these changes. Some improvements, like enhanced crop rotation and the use of manure, allowed more production on less land. Other improvements, like better plows and threshing machines, required less labor.11 But a paradox left Ricardo puzzled. Landlords used these improvements, he said, but improved efficiency would probably hurt them collectively. Land-saving improvements meant that less land was needed to grow food. All things being equal, this would cause rents to fall. Labor-saving improvements were bad too. Because fewer workers would be needed, landlords would not have to borrow as much to hire workers. So interest rates (“money rent” in his phrasing) would also fall. Here was trouble. Improvements in agriculture provided short-term benefits to a single landlord but hurt landlords collectively as renters of land and lenders of money.

Radical improvements that bettered people’s lives—and they were everywhere—might motivate individual landlords and capitalists. But ultimately, Marx thought, landlords would act as landlords always had. Technical improvements would threaten the “rentier class” that made money on rents. Then Marx made a massive, but interesting, logical leap. Ricardo’s paradox, Marx posited, drove human history. A kind of landlord dominated each stage of a society’s development. In ancient societies, this was the slaveholder; in serfdom, this was the lord; in capitalism, this was the capitalist. The “forces of production,” according to Marx, advanced in each stage of a society’s progressive development: ancient slavery became medieval serfdom, which became modern capitalism. In each case the forces of production hit their peak, after which point existing property relations became oppressive. Then social change came through “contradictions”: slave uprisings, peasant revolts, and workers’ struggles against employers.

The consumption-accumulation cities of Europe became an ideal place for Marxist thought to spread. A polyglot collection of workers was assembling. The lowered cost of living after 1860 allowed workers to organize into unions and fight for shorter hours. For workers in Britain and industrial Europe, the period from approximately 1860 to 1890 really was a golden age.16 Shorter hours gave workers time to read and helped create a class of autodidacts who collected in consumption-accumulation cities. Shorter hours provided an opportunity for workers to band together in collective institutions and see a new world emerging that was not another bloody empire or racially exclusive state. The coherence of Marxist theory as a model for the history of the world and its future helped draw in both women and men, as well as democrats, socialists, utopian planners, engineers, and refugees from broken empires. While he rejected assassination, Marx suggested that the end to all the broken institutions would require a violent cataclysm. This prediction was millennial in a way that resembled the books of Daniel, Paul, and Revelation. The very coherence of Marxist theory made fragile empires regard Marxism as an existential threat.

The steamships—with compound engines, screw propellers, and capacities of approximately twenty thousand tons—found the choppy waters and cramped harbor outside Odessa challenging.1 Even sailing ships faced difficulties in Odessa. Shipmasters there complained about delays imposed by the workmen’s guild, the customshouse bureaucracy, and the Odessa banks.2 British trade officials, stationed in Odessa to help shipmasters, gave them little time or respect. “To few ports do a lower class of shipmasters come than to Odessa,” complained British consul Eustace Clare Grenville-Murray in 1869. “Five out of six are uneducated colliers from Shields or Sunderland,” he continued, and the worst were those “troublesome half-educated men known among sailors as a sea-lawyer.”3 Grenville-Murray was removed from his position, but the merchants of Odessa attested that shipmasters, whether knowledgeable about the law or not, faced numerous difficulties, including the inattention of the governor-general and negligent port officials.4 Other tribulations for shipmasters included the narrow passage at Constantinople, which the Ottoman Empire might block in case of war, famine, or revolt. The possibility of additional taxes or delays in paperwork at the strait had always given merchants pause.5 For all these reasons, by 1869 the cost of moving a bushel of wheat from Odessa to a European port was at least twenty-five cents. The same quantity of wheat could be transported from the United States for less than twenty cents, even though the route from Odessa was shorter, took less time, and did not cross the deepest part of the ocean.6 After 1870, then, cheap American grain and flour began to replace Russia’s as the food of Europe’s urban working class.

Two years after the start of the 1873 panic, the merchant Charles Magniac summarized the problems grain merchants faced and how they led to the crisis: “the Suez Canal, in conjunction with steam and ocean telegraphy” made obsolete “all the old machinery—warehouses, sailing vessels, capital, six months’ bills, and the British merchant, whose occupation [was] gone.”13 Sailing ships survived, but grain merchant warehouses and short-term bills of exchange did become outmoded.

The sudden drop in shipping prices brought by nitroglycerin’s collapse of travel times helped usher in the period economic historians call the first wave of globalization, from 1871 to 1914. Colonial goods worth more than roughly fifty cents a pound, like coffee, sugar, silver, and cotton, had been traveling across the Atlantic since the 1600s. With free trade, instant sharing of prices by telegraph, and nitroglycerin’s elimination of expensive barriers, shipping became cheap enough for bulkier, lower-value goods worth less than fifteen cents a pound, like wheat, beef, and kerosene.

Thus Parvus was a new kind of Marxist, one who studied the world as a system of commodity pathways. He believed that this world system was older than capitalism. He also believed there was a bonus for everyone in shrinking the world, whether by lowering tariffs, improving grain-drying methods, building grain elevators, or deepening harbors. Cheaper bread, if the benefit could truly be shared, might save millions of workers from lives of endless toil. Having tried to organize workers in Odessa, he knew that their time mattered to them as much as money, or more. He argued that the bounty realized from lowering the tollage in grain distribution should benefit everyone, in both material goods and time. Shorter, tighter pathways might allow a shortening of the standard twelve-hour workday to ten hours, then eight. The international scope of Parvus’s model was as vast as the steppe and as deep as the ocean; explained clearly, it could attract workers to an international movement. Indeed, it required an international movement; otherwise workers in one country might—in a world bonded together by trade routes—compete against workers ten thousand miles away.

Between 1838 and 1911, the Ottoman Empire became locked in the fiscal orbit of Great Britain. The difficulties started in 1833, when Sultan Mahmud II faced a revolt by his Egyptian governor, Muhammad Ali, which threatened to end the empire. Only hasty Russian intervention prevented Ali’s capture of Istanbul itself. Reeling from this threat, Mahmud II sought the support of the British navy. In 1838 he signed the unequal Treaty of Balta Liman with Britain, which made Turkey a kind of fiscal vassal to Great Britain. British merchants received free access to Ottoman markets with no corresponding Ottoman access to English markets. In return for this enormous favor, Britain helped the Ottomans beat back Egyptian forces, most famously with the 1840 British bombardment of Acre on the coast of Ottoman Syria. Thereafter cheap foreign flour and textiles imported by English merchants continually weakened the Ottoman Empire’s internal industries, which had no ability to slow imports. The empire imported more than it exported for the rest of its days. To make up for the lost tariffs, it increased taxes on the Balkan states, adding fuel to the fire of independence movements in Serbia, Bulgaria, Wallachia, and Moldavia. Britain’s ability to bypass an empire’s tariffs became a model for squeezing resources out of Asia, Africa, and the Pacific thereafter.

While the physiocratic empires of Russia and the United States had most of their wealth on their edges, European states, like Germany and Italy, that consumed and taxed cheap grain strengthened and concentrated wealth in their capitals. The gullet cities that prospered from cheap food tried to fight back. Millers and other processors of grain inside European gullet cities resisted grain tariffs at first but then agreed upon a complex new system of exclusions and transformations for grain. The states introduced a “drawback” for all grain used to make exported flour. In France, for example, a miller who exported ten thousand sacks of flour to a French colony in 1892 received a drawback certificate for $2,900, which grain traders bought to reduce their tariff expenses. In this way a grain tax could be reduced if the resulting flour, bread, and biscuit produced in gullet cities could be exported to a hungry world outside Europe.16 Grain tariffs helped build railroads and battleships. European states would fight over potential markets in Asia, Africa, and the Pacific. Processing food from across the ocean and selling it abroad became the new work of European states.

Taxing the flood of foreign grain was not just a sop to landowners. The tariff had important benefits for state building in filling federal coffers. Tariffs and railway charges together made up the two largest sources of revenue in the Prussian budget. Both kinds of grain taxes—tariffs and high railroad rates for foreign grain passing through Germany—gave the empire a fund to buy off the smaller federal principalities that resisted the German Empire’s authority.31 German economists at the time defended grain tariffs on military rather than economic grounds, though as economists they recognized that cheap food benefited everyone who was not a landlord. They noted that, by 1881, the “double danger” of cheap food and cheap transport had allowed England’s agriculturalists to dwindle to less than 8.5 percent of the population, compared to a European average of between 35 and 69 percent. Only by taxing cheap grain, they argued, could Germany escape the threat of starvation in case of war.32 Just as important, the tax on cheap grain from abroad allowed Germany and Italy to build up war budgets without raising taxes on land. Cheap grain built European states, but it also gave them the resources to kill people.

Beginning around 1878, the empires lashed outward; the mile marker could not hold. European brutality in the desire for overseas colonies was not new of course, but after 1879 violence in the name of opening markets reached shocking levels, including in the Anglo-Zulu war of 1879, the French conquest of Tunisia in 1881, the Russian capture of the region east of the Caspian Sea from Iran in 1881, the British occupation of Egypt in 1882, and the continuing Dutch war against the Aceh in Indonesia. European states established brutal colonial governments throughout Asia, Africa, and the Middle East. This was the scramble for Africa, the scramble for Asia, and the Great Game in the Middle East. Prosperous European states battled one another for imperial markets.

If European empires found a way to respond to the promise and prospect of cheap food by cranking up armies and navies, the story was different for the Ottoman and Qing empires, which struggled against cheap foreign food that drained their empires of gold and silver. Subjects of the Qing Empire, especially in its port cities, bought enormous quantities of California flour and its products, leading urban diets to shift from rice or noodles to bread and cakes.44 These two empires mortgaged their futures on international bond markets. To compete with the German, French, British, and Italian empires, the Ottoman and Qing empires issued bonds and laid impossibly long railroads, built deepwater ports, and funded trading fleets. They borrowed and borrowed with little oversight.45 The Ottoman and Qing empires, to pay their ever-growing infrastructure bills, allowed international firms to take over their tax collection, a dangerous step. The Chinese Maritime Customs Service (founded 1854) was technically international, but nearly all its agents were British. Its officers taxed junks that crossed the Yellow Sea but exempted British-owned steamships. The Ottoman Public Debt Administration (OPDA), founded 1881, was an organization elected by British, French, and German bondholders, though, according to Parvus, the French administrators were strongest inside it. Each tax agency had its own internal police force and had nearly complete autonomy inside the state. The Ottoman sultan could inspect the books of the OPDA’s salt and tobacco monopolies, and the Qing emperor could do the same for internal customs inside China, but neither could alter the manner and method of tax collection.

Beginning with his 1891 dissertation and continuing with his 1895 article “The World Market and the Agrarian Crisis,” Parvus argued that cheap American food had changed the world’s food roads, bringing Europe its crisis in 1873. When he arrived in Berlin in 1892, he understood that these self-proclaimed empires were responding to these food roads by building battleships and submarines. The agrarian empires in particular—Russian, Ottoman, Qing, and Habsburg—might not survive in a world where oceans of grain could stream from Odessa, New York, or San Francisco to whatever ocean port had the gold to pay for it.

The Japanese siege of the Russian citadel at Port Arthur lasted from August 1904 until the garrison surrendered in January 1905. In May 1905 Russia’s Baltic fleet, after sailing halfway around the world, finally arrived in the Tsushima Strait. It took the Japanese navy barely two days to destroy it. The Russian minister to Tokyo, Baron Roman Romanovich Rosen, pointed out (with the benefit of hindsight) that with the port lost, the billions of francs spent on a railway across Siberia could never be repaid. The Russian Empire’s “sacrifices in blood and treasure” were already enormous, and it would have to default on its long-term debt for a road to nowhere. Russia was essentially bankrupt. This futile, costly railway expansion and the surrender that followed, Rosen argued, spelled the end of the Russian Empire.

PARVUS’S ARRIVAL AT the end of 1910 was fortuitous, for the Ottoman Empire soon faced catastrophe. In September 1911 Italy invaded North Africa west of Egypt, in what is now Libya. As soon as a large portion of the Ottoman army was away in North Africa, a “Balkan League” hurriedly formed to invade and seize all Ottoman land in the European part of the empire. Here, in the slow-motion collapse of the Ottoman Empire, World War I began.11 Parvus was ready with a prescription for the empire’s malady. Just as he had in his book Starving Russia, he performed a forensic accounting of the empire in a 1911 series of articles in Türk Yurdu. He examined how it contracted debt, how interest rates were set, and how foreign-controlled institutions ensured payment. The Ottoman Empire’s debt problems had begun with the Crimean War, he concluded after carefully studying its accounts. In seeking to save Istanbul from an invading Russia, Sultan Abdulmejid had borrowed heavily, and as debts came due, his successors had gradually turned over the empire’s most valuable monopolies—in tobacco, for example—to European states. Parvus calculated that the European-controlled Ottoman Public Debt Administration (OPDA) had probably already collected all the taxes necessary to pay off Ottoman debts. Yet it continued to control tax collection in the empire and could easily disguise its prodigious bounty by (for example) expanding the OPDA printing, publishing, training, and foreign relations apparatus while counting those as expenses recouped directly from the tax. The Turkish Empire would always be on a short leash so long as the foreign-controlled OPDA collected its most valuable internal taxes, on tobacco and salt, and allocated the benefits to its own growing infrastructure. This was the same system of external taxation on an empire’s internal trade that Britain had imposed upon China with the Maritime Customs Service.

Empires, just as they had from the days of Julius Caesar’s milestones, needed cheap, fast, efficient paths that delivered food to cities and brought a backhaul of manufactured goods to the countryside. The sultan had spent too much on railroads that could move armies over mountains. This made the Ottoman Empire’s logistical pathways expensive and prone to breakdown. Parvus worried that these costs could never be recouped. Parvus was making a case not for traditional economic nationalism and tariff barriers but rather for a grain-to-city infrastructure, protection of private agricultural property, expanded loans to farmers, a currency exchangeable internationally, and higher taxes on farmland that would force agricultural improvement. Bulgaria had once been a poor part of the Ottoman Empire, Parvus pointed out. But once it gained independence, it exported much more grain than it ever had in the Ottoman domain and could thus pay higher taxes.13 Parvus’s observations must have puzzled orthodox Marxist readers of his Turkish articles. His Marxist-inspired development strategy, emphasizing private property in agriculture combined with a mix of public and private control of industry, had been his prescription for the Prussian state as early as 1895. The same strategy would transform China and Vietnam a century later, though without his direct influence.

As Parvus saw it, the Young Turks needed to understand that the Ottoman Empire’s biggest problem lay not in its position as a target for the Russians, nor in its frequent fires, nor in its colossal debt. Its biggest problem lay in how it funded, produced, taxed, and distributed the grain that passed from its farmers’ fields, to its ports, to its capital on the narrow strait of the Bosporus.

World War I has been characterized as a “great powers” conflict with Germany as the aggressor. A Serbian assassin killed Archduke Franz Ferdinand, heir to Austria-Hungary’s throne, leading that empire to declare war on Serbia. Russia backed Serbia, mobilizing its army near the Austro-Hungarian border. Germany, itching for conflict, supported Austria-Hungary and demanded that Russia demobilize. When Russia refused, Germany invaded Belgium to attack France—Russia’s ally and financial backer. In the same month the Germans crushed the invading Russian First and Second Armies at Tannenberg and the Masurian Lakes. England joined the side of the Franco-Russian Allies after Germany invaded Belgium. The Ottomans only joined the Austrian-German Central Powers two months later.1 That’s an oft-told story, but for scholars of the pathways of grain around the world, the war’s history begins a little earlier and much farther east. In 1911, Italy invaded what would become Libya, taking it from Turkey. The day after that fighting stopped, Greece, Bulgaria, Serbia, and Montenegro took advantage of Ottoman weakness to invade Turkey. Then, crucially, Turkey closed the Bosporus and Dardanelles Straits to commerce, blocking all Russian grain and oil exports. Russians, fearing that Bulgaria or Greece might capture Istanbul, put both their army and the Black Sea fleet on alert. Russian agriculture minister Alexander Krivoshein, who then dominated the tsar’s council, reorganized the Russian cabinet in 1914 to prepare for a global war. From the cabinet’s perspective, this coming conflict would be the seventh Russo-Turkish war since the reign of Catherine the Great, yet another attempt to protect Russia’s precious grain-export trade. Krivoshein saw in Istanbul an existential threat. He recognized that Germany, in helping build up Turkey’s military, was drawing Istanbul into its orbit. The paranoid Russian cabinet saw signs of this German-Ottoman alliance everywhere.
German officers had been training the Ottoman army since 1883, and Prussian officers organized the placement of the artillery that Parvus had purchased on city walls in Istanbul and Adrianople. Most concerning was that, in July 1914, the Turkish state would receive its first dreadnought: a costly state-of-the-art ship from the English firm Vickers & Co., with other ships on order. This dreadnought was a massive upgrade from previous generations of battleship, with more guns on board than any ship afloat. Russia feared a repeat of its defeat in the Yellow Sea: a single Japanese battleship had led a small armada that destroyed Russia’s eastern and then its western fleets. A single Turkish dreadnought, with a small escort of torpedo boats, might wipe out the Russian navy on the Black Sea. Such one-sided battles had become familiar. The Americans had done the same to Spain in the Spanish-American War in 1898, Italy had done it all over North Africa in 1911, and the Greeks had done it to the Ottomans in the Balkan Wars in 1912 and 1913. If a Turkish dreadnought passed through the Dardanelles, wrote Russia’s naval minister, the “Turks would have undisputed mastery of the Black Sea.”

This account, most associated with the historian of Russia Sean McMeekin, casts Russia as the primary aggressor in World War I. Fearing a rapid Turkish buildup on the Bosporus Strait, the Russians sought the earliest possible opportunity for a conflict with Turkey. They feared that the combination of new harbor defenses and a dreadnought would make the passageway to the Black Sea impregnable and threaten Russian trade. The assassination of Franz Ferdinand provided Russia, already prepared for conflict, a perfect pretext to assemble troops on the border. The Russians had little interest in defending Serbia but knew that massing troops at the border would provoke Germany and Austria-Hungary to declare war first, and if war was declared before the Turkish dreadnought arrived, Istanbul might be easy prey for Russian ships. Russia hoped that a hasty German attack would provoke Britain. A too-rapid attack on Turkey, however, risked revealing the Russian dagger: the deep desire to take Istanbul.

By 1916, Russia’s grain prices had risen more quickly than prices on the world grain market, an astonishing transformation for a country where grain had been so cheap and that had exported half of its grain before the war. In response to the rapid increase in grain prices, a black market in grain trading emerged. Governors then imposed tariffs and finally embargoes on grain exports from their regions. Soon governors and tsarist militias were competing to block and sideline grain cars bound for Russian cities. While grain prices had doubled around the world, including in the grain-rich United States, in Russia the price of bread increased more than tenfold between the spring of 1916 and the spring of 1917.4 Then on March 8, 1917 (February 23 on the Russian calendar), protests over food rationing in Saint Petersburg led to a riot. Instead of suppressing the unrest, as it had in 1905, the army turned against its officers. Within days Tsar Nicholas II abdicated. Parvus, who had predicted much of this in his twenty-page memo, suddenly drew intense interest from the German war ministry, which he admonished not to extract a price from Russia’s ruling Duma or break Russia into pieces. Both actions, he wrote, would now only embolden the Russian opposition and prolong the war.5 He also knew these actions would strengthen the authority of nationalists, liberals, and manufacturers. For Parvus, who remembered the fate of communists in the Paris Commune and his friends executed by Russia in 1905, this would be unacceptable. Instead, Parvus said, the German government needed to spend much more, perhaps an additional fifty million marks, to send a sealed train of Bolsheviks and Mensheviks to the Finland Station in Saint Petersburg. The Germans would have to follow up with delivery of pistols, dynamite, and medicines. He could arrange for grain deliveries to Germany from Russian warehouses on the Baltic.
His agents in neutral Denmark would contact agents in Petersburg and elsewhere on the Baltic by wireless telegraph. He already controlled neutral ships with Danish and Swedish flags.6 The Bolsheviks and many of the Mensheviks, Parvus promised, would embrace defeat. Some Russian socialists, however, supported the war; these “Social Patriots” needed to be defeated with counterpropaganda. He promised that the Bolsheviks and Mensheviks would permit an independent Ukraine and an independent Finland and would surrender on the eastern front.7 His new trading agencies between Copenhagen and other Baltic ports would have the support of socialist dockyard workers and permits to trade on the Baltic. Russian grain would be traded for German munitions and medicines.8 It was much like the trade Parvus had organized in the Black Sea during the Balkan Wars. The German army would have bread, the measure of victory; Russia would have revolution. Between fifty and two hundred million marks flowed from Germany to the Bolsheviks in 1917. Parvus multiplied that aid with his efficient smuggling operation. Much went to the delivery of newspapers. The Bolsheviks’ access to machine guns and artillery by mid-1917 gave them the military capacity to defend themselves against the Duma and the counterrevolution under General Lavr Kornilov, just as the Young Turks had defended themselves against a counterrevolution by the sultan in 1909. In mid-1917 the Duma tried to put Vladimir Lenin, Leon Trotsky, and others on trial as German agents. Prosecutors announced in the newspapers that they had considerable evidence of telegraphic communication between Lenin and Parvus through third parties that showed how German money was funding the Bolsheviks. The October Revolution prevented the trial, and the documents have disappeared.

The Bolshevik “Decree on Land” converted all land to state land and then declared that it would be redistributed. While the redistribution of land shrank the number of landless peasants, it also broke up the “frontier estates” that had been Russia’s primary source of grain. Five-hundred- to one-thousand-acre estates may have been the most practical scale for growing wheat on the steppe, for several reasons: efficient plowing and harvesting on the vast, flat plains demanded heavy equipment; the dry plains needed coordinated, long-distance irrigation; and the plains had long used a four-field rotation system that required leaving many acres unused each season.16 The revolution also revealed other difficulties in relying on the peasant estates to produce more food. The Russian agrarian economist Alexander Chayanov made careful measurements of peasant productivity immediately after the revolution. He noted, based on these close studies, that peasants didn’t respond to the market the way one would expect. Disagreeing with David Ricardo, he argued that land, labor, and capital were not three equally replaceable quantities for a peasant. Because peasant families supplied their own labor, their resistance to drudgery grew exponentially: as they approached the peak amount of work they and their families could do, their resistance to that drudgery got sharper and sharper. In that environment, if the price of grain increased, as it did in 1918 and 1919, a family might not apply more labor to produce more crops in order to gain more capital or land. Instead, peasants might actually produce less grain when prices rose, because no increase in capital or land could match the satisfaction peasants got from not working themselves so hard. He also found that peasants worked hardest when they had young children, then gradually lowered their total working hours on the farm as the children got older.
The family life cycle, not prices, governed their behavior. A “frontier estate,” by comparison, looked more like a capitalist firm in that a farmer could purchase extra land, labor, and capital when grain prices were high. Bolsheviks rejected Chayanov’s assessment of the peasant economy because it appeared to favor kulaks and suggested that peasant agriculture could not save Russia. He was arrested in 1930 on made-up charges and exiled to Kazakhstan. In 1937 he was rearrested and shot on the same day.

World War I, as a battle between European nations dependent on foreign grain, meant that the grain-powered great powers could only last so long. The allies’ inability to break the blockade at Istanbul prolonged the war, leading to starvation in Belgium and long-lasting devastation in France. Germany endured longer in part through secret negotiations for Baltic grain that are still not fully understood. It is possible that only Parvus could answer the question of how much grain Germany got in its financial arrangement with him and, indirectly, the Bolsheviks. For the Ottoman, Qing, and Russian empires, revolutions arrived in 1908, 1911, and 1917. World War I was an interregnum in which most of the world’s empires fought for control of the food-trade-tax nexus. By 1917 Bolshevik revolutionaries had gained insights from the Young Turk Revolution into how to successfully topple the massive Russian Empire. Crucial to their success but contrary to their revolutionary program, the Bolsheviks redistributed land to peasants across the steppe in 1917. They learned that authority was constituted through the control of bread and that breaking up the grain-delivering power of the Duma, the revolutionary line committees, and even the mesochniki was critical to seizing power. The Soviet Union would continue to define itself as the monopoly holder and distributor of bread, just as the ancient Romans had done in the days of the annona.


Key Points from Book: How Civil Wars Start and How to Stop Them

by Barbara F. Walter

When Saddam Hussein was captured, researchers who study democratization didn’t celebrate. We knew that democratization, especially rapid democratization in a deeply divided country, could be highly destabilizing. In fact, the more radical and rapid the change, the more destabilizing it was likely to be. The United States and the United Kingdom thought they were delivering freedom to a welcoming population. Instead, they were about to deliver the perfect conditions for civil war.

Civil wars rose alongside democracies. In 1870, almost no countries were experiencing civil war, but by 1992, there were over fifty. Serbs, Croats, and Bosniaks (Bosnian Muslims) were fighting one another in a fracturing Yugoslavia. Islamist rebel groups were turning on their government in Algeria. Leaders in Somalia and the Congo suddenly faced multiple armed groups challenging their rule, as did the governments in Georgia and Tajikistan. Soon the Hutus and the Tutsis would be slaughtering each other in Rwanda and Burundi. By the early nineties, the number of civil wars around the world had reached its highest point in modern history. That is, at least until now. In 2019, we reached a new peak. It turns out that one of the best predictors of whether a country will experience a civil war is whether it is moving toward or away from democracy. Yes, democracy. Countries almost never go from full autocracy to full democracy without a rocky transition in between. Attempts by leaders to democratize frequently include significant backsliding or stagnation in a pseudo-autocratic middle zone. And even if citizens succeed in gaining full democracy, their governments don’t always stay there. Would-be despots can whittle away rights and freedoms, and concentrate power, causing democracies to decline. Hungary became a full democracy in 1990 before Prime Minister Viktor Orbán slowly and methodically nudged it back toward dictatorship. It is in this middle zone that most civil wars occur. Experts call countries in this middle zone “anocracies”—they are neither full autocracies nor democracies but something in between. Ted Robert Gurr, a professor at Northwestern, coined the term in 1974 after collecting data on the democratic and autocratic traits of governments around the world. 
Prior to that, he and his team had debated what to call these hybrid regimes, sometimes using the term “transitional” before settling on “anocracy.” Citizens receive some elements of democratic rule—perhaps full voting rights—but they also live under leaders with extensive authoritarian powers and few checks and balances.

To everyone’s surprise, the task force found that the best predictor of instability was not, as one might have guessed, income inequality or poverty. It was a nation’s polity index score, with the anocracy zone being the place of greatest danger. Anocracies—particularly those with more democratic than autocratic features, which the task force called “partial democracies”—were twice as likely as autocracies to experience political instability or civil war, and three times as likely as democracies.

A government that is democratizing is weak compared to the regime before it—politically, institutionally, and militarily. Unlike autocrats, leaders in an anocracy are often not powerful enough or ruthless enough to quell dissent and ensure loyalty. The government is also frequently disorganized and riddled with internal divisions, struggling to deliver basic services or even security. Opposition leaders, or even those within a president’s own party, may challenge or resist the pace of reform, while new leaders must quickly earn the trust of citizens, fellow politicians, or army generals. In the chaos of transition, these leaders often fail.

A primary reason for revolt is that democratic transitions create new winners and losers: In the shift away from autocracy, formerly disenfranchised citizens come into new power, while those who once held privileges find themselves losing influence. Because the new government in an anocracy is often fragile, and the rule of law is still developing, the losers—former elites, opposition leaders, citizens who once enjoyed advantages—are not sure the administration will be fair, or that they will be protected. This can create genuine anxieties about the future: The losers may not be convinced of a leader’s commitment to democracy; they may feel their own needs and rights are at stake.

A painful reality of democratization is that the faster and bolder the reform efforts, the greater the chance of civil war. Rapid regime change—a six-point or more fluctuation in a country’s polity index score—almost always precedes instability, and civil wars are more likely to break out in the first two years after reform is attempted.

Democratic countries that veer into anocracy do so not because their leaders are untested and weak, like those who are scrambling to organize in the wake of a dictator, but rather because elected leaders—many of whom are quite popular—start to ignore the guardrails that protect their democracies. These include constraints on a president, checks and balances among government branches, a free press that demands accountability, and fair and open political competition. Would-be autocrats such as Orbán, Erdoğan, Vladimir Putin, or Brazilian president Jair Bolsonaro put their political goals ahead of the needs of a healthy democracy, gaining support by exploiting citizen fears—over jobs, over immigration, over security. They persuade citizens that democracy as it has existed will lead to more corruption, more lies, and greater bungling of economic and social policy. They decry political leaders’ compromises as ineffective, and the government as a failure. They understand that if they can persuade citizens that “strong leadership” and “law and order” are necessary, citizens will voluntarily vote them into office. People will often sacrifice freedom if they believe it will make them more secure. Then, once in power, these leaders plunge their countries into anocracy by exploiting weaknesses in the constitution, electoral system, and judiciary. Because they typically use legal methods—partisan appointments, executive orders, parliamentary votes—they are able to consolidate power in ways that other politicians are unable, or unwilling, to stop. This increasing autocratization puts countries at higher risk of civil war.

For a decaying democracy, the risk of civil war increases almost the moment it becomes less democratic. As a democracy drops down the polity index scale—a result of fewer executive restraints, weaker rule of law, diminished voting rights—its risk of armed conflict steadily increases. This risk peaks when the score falls between +1 and −1—the point at which citizens face the prospect of real autocracy. The chance of civil war then drops sharply if the country weathers this moment, whether by becoming even more authoritarian or by changing course and beginning to rebuild its democracy.

Starting in the mid-twentieth century, more and more civil wars were fought by different ethnic and religious groups, rather than political groups, each looking to gain dominance over the other. In the first five years after World War II, 53 percent of civil wars were fought between ethnic factions, according to a dataset compiled by James Fearon and David Laitin, two civil war experts at Stanford University. Since the end of the Cold War, as many as 75 percent of civil wars have been fought by these types of factions. Think of the many wars that have made headlines in the past several decades: Syria, Iraq, Yemen, Afghanistan, Ukraine, Sudan, Ethiopia, Rwanda, Myanmar, Lebanon, Sri Lanka. All were fought between groups divided along ethnic or religious lines, and oftentimes both.

Countries that factionalize have political parties based on ethnic, religious, or racial identity rather than ideology, and these parties then seek to rule at the exclusion and expense of others.

Two variables—anocracy and factionalism—predicted better than anything else where civil wars were likely to break out.

Political parties begin to coalesce around ethnic, racial, or religious identity, rather than a particular set of policies—as Hutus and Tutsis did in Rwanda, for example, or as many political parties did in Ethiopia. It is a crafty way for leaders to cement both their following and their future. Identity-based parties make it impossible for voters to switch sides; there is nowhere for them to go if their political identity is tied to their ethnic or religious identity.

ETHNIC NATIONALISM, and its expression through factions, doesn’t take hold in a country on its own. For a society to fracture along identity lines, you need mouthpieces—people who are willing to make discriminatory appeals and pursue discriminatory policies in the name of a particular group. They are usually people who are seeking political office or trying to stay in office. They provoke and harness feelings of fear as a way to lock in the constituencies that will support their scramble for power. Experts have a term for these individuals: ethnic entrepreneurs. The term was first used in the 1990s to explain figures such as Milošević and Tudjman, but it’s a phenomenon that has since occurred many times over, in all parts of the world. These instigators of war are often at high risk of losing power or have recently lost it. Seeing no other routes to securing their futures—because, perhaps, they are ex-Communists—they cynically exploit divisions to try to reassert control. They foster identity-based nationalism to sow violence and chaos, using a strategy scholars call “gambling for resurrection”—an aggressive effort to provoke massive change, even against the odds.

People were especially likely to fight if they had once held power and saw it slipping away. Political scientists refer to this phenomenon as “downgrading,” and while there are many variations on the theme, it is a reliable way to predict—in countries prone to civil war—who will initiate the violence.

Native speakers of a country’s official language enjoy a huge economic advantage over citizens whose language is not recognized by the state. Francisco Franco, dictator of Spain from 1939 to 1975, understood this. One of the ways Franco consolidated power was to elevate Castilian over other languages, declaring it Spain’s only official tongue. He then banned citizens from speaking Basque, Catalan, Galician, or any other language in public. Newborns were not allowed to be given regional names, and dialects were no longer allowed to be taught in school or used to conduct business. Language, it turns out, is strongly tied to the identity of a nation, and it determines whose culture ultimately dominates. One of the main fears of ethnic Russians in the Donbas region of Ukraine was that the new nationalist government would make Ukrainian the official language of the state to the exclusion of Russian. It’s hard to compete for well-paying jobs if you don’t speak the language. Controlling access to education, especially higher education, is another way to elevate one ethnic group over another. The same is true of access to civil service jobs, which are some of the most steady and lucrative positions in a country. When people face the loss of such privileges, they can become deeply aggrieved and motivated to resist.

Citizens in poor countries were much more likely to fight than citizens in rich countries. But when scholars took into account measures of good governance—including citizen participation, the competitiveness of elections, and constraints on the power of the executive—economic variables became much less important. Income inequality, which many considered a red flag for war, proved to be the opposite. As James Fearon wrote in a 2010 report for the World Bank, “Not only is there no apparent positive correlation between income inequality and conflict, but if anything, across countries, those with more equal income distributions have been marginally more conflict prone.”

If a country was already at risk of civil war, natural disasters tended to make things worse. In a world where drought, wildfire, hurricanes, and heat waves will be more frequent and more intense—driving greater migration—the downgraded will have even more reasons to rise up.

Scholars know where civil wars tend to break out and who tends to start them: downgraded groups in anocracies dominated by ethnic factions. But what triggers them? What finally tips a country into conflict? Citizens can absorb a lot of pain. They will accept years of discrimination and poverty and remain quiet, enduring the ache of slow decline. What they can’t take is the loss of hope. It’s when a group looks into the future and sees nothing but additional pain that they start to see violence as their only path to progress.

It’s the failure of protests that eliminates hope and incentivizes violence. That’s when citizens finally see that their belief in the system has been misplaced. In Israel, Palestinians engaged in nonviolent protests for years—participating in mass demonstrations, work stoppages, strikes, and boycotts—but made no progress in negotiations with the government. The result? “People exploded,” said Radwan Abu Ayyash, a Palestinian journalist. This helps explain why violence tends to escalate in the aftermath of failed protests. Protests are a last-ditch effort to fix the system—the Hail Mary pass for optimists seeking peaceful change—before the extremists take over.

Early militants, of course, know that civilian deaths at the hands of the government can tip conflicts into all-out war; they see the opportunity in a harsh government response and plan accordingly. Hamas has stored weapons in schools, mosques, and residential neighborhoods, goading the Israeli military to bomb them. Carlos Marighella, a Brazilian Marxist revolutionary, urged fellow militants to target government forces in order to provoke a violent reaction. He believed that if the government intensified its repression against Brazilians, arresting innocent people and making life in the city unbearable, citizens would turn against it. In Northern Ireland, Tommy Gorman, a member of the IRA, recalled that the British Army and government, with their harsh tactics, “were our best recruiting agents.” And in Spain, the violent separatist group ETA was not particularly popular with Basque citizens until Franco allowed the Germans, during the Spanish Civil War, to viciously bomb Basque villages. According to one expert on the Basques, “Nothing radicalizes a people faster than the unleashing of undisciplined security forces on its towns and villages.” That’s why civil wars appear to explode after governments decide to play hardball. Extremists have already embraced militancy. What changes is that average citizens now decide that it’s in their interest to do so as well.

Leaders were less inclined to negotiate—and more likely to fight—in nations with multiple potential separatist groups. If a leader believed that granting independence to one group would lead others to make their own demands—setting off a secessionist chain reaction—then fighting would help deter future challenges. Indonesia’s harsh response to East Timor’s declaration of independence, which killed an estimated 25 percent of East Timor’s population, was made in part to dissuade the country’s many other ethnic groups from demanding independence as well.

What America’s eighteenth-century leaders couldn’t have predicted was that the factionalization they feared would be rooted not in class but in ethnic identity. That’s because in 1789, at least at the federal level, all American voters were white (and all of them were men). Today, the best predictor of how Americans will vote is their race. Two-thirds or more of Black, Latino, and Asian Americans consistently vote for Democrats, while roughly 60 percent of white Americans vote for Republicans. That represents a dramatic shift from the middle part of the last century, when the ethnic minority vote was split roughly evenly between the two parties, and most white working-class Americans tended to vote Democratic. In fact, as late as 2007—the year before Barack Obama was elected president—whites were just as likely (51 percent) to be Democrats as they were Republicans. Today, 90 percent of the Republican Party is white.

Working-class whites had been hailed as the backbone of America, their ways and values memorialized in Norman Rockwell paintings. And now, it seemed, the government was abandoning them. Global trade agreements were signed that benefited coastal elites and city dwellers at their expense. Immigration continued, and allowances were made for illegal immigrants. To whites experiencing real economic and social decline, the U.S. government was like the Indian government that encouraged Bengalis to migrate to Assam, the Indonesian government that encouraged Javanese to migrate to West Papua, or the Sri Lankan government that had encouraged the Sinhalese to migrate to Tamil regions. White Americans were seeing young people from countries like India and China—whose first language wasn’t English, whose religion was not Christianity—get lucrative tech jobs and live an American dream that no longer existed for them.

Members of AWD were among those who participated in the Unite the Right rally in Charlottesville, yelling “You will not replace us!” as they marched with torches. Soon after the rally, the hashtag #ReadSiege spread like wildfire on Twitter. Some in the group found Charlottesville—and the subsequent arrests, deplatforming, and bad press—to be disheartening, proof that Mason had been right all along: They would not be successful if they stayed within the bounds of the law. As one former AWD member later told investigative journalist A. C. Thompson (who made the ProPublica documentary), Charlottesville sparked the group’s shift toward violence, because members felt their efforts had been ineffectual. “Huge rallies don’t work,” he explained. “All that happens is people get arrested, people lose jobs, and you get put on some FBI watch list.” The answer, he continued, was to go underground, and to pursue a form of cell-style terrorism known as “leaderless resistance.” The term “leaderless resistance” originated in the 1950s with a former CIA officer named Ulius Amoss, who was analyzing ways to protect CIA-supported resistance cells in Eastern Europe. The concept was picked up by Louis Beam, a soldier in the Vietnam War who, after returning to the United States, became a Ku Klux Klan member. In 1983, Beam published an essay advocating leaderless resistance as the best way for white nationalists to continue their struggle against the far more powerful U.S. government. Beam believed that the movement could survive only if it became decentralized.

Extremist groups also tend to wield greater psychological power by offering greater recompense: honor, martyr status, and glory in the afterlife. An extreme ideology also weeds out those who are less committed to a cause, reducing the problem of poor performance, side switching, or betrayal. We have not yet seen the outbidding strategy take hold in the United States, but it’s easy to imagine it as right-wing groups proliferate. What ISIS did in Iraq and Syria provides a blueprint: The group invested heavily in internet propaganda, advertising its military strength and publicizing both the brutal acts it was willing to commit and the public services it was willing to provide to local populations. When it entered a town, it quickly targeted leaders of the opposition. If this were to occur in the United States, you would see one extreme group, such as Atomwaffen, escalating to ever-more brutal acts of violence, to prove that it was stronger, more capable, and more dedicated to the cause than other groups.

A final terror strategy is “spoiling.” Terrorists wield this tactic when they fear that more moderate groups—those that would put aside violence in exchange for, say, concessions from the government on immigration—will compromise and subvert the larger goal of establishing a new ethno-state. This strategy usually comes into play when relations between more moderate insurgent groups and the government are improving, and a peace agreement seems imminent. Terrorists know that most citizens will not support ongoing violence once a deal is in place. When Iranian radicals kidnapped fifty-two Americans in Tehran in 1979, it wasn’t because relations between the United States and Iran were worsening, but because there were signs of rapprochement: Three days earlier, Mehdi Bazargan, Iran’s relatively moderate prime minister, and Zbigniew Brzezinski, the U.S. national security adviser, had appeared in a photograph together shaking hands.
The radicals knew that reconciliation between the two countries would be disastrous for them, so they did whatever they could to prevent it. Arab-Israeli peace negotiations, and talks between Protestants and Catholics in Northern Ireland, have also been “spoiled” in this way.

Most countries that were able to avoid a second civil war shared an ability to strengthen the quality of their governance. They doubled down on democracy and moved up the polity scale. Mozambique did this after its civil war ended in 1992, when the country moved from one-party rule to multiparty elections. In the wake of a conflict that ended in 2003, Liberia increased institutional restraints on presidential power and pushed for more judicial independence. Countries that created more transparent and participatory political environments and limited the power of their executive branch were less susceptible to repeat episodes of violence.

Free elections are the central mechanism of accountability in a democracy, but unlike many other countries, America lacks an independent and centralized election management system. According to the political scientist Pippa Norris, an elections expert and the founding director of Harvard University’s Electoral Integrity Project, almost every new democracy going through a transition sets up a central independent election management system to protect the integrity of elections. This helps to build trust in the electoral process. Uruguay, Costa Rica, and South Korea all did this when they created their democracies. Large federal democracies such as Australia, Canada, India, and Nigeria have also managed their elections this way. Canada’s election system is run by Elections Canada, and all voters follow the same procedures no matter where they live. An independent and centralized election management system establishes a standard procedure for designing and printing ballots and tabulating votes accurately and securely, untainted by partisan politics. It can handle legal disputes without the involvement of politicized courts. In a 2019 report, the Electoral Integrity Project examined countries’ electoral laws and processes and found that the quality of U.S. elections from 2012 to 2018 was “lower than any other long-established democracies and affluent societies.” The United States received the same score as Mexico and Panama, and a much lower score than Costa Rica, Uruguay, and Chile. This is the reason why it is easier to spread claims about voter fraud in the United States, and why Americans are more likely to question the results.

To fulfill the promise of a truly multiethnic democracy, the nation must navigate deep peril. We need to shore up our democracy, stay out of the anocracy zone, and rein in social media, which will help reduce factionalism. This will give us a chance to avoid a second civil war.


Key Points from Book: Wanting – The Power of Mimetic Desire in Everyday Life


by Luke Burgis

Girard discovered that most of what we desire is mimetic (mi-met-ik) or imitative, not intrinsic. Humans learn—through imitation—to want the same things other people want, just as they learn how to speak the same language and play by the same cultural rules. Imitation plays a far more pervasive role in our society than anyone had ever openly acknowledged.

It means learning something new about your own past that explains how your identity has been shaped and why certain people and things have exerted more influence over you than others. It means coming to grips with a force that permeates human relationships—relationships which you are, at this moment, involved in. You can never be a neutral observer of mimetic desire.

An unbelieved truth is often more dangerous than a lie. The lie in this case is the idea that I want things entirely on my own, uninfluenced by others, that I’m the sovereign king of deciding what is wantable and what is not. The truth is that my desires are derivative, mediated by others, and that I’m part of an ecology of desire that is bigger than I can fully understand. By embracing the lie of my independent desires, I deceive only myself. But by rejecting the truth, I deny the consequences that my desires have for other people and theirs for me. It turns out the things we want matter far more than we know.

He uncovered something perplexing, something which seemed to be present in nearly all of the most compelling novels ever written: characters in these novels rely on other characters to show them what is worth wanting. They don’t spontaneously desire anything. Instead, their desires are formed by interacting with other characters who alter their goals and their behavior—most of all, their desires. Girard’s discovery was like the Newtonian revolution in physics, in which the forces governing the movement of objects can only be understood in a relational context. Desire, like gravity, does not reside autonomously in any one thing or person. It lives in the space between them.

The characters in the great novels are so realistic because they want things the way that we do—not spontaneously, not out of an inner chamber of authentic desire, not randomly, but through the imitation of someone else: their secret model.

Desire, as Girard used the word, does not mean the drive for food or sex or shelter or security. Those things are better called needs—they’re hardwired into our bodies. Biological needs don’t rely on imitation. If I’m dying of thirst in the desert, I don’t need anyone to show me that water is desirable. But after meeting our basic needs as creatures, we enter into the human universe of desire. And knowing what to want is much harder than knowing what to need.

Gravity causes people to fall physically to the ground. Mimetic desire causes people to fall in or out of love, or debt, or friendships, or business partnerships. Or it may subject them to the degrading slavery of being merely a product of their milieu.

Girard opened the very first session of his class Literature, Myth, and Prophecy with these words: “Human beings fight not because they are different, but because they are the same, and in their attempts to distinguish themselves have made themselves into enemy twins, human doubles in reciprocal violence.”

The biblical story of Cain and Abel is about Cain killing his brother, Abel, after his ritual sacrifice pleased God less than Abel’s. They both wanted the same thing—to win favor with God—which brought them into direct conflict with each other. In Girard’s view, the root of most violence is mimetic desire.

Thiel left the corporate world and co-founded Confinity with Max Levchin in 1998. He began to use his knowledge of mimetic theory to help him manage both the business and his life. When competitive rivalries flared up within his company, he gave each employee clearly defined and independent tasks so they didn’t compete with one another for the same responsibilities. This is important in a start-up environment where roles are often fluid. A company in which people are evaluated based on clear performance objectives—not their performance relative to one another—minimizes mimetic rivalries.

Models of desire are what make Facebook such a potent drug. Before Facebook, a person’s models came from a small set of people: friends, family, work, magazines, and maybe TV. After Facebook, everyone in the world is a potential model.

When a person’s identity becomes completely tied to a mimetic model, they can never truly escape that model because doing so would mean destroying their own reason for being.

The more that people are forced to be the same—the more pressure they feel to think and feel and want the same things—the more intensely they fight to differentiate themselves. And this is dangerous. Many cultures have had a myth in which twins commit violence against each other. There are at least five separate stories of sibling rivalry in the book of Genesis alone: Cain and Abel, Ishmael and Isaac, Esau and Jacob, Leah and Rachel, Joseph and his brothers. Stories of sibling rivalry are universal because they’re true—the more people are alike, the more likely they are to feel threatened. While technology is bringing the world closer together (Facebook’s stated mission), it is bringing our desires closer together and amplifying conflict. We are free to resist, but the mimetic forces are accelerating so quickly that we are close to becoming shackled.

In the days before the terrorist attacks of September 11, 2001, hijacker Mohammed Atta and his companions were carousing in south Florida bars and binge-playing video games. “Who asks about the souls of these men?” wondered Girard in his last book, Battling to the End. The Manichean division of the world into “evil” and “not evil” people never satisfied him. He saw the dynamics of mimetic rivalry at work in the rise of terrorism and class conflict. People don’t fight because they want different things; they fight because mimetic desire causes them to want the same things. The terrorists would not have been driven to destroy symbols of the West’s wealth and culture if, at some deep level, they had not secretly desired some of the same things. That’s why the Florida bars and video game–playing are an important piece of the puzzle. The mysterium iniquitatis (the mystery of evil) remains just that: mysterious. But mimetic theory reveals something important about it. The more people fight, the more they come to resemble each other. We should choose our enemies wisely, because we become like them.

Buried in a deeper layer of our psychology is the person or thing that caused us to want something in the first place. Desire requires models—people who endow things with value for us merely because they want the things. Models transfigure objects before our eyes. You walk into a consignment store with a friend and see racks filled with hundreds of shirts. Nothing jumps out at you. But the moment your friend becomes enamored with one specific shirt, it’s no longer a shirt on a rack. It’s the shirt that your friend Molly chose—the Molly who, by the way, is an assistant costume designer on major films. The moment she starts ogling the shirt, she sets it apart. It’s a different shirt than it was five seconds ago, before she started wanting it. “O hell! to choose love by another’s eyes!” says Hermia in Shakespeare’s A Midsummer Night’s Dream. It’s hell to know we have chosen anything by another’s eyes. But we do it all the time: we choose brands, schools, and dishes at a restaurant by them.

The Bible contains a story about the Romantic Lie at the dawn of humanity. Eve originally had no desire to eat the fruit from the forbidden tree—until the serpent modeled it. The serpent suggested a desire. That’s what models do. Suddenly, a fruit that had not aroused any particular desire became the most desirable fruit in the universe. Instantaneously. The fruit appeared irresistible because—and only after—it was modeled as a forbidden good.

Babies pick up on desire sometimes even from a glance of the eyes. We do the same thing. Meltzoff explains: “A mother looks at something. A baby takes that as a signal that the mother desires the object, or is at least paying attention to it because it must be important. The baby looks at the mother’s face, then at the object. She tries to understand the relationship between her mother and the object.” It’s not long before a baby can follow not just her mother’s eyes but even the intentions behind her actions.

Desire is our primordial concern. Long before people can articulate why they want something, they start wanting it. The motivational speaker Simon Sinek advises organizations and people to “start with why” (the title of one of his books), finding and communicating one’s purpose before anything else. But that is usually a post hoc rationalization of whatever it is we already wanted. Desire is the better place to start.

This natural and healthy concern in children about what other people want seems to morph in adulthood into an unhealthy concern about what other people want. It grows into mimesis. Adults do expertly what babies do clumsily. After all, each of us is a highly developed baby. Rather than learning what other people want so that we can help them get it, we secretly compete with them to possess it.

We’re so sensitive to imitation that we notice the slightest deviance from what we could call acceptable imitation. If we receive a response to an email or text that doesn’t sufficiently tone-match, we can go into a mini-crisis (Does she not like me? Does he think he’s superior to me? Did I do something wrong?). Communication practically runs on mimesis. In a study published in 2008 in the Journal of Experimental Social Psychology, sixty-two students were assigned to negotiate with other students. Those who mirrored others’ posture and speech reached a settlement 67 percent of the time, while those who didn’t reached a settlement 12.5 percent of the time.

He gave the illusion of autonomy—because that’s how people think desire works. Models are most powerful when they are hidden. If you want to make someone passionate about something, they have to believe the desire is their own.

It was as if her lack of desire for him affected the strength of his desire for her. What’s more, the interest that other men showed in her affected him. They were modeling her desirability to him. Through her withdrawal from him, she was modeling it, too. “I suddenly realized that she was both object and mediator for me—some kind of model,” Girard remembered. People don’t only model the desire for third parties or objects; they can also model the desire for themselves. Playing hard to get is a tried-and-true method to drive people crazy, but few ever ask why. Mimetic desire provides a clue. We are fascinated with models because they show us something worth wanting that is just beyond our reach—including their affection.

Or consider a sophomore in high school who posts a selfie to Instagram. She’s beaming next to her new boyfriend at a sushi restaurant. Immediately, her ex—who broke up with her only a few weeks ago, confident in his decision, and whom she hasn’t heard from since—starts texting her, confessing his love. “You don’t know what you want,” she tells him. “Make up your mind!” She’s right: he didn’t know what he wanted until he saw her with another guy—a senior, his older brother’s age, who is going to the University of North Carolina on a basketball scholarship. Her renewed desirability has nothing to do with how she looks in her Instagram photo; it’s a product of her being wanted by another man—and not just any man, but one who possesses all of the characteristics that her ex-boyfriend would like to have.

Elite colleges don’t keep their admissions rates low because they have to; they keep them low to protect the value of their brands.

The pride that makes a person believe they are unaffected by or inoculated against biases, weaknesses, or mimesis blinds them to their complicity in the game. If a news organization can convince its viewers that its programming is neutral, it disables their defense mechanisms. Big Tech companies do something similar. They present their technology as agnostic—as just a “platform.” And that’s true, so long as we evaluate it in a materialistic way, as bits and bytes. Yet, on a human level, social media companies have built engines of desire.

Desire is not a function of data. It’s a function of other people’s desires. What stock market analysts referred to as “mass psychosis” was not so psychotic after all. It was the phenomenon of mimetic desire that Girard had discovered more than fifty years earlier. In both bubbles and crashes, models are multiplied. Desire spreads at a speed so great we can’t wrap our rational brains around it. We might consider taking a different, more human, perspective. “Conformity is a powerful force that can counteract gravity for longer than skeptics expect,” writes Wall Street Journal finance columnist Jason Zweig. “Bubbles are neither rational nor irrational; they are profoundly human, and they will always be with us.”

We are generally fascinated with people who have a different relationship to desire, real or perceived. When people don’t seem to care what other people want or don’t want the same things, they seem otherworldly. They appear less affected by mimesis—anti-mimetic, even. And that’s fascinating, because most of us aren’t.

It’s as if everyone is saying, “Imitate me—but not too much,” because while everyone’s flattered by imitation, being copied too closely feels threatening.

That’s because rivalry is a function of proximity. When people are separated from us by enough time, space, money, or status, there is no way to compete seriously with them for the same opportunities. We don’t view models in Celebristan as threatening because they probably don’t care enough about us to adopt our desires as their own. There is another world, though, where most of us live the majority of our lives. We’ll call it Freshmanistan. People are in close contact and unspoken rivalry is common. Tiny differences are amplified. Models who live in Freshmanistan occupy the same social space as their imitators. We’re easily affected by what other people in Freshmanistan say or do or desire. It’s like being in our freshman year of high school, having to jostle for position and differentiate ourselves from a bunch of other people who are in the same situation. Competition is not only possible, it is the norm. And the similarity between the people competing makes the competition peculiar.

Girard believed that all true desire—the post-instinctual kind—is metaphysical. People are always in search of something that goes beyond the material world. If someone falls under the influence of a model who mediates the desire for a handbag, it’s not the handbag they are after. It’s the imagined newness of being they think it will bring. “Desire is not of this world,” Girard has said, “… it is in order to penetrate into another world that one desires, it is in order to be initiated into a radically foreign existence.”

Reflexivity in markets is partly what leads to market crashes and bubbles. Investors perceive there might be a crash, so they behave in a way that precipitates the crash.

People worry about what other people will think before they say something—which affects what they say. In other words, our perception of reality changes reality by altering the way we might otherwise act. This leads to a self-fulfilling circularity. This principle affects public and personal discourse. The German political scientist Elisabeth Noelle-Neumann coined the term “spiral of silence” in 1974 to refer to a phenomenon that we see often today: people’s willingness to speak freely depends upon their unconscious perceptions of how popular their opinions are. People who believe their opinions are not shared by anyone else are more likely to remain quiet; their silence itself increases the impression that no one else thinks as they do; this increases their feelings of isolation and artificially inflates the confidence of those with the majority opinion.
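The self-fulfilling circularity Noelle-Neumann describes can be made concrete with a toy agent-based simulation: each agent gauges the opinion climate from a small sample of the voices it can hear, falls silent when its own view looks unpopular, and thereby shrinks the audible support for that view even though no one actually changes their mind. This is a minimal illustrative sketch, not a model from the book; the sample size, the 30 percent silence threshold, and all parameters are assumptions.

```python
import random

def simulate_spiral(n_agents=1000, minority_share=0.3, rounds=20,
                    sample_size=10, threshold=0.3, seed=42):
    """Toy 'spiral of silence': opinions are fixed; only willingness to speak changes."""
    random.seed(seed)
    # Opinion 'B' is the minority view; everyone starts out willing to speak.
    opinions = ['B' if random.random() < minority_share else 'A'
                for _ in range(n_agents)]
    speaking = [True] * n_agents
    audible_minority = []
    for _ in range(rounds):
        # The "climate of opinion" is formed only by voices currently audible.
        voices = [op for op, s in zip(opinions, speaking) if s]
        new_speaking = []
        for op in opinions:
            # Each agent samples a handful of audible voices to estimate
            # how popular its own opinion is.
            sample = random.sample(voices, min(sample_size, len(voices)))
            perceived = sample.count(op) / len(sample)
            # Agents who perceive their view as unpopular fall silent.
            new_speaking.append(perceived >= threshold)
        speaking = new_speaking
        audible_minority.append(
            sum(1 for op, s in zip(opinions, speaking) if s and op == 'B'))
    return audible_minority

history = simulate_spiral()
print(history[0], '->', history[-1])
```

Under these assumptions the number of audible minority voices shrinks round over round: early silence makes the minority look even smaller in the next round's samples, which silences still more of its holders, while the majority's confidence is artificially inflated.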

Why do all hipsters look alike, and why does nobody identify themselves as one? The answer is mirrored imitation. Mirrors distort reality. They flip the sides on which things appear: your right hand appears on the left side in the mirror, and your left hand appears as if it’s on the right side. The mirror image is, in some sense, an image of opposites. Mirrored imitation, then, is imitation that does the opposite of whatever a rival does. It is reflexive to a rival by doing something different from what the rival models. When mimetic rivals are caught in a double bind, obsessed with each other, they go to any length to differentiate themselves. Their rival is a model for what not to desire. For a hipster, the rival is popular culture—he eschews anything popular and embraces what he believes to be eclectic, but he does so according to new models. According to Girard, “the effort to leave the beaten paths forces everyone into the same ditch.”

When one of the two parties to a rivalry renounces the rivalry, it defuses the other party’s desire. In a mimetic rivalry, objects take on value because the rival wants them. If the rival suddenly stops wanting something, so do we.

We imitate not for the sake of imitation itself but for the sake of differentiating ourselves—to try to forge an identity relative to other people.

Mimetic desire tends to move in one of two cycles. Cycle 1 is the negative cycle, in which mimetic desire leads to rivalry and conflict. This cycle runs on the false belief that other people have something that we don’t have and that there isn’t room for fulfillment of both their desires and ours. It comes from a mindset of scarcity, of fear, of anger. Cycle 2 is the positive cycle in which mimetic desire unites people in a shared desire for some common good. It comes from a mindset of abundance and mutual giving. This type of cycle transforms the world. People want something that they couldn’t imagine wanting before—and they help others go further, too.

Giro’s business flywheel, according to Collins, worked like this: “Invent great products; get elite athletes to use them; inspire Weekend Warriors to mimic their heroes; attract mainstream customers; and build brand power as more and more athletes use the products. But then, to maintain the ‘cool’ factor, set high prices and channel profits back into creating the next generation of great products that elite athletes want to use.”

Aristotle invented the word “entelechy” to refer to a thing that has its own principle of development within it, a vital force that propels it forward to become fully what it is.

C. S. Lewis called this invisible system the inner ring. It means that no matter where a person is in life, no matter how wealthy or popular a person is, there is always a desire to be on the inside of a certain ring and a terror of being left on the outside of it. “This desire [to be in the inner ring] is one of the great permanent mainsprings of human action,” Lewis said. “It is one of the factors which go to make up the world as we know it—this whole pell-mell of struggle, competition, confusion, graft, disappointment and advertisement.… As long as you are governed by that desire, you will never get what you want.”27 Zappos dismantled any visible signs of an outer ring. They forgot about the inner one.

Hierarchical Values

Tony’s project to make downtown Las Vegas an entrepreneurial hub and happy community was a noble one, in principle. Its downfall was an impoverished view of human nature. CEOs, teachers, policymakers, and others responsible for shaping an environment should understand how decisions affect people’s desires. As a city planner needs to consider the effect of parks and murals and bike paths on everything from traffic to crime, so a good leader needs to consider the impact of their decisions on human ecology—the web of relationships that affect human life and development. No aspect of human ecology is more overlooked than mimetic desire. Early on at one of my companies, I made the mistake of forming a way-too-serious flag football team that competed in a city league, not realizing that it divided our young start-up into factions. Having fun and freely associating outside of work was not a problem. The problem was that I, the CEO, was the one who organized and led the effort. At that stage of our company (there were only about ten of us), the idea and organization needed to come from someone other than me for it not to feel like a top-down imposition of cultural expectations. My football fanaticism inflamed a few rivalries and bent desires toward small-spirited goals rather than great ones.

A hierarchy of values is an antidote to mimetic conformity. If all values are treated as equal, then the one that wins out—especially at a time of crisis—is the one that is most mimetic.

Girard saw a close connection between mimetic desire and violence. “People everywhere today are exposed to a contagion of violence that perpetuates cycles of vengeance,” he said in his book The One by Whom Scandal Comes. “These interlocking episodes resemble each other, quite obviously, because they all imitate each other.”2 How do these cycles of vengeance start? Mimetic desire. “More and more, it seems to me,” wrote Girard in the same book, “modern individualism assumes the form of a desperate denial of the fact that, through mimetic desire, each of us seeks to impose his will upon his fellow man, whom he professes to love but more often despises.”3 These small, interpersonal conflicts are a microcosm of the instability that threatens the entire world. And before the world: our families, cities, institutions.

The word pharmakós is related to the English word “pharmacy.” In ancient Greece, the pharmakós was someone initially seen as a poison to the community. The people believed that they had to destroy or expel this person to protect themselves. The elimination of the pharmakós was the remedy to the problem. In this sense, the pharmakós was both the poison and the cure.

In real life, scapegoats are usually singled out due to some combination of the following: they have extreme personalities or neurodiversity (such as autism) or physical abnormalities that make them noticeable; they’re on the margins of society in terms of status or markets (they are outside the system, like the Amish or people who have chosen to live off the grid); they’re considered deviants in some way (their behavior falls outside societal norms, whether related to lifestyle, sexuality, or style of communication); they’re unable to fight back (this applies even to rulers or kings—when it is all against one, even the most powerful person is impotent); or they appear as if by magic without society knowing where they came from or how they got there, which makes them easy to blame as the cause of social unrest (climate change activist Greta Thunberg’s arrival in New York to speak at the United Nations on a zero-carbon yacht marks her as a potential scapegoat). All scapegoats have the power to unite people and defuse mimetic conflict. A scapegoat doesn’t have traditional power; a scapegoat has unifying power. A prisoner on death row possesses power that not even the state governor has. For a family or community in crisis, it can seem like only the death of that prisoner will bring them the kind of healing they seek. The prisoner, then, possesses a quasi-supernatural quality that no one else can stand in for. Only he can heal.

“A scapegoat remains effective as long as we believe in its guilt,” wrote Girard in his final book, Battling to the End: Conversations with Benoît Chantre. “Having a scapegoat means not knowing that we have one.”

As mentioned earlier, one of Jenny Holzer’s billboards in Times Square pleaded: “PROTECT ME FROM WHAT I WANT.” It drew attention because it was a sign of contradiction. Through its stark contrast with its surroundings, Holzer’s art drew communal attention to its message. And through it, people were drawn toward a more honest examination of themselves. The message led not to rivalry and blame and violence but to self-reflection and maybe even transformation. Consumer culture did not have to have the last word.

Some trends in goal setting: don’t make goals vague, grandiose, or trivial; make sure they’re SMART (specific, measurable, assignable, relevant, and time-based)2; make them FAST (another acronym: frequent, ambitious, specific, and transparent)3; have good OKRs (objectives and key results)4; put them in writing; share them with others for accountability. Goal setting has become very complicated. If someone tried to take all the latest tactics into account, it would be a wonder if they managed to set any goals at all.

The ultimate way to test desires—especially major life choices such as whether to marry someone or whether to quit your job and start a company—is to practice this same exercise but to do it while imagining yourself on your deathbed. Which choice leaves you more consoled? Which choice causes you more agitation? Steve Jobs, in his 2005 commencement speech at Stanford, noted, “Death is very likely the single best invention of life. It is life’s change agent. It clears out the old to make way for the new.” The deathbed is where unfulfilling desires are exposed. Transport yourself there now rather than waiting until later, when it might be too late.

The term “sour grapes” was popularized in one of Aesop’s fables. A fox sees a beautiful cluster of ripe grapes hanging from a high branch. The grapes look ready to burst with juice. His mouth begins watering. He tries to jump up and grab them, but he falls short. He tries again and again, but the grapes are always just out of reach. Finally, he sits down and concludes that the grapes must be sour and aren’t worth the effort anyway. He walks away scornfully. By calling the grapes “sour,” the fox invented a narrative in his mind to ease the pain of loss. If you accept this notion uncritically, then you might believe that one can’t legitimately despise rich people without first being rich, or scorn Ivy League schools without having gained admission to one, or reject the desire for three Michelin stars without first having earned them. To do so would be self-deception, resentment, weakness. Don’t believe that a person has to buy into and play a mimetic game and win before they can opt out of it with a clear conscience. If you decline an invitation to be on the reality TV show The Bachelor, rejecting it as a silly charade, does that mean that it’s sour grapes? Could you only criticize the show after you’ve won? Of course not. “Don’t knock it till you try it” is a sophomoric argument. Girard recognized that resentment is real—and that it happens primarily in the world of internal mediation (Freshmanistan), when we are inside a system of desire without social or critical distance from it.15 But only the worst kind of cynic believes that every renunciation necessarily has something to do with resentment.

Empathy is the ability to share in another person’s experience—but without imitating them (their speech, their beliefs, their actions, their feelings) and without identifying with them to the point that one’s own individuality and self-possession are lost. In this sense, empathy is anti-mimetic. Empathy could mean smiling and giving a cold bottle of water to people collecting signatures for a petition you would never sign—because it’s a sweltering day and you know what it’s like to be that hot, and you also know what it feels like to be that passionate about something you care about. It would not entail empty platitudes or white-lie niceties that we often say to people with whom we disagree; rather, it means finding a shared point of humanity through which to connect without sacrificing our integrity in the process. Empathy disrupts negative cycles of mimesis. A person who is able to empathize can enter into the experience of another person and share her thoughts and feelings without necessarily sharing her desires.

The distinction between thick and thin desires can’t easily be made based on feelings alone. Desires feel very strong when we’re young—to make a lot of money, date a person with certain physical attributes, or become famous. The feelings are often more intense the thinner a desire is. As we get older, many of our adolescent feelings of intense desire fade away. It’s not because we realize that some of the things we wanted are no longer attainable. It’s because we have more pattern recognition ability and so can recognize the kinds of desires that leave us unfulfilled. As a result, most people do learn to cultivate thicker desires as they age.

If you want to build a ship, don’t drum up the men to gather wood, divide the work and give orders. Instead, teach them to yearn for the vast and endless sea. —Antoine de Saint-Exupéry Hard times are coming, when we’ll be wanting the voices of writers who can see alternatives to how we live now … to other ways of being, and even imagine real grounds for hope. We’ll need writers who can remember freedom—poets, visionaries—realists of a larger reality. —Ursula K. Le Guin

“The goal of early childhood education should be to cultivate the child’s own desire to learn,” Montessori wrote in The Montessori Method. And elsewhere: “We must know how to call to the man which lies dormant within the soul of the child.”9 The desire to grow into mature adults—not the desire to earn A’s or win Little League games or get a sticker for good behavior—is each child’s primary and most important project, the thing each of them secretly cares most deeply about.

If truth is not confronted courageously, communicated effectively, and acted upon quickly, a company will never be able to adhere to reality and respond appropriately to it. The health of any human project that relies on the ability to adapt depends on the speed at which truth travels. That holds for a classroom, a family, and a country.

The passionate pursuit of truth is anti-mimetic because it strives to reach objective values, not mimetic values. Leaders who embrace and model the pursuit of truth—and who increase its speed within the organization—inoculate themselves from some of the more volatile movements of mimesis that masquerade as truth. Want a test? Try reading newspapers at least a week out of date. The mimetic fluff is easier to spot.

In Lean Startup lingo, the first version of a product is called a minimum viable product (or MVP). The MVP is “that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort.”18 (In the language of desire, the MVP corresponds to the minimal viable desires of customers.) After the MVP, you engage in continuous learning and improvement. The Lean Startup methodology has benefits. It saves idealistic entrepreneurs from heartache. It prevents wasted time and money, gets products to market faster, and opens up emergent possibilities for growth. That’s all good to a certain extent. An entrepreneur who does not give people things they want won’t be in business for long. But the Lean Startup technique is a model of entrepreneurship fundamentally based on immanent desire. It’s politics by polling, in which a candidate does whatever the polls tell them to do. This is not leading, but following. Sometimes it’s plain cowardice.

Engineering desires in robots or in humans raises serious questions about humanity’s future. Historian Yuval Noah Harari ends his book Sapiens: A Brief History of Humankind with these words: “But since we might soon be able to engineer our desires, too, the real question facing us is not ‘What do we want to become?,’ but ‘What do we want to want?’ Those who are not spooked by this question probably haven’t given it enough thought.” The question “What do we want to want?” is unsettling partly because, in a world of engineered desires, we have to wonder who is doing the engineering. But also because the question implies that it’s possible to want to want something, yet not be capable of wanting it.

NYU Stern School of Business professor Scott Galloway thinks that each of the Big Four tech companies taps into a deep-seated need in humanity.8 Google is like a deity that answers our questions (read: prayers); Facebook satisfies our need for love and belonging; Amazon fulfills the need for security, allowing us instantaneous access to goods in abundance (the company was there for us during COVID-19) to ensure our survival; and Apple appeals to our sex drive and the associated need for status, signaling one’s attractiveness as a mate by associating with a brand that is innovative, forward-thinking, and costly to own. In many ways, the Big Four tech companies are serving people’s needs better than churches do.9 They’re addressing desires better, too. The vast majority of people are not thinking about mere survival; they are trying to figure out what to want next and how they can get it. The Big Four tech companies supply answers to both.

There are two approaches that people commonly take to escape from Cycle 1. The first approach, engineering desire, is the approach of Silicon Valley, authoritarian governments, and the Cult of Experts. The first two use intelligence and data to centrally plan a system in which people want things that other people want them to want—things that benefit a certain group of people. This approach poses a serious threat to human agency. It also lacks respect for the capability of people to freely desire what is best for themselves and the people they love. The Cult of Experts, with their “Follow These Five Steps” approach to happiness, lacks respect for human complexity. The alternative is the transformation of desire. The engineering approach is like extractive industrial farming, which uses pesticides and tills the land with large machinery, then measures success by seasonal yield, shelf life, and uniformity. The transformation approach is like regenerative farming, which can transform a barren piece of land into rich soil according to the laws and dynamics of the ecosystem. In our case, the ecosystem is one of human ecology—and desire is its lifeblood.

Authoritarian regimes can only stay in existence so long as they can control what people want. We normally think of these regimes as controlling what people can and cannot do through laws, regulations, policing, and penalties. But their real victory comes not when they have authority over people’s actions; rather, their victory comes when they have authority over their desires. They don’t want to keep prisoners in cells; they want those prisoners to learn to love their cells. When there is no desire for change, their authority is complete. The purpose of a “reeducation” camp is not about relearning how to write or read or interpret history, or even how to think; it’s fundamentally about the reeducation of desire. Russian scholars Catriona Kelly and Vadim Volkov have pointed out in their essay “Directed Desires: Kul’turnost’ and Consumption” that the transition to Soviet Russia came about through what they call directed desires. There was a subtle campaign to direct people to want certain things and reject others. The idea of kulturnost, or culturedness in English, began to emerge. It was a right way to live based on shared Russian cultural values.

There are two different ways of thinking that correspond, respectively, to engineering desire and transforming it: calculating thought and meditative thought. I draw these distinctions loosely from the work of philosopher Martin Heidegger.27 Calculating thought is constantly searching, seeking, plotting how to reach an objective: to get from Point A to Point B, to beat the stock market, to get good grades, to win an argument. According to psychiatrist Iain McGilchrist, it’s the dominant form of thought in our technological culture. It leads to the relentless pursuit of objectives—usually without having analyzed whether the objectives are worthy to begin with.

Meditative thought, on the other hand, is patient thought. It is not the same thing as meditation. Meditative thought is simply slow, nonproductive thought. It’s not reactionary. It’s the kind of thought that, upon hearing news or experiencing something surprising, doesn’t immediately look for solutions. Instead, it asks a series of questions that help the asker sink down further into the reality: What is this new situation? What is behind it? Meditative thought is patient enough to allow the truth to reveal itself.

“Desire is a contract that you make with yourself to be unhappy until you get what you want,” he said.36 Ravikant is drawing on the perennial understanding of numerous spiritual traditions about the link between desire and suffering: desire is always for something we feel we lack, and it causes us to suffer.

Posted in Review and Ideas | 1 Comment

Lisbon Day 11: Flying Over Lisbon, Belem Tower, Jerónimos Monastery, Sunset at Miradouro da Senhora do Monte, Back to Montreal

We woke up quite early today, and despite the cloudy morning the temperature was already above 25 degrees Celsius. The plan was for my girlfriend to do her morning run while I flew my drone from Praça do Comércio, a large open space near the water that is a perfect take-off and landing zone.

Lisbon’s Southeast Area from the Sky
Praça do Comércio from the Sky

There were not many people early in the morning, and it seemed that workers had yet to arrive at their offices by 8 a.m., unlike in North America. I recorded several video clips and also took photographs of the city from above, which highlight the uniformity of the buildings’ rooftops. Most of the cafés and restaurants around the square were still closed, but luckily a small café on the northeast corner was already open (Martinho da Arcada). I waited there for my girlfriend to finish her run and had a glass of orange juice and my usual café con leche.

Lisbon’s Rooftop
Breakfast at Martinho da Arcada

My girlfriend finished her run half an hour after I sat down, and since she was quite sweaty, we went back to the hotel so she could shower first. We then had breakfast at Paul on Rua Augusta. Breakfast is our favorite time of the day. Few things are better than having a croissant and a cup of coffee on a patio while watching the world go by.

Public Tram in Lisbon, Portugal

As this was our last day of the trip, we had a few “must see” places left to visit. We took a slow walk to the pink street (Rua Nova do Carvalho) and also Rua De São Paulo to take some photographs of the iconic city tram. Unfortunately, the line to ride the tram to the top of the street was quite long, and after 15 minutes of waiting we gave up and opted instead to climb the staircase next to the buildings along the same alley the tram passes.

Rua De São Paulo in Lisbon, Portugal
Rua De São Paulo in Lisbon, Portugal

To save some time and energy, we took an Uber to Belem Tower and Jerónimos Monastery, both located 5.7 km west of where we were – a solid 70-minute walk, according to Google. We did not go inside the Belem Tower due to the long line-up, but instead sat on the steps facing the water and listened to a local musician playing his violin, which was beautiful.

Belem Tower, Lisbon
Local Musician in Front of Belem Tower, Lisbon

The Jerónimos Monastery was erected in the early 1500s near the launch point of Vasco da Gama’s first journey, and its construction was funded by a tax on the profits of the yearly Portuguese India Armadas. In 1880, da Gama’s remains were moved to new carved tombs in the nave of the monastery’s church, only a few meters away from the tombs of the kings Manuel I and John III, whom da Gama had served. It took us an hour to tour the complex, after which we had ice cream at one of the cafés on Rua de Belém – where many souvenir shops are also located.

Jerónimos Monastery in Lisbon, Portugal
Jerónimos Monastery in Lisbon, Portugal

Having been disappointed by our dinner the night before, we were less willing to try new restaurants in Lisbon. On the other hand, I had been drooling over the grilled seafood we ate two days ago from Monte Mar in the Timeout market. This time, we sat at the counter behind the restaurant rather than trying to find seats at the crowded tables in the center of the market, and satisfied ourselves with platters of Mediterranean seafood.

Monte Mar in Lisbon’s Timeout Market

For the rest of the day, we wandered east to a section of the city we had not yet been to. My girlfriend and I split up halfway, as she still wanted to walk while I preferred to find a café and sit. At 7:30 p.m. we met at a Starbucks near our hotel in Restauradores and went together to watch the sunset at Miradouro da Senhora do Monte, another hilltop open space that is perfect for enjoying the scenery of both the city and the Tagus River.

Miradouro de São Pedro de Alcântara, Lisbon
Sunset from Miradouro da Senhora do Monte

It was dark when we descended from the hill, and the small alleys were dimly lit. Nevertheless, walking around the old town of Lisbon after sunset has its own charm. The cobblestone roads, quiet streets, and the city’s colorful walls transport us to an era when life was simpler and perhaps, better.

The Small Street of Lisbon at Dark

We were not in the mood to have dinner in a restaurant, and knowing that we had to wake up at 3 a.m. – in six more hours – we instead got takeaway from a McDonald’s near our hotel and ate in our room.

The next morning, we departed for the airport, and things went relatively smoothly compared to our earlier flights. We arrived five hours early and were among the first in line to check our baggage. However, we again found that the airline staff were mostly new employees unaccustomed to processing a Canadian permanent resident card, which cost us an hour of waiting at the counter while a more experienced employee was called in. Despite that, we still had time to claim our tax refund and got some of our tax money back. And that was the end of our Spain and Portugal vacation!

Posted in Spain & Portugal, Travel | Leave a comment