Key Points from Book: Chip War

In today’s world, chips permeate all aspects of our lives, from powering our smartphones, TVs, and laptops to guiding ballistic missiles. War is no longer decided by which side has more tanks, fighter planes, or infantry. Drones and smart missiles that can lock onto targets with high precision are increasingly changing the balance of power on the battlefield. This has made access to advanced chips a matter of national security that neither the West nor China can take for granted. In October 2022, the U.S. announced new export controls that prohibit the sale to China of advanced chips and the equipment needed to make them, while also effectively banning U.S. citizens, residents, and green card holders from helping China develop its own semiconductor industry and catch up to the West. This matters greatly to China, which is still highly reliant on imports of advanced technology to power its industry, despite efforts to push for home-grown innovation. With a single company in Taiwan producing 92% of the world’s most advanced chips, the geopolitical stakes could not be higher.

In his book, Chris Miller beautifully outlines the history of how we got to where we are today and the various parties involved in developing the advanced chips that power our world. Most interesting to me is the parallel between China today and Japan in the 1980s–90s, when that country was one of the U.S.’s main technology rivals.

The United States still has a stranglehold on the silicon chips that gave Silicon Valley its name, though its position has weakened dangerously. China now spends more money each year importing chips than it spends on oil. These semiconductors are plugged into all manner of devices, from smartphones to refrigerators, that China consumes at home or exports worldwide. Armchair strategists theorize about China’s “Malacca Dilemma”—a reference to the main shipping channel between the Pacific and Indian Oceans—and the country’s ability to access supplies of oil and other commodities amid a crisis. Beijing, however, is more worried about a blockade measured in bytes rather than barrels. China is devoting its best minds and billions of dollars to developing its own semiconductor technology in a bid to free itself from America’s chip choke.

Apple makes precisely none of these chips. It buys most off-the-shelf: memory chips from Japan’s Kioxia, radio frequency chips from California’s Skyworks, audio chips from Cirrus Logic, based in Austin, Texas. Apple designs in-house the ultra-complex processors that run an iPhone’s operating system. But the Cupertino, California, colossus can’t manufacture these chips. Nor can any company in the United States, Europe, Japan, or China. Today, Apple’s most advanced processors—which are arguably the world’s most advanced semiconductors—can only be produced by a single company in a single building, the most expensive factory in human history, which on the morning of August 18, 2020, was only a couple dozen miles off the USS Mustin’s starboard bow.

America’s vast reserve of scientific expertise, nurtured by government research funding and strengthened by the ability to poach the best scientists from other countries, has provided the core knowledge driving technological advances forward. The country’s network of venture capital firms and its stock markets have provided the startup capital new firms need to grow—and have ruthlessly forced out failing companies. Meanwhile, the world’s largest consumer market in the U.S. has driven the growth that’s funded decades of R&D on new types of chips. Other countries have found it impossible to keep up on their own but have succeeded when they’ve deeply integrated themselves into Silicon Valley’s supply chains. Europe has isolated islands of semiconductor expertise, notably in producing the machine tools needed to make chips and in designing chip architectures. Asian governments, in Taiwan, South Korea, and Japan, have elbowed their way into the chip industry by subsidizing firms, funding training programs, keeping their exchange rates undervalued, and imposing tariffs on imported chips. This strategy has yielded certain capabilities that no other countries can replicate—but they’ve achieved what they have in partnership with Silicon Valley, continuing to rely fundamentally on U.S. tools, software, and customers.

The concentration of advanced chip manufacturing in Taiwan, South Korea, and elsewhere in East Asia isn’t an accident. A series of deliberate decisions by government officials and corporate executives created the far-flung supply chains we rely on today. Asia’s vast pool of cheap labor attracted chipmakers looking for low-cost factory workers. The region’s governments and corporations used offshored chip assembly facilities to learn about, and eventually domesticate, more advanced technologies. Washington’s foreign policy strategists embraced complex semiconductor supply chains as a tool to bind Asia to an American-led world. Capitalism’s inexorable demand for economic efficiency drove a constant push for cost cuts and corporate consolidation. The steady tempo of technological innovation that underwrote Moore’s Law required ever more complex materials, machinery, and processes that could only be supplied or funded via global markets. And our gargantuan demand for computing power only continues to grow.

MIT considered the Apollo guidance computer one of its proudest accomplishments, but Bob Noyce knew that it was his chips that made the Apollo computer tick. By 1964, Noyce bragged, the integrated circuits in Apollo computers had run for 19 million hours with only two failures, one of which was caused by physical damage when a computer was being moved. Chip sales to the Apollo program transformed Fairchild from a small startup into a firm with one thousand employees. Sales ballooned from $500,000 in 1958 to $21 million two years later. As Noyce ramped up production for NASA, he slashed prices for other customers. An integrated circuit that sold for $120 in December 1961 was discounted to $15 by the following October. NASA’s trust in integrated circuits to guide astronauts to the moon was an important stamp of approval. Fairchild’s Micrologic chips were no longer an untested technology; they were used in the most unforgiving and rugged environment: outer space.

When U.S. defense secretary Robert McNamara reformed military procurement to cut costs in the early 1960s, causing what some in the electronics industry called the “McNamara Depression,” Fairchild’s vision of chips for civilians seemed prescient. The company was the first to offer a full product line of off-the-shelf integrated circuits for civilian customers. Noyce slashed prices, too, gambling that this would drastically expand the civilian market for chips. In the mid-1960s, Fairchild chips that previously sold for $20 were cut to $2. At times Fairchild even sold products below manufacturing cost, hoping to convince more customers to try them. Thanks to falling prices, Fairchild began winning major contracts in the private sector. Annual U.S. computer sales grew from 1,000 in 1957 to 18,700 a decade later. By the mid-1960s, almost all these computers relied on integrated circuits. In 1966, Burroughs, a computer firm, ordered 20 million chips from Fairchild—more than twenty times what the Apollo program consumed. By 1968, the computer industry was buying as many chips as the military. Fairchild chips served 80 percent of this computer market. Bob Noyce’s price cuts had paid off, opening a new market for civilian computers that would drive chip sales for decades to come. Moore later argued that Noyce’s price cuts were as big an innovation as the technology inside Fairchild’s integrated circuits.

California’s Santa Clara Valley had benefitted immensely from the space race, which provided a crucial early customer. Yet by the time of the first lunar landing, Silicon Valley’s engineers had become far less dependent on defense and space contracts. Now they were focused on more earthly concerns. The chip market was booming. Fairchild’s success had already inspired several top employees to defect to competing chipmakers. Venture capital funding was pouring into startups that focused not on rockets but on corporate computers.

By the mid-1960s, the earliest integrated circuits were old news, too big and power-hungry to be very valuable. Compared to almost any other type of technology, semiconductor technology was racing forward. The size of transistors and their energy consumption was shrinking, while the computing power that could be packed on a square inch of silicon roughly doubled every two years. No other technology moved so quickly—so there was no other sector in which stealing last year’s design was such a hopeless strategy.
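The doubling cadence described above compounds fast, which is why copying last year's design was hopeless: by the time a stolen design was in production, the frontier had already moved. A minimal sketch of the arithmetic (the two-year doubling period is from the text; treating density as a smooth exponential, and the starting density of 1.0, are simplifying assumptions):

```python
def relative_density(years: float, doubling_period: float = 2.0) -> float:
    """Transistor density relative to a starting baseline of 1.0,
    assuming density doubles every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# After one two-year cycle, density has doubled:
print(relative_density(2))   # 2.0
# After a decade (five doublings), the gap is 32x:
print(relative_density(10))  # 32.0
```

The point of the sketch: a copier who is even two years behind is always a full generation behind, and the absolute gap widens with every cycle.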

Meanwhile, the “copy it” mentality meant, bizarrely, that the pathways of innovation in Soviet semiconductors were set by the United States. One of the most sensitive and secretive industries in the USSR therefore functioned like a poorly run outpost of Silicon Valley. Zelenograd was just another node in a globalizing network—with American chipmakers at the center.

Sony had the benefit of cheaper wages in Japan, but its business model was ultimately about innovation, product design, and marketing. Morita’s “license it” strategy couldn’t have been more different from the “copy it” tactics of Soviet Minister Shokin. Many Japanese companies had reputations for ruthless manufacturing efficiency. Sony excelled by identifying new markets and targeting them with impressive products using Silicon Valley’s newest circuitry technology. “Our plan is to lead the public with new products rather than ask them what kind of products they want,” Morita declared. “The public does not know what is possible, but we do.”

Interdependence wasn’t always easy. In 1959, the Electronics Industries Association appealed to the U.S. government for help lest Japanese imports undermine “national security”—and their own bottom line. But letting Japan build an electronics industry was part of U.S. Cold War strategy, so, during the 1960s, Washington never put much pressure on Tokyo over the issue. Trade publications like Electronics magazine—which might have been expected to take the side of U.S. companies—instead noted that “Japan is a keystone in America’s Pacific policy…. If she cannot enter into healthy commercial intercourse with the Western hemisphere and Europe, she will seek economic sustenance elsewhere,” like Communist China or the Soviet Union. U.S. strategy required letting Japan acquire advanced technology and build cutting-edge businesses. “A people with their history won’t be content to make transistor radios,” President Richard Nixon later observed. They had to be allowed, even encouraged, to develop more advanced technology.

Fairchild was the first semiconductor firm to offshore assembly in Asia, but Texas Instruments, Motorola, and others quickly followed. Within a decade, almost all U.S. chipmakers had foreign assembly facilities. Sporck began looking beyond Hong Kong. The city’s 25-cent hourly wages were only a tenth of American wages but were among the highest in Asia. In the mid-1960s, Taiwanese workers made 19 cents an hour, Malaysians 15 cents, Singaporeans 11 cents, and South Koreans only a dime. Sporck’s next stop was Singapore, a majority ethnic Chinese city-state whose leader, Lee Kuan Yew, had “pretty much outlawed” unions, as one Fairchild veteran remembered. Fairchild followed by opening a facility in the Malaysian city of Penang shortly thereafter. The semiconductor industry was globalizing decades before anyone had heard the word, laying the groundwork for the Asia-centric supply chains we know today.

Taiwan and the U.S. had been treaty allies since 1955, but amid the defeat in Vietnam, America’s security promises were looking shaky. From South Korea to Taiwan, Malaysia to Singapore, anti-Communist governments were seeking assurance that America’s retreat from Vietnam wouldn’t leave them standing alone. They were also seeking jobs and investment that could address the economic dissatisfaction that drove some of their populations toward Communism. Minister Li realized that Texas Instruments could help Taiwan solve both problems at once.

After initially accusing Mark Shepherd of being an imperialist, Minister Li quickly changed his tune. He realized a relationship with Texas Instruments could transform Taiwan’s economy, building industry and transferring technological know-how. Electronics assembly, meanwhile, would catalyze other investments, helping Taiwan produce more higher-value goods. As Americans grew skeptical of military commitments in Asia, Taiwan desperately needed to diversify its connections with the United States. Americans who weren’t interested in defending Taiwan might be willing to defend Texas Instruments. The more semiconductor plants on the island, and the more economic ties with the United States, the safer Taiwan would be. In July 1968, having smoothed over relations with the Taiwanese government, TI’s board of directors approved construction of the new facility in Taiwan. By August 1969, this plant was assembling its first devices. By 1980, it had shipped its billionth unit.

Intel planned to dominate the business of DRAM chips. Memory chips don’t need to be specialized, so chips with the same design can be used in many different types of devices. This makes it possible to produce them in large volumes. By contrast, the other main type of chips—those tasked with “computing” rather than “remembering”—were specially designed for each device, because every computing problem was different. A calculator worked differently than a missile’s guidance computer, for example, so until the 1970s, they used different types of logic chips. This specialization drove up cost, so Intel decided to focus on memory chips, where mass production would produce economies of scale.

By the 1980s, consumer electronics had become a Japanese specialty, with Sony leading the way in launching new consumer goods, grabbing market share from American rivals. At first Japanese firms succeeded by replicating U.S. rivals’ products, manufacturing them at higher quality and lower price. Some Japanese played up the idea that they excelled at implementation, whereas America was better at innovation. “We have no Dr. Noyces or Dr. Shockleys,” one Japanese journalist wrote, though the country had begun to accumulate its share of Nobel Prize winners. Yet prominent Japanese continued to downplay their country’s scientific successes, especially when speaking to American audiences. Sony’s research director, the famed physicist Makoto Kikuchi, told an American journalist that Japan had fewer geniuses than America, a country with “outstanding elites.” But America also had “a long tail” of people “with less than normal intelligence,” Kikuchi argued, explaining why Japan was better at mass manufacturing.

The U.S. had supported Japan’s postwar transformation into a transistor salesman. U.S. occupation authorities transferred knowledge about the invention of the transistor to Japanese physicists, while policymakers in Washington ensured Japanese firms like Sony could easily sell into U.S. markets. The aim of turning Japan into a country of democratic capitalists had worked. Now some Americans were asking whether it had worked too well. The strategy of empowering Japanese businesses seemed to be undermining America’s economic and technological edge.

Sporck saw Silicon Valley’s internal battles as fair fights, but thought Japan’s DRAM firms benefitted from intellectual property theft, protected markets, government subsidies, and cheap capital.

Jerry Sanders saw Silicon Valley’s biggest disadvantage as its high cost of capital. The Japanese “pay 6 percent, maybe 7 percent, for capital. I pay 18 percent on a good day,” he complained. Building advanced manufacturing facilities was brutally expensive, so the cost of credit was hugely important. A next-generation chip emerged roughly once every two years, requiring new facilities and new machinery. In the 1980s, U.S. interest rates reached 21.5 percent as the Federal Reserve sought to fight inflation. By contrast, Japanese DRAM firms got access to far cheaper capital. Chipmakers like Hitachi and Mitsubishi were part of vast conglomerates with close links to banks that provided large, long-term loans. Even when Japanese companies were unprofitable, their banks kept them afloat by extending credit long after American lenders would have driven them to bankruptcy. Japanese society was structurally geared to produce massive savings, because its postwar baby boom and rapid shift to one-child households created a glut of middle-aged families focused on saving for retirement. Japan’s skimpy social safety net provided a further incentive for saving.
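Sanders's complaint is easy to quantify. A rough sketch of the annual interest burden on a facility loan at the two rates he cites (the 7 percent and 18 percent figures are from the text; the $100 million principal is a hypothetical round number, since fab costs varied widely):

```python
def annual_interest(principal: int, rate_pct: int) -> int:
    """Simple annual interest, with the rate given as a whole percent.
    Integer arithmetic keeps the illustration exact."""
    return principal * rate_pct // 100

fab_loan = 100_000_000  # hypothetical $100M facility loan

japan_cost = annual_interest(fab_loan, 7)   # $7,000,000 per year
us_cost = annual_interest(fab_loan, 18)     # $18,000,000 per year

# The American borrower pays $11M more per year before selling a single chip:
print(us_cost - japan_cost)
```

Since a new generation of chips arrived roughly every two years, each forcing fresh borrowing for new equipment, this per-loan gap recurred generation after generation, which is why the cost of credit mattered so much.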

With this cheap capital, Japanese firms launched a relentless struggle for market share. Toshiba, Fujitsu, and others were just as ruthless in competing with each other, despite the cooperative image painted by some American analysts. Yet with practically unlimited bank loans available, they could sustain losses as they waited for competitors to go bankrupt. In the early 1980s, Japanese firms invested 60 percent more than their U.S. rivals in production equipment, even though everyone in the industry faced the same cutthroat competition, with hardly anyone making much profit. Japanese chipmakers kept investing and producing, grabbing more and more market share. Because of this, five years after the 64K DRAM chip was introduced, Intel—the company that had pioneered DRAM chips a decade earlier—was left with only 1.7 percent of the global DRAM market, while Japanese competitors’ market share soared.

In 1987, Nobel Prize–winning MIT economist Robert Solow, who pioneered the study of productivity and economic growth, argued that the chip industry suffered from an “unstable structure,” with employees job hopping between firms and companies declining to invest in their workers. Prominent economist Robert Reich lamented the “paper entrepreneurialism” in Silicon Valley, which he thought focused too much on the search for prestige and affluence rather than technical advances. At American universities, he declared, “science and engineering programs are foundering.” American chipmakers’ DRAM disaster was somewhat related to GCA’s collapsing market share. The Japanese DRAM firms that were outcompeting Silicon Valley preferred to buy from Japanese toolmakers, benefitting Nikon at the expense of GCA. However, most of GCA’s problems were homegrown, driven by unreliable equipment and bad customer service. Academics devised elaborate theories to explain how Japan’s huge conglomerates were better at manufacturing than America’s small startups. But the mundane reality was that GCA didn’t listen to its customers, while Nikon did. Chip firms that interacted with GCA found it “arrogant” and “not responsive.” No one said that about its Japanese rivals.

The oil embargoes of 1973 and 1979 had demonstrated to many Americans the risks of relying on foreign production. When Arab governments cut oil exports to punish America for supporting Israel, the U.S. economy plunged into a painful recession. A decade of stagflation and political crises followed. American foreign policy fixated on the Persian Gulf and securing its oil supplies. President Jimmy Carter declared the region one of “the vital interests of the United States of America.” Ronald Reagan deployed the U.S. Navy to escort oil tankers in and out of the Gulf. George H. W. Bush went to war with Iraq in part to liberate Kuwait’s oil fields. When America said that oil was a “strategic” commodity, it backed the claim with military force.

But in 1986, Japan had overtaken America in the number of chips produced. By the end of the 1980s, Japan was supplying 70 percent of the world’s lithography equipment. America’s share—in an industry invented by Jay Lathrop in a U.S. military lab—had fallen to 21 percent. Lithography is “simply something we can’t lose, or we will find ourselves completely dependent on overseas manufacturers to make our most sensitive stuff,” one Defense Department official told the New York Times. But if the trends of the mid-1980s continued, Japan would dominate the DRAM industry and drive major U.S. producers out of business. The U.S. might find itself even more reliant on foreign chips and semiconductor manufacturing equipment than it was on oil, even at the depths of the Arab embargo. Suddenly Japan’s subsidies for its chip industry, widely blamed for undermining American firms like Intel and GCA, seemed like a national security issue.

As America lurched from crisis to crisis, however, the aura around men like Henry Kissinger and Pete Peterson began to wane. Their country’s system wasn’t working—but Japan’s was. By the 1980s, Morita perceived deep problems in America’s economy and society. America had long seen itself as Japan’s teacher, but Morita thought America had lessons to learn as it struggled with a growing trade deficit and the crisis in its high-tech industries. “The United States has been busy creating lawyers,” Morita lectured, while Japan has “been busier creating engineers.” Moreover, American executives were too focused on “this year’s profit,” in contrast to Japanese management, which was “long range.” American labor relations were hierarchical and “old style,” without enough training or motivation for shop-floor employees. Americans should stop complaining about Japan’s success, Morita believed. It was time to tell his American friends: Japan’s system simply worked better.

What made The Japan That Can Say No truly frightening to Washington was not only that it articulated a zero-sum Japanese nationalism, but that Ishihara had identified a way to coerce America. Japan didn’t need to submit to U.S. demands, Ishihara argued, because America relied on Japanese semiconductors. American military strength, he noted, required Japanese chips. “Whether it be mid-range nuclear weapons or inter-continental ballistic missiles, what ensures the accuracy of weapons is none other than compact, high-precision computers,” he wrote. “If Japanese semiconductors are not used, this accuracy cannot be assured.” Ishihara speculated that Japan could even provide advanced semiconductors to the USSR, tipping the military balance in the Cold War.

For a professor-turned-entrepreneur like Irwin Jacobs, DARPA funding and Defense Department contracts were crucial in keeping his startups afloat. But only some government programs worked. Sematech’s effort to save America’s lithography leader was an abject failure, for example. Government efforts were effective not when they tried to resuscitate failing firms, but when they capitalized on pre-existing American strengths, providing funding to let researchers turn smart ideas into prototype products. Members of Congress would no doubt have been furious had they learned that DARPA—ostensibly a defense agency—was wining and dining professors of computer science as they theorized about chip design. But it was efforts like these that shrank transistors, discovered new uses for semiconductors, drove new customers to buy them, and funded the subsequent generation of smaller transistors.

The U.S., Europe, and Japan had booming consumer markets that drove chip demand. Civilian semiconductor markets helped fund the specialization of the semiconductor supply chain, creating companies with expertise in everything from ultra-pure silicon wafers to the advanced optics in lithography equipment. The Soviet Union barely had a consumer market, so it produced only a fraction of the chips built in the West. One Soviet source estimated that Japan alone spent eight times as much on capital investment in microelectronics as the USSR.

As Bill Perry watched the Persian Gulf War unfold, he knew laser-guided bombs were just one of dozens of military systems that had been revolutionized by integrated circuits, enabling better surveillance, communication, and computing power. The Persian Gulf War was the first major test of Perry’s “offset strategy,” which had been devised after the Vietnam War but never deployed in a sizeable battle.

Then in 1990 crisis hit. Japan’s financial markets crashed. The economy slumped into a deep recession. Soon the Tokyo stock market was trading at half its 1990 level. Real estate prices in Tokyo fell even further. Japan’s economic miracle seemed to screech to a halt. Meanwhile, America was resurgent, in business and in war. In just a few short years, “Japan as Number One” no longer seemed very accurate. The case study in Japan’s malaise was the industry that had been held up as exemplary of Japan’s industrial prowess: semiconductors. Morita, now sixty-nine years old, watched Japan’s fortunes decline alongside Sony’s slumping stock price. He knew his country’s problems cut deeper than its financial markets. Morita had spent the previous decade lecturing Americans about their need to improve production quality, not focus on “money games” in financial markets. But as Japan’s stock market crashed, the country’s vaunted long-term thinking no longer looked so visionary. Japan’s seeming dominance had been built on an unsustainable foundation of government-backed overinvestment. Cheap capital had underwritten the construction of new semiconductor fabs, but also encouraged chipmakers to think less about profit and more about output. Japan’s biggest semiconductor firms doubled down on DRAM production even as lower cost producers like Micron and South Korea’s Samsung undercut Japanese rivals.

Like the rest of the Soviet military leadership, Marshal Nikolai Ogarkov had grown more pessimistic over time. As early as 1983, Ogarkov had gone so far as to tell American journalist Les Gelb—off the record—that “the Cold War is over and you have won.” The Soviet Union’s rockets were as powerful as ever. It had the world’s largest nuclear arsenal. But its semiconductor production couldn’t keep up, its computer industry fell behind, its communications and surveillance technologies lagged, and the military consequences were disastrous. “All modern military capability is based on economic innovation, technology, and economic strength,” Ogarkov explained to Gelb. “Military technology is based on computers. You are far, far ahead of us with computers…. In your country, every little child has a computer from age 5.”

When Chang was hired by Taiwan’s government in 1985 to lead the country’s preeminent electronics research institute, Taiwan was one of Asia’s leaders in assembling semiconductor devices—taking chips made abroad, testing them, and attaching them to plastic or ceramic packages. Taiwan’s government had tried breaking into the chipmaking business by licensing semiconductor manufacturing technology from America’s RCA and founding a chipmaker called UMC in 1980, but the company’s capabilities lagged far behind the cutting edge. Taiwan boasted plenty of semiconductor industry jobs, but captured only a small share of the profit, since most money in the chip industry was made by firms designing and producing the most advanced chips. Officials like Minister Li knew the country’s economy would keep growing only if it advanced beyond simply assembling components designed and fabricated elsewhere.

As early as the mid-1970s, while still at TI, Chang had toyed with the idea of creating a semiconductor company that would manufacture chips designed by customers. At the time, chip firms like TI, Intel, and Motorola mostly manufactured chips they had designed in-house. Chang pitched this new business model to fellow TI executives in March 1976. “The low cost of computing power,” he explained to his TI colleagues, “will open up a wealth of applications that are not now served by semiconductors,” creating new sources of demand for chips, which would soon be used in everything from phones to cars to dishwashers. The firms that made these goods lacked the expertise to produce semiconductors, so they’d prefer to outsource fabrication to a specialist, he reasoned. Moreover, as technology advanced and transistors shrank, the cost of manufacturing equipment and R&D would rise. Only companies that produced large volumes of chips would be cost-competitive.

Before TSMC, a couple of small companies, mostly based in Silicon Valley, had tried building businesses around chip design, avoiding the cost of building their own fabs by outsourcing the manufacturing. These “fabless” firms were sometimes able to convince a bigger chipmaker with spare capacity to manufacture their chips. However, they always had second-class status behind the bigger chipmakers’ own production plans. Worse, they faced the constant risk that their manufacturing partners would steal their ideas. In addition, they had to navigate manufacturing processes that were slightly different at each big chipmaker. Not having to build fabs dramatically reduced startup costs, but counting on competitors to manufacture chips was always a risky business model.

However, Mao’s radicalism made it impossible to attract foreign investment or conduct serious science. The year after China produced its first integrated circuit, Mao plunged the country into the Cultural Revolution, arguing that expertise was a source of privilege that undermined socialist equality. Mao’s partisans waged war on the country’s educational system. Thousands of scientists and experts were sent to work as farmers in destitute villages. Many others were simply killed. Chairman Mao’s “Brilliant Directive issued on July 21, 1968” insisted that “it is essential to shorten the length of schooling, revolutionize education, put proletarian politics in command…. Students should be selected from among workers and peasants with practical experience, and they should return to production after a few years study.” The idea of building advanced industries with poorly educated employees was absurd. Even more so was Mao’s effort to keep out foreign technology and ideas. U.S. restrictions prevented China from buying advanced semiconductor equipment, but Mao added his own self-imposed embargo. He wanted complete self-reliance and accused his political rivals of trying to infect China’s chip industry with foreign parts, even though China couldn’t produce many advanced components itself.

The Cultural Revolution began to wane as Mao’s health declined in the early 1970s. Communist Party leaders eventually called scientists back from the countryside. They tried picking up the pieces in their labs. But China’s chip industry, which had lagged far behind Silicon Valley before the Cultural Revolution, was now far behind China’s neighbors, too. During the decade in which China had descended into revolutionary chaos, Intel had invented microprocessors, while Japan had grabbed a large share of the global DRAM market. China accomplished nothing beyond harassing its smartest citizens. By the mid-1970s, therefore, its chip industry was in a disastrous state. “Out of every 1,000 semiconductors we produce, only one is up to standard,” one party leader complained in 1975. “So much is being wasted.”

If anyone could build a chip industry in China, it was Richard Chang. He wouldn’t rely on nepotism or on foreign help. All the knowledge needed for a world-class fab was already in his head. While working at Texas Instruments, he’d opened new facilities for the company around the world. Why couldn’t he do the same in Shanghai? He founded the Semiconductor Manufacturing International Corporation (SMIC) in 2000, raising over $1.5 billion from international investors like Goldman Sachs, Motorola, and Toshiba. One analyst estimated that half of SMIC’s startup capital was provided by U.S. investors. Chang used these funds to hire hundreds of foreigners to operate SMIC’s fab, including at least four hundred from Taiwan.

When Dutch engineer Frits van Hout joined ASML in 1984 just after completing his master’s degree in physics, the company’s employees asked whether he’d joined voluntarily or been forced to take the job. Beyond its ties with Philips, “we had no facilities and no money,” van Hout remembered. Building vast in-house manufacturing processes for lithography tools would have been impossible. Instead, the company decided to assemble systems from components meticulously sourced from suppliers around the world. Relying on other companies for key components brought obvious risks, but ASML learned to manage them. Whereas Japanese competitors tried to build everything in-house, ASML could buy the best components on the market. As it began to focus on developing EUV tools, its ability to integrate components from different sources became its greatest strength. ASML’s second strength, unexpectedly, was its location in the Netherlands. In the 1980s and 1990s, the company was seen as neutral in the trade disputes between Japan and the United States. U.S. firms treated it like a trustworthy alternative to Nikon and Canon. For example, when Micron, the American DRAM startup, wanted to buy lithography tools, it turned to ASML rather than relying on one of the two main Japanese suppliers, each of which had deep ties with Micron’s DRAM competitors in Japan.

The computer industry was designed around x86, and Intel dominated that ecosystem, so x86 defines most PC architectures to this day. Intel’s x86 instruction set architecture also dominates the server business, which boomed as companies built ever larger data centers in the 2000s and then as businesses like Amazon Web Services, Microsoft Azure, and Google Cloud constructed the vast warehouses of servers that create “the cloud,” on which individuals and companies store data and run programs. In the 1990s and early 2000s, Intel had only a small share of the business of providing chips for servers, behind companies like IBM and HP. But Intel used its ability to design and manufacture cutting-edge processor chips to win data center market share and establish x86 as the industry standard there, too. By the mid-2000s, just as cloud computing was emerging, Intel had won a near monopoly over data center chips, competing only with AMD. Today, nearly every major data center uses x86 chips from either Intel or AMD. The cloud can’t function without their processors.

Shortly after the deal to put Intel’s chips in Mac computers, Jobs came back to Otellini with a new pitch. Would Intel build a chip for Apple’s newest product, a computerized phone? All cell phones used chips to run their operating systems and manage communication with cell phone networks, but Apple wanted its phone to function like a computer. It would need a powerful computer-style processor as a result. “They wanted to pay a certain price,” Otellini told journalist Alexis Madrigal after the fact, “and not a nickel more…. I couldn’t see it. It wasn’t one of these things you can make up on volume. And in hindsight, the forecasted cost was wrong and the volume was 100× what anyone thought.” Intel turned down the iPhone contract. Apple looked elsewhere for its phone chips. Jobs turned to Arm’s architecture, which unlike x86 was optimized for mobile devices that had to economize on power consumption. The early iPhone processors were produced by Samsung, which had followed TSMC into the foundry business. Otellini’s prediction that the iPhone would be a niche product proved horribly wrong. By the time he realized his mistake, however, it was too late. Intel would later scramble to win a share of the smartphone business. Despite eventually pouring billions of dollars into products for smartphones, Intel never had much to show for it. Apple dug a deep moat around its immensely profitable castle before Otellini and Intel realized what was happening.

Even within the semiconductor industry, it was easy to find counterpoints to Grove’s pessimism about offshoring expertise. Compared to the situation in the late 1980s, when Japanese competitors were beating Silicon Valley in terms of DRAM design and manufacturing, America’s chip ecosystem looked healthier. It wasn’t only Intel that was printing immense profits. Many fabless chip designers were, too. Except for the loss of cutting-edge lithography, America’s semiconductor manufacturing equipment firms generally thrived during the 2000s. Applied Materials remained the world’s largest semiconductor toolmaking company, building equipment like the machines that deposited thin films of chemicals on top of silicon wafers as they were processed. Lam Research had world-beating expertise in etching circuits into silicon wafers. And KLA, also based in Silicon Valley, had the world’s best tools for finding nanometer-sized errors on wafers and lithography masks. These three toolmakers were rolling out new generations of equipment that could deposit, etch, and measure features at the atomic scale, which would be crucial for making the next generation of chips. A couple of Japanese firms—notably Tokyo Electron—had capabilities comparable to those of America’s equipment makers. Nevertheless, it was basically impossible to make a leading-edge chip without using some American tools.

The history of the semiconductor industry didn’t suggest that U.S. leadership was guaranteed. America hadn’t outrun the Japanese in the 1980s, though it did in the 1990s. GCA hadn’t outrun Nikon or ASML in lithography. Micron was the only DRAM producer able to keep pace with East Asian rivals, while many other U.S. DRAM producers went bust. Through the end of the 2000s, Intel retained a lead over Samsung and TSMC in producing miniaturized transistors, but the gap had narrowed. Intel was running more slowly, though it still benefitted from its more advanced starting point. The U.S. was a leader in most types of chip design, though Taiwan’s MediaTek was proving that other countries could design chips, too. Van Atta saw few reasons for confidence and none for complacency. “The U.S. leadership position,” he warned in 2007, “will likely erode seriously over the next decade.” No one was listening.

By the 2000s, it was common to split the semiconductor industry into three categories. “Logic” refers to the processors that run smartphones, computers, and servers. “Memory” refers to DRAM, which provides the short-term memory computers need to operate, and flash, also called NAND, which remembers data over time. The third category of chips is more diffuse, including analog chips like sensors that convert visual or audio signals into digital data, radio frequency chips that communicate with cell phone networks, and semiconductors that manage how devices use electricity.

Unlike Samsung and Hynix, which produce most of their DRAM in South Korea, Micron’s long string of acquisitions left it with DRAM fabs in Japan, Taiwan, and Singapore as well as in the United States. Government subsidies in countries like Singapore encouraged Micron to maintain and expand fab capacity there. So even though an American company is one of the world’s three biggest DRAM producers, most DRAM manufacturing is in East Asia.

Every PC maker, from IBM to Compaq, had to use an Intel or an AMD chip for their main processor, because these two firms had a de facto monopoly on the x86 instruction set that PCs required. There was a lot more competition in the market for chips that rendered images on screens. The emergence of semiconductor foundries, and the driving down of startup costs, meant that it wasn’t only Silicon Valley aristocracy that could compete to build the best graphics processors. The company that eventually came to dominate the market for graphics chips, Nvidia, had its humble beginnings not in a trendy Palo Alto coffeehouse but in a Denny’s in a rough part of San Jose.

Nvidia’s first set of customers—video and computer game companies—might not have seemed like the cutting edge, yet the firm wagered that the future of graphics would be in producing complex, 3D images. Early PCs were a dull, drab, 2D world, because the computation required to display 3D images was immense.

Jacobs, whose faith in Moore’s Law was as strong as ever, thought a more complicated system of frequency-hopping would work better. Rather than keeping a given phone call on a certain frequency, he proposed moving call data between different frequencies, letting him cram more calls into available spectrum space. Most people thought he was right in theory, but that such a system would never work in practice. Voice quality would be low, they argued, and calls would be dropped. The amount of processing needed to move call data between frequencies and have it interpreted by a phone on the other end seemed enormous. Jacobs disagreed, founding a company called Qualcomm—Quality Communications—in 1985 to prove the point. He built a small network with a couple cell towers to prove it would work. Soon the entire industry realized Qualcomm’s system would make it possible to fit far more cell phone calls into existing spectrum space by relying on Moore’s Law to run the algorithms that make sense of all the radio waves bouncing around. For each generation of cell phone technology after 2G, Qualcomm contributed key ideas about how to transmit more data via the radio spectrum and sold specialized chips with the computing power capable of deciphering this cacophony of signals. The company’s patents are so fundamental it’s impossible to make a cell phone without them. Qualcomm soon diversified into a new business line, designing not only the modem chips in a phone that communicate with a cell network, but also the application processors that run a smartphone’s core systems. These chip designs are monumental engineering accomplishments, each built on tens of millions of lines of code.
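The core of Jacobs’s frequency-hopping idea can be sketched in a few lines of Python. This is a toy illustration only, with an invented channel count and a simple seeded generator, not Qualcomm’s actual algorithm: the key point is that a transmitter and receiver sharing a seed can independently derive the same hop pattern, so a call can jump between frequencies yet still be reassembled on the other end.

```python
import random

CHANNELS = list(range(8))  # hypothetical: 8 available frequency slots

def hop_sequence(call_seed: int, num_slots: int) -> list:
    """Return the channel this call uses in each time slot.

    Both ends of the call derive the sequence from the same seed,
    so the receiver can follow the hops without any extra signaling.
    """
    rng = random.Random(call_seed)  # shared seed = shared hop pattern
    return [rng.choice(CHANNELS) for _ in range(num_slots)]

# Transmitter and receiver compute identical sequences independently:
tx_hops = hop_sequence(call_seed=42, num_slots=10)
rx_hops = hop_sequence(call_seed=42, num_slots=10)
assert tx_hops == rx_hops
```

Because each call follows its own pseudorandom pattern across the shared channels, many calls can coexist in the same spectrum, with the heavy lifting done by the processing power Moore’s Law kept delivering.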

fabless chip design firms were hungry for a credible competitor to TSMC, because the Taiwanese behemoth already had around half of the world’s foundry market. The only other major competitor was Samsung, whose foundry business had technology that was roughly comparable to TSMC’s, though the company possessed far less production capacity. Complications arose, though, because part of Samsung’s operation involved building chips that it designed in-house. Whereas a company like TSMC builds chips for dozens of customers and focuses relentlessly on keeping them happy, Samsung had its own line of smartphones and other consumer electronics, so it was competing with many of its customers. Those firms worried that ideas shared with Samsung’s chip foundry might end up in other Samsung products. TSMC and GlobalFoundries had no such conflicts of interest.

As Jobs introduced new versions of the iPhone, he began etching his vision for the smartphone into Apple’s own silicon chips. A year after launching the iPhone, Apple bought a small Silicon Valley chip design firm called PA Semi that had expertise in energy-efficient processing. Soon Apple began hiring some of the industry’s best chip designers. Two years later, the company announced it had designed its own application processor, the A4, which it used in the new iPad and the iPhone 4. Designing chips as complex as the processors that run smartphones is expensive, which is why most low- and midrange smartphone companies buy off-the-shelf chips from companies like Qualcomm. However, Apple has invested heavily in R&D and chip design facilities in Bavaria and Israel as well as Silicon Valley, where engineers design its newest chips. Now Apple not only designs the main processors for most of its devices but also ancillary chips that run accessories like AirPods. This investment in specialized silicon explains why Apple’s products work so smoothly. Within four years of the iPhone’s launch, Apple was making over 60 percent of all the world’s profits from smartphone sales, crushing rivals like Nokia and BlackBerry and leaving East Asian smartphone makers to compete in the low-margin market for cheap phones.

In the early 2010s, Nvidia—the designer of graphic chips—began hearing rumors of PhD students at Stanford using Nvidia’s graphics processing units (GPUs) for something other than graphics. GPUs were designed to work differently from standard Intel or AMD CPUs, which are infinitely flexible but run all their calculations one after the other. GPUs, by contrast, are designed to run multiple iterations of the same calculation at once. This type of “parallel processing,” it soon became clear, had uses beyond controlling pixels of images in computer games. It could also train AI systems efficiently. Where a CPU would feed an algorithm many pieces of data, one after the other, a GPU could process multiple pieces of data simultaneously. To learn to recognize images of cats, a CPU would process pixel after pixel, while a GPU could “look” at many pixels at once. So the time needed to train a computer to recognize cats decreased dramatically. Nvidia has since bet its future on artificial intelligence. From its founding, Nvidia outsourced its manufacturing, largely to TSMC, and focused relentlessly on designing new generations of GPUs and rolling out regular improvements to its special programming language called CUDA that makes it straightforward to devise programs that use Nvidia’s chips. As investors bet that data centers will require ever more GPUs, Nvidia has become America’s most valuable semiconductor company.
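The sequential-versus-parallel difference described above can be made concrete with a toy step counter. This is a sketch of the scheduling arithmetic only, not real GPU code; the pixel and lane counts are invented for illustration:

```python
def cpu_steps(num_pixels: int) -> int:
    """CPU-style processing: one pixel per step, strictly in order."""
    steps = 0
    for _ in range(num_pixels):
        steps += 1  # handle a single pixel
    return steps

def gpu_steps(num_pixels: int, lanes: int) -> int:
    """GPU-style processing: `lanes` pixels handled simultaneously
    in each step (the same calculation run on many inputs at once)."""
    steps = 0
    remaining = num_pixels
    while remaining > 0:
        remaining -= lanes  # a whole batch of pixels per step
        steps += 1
    return steps

pixels = 1_000_000  # a hypothetical one-megapixel cat photo
print(cpu_steps(pixels))        # 1000000 sequential steps
print(gpu_steps(pixels, 1024))  # 977 steps with 1,024 parallel lanes
```

The thousandfold reduction in steps is the essence of why training time for tasks like image recognition dropped so dramatically once the work moved to GPUs.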

Whether it will be Nvidia or the big cloud companies doing the vanquishing, Intel’s near-monopoly in sales of processors for data centers is ending. Losing this dominant position would have been less problematic if Intel had found new markets. However, the company’s foray into the foundry business in the mid-2010s, where it tried to compete head-on with TSMC, was a flop. Intel tried opening its manufacturing lines to any customers looking for chipmaking services, quietly admitting that the model of integrated design and manufacturing wasn’t nearly as successful as Intel’s executives claimed. The company had all the ingredients to become a major foundry player, including advanced technology and massive production capacity, but succeeding would have required a major cultural change. TSMC was open with intellectual property, but Intel was closed off and secretive. TSMC was service-oriented, while Intel thought customers should follow its own rules. TSMC didn’t compete with its customers, since it didn’t design any chips. Intel was the industry giant whose chips competed with almost everyone.

Why, then, was Xi Jinping worried about digital security? The more China’s leaders studied their technological capabilities, the less important their internet companies seemed. China’s digital world runs on digits—1s and 0s—that are processed and stored mostly by imported semiconductors. China’s tech giants depend on data centers full of foreign, largely U.S.-produced, chips. The documents that Edward Snowden leaked in 2013 before fleeing to Russia demonstrated American network-tapping capabilities that surprised even the cyber sleuths in Beijing. Chinese firms had replicated Silicon Valley’s expertise in building software for e-commerce, online search, and digital payments. But all this software relies on foreign hardware. When it comes to the core technologies that undergird computing, China is staggeringly reliant on foreign products, many of which are designed in Silicon Valley and almost all of which are produced by firms based in the U.S. or one of its allies.

China’s problem isn’t only in chip fabrication. In nearly every step of the process of producing semiconductors, China is staggeringly dependent on foreign technology, almost all of which is controlled by China’s geopolitical rivals—Taiwan, Japan, South Korea, or the United States. The software tools used to design chips are dominated by U.S. firms, while China has less than 1 percent of the global software tool market, according to data aggregated by scholars at Georgetown University’s Center for Security and Emerging Technology. When it comes to core intellectual property, the building blocks of transistor patterns from which many chips are designed, China’s market share is 2 percent; most of the rest is American or British. China supplies 4 percent of the world’s silicon wafers and other chipmaking materials; 1 percent of the tools used to fabricate chips; 5 percent of the market for chip designs. It has only a 7 percent market share in the business of fabricating chips. None of this fabrication capacity involves high-value, leading-edge technology.

China was disadvantaged, however, by the government’s desire not to build connections with Silicon Valley, but to break free of it. Japan, South Korea, the Netherlands, and Taiwan had come to dominate important steps of the semiconductor production process by integrating deeply with the U.S. chip industry. Taiwan’s foundry industry only grew rich thanks to America’s fabless firms, while ASML’s most advanced lithography tools only work thanks to specialized light sources produced at the company’s San Diego subsidiary. Despite occasional tension over trade, these countries have similar interests and worldviews, so mutual reliance on each other for chip designs, tools, and fabrication services was seen as a reasonable price to pay for the efficiency of globalized production. If China only wanted a bigger part in this ecosystem, its ambitions could’ve been accommodated. However, Beijing wasn’t looking for a better position in a system dominated by America and its friends. Xi’s call to “assault the fortifications” wasn’t a request for slightly higher market share. It was about remaking the world’s semiconductor industry, not integrating with it. Some economic policymakers and semiconductor industry executives in China would have preferred a strategy of deeper integration, yet leaders in Beijing, who thought more about security than efficiency, saw interdependence as a threat. The Made in China 2025 plan didn’t advocate economic integration but the opposite. It called for slashing China’s dependence on imported chips. The primary target of the Made in China 2025 plan is to reduce the share of foreign chips used in China.

The most controversial example of technology transfer, however, was by Intel’s archrival, AMD. In the mid-2010s, the company was struggling financially, having lost PC and data center market share to Intel. AMD was never on the brink of bankruptcy, but it wasn’t far from it, either. The company was looking for cash to buy time as it brought new products to market. In 2013, it sold its corporate headquarters in Austin, Texas, to raise cash, for example. In 2016, it sold to a Chinese firm an 85 percent stake in its semiconductor assembly, testing, and packaging facilities in Penang, Malaysia, and Suzhou, China, for $371 million. AMD described these facilities as “world-class.” That same year, AMD cut a deal with a consortium of Chinese firms and government bodies to license the production of modified x86 chips for the Chinese market. The deal, which was deeply controversial within the industry and in Washington, was structured in a way that didn’t require the approval of CFIUS, the U.S. government committee that reviews foreign purchases of American assets. AMD took the transaction to the relevant authorities in the Commerce Department, who don’t “know anything about microprocessors, or semiconductors, or China,” as one industry insider put it. Intel reportedly warned the government about the deal, implying that it harmed U.S. interests and that it would threaten Intel’s business. Yet the government lacked a straightforward way to stop it, so the deal was ultimately waved through, sparking anger in Congress and in the Pentagon. Just as AMD finalized the deal, its new processor series, called “Zen,” began hitting the market, turning around the company’s fortunes, so AMD ended up not depending on the money from its licensing deal. However, the joint venture had already been signed and the technology was transferred. 
The Wall Street Journal ran multiple stories arguing that AMD had sold “crown jewels” and “the keys to the kingdom.” Other industry analysts suggested the transaction was designed to let Chinese firms claim to the Chinese government they were designing cutting-edge microprocessors in China, when in reality they were simply tweaking AMD designs. The transaction was portrayed in English-language media as a minor licensing deal, but leading Chinese experts told state-owned media the deal supported China’s effort to domesticate “core technologies” so that “we no longer can be pulled around by our noses.” Pentagon officials who opposed the deal agree that AMD scrupulously followed the letter of the law, but say they remain unconvinced the transaction was as innocuous as defenders claim. “I continue to be very skeptical we were getting the full story from AMD,” one former Pentagon official says. The Wall Street Journal reported that the joint venture involved Sugon, a Chinese supercomputer firm that has described “making contributions to China’s national defense and security” as its “fundamental mission.” AMD described Sugon as a “strategic partner” in press releases as recently as 2017, which was guaranteed to raise eyebrows in Washington.

Chipmakers jealously guard their critical technologies, of course. But almost every chip firm has non-core technology, in subsectors that they don’t lead, that they’d be happy to share for a price. When companies are losing market share or in need of financing, moreover, they don’t have the luxury of focusing on the long term. This gives China powerful levers to induce foreign chip firms to transfer technology, open production facilities, or license intellectual property, even when foreign companies realize they’re helping develop competitors. For chip firms, it’s often easier to raise funds in China than on Wall Street. Accepting Chinese capital can be an implicit requirement for doing business in the country.

The ties between Huawei and the Chinese state are well documented but explain little about how the company built a globe-spanning business. To understand the company’s expansion, it’s more helpful to compare Huawei’s trajectory to a different tech-focused conglomerate, South Korea’s Samsung. Ren was born a generation after Samsung’s Lee Byung-Chul, but the two moguls have a similar operating model. Lee built Samsung from a trader of dried fish into a tech company churning out some of the world’s most advanced processor and memory chips by relying on three strategies. First, assiduously cultivate political relationships to garner favorable regulation and cheap capital. Second, identify products pioneered in the West and Japan and learn to build them at equivalent quality and lower cost. Third, globalize relentlessly, not only to seek new customers but also to learn by competing with the world’s best companies. Executing these strategies made Samsung one of the world’s biggest companies, achieving revenues equivalent to 10 percent of South Korea’s entire GDP.

Huawei’s critics often allege that its success rests on a foundation of stolen intellectual property, though this is only partly true. The company has admitted to some prior intellectual property violations and has been accused of far more. In 2003, for example, Huawei acknowledged that 2 percent of the code in one of its routers was copied directly from Cisco, an American competitor. Canadian newspapers, meanwhile, have reported that the country’s spy agencies believe there was a Chinese-government-backed campaign of hacking and espionage against Canadian telecom giant Nortel in the 2000s, which allegedly benefitted Huawei. Theft of intellectual property may well have benefitted the company, but it can’t explain its success. No quantity of intellectual property or trade secrets is enough to build a business as big as Huawei. The company has developed efficient manufacturing processes that have driven down costs and built products that customers see as high-quality. Huawei’s spending on R&D, meanwhile, is world leading. The company spends several times more on R&D than other Chinese tech firms. Its roughly $15 billion annual R&D budget is paralleled by only a handful of firms, including tech companies like Google and Amazon, pharmaceutical companies like Merck, and carmakers like Daimler or Volkswagen. Even when weighing Huawei’s track record of intellectual property theft, the company’s multibillion-dollar R&D spending suggests a fundamentally different ethos than the “copy it” mentality of Soviet Zelenograd, or the many other Chinese firms that have tried to break into the chip industry on the cheap.

Beijing’s aim isn’t simply to match the U.S. system-by-system, but to develop capabilities that could “offset” American advantages, taking the Pentagon’s concept from the 1970s and turning it against the United States. China has fielded an array of weapons that systematically undermine U.S. advantages. China’s precision anti-ship missiles make it extremely dangerous for U.S. surface ships to transit the Taiwan Strait in a time of war, holding American naval power at bay. New air defense systems contest America’s ability to dominate the airspace in a conflict. Long-range land attack missiles threaten the network of American military bases from Japan to Guam. China’s anti-satellite weapons threaten to disable communications and GPS networks. China’s cyberwar capabilities haven’t been tested in wartime, but in a conflict the Chinese would try to bring down entire U.S. military systems. Meanwhile, in the electromagnetic spectrum, China might try to jam American communications and blind surveillance systems, leaving the U.S. military unable to see enemies or communicate with allies.

Measured by the number of AI experts, China appears to have capabilities that are comparable to America’s. Researchers at MacroPolo, a China-focused think tank, found that 29 percent of the world’s leading researchers in artificial intelligence are from China, as opposed to 20 percent from the U.S. and 18 percent from Europe. However, a staggering share of these experts end up working in the U.S., which employs 59 percent of the world’s top AI researchers. The combination of new visa and travel restrictions plus China’s effort to retain more researchers at home may neutralize America’s historical skill at stripping geopolitical rivals of their smartest minds.

The battle for the electromagnetic spectrum will be an invisible struggle conducted by semiconductors. Radar, jamming, and communications are all managed by complex radio frequency chips and digital-analog converters, which modulate signals to take advantage of open spectrum space, send signals in a specific direction, and try to confuse adversaries’ sensors. Simultaneously, powerful digital chips will run complex algorithms inside a radar or jammer that assess the signals received and decide what signals to send out in a matter of milliseconds. At stake is a military’s ability to see and to communicate. Autonomous drones won’t be worth much if the devices can’t determine where they are or where they’re heading.

DARPA’s budget is a couple of billion dollars per year, less than the R&D budgets of most of the industry’s biggest firms. Of course, DARPA spends a lot more on far-out research ideas, whereas companies like Intel and Qualcomm spend most of their money on projects that are only a couple of years from fruition. However, the U.S. government in general buys a smaller share of the world’s chips than ever before. The U.S. government bought almost all the early integrated circuits that Fairchild and Texas Instruments produced in the early 1960s. By the 1970s, that number had fallen to 10 to 15 percent. Now it’s around 2 percent of the U.S. chip market. As a buyer of chips, Apple CEO Tim Cook has more influence on the industry than any Pentagon official today.

Commerce Secretary Penny Pritzker gave a high-profile address in Washington on semiconductors, declaring it “imperative that semiconductor technology remains a central feature of American ingenuity and a driver of our economic growth. We cannot afford to cede our leadership.” She identified China as the central challenge, condemning “unfair trade practices and massive, non-market-based state intervention” and cited “new attempts by China to acquire companies and technology based on their government’s interest—not commercial objectives,” an accusation driven by Tsinghua Unigroup’s acquisition spree. With little time left in the Obama administration, however, there wasn’t much Pritzker could do. Rather, the administration’s modest goal was to start a discussion that—it hoped—the incoming Hillary Clinton administration would carry forward. Pritzker also ordered the Commerce Department to conduct a study of the semiconductor supply chain and promised to “make clear to China’s leaders at every opportunity that we will not accept a $150 billion industrial policy designed to appropriate this industry.” But it was easy to condemn China’s subsidies. It was far harder to make them stop.

U.S. intelligence had voiced concerns about Huawei’s alleged links to the Chinese government for many years, though it was only in the mid-2010s that the company and its smaller peer, ZTE, started attracting public attention. Both companies sold competing telecom equipment; ZTE was state-owned, while Huawei was private but was alleged by U.S. officials to have close ties with the government. Both companies had spent decades fighting allegations that they’d bribed officials in multiple countries to win contracts. And in 2016, during the final year of the Obama administration, both were accused of violating U.S. sanctions by supplying goods to Iran and North Korea. The Obama administration considered imposing financial sanctions on ZTE, which would have severed the company’s access to the international banking system, but instead opted to punish the company in 2016 by restricting U.S. firms from selling to it. Export controls like this had previously been used mostly against military targets, to stop the transfer of technology to companies supplying components to Iran’s missile program, for example. But the Commerce Department had broad authority to prohibit the export of civilian technologies, too. ZTE was highly reliant on American components in its systems—above all, American chips. However, in March 2017, before the threatened restrictions were implemented, the company signed a plea deal with the U.S. government and paid a fine, so the export restrictions were removed before they’d taken force.

Publicly, semiconductor CEOs and their lobbyists urged the new administration to work with China and encourage it to comply with trade agreements. Privately, they admitted this strategy was hopeless and feared that state-supported Chinese competitors would grab market share at their expense. The entire chip industry depended on sales to China—be it chipmakers like Intel, fabless designers like Qualcomm, or equipment manufacturers like Applied Materials.

Three companies dominate the world’s market for DRAM chips today: Micron and its two Korean rivals, Samsung and SK Hynix. Taiwanese firms spent billions trying to break into the DRAM business in the 1990s and 2000s but never managed to establish profitable businesses. The DRAM market requires economies of scale, so it’s difficult for small producers to be price competitive. Though Taiwan never succeeded in building a sustainable memory chip industry, both Japan and South Korea had focused on DRAM chips when they first entered the chip industry in the 1970s and 1980s. DRAM requires specialized know-how, advanced equipment, and large quantities of capital investment. Advanced equipment can generally be purchased off-the-shelf from the big American, Japanese, and Dutch toolmakers. The know-how is the hard part. When Samsung entered the business in the late 1980s, it licensed technology from Micron, opened an R&D facility in Silicon Valley, and hired dozens of American-trained PhDs. Another, faster, method for acquiring know-how is to poach employees and steal files.

There’s a long history in the chip industry of acquiring rivals’ technology, dating back to the string of allegations about Japanese intellectual property theft in the 1980s. Jinhua’s technique, however, was closer to the KGB’s Directorate T. First, Jinhua cut a deal with Taiwan’s UMC, which fabricated logic chips (not memory chips), whereby UMC would receive around $700 million in exchange for providing expertise in producing DRAM. Licensing agreements are common in the semiconductor industry, but this agreement had a twist: UMC was promising to provide DRAM technology, but it wasn’t in the DRAM business. So in September 2015, UMC hired multiple employees from Micron’s facility in Taiwan, starting with the president, Steven Chen, who was put in charge of developing UMC’s DRAM technology and managing its relationship with Jinhua. The next month, UMC hired a process manager at Micron’s Taiwan facility named J. T. Ho. Over the subsequent year, Ho received a series of documents from his former Micron colleague Kenny Wang, who was still working at the Idaho chipmaker’s facility in Taiwan. Eventually, Wang left Micron for UMC, bringing nine hundred files uploaded to Google Drive with him. Micron notified Taiwanese prosecutors of the conspiracy, and they began gathering evidence by tapping Wang’s phone. They soon accumulated enough evidence to bring charges against UMC, which had since filed for patents on some of the technology it had stolen from Micron. When Micron sued UMC and Jinhua for violating its patents, they countersued in China’s Fujian Province. A Fujian court ruled that Micron was responsible for violating UMC and Jinhua’s patents—patents that had been filed using material stolen from Micron. To “remedy” the situation, the Fuzhou Intermediate People’s Court banned Micron from selling twenty-six products in China, the company’s biggest market.
This was a perfect case study of the state-backed intellectual property theft that foreign companies operating in China had long complained of. The Taiwanese understood, of course, why the Chinese preferred not to abide by intellectual property rules. When Texas Instruments first arrived in Taiwan in the 1960s, Minister K. T. Li had sneered that “intellectual property rights are how imperialists bully backward countries.” Yet Taiwan had concluded it was better to respect intellectual property norms, especially as its companies began developing their own technologies and had their own patents to defend.

In May 2020, the administration tightened restrictions on Huawei further. Now, the Commerce Department declared, it would “protect U.S. national security by restricting Huawei’s ability to use U.S. technology and software to design and manufacture its semiconductors abroad.” The new Commerce Department rules didn’t simply stop the sale of U.S.-produced goods to Huawei. They restricted any goods made with U.S.-produced technology from being sold to Huawei, too. In a chip industry full of choke points, this meant almost any chip. TSMC can’t fabricate advanced chips for Huawei without using U.S. manufacturing equipment. Huawei can’t design chips without U.S.-produced software. Even China’s most advanced foundry, SMIC, relies extensively on U.S. tools. Huawei was simply cut off from the world’s entire chipmaking infrastructure, except for chips that the U.S. Commerce Department deigned to give it a special license to buy.

Since then, Huawei’s been forced to divest part of its smartphone business and its server business, since it can’t get the necessary chips. China’s rollout of its own 5G telecoms network, which was once a high-profile government priority, has been delayed due to chip shortages. After the U.S. restrictions took effect, other countries, notably Britain, decided to ban Huawei, reasoning that in the absence of U.S. chips the company would struggle to service its products.

It’s commonly argued that the escalating tech competition with the United States is like a “Sputnik moment” for China’s government. The allusion is to the United States’ fear after the launch of Sputnik in 1957 that it was falling behind its rival, driving Washington to pour funding into science and technology. China certainly faced a Sputnik-scale shock after the U.S. banned sales of chips to firms like Huawei.

Samsung and its smaller Korean rival SK Hynix benefit from the support of the Korean government but are stuck between China and the U.S., with each country trying to cajole South Korea’s chip giants into building more manufacturing capacity on its own soil. Samsung recently announced plans to expand and upgrade its facility for producing advanced logic chips in Austin, Texas, for example, an investment estimated to cost $17 billion. Both companies face scrutiny from the U.S. over proposals to upgrade their facilities in China, however. U.S. pressure to restrict the transfer of EUV tools to SK Hynix’s facility in Wuxi, China, is reportedly delaying its modernization—and presumably imposing a substantial cost on the company. South Korea isn’t the only country where chip companies and the government work as a “team,” to use President Moon’s phrase. Taiwan’s government remains fiercely protective of its chip industry, which it recognizes as its greatest source of leverage on the international stage. Morris Chang, now ostensibly fully retired from TSMC, has served as a trade envoy for Taiwan. His primary interest—and Taiwan’s—remains ensuring that TSMC retains its central role in the world’s chip industry. The company itself plans to invest over $100 billion between 2022 and 2024 to upgrade its technology and expand chipmaking capacity. Most of this money will be invested in Taiwan, though the company plans to upgrade its facility in Nanjing, China, and to open a new fab in Arizona. Neither of these new fabs will produce the most cutting-edge chips, however, so TSMC’s most advanced technology will remain in Taiwan.

The primary hope for advanced manufacturing in the United States is Intel. After years of drift, the company named Pat Gelsinger as CEO in 2021. Born in small-town Pennsylvania, Gelsinger started his career at Intel and was mentored by Andy Grove. He eventually left to take on senior roles at two cloud computing companies before he was brought back to turn Intel around. He’s set out an ambitious and expensive strategy with three prongs. The first is to regain manufacturing leadership, overtaking Samsung and TSMC. To do this, Gelsinger has cut a deal with ASML to let Intel acquire the first next-generation EUV machine, which is expected to be ready in 2025. If Intel can learn how to use these new tools before rivals, it could provide a technological edge. The second prong of Gelsinger’s strategy is launching a foundry business that will compete directly with Samsung and TSMC, producing chips for fabless firms and helping Intel win more market share. Intel’s spending heavily on new facilities in the U.S. and Europe to build capacity that potential future foundry customers will require.

If TSMC’s fabs were to slip into the Chelungpu Fault, whose movement caused Taiwan’s last big earthquake in 1999, the reverberations would shake the global economy. It would only take a handful of explosions, deliberate or accidental, to cause comparable damage. Some back-of-the-envelope calculations illustrate what’s at stake. Taiwan produces 11 percent of the world’s memory chips. More important, it fabricates 37 percent of the world’s logic chips. Computers, phones, data centers, and most other electronic devices simply can’t work without them, so if Taiwan’s fabs were knocked offline, we’d produce 37 percent less computing power during the following year.

After a disaster in Taiwan, in other words, the total costs would be measured in the trillions. Losing 37 percent of our production of computing power each year could well be more costly than the COVID pandemic and its economically disastrous lockdowns. It would take at least half a decade to rebuild the lost chipmaking capacity. These days, when we look five years out we hope to be building 5G networks and metaverses, but if Taiwan were taken offline we might find ourselves struggling to acquire dishwashers.
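The passage's back-of-envelope arithmetic can be sketched in a few lines. The 37 percent and 11 percent shares and the five-year rebuild horizon come from the text; the assumption that replacement capacity ramps up linearly is my own simplification for illustration, not a claim from the book.

```python
# Illustrative sketch of the book's back-of-envelope calculation.
# Shares are from the text; the linear rebuild ramp is an assumption.

TAIWAN_LOGIC_SHARE = 0.37   # Taiwan's share of world logic-chip fabrication
TAIWAN_MEMORY_SHARE = 0.11  # Taiwan's share of world memory-chip output
REBUILD_YEARS = 5           # "at least half a decade" to rebuild capacity

# Year-one shortfall if every Taiwanese fab went offline at once.
first_year_loss = TAIWAN_LOGIC_SHARE

# If replacement capacity ramps up linearly over the rebuild period, the
# cumulative shortfall is the area of a triangle: share * years / 2.
cumulative_loss_years = TAIWAN_LOGIC_SHARE * REBUILD_YEARS / 2

print(f"Logic output lost in year one: {first_year_loss:.0%}")
print(f"Memory output lost in year one: {TAIWAN_MEMORY_SHARE:.0%}")
print(f"Cumulative shortfall over the rebuild: about "
      f"{cumulative_loss_years:.1f} years of world logic output")
```

Under that simple ramp, the world forgoes nearly a full year's worth of logic-chip output in aggregate, which is why the total cost plausibly runs into the trillions.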

Neil Thompson and Svenja Spanuth, two researchers, have gone so far as to argue that we’re seeing a “decline of computers as a general purpose technology.” They think the future of computing will be divided between “‘fast lane’ applications that get powerful customized chips and ‘slow lane’ applications that get stuck using general-purpose chips whose progress fades.” It’s undeniable that the microprocessor, the workhorse of modern computing, is being partially displaced by chips made for specific purposes. What’s less clear is whether this is a problem. Nvidia’s GPUs are not general purpose like an Intel microprocessor, in the sense that they’re designed specifically for graphics and, increasingly, AI. However, Nvidia and other companies offering chips that are optimized for AI have made artificial intelligence far cheaper to implement, and therefore more widely accessible. AI has become a lot more “general purpose” today than was conceivable a decade ago, largely thanks to new, more powerful chips. The recent trend of big tech firms like Amazon and Google designing their own chips marks another change from recent decades. Both Amazon and Google entered the chip design business to improve the efficiency of the servers that run their publicly available clouds. Anyone can access Google’s TPU chips on Google’s cloud for a fee. The pessimistic view is to see this as a bifurcation of computing into a “slow lane” and a “fast lane.” What’s surprising, though, is how easy it is for almost anyone to access the fast lane by buying an Nvidia chip or by renting access to an AI-optimized cloud.

About Journeyman

A global macro analyst with over four years’ experience in financial markets, the author began his career as an equity analyst before transitioning to macro research focused on Emerging Markets at a well-known independent research firm. He reads voraciously, spending much of his free time with The Economist and books on finance and self-improvement. Off duty, he works part-time for Getty Images, taking pictures from all over the globe. To date, he has over 1,200 pictures from 35 countries sold through the company.
This entry was posted in Review and Ideas.