The world is changing rapidly, and many tasks historically deemed impossible to automate are now either done completely by robots or aided by them. With technological advancement only set to accelerate, what is the value of human work, and where is the trend heading? In his great, mind-blowing book, Daniel Susskind offers clues to the answers. The key points below are excerpts from the book and serve as a personal note for me and my friends.
A WORLD WITHOUT WORK: TECHNOLOGY, AUTOMATION, AND HOW WE SHOULD RESPOND by Daniel Susskind
It was John Maynard Keynes, the great British economist, who popularized the term “technological unemployment” almost fifty years before Leontief wrote down his worries, capturing in a pithy pairing of words the idea that new technologies might push people out of work. In what follows, I will draw on many of the economic arguments that have been developed since Keynes to try to gain a better look back at what happened in the past, and a clearer glimpse of what lies ahead. But I will also seek to go well beyond the narrow intellectual terrain inhabited by most economists working in this field. The future of work raises exciting and troubling questions that often have little to do with economics: questions about the nature of intelligence, about inequality and why it matters, about the political power of large technology companies, about what it means to live a meaningful life, about how we might live together in a world that looks very different from the one in which we have grown up. In my view, any story about the future of work that fails to engage with these questions as well is incomplete.
Even at the century’s end, tasks are likely to remain that are either hard to automate, unprofitable to automate, or possible and profitable to automate but which we will still prefer people to do.
Machines will not do everything in the future, but they will do more. And as they slowly, but relentlessly, take on more and more tasks, human beings will be forced to retreat to an ever-shrinking set of activities. It is unlikely that every person will be able to do what remains to be done; and there is no reason to imagine there will be enough demand for it to employ all those who are indeed able to do it.
A useful way of thinking about what this means is to consider the impact that automation has already had on farming and manufacturing in many parts of the world. Farmers and factory workers are still needed: those jobs have not completely vanished. But the number of workers needed has fallen in both cases, sometimes precipitously—even though these sectors produce more output than ever before.
But it is still helpful in highlighting what should actually be worrying us about the future: not a world without any work at all, as some predict, but a world without enough work for everyone to do.
It is not a coincidence that, today, worries about economic inequality are intensifying at the exact same time that anxiety about automation is growing. These two problems—inequality and technological unemployment—are very closely related. Today, the labor market is the main way that we share out economic prosperity in society: most people’s jobs are their main, if not their only, source of income. The vast inequalities we already see in the labor market, with some workers receiving far less for their efforts than others, show that this approach is already creaking.
Technological unemployment, in a strange way, will be a symptom of that success. In the twenty-first century, technological progress will solve one problem, the question of how to make the pie large enough for everyone to live on. But, as we have seen, it will replace it with three others: the problems of inequality, power, and purpose.
From the outset, it seems, economic growth and automation anxiety were intertwined.
Yes, people did tend to find new work after being displaced by technology—but the way in which this happened was far from being gentle or benign. Take the Industrial Revolution again, that textbook moment of technological progress. Despite the Luddites’ fears, the unemployment rate in Britain remained relatively low, as we can see in Figure 1.2. But, at the same time, whole industries were decimated, with lucrative crafts like hand weaving and candle making turned into profitless pastimes. Communities were hollowed out and entire cities thrust into decline. It is noteworthy that real wages in Britain barely rose—a measly 4 percent rise in total from 1760 to 1820. Meanwhile food became more expensive, diets were poorer, infant mortality worsened, and life expectancy fell.21 People were, quite literally, diminished: a historian reports that average physical heights fell to their “lowest ever levels” on account of this hardship.22
In the OECD—the Organisation for Economic Cooperation and Development, a club of several dozen rich countries—the average number of hours that people work each year has continuously fallen over the past fifty years. The decline has been slow, about forty-five hours a decade, but steady nonetheless.
Importantly, a large part of this decline appears to be associated with technological progress and the increases in productivity that came along with it. Germany, for instance, is among the most productive countries in Europe, and also the one where people work the fewest hours a year. Greece is among the least productive, and—contrary to what many might think—the one where people work the most hours a year.
Technological change may affect not only the amount of work, but also the nature of that work. How well-paid is the work? How secure is it? How long is the working day, or the working week? What sort of tasks does the work involve—is it the fulfilling sort of activity you leap out of bed in the morning to do, or the sort that keeps you hiding under the covers? The risk, in focusing on jobs alone, is not so much failing to see the proverbial forest for the trees, but failing to see all the different trees in the forest.
Yes, machines took the place of human beings in performing certain tasks. But machines didn’t just substitute for people; they also complemented them at other tasks that had not been automated, raising the demand for people to do that work instead. Throughout history, there have always been two distinct forces at play: the substituting force, which harmed workers, but also the helpful complementing force, which did the opposite.
new technologies may automate some tasks, taking them out of the hands of workers, but make those same workers more productive at the tasks that remain for them to do in their jobs.
In all these cases, if productivity increases are passed on to customers via lower prices or better-quality services, then the demand for whatever goods and services are being provided is likely to rise, and the demand for human workers along with it. Through the productivity effect, then, technological progress complements human beings in a very direct way, increasing the demand for their efforts by making them better at the work that they do.
technological progress has made the pie far bigger. As previously noted, over the last few hundred years, economic output has soared.
Intuitively, growth like this is likely to have helped workers. As an economy grows, and people become more prosperous with healthier incomes to spend, the opportunities for work are likely to improve. Yes, some tasks might be automated and lost to machines. But as the economy expands, and demand for goods and services rises along with it, demand will also increase for all the tasks that are needed to produce them. These may include activities that have not yet been automated, and so displaced workers can find work involving them instead.
Kenneth Arrow, a Nobel Prize–winning economist, likewise argued that historically, “the replacement of men by machines” has not increased unemployment. “The economy does find other jobs for workers. When wealth is created, people spend their money on something.”
If we think again of the economy as a pie, new technologies have not only made the pie bigger, but changed the pie, too. Take the British economy, for example. Its output, as we noted, is now more than a hundred times what it was three centuries ago. But that output, and the way it is produced, has also completely transformed. Three hundred years ago, the economy was largely made up of farms; one hundred and fifty years ago, of factories; today, of offices.
Again, it is intuitive to see how these changes might have helped displaced workers. At a certain moment, some tasks might be automated and lost to machines. But as the economy changes over time, demand will rise for other tasks elsewhere in the economy. And since some of these newly in-demand activities may, again, not have been automated, workers can find jobs involving them instead. To see this changing-pie effect in action, think about the United States. Here you can see displaced workers tumbling through a changing economy, time and again, into different industries and onto different tasks. A century ago, agriculture was a critical part of the American economy: back in 1900, it employed two in every five workers. But since then, agriculture has collapsed in importance and today it employs fewer than two in every hundred workers.37 Where did the rest of those workers go? Into manufacturing. Fifty years ago, that sector superseded agriculture: in fact, in 1970, manufacturing employed a quarter of all American workers. But then that sector also went into relative decline and today fewer than a tenth of American workers are employed in it.38 Where did these displaced factory workers go? The answer is the service sector, which now employs more than eight in ten workers.39 And there is nothing distinctly American about this story of economic transformation, either. Almost all developed economies have followed a similar path, and many less-developed economies are following it, too.40 In 1962, 82 percent of Chinese workers were employed in agriculture; today, that has fallen to around 31 percent, a larger and faster decline than the American one.41
The puzzle was that in the twentieth century, there were prolonged periods where the reverse appeared to happen in the world of work. In some countries, there was huge growth in the number of high-skilled people pouring out of colleges and universities, yet their wages appeared to rise rather than fall compared to those without this education. How could this be? The skill-biased story provided an answer. The supply of high-skilled workers did grow, pushing their wages downward, but new technologies were skill-biased and so caused the demand for high-skilled workers to soar. The latter effect was so great that it overcame the former, so even though there were more educated people looking for work, the demand for them was so strong that the amount they were paid still went up.
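A standard textbook rendering of this tug-of-war (my own sketch in conventional notation, not an equation from the book) puts the relative wage of high-skilled to low-skilled workers at

$$
\ln\left(\frac{w_H}{w_L}\right) = \frac{1}{\sigma}\left[D - \ln\left(\frac{H}{L}\right)\right],
$$

where $H/L$ is the relative supply of high-skilled workers, $D$ is an index of skill-biased demand, and $\sigma > 0$ is the elasticity of substitution between the two groups. Growing relative supply pushes the skill premium down; but if skill-biased technology makes $D$ grow faster than $\ln(H/L)$, the premium rises anyway, which is exactly the outcome described above.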
Another way to see the skill-biased story at work is to look at how wages have changed over time for a variety of different levels of schooling. This is shown in Figure 2.3. As the charts show, people with more years of schooling not only tend to earn more at every point in the past half century, but the gap between them and those with less schooling has tended to grow over time as well. (For women, this story becomes clearer from the 1980s onward.)
This longer view suggests that technological change has in fact favored different types of workers at different moments in history, not always benefiting those who might have been considered skilled at that particular time. Take the nineteenth century, for example. As we saw in the previous chapter, when the Industrial Revolution got under way in Britain, new machines were introduced to the workplace, new production processes were set up, and so new tasks had to be done. But it turned out that those without the skills of the day were often best placed to perform these tasks. Technology, rather than being skill-biased, was “unskill-biased” instead.
These new machines were “de-skilling,” making it easier for less-skilled people to produce high-quality wares that would have required skilled workers in the past.
At the turn of the twenty-first century, then, the conventional wisdom among economists was that technological progress was sometimes skill-biased, at other times unskill-biased. In either case, though, many economists tended to imagine that this progress always broadly benefited workers. Indeed, in the dominant model used in the field, it was impossible for new technologies to make either skilled or unskilled workers worse off; technological progress always raised everyone’s wages, though at a given time some more than others. This story was so widely told that leading economists referred to it as the “canonical model.”
Starting in the 1980s, new technologies appeared to help both low-skilled and high-skilled workers at the same time—but workers with middling skills did not appear to benefit at all. In many economies, if you took all the occupations and arranged them in a long line from the lowest-skilled to the highest-skilled, over the last few decades you would have often seen the pay and the share of jobs (as a proportion of total employment) grow for those at either end of the line, but wither for those near the middle.
This phenomenon is known as “polarization” or “hollowing out.” The traditionally plump midriffs of many economies, which have provided middle-class people with well-paid jobs in the past, are disappearing. In many countries, as a share of overall employment there are now more high-paid professionals and managers—as well as more low-paid caregivers and cleaners, teacher’s aides and hospital assistants, janitors and gardeners, waiters and hairdressers.17 But there are fewer middling-pay secretaries and administrative clerks, production workers and salespeople.18 Labor markets are becoming increasingly two-tiered and divided.
With time, it became clear that the level of education required by human beings to perform a given task—how “skilled” those people were—was not always a helpful indication of whether a machine would find that same task easy or difficult. Instead, what appeared to matter was whether the task itself was what the economists called “routine.” By “routine,” they did not mean that the task was necessarily boring or dull. Rather, a task was regarded as “routine” if human beings found it straightforward to explain how they performed it—if it relied on what is known as “explicit” knowledge, knowledge which is easy to articulate, rather than “tacit” knowledge, which is not.23
That was why labor markets around the world were being hollowed out, taking on hourglass figures. Technological change was eating away at the “routine” tasks clustered in the middle, but the “non-routine” tasks at either end were indigestible, left for human beings to undertake.
Technological progress, it appeared, was neither skill-biased nor unskill-biased, as the old stories had implied. Rather it was task-biased, with machines able to perform certain types of tasks but unable to perform others. This meant that the only workers to benefit from technological change would be those well placed to perform the “non-routine” tasks that machines could not handle. In turn, this explained why certain types of middling-skilled workers might not gain from new technology at all—if they found themselves stuck in jobs made up largely of “routine” tasks that machines could handle with ease.
The point is driven home by a 2017 study carried out by McKinsey & Company, which reviewed 820 different occupations. Fewer than 5 percent of these, they found, could be completely automated with existing technologies. On the other hand, more than 60 percent of the occupations were made up of tasks of which at least 30 percent could be automated.30 In other words, very few jobs could be entirely done by machines, but most could have machines take over at least a significant part of them.
some of today’s greatest pragmatist triumphs have grown out of earlier purist attempts to copy human beings. For instance, many of the most capable machines today rely on what are known as “artificial neural networks,” which were first built decades ago in an attempt to simulate the workings of the human brain.27 Today, though, there is little sense that these networks should be judged according to how closely they imitate human anatomy; instead, they are evaluated entirely pragmatically, according to how well they perform whatever tasks they are set.
In sum, both the theologians and the AI scientists believed that remarkable capabilities could only ever emerge from something that resembled human intelligence. In the words of the philosopher Daniel Dennett, both thought that competence could only emerge from comprehension, that only an intelligent process could create exceptionally capable machines.43 Today, though, we know that the religious scholars were wrong. Humans and human capabilities were not created through the top-down efforts of something more intelligent than us, molding us to look like it. In 1859, Charles Darwin showed that the reverse was true: the creative force was a bottom-up process of unconscious design. Darwin called this “evolution by natural selection,” the simplest account of it only requiring you to accept three things: first, that there are slight variations between living beings; second, that some of these variations might be favorable for their survival; and third, that these variations are passed on to others. There was no need for an intelligent designer, directly shaping events; these three facts alone could explain all appearances of design in the natural world. The variations might be tiny, the advantages ever so slight, but these changes, negligible at any instant, would—if you left the world to run for long enough—accumulate over billions of years to create dazzling complexity. As Darwin put it, even the most “complex organs and instincts” were “perfected, not by means superior to, though analogous with, human reason, but by the accumulation of innumerable slight variations, each good for the individual possessor.”
The pragmatist revolution in AI requires us to make a similar reversal in how we think about where the abilities of man-made machines come from. Today, the most capable systems are not those that are designed in a top-down way by intelligent human beings. In fact, just as Darwin found a century before, remarkable capabilities can emerge gradually from blind, unthinking, bottom-up processes that do not resemble human intelligence at all.
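To make the bottom-up point concrete, here is a minimal sketch (my own illustration, loosely after Richard Dawkins's well-known "weasel" demonstration, not code from the book) in which blind mutation and selection assemble a target phrase with no comprehension anywhere in the loop:

```python
import random

# Hypothetical target, after Dawkins's classic cumulative-selection demo.
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate: str) -> int:
    """Number of positions where the candidate already matches the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent: str, rate: float = 0.05) -> str:
    """Copy the parent, flipping each character with small probability."""
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else c
        for c in parent
    )

# Start from pure noise: no design, no comprehension.
parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generation = 0
while fitness(parent) < len(TARGET):
    offspring = [mutate(parent) for _ in range(100)]   # blind variation
    parent = max(offspring + [parent], key=fitness)    # mindless selection
    generation += 1

print(f"Reached {parent!r} after {generation} generations")
```

Cumulative selection toward a fixed target is of course a simplification: real evolution has no target, and modern machine learning systems optimize far richer objectives. But the demo captures the key reversal: negligible random variations, filtered by a mindless test, accumulate into something that looks designed.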
The ancient Greek poet Archilochus once wrote: “The fox knows many things, but the hedgehog knows one big thing.” Isaiah Berlin, who found this mysterious line in the surviving scraps of Archilochus’s poetry, famously used it as a metaphor to distinguish between two types of human being: people who know a little about a lot (the foxes) and people who know a lot about a little (the hedgehogs).20 In our setting, we can repurpose that metaphor to think about human beings and machines. At the moment, machines are prototypical hedgehogs, each of them designed to be very strong at some extremely specific, narrowly defined task—think of Deep Blue and chess, or AlphaGo and go—but hopeless at performing a range of different tasks. Human beings, on the other hand, are proud foxes, who might now find themselves thrashed by machines at certain undertakings, but can still outperform them at a wide spread of others.
For many AI researchers, the intellectual holy grail is to build machines that are foxes rather than hedgehogs. In their terminology, they want to build an “artificial general intelligence” (AGI), with wide-ranging capabilities, rather than an “artificial narrow intelligence” (ANI), which can only handle very particular assignments.
In short, when thinking about the future of work, we should be wary not of one omnipotent fox, but of an army of industrious hedgehogs.
Economists had thought that to accomplish a task, a computer had to follow explicit rules articulated by a human being—that machine capabilities had to begin with the top-down application of human intelligence. That may have been true in the first wave of AI. But as we have seen, it is no longer the case. Machines can now learn how to perform tasks themselves, deriving their own rules from the bottom up. It does not matter if human beings cannot readily explain how they drive a car or recognize a table; machines no longer need those human explanations. And that means they are able to take on many “non-routine” tasks that were once considered to be out of their reach.
The idea that such machines are uncovering hitherto hidden human rules, plunging deeper into people’s tacit understanding of the world, still supposes that it is human intelligence that underpins machine capability. But that misunderstands how second-wave AI systems operate. Of course, some machines may indeed stumble upon unarticulated human rules, thereby turning “non-routine” tasks into “routine” tasks. But far more significant is that many machines are also now deriving entirely new rules, unrelated to those that human beings follow. This is not a semantic quibble, but a serious shift. Machines are no longer riding on the coattails of human intelligence.
If machines do not need to copy human intelligence to be highly capable, the vast gaps in science’s current understanding of intelligence matter far less than is commonly supposed. We do not need to solve the mysteries of how the brain and mind operate to build machines that can outperform human beings. And if machines do not need to replicate human intelligence to be highly capable, there is no reason to think that what human beings are currently able to do represents a limit on what future machines might accomplish. Yet this is what is commonly supposed—that the intellectual prowess of human beings is as far as machines can ever reach.45 Quite simply, it is implausible in the extreme that this will be the case.
We can think of this general trend, where machines take on more and more tasks that were once performed by people, as “task encroachment.”9 And the best way to see it in action is to look at the three main capabilities that human beings draw on in their work: manual, cognitive, and affective capabilities. Today, each of these is under increasing pressure.
First, take the capabilities of human beings that involve dealing with the physical world, such as performing manual labor and responding to what we see around us. Traditionally, this physical and psychomotor aptitude was put to economic use in agriculture. But over the last few centuries, that sector has become increasingly automated.
That human alternative, not perfection, should be the benchmark for judging the usefulness of these diagnostic machines.
At times, the encroachment of machines on tasks that require cognitive capabilities in human beings can be controversial. Consider the military setting: there are now weapons that can select targets and destroy them without relying on human deliberation. This has triggered a set of United Nations meetings to discuss the rise of so-called “killer robots.”56 Or consider the unsettling field of “synthetic media,” which takes the notion of tweaking images with Photoshop to a whole new level. There are now systems that can generate believable videos of events that never happened—including explicit pornography that the participants never took part in, or inflammatory speeches by public figures that they never delivered. At a time when political life is increasingly contaminated by fake news, the prospects for the misuse of software like this are troubling.
There are systems that can outperform human beings in distinguishing between a genuine smile and one of social conformity, and in differentiating between a face showing real pain and fake pain. And there are also machines that can do more than just read our facial expressions. They can listen to a conversation between a woman and a child and determine whether they are related, and tell from the way a person walks into a room if they are about to do something nefarious.62 Another machine can tell whether a person is lying in court with about 90 percent accuracy—whereas human beings manage about 54 percent, only slightly better than what you might expect from a complete guess.63 Ping An, a Chinese insurance company, uses a system like this to tell whether loan applicants are being dishonest: people are recorded as they answer questions about their income and repayment intentions, and a computer evaluates the video to check whether they are telling the truth.
Economists often label tasks according to the particular capabilities that human beings use to perform them. They talk, for instance, about “manual tasks,” “cognitive tasks,” and “interpersonal tasks,” rather than about “tasks which require manual, cognitive, or interpersonal capabilities when performed by human beings.” But that way of thinking is likely to lead to an underestimation of quite how far machines can encroach in those areas. As we have seen time and again, machines can increasingly perform various tasks without trying to replicate the particular capabilities that human beings happen to use for them. Labeling tasks according to how humans do them encourages us to mistakenly think that machines could only do them in the same way.
The Rise and Fall of American Growth is magisterial and yet, in a sense, self-contradictory. It argues with great care that growth was “not a steady process” in the past, yet it concludes that a steady process is exactly what we face in the future: a steady process of decline in economic growth, with ever fewer unexpected innovative bursts and technological breakthroughs of the kind that drove our economies onward in the past. Given the scale of investment in technology industries today—many of our finest minds, operating in some of the most prosperous institutions—it seems entirely improbable that there will be no more comparable developments in years to come.
The general lesson here is that, in thinking about whether or not it is efficient to use a machine to automate a task, what matters is not only how productive that machine is relative to the human alternative, but also how expensive it is relative to the human alternative. If labor is very cheap in a particular place, it may not make economic sense to use a pricey machine, even if that machine turns out to be very productive indeed.
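The decision rule here can be written in one line (my own rendering of the logic, not a formula from the book): automate a task only when the machine's cost per unit of output undercuts the human's,

$$
\text{automate} \iff \frac{c_M}{q_M} < \frac{w}{q_H},
$$

where $c_M$ is the machine's cost per period, $q_M$ its output per period, $w$ the human wage, and $q_H$ human output per period. Where labor is cheap, $w$ is low, and a machine can be far more productive ($q_M \gg q_H$) yet still fail this test, which is why adoption has varied so much from place to place.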
Perhaps the most interesting implications of relative costs are the international ones. In part, these cost variations between countries can explain why new technologies have been adopted so unevenly around the world in the past. A big puzzle in economic history, for instance, is why the Industrial Revolution was British, rather than, say, French or German. Robert Allen, an economic historian, thinks relative costs are responsible: at the time, the wages paid to British workers were much higher than elsewhere, while British energy prices were very low. Installing new machines that saved on labor and used readily available cheap fuel thus made economic sense in Britain, whereas it did not in other countries.
countries that are aging faster tend to invest more in automation. One study found that a 10 percent increase in the ratio of workers above fifty-six to those between twenty-six and fifty-five was associated with 0.9 more robots per thousand workers.
there is still work to be done by human beings: the problem is that not all workers are able to reach out and take it up.
“Frictions” in the labor market prevent workers from moving freely into whatever jobs might be available. (If we think of the economy as a big machine, it is as if there is sand or grit caught up in its wheels, stopping its smooth running.) Today, there are already places where this is happening. Take men of working age in the United States, for instance. Since World War II, their participation in the labor market has collapsed: one in six are now out of work, more than double the rate of 1940.4 What happened to them? The most compelling answer is that these men fell into frictional technological unemployment. In the past, many of them would have found well-paid work in the manufacturing sector. Yet technological progress means that this sector no longer provides sufficient work for them all to do: in 1950, manufacturing employed about one in three Americans, but today it employs fewer than one in ten.5 Plenty of new jobs have been created in other sectors as the US economy changed and grew—since 1950, it has expanded about fourfold—but critically, many of these displaced men were not able to take up that work. For a variety of reasons, it lay out of their reach.
human beings are likely to find this race with technology ever harder, because its pace is accelerating. Literacy and numeracy are no longer enough to keep up, as they were when workers first made the move from factories to offices at the turn of the twentieth century. Ever higher qualifications are required. Notably, while workers with a college degree have been outperforming those with only a high school education, those with postgraduate qualifications have seen their wages soar far more.
a third of Americans with degrees in STEM subjects (science, technology, engineering, and math) are now in roles that do not require those qualifications.18 And when economists took all the jobs performed by US college graduates and examined the tasks that make them up, they found a collapse in the “cognitive task intensity” of these roles from 2000 onward—a “great reversal in the demand for skills.”
There is a common fantasy that technological progress must make work more interesting—that machines will take on the unfulfilling, boring, dull tasks, leaving behind only meaningful things for people to do. They will free us up, it is often said, to “do what really makes us human.” (The thought is fossilized in the very language we use to talk about automation: the word robot comes from the Czech robota, meaning drudgery or toil.) But this is a misconception. We can already see that a lot of the tasks that technological progress has left for human beings to do today are the “non-routine” ones clustered in poorly paid roles at the bottom of the labor market, bearing little resemblance to the sorts of fulfilling activities that many imagined as being untouched by automation. There is no reason to think the future will be any different.
On the face of it, Americans appear to be remarkably mobile: about half of households change their address every five years, and the proportion of people living in a different state from the one where they were born has risen to one-third.32 But there are two important caveats. First, this is not the case everywhere. Europeans, for instance, are far less mobile: 88.3 percent of Italian men aged between sixteen and twenty-nine still live at home.33 And second, those who do move tend to be better educated as well. In the United States, almost half of college graduates move out of their birth states by the time they are thirty, but only 17 percent of high school dropouts do so.
Some workers, rather than dropping out of the labor market because they lack the right skills, dislike the available jobs, or live in the wrong place, will instead pursue whatever work does remain for them to do. And when this happens—where workers find themselves stranded in a particular corner of the labor market but still want a job—the outcome will not be technological unemployment, with people unable to find work at all, but a sort of technological overcrowding, with people packing into a residual pool of whatever work remains within their reach. Rather than directly cause a rise in joblessness, this could have three harmful effects on the nature of the work. The first is that, as people crowd in, there will be downward pressure on wages. Curiously, whereas technological unemployment is a controversial idea in economics, such downward pressure is widely accepted.36 At times it can be puzzling that economists tend to make such a hard distinction between no work and lower-paid work. The two are treated as unrelated phenomena—the former regarded as impossible, the latter as entirely plausible. In practice, the relationship between the two is far less straightforward. It seems reasonable to think that as more people jostle for whatever work remains for them to do, wages will fall. It also seems reasonable to think that these wages might fall so low in whatever corner of the labor market a worker is confined to that it will no longer be worth their while to take up that work at all. If that happens, the two phenomena become one. This is not an unlikely possibility: in 2016, 7.6 million Americans—about 5 percent of the US workforce—who spent at least twenty-seven weeks of the year in the labor force still remained below the poverty line.
It is sometimes said, in a positive spirit, that new technologies make it easier for people to work flexibly, to start up businesses, become self-employed, and to have a more varied career than their parents or grandparents. That may be true. But for many, this “flexibility” feels more like instability. A third of the people who are on temporary contracts in the UK, for instance, would prefer a permanent arrangement; almost half on zero-hour contracts want more regular work and job security.
Parts of our economic life already feel two-tiered in the way that Meade imagined: many of those fast-growing jobs in Figure 6.2, for instance, from retail sales to restaurant serving, involve the provision of low-paid services to the wealthy. But these "hangers-on" need not all be "immiserated," as Meade expected. In rich corners of cities like London and New York it is possible to find odd economic ecosystems full of strange but reasonably well-paid roles that rely almost entirely on the patronage of the most prosperous in society: bespoke spoon carvers and children's playdate consultants, elite personal trainers and star yoga instructors, craft chocolatiers and artisanal cheesemakers. The economist Tyler Cowen put it well when he imagined that "making high earners feel better in just about every part of their lives will be a major source of job growth in the future."41 What is emerging is not just an economic division, where some earn much more than others, but a status division as well, between those who are rich and those who serve them.
As task encroachment continues, human capabilities will become irrelevant in this fashion for more and more tasks. Take sat-nav systems. Today these make it easier for taxi drivers to navigate unfamiliar roads, making them better at the wheel. At the moment, therefore, they complement human beings. But this will only be true as long as human beings are better placed than machines to steer a vehicle from A to B. In the coming years, this will no longer be the case: eventually, software is likely to drive cars more efficiently and safely than human beings can. At that point, it will no longer matter how good people are at driving: for commercial purposes, that ability will be as amusingly quaint as our productivity at hand-fashioning candles or cotton thread.
Kasparov’s experiences in chess led him to declare that “human plus machine” partnerships are the winning formula not only in chess, but across the entire economy.8 This is a view held by many others as well. But AlphaZero’s victory shows that this is wrong. Human plus machine is stronger only as long as the machine in any partnership cannot do whatever it is that the human being brings to the table. But as machines become more capable, the range of contributions made by human beings diminishes, until partnerships like these eventually just dissolve. The “human” in “human plus machine” becomes redundant.
We live in the Age of Labor, and if new tasks have to be done it is likely that human beings will be better placed to do them. But as task encroachment continues, it becomes more and more likely that a machine will be better placed instead. And as that happens, a growing demand for goods may mean not more demand for the work of human beings, but merely more demand for machines.
It is true that people in the future are likely to have different wants and needs than we do, perhaps even to demand things that are unimaginable to us today. (In the words of Steve Jobs, “consumers don’t know what they want until we’ve shown them.”)15 Yet it is not necessarily true that this will lead to a greater demand for the work of human beings. Again, this will only be the case if human beings are better placed than machines to perform the tasks that have to be done to produce those goods. As task encroachment continues, though, it becomes more and more likely that changes in demand for goods will not turn out to be a boost in demand for the work of human beings, but of machines.
We imagine that when human beings become more productive at a task, they will be better placed than a machine to perform it; that when the economic pie gets bigger, human beings will be better placed to perform the freshly in-demand tasks; that when the economic pie changes, human beings will be better placed to carry out whatever new tasks have to be done.
There is a fallacy that people are often accused of committing when they seem to forget about the helpful side of technological progress, the complementing force.28 The idea is an old one, first identified back in 1892 by David Schloss, a British economist.29 Schloss was taken aback when he came across a worker who had begun to use a machine to make washers, the small metal discs used when tightening screws, and who appeared to feel guilty about being more productive. When asked why he felt that way, the worker replied: "I know I am doing wrong. I am taking away the work of another man." Schloss came to see this as a typical attitude among workmen of the time. It was, he wrote, a belief "firmly entertained by a large section of our working-classes, that for a man … to do his level best—is inconsistent … with loyalty to the cause of labour." He called this the "theory of the Lump of Labour": it held "that there is a certain fixed amount of work to be done, and that it is best, in the interests of the workmen, that each man shall take care not to do too much work, in order that thus the Lump of Labour may be spread out thin over the whole body of workpeople."30 Schloss called this way of thinking "a noteworthy fallacy." The error with it, he pointed out, is that the "lump of work" is in fact not fixed. As the worker became more productive, and the price of the washers made by him fell, demand for them would increase. The lump of work to be divided up would get bigger, and there would actually be more for his colleagues to do. Today, this fallacy is cited in discussions about all types of work. In its most general terms, it is used to argue that there is no fixed lump of work in the economy to be divided up between people and machines; instead, technological progress raises the demand for work performed by everyone in the economy. In other words, it is a version of the point that economists make about the two fundamental forces of technological progress: machines may substitute for workers, leaving less of the original "lump of work" for human beings, but they complement workers as well, increasing the size of the "lump of work" in the economy overall.
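Schloss's rebuttal can be put in a few lines of hypothetical arithmetic (my numbers, purely for illustration). Suppose the machine doubles a washer-maker's daily output from 1,000 to 2,000 pieces and competition halves the price. Workers needed equal quantity demanded divided by output per worker:

$$
L = \frac{Q}{q}: \qquad \underbrace{\frac{10{,}000}{1{,}000} = 10}_{\text{before}} \;\longrightarrow\; \underbrace{\frac{25{,}000}{2{,}000} = 12.5}_{\text{demand responds strongly}} \quad\text{or}\quad \underbrace{\frac{12{,}000}{2{,}000} = 6}_{\text{demand responds weakly}}
$$

If the lower price lifts demand from 10,000 to 25,000 washers a day, employment rises from 10 to 12 or 13 workers; if demand only reaches 12,000, it falls to 6. The lump of work is not fixed, but whether it grows fast enough to absorb displaced workers is an empirical question, which is precisely where the next passage picks up.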
It may be right that technological progress increases the overall demand for work. But it is wrong to think that human beings will necessarily be better placed to perform the tasks that are involved in meeting that demand. The lump of labor fallacy involves mistakenly assuming that the lump of work is fixed. But the LOLFF, the lump of labor fallacy fallacy, involves mistakenly assuming that the growth in the lump of work has to involve tasks that human beings—not machines—are best placed to perform.
On average, one more robot per thousand workers meant about 5.6 fewer jobs in the entire economy, and wages that were about 0.5 percent lower across the whole economy as well. And all this was happening in 2007, more than a decade ago, before most of the technological advances described in the preceding pages.
hunter-gatherers did not pursue solitary lives of the kind that Rousseau imagined. Instead, they lived together in tribes that sometimes numbered a few hundred people, sharing the literal fruits (and meats) of their labor within their band of fellow foragers—some of whom, inevitably, were more successful in their foraging efforts than others.4 There is no forest that lets human beings retreat into perfect solitude and self-sufficiency, nor has there ever been. All human societies, small and large, simple and complex, poor and affluent, have had to figure out how best to share their unevenly allocated prosperity with one another.
technological progress does have a role in making the economic pie bigger, but the growing power of these supermanagers also allows them to take a much bigger slice of it. Forty years ago, the CEOs of America’s largest firms earned about 28 times more than an average worker; by 2000, that ratio stood at an astounding 376 times.
labor income accounting for about two-thirds of the pie and income from traditional capital making up the remaining third.27 Keynes called this "one of the most surprising, yet best-established, facts in the whole range of economic statistics" and "a bit of a miracle." Nicholas Kaldor, one of the giants of early work on economic growth, included this phenomenon among his six "stylized facts." Just as mathematicians build up their arguments from indubitable axioms, he believed, so economists should build their stories around these six unchanging facts—and they did. The most popular equation in economics dealing with how inputs combine to produce outputs, the Cobb-Douglas production function, is built around the assumption that these factor shares stay fixed.
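The function itself makes the fixed-slices point immediately (standard notation, not reproduced from the book):

$$
Y = A\,K^{\alpha}L^{1-\alpha}, \qquad 0 < \alpha < 1,
$$

where $Y$ is output, $K$ capital, $L$ labor, and $A$ technology. If each factor is paid its marginal product, labor earns $w = \partial Y / \partial L = (1-\alpha)\,Y/L$, so labor's slice of the pie is

$$
\frac{wL}{Y} = 1 - \alpha,
$$

a constant, no matter what happens to $A$ or $K$. With $\alpha \approx 1/3$, that slice is the familiar two-thirds. A persistently falling labor share is therefore evidence against the very assumption the canonical production function was built on.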
In the two decades since 1995, across twenty-four countries, productivity rose on average by 30 percent, but pay by only 16 percent.31 Instead of going to workers, the extra income has increasingly gone to owners of traditional capital. This "decoupling" of productivity and pay, as it is sometimes known, is particularly clear in the United States, as seen in Figure 8.6. Until the early 1970s, productivity and pay in the United States were almost perfect twins, growing at a similar rate. But as time went on, the former continued upward while the latter stalled, causing them to diverge.
The OECD is quoted as saying that technology was directly responsible for up to 80 percent of the decline in the labor share from 1990 to 2007, encouraging firms to shift toward using more traditional capital relative to labor.33 The IMF puts it at a more modest 50 percent in developed economies over a slightly longer period, a finding that fits with the work of other economists.34 But once you look at the explanations offered by the IMF for the rest of the decline, technological progress often has a role to play there as well. Part of this decline in the labor share, for instance, is thought to be explained by globalization, the increasingly free movement of goods, services, and capital around the world. The IMF believes that this explains another 25 percent.35 But what is actually responsible for this globalization? Technological progress, in large part. After all, it is falling transportation and communication costs that have made globalization possible.
beneath the headline story of growing inequality around the world lie three distinct trends. First, human capital is less and less evenly distributed, with people’s different skills getting rewarded to very different degrees; the part of the economic pie that goes to workers as a wage is being served out in an increasingly imbalanced way. Second, human capital is becoming less and less valuable relative to traditional capital; that part of the pie that goes to workers as a wage is also shrinking relative to the part that goes to owners of traditional capital. And third, traditional capital itself is distributed in an extraordinarily uneven fashion, an inequality that has been growing more and more pronounced in recent decades.
Inequality, then, is not inevitable. And the same is true for the economic imbalances that technological unemployment would bring about. We have the power to shape and constrain these economic divisions—if we want to.
a college degree in the United States has an average annual return of more than 15 percent, leaving stocks (about 7 percent) and bonds, gold, and real estate (less than 3 percent) trailing far behind.3 Education also does more than just help individuals: it is responsible for thrusting entire economies forward as well.
All we really know with any confidence is that machines will be able to do more in the future than they can today. Unfortunately, this is not particularly useful for deciding what people should be learning to do. But that uncertainty is unavoidable. And so we are left with just our simple rule for the moment: do not prepare people for tasks that we know machines can already do better, or activities that we can reasonably predict will be done better by machines very soon.
People will have to grow comfortable with moving in and out of education, repeatedly, throughout their lives. In part, we will have to constantly reeducate ourselves because technological progress will force us to take on new roles, and we will need to train for them. But we will also need to do it because it is nearly impossible right now to predict exactly what those roles will be. In that sense, embracing lifelong learning is a way of insuring ourselves against the unknowable demands that the working world of the future might make on us.
The question of whether universities are “just selecting for talented people who would have done well anyway … isn’t analyzed very carefully,” Thiel complains.24 In fact, though, many economists have spent large portions of their lives thinking specifically about this issue. The problem is so popular that it has its own name: “ability bias,” a particular case of what’s known in econometrics as “omitted variable bias.” (In this case, the omitted variable is a person’s innate ability: if higher-ability people are more likely than others to go to university in the first place, then attributing their greater financial success to their education alone leaves out a significant part of the story.) Economists have developed a tool kit of techniques to address this omission, and their sense—contrary to Thiel’s—is that even once ability bias is accounted for, universities still appear to have a positive impact. Talented people might earn more than others in any case, but education helps them earn even more than they would otherwise.
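Econometrics makes the complaint, and the response, precise (a textbook formula, not taken from the book). Suppose earnings truly depend on both schooling and unobserved ability:

$$
y = \beta_0 + \beta_1\,\mathit{educ} + \beta_2\,\mathit{abil} + u.
$$

Regressing $y$ on education alone yields an estimate that converges not to $\beta_1$ but to $\beta_1 + \beta_2\delta$, where $\delta$ is the slope from regressing ability on education. If abler people get more schooling ($\delta > 0$) and ability independently raises earnings ($\beta_2 > 0$), the naive estimate overstates the true return, which is Thiel's worry. The economists' finding is that even after stripping out this upward bias, with instruments or natural experiments, $\beta_1$ remains solidly positive.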
It would be nice to think that as human beings we are all infinitely malleable, entirely capable of learning whatever it is that is required of us. And you might argue that the difficulty of education is no reason to avoid it. After all, did President Kennedy not say that we do important things “not because they are easy, but because they are hard”?28 The thrust of Kennedy’s comment may be right. But we have to temper our idealism with realism. If “hard” turns out to mean impossible, then inspirational rallying cries to reeducate and retrain are not helpful.
In calling for a Big State, however, I mean something different: not using the state to make the pie bigger, as the planners tried and failed to do, but rather to make sure that everyone gets a slice. Put another way, the role for the Big State is not in production but in distribution.
A more practical difficulty is that the idea of taxing traditional capital is very ambiguous, far more so than taxing labor. Recently, public discussion has veered toward so-called robot taxes. Bill Gates is partly responsible for this, having caused a stir with his views on the subject. “Right now, the human worker who does, say, $50,000 worth of work in a factory, that income is taxed,” he said in a recent interview. “If a robot comes in to do the same thing, you’d think that we’d tax the robot at a similar level.”
And perhaps most important, we must remember that technological progress (of which robots are a part) drives economic growth—it makes the economic pie bigger in the first place. That is why Larry Summers calls the robot tax “protectionism against progress.”17 A robot tax might mean fewer robots and more workers, but it might also mean a smaller pie as well.
The wide range of support for the UBI disguises the fact that key details of it are subject to uncertainty and disagreement. For instance, how are payments made? UBI supporters often argue that payment in cash is a “fundamental” part of their proposal, but in practice there are other reasonable ways to make people more prosperous.34 One approach, for instance, is to make important things available in society at no cost: rather than just give people cash, the state in effect makes certain purchases on their behalf. Already in the United States, about forty million people use the Supplemental Nutrition Assistance Program, or “food stamps,” to receive basic sustenance for free, worth about $1,500 a year.35 In England, health care and primary and secondary education are free for everyone who wants them, each worth thousands of pounds per year.36 Add up such initiatives, and you end up with a sort of UBI—though one that the state has already spent for you. And if the income payments do get made in cash, how generous should they be? The UBI says “basic.” But what does that mean? Some economists think it implies a minimal payment, not very much at all. John Kenneth Galbraith, for instance, said that introducing “a minimum income essential for decency and comfort” is the right thing to do.37 Friedrich Hayek similarly spoke of “a certain minimum income for everyone.”38 Today’s prominent UBI advocates often agree. Annie Lowrey, author of Give People Money, makes the case for “just enough to live on and not more”; Chris Hughes, author of Fair Shot, argues for $500 a month.39 But there are others who feel differently. Philippe Van Parijs, today’s leading UBI scholar, wants to use UBI to build a “truly free” society, where people are not tied down by what they earn. That is a far loftier goal than what is envisaged by Galbraith and Hayek—and a far more expensive one, too.
As we approach a world with less work, this sort of struggle over who counts as a member of the community will intensify. The Native American experience shows that dealing with questions of citizenship is likely to be fractious. The instinct in some tribes was to pull up the drawbridge—a reaction we can see in other settings, too. Consider the financial crisis in 2007 and its aftermath. As economic life got harder, the rhetoric toward immigrants in many countries hardened as well: they were said to be “taking our jobs,” “running down our public services.” There was a collective impulse to narrow the boundaries of the community, to restrict membership, to tighten the meaning of ours. In much the same way, support for so-called welfare chauvinism—a more generous welfare state, made available to fewer people—is on the rise. In Europe, for example, a survey found “both rising support for redistribution for ‘natives’ and sharp opposition to migration and automatic access to benefits for new arrivals.”
UBI advocates argue that universal payments remove any stigma associated with claiming support. If everyone receives the payments, nobody can be labeled by society as a “scrounger” and no individual will feel ashamed to have to claim theirs. As Van Parijs puts it, “There is nothing humiliating about benefits given to all as a matter of citizenship.”
The UBI fails to take account of these responses. It solves the distribution problem, providing a way to share out material prosperity more evenly; but it ignores this contribution problem, the need to make sure that everyone feels their fellow citizens are in some way giving back to society. As the political theorist Jon Elster put it, the UBI “goes against a widely accepted notion of justice: it is unfair for able-bodied people to live off the labor of others. Most workers would, correctly in my opinion, see the proposal as a recipe for exploitation of the industrious by the lazy.”
There are two reasons why sharing out capital might be attractive. The first is that it would reduce the need for the Big State to act as an income-sharing state. If more people owned valuable capital, income would flow more evenly across society of its own accord. The second reason is that such sharing would also help to narrow economic divisions in society. If the underlying distribution of capital stays the same, and the state only shares out income, then profound economic imbalances will remain. If left unresolved, such divisions could turn into noneconomic strife: ruptures of class and power, differences in status and respect.56 By sharing out valuable capital, and directly attacking the economic imbalances, the state could try to stop this from happening.
the state’s labor-supporting efforts should be focused primarily on changing the actual incentives that employers face, forcing closer alignment between their interests and those of the society of which they are a part.
For Schumpeter, economics was all about innovation. He called it the “outstanding fact in the economic history of capitalist society.” His argument for monopolies is that, were it not for the prospect of handsome profits in the future, no entrepreneur would bother to innovate in the first place. Developing a successful new product comes at a serious cost, in both effort and expense, and the possibility of securing monopoly power is the main motivator for trying at all. It acts as the “baits that lure capital on to untried trails.”22 Moreover, monopoly profits are not simply a consequence of innovation, but a means of funding further innovation. Substantial research and development very often draws on the deep pockets established by a company’s past commercial successes.
In the twentieth century, our main preoccupation was with the economic power of large companies. But in the twenty-first, we will increasingly have to worry about their political power as well.
From this viewpoint, the threat of technological unemployment has another face to it. It will deprive people not only of income, but also of significance; it will hollow out not just the labor market, but also the sense of purpose in many people’s lives.1 In a world with less work, we will face a problem that has little to do with economics at all: how to find meaning in life when a major source of it disappears.
Take Alfred Marshall, another giant of economic history. He proclaimed that “man rapidly degenerates unless he has some hard work to do, some difficulties to overcome,” and that “some strenuous exertion is necessary for physical and moral health.” To him, work was not simply about an income, but the way to achieve “the fullness of life.”3
Jahoda and her colleagues wanted to know what the impact of such widespread worklessness would be. Their methods were unconventional: to collect data on residents without making them realize they were being watched, the researchers embedded themselves in everyday village life. (Their various enterprises included a clothes cleaning and repair service, parent support classes, a free medical clinic, and courses in pattern design and gymnastics.) What they found was striking: growing apathy, a loss of direction in life, and increasing ill will to others. People borrowed fewer library books: 3.23 books on average per resident in 1929, but only 1.6 in 1931. They dropped out of political parties and stopped turning up to cultural events: in only a few years, the athletic club saw membership fall by 52 percent and the glee club by 62 percent. Unemployment benefits required that claimants do no informal work; in those years, Marienthal saw a threefold increase in anonymous denunciations of others for breaking that rule, yet almost no change at all in the total number of complaints that were judged well-founded. Researchers watching at a street corner even noted a physical change: men without work walked more slowly in the street and stopped more frequently.
Work matters not just for a worker’s own sense of meaning; it has an important social dimension as well, allowing people to show others that they live a purposeful life, and offering them a chance to gain status and social esteem.
For those with a job, the connection between work and meaning is wonderful: in return for their efforts, they get both an income and a sense of purpose. But for the unemployed, this link may become instead a source of further discomfort and distress. If work offers a path toward a meaningful life, the jobless may feel that their existence is meaningless; if work provides status and social esteem, they may feel out of place and deflated. This may partly explain why the unemployed often feel depressed and shamed, and why their suicide rate is about two and a half times the rate of those in work.10 A prevailing political philosophy of our time, the idea of meritocracy, does little to help.11 This is the notion that work goes to those who somehow deserve it, due to their talents or effort. Yet if work signifies merit, then those without it might feel meritless. Michael Sandel once quipped that in feudal times, at least those at the top knew that their economic fortunes were a fluke of birth, the simple brute luck of being born into the right family—whereas today, the most fortunate imagine they actually merit their positions, that being born with the right talents and abilities (and, often, supportive and prosperous parents) has nothing to do with luck at all.12 An unpleasant corollary is that the less fortunate now often think they merit their bad luck as well.
Aristotle, likewise, wrote that “citizens must not lead the life of artisans or tradesmen, for such a life is ignoble and inimical to excellence.”22 He believed that meaning could only come through leisure, and that the only purpose of work is to pay for leisure time: “We work in order to enjoy leisure, just as we make war in order to enjoy peace.”23 In fact, the Greek word for “work,” ascholia, literally means “the absence of leisure,” schole; for the Greeks, leisure came first, the opposite from how many think today.24
Work is a source of meaning for some people at the moment not because work itself is special, but because our jobs are where we spend the majority of our lives. We can only find meaning in what we actually do—and freed up to spend our lives differently, we will find meaning elsewhere instead.
Today, that sense of value is overwhelmingly shaped by the market mechanism: a thing’s value is the price that someone is willing to pay for it, and a worker’s worth is the wage that they receive. For all its flaws, there is something extraordinary about the inexorable simplifying power of this mechanism. In the white heat of the market, the clash between people’s infinite desires and the hard reality of satisfying them gets boiled down to a single number: a price.