Previous technological innovation has always delivered more long-run employment, not less. But things can change.
IN 1930, when the world was “suffering…from a bad attack of economic pessimism”, John Maynard Keynes wrote a broadly optimistic essay, “Economic Possibilities for our Grandchildren”. It imagined a middle way between revolution and stagnation that would leave the said grandchildren a great deal richer than their grandparents. But the path was not without dangers.
One of the worries Keynes admitted was a “new disease”: “technological unemployment…due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour.” His readers might not have heard of the problem, he suggested—but they were certain to hear a lot more about it in the years to come.
For the most part, they did not. Nowadays, the majority of economists confidently wave such worries away. By raising productivity, they argue, any automation which economises on the use of labour will increase incomes. That will generate demand for new products and services, which will in turn create new jobs for displaced workers. To think otherwise has meant being tarred a Luddite—the name taken by 19th-century textile workers who smashed the machines taking their jobs.
For much of the 20th century, those arguing that technology brought ever more jobs and prosperity looked to have the better of the debate. Real incomes in Britain scarcely doubled between the beginning of the common era and 1570. They then tripled from 1570 to 1875. And they more than tripled from 1875 to 1975. Industrialisation did not end up eliminating the need for human workers. On the contrary, it created employment opportunities sufficient to soak up the 20th century’s exploding population. Keynes’s vision of everyone in the 2030s being a lot richer is largely achieved. His belief they would work just 15 hours or so a week has not come to pass.
When the sleeper wakes
Yet some now fear that a new era of automation enabled by ever more powerful and capable computers could work out differently. They start from the observation that, across the rich world, all is far from well in the world of work. The essence of what they see as a work crisis is that in rich countries the wages of the typical worker, adjusted for cost of living, are stagnant. In America the real wage has hardly budged over the past four decades. Even in places like Britain and Germany, where employment is touching new highs, wages have been flat for a decade. Recent research suggests that this is because substituting capital for labour through automation is increasingly attractive; as a result owners of capital have captured ever more of the world’s income since the 1980s, while the share going to labour has fallen.
At the same time, even in relatively egalitarian places like Sweden, inequality among the employed has risen sharply, with the share going to the highest earners soaring. For those not in the elite, argues David Graeber, an anthropologist at the London School of Economics, much of modern labour consists of stultifying “bullshit jobs”—low- and mid-level screen-sitting that serves simply to occupy workers for whom the economy no longer has much use. Keeping them employed, Mr Graeber argues, is not an economic choice; it is something the ruling class does to keep control over the lives of others.
Be that as it may, drudgery may soon enough give way to frank unemployment. There is already a long-term trend towards lower levels of employment in some rich countries. The proportion of American adults participating in the labour force recently hit its lowest level since 1978, and although some of that is due to the effects of ageing, some is not. In a recent speech that was modelled in part on Keynes’s “Possibilities”, Larry Summers, a former American treasury secretary, looked at employment trends among American men between 25 and 54. In the 1960s only one in 20 of those men was not working. According to Mr Summers’s extrapolations, in ten years the number could be one in seven.
This is one indication, Mr Summers says, that technical change is increasingly taking the form of “capital that effectively substitutes for labour”. There may be a lot more for such capital to do in the near future. A 2013 paper by Carl Benedikt Frey and Michael Osborne, of the University of Oxford, argued that jobs are at high risk of being automated in 47% of the occupational categories into which work is customarily sorted. That includes accountancy, legal work, technical writing and a lot of other white-collar occupations.
Answering the question of whether such automation could lead to prolonged pain for workers means taking a close look at past experience, theory and technological trends. The picture suggested by this evidence is a complex one. It is also more worrying than many economists and politicians have been prepared to admit.
The lathe of heaven
Economists take the relationship between innovation and higher living standards for granted in part because they believe history justifies such a view. Industrialisation clearly led to enormous rises in incomes and living standards over the long run. Yet the road to riches was rockier than is often appreciated.
In 1500 an estimated 75% of the British labour force toiled in agriculture. By 1800 that figure had fallen to 35%. When the shift to manufacturing got under way during the 18th century it was overwhelmingly done at small scale, either within the home or in a small workshop; employment in a large factory was a rarity. By the end of the 19th century huge plants in massive industrial cities were the norm. The great shift was made possible by automation and steam engines.
Industrial firms combined human labour with big, expensive capital equipment. To maximise the output of that costly machinery, factory owners reorganised the processes of production. Workers were given one or a few repetitive tasks, often making components of finished products rather than whole pieces. Bosses imposed a tight schedule and strict worker discipline to keep up the productive pace. The Industrial Revolution was not simply a matter of replacing muscle with steam; it was a matter of reshaping jobs themselves into the sort of precisely defined components that steam-driven machinery needed—cogs in a factory system.
The way old jobs were done changed; new jobs were created. Joel Mokyr, an economic historian at Northwestern University in Illinois, argues that the more intricate machines, techniques and supply chains of the period all required careful tending. The workers who provided that care were well rewarded. As research by Lawrence Katz, of Harvard University, and Robert Margo, of Boston University, shows, employment in manufacturing “hollowed out”. As employment grew for highly skilled workers and unskilled workers, craft workers lost out. This was the loss to which the Luddites, understandably if not effectively, took exception.
With the low-skilled workers far more numerous, at least to begin with, the lot of the average worker during the early part of this great industrial and social upheaval was not a happy one. As Mr Mokyr notes, “life did not improve all that much between 1750 and 1850.” For 60 years, from 1770 to 1830, growth in British wages, adjusted for inflation, was imperceptible because productivity growth was restricted to a few industries. Not until the late 19th century, when the gains had spread across the whole economy, did wages at last perform in line with productivity.
Along with social reforms and new political movements that gave voice to the workers, this faster wage growth helped spread the benefits of industrialisation across wider segments of the population. New investments in education provided a supply of workers for the more skilled jobs that were by then being created in ever greater numbers. This shift continued into the 20th century as post-secondary education became increasingly common.
Claudia Goldin, an economist at Harvard University, and Mr Katz have written that workers were in a “race between education and technology” during this period, and for the most part they won. Even so, it was not until the “golden age” after the second world war that workers in the rich world secured real prosperity, and a large, property-owning middle class came to dominate politics. At the same time communism, a legacy of industrialisation’s harsh early era, kept hundreds of millions of people around the world in poverty, and the effects of the imperialism driven by European industrialisation continued to be felt by billions.
The impacts of technological change take their time appearing. They also vary hugely from industry to industry. Although in many simple economic models technology pairs neatly with capital and labour to produce output, in practice technological changes do not affect all workers the same way. Some find that their skills are complementary to new technologies. Others find themselves out of work.
Take computers. In the early 20th century a “computer” was a worker, or a room of workers, doing mathematical calculations by hand, often with the end point of one person’s work the starting point for the next. The development of mechanical and electronic computing rendered these arrangements obsolete. But in time it greatly increased the productivity of those who used the new computers in their work.
Many other technical innovations had similar effects. New machinery displaced handicraft producers across numerous industries, from textiles to metalworking. At the same time it enabled vastly more output per person than craft producers could ever manage.
Player piano
For a task to be replaced by a machine, it helps a great deal if, like the work of human computers, it is already highly routine. Hence the demise of production-line jobs and some sorts of book-keeping, lost to the robot and the spreadsheet. Meanwhile work less easily broken down into a series of stereotyped tasks—whether rewarding, as the management of other workers and the teaching of toddlers can be, or more of a grind, like tidying and cleaning messy work places—has grown as a share of total employment.
But the “race” aspect of technological change means that such workers cannot rest on their pay packets. Firms are constantly experimenting with new technologies and production processes. Experimentation with different techniques and business models requires flexibility, which is one critical advantage of a human worker. Yet over time, as best practices are worked out and then codified, it becomes easier to break production down into routine components, then automate those components as technology allows.
If, that is, automation makes sense. As David Autor, an economist at the Massachusetts Institute of Technology (MIT), points out in a 2013 paper, the mere fact that a job can be automated does not mean that it will be; relative costs also matter. When Nissan produces cars in Japan, he notes, it relies heavily on robots. At plants in India, by contrast, the firm relies more heavily on cheap local labour.
Even when machine capabilities are rapidly improving, it can make sense instead to seek out ever cheaper supplies of increasingly skilled labour. Thus since the 1980s (a time when, in America, the trend towards post-secondary education levelled off) workers there and elsewhere have found themselves facing increased competition from both machines and cheap emerging-market workers.
Such processes have steadily and relentlessly squeezed labour out of the manufacturing sector in most rich economies. The share of American employment in manufacturing has declined sharply since the 1950s, from almost 30% to less than 10%. At the same time, jobs in services soared, from less than 50% of employment to almost 70%. It was inevitable, therefore, that firms would start to apply the same experimentation and reorganisation to service industries.
A new wave of technological progress may dramatically accelerate this automation of brain-work. Evidence is mounting that rapid technological progress, which accounted for the long era of rapid productivity growth from the 19th century to the 1970s, is back. The sort of advances that allow people to put in their pocket a computer that is not only more powerful than any in the world 20 years ago, but also has far better software and far greater access to useful data, as well as to other people and machines, have implications for all sorts of work.
The case for a highly disruptive period of economic growth is made by Erik Brynjolfsson and Andrew McAfee, professors at MIT, in “The Second Machine Age”, a book to be published later this month. Like the first great era of industrialisation, they argue, it should deliver enormous benefits—but not without a period of disorienting and uncomfortable change. Their argument rests on an underappreciated aspect of the exponential growth in chip processing speed, memory capacity and other computer metrics: that the amount of progress computers will make in the next few years is always equal to the progress they have made since the very beginning. Mr Brynjolfsson and Mr McAfee reckon that the main bottleneck on innovation is the time it takes society to sort through the many combinations and permutations of new technologies and business models.
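Their point about exponential growth can be made concrete with a toy calculation (the numbers below are illustrative only, not drawn from the book): if a capability doubles every period, the gain made in the latest period alone rivals everything accumulated since the start.

```python
# Hypothetical doubling process, in the spirit of Moore's law.
capability = 1.0
history = [capability]
for period in range(10):
    capability *= 2          # capability doubles each period
    history.append(capability)

# Progress made in the most recent period vs. all progress before it:
latest_gain = history[-1] - history[-2]    # 1024 - 512 = 512
all_prior_gain = history[-2] - history[0]  # 512 - 1 = 511

# The single latest period contributed more than the previous nine combined.
print(latest_gain, all_prior_gain)  # 512.0 511.0
```

On these assumptions the pattern holds at every step, which is why, on the Brynjolfsson-McAfee view, the near future always contains as much change as the entire past.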
A startling progression of inventions seems to bear their thesis out. Ten years ago technologically minded economists pointed to driving cars in traffic as the sort of human accomplishment that computers were highly unlikely to master. Now that Google cars are rolling round California driver-free, no one doubts such mastery is possible, though the speed at which fully self-driving cars will come to market remains hard to guess.
Brave new world
Even after computers beat grandmasters at chess (once thought highly unlikely), nobody thought they could take on people at free-form games played in natural language. Then Watson, a pattern-recognising supercomputer developed by IBM, bested the best human competitors in America’s popular and syntactically tricksy general-knowledge quiz show “Jeopardy!” Versions of Watson are being marketed to firms across a range of industries to help with all sorts of pattern-recognition problems. Its acumen will grow, and its costs fall, as firms learn to harness its abilities.
The machines are not just cleverer, they also have access to far more data. The combination of big data and smart machines will take over some occupations wholesale; in others it will allow firms to do more with fewer workers. Text-mining programs will displace professional jobs in legal services. Biopsies will be analysed more efficiently by image-processing software than by lab technicians. Accountants may follow travel agents and tellers into the unemployment line as tax software improves. Machines are already turning basic sports results and financial data into good-enough news stories.
Jobs that are not easily automated may still be transformed. New data-processing technology could break “cognitive” jobs down into smaller and smaller tasks. As well as opening the way to eventual automation this could reduce the satisfaction from such work, just as the satisfaction of making things was reduced by deskilling and interchangeable parts in the 19th century. If such jobs persist, they may engage Mr Graeber’s “bullshit” detector.
Being newly able to do brain work will not stop computers from doing ever more formerly manual labour; it will make them better at it. The designers of the latest generation of industrial robots talk about their creations as helping workers rather than replacing them; but there is little doubt that the technology will be able to do a bit of both—probably more than a bit. A taxi driver will be a rarity in many places by the 2030s or 2040s. That sounds like bad news for journalists who rely on that most reliable source of local knowledge and prejudice—but will there be many journalists left to care? Will there be airline pilots? Or traffic cops? Or soldiers?
There will still be jobs. Even Mr Frey and Mr Osborne, whose research speaks of 47% of job categories being open to automation within two decades, accept that some jobs—especially those currently associated with high levels of education and high wages—will survive (see table). Tyler Cowen, an economist at George Mason University and a much-read blogger, writes in his most recent book, “Average is Over”, that rich economies seem to be bifurcating into a small group of workers with skills highly complementary with machine intelligence, for whom he has high hopes, and the rest, for whom not so much.
And although Mr Brynjolfsson and Mr McAfee rightly point out that developing the business models which make the best use of new technologies will involve trial and error and human flexibility, it is also the case that the second machine age will make such trial and error easier. It will be shockingly easy to launch a startup, bring a new product to market and sell to billions of global consumers (see article). Those who create or invest in blockbuster ideas may earn unprecedented returns as a result.
In a forthcoming book Thomas Piketty, an economist at the Paris School of Economics, argues along similar lines that America may be pioneering a hyper-unequal economic model in which a top 1% of capital-owners and “supermanagers” grab a growing share of national income and accumulate an increasing concentration of national wealth. The rise of the middle class—a 20th-century innovation—was a hugely important political and social development across the world. The squeezing out of that class could generate a more antagonistic, unstable and potentially dangerous politics.
The potential for dramatic change is clear. A future of widespread technological unemployment is harder for many to accept. Every great period of innovation has produced its share of labour-market doomsayers, but technological progress has never previously failed to generate new employment opportunities.
The productivity gains from future automation will be real, even if they mostly accrue to the owners of the machines. Some will be spent on goods and services—golf instructors, household help and so on—and most of the rest invested in firms that are seeking to expand and presumably hire more labour. Though inequality could soar in such a world, unemployment would not necessarily spike. The current doldrums in wages may, like those of the early industrial era, be a temporary matter, with the good times about to roll.
These jobs may look distinctly different from those they replace. Just as past mechanisation freed, or forced, workers into jobs requiring more cognitive dexterity, leaps in machine intelligence could create space for people to specialise in more emotive occupations, as yet unsuited to machines: a world of artists and therapists, love counsellors and yoga instructors.
Such emotional and relational work could be as critical to the future as metal-bashing was in the past, even if it gets little respect at first. Cultural norms change slowly. Manufacturing jobs are still often treated as “better”—in some vague, non-pecuniary way—than paper-pushing is. To some 18th-century observers, working in the fields was inherently more noble than making gewgaws.
But though growth in areas of the economy that are not easily automated provides jobs, it does not necessarily help real wages. Mr Summers points out that prices of things-made-of-widgets have fallen remarkably in past decades; America’s Bureau of Labor Statistics reckons that today you could get the equivalent of an early 1980s television for a twentieth of its then price, were it not that no televisions that poor are still made. However, prices of things not made of widgets, most notably college education and health care, have shot up. If people lived on widgets alone—goods whose costs have fallen because of both globalisation and technology—there would have been no pause in the increase of real wages. It is the increase in the prices of stuff that isn’t mechanised (whose supply is often under the control of the state and perhaps subject to fundamental scarcity) that means a pay packet goes no further than it used to.
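The arithmetic behind Mr Summers’s point can be sketched with made-up numbers (the wages, prices and two-item basket below are hypothetical, not figures from the Bureau of Labor Statistics):

```python
# Hypothetical figures for illustration only.
wage_then, wage_now = 100.0, 200.0          # nominal wage doubles over the period

# A two-item basket: manufactured "widgets" and non-mechanised services.
widgets_then, widgets_now = 50.0, 25.0      # automation halves goods prices
services_then, services_now = 50.0, 175.0   # education and health care soar

basket_then = widgets_then + services_then  # 100.0
basket_now = widgets_now + services_now     # 200.0

# Real wage = how many baskets a pay packet buys. Flat, despite cheaper widgets.
real_then = wage_then / basket_then         # 1.0
real_now = wage_now / basket_now            # 1.0

# Had people lived on widgets alone, purchasing power would have quadrupled.
widgets_only_then = wage_then / widgets_then  # 2.0
widgets_only_now = wage_now / widgets_now     # 8.0
```

On these assumed numbers the doubling of the nominal wage is exactly eaten by the rising cost of the non-mechanised half of the basket, which is the mechanism the paragraph describes.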
So technological progress squeezes some incomes in the short term before making everyone richer in the long term, and can drive up the costs of some things even more than it eventually increases earnings. As innovation continues, automation may bring down costs in some of those stubborn areas as well, though those dominated by scarcity—such as houses in desirable places—are likely to resist the trend, as may those where the state keeps market forces at bay. But if innovation does make health care or higher education cheaper, it will probably be at the cost of more jobs, and give rise to yet more concentration of income.
The machine stops
Even if the long-term outlook is rosy, with the potential for greater wealth and lots of new jobs, it does not mean that policymakers should simply sit on their hands in the meantime. Adaptation to past waves of progress rested on political and policy responses. The most obvious are the massive improvements in educational attainment brought on first by the institution of universal secondary education and then by the rise of university attendance. Policies aimed at similar gains would now seem to be in order. But as Mr Cowen has pointed out, the gains of the 19th and 20th centuries will be hard to duplicate.
Boosting the skills and earning power of the children of 19th-century farmers and labourers took little more than offering schools where they could learn to read, write and do algebra. Pushing a large proportion of college graduates to complete graduate work successfully will be harder and more expensive. Perhaps cheap and innovative online education will indeed make new attainment possible. But as Mr Cowen notes, such programmes may tend to deliver big gains only for the most conscientious students.
Another way in which previous adaptation is not necessarily a good guide to future employment is the existence of welfare. The alternative to joining the 19th-century industrial proletariat was malnourished deprivation. Today, because of measures introduced in response to, and to some extent on the proceeds of, industrialisation, people in the developed world are provided with unemployment benefits, disability allowances and other forms of welfare. They are also much more likely than a bygone peasant to have savings. This means that the “reservation wage”—the wage below which a worker will not accept a job—is now high in historical terms. If governments refuse to allow jobless workers to fall too far below the average standard of living, then this reservation wage will rise steadily, and ever more workers may find work unattractive. And the higher it rises, the greater the incentive to invest in capital that replaces labour.
Everyone should be able to benefit from productivity gains—in that, Keynes was united with his successors. His worry about technological unemployment was mainly a worry about a “temporary phase of maladjustment” as society and the economy adjusted to ever greater levels of productivity. So it could well prove. However, society may find itself sorely tested if, as seems possible, growth and innovation deliver handsome gains to the skilled, while the rest cling to dwindling employment opportunities at stagnant wages.
AT THE DUSSELDORF airport, robotic valet parking is now a reality. You step out of your car. You press a button on a touch screen. And then a machine lifts your car off the ground, moving all three tons of it into a kind of aerial parking bay. Built by a German company called Serva Transport, the system saves you time. It saves garage space, thanks to those carefully arranged parking spots. And it’s a sign of so many things to come.
But the one thing it doesn’t do, says J.P. Gownder, an analyst with the Boston-based tech research firm Forrester, is steal jobs. In fact, it creates them. Before installing the robotic system, the airport already used automatic ticket machines, so the system didn’t replace human cashiers. And now, humans are needed to maintain and repair all those robotic forklifts. “These are not white-collar jobs,” Gownder tells WIRED. “This is the evolution of the repair person. It’s harder to fix a robot than it is to fix a vending machine.”
Gownder uses the Dusseldorf parking garage as a way of showing that the coming revolution in robotics and artificial intelligence may not squeeze the human workforce as much as some pundits have feared. In a widely cited study from 2013, Oxford professors Carl Frey and Michael Osborne say that machines could replace about 47 percent of our jobs over the next 20 years, but in a new report released today, Gownder takes a more conservative view. Drawing on government employment data and myriad interviews with businesses, academics, and, yes, pundits, Gownder predicts that new automation will cause a net loss of only 9.1 million U.S. jobs by 2025. The horizon of his study is much closer, but his numbers are well under the roughly 70 million jobs that Frey and Osborne believe to be in danger of vaporization.
“While these technologies are both real and important, and some jobs will disappear because of them, the future of jobs overall isn’t nearly as gloomy as many prognosticators believe,” Gownder writes in the report. “In reality, automation will spur the growth of many new jobs—including some entirely new job categories.”
AI Versus Humanity
Yes, the revolution is coming. Gownder points to a robot at the Aloft hotel in San Francisco that delivers towels and toothpaste and other stuff. At Vanguard Plastics in Connecticut, a machine called Baxter is manufacturing goods in ways machines never could in the past. The likes of Google and Amazon are pushing even further into this area with everything from warehouse drones to self-driving cars.
Perhaps more importantly, the giants of the ‘net are rapidly advancing the art of artificial intelligence, teaching online services to recognize images, understand natural language, and even carry on conversations—the kinds of artificial intelligence that will empower robots to tackle ever-more complex tasks. Using the AI that Google and Facebook use to identify photos on the ‘net, researchers have already built machines that can, say, teach themselves to screw on a bottle cap.
“Today’s technology is different than what we’ve seen in the past,” says Martin Ford, the author of the recent book Rise of the Robots: Technology and the Threat of a Jobless Future. “The technology is taking on cognitive tasks. We now have machines and algorithms that can, at least in a limited sense, think.”
As this tech evolves, concern is certainly warranted, not only because of how these technologies will affect the workforce but because, some argue, smarter robots could wind up becoming more harmful robots. After seeing the latest artificial intelligence in action, Elon Musk, the founder of electric car company Tesla and the space exploration outfit SpaceX, worries that such AI may turn on humans in more direct ways, so much so that he has donated millions to efforts that seek ways of keeping AI “beneficial to humanity.” But Gownder rightly points out that such technology is still in the early stages of development—and that it still requires much help from humans.
‘Job Transformation, Not Job Replacement’
Humans must build these machines and program them and repair them. But they must also train them. This is true of “deep learning” AI, and it’s true of robots like Baxter. Baxter must be programmed to perform certain tasks, and that involves physically moving his limbs back and forth.
IBM is touting the arrival of Watson, a broad collection of online tools that use artificial intelligence to help diagnose disease, among other things, and so many others are exploring similar work. But whatever the message from IBM, such tools operate alongside humans, not in lieu of them. “Watson is like a robotic colleague,” says Gownder. “It’s job transformation, not job replacement.”
Andrew Moore, the dean of the school of computer science at Carnegie Mellon University, who previously worked in AI and robotics at Google, agrees. He says that he has seen no evidence that this technology is stealing jobs—and that, as time goes on, it will likely create an enormous number of jobs.
“Technology does change the mix of jobs. You’re going to see doctors taking more of the role that involves the personal interaction with patients and less of the role of trying to keep huge amounts of evidence in their heads. The nurse may become more prestigious than the doctor,” Moore says. “But if you look around, there are also new kinds of creative roles being produced across the market. There are so many jobs that didn’t exist just a few years ago.”
This is the larger message of Gownder’s report. Robotics and AI will change the way we work, but they won’t necessarily take away our work. Today’s warnings over the rise of AI, he says, are reminiscent of the handwringing over so many other technological advances in the past—and after all these centuries, the workforce is still there.
It should be said, however, that Gownder’s study only looks so far down the road. And as Ford says, even Gownder’s rather conservative estimate—9.1 million jobs lost—is still rather significant. Robotics and AI will continue to progress—at an unprecedented rate—and though Gownder believes the doomsayers have overblown the threat of widespread automation, he too sees reason for concern—and for continued debate. “The rate of change matters,” Gownder says. “We must keep our eyes open.”
Big Data and Big Pressure
Nothing about life at Amazon seems fun after that big New York Times report on the corporation’s brutality toward its own white-collar workers. But unpack the account a little bit and it turns out to be documenting two very different kinds of misery. The first, the one that has been most broadly discussed, suggests that Amazon managers habitually treat their employees as if they have no existence outside of work. This leads, in the Times’ depiction, to mundane abuses and despair. One ex-employee worked such long hours that her fiancé got in the habit of driving to the Amazon campus and calling her until she agreed to come home with him. (“That’s when the ulcer started,” the woman said.) “Nearly everyone I worked with, I saw cry at their desks,” one former executive in the books department says. Some of the abuses were less mundane. The paper found a woman named Elizabeth Willet, who was given lousy performance grades because she left work at 5 p.m. to take care of a newborn child; another was given low performance ratings after she returned from treatment for thyroid cancer; yet another felt compelled to leave on a business trip a day after miscarrying. “From where you are in life, trying to start a family,” she said her manager told her, “I don’t know if this is the right place for you.”
Rough stuff. But throughout the Times' account, another menace keeps creeping in — less vivid, but heavier with existential weight: data. In Amazon’s warehouses, we learn, workers “are monitored by sophisticated electronic systems to ensure that they are packing enough boxes.” In the white-collar jobs that are the story’s real subject, the company is exacting in similar ways. “The company is running a continual performance improvement algorithm on its staff,” a former marketer on the Kindle team explains. Before regular performance reviews, Amazon workers are given “printouts, sometimes up to 50 or 60 pages long,” that measure their performance on many different metrics. (It’s a little amazing that Amazon is still printing this stuff out on paper.) The totality of this measurement, the Times suggests, means not that Amazon is unique but merely that the company has been “quicker in responding to changes that the rest of the work world is now experiencing: data that allows individual performance to be measured continuously.”
Which is the line at which the average white-collar Times reader is meant to experience a sense of imminent collapse and dread. Obnoxious as most of the abuse is, as many lines as it crosses, a reader who works at another company can chalk it up to a particular sick corporate culture, located in Seattle and presided over by a megalomaniac. You can reassure yourself that you have a kinder boss and a more decent set of rules. But “continual improvement algorithms” are innovations, the Times explains, the kind that are now arriving in “the rest of the work world” and just happened to come to Amazon first. The real villain of the Times piece isn’t Bezos or his senior executives. Instead, it’s Taylorism for the professional class, in the guise of data. “Data,” a senior Amazon executive tells the Times, “is incredibly liberating.”
It’s sort of perfect that, of all American executives, Bezos has to confront this report. Mark Zuckerberg in particular was the great symbol of the last tech epoch — in which a kind of know-nothing idealism still held, in which the cultural shock was that social experiences were now mediated through smartphones. But for the transition to the next epoch — drones, intelligent robots, the “Internet of Things,” a phase in which technology mediates our experience of the physical world — Bezos fits best. The industry’s professed idealism is fading, and the distinction between Silicon Valley and the rest of American business has collapsed. So now the big public questions about Silicon Valley look pretty familiar because they are extensions of questions that have long existed around American corporations: of how much power one company should be allowed to have, of how much influence a company ought to have over individual choices, of how we weigh the magnificent efficiencies capitalism brings against its human brutalities. What is interesting about Bezos is how familiar a corporate type he is, down to the shiny Jack Welch dome.
Bezos made public the intra-company memo that he sent out after the Times story was published, and it was intelligently calibrated and said quite a bit about the need for corporate human decency: “Our tolerance for any such lack of empathy needs to be zero.” He didn’t say anything about data.
The Amazon story has now been through a couple of media cycles, and the presiding mood is calmer, less outraged than it was in the hours after it went live over the weekend. You could probably find “similar anecdotes coming from ex-employees at Goldman, Skadden, Bain, or various fast-growing startups in Silicon Valley,” wrote Alison Griswold at Slate, “and they would probably be non-stories.” (It isn’t surprising that the Amazon story would arouse vastly differing opinions within the media — a field that, maybe more dramatically than any other, has transitioned from a system in which prestige was a matter of craft and jobs were protected by unions to one in which everyone’s productivity is immediately knowable in the form of clicks.) Matt Yglesias pointed out that, unlike Amazon’s blue-collar workers, the executives whose abuses were detailed in the story were likely well paid and had the option of leaving for another good job. On Twitter, Josh Barro, of the Times’ own Upshot, compared Amazon employees to triathletes: “Triathlons are, objectively, awful. And yet some people derive perverse joy from them. Who are we to argue with their life choices?” Plenty of people pointed out that working at Apple doesn’t seem like a real picnic either.
But the destabilizing parts of the Times story didn’t really have much to do with how much more or less brutal Amazon is than its competitors, but with naming the costs of a vision by which a firm not only operates within the market but also itself operates as a highly monitored internal market for talent and success. What’s notable about the Amazon version of this vision isn’t only that it is brutal but also that it works.
In his satirical Silicon Valley novel The Circle, Dave Eggers has his naïve heroine frantically competing with her co-workers to up her internal Participation Rank score, staying up late into the night to prop up her ranking by authoring “zings” (the novel’s version of Facebook “likes”), “your comments on others’ zings, your comments on other Circlers’ profiles, your photos posted, attendance at Circle events, comments and photos posted about those events,” each of these incremental efforts noted by an algorithm and scored. Most people know what they think about mean bosses, and misogynistic ones. They might even know what they think about the suggestion, made by the LinkedIn founder Reid Hoffman, that we ought to see our relationship with our employer not as a permanent state but as a “tour of duty,” lasting a few years. But I don’t think many professionals really know what they think about the experience of a company as an aggressively monitored internal market for productivity, or how to know whether such an experience is necessary to deliver the signal corporate triumph of the Times story, in which Amazon delivers an Elsa doll, in exactly 23 minutes, to the door of a customer who could not find one anywhere in New York City. Which is to say, they don’t know what to make of a rising vision of work in which, as Louis C.K. once put it, “everything’s amazing and nobody’s happy.”