This paper was originally written for the course "Artificial Intelligence & the Future of Work." A version was published in the Spring 2018 edition of the Georgetown Public Policy Review.


AI, Labor, and Incentivizing Automation: An Historical Perspective

 

When we think about the role of artificial intelligence and automation in the workplace of today and in the future, it is important to situate our analysis within the wider context of historical technological development. AI, while disruptive, is not wholly unprecedented in this context, and as we prepare for its future effects we can look to the past for guidance in conceptualizing its impact. In this paper, I will first examine technology through a historical lens, covering previous fears of automation and technological unemployment as well as technology's actual effects. I will focus on automation during and since the Industrial Revolution, looking at the economic arguments for why it unfolded the way it did. This section provides a framework for thinking about the economic effects of technology by exploring the four mechanisms through which technology impacts employment. The next section focuses on AI and automation in their current forms, providing a foundation for thinking about what they are, how they are used, and some of the contemporary dialogue around their usage. The final section looks to the future, analyzing the potential effects of AI and automation on work based on the historical analytical framework developed throughout the paper. It explores the kinds of jobs that are at risk according to the tasks framework, the potential for labor market polarization, and the overall economic effect.

In using a historical lens, this paper looks at the overall narrative of technology's impact on the economy to conceptualize the future. I argue that AI's disruptive economic potential hinges on further sustained investment in research and development, which in turn is driven by labor market wages. Using historical evidence from the Industrial Revolution and the IT revolution, I show that the decision to develop automated technology has often been guided by cost-saving practices in response to concentrations of relatively higher-wage workers. Following this trend, R&D for AI will continue to be strong if there is a concentration of wealth in any particular field or type of job. Essentially, any "human advantage" we develop over automation will incentivize the automation of that skill, once again lowering costs and displacing labor. If this is the case, widespread automation will fundamentally alter how human labor operates, giving rise to the need for a renegotiation of the social contract. While a full treatment is outside the scope of this paper, this trend necessitates the creation of a mechanism to re-distribute the wealth generated by automation in order to maintain a stable, just, and fair society.
 

I. Historical Conceptions of Automation, Innovation, & Technological Unemployment
 

Needless to say, for as long as technology has been around, which stretches back to the very beginning of human history, it has exerted influence on employment, as well as on economics more generally. Particularly since the first Industrial Revolution, technological innovation has had a disruptive effect on how economies are organized and how labor operates, creating widespread fears about the fate of working people. The classic example of the printing press goes back even further than the Industrial Revolution: this new device meant the end of employment for many people whose jobs required writing and copying by hand. But we now know that, in the long run, the printing press was instrumental in subsequent economic growth, and very few would argue for a return to hand-lettering. Eventually this field died out, but many other fields emerged, spurred by opportunities brought by the printing press. Both the economy as a whole and the well-being of its individual members were undoubtedly improved by this invention, even though it created great anxiety in the short term.

We don't even need to go that far back to find examples of this kind of dynamic. The Industrial Revolution, beginning around the middle of the 18th century in Europe, fundamentally changed the kinds of jobs available to workers. In England, the epicenter of the transformation, wages were relatively high, giving owners of capital an incentive to develop technology that could substitute for the high cost of labor. According to Robert Allen, the adoption of technologies like the spinning jenny, which partially automated the spinning of yarn, was due to England having high wages relative to capital costs (Allen). This meant that the return on research and development was high, and it demonstrates how the process of technological innovation is often driven by considerations of labor and capital costs. This process is also shaped by the particular actors, interests, and influences present at the time of a technology's adoption. For example, as David Noble points out, the decision to adopt numerically controlled machine tools, which favor the skills of programmers, over record-playback machines, which preserve the skills of machinists, was part of a larger effort to centralize decision-making authority (Noble).

During any given economic change, whether driven by technology or not, there are going to be those who are hurt in the short term. Knowledge about certain industries and types of labor is passed down through generations, creating a kind of stability that becomes upended when new forms of labor emerge. And while new forms have, historically, always emerged, the process is painful. As Joel Mokyr points out, technological change can have a number of effects on labor, including the destruction of traditional labor hierarchies, changes in physical work environments, the relocation of jobs and thus the breaking up of families and communities, and, of course, unemployment (Mokyr). The natural response to these changes is resistance, which can come in many forms: non-market means such as tariffs and regulations, extra-legal actions like strikes and demonstrations, and violent actions such as the physical destruction of technology, as most famously practiced by the Luddites. But, over time, as cheaper capital displaced labor, goods and services dropped in price, which raised everyone's real income and increased demand for yet more goods and services (which created new jobs). In this way, there is no "lump of labor," but merely an ever-shifting labor market that changes to meet new demands as they arise. In the meantime, technology actually enriched the experience of everyday people (Haldane).

In order to understand our current position, it is helpful to look at the kinds of conversations around technological and economic change that have occurred throughout history. According to Daniel Akst, in the mid-20th century high unemployment rates made people fear the consequences of automation, which disproportionately hurt less-skilled workers. The idea was that manufacturing workers who lost their jobs to automation (as well as to trade) would quickly rebound and be re-situated in a new field, but this did not usually happen, as these workers typically just dropped out of the workforce (Akst). As Wassily Leontief pointed out, the idea that humans will always be indispensable to production is as misguided as someone of an earlier time thinking that horses were indispensable to agricultural labor. Just because humans have always been part of labor, it does not follow that we always will be. If computers become able to perform non-routine tasks, we may go the way of horses. Leontief is also skeptical of re-training programs, believing that programs to re-place workers must account for the skills and tasks that will remain valuable over the long run, not just those that are currently valuable (since these, too, could soon be automated) (Leontief).

Similarly, in "The Triple Revolution," a group of social activists and technologists addressed a letter to President Lyndon B. Johnson highlighting what they saw as a threat to human labor from automation and self-regulating machines. In particular, the authors saw the "cybernation revolution" as bringing about a system of essentially unlimited productive capacity in which output could increasingly be produced by machines while excluding human labor. Fearing the negative effects this would have on the labor market, and the potential social ills that could follow, the signatories encouraged the government to adopt a policy mechanism through which the wealth produced through automation could be re-distributed back to the rest of the economy, rather than being tied up in capital and its owners (Pauling et al.).

Not everyone was quite as gloomy as these thinkers. In his famous essay "Economic Possibilities for our Grandchildren," John Maynard Keynes was optimistic that technological developments like automation would actually free humanity from the bonds of labor. He believed that, as machines could produce more and more wealth without human labor, we would begin working less and instead spend our time pursuing fulfilling things like art and philosophy (Keynes). Theoretically, in Keynes' vision, the government would have adopted a mechanism such as the one the signatories of "The Triple Revolution" later suggested, so that all of society could reap the benefits of this economic wealth. But Keynes also believed that being freed from the need to worry about employment and finances would change the code of morals and values we develop as a society, shifting the focus toward self-actualization and fulfillment rather than material possessions.

In general, it is helpful to think of four broad ways that technology impacts employment. The two direct effects are technology substituting for labor, which raises productivity and lowers prices, and technology expanding the sectors that are the source of innovation, which increases the demand for labor in those fields. The two indirect effects are technology complementing certain kinds of labor, which leads to improvements in those sectors that then expand and increase labor demand, and technology lowering the costs of production, and thus prices, which allows consumers to shift some of their spending to other discretionary goods and services, in turn increasing demand for (new) labor (Stewart et al.). In thinking about how automation has already impacted and will continue to impact employment, this framework, based on historical experience, provides a useful guide.
 

II. Contemporary Paradigms of AI & Automation


For all the popular panic about the advent of artificial intelligence (and its more media-friendly manifestation, robots), there is surprisingly little agreement on what exactly it is. Does any machine that completes a task that would otherwise have been done by a human count as artificially intelligent? Or only the humanoid ones that mimic human characteristics, like in the movies? Even defining "intelligence" on its own is a formidable task, let alone agreeing on what constitutes intelligence when exhibited by a non-human or non-carbon-based entity. One framework for conceptualizing how AI operates distinguishes between acting humanly, thinking humanly, thinking rationally, and acting rationally; AI can be modeled on any of these paradigms. The Turing Test, one of the more well-known concepts in AI, captures the acting-humanly paradigm, which is concerned with whether a machine can convince a human interlocutor that it is actually human rather than a machine. In order to pass the Turing Test, an AI must master natural language processing, which allows it to speak the given language successfully and seamlessly; possess knowledge representation, so that it can record what it takes in; demonstrate automated reasoning, using its stored information to draw new conclusions; and include machine learning, so that it can adapt to new information, data, patterns, and circumstances.

So where does this all leave us? For the sake of convenience, this paper will adopt Nils J. Nilsson’s definition of AI, which is “…that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment” (Nilsson). While relatively broad and vague, this definition allows us to account for a diversity of AI manifestations, including both those that are programmed to perform a specific task as well as those that possess “general” intelligence. It also allows space for AI that is not necessarily programmed to merely imitate human intelligence, in its different forms, but to develop and express its own. This was the vision of John McCarthy, one of the founding fathers of the field of artificial intelligence.

Since the beginning of its development, AI has been based on a few different systems of thinking: namely, symbol systems, expert systems, and machine learning. The first efforts at creating AI relied on symbol systems, in which the machine strings together logical operations on numbers, letters, and other "symbols" to reach a conclusion. As problems got more complex, however, symbol systems showed their limitations: the number of possible computations grew far too large for this to remain a viable method. The next efforts utilized expert systems, which combined symbol systems with knowledge elicited from human experts in particular domains; the experts' rules narrowed down the possible calculations, making the computing process much more tractable. But this method was limited because each type of machine required expensive, specialized programming for its particular field, and it offered hardly any advantage over simply consulting a real person. Finally, the approach that has come to dominate AI research is machine learning, in which the programmer feeds a large number of relevant examples to the machine, which in turn learns to build models based on those examples. Instead of the programmer explicitly telling the AI what to think or how to solve a problem, the programmer uses data to "teach" it. With innovations in processing power, as well as the recent accumulation of vast amounts of data, machine learning has proven to be the most promising paradigm for how to program AI.
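To make the contrast concrete, the short sketch below is a purely illustrative example of my own, not drawn from any of the sources cited in this paper. It shows a toy "spam filter" built both ways in Python: first with hand-written rules in the spirit of an expert system, then by counting word frequencies in a handful of labeled examples so that the program infers its own decision rule. The scenario, function names, word lists, and training messages are all invented for the illustration.

```python
# Illustrative sketch only: contrasting hand-written rules (expert-system style)
# with learning from labeled examples (machine-learning style).

# Expert-system style: a human encodes the decision rules explicitly.
def rule_based_is_spam(message: str) -> bool:
    suspicious_words = {"winner", "free", "prize"}  # rules chosen by a human "expert"
    return any(word in message.lower() for word in suspicious_words)

# Machine-learning style: the program infers which words matter from labeled examples.
def train_word_scores(examples):
    """Count how often each word appears in spam vs. non-spam messages."""
    scores = {}
    for message, is_spam in examples:
        for word in message.lower().split():
            spam_count, ham_count = scores.get(word, (0, 0))
            scores[word] = (spam_count + 1, ham_count) if is_spam else (spam_count, ham_count + 1)
    return scores

def learned_is_spam(message: str, scores) -> bool:
    """Classify by summing evidence from words seen during training."""
    evidence = 0
    for word in message.lower().split():
        spam_count, ham_count = scores.get(word, (0, 0))
        evidence += spam_count - ham_count
    return evidence > 0

# Toy training data: the "large number of relevant examples" fed to the machine.
training_examples = [
    ("claim your free prize now", True),
    ("you are a winner free entry", True),
    ("meeting moved to tuesday", False),
    ("lunch tomorrow with the team", False),
]

scores = train_word_scores(training_examples)
print(rule_based_is_spam("free prize inside"))        # True: matches a hand-written rule
print(learned_is_spam("free prize inside", scores))   # True: learned from the examples
print(learned_is_spam("team lunch tuesday", scores))  # False: words seen mostly in non-spam
```

Even in this toy form, the learned filter picks up informative words (such as "winner" and "prize") that no one explicitly told it about, which illustrates why the approach scales once vast datasets and fast processors become available.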

Fundamentally, there are two different types of AI: narrow (sometimes called weak) AI and human-level (sometimes called strong) AI. The former is programmed only to complete a specific task; examples include a self-driving car, a computer program that plays chess, or a Roomba. The latter, which is much more difficult to develop and thus rarer, is meant to resemble, to varying degrees, the holistic capabilities of humans. It is supposed to be able to "think," in a sense, and to learn and develop as it takes in new information. When thinking about how AI enters the workplace and affects productivity and employment, both types are relevant. So far, automation has mainly involved narrow AI: machines developed to complete a particular task at a lower cost than having a human do it. Human-level AI has the potential to radically alter the human economic landscape, which will be discussed in the next section.

But even the narrow AI already in use has had a dramatic effect on the workplace. In the last 30 years, wages have stagnated, inequality has risen, and employment growth has primarily been concentrated in low-skill (and thus low-wage) jobs. This is, in large part, a function of how the kinds of technologies adopted in workplaces have changed the tasks workers perform, and in turn the demand for human labor. This is known as the tasks framework. In fact, there is a correlation between the adoption of computer-based technologies and increased use of college-educated labor. Thus far, AI has been very good at performing "routine" tasks, those that follow explicit, codifiable rules, but not as good at "non-routine" tasks, which are not yet well enough understood to program. Many of the routine tasks AI has assumed from human labor were concentrated in low-skill, low-wage jobs, meaning that the jobs still available to humans required higher levels of education and skill (Autor et al.).

But what happens when middle- and higher-skill tasks can be automated? This began happening in the 2000s, as AI started to be able to do jobs traditionally held by middle- and upper-level professionals. The result was a process of deskilling: higher-skill workers unable to find work appropriate to their skill level moved down the occupational ladder, exerting downward pressure on middle-skill workers, and so on. Additionally, it is possible that the "maturity" of the IT sector has led to a decrease in the demand for skills. Once the capital of the IT sector is in place (which it largely is), the main new demand for high-skill cognitive tasks lies in maintaining that capital. Demand for skills would therefore be lower than during the initial investment stage (which drove the increased skills demand of the 1990s). This may also explain why there are fewer high-skill cognitive jobs, beyond the fact that they are becoming automated (Beaudry et al.).

Another way to think about this is to compare it to the emergence of new technologies like the spinning jenny and weaving machines during the Industrial Revolution, as mentioned earlier. Those technologies were developed and introduced in order to cut costs by replacing labor with machines, since wages were relatively high. The same was true at the end of the 20th century and the beginning of the 21st: as the supply of skilled labor increased starting in the 1970s, the wages paid to skilled workers rose as well, incentivizing the development of automated machines that could complete the same tasks at a lower cost (Acemoglu). The labor market polarization and unemployment we have seen thus far may be just the beginning. If AI continues to develop in the ways and at the pace it has, the result could be a seriously disrupted economy. The final section looks at some of the potential outcomes of these trends.
 

III. Looking Ahead: Work in an Automated World
 

The most commonly cited statistic regarding jobs and automation comes from an analysis by Carl Frey and Michael Osborne of which occupations are most at risk of automation. In consultation with experts, the two determined that about 47% of US employment is at risk. This analysis has been criticized for assuming that task structures are the same across all jobs within an occupation, but regardless the number has become central to the argument that automation is bringing mass technological unemployment (Frey and Osborne). But what kinds of tasks will be automated? According to a Bank of America report, automation will most heavily impact aerospace and defense, automobiles and transportation, finance, healthcare, industrials, domestic services, and agriculture and mining. The report estimates that 45% of manufacturing tasks will be automated by 2025, and that the development of these technologies will bring large initial job-creating investments (as well as subsequent labor cost savings). These technologies not only save on labor but also improve productivity growth, as the machines are able to complete tasks more efficiently and/or effectively than humans. The report also estimates that there is a 50% chance of full human-level AI (high-level machine intelligence) by 2040-2050 (Bank of America Merrill Lynch).

As discussed earlier, automation is increasingly being applied to tasks that are non-routine or highly cognitive. This includes programs that can craft news stories and other pieces of writing, automated stock market trading algorithms, machines that can analyze the events of a game and summarize them, and other machine-learning-based applications. This means automation may have an increasingly large impact on white-collar jobs, as well as middle-class clerical jobs. The use of cloud computing and software as a service (SaaS) means that many organizations are getting rid of their in-house IT departments, instead outsourcing those functions to centralized companies like Amazon, Google, and Microsoft. Some of these tasks are even done by machine learning software, decreasing the overall need for IT employees. Additionally, a number of high-skill jobs, such as those of lawyers, radiologists, computer programmers, and tax preparers, are beginning to be offshored to places with cheaper labor costs (Ford).

According to Alan Blinder, about 30-40 million jobs in the US (roughly a quarter of the workforce) could be offshored (Blinder). And as Brynjolfsson and McAfee have argued, offshoring is often the first step before automation, meaning there is a good chance these jobs are ripe for automation in the near future (Brynjolfsson and McAfee).

This trend supports the notion that there is an incentive to automate jobs that are high-skill and thus high-wage. After the growth in high-skilled workers that began around 1970, organizations became interested in developing technologies to replace the jobs that followed this boom, and so have begun focusing on automating them. What, then, happens to these white-collar workers? If they fall down the occupational ladder, they either push out lower-skilled workers or create a concentration of workers in low-skill occupations, creating yet another opportunity for disruption through automation of those tasks. Of course, AI is not magic. Just because we want to automate something does not mean we have the technological capability to do so. But the amount of investment in R&D, and the pace at which it is conducted, strongly influences the capabilities that come out of it. If AI generates greater wealth, there is even more incentive to automate, in order to replace the labor that produced that wealth in the first place. This leads to yet more R&D (at least in proportion, so that the investment is not costlier than what is eventually saved), which in turn increases the probability of developing AI that will replace more workers. This is a kind of feedback loop in which human labor will always lose out to the joint forces of large capital and automated technology.

As has been established, this trend is not new. It drove the innovation of spinning and weaving machines during the Industrial Revolution, and it has shaped the demand for skills in the wake of the IT revolution. When this happened in the past, overall economic productivity grew, labor re-oriented, and living standards rose. But will this time be different? It is impossible to know for certain. So far, the data do not support the notion that we have begun experiencing mass unemployment from automation, but this does not preclude it from happening in the future. Any kind of "singularity" or massive disruption of the economy and the labor force would require huge investments of capital into research and development of AI technologies. To justify these kinds of investments, there would need to be some kind of incentive; historically, relatively high wages have provided it, as companies realized automation could help them save on labor. Thus, there would need to be some kind of accumulation or concentration of wealth among labor (in a sector, in types of jobs, in a skill level) that makes the investment in high-level AI worth it. The extra wealth generated by current automation could potentially provide this starting point (provided that it is distributed rather than tied up in capital and the owners of capital), creating a kind of cycle of wealth production, distribution, extraction, and re-investment.

Essentially, to see the kind of strong AI of the popular imagination, the kind that will fundamentally alter the economic landscape by producing exorbitant wealth and possibly reducing the role of humans in the labor force, there will need to be massive investment in research and development. This will only occur if there is ample incentive for companies to make this investment. Historically, this has occurred when there has been a concentration of wealth in a segment of labor that owners of capital want to break up. The same may be true for the application of AI. What would set this development apart from previous technological unemployment would be the level of capabilities the machines possessed, and how quickly the transformation would occur. With machine learning technology that can continuously build on its own skill-set, it may be possible that, at some point, humans are irrelevant to the process of labor—just as horses became irrelevant to agriculture.

We have already seen negative effects of automation in terms of income and wealth inequality. The benefits that AI has brought so far have accrued primarily to elite, wealthy owners of capital, while middle- and lower-class workers have been harmed. This is itself an argument for a policy mechanism to re-distribute the wealth generated by AI, or for programs that more effectively train workers for available, better-paying jobs. I would go a step further and say that such a mechanism is also important because of some of the indirect effects. Ensuring that additional wealth is spread throughout the workforce and society in a more egalitarian way will also reduce the incentive to develop potentially destructive forms of AI. Of course, smaller-scale AI development will continue, as has already been the case. This is a good thing: it is what drives the economy forward, increases productivity, and generates wealth overall. This incrementalism is more desirable, for the security of society, than disruptive AI. It is, of course, not a failsafe. Any wealth returned to labor provides the potential for disruption through automation. But, as is the case with most future forecasting, it is important to identify fault lines and mitigate potential risk.

 

 

Works Cited

Acemoglu, Daron. “Technology and Inequality.” NBER Reporter: Winter 2003.

Akst, Daniel. “What Can We Learn from Past Anxiety Over Automation?” The Wilson Quarterly, The Wilson Center. 2013. http://wilsonquarterly.com/quarterly/summer-2014-where-have-all-the-jobs-gone/theres-much-learn-from-past-anxiety-over-automation/

Allen, Robert. “The Industrial Revolution in Miniature: The Spinning Jenny in Britain, France, and India.” Oxford University Department of Economics Working Paper 375. 2007.

Autor, David, Frank Levy, and Richard Murnane. “The Skill Content of Recent Technological Change: An Empirical Exploration.” The Quarterly Journal of Economics, 2003.

Bank of America Merrill Lynch. “Robot Revolution—Global Robot & AI Primer.” Thematic Investing, 2015.

Beaudry, Paul, David Green, and Benjamin Sand. “The Great Reversal in the Demand for Skill and Cognitive Tasks.” NBER Working Paper No. 18901, 2013.

Blinder, Alan. “How Many U.S. Jobs Might Be Offshorable?” CEPS Working Paper No. 142, 2007.

Brynjolfsson, Erik and Andrew McAfee. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. W.W. Norton & Company, 2014.

Ford, Martin. Rise of the Robots: Technology and the Threat of a Jobless Future. Basic Books, 2015.

Frey, Carl and Michael Osborne. “The Future of Employment: How Susceptible are Jobs to Computerisation?” Oxford Martin School Working Paper, 2013.

Haldane, Andrew. “Labour’s Share.” Speech at Trades Union Congress, London. 2015.

Keynes, John Maynard. “Economic Possibilities for our Grandchildren.” R. & R. Clark, Limited, Edinburgh. 1930.

Leontief, Wassily. “National Perspective: The Definition of Problems and Opportunities.” The Long-Term Impact of Technology on Employment and Unemployment. National Academy of Engineering Symposium. 1983.

Mokyr, Joel. “Technological Inertia in Economic History.” The Journal of Economic History, Vol. 52, No. 2, Cambridge University Press. 1992.

Nilsson, Nils. The Quest for Artificial Intelligence: A History of Ideas and Achievements. Cambridge University Press, 2010.

Noble, David. “Social Choice in Machine Design.” Case Studies on the Labor Process, Monthly Review Press, New York. 1979.

Pauling, Linus et al. “The Triple Revolution.” The Ad Hoc Committee on the Triple Revolution. 1964.

Stewart, Ian, Debapratim De, and Alex Cole. “Technology and people: The great job-creating machine.” Deloitte LLP. 2015.