This paper was written for the course "Governing Emerging Technologies."
Ethics & Autonomous Systems: Autonomous Vehicles
It’s rather revealing that in our current corporatist paradigm, “disruption” has become the new jargon du jour. Not only has it taken on a life of its own—it has become a mantra, an ideology. It has moved beyond a Marxian feature of capitalist systems, a logical outgrowth of creative destruction, to embody a value in and of itself, fueled by Silicon Valley VC money and manifested in $400 “self-squeezing” juicers and “bodegas” that are merely glorified vending machines (Ohlheiser). But it’s easy to poke fun at these duds of the tech world, ignoring the giants whose innovations actually do fundamentally alter the fabric of society. Juicero and Bodega are merely following the logics of the system pioneered by the likes of Uber, Google, Facebook, and Amazon, whose very bottom lines rely on disrupting traditional business models and re-making them technologically in their own images. Conveniently, this is also supposedly good for society at large, which benefits from things like better medication, more convenient travel, easier financial transactions, cheaper products, and so on. But it’s easy to lose sight of this deeper purpose, worshipping disruption at the expense of the progress it’s meant to deliver. When you value disruption in and of itself, you neglect to ask the hard questions about a technology. And when you fail to ask these hard questions—who will be impacted and how, what large-scale changes may develop in response, why is this being made—you risk serious harm, even catastrophe, later on.
This is not a categorical indictment of disruption but a rejection of its osmosis into the public consciousness as a desirable value in and of itself. The truth of the matter is “disruptive” technologies require a significant amount of ethical planning and regulation. This is particularly true of automated and autonomous systems for, by their very nature, they require their human programmers to make decisions about how the technology itself will make decisions. Even the decision to defer “judgment” in a given situation to a program or system constitutes an ethical choice that is decided by the human beforehand; so whether it’s ultimately a human or machine that “writes” the decision, the mere capability of making such a program in the first place situates the designers in a position of making a conscious, ethical choice.
Nowhere is this more visible today than in the design of autonomous vehicles (AVs), more popularly known as self-driving cars. AVs require their designers to make very difficult ethical choices about how the vehicles should behave in specific circumstances—including the decision not to build this into the algorithm but to let the vehicle “decide” for itself, which, as discussed, is also an ethical choice. There are also a number of peripheral questions about how the use of AVs exerts pressure on other aspects of society. AVs have the potential to disrupt many facets of everyday life—economically, socially, culturally, and politically. Some of these disruptions are welcome—a probable reduction in accidents, for example; some are less so—rapid, widespread job loss; and others are more ambiguous—how does this affect liability, or change infrastructure? But these issues exist, and will be addressed, whether disruption evangelists care about them or not.
I. Levels of Autonomy in AVs
Of course, AVs aren’t one uniform technology; rather, they encompass a range of capabilities, both currently possible and theoretical. The National Highway Traffic Safety Administration (NHTSA), an agency within the Department of Transportation, has adopted the Society of Automotive Engineers’ framework for categorizing automated driving systems, which ranks them in a tiered system from 0 to 5. This paper will adopt the same categorization, as it is useful for thinking about different potential applications of AV technology. It is also necessary from a regulatory and ethical standpoint, as different levels of autonomy bring with them different issues and concerns.
At the most basic level, vehicles with a score of 0 have no automation—the human is responsible for “all aspects of the driving task, even when enhanced by warning or intervention systems.” Vehicles with a score of 1 have driver assistance, in which the vehicle can execute certain specific tasks related to either steering or acceleration/deceleration. Vehicles with a score of 2 have partial automation, meaning they can have both steering and acceleration/deceleration completed automatically, with the human driver completing the other tasks. All of these thus far describe vehicles in which a human driver is monitoring the driving environment; the remaining levels place the automated system in charge of this. At level 3, vehicles have conditional automation, meaning the AV has control over steering, acceleration/deceleration, and monitoring of the driving environment; if need be, the human must intervene. At level 4, the vehicle has high automation, meaning it has all of the above skills, and if the human is called to intervene but does not, the AV will still make a decision. Finally, at level 5 a vehicle has full automation, which means the AV can drive in all conditions and situations without any need to even ask for a human to intervene. When talking about the benefits, costs, and ethical questions raised by AVs, it’s important to remember that these are not uniform but vary by level of autonomy of the AV in question (“Federal Automated Vehicles Policy”).
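The tiered framework above can be summarized as a small data structure. The following is an illustrative sketch based only on the level descriptions given here—it is not code from NHTSA or SAE, and the names are my own:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """The SAE levels of driving automation, as adopted by NHTSA."""
    NO_AUTOMATION = 0           # human performs all driving tasks
    DRIVER_ASSISTANCE = 1       # system handles steering OR acceleration/deceleration
    PARTIAL_AUTOMATION = 2      # system handles steering AND acceleration/deceleration
    CONDITIONAL_AUTOMATION = 3  # system monitors the environment; human must intervene on request
    HIGH_AUTOMATION = 4         # system can act even if the human fails to intervene
    FULL_AUTOMATION = 5         # no human intervention needed in any conditions

def system_monitors_environment(level: SAELevel) -> bool:
    """The key dividing line: at levels 3-5, the automated system, not the
    human driver, monitors the driving environment."""
    return level >= SAELevel.CONDITIONAL_AUTOMATION
```

The useful feature of encoding the taxonomy this way is that it makes the 2/3 boundary—who is monitoring the road—explicit, which is also where most of the regulatory and ethical weight falls.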
II. Benefits of AVs
AVs, if implemented properly, promise a number of benefits that would be considered desirable even across political and socioeconomic lines. Of course, these vary across levels of autonomy, but for the sake of this paper, they will be considered in aggregate. Most importantly, AVs would likely greatly reduce the number of car accidents, as they would remove humans, the key “failure point,” from the equation. The Association for Safe International Road Travel estimates that nearly 1.3 million people die in road crashes each year, with an additional 20-50 million injured or disabled. That’s over 37,000 killed, with 2.35 million injured, in the U.S. alone (“Road Crash Statistics”). Most accidents (94%) occur because of human fallibility—drivers are distracted, angry, drunk, or simply not following the rules of the road—making their removal from the process likely to reduce accidents (“Federal Automated Vehicles Policy”). Not only would this lower the human death and injury toll, it would also save a significant amount of money. Fewer accidents mean less money spent on car repair, healthcare, insurance, and police and emergency services (Bollier).
AVs would also save money by reducing overall transportation costs. With the job of human driver removed (which is, by itself, arguably positive or negative), those using transportation—whether taxis, ride-hailing services, buses, or trains—won’t have to subsidize drivers’ salaries. AVs would also lower costs by improving traffic conditions and cutting down on congestion. Fewer accidents mean fewer bottlenecks, and if AVs are universally adopted, they could be networked to maximize efficiency on the road by anticipating and easily responding to the behavior of others. This could also cut down on commute and transportation times, giving people more time to do things they’d rather be doing than driving in traffic (one could also imagine this lowering stress levels). Additionally, AVs could improve mobility for people who are often excluded from transportation, including the elderly, children, and those with certain disabilities (Bollier).
Many of these benefits also have second-order benefits—for example, more efficient and safer transportation systems save resources, which is more environmentally friendly. And a transportation system in which safety is built into the AV, with a dramatic reduction in accidents, means that vehicles can be designed to be lighter, more flexible, more efficient, and greener. Not only are some of the changes brought by AVs inherently good, but by building preventative safety measures into transportation systems (rather than “curative” ones, which deal with problems after the fact), it’s possible to build a healthier system as a whole. Again, it’s important to emphasize that these benefits come from the various levels of autonomy possible in AVs, but tend to be skewed towards more advanced systems.
III. Ethical issues of AVs
That being said, AVs’ disruptive potential also raises numerous ethical issues that must be addressed in regulatory contexts. Again, it must be noted that each level of automation brings with it different challenges—but rather than focusing on differentiating between these levels, this paper will look holistically at the issues that arise under the umbrella of autonomous vehicles. Because the issues generally grow as the technology progresses, many of those mentioned will be more applicable to higher-level AVs than lower-level. They will also be divided into four general categories: direct safety issues, indirect safety issues, questions of liability, and large-scale/systemic issues.
First, there are some clear questions of direct safety that the programmers of AVs must take into account. At the current stage of development, it’s highly probable that AVs will cause far fewer accidents than human-controlled vehicles, so this specific aspect of safety is not a problem. But things get trickier in the specifics of how AVs should behave in particular dangerous situations. Because AVs are programmed ahead of time, they don’t have the spontaneity that humans possess—meaning that in an accident, they will behave precisely as they are designed to. The question is what the right way to program them in these instances is. In the case of an accident, should the AV behave in a way that maximizes the chance that the passenger survives? Or should it risk the passenger for the sake of the passengers in other vehicles? What about pedestrians? Does this calculus change when the number of potential people saved changes—for example, four pedestrians versus one passenger? Does it change depending on the type of people involved—for example, should it prioritize saving those who are young, elderly, or pregnant?
These seem like impossible questions. But if AV developers can build these decisions into their algorithms, there will be pressure for them to do so. Unsurprisingly, when surveyed, ordinary people mostly supported the use of AVs that risked the passenger for the sake of others when it came to other people’s AVs, but much preferred that their own AVs prioritize saving the passenger. In fact, they wouldn’t use an AV that sacrificed its own passenger, even though they would prefer others to use this variety. This creates a social dilemma that may prevent the adoption of AV technology—it’s impossible to design systems that satisfy these contradictory consumer preferences (Bonnefon et al.). Regardless, there are ethical decisions to be made (most likely at the level of the designer or company) regarding how the AV should respond—even deciding not to explicitly program this into an AV, instead letting it develop its own decision, is an ethical choice.
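The point that the choice is made by humans in advance can be made concrete with a toy sketch. The following is purely illustrative—the function, the harm estimates, and the weights are all hypothetical, and no actual AV vendor’s logic is implied. What it shows is that any such rule must be written down beforehand, and that the weighting itself is the ethical decision:

```python
def choose_maneuver(options):
    """Pick the maneuver minimizing expected harm under an explicit weighting.

    `options` is a list of (maneuver_name, expected_passenger_harm,
    expected_third_party_harm) tuples, with harms on an arbitrary 0-1 scale.
    The weights below ARE the ethical choice a human makes in advance:
    raising passenger_weight above 1.0 prioritizes the passenger;
    equal weights treat all lives the same.
    """
    passenger_weight = 1.0    # a human-made ethical parameter, not a technical one
    third_party_weight = 1.0
    return min(
        options,
        key=lambda o: passenger_weight * o[1] + third_party_weight * o[2],
    )[0]

# With equal weights, swerving (some risk to the passenger) is chosen over
# proceeding (greater risk to pedestrians):
print(choose_maneuver([("proceed", 0.1, 0.9), ("swerve", 0.4, 0.0)]))  # swerve
```

Changing `passenger_weight` to, say, 10.0 flips the outcome—which is exactly the survey respondents’ preference for their own vehicles, and exactly the kind of parameter no line of code can make value-neutral.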
Another direct safety issue revolves around situations in which following the rules of the road—typically an aspect of AVs that makes them preferable—is actually more dangerous. If a police car or ambulance needs to get through, the AV would need to be programmed to understand this specific context and respond accordingly. The same is true for situations in which an AV must react to someone else breaking the rules—say, swerving out of the way to avoid someone, or backing up to avoid getting backed into—by slightly breaking the rules itself (though only in a safe manner). Essentially, AVs need to be able to understand dynamic social factors. Of course, these scenarios can also be incorporated into the design of the systems, and this may be more of an issue in contexts in which there are both human and machine drivers. In fact, such mixed contexts are where safety issues are most prominent, as the different driving styles of humans and machines might become a problem. Yes, it may be the case that thus far accidents involving AVs have all been the fault of the other (human) driver—but the accidents occurred nonetheless. However, it is worth pointing out that the accidents reported with AVs have overwhelmingly been minor, and it’s possible that over time, humans will conform their driving styles to be more similar to those of AVs (or rather, to the proper rules of the road).
There are also a number of second-order or indirect safety issues that arise with the use of AVs. First, AVs are vulnerable to hacking from nefarious outside actors. Just as other networked technologies like computers and personal devices are vulnerable, so too are AVs. But rather than having your data and sensitive information stolen from personal devices, a hacked AV could result in serious accidents. A hacker, with the right capabilities, could manipulate vehicles to ignore their basic programming and instead behave erratically. This risk only intensifies as more vehicles become autonomous, and could be especially troublesome in a completely networked system of AVs without human drivers (imagine the chaos that would ensue from erratic AVs and blocked roads in the aftermath of a terrorist attack or natural disaster). As is the case with any automated critical infrastructure, cybersecurity concerns are a valid point of contention (Bollier).
Similarly, there is a question of who owns the data that is produced and collected by AVs. Is it the owner of the vehicle, or the designers, who own the algorithm? If the AV uses geolocation to navigate its user around, and thus has records of where they have been, it most likely contains personal information (or can be used to ascertain sensitive information). Who gets access to this information? If the AV uses sensors, cameras, voice recognition, and other techniques to navigate on the road, it is likely that it will, advertently or inadvertently, capture data that may be personal in nature. Again, who owns and/or has access to this information? Will it be shared with third-party companies that could target users for marketing? Is it vulnerable to hacking from external actors? In the event of an accident, or if the vehicle somehow becomes involved in a criminal investigation, is the AV’s geolocation (and other) data subject to search and seizure, or is it protected under the Fourth Amendment? Existing federal privacy regulation for the most part does not address the issues that arise from AVs, meaning there are new questions that would have to be answered either pre-emptively by designers or post hoc by regulators (Navetta et al.).
Additionally, there is the concern that the use of autopilot technologies can actually diminish user skill. In the case of less-than-full automation, meaning there is the potential for scenarios in which humans must still intervene, humans may actually be less equipped to deal with the situation because they have come to rely on the AV. This phenomenon has been demonstrated with airplane pilots who “forget” how to deal with emergency situations because they’ve grown used to autopilot mode. It’s certainly possible this could also happen with AVs, and it could prove particularly problematic if drivers expect the AV to be able to handle itself and thus don’t pay attention (or are inebriated, or are in an emergency situation and thus cannot function). Again, this concern is applicable only to AVs that are not fully automated but still rely on occasional human intervention. And finally, there is the practical question of just when AVs will be considered “safe enough” by the public (or by regulating bodies). It’s clear that, in terms of reducing accidents, they’re already preferable to human drivers. But they can still be improved—where is the threshold for adoption? And, more fundamentally, is this threshold different for regulating bodies than it is for consumers? At what point will AVs gain the trust of the general population? This trust is required for AVs to be successful—otherwise, nobody will be willing to use them (whether purchasing for their own personal use, calling a self-driving Uber, or hopping on an autonomous bus).
There is also the tricky question of legal liability. If an AV gets into an accident, and it is determined that it is the fault of the vehicle, who is responsible for damages? Clearly the AV itself can’t be, even if it’s technically “at fault,” so this responsibility must be shifted onto a human. But should it fall on the individual user who owns or “operates” the AV, or the designers who created the algorithm? If users are held liable, they will be less likely to adopt the technology; if the designers/companies are held liable, they will be less likely to develop it in the first place. While AVs are generally much safer than human drivers, there still needs to be a precedent of liability, especially when accidents involve a combination of human and machine drivers. And it gets even messier with non-full automation—if an AV prompts a human passenger to intervene but they do not, and an accident occurs, is it the fault of the vehicle or the human? (Bollier). It’s possible to foresee alternatives to individual legal liability if a whole region or jurisdiction were to adopt AVs—for example, a city-funded trust that covered damages from AV accidents the very few times they occurred—but in a mixed human/machine driving world, it’s much more complex.
As with any “disruptive” technology, the issues AVs raise extend beyond just safety and logistics. AVs are guaranteed to have large-scale, systemic impact on a number of systems we take for granted as part of everyday life. First and foremost, the adoption of AVs is sure to create rapid, widespread unemployment for the many workers whose livelihoods are based on operating vehicles, whether for taxi companies, trucking companies, ride-hailing apps, public and private buses, or some other automotive function. There are 3.5 million truckers in the US alone who, if AVs were adopted in their industry, would lose their jobs—not to mention the additional 5.2 million non-drivers in the trucking industry whose jobs rely on this function (Bollier). This will also affect the millions of service jobs built around trucking—rest stops, motels, and restaurants that primarily serve truckers. Trucking is one of the largest categories of employment in the US, but there are millions of other driving-based jobs that would also be affected. Yes, new jobs with new functions may develop over time for these newly unemployed people to fill—but the rapidity with which huge numbers of people lose jobs will certainly have adverse economic effects. And any political disruption and insecurity associated with vast numbers of resentful, unemployed people leaving the workforce—increases in crime and violence, strains on families, and so on—will need to be addressed.
It’s also worth remembering Langdon Winner, who asks the eternally relevant question: “do artifacts have politics?” In this case, it is important to ask what kinds of politics AVs embody, and what forms of political organization they presuppose or facilitate (Winner). Most basically, AVs require a high degree of regulation, theoretically from an empowered and centralized government, though this is often the case with emerging technologies (whether the regulation occurs or not). But is it possible that AVs, in order to function optimally, would require a level of oversight, regulation, and organization that could be abused in other domains? To protect AVs from hacking or manipulation, they would require strong security apparatuses, which could present new privacy issues. And in order to fully maximize the usefulness, efficiency, and safety of AVs, it’s possible that governments—especially given the environmental disasters we’re likely to face in the coming years—might require that their jurisdictions institute machine-only driving. When every driver on a road is an AV, it’s possible to network them in ways that maximize their efficiency and safety, as the vehicles can communicate with one another and operate as a system. But this kind of regulation necessitates a strong central authority that may encourage, in general, more authoritarian forms of governance over democratic ones. When the stakes are higher, and the need for standardization and centralized decision-making greater, opportunities to transform democratic norms into more authoritarian ones present themselves more readily.
And what about the massive, wealthy companies that are able to innovate in this space? What kind of power do AVs give to Google, Uber, and Amazon? As previously mentioned, it’s unclear what happens to the data produced and used by AVs—in the hands of these companies, there could be serious violations of personal privacy. Or, imagine Google being contracted to develop a fleet of AVs for a city. Depending on how diffused AVs were in this particular city, Google could essentially be in control of the entire transportation system (governing traffic on both the information superhighway and the physical one, as it were). Or say Uber was given this opportunity. Could they apply their rating system to the overall transportation system, in partnership with other businesses or the government, so that users below a certain rating were unable to go certain places? Yes, this is speculative—but the point is that allowing private companies that already control significant aspects of our lives and hold troves of personal data to take on even more aspects runs the risk of giving up to these companies more than we bargained for.
There are also the ripple effects that many of these changes, even if small or incremental, have throughout various systems in society. AV deployment would change energy use, as increased efficiency either decreases demand (through saved usage) or increases it (as more people use vehicles, and more often); traffic patterns, as driving habits and styles shift towards those of machines; urban design, as roadways and public spaces are redesigned to accommodate AVs; real estate markets, as development patterns adapt to changes in mobility; and insurance, as AVs cause far fewer, and less costly, accidents. Such changes might not be as drastic as whole economic sectors collapsing, but they exert pressure in various forms on these institutions. And finally, though there is no shortage of valid arguments against America’s deeply ingrained car culture, the question remains of what kind of culture shock will ensue if AVs become the dominant (or only) mode of transportation. Though not the most pressing ethical question, technological disruption also brings rapid changes of identity that can be alienating and confusing.
IV. Conclusion
The point in raising these issues is not to say that technology is bad, or that change is always bad. It’s not even to say that companies and/or governments must address every single one of these issues—that is probably impossible. But when it comes to the big questions—how will we re-absorb millions of displaced workers effectively back into the economy?—or the inevitable questions—should an AV risk its passenger’s life or a pedestrian’s?—there is no other choice. “Disruption” doesn’t have to be a boogeyman, but it certainly is no idol. By definition it rewrites the rules of how society operates, and thus “disruptors” should expect that the institutions that make up everyday life will respond to its effects. AVs promise to do just this—fundamentally alter the nature of significant parts of our economic, political, social, and cultural systems. Some of these changes will be welcome, some less so, and many others somewhere in between. By understanding how these systems work, and how AVs are likely to interact with them, it is possible to anticipate the effects they are likely to have.
Works Cited
Bollier, David. “Artificial Intelligence Comes of Age: The Promise and Challenge of Integrating AI Into Cars, Healthcare and Journalism.” A Report on the Inaugural Aspen Institute Roundtable on Artificial Intelligence, The Aspen Institute Communications and Society Program, 2017.
Bonnefon, Jean-François, Azim Shariff, and Iyad Rahwan. “The Social Dilemma of Autonomous Vehicles.” Science, vol. 352, no. 6293, 24 June 2016, pp. 1573-1576.
“Federal Automated Vehicles Policy: Accelerating the Next Revolution in Roadway Safety.” National Highway Traffic Safety Administration, U.S. Department of Transportation, Sept. 2016.
Navetta, David, Boris Segalis, and Kris Kleiner. “The Privacy Implications of Autonomous Vehicles.” Data Protection Report, Norton Rose Fulbright Blog Network, 17 July 2017, https://www.dataprotectionreport.com/2017/07/the-privacy-implications-of-autonomous-vehicles/.
Ohlheiser, Abby. “A Guide to the Things Silicon Valley ‘Invented’ That Already Existed.” The Washington Post, WP Company, 14 Sept. 2017, www.washingtonpost.com/news/the-intersect/wp/2017/09/14/a-guide-to-the-things-silicon-valley-invented-that-already-existed/?utm_term=.c3378c4495b8.
“Road Crash Statistics.” Association for Safe International Road Travel, 2017, http://asirt.org/initiatives/informing-road-users/road-safety-facts/road-crash-statistics.
Winner, Langdon. “Do Artifacts Have Politics?” Daedalus, vol. 109, no. 1, Modern Technology: Problem or Opportunity?, 1980, pp. 121-136.