Mention self-driving cars to a group of friends at a social event, and you are sure to elicit a slew of opinions on a wide variety of related issues, from environmental concerns to the foundations of a universal morality. That's because the complete automation of cars would usher in a whole new era of transportation, one that could reshape habits, landscapes, and risk. The rules will have to be rewritten. And we're not yet sure who the authors of those rules are going to be.
Of course, there are plenty of discussions to be had about the logistics: safety, infrastructure, and technology. But the introduction of driverless cars into mainstream society also demands answers to some important moral questions that we have never had to grapple with in such a practical sense before. Whom should a vehicle protect: its own passengers, or pedestrians and people in other vehicles? What is the environmental impact of driverless cars? How might they change behavior in ways that worsen quality of life? Engineers working on driverless cars may be more interested in other potentially problematic aspects of the technology, such as the likelihood of a hacker gaining access to the system and putting passengers at risk. But could that be because engineers are not well positioned to answer the moral questions?
Researchers posed the safety prioritization question to participants in a study published in Science in June 2016. The vast majority of respondents said that driverless cars should prioritize the safety of the public, meaning that they would prefer the car to put its passengers' lives at risk rather than endanger pedestrians. On the surface, this seems the most logical solution.
But when these same participants were asked whether they would consider buying a car that didn't place their own safety first, many said they would not. Of course, this response is also understandable. Why would anyone spend tens of thousands of dollars on a product that clearly puts them at risk? At present, this discussion is left largely to engineers funded by corporate interests in the automobile industry, and it's clear where their preference lies: a company is not going to make cars that people are unwilling to buy.
According to Bryant Walker Smith at the Center for Internet and Society at Stanford Law School, 90 percent of all car accidents are due to human error. If that's true, it suggests that driverless cars, by automatically reacting to massive streams of data about vehicle locations, speeds, and trajectories, might drastically reduce the number of vehicular injuries and fatalities on our roads today. Are we morally obligated to introduce this technology to prevent most accidents, even if doing so pushes the people outside the car into a moral gray area? If so, widespread adoption seems to require that individual cars prioritize the safety of their own passengers, even at the cost of putting others at risk.
This is likely to have the most impact on the people with the least leverage in decision-making. Pedestrians and cyclists are potential losers in the debate over how to program driving algorithms, yet they will not be able to sway automakers whose products they don’t use.
This isn't the only moral question self-driving cars raise. The environmental aspect of this technology also presents a slew of issues, largely because it is not yet clear what the environmental impact of driverless cars will be. According to research conducted by the US Department of Energy, driverless car technology could reduce transportation-related energy consumption by up to 90 percent, or increase it by up to 200 percent. That's far too large a margin on which to base a well-informed decision about whether we should pursue this kind of technology.
The environmental impact of driverless cars may largely hinge on government policies. For example, in crowded cities like New York, Chicago, and Philadelphia, where parking is scarce, it may cost owners less to let their cars circle the block while they run errands than to find and pay for a parking space. Would this be allowed? Enacting worldwide (or even national or state-level) regulations to mitigate climate change is proving difficult enough already; just look at the Paris Climate Agreement.
There are other matters to take into account. How will driverless cars affect our lives on a day-to-day basis? On the surface, it may seem like they will add a measure of convenience. After all, if you don’t have to pay attention to the road, you can use your time in your vehicle to read, spend time with friends, or get work done. But this may also encourage drivers to live farther away from work and school, fracturing communities even more deeply than they already are, and increasing overall energy use tremendously. If sustainable, vibrant cities rely on density and walkability, then self-driving cars could be their undoing.
When it comes to automated vehicles, the questions we should be asking are larger and more significant than those regarding safety and feasibility. The conversation needs more perspective and a wider scope. Who benefits from this technology? Who is harmed? What are all of the imaginable outcomes? How does it change relationships, cityscapes, energy use?
These questions are far outside the responsibility and the expertise of any engineer working on driverless cars, regardless of how thoughtful or intelligent they may be. Ethicists, philosophers, and other thinkers outside the realm of science and technology need to be a part of the conversation. Is Ford or Toyota going to pay the head of a philosophy department to apply Kant's categorical imperative to the context of driverless vehicles? To say that's doubtful is a laughably huge understatement. But until someone does, we risk forging a future that no one really wants.