In a recent study, “Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars?,” Jean-François Bonnefon, Azim Shariff, and Iyad Rahwan surveyed people regarding their feelings about autonomous vehicles (AVs) and what kinds of decisions they would prefer those machines to make. Unsurprisingly, participants thought that AVs should make decisions that save the most lives, unless of course they themselves were the collateral damage.
This study is not the only experiment in ethics that is currently underway. The moment we put these vehicles on the roadways, releasing code-driven automobiles into the wild, we began a massive experiment in ethics. We often think of ethics as a philosophical discourse, removed from everyday life. In fact, this is the assumption that undergirds the study conducted by Bonnefon, Shariff, and Rahwan: they posed a number of hypotheticals and then drew broader conclusions about cultural anxieties surrounding AVs. But the construction of AVs and the writing of the code that determines how they operate is itself an experiment in ethics, one that writes an ethical program directly into computational machines.
Brett Rose of TechCrunch argues that the approach of Bonnefon, Shariff, and Rahwan is limited, and I would tend to agree, but not for the same reasons. Rose sees the discussion of ethical algorithms as a waste of finite resources:
“Don’t get me wrong, these hypotheticals are fascinating thought experiments, and may well invoke fond memories of Ethics 101’s trolley experiment. But before we allow the fear of such slippery-slope dilemmas to de-rail mankind’s progress, we need to critically examine the assumptions such models make, and ask some crucial questions about their practical value.”
For Rose, these models assume that we consciously weigh all of the possibilities prior to making a decision. From everyday experience (and especially behind the wheel of a car), we know this isn’t the case. Rose’s solution? Physics and hard data should always trump the squishiness of ethics:
“Even if a situation did arise in which an AV had to decide between potential alternatives, it should do so not based on an analysis of the costs of each potential choice, information that cannot be known, but rather based on a more objective determination of physical expediency.
This should be done by leveraging the computing power of the vehicle to consider a vast expanse of physical variables unavailable to human faculties, ultimately executing the maneuver that will minimize catastrophe as dictated by the firm laws of physics, not the flexible recommendations of ethics. After all, if there is time to make some decision, there is time to mitigate the damage of the incident, if not avoid it entirely.” (emphasis mine)
This attempt to separate physics from ethics is built on a dream (one that we are especially likely to find on sites like TechCrunch) that computational machines allow for an escape from the sloppy inefficiencies of the human (and, perhaps, even the humanities).
In Ethical Programs: Hospitality and the Rhetorics of Software, I argue that our dealings with software happen at multiple levels of rhetorical exchange. We argue about software, discussing how it can or should function. But we also argue with software, by using computation to make arguments, and in software, becoming immersed in computational environments that shape and constrain possibilities. The paper by Bonnefon, Shariff, and Rahwan sits primarily at the level of arguing about software, and it risks treating ethics as something rational and deliberative, which we know leaves out most of the ethical decisions we make each day. We decide in the face of the undecidable; we execute ethical programs (computational or otherwise) without the luxury of walking through all of the possibilities.
Rose’s argument is that arguing about software is a waste of resources and that we should instead focus our time and energy on “mankind’s progress.” Such an approach ignores that an AV’s algorithm will have to make decisions about what data to include or exclude (and how to prioritize that data) as it determines whether to accelerate, brake, or turn. Rose argues that AV algorithms should be focused on “physical expediency” rather than the endless regress of ethics. But his very use of the word “should” is a signal that this approach is not confined to the “firm laws of physics” but is instead already caught up in ethical questions.
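To make this concrete, consider a deliberately toy sketch (every maneuver, risk figure, and weight here is invented for illustration and drawn from no actual AV system): even a decision procedure that claims only to “minimize catastrophe” must first encode what catastrophe means, and that encoding is an ethical choice.

```python
# Hypothetical illustration: a "physics-only" maneuver selector still
# needs a cost function, and writing that function means deciding what
# counts as catastrophe -- an ethical program written into code.

def maneuver_cost(outcome, weights):
    """Score a predicted outcome. The weights are not dictated by
    physics; a programmer must choose them."""
    return (weights["occupant_risk"] * outcome["occupant_risk"]
            + weights["pedestrian_risk"] * outcome["pedestrian_risk"]
            + weights["property_damage"] * outcome["property_damage"])

# Predicted outcomes for three candidate maneuvers (invented numbers).
outcomes = {
    "brake":  {"occupant_risk": 0.2, "pedestrian_risk": 0.5, "property_damage": 0.1},
    "swerve": {"occupant_risk": 0.6, "pedestrian_risk": 0.1, "property_damage": 0.4},
    "stay":   {"occupant_risk": 0.1, "pedestrian_risk": 0.9, "property_damage": 0.0},
}

# Two different weightings -- two different ethical programs.
protect_occupants = {"occupant_risk": 1.0, "pedestrian_risk": 0.5, "property_damage": 0.1}
protect_pedestrians = {"occupant_risk": 0.5, "pedestrian_risk": 1.0, "property_damage": 0.1}

def choose(weights):
    """Pick the maneuver with the lowest weighted cost."""
    return min(outcomes, key=lambda m: maneuver_cost(outcomes[m], weights))

print(choose(protect_occupants))    # -> brake
print(choose(protect_pedestrians))  # -> swerve
```

Swap one set of weights for another and the “objectively” optimal maneuver changes. The physics is identical in both runs; what differs is the ethical program the code executes.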