Experiments in Ethics: The Ethical Programs of Self-Driving Cars

In a recent study, “Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars?” Jean-François Bonnefon, Azim Shariff, and Iyad Rahwan surveyed people about their feelings toward autonomous vehicles (AVs) and about the kinds of decisions they would prefer those machines to make. Unsurprisingly, participants thought that AVs should make decisions that save the most lives, unless, of course, they themselves were the collateral damage.

This study is not the only experiment in ethics that is currently underway. The moment we put these vehicles on the roadways, releasing code-driven automobiles into the wild, we began a massive experiment in ethics. We often think of ethics as a philosophical discourse, removed from everyday life. In fact, this is the assumption that undergirds the study conducted by Bonnefon, Shariff, and Rahwan: they posed a number of hypotheticals and then drew broader conclusions about cultural anxieties surrounding AVs. But the construction of AVs and the writing of the code that determines how they operate is itself an experiment in ethics, one that writes an ethical program directly into computational machines.

Brett Rose of TechCrunch argues that the approach taken by Bonnefon, Shariff, and Rahwan is limited, and I tend to agree, though not for the same reasons. Rose sees the discussion of ethical algorithms as a waste of finite resources:

“Don’t get me wrong, these hypotheticals are fascinating thought experiments, and may well invoke fond memories of Ethics 101’s trolley experiment. But before we allow the fear of such slippery-slope dilemmas to de-rail mankind’s progress, we need to critically examine the assumptions such models make, and ask some crucial questions about their practical value.”

For Rose, these models assume that we consciously weigh all of the possibilities prior to making a decision. From everyday experience (and especially behind the wheel of a car), we know this isn’t the case. Rose’s solution? Physics and hard data should always trump the squishiness of ethics:

“Even if a situation did arise in which an AV had to decide between potential alternatives, it should do so not based on an analysis of the costs of each potential choice, information that cannot be known, but rather based on a more objective determination of physical expediency.

“This should be done by leveraging the computing power of the vehicle to consider a vast expanse of physical variables unavailable to human faculties, ultimately executing the maneuver that will minimize catastrophe as dictated by the firm laws of physics, not the flexible recommendations of ethics. After all, if there is time to make some decision, there is time to mitigate the damage of the incident, if not avoid it entirely.” (emphasis mine)

This attempt to separate physics from ethics is built on a dream (one that we are especially likely to find on sites like TechCrunch) that computational machines allow for an escape from the sloppy inefficiencies of the human (and, perhaps, even the humanities).

In Ethical Programs: Hospitality and the Rhetorics of Software, I argue that our dealings with software happen at multiple levels of rhetorical exchange. We argue about software, discussing how it can or should function. But we also argue with software, by using computation to make arguments, and in software, becoming immersed in computational environments that shape and constrain possibilities. The paper by Bonnefon, Shariff, and Rahwan sits primarily at the level of arguing about software, and it risks treating ethics as something rational and deliberative, which we know leaves out most of the ethical decisions we make each day. We decide in the face of the undecidable; we execute ethical programs (computational or otherwise) without the luxury of walking through all of the possibilities.

Rose's argument is that arguing about software is a waste of resources and that we should instead focus our time and energy on “mankind's progress.” Such an approach ignores that an AV’s algorithm will have to make decisions about what data to include or exclude (and how to prioritize that data) as it determines whether to accelerate, brake, or turn. Rose argues that AV algorithms should be focused on “physical expediency” rather than the endless regress of ethics. But his very use of the word “should” is a signal that this approach is not confined to the “firm laws of physics” but is instead already caught up in ethical questions.
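To make this concrete, here is a minimal, hypothetical sketch of the kind of “physics-only” maneuver selector Rose seems to imagine. Nothing here is drawn from an actual AV codebase; the Outcome fields, the weights, and the numbers are all invented for illustration. The point is that even a deliberately “objective” routine has to decide which harms get measured and how much each one counts, and those decisions are an ethical program written directly into the code.

```python
# Hypothetical sketch: a "physics-only" maneuver selector for an AV.
# Even here, the choice of what to measure and how to weight it
# (occupant risk vs. pedestrian risk vs. impact speed) is an ethical
# program written into the code, not a law of physics.

from dataclasses import dataclass


@dataclass
class Outcome:
    maneuver: str           # e.g. "brake", "swerve_left", "swerve_right"
    collision_speed: float  # predicted impact speed in m/s (0.0 = no impact)
    occupant_risk: float    # estimated probability of occupant injury
    pedestrian_risk: float  # estimated probability of pedestrian injury


# These weights are exactly what "physical expediency" cannot supply:
# physics can predict outcomes, but ranking them requires a prior
# judgment about whose harm counts for how much.
WEIGHTS = {"occupant_risk": 1.0, "pedestrian_risk": 1.0, "collision_speed": 0.1}


def score(outcome: Outcome) -> float:
    """Lower is 'better' -- but deciding what 'better' means is already ethics."""
    return (WEIGHTS["occupant_risk"] * outcome.occupant_risk
            + WEIGHTS["pedestrian_risk"] * outcome.pedestrian_risk
            + WEIGHTS["collision_speed"] * outcome.collision_speed)


def choose_maneuver(predicted_outcomes: list[Outcome]) -> Outcome:
    """Pick the maneuver that 'minimizes catastrophe' under the chosen weights."""
    return min(predicted_outcomes, key=score)


if __name__ == "__main__":
    # Invented numbers: braking still strikes a pedestrian at reduced speed;
    # swerving spares the pedestrian but raises the occupant's risk.
    options = [
        Outcome("brake", collision_speed=8.0, occupant_risk=0.05, pedestrian_risk=0.6),
        Outcome("swerve_left", collision_speed=12.0, occupant_risk=0.4, pedestrian_risk=0.0),
    ]
    print(choose_maneuver(options).maneuver)
```

Change the weights and the “objectively” selected maneuver changes with them; the ethical question has not been escaped, only relocated into a constant.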

Comments

Hi, Jim: I’m intrigued by your statement that the construction of AVs and the writing of the code are experiments in ethics. And I appreciate your prompting us to reflect on the quality of those experiments.

I’m a big fan of thought experiments (TEs) like the Trolley Problem referenced in Bonnefon, Shariff, and Rahwan’s work. With the help of student clicker technology, TEs have supported attempts to make large lectures more interactive: if you were the trolley driver whose vehicle’s brakes have failed, would you mow down 5 people on track A or mow down 1 person on track B? Insert input from audience…. What if you were not the trolley driver but a bystander next to the track switch? Insert input from audience….

In the right hands, TEs make the How-to-Be-We almost magical. It’s as if the philosopher, following the example of Bullwinkle Moose, pulls concepts (duty, responsibility, rights…) unexpectedly out of her hat, which, in this case, is the audience engaged in the experiment.

Your post also prompts me to think that there is a machinic quality to thought experiments, which is well worth exploring. Thought experiments generate procedures, and, as philosophers like Foot and Thomson use/develop the Trolley Problem, they are tools (if not situational surrogates) that help a certain kind of philosopher to track down intuitions and thematize the principles that support those intuitions. TEs promise that there is some WE out/in there and that this WE is more or less in agreement with itself once we have properly identified the principles that inform our (principled) behavior. These principles can then be collected/grouped as concepts, and/or used to test principles generated by other means. And one TE generates, and is substituted by, others: Foot, for example, pairs the Trolley Problem with a Transplant Problem (1 person has organs; 5 people need them).
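To make that machinic quality a bit more literal, here is a toy sketch, purely illustrative and drawn from no one’s actual classroom or code: the Scene fields, the vote lists, and the poll helper are all invented. It treats a TE as a parameterized scene that gets run on an audience, with the pattern of intuitions as its output.

```python
# Illustrative only: a thought experiment treated as a procedure.
# A "scene" is a template with swappable roles and counts; running it
# on an audience is meant to surface a principle.

from dataclasses import dataclass


@dataclass
class Scene:
    agent: str              # "driver", "bystander", "surgeon"
    action: str             # "divert the trolley", "harvest the organs"
    harmed_if_act: int      # how many are harmed if the agent acts
    harmed_if_refrain: int  # how many are harmed if the agent does nothing


def poll(scene: Scene, clicker_votes: list[bool]) -> float:
    """Fraction of the audience saying the agent should act in this scene."""
    return sum(clicker_votes) / len(clicker_votes)


# Same numbers, different surface details: the Trolley and Transplant
# scenes are two runs of the same machine.
trolley = Scene("bystander", "divert the trolley", harmed_if_act=1, harmed_if_refrain=5)
transplant = Scene("surgeon", "harvest the organs", harmed_if_act=1, harmed_if_refrain=5)

# Invented vote counts standing in for clicker input. The divergence between
# the two results is the "output" the philosopher then explains with a
# principle (the doctrine of double effect, for instance).
print(poll(trolley, [True, True, True, False]))       # most say act
print(poll(transplant, [False, False, False, True]))  # most say refrain
```

Swap the surface details while holding the numbers fixed and you have Foot’s Transplant Problem; it is the procedure, not the scenery, that does the work.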

I do have concerns about TEs as a mode of ethical inquiry: when connections are only mapped as principles, it’s too easy to exchange the scene of the trolley with the scene of the hospital with the scene of the driver with the scene of the manufacturer. We then wonder whether a consumer-self would buy this AV. Would a consumer-self be willing to pay for an AV that might well decide that it’s better to effect the death of the 1 consumer-self behind the wheel than to hit 5 pedestrians? Well, there’s the marketing pitch: “Buy inFinitude®’s new AV. It’s for the Greater Good.”

The basic, if unspoken, move of the Trolley Experiment is to suggest that there are conditions under which we can/should (or most people think we can/should) limit our responsibility to others. In particular, the trolley problem is set up to determine whether or when we think our responsibilities to more others outweigh our responsibility to fewer others. Even when some Individual Burdened with that Awful Decision (IBAD) has made the choice to redirect or not redirect the trolley, there are still limits to this person’s responsibility to these others. For example, IBAD would not be responsible for providing CPR or counseling or even alerting those who might provide those services. IBAD would not be required to provide shelter, clothing, food, health care, or an education to any of those whom IBAD has “saved” or whom IBAD has not let die. We are hospitable, but there are limits.

So, let’s try to map more than IBAD’s non-responsiveness. That’s one of the many helpful paths your post invokes for me. I think you're asking us to chart more: for example, to chart the machines that are responsive to, and are sometimes arguing with, the AV. Rather than using TE to identify the coordinates for producing and substituting narratives/scenes, then layering those coordinates onto other situational planes, I wonder if there is a way to map or follow the connections and see how and where the thought experiments emerge and with what they do or don’t connect.

If we did, I suspect, we would see other machines at work, producing th-oughts. And I'm not certain there would be much of a place for us to identify with one of the characters or agency positions in a TE. We would become part of the work. When the machinic opens onto a more inclusive mapping of the potentiality for machinic response, the work is both the input and the output for the experiment, as David Hockney said of Tim’s Vermeer.

Do you think the machinic orients us toward a new way of mapping the US that is more than a chart of IBAD’s non-responsiveness?

 

Bonnefon, Jean-François, Azim Shariff, and Iyad Rahwan. “Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars?” arXiv:1510.03346 (2015). http://arxiv.org/abs/1510.03346

Foot, Philippa. “The Problem of Abortion and the Doctrine of the Double Effect.” Oxford Review 5 (1967): 5-15.

Thomson, Judith Jarvis. “The Trolley Problem.” The Yale Law Journal 94.6 (1985): 1395-1415.

Thanks for this excellent response, David. Here are some (disconnected?) thoughts:
 
"there is a machinic quality to thought experiments"
 
Yes, this is exactly what I'm thinking. There is also a machinic quality to any ethical decision, even if those machinations are not entirely understandable. We often associate "machinic" with "knowable" or "predictable." But I'm always reminded of Turing's quote: "Machines take me by surprise with great frequency." So, the idea is to attempt to understand how these procedures work, even though those attempts will always be experimental and partial. Further, those experiments would have to grant that there is no final answer to these questions of responsibility, especially if we grant that we're talking about response-ability...the ability to respond, the exposedness to others that exposes us before we get to decide what we are or are not "required" to do. Which gets us to this...
 
"For example, IBAD would not be responsible for providing CPR or counseling or even alerting those who might provide those services. IBAD would not be required to provide shelter, clothing, food, health care, or an education to any of those whom IBAD has “saved” or whom IBAD has not let die. We are hospitable, but there are limits."
 
As you note, there are limits to responsibility and hospitality, and those limits are exactly what points back to the infinite question of hospitality. Am I (as the IBAD) required to provide CPR, food, clothing, shelter, etc.? Perhaps not, but I am respons-able to these responsibilities. This means that I am affected by these exigencies, even though I will inevitably draw lines at some point. In my book, I label the identification of these limits as "ethical programs," procedures for writing the laws of hospitality in the face of the infinite Law of hospitality (which welcomes others before I ever get to choose).
 
But as you note, even saying that the IBAD writes these ethical programs doesn't quite work, because the ethical situation involves so many other machines (the IBAD is just one machine...or one component in the machine):
 
"Rather than using TE to identify the coordinates for producing and substituting narratives/scenes, then layering those coordinates onto other situational planes, I wonder if there is a way to map or follow the connections and see how and where the thought experiments emerge and with what they do or don’t connect."
 
Yes. Perfect. Ethical programs are both computational and linguistic; they involve humans and machines, and they emerge in specific rhetorical situations. This means mapping the specific actors and networks without falling back on any easily identifiable, generalizable program.
 
All of this suggests to me that autonomous vehicles will never really exist as "autonomous," because there will always be a massive assemblage of people, objects, machines, etc. at the scenes of ethical decision. The dream that AVs will solve all of these problems is perhaps associated with the dream of a perfectly theorizable ethical program, one that solves all of the problems ahead of time. Such a dream sees ethics as arhetorical, and I think it is bound to fail.
