The Ethics Matrix

Suggested time: 35 min

In the video, you learned about algorithmic morality. In this activity, we will dive deeper into the topic by exploring a real-world incident in which a woman was killed by a self-driving car that Uber was testing on public roads. In its report on the incident, the National Transportation Safety Board blamed Uber's poor safety culture and took issue with the rules that govern the testing of self-driving cars on public roads. But it also noted that methamphetamines were found in the victim's system, which may have impaired her ability to react to the approaching vehicle.

Read the following story of the incident:

Elaine's Story:

Elaine Herzberg did not know that she was part of an experiment. She was walking her bicycle across the road at 10 p.m. on a dark desert night in Tempe, Arizona. Having crossed three lanes of a four-lane highway, Herzberg was run down by a Volvo SUV traveling at 38 miles per hour. She was pronounced dead at 10:30 p.m.

The next day, the officer in charge of the investigation rushed to blame the pedestrian. Police Chief Sylvia Moir told a local newspaper, “It’s very clear it would have been difficult to avoid this collision… she came from the shadows right into the roadway… the driver said it was like a flash.” According to the rules of the road, Herzberg should not have been there. Had she been at the crosswalk just down the road, things would probably have turned out differently.

Rafaela Vasquez was behind the wheel, but she wasn't driving. The car, operated by Uber, was in autonomous mode. Vasquez's job was to monitor the computer that was doing the driving and take over if anything went wrong. A few days after the crash, the police released a video from a camera mounted on the rear-view mirror. It showed Vasquez looking down at her knees in the seconds before the crash and for almost a third of the 21-minute journey that led up to it. Data taken from her phone suggested that she had been watching an episode of The Voice rather than the road. The police investigation calculated that, had Vasquez been looking at the road, she would have been able to stop more than 40 feet before impact.

Drivers and pedestrians make mistakes all the time. More than 90% of crashes are blamed on human error. The police report concluded that the crash had been caused by human frailties on both sides: Herzberg should not have been in the road; Vasquez should have seen the pedestrian, she should have taken control of the car, and she should have been paying attention to her job. In the crash investigation business, these are known as “proximate causes.” If we focus on them, we fail to learn from the novelty of the situation. Herzberg was the first pedestrian to be killed by a self-driving car. Of course, the Uber crash was not just a case of human error — it was also a failure of technology. [...]

When high-profile transport disasters happen in the U.S., the National Transportation Safety Board (NTSB) is called in. The NTSB is less interested in blame than in learning from mistakes to make things safer. Their investigations are part of the reason why air travel is so astonishingly safe. In 2017, for the first time, a whole year passed in which not a single person died in a commercial passenger jet crash. If self-driving cars are going to be as safe as airplanes, regulators need to listen to the NTSB. The board’s report on the Uber crash concluded that the car’s sensors had detected an object in the road six seconds before the crash, but the software “did not include consideration for jaywalking pedestrians.” The A.I. could not work out that Herzberg was a person and the car continued on its path. A second before the car hit Herzberg, the driver took the wheel but swerved only slightly. Vasquez only hit the brakes after the crash.

We don’t know what Herzberg was thinking when she set off into the road. Nor do we know exactly what the car was thinking: The decisions made by machine learning systems are often inscrutable. Roads are dangerous places, particularly in the U.S. and particularly for pedestrians. A century of decisions by policymakers and carmakers has produced a system that gives power and freedom to drivers. Tempe, part of the sprawling metropolitan area of Phoenix, is car-friendly. The roads are wide and neat, and the weather is good. It is ideally suited to testing a self-driving car. For a pedestrian, the place and its infrastructure can feel hostile. Official statistics bear this out. In 2017, Arizona was the most dangerous state for pedestrians in the U.S.

This text appeared on Medium's OneZero and is an excerpt from the book Who's Driving Innovation? by Jack Stilgoe.


Apply (15 min)

In Module 1 we spoke about stakeholders and values, using the sale of chicken noodle soup at a cafe as an example. Now, let’s think about stakeholders and their values in Elaine’s story.

In breakout groups of 3–4 people, take 15 minutes to fill out the matrix below, connecting stakeholders to their values. Fill in each cell with a statement of what the value means for that stakeholder.

Below is a partially completed matrix based on the scenario we just read. Complete the remaining stakeholders/values as a group.

                        | Safety                             | Convenience   | Accountability | ...
Elaine (the pedestrian) | Wants to be safe while on the road | Doesn't apply |                |
The driver              |                                    |               |                |
Uber                    |                                    |               |                |
...                     |                                    |               |                |


Discussion (10 min)

Return to the full group and spend 10 minutes exploring all of the breakout groups’ matrices.

Discuss:

  • Which stakeholders' values are most in conflict? How should we resolve these conflicts?
  • Which stakeholders are missing? Is it OK for us to speak on behalf of these missing voices?
  • How would you approach conversations with technologists and decision-makers to convince them to improve the design and use of the tool?