Self-Driving, Death, and Questions

A journal of the unfolding aftermath of Uber’s fatal self-driving accident

Ken Ryu
11 min read · Mar 20, 2018

Tuesday 3/20

A pedestrian in Arizona was killed when she was hit by an Uber self-driving vehicle. The Twitter post below is highly illuminating regarding the road where the pedestrian was hit. https://twitter.com/EricPaulDennis/status/975891554538852352/photo/1?ref_src=twsrc%5Etfw&ref_url=https%3A%2F

What we know:

  • The vehicle had a backup driver on board.
  • The 49-year-old pedestrian was walking a bike across the road Sunday night around 10 p.m. when she was hit. https://www.theverge.com/2018/3/19/17139518/uber-self-driving-car-fatal-crash-tempe-arizona
  • The pedestrian was not in an authorized crosswalk, but the road she crossed has a confusing path leading to the unauthorized crossing area where she was hit (see the photo below from EricPaulDennis’ Twitter post linked above).
  • The car was traveling northbound on the road.
This map appears to be too far south (see the NY Times diagram below), but kudos to EricPaulDennis for recognizing the multitude of walking paths leading to “non-authorized” crossings.
  • The car was traveling 37–40 mph in autonomous mode in a 45 mph zone.
  • Uber has suspended its self-driving trials.
  • The vehicle did not show signs of slowing down before hitting the pedestrian.

Questions:

  • Will Uber release the LiDAR and camera data and footage to the public?
  • Did the nighttime conditions contribute to the accident? LiDAR uses infrared laser pulses, which should be effective in low-light conditions.
  • Which part of the vehicle struck the pedestrian?
  • Did the vehicle’s sensors detect the pedestrian and ignore the signals?
  • Did the fact that the pedestrian was walking a bike affect the vehicle’s identification of her?
  • Was the pedestrian traveling from east to west, or west to east?
  • How many other cars were traveling on the road at the time of the accident?
  • Would an alert human driver have reacted differently?

To give a point of reference, 97 pedestrians in Arizona died from January to June 2016. https://www.npr.org/2017/03/30/522085503/2016-saw-a-record-increase-in-pedestrian-deaths

The Uber fatality occurred in Tempe, Arizona (2016 population 182,498, approximately 2.6% of Arizona’s population). https://www.google.com/search?q=tempe+population+2016&ie=utf-8&oe=utf-8&client=firefox-b-1-ab

Arizona’s statewide population was 6.909 million in 2016. https://www.google.com/search?client=firefox-b-1-ab&ei=s3-xWq2IG6TMjwT-qK_gBA&q=arizona+population+2016&oq=arizona+population+2016&gs_l=psy-ab.3..0j0i7i30k1l3j0i5i30k1l5.62194.63680.0.63834.7.7.0.0.0.0.154.728.0j6.6.0....0...1.1.64.psy-ab..1.6.728...0i7i5i30k1j0i7i5i10i30k1.0.TO72IKQTuGc

Uber’s post-accident handling will serve as an important precedent for such fatal and catastrophic self-driving accidents.

With all the data available from the cameras and sensors, how much of this data should the public be privy to?

The risk is that the data may reveal limitations and errors. On the other hand, an open, public release of post-fatality accident data would:

a) give the public more confidence in the viability of the technology,

b) build trust between self-driving operators and the public, and

c) force self-driving software and hardware manufacturers to address limitations and bugs with extreme expediency.

The future of self-driving adoption hangs in the balance. Headline-grabbing accidents feed the fears of a concerned public. Self-driving proponents can publish a mountain of statistics to support the assertion that self-driving vehicles are safer than those piloted by humans, but that is not the measure by which the public will judge these vehicles. Instead, the response in the aftermath of crisis events like this one will determine public sentiment.

This fatality will tap the brakes on the self-driving movement. Will it lead to a temporary full stop, or will the industry be able to re-accelerate? Uber’s handling will have much to say on the matter.

Update: Wed 3/21

The NY Times published a story on the crash. https://www.nytimes.com/interactive/2018/03/20/us/self-driving-uber-pedestrian-killed.html

This article provides some informative graphics showing the pedestrian’s route, the location of the crash, and the location of the victim after impact.

The crossing arrow seems too far north. It is more likely that the pedestrian crossed via the path hypothesized by EricPaulDennis.

As you can see in the photo above, this stretch of N. Mills Road has 5 northbound lanes in the section where the accident happened.

We can also see that the pedestrian crossed from west to east. She crossed 3 lanes before being hit in the 4th lane from the left. This looks like a significant failure of the automation software: failing to identify a pedestrian crossing a one-way street, with 3 full lanes in which to detect her, seems negligent.

40 mph = 17.9 meters per second or 19.6 yards per second (58.8 feet per second).

The average car lane is about 4 yards wide. A typical walking pace is around 3.1 mph; the estimate below assumes a brisker crossing pace of about 3.1 yards per second (roughly 6.3 mph).

By the time the pedestrian was hit, she had likely walked about 14 yards. At that pace, she would have been in the road for roughly 4.5 seconds before being struck. Assuming the car was traveling at a steady 40 mph, it would have been about 88 yards (264 feet) from the accident site when she first entered the road.

A human driver traveling at 40 mph requires approximately 139 feet (46.3 yards) to come to a complete stop: roughly 80 feet of braking distance plus 59 feet of perception/reaction distance. If we set aside everything but the 80 feet (26.7 yards) of braking, then, based on the assumptions above, the vehicle had 61.5 yards (184.5 feet) of headway once the pedestrian entered the road. That translates to about 3.1 seconds in which to perceive the pedestrian and commit to a full stop.
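For anyone who wants to check or tweak the arithmetic, here is a minimal Python sketch of the calculation. The crossing pace, distance walked, and stopping-distance figures are the assumptions stated above, not measurements from the crash, and small differences from the numbers in the text are just rounding.

```python
# Back-of-envelope check of the figures above. Every input is an
# assumption stated in the text, not a measured value from the crash.

MPH_TO_FPS = 5280 / 3600              # 1 mph = ~1.47 ft/s

car_speed_fps = 40 * MPH_TO_FPS       # ~58.7 ft/s
walk_speed_fps = 3.1 * 3              # assumed 3.1 yd/s crossing pace, in ft/s
crossing_distance_ft = 14 * 3         # ~14 yards walked before impact

# Time the pedestrian spent in the road before impact.
crossing_time_s = crossing_distance_ft / walk_speed_fps       # ~4.5 s

# How far away the car was when she stepped into the road.
car_distance_ft = car_speed_fps * crossing_time_s             # ~265 ft (text rounds to 264)

# Rule-of-thumb stopping figures at 40 mph used in the text.
braking_ft = 80
perception_reaction_ft = 59
total_stopping_ft = braking_ft + perception_reaction_ft       # ~139 ft

# Headway available for perception before braking had to begin.
margin_ft = car_distance_ft - braking_ft                      # ~185 ft
margin_s = margin_ft / car_speed_fps                          # just over 3.1 s

print(f"crossing time:          {crossing_time_s:.1f} s")
print(f"car distance at entry:  {car_distance_ft:.0f} ft")
print(f"full stopping distance: {total_stopping_ft} ft")
print(f"perception margin:      {margin_ft:.0f} ft ({margin_s:.2f} s)")
```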

Let’s be careful not to oversimplify the stop/slow decisions an automated vehicle needs to make. Sensors are only so reliable, and an autonomous vehicle that slows or slams on the brakes at every potential hazard creates its own danger. If a jogger is running on a path near the curb, what is the vehicle to do? Should it worry that the jogger may suddenly jump into its lane? Should it slow down significantly to hedge the risk? The problem with that approach is that the driver behind the self-driving vehicle will not be expecting a sudden reduction in speed and may rear-end it.
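As a purely hypothetical illustration of that trade-off (Uber’s actual planning logic has not been published), a planner might gate its reaction on an estimated time-to-collision; wherever the thresholds are set, there is a tension between missed hazards and nuisance braking:

```python
# Hypothetical illustration only: a simple time-to-collision (TTC) gate.
# Nothing here reflects Uber's actual planning logic, which is not public.

def time_to_collision_s(gap_ft, closing_speed_fps):
    """Seconds until impact if neither party changes speed or path."""
    return float("inf") if closing_speed_fps <= 0 else gap_ft / closing_speed_fps

def decide(gap_ft, closing_speed_fps, hard_brake_below_s=2.0, ease_off_below_s=4.0):
    ttc = time_to_collision_s(gap_ft, closing_speed_fps)
    if ttc < hard_brake_below_s:
        return "hard brake"            # a looser threshold means more nuisance stops
    if ttc < ease_off_below_s:
        return "ease off, prepare to brake"
    return "maintain speed"

# A pedestrian 150 ft ahead of a car closing at ~59 ft/s (40 mph):
print(decide(150, 59))                 # "ease off, prepare to brake"
```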

The take-action/ignore decision is complex. The details are still coming out, but based on the early information, this accident is worrisome. One BIG assumption in the calculation and diagrams above is the pedestrian’s pace. If she had sprinted across the road, she could have cut her crossing time in half or more.

Assuming the pedestrian crossed at the pace above without pausing, the series of diagrams below matches the crossing speed against the speed of the vehicle to approximate what may have happened (a short calculation sketch follows the captions).

Diagram 1: The pedestrian enters the road. The vehicle is 264 feet from the accident site.
Diagram 2: 1 second later: The pedestrian is close to crossing the first lane. The vehicle is 205 feet from the accident site.
Diagram 3: 2 seconds from the 1st diagram, the pedestrian is in the center of the 2nd from the left lane. The vehicle is now 146 feet from the crash site.
Diagram 4: 3 seconds from the 1st diagram, the pedestrian is in the 3rd lane from the left. The vehicle is now 87 feet from the crash site. The vehicle needs to slam on the brakes NOW to stop in time.
Diagram 5: 4.5 seconds from the 1st diagram, the pedestrian is fatally struck by the vehicle. The pedestrian is thrown or dragged 50–70 feet forward to the site where the body was seen.
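The per-second distances in the captions above follow directly from the same assumed speeds; a short loop reproduces them:

```python
# Reproduces the per-second distances in Diagrams 1-5 from the same assumptions.
CAR_FPS = 40 * 5280 / 3600      # ~58.7 ft/s
TIME_IN_ROAD_S = 4.5            # assumed seconds from entering the road to impact

for t in (0, 1, 2, 3, 4.5):
    distance_ft = CAR_FPS * (TIME_IN_ROAD_S - t)
    print(f"t = {t:>3} s: car is {distance_ft:3.0f} ft from the impact point")
# t = 0 s -> ~264 ft, 1 s -> ~205 ft, 2 s -> ~147 ft, 3 s -> ~88 ft, 4.5 s -> 0 ft
```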

The diagram is based on the Google Maps image and the NY Times diagram, but it uses the crossing-path theory from EricPaulDennis’ Twitter post. It seems plausible that the impact location was 50–70 feet south of where the body was seen post-impact.

Update: Thur 3/22

A video of the accident was released yesterday. It shows the view from the car’s forward-facing camera and the reaction of the human backup driver.

Key findings from the camera footage:

  1. It was dark outside.
  2. The backup driver was not fully paying attention, and appeared to be looking down at a phone periodically. Not surprisingly, the backup driver appears to have been trusting the self-driving technology.
  3. The street was not busy. As far as the footage revealed, the self-driving car appeared to be the only car on the section of the road where the accident took place.
  4. The pedestrian was walking at a modest pace.
  5. The pedestrian was wearing a dark top and blue jeans, and was pushing a red or pink bike.

  6. The crossing does appear to follow the path leading from the walkway shown in the diagrams above from the 3/21 update.

  7. The pedestrian only turns to face the camera at the last split second. She seems not to have realized the car was coming.

If we view the footage frame by frame, we can better understand the visibility issues and the fact that the pedestrian was unaware of the car (see below).

SCREEN 1: The headlights of the car are just beginning to illuminate the pedestrian’s shoes. NOTE: Two broken lane dividers (white dashes) lie between the car and the pedestrian. Based on the diagrams from the 3/21 update, a distance of roughly 80–100 feet can be estimated.
SCREEN 2: Now only one white dash is between the car and the pedestrian, so the car is only 50–75 feet away. The jeans and the shoes are now visible, but she is not facing the camera, and her black top is still blending into the darkness. She is clearly in motion, walking from left to right.
SCREEN 3: The car has not slowed down and is only 10–20 feet away. She is finally beginning to face the camera. ABOUT ONE FULL SECOND has passed from SCREEN 1 to this frame.
SCREEN 4: She finally turns towards the camera.

Judging by the lane lines, the headlights illuminate approximately 100–120 feet ahead of the vehicle, and the pedestrian only appears more obvious at around 70–80 feet (in SCREEN 2).
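The dash-based estimate in SCREEN 1 can be sanity-checked against typical US lane-marking dimensions, commonly a 10-foot painted dash followed by a 30-foot gap; the actual dimensions on this road are an assumption here.

```python
# Rough sanity check of the SCREEN 1 distance estimate using lane markings.
# Assumes a common US pattern of a 10 ft painted dash and a 30 ft gap;
# the actual marking dimensions on this road are unknown.
DASH_FT, GAP_FT = 10, 30
cycle_ft = DASH_FT + GAP_FT                            # 40 ft per dash/gap cycle

visible_dashes = 2                                     # dashes between car and pedestrian
low_estimate_ft = visible_dashes * cycle_ft            # 80 ft
high_estimate_ft = (visible_dashes + 0.5) * cycle_ft   # 100 ft, if a partial cycle is hidden
print(low_estimate_ft, high_estimate_ft)               # 80 100.0
```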

The car was recorded traveling at 37 mph. A human driver would need about 1–1.5 seconds to react. If you watch the video in real time, you realize how suddenly the accident happens. The likelihood that a human driver would have been able to brake in time does not seem high. However, unlike the self-driving car, a human would likely have taken some action, braking hard or swerving, even if they were too late to avoid the pedestrian altogether.
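Using the same rule-of-thumb braking figures as in the 3/21 update, and assuming the pedestrian only became visible at roughly headlight range, a quick calculation shows why a full stop by a human driver was unlikely (all inputs are assumptions, not measurements):

```python
# Could an attentive human driver have stopped in time? Assumed figures only.
MPH_TO_FPS = 5280 / 3600
speed_fps = 37 * MPH_TO_FPS                 # ~54 ft/s

reaction_s = 1.25                           # assumed 1-1.5 s perception/reaction
reaction_ft = speed_fps * reaction_s        # ~68 ft travelled before braking starts

# Scale the ~80 ft braking distance at 40 mph by speed squared to get ~68 ft at 37 mph.
braking_ft = 80 * (37 / 40) ** 2

stopping_ft = reaction_ft + braking_ft      # ~136 ft in total
sight_distance_ft = 100                     # pedestrian visible roughly 80-100 ft ahead

print(f"needed ~{stopping_ft:.0f} ft to stop vs. ~{sight_distance_ft} ft of visibility")
# A full stop was unlikely, but hard braking or swerving could have reduced the impact.
```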

This tragic event is the result of a series of unfortunate circumstances culminating in a fatal collision.

  1. The crosswalk layout is misleading and makes crossing at this section of the road inviting.
  2. The pedestrian had to cross 4 lanes to get from one side of the street to the other, and when she began crossing, she did not realize the car was coming her way. NOTE: In the diagram below, in the area where she crossed, the road widens from 2 to 4 lanes. With the trees and the extra distance, this crossing is particularly dangerous.

  3. Did the widening from 2 to 4 lanes affect the computer’s object recognition? If the computer was only considering the two right lanes for possible hazards, the system may have ignored the data from the two new left lanes where the pedestrian was crossing. This would have reduced the computer’s time to respond and to track the pedestrian long enough for proper identification.
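To make the hypothesis in point 3 concrete, here is a deliberately simplified sketch, not Uber’s software, of how lane-scoped filtering could cause a detection in a newly added lane to be ignored:

```python
# Hypothetical illustration of the "2-to-4 lane" hypothesis in point 3.
# This is NOT Uber's actual software; it only shows how lane-scoped
# filtering could delay tracking of a pedestrian in a newly added lane.
from dataclasses import dataclass

@dataclass
class Detection:
    lane_index: int          # 0 = leftmost of the four lanes (where she entered)
    label: str

def relevant_detections(detections, known_lanes):
    """Keep only detections that fall inside lanes the planner currently models."""
    return [d for d in detections if d.lane_index in known_lanes]

detections = [Detection(lane_index=0, label="pedestrian walking a bike")]

# If the planner still models only the two right-hand lanes (indices 2 and 3)...
print(relevant_detections(detections, known_lanes={2, 3}))        # [] -> ignored

# ...versus modeling all four lanes after the road widens.
print(relevant_detections(detections, known_lanes={0, 1, 2, 3}))  # pedestrian tracked
```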

View in the daylight. NOTE: The pole on the right is the bright light in the top right-hand corner of the SCREEN 1 screenshot above.
This view shows the road widening from 2 to 4 lanes.

What a difference between the daytime and nighttime photos. LiDAR can detect objects in the dark. It would be useful to learn:

a) What does the LiDAR data show?

b) Was the LiDAR data dismissed as not being a hazard?

Nighttime problem?

Would this accident have happened in daylight? Under the same conditions in daylight, it likely does not. Whether the car was driven by a human or by self-driving technology, the pedestrian would have been identified earlier, with enough time for the driver or the computer to brake or swerve away.

It is not good enough to state that a human driver would more likely than not have hit the pedestrian just as the self-driving car did. The complete lack of braking indicates a detection or logic failure. In theory, the self-driving car should far outperform a human driver in nighttime conditions, since its LiDAR can see in the dark and a human cannot.

Object permanence and depth perception

In some ways this accident has parallels to the fatal Tesla Autopilot crash in Florida. In that case, the car did not recognize a flatbed truck crossing the road as a hazard. Instead, the system dismissed the truck, apparently mistaking it for an overpass.

Self-driving systems do well when following other cars, but weak depth perception and a lack of object permanence can lead the computer to wrong decisions.

The computer needs logic that detects movement within the frame of the road and monitors those objects accordingly.
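As a sketch of what object permanence could mean in practice (again hypothetical, not any vendor’s implementation), a tracker that follows the same object across frames can flag that a supposedly static “overpass” is actually moving:

```python
# Hypothetical sketch of "object permanence": track the same object across
# frames and re-classify anything that moves, even if it was first labeled
# as static structure (an overpass, a sign gantry, etc.).

def is_actually_moving(positions_ft, frame_dt_s=0.1, speed_threshold_fps=1.0):
    """True if the tracked object's frame-to-frame speed exceeds the threshold."""
    speeds = [abs(b - a) / frame_dt_s for a, b in zip(positions_ft, positions_ft[1:])]
    return any(s > speed_threshold_fps for s in speeds)

# Lateral positions (feet) of two tracked objects over five camera frames.
overpass_like_object = [120.0, 120.0, 120.0, 120.0, 120.0]   # truly static
crossing_object = [30.0, 29.1, 28.2, 27.2, 26.3]             # drifting across the road

print(is_actually_moving(overpass_like_object))   # False -> safe to treat as structure
print(is_actually_moving(crossing_object))        # True  -> track as a moving hazard
```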

That is strange, the overpass is moving. NOT AN OVERPASS!

Update: Fri 3/23

The plot thickens. An Ars Technica post includes a video, shot by Brian Kaufman, of a nighttime drive along North Mills Avenue.

The darkness in the Uber accident video is completely different, and that is a troubling development. If Uber or the police manipulated the video, withheld a clearer video, or deliberately used a camera with poor light sensitivity, we have a potential public-relations disaster. Given Uber’s track record of mishandling PR crises, expect extreme scrutiny and skepticism from the public. When the Uber video was first released, I was struck by how dark the road appeared and surprised by the weak range of the headlights.

Verdict: Trouble is brewing

Update: Mon 3/26

It looks like the media is moving on from this story. One article that came out over the weekend drew some interesting contrasts between Uber’s and Waymo’s (Google’s) self-driving trial data.

http://thehill.com/policy/technology/380038-ubers-self-driving-cars-in-arizona-averaged-only-13-miles-withou

What you need to know from this story:

  1. Average distance between human interventions in an Uber self-driving car: 13 miles.
  2. Average distance between human interventions in a Waymo (Google) self-driving car: approximately 5,600 miles.

Yikes! That is roughly 430 times as many interventions per mile for Uber as for Waymo. This huge discrepancy raises the question of whether Uber’s technology is ready for prime time. Can Uber improve quickly and get closer to the Waymo numbers?
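Put another way, using the reported figures, over an identical 100-mile trip you would expect roughly eight interventions in the Uber car and essentially none in the Waymo car:

```python
# Expected human interventions over the same 100-mile trip, using the
# miles-per-intervention figures reported in the article.
UBER_MILES_PER_INTERVENTION = 13
WAYMO_MILES_PER_INTERVENTION = 5_600

trip_miles = 100
print(f"Uber:  ~{trip_miles / UBER_MILES_PER_INTERVENTION:.1f} interventions")   # ~7.7
print(f"Waymo: ~{trip_miles / WAYMO_MILES_PER_INTERVENTION:.2f} interventions")  # ~0.02
print(f"Ratio: ~{WAYMO_MILES_PER_INTERVENTION / UBER_MILES_PER_INTERVENTION:.0f}x")  # ~430x
```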

Update: Thur 3/29

The roof-mounted LiDAR was the only LiDAR sensor on the Volvo vehicle. This roof mounting creates a blind spot for objects that are low to the road.

My question, then, is: what are the 10 radars doing? Shouldn’t they compensate for the LiDAR blind spot?
