Initial Forensic Analysis of the Uber Self-Driving Car Incident in Arizona

By Lance Eliot, the AI Trends Insider

In my column published on April 27, 2017 (nearly a year ago), I stated this:

“I expect that we will soon have a self-driving car crisis-in-faith because some self-driving car will plow into a pedestrian. It is bound to happen.”

Sadly, prophetic.

You might be aware that in Tempe, Arizona on Sunday, March 18, 2018, in the evening around 10 p.m., an Uber self-driving car, with a human back-up operator at the wheel, ran into and killed a female pedestrian, 49-year-old Elaine Herzberg, who had entered the street outside of a crosswalk and in front of the oncoming self-driving car. Reportedly, the self-driving car was doing about 40 miles per hour when it hit the victim.

Besides the terrible tragedy of having killed the pedestrian, an especially alarming aspect is that reportedly the self-driving car took no evasive action whatsoever, and, furthermore, the human back-up operator also reportedly took no evasive action.

This comes as a shock to many.

Well, they need to prepare themselves for many more such shocks. I say this because some are currently assuming that whatever went wrong is somehow confined to Uber’s self-driving car (a Volvo). Presumably, all that needs to happen is to discover the technological problem, make a quick fix to their self-driving cars, and the world just moves along. This is narrow thinking.

It is true that each of the automakers is generally taking its own development approach to self-driving cars, so this particular problem might not exist in other self-driving cars, and whatever went wrong in this instance will not necessarily go wrong in others (though it could, and the ramifications are worth considering). Meanwhile, you need to be mentally ready for the fact that other self-driving cars are bound to have other kinds of problems that could potentially lead to a similar result.

I will therefore update my prediction of last year:

“In spite of whatever is learned from this recent incident, we will soon have another self-driving car that plows into a pedestrian, which will then shake to its core the self-driving car industry and a massive crisis-in-faith will ensue.”

We need to consider that this is not merely a technological problem, which is the simplest way to cast the incident. The moonshot race to be first to produce a self-driving car has regrettably put safety at a lower priority than it deserves, and once we have enough self-driving cars on our roadways, those safety risks will play out in front of the public view. It’s like spinning a roulette wheel: eventually the number is going to come up.

Let’s be clear: yes, this is a technological issue, but it is also a societal issue, a business issue, an ethical issue, etc. Don’t trivialize it by trying to make it seem like it is just some bug in code and that once we find it we’re done with this matter.

Here are some of the questions that have been trending in the media about this Uber incident:

  •         How could a self-driving car have hit a pedestrian (they are supposed to be programmed to not do so)?
  •         Aren’t self-driving cars going to bring us “zero” fatalities, which is what has been promised for the advent of self-driving cars?
  •         Even if somehow the self-driving car did nothing, isn’t the human back-up operator there to take over and prevent anything from happening (for which the self-driving car could not otherwise detect or prevent)?
  •         Etc.

If the above sends chills up your spine, here’s something else to realize: the latest versions of self-driving cars are being built with no means for a human occupant to directly handle the brakes or the steering wheel, and no access to the driving controls of the car at all. In this case there was a human operator, so at least there was a chance that the human operator might have been able to avert the incident. True Level 5 self-driving cars, which are supposed to be able to drive without any human intervention, won’t even offer the chance of a human trying to take over control of the self-driving car.

Now, before I get clobbered by everyone in the self-driving car industry, I want to emphasize that I am a huge proponent of self-driving cars. My firm is even developing AI software for self-driving cars. What I find worrisome is whether we are giving enough attention to safety. The rush toward wanting to be the first to have self-driving cars, fueled by the media attention that goes with it and the frenetic atmosphere of automakers and tech firms wanting to get to the moon first, has unfortunately led to safety getting less attention than it deserves.

If the Uber self-driving car had detected the pedestrian and tried to take evasive action, we’d be having a different conversation right now. It would be about what it did and how much effort it took to try and avoid hitting the pedestrian. But, the aspect that it seemingly did nothing is the part that catches our breath.

Likewise, if the Uber self-driving car had at least alerted the human operator, or if at least the human operator had taken over the controls and tried to avoid the pedestrian, we’d be having a different conversation. But, the aspect that the human operator seemingly did nothing is the other part that catches our breath.

I’d like to address these questions and use the Uber incident as a basis for doing so.

At the Cybernetic Self-Driving Car Institute, besides developing AI self-driving car systems and being a keen observer of the marketplace, we also do audits of self-driving car software and designs and perform as forensic specialists or expert witnesses regarding self-driving cars.

This Uber incident is of keen interest to us and everyone else in the self-driving car industry. It is widely hoped that the underlying technological elements will be revealed by Uber. Whether they will do so is an open question. No one wants to reveal their inner proprietary secrets. That’s a given. We’ll have to see in what manner and detail the matter is reported, and whether it is done so voluntarily or under pressure by legal or regulatory bodies. The incident is being investigated by the NTSB (National Transportation Safety Board) and the NHTSA (National Highway Traffic Safety Administration), which is important and helpful to the matter, but you should also be prepared for a lengthy time period before much is revealed about their findings.

I will provide herein a kind of armchair forensic analysis of the March 18, 2018 incident.

Keep in mind that as I write this, it has only been about a week since the incident occurred, and so I am writing with almost no details about the accident. It will be weeks or months before much more is publicly known about the incident. For a while, whatever has been collected or analyzed will be under wraps. As such, I am going to provide an “armchair” forensic analysis, meaning that I can only offer some educated guesses and speculation about what may have happened.

This speculation, though, is actually useful because it lays out the various scenarios of what might have occurred. It will also help those of you interested in AI self-driving cars gain some further insight into what goes on inside self-driving cars.

I cannot reasonably reach any definitive conclusions about what happened, but I can at least shed light on what might have happened and what to be on the lookout for. It will be interesting later on to see whether I was able to land on anything that actually turns out to be the true culprit or reason for the accident, which hopefully we’ll all know once the actual evidence is explored and the investigations are completed and published.

About Forensics

Forensics is a type of science that undertakes an investigation to provide helpful insights for both criminal and civil cases. Most people tend to think of forensics as only for criminal cases, and we see this aspect portrayed by actors in many TV shows and films, but the same kinds of forensic analyses are often needed in civil matters too. If two cars smack into each other, and there are no injuries and no significant damages, the matter could likely get mired in a civil case of one party suing the other for financial compensation. It is likely that forensics experts would be called upon to participate in assessing the circumstances and providing potential insights and even actual testimony for the case.

Sometimes a forensic specialist will go to the actual scene of an incident and collect evidence, while in other instances they will mainly work in a lab or office and perform their analysis and conduct needed research there. The techniques used by forensic specialists will vary depending upon the nature of the case. Also, there is at times controversy about the techniques used, in the sense that some techniques are considered open to question or interpretation, and so you can have one forensic specialist who claims one thing and another who claims something completely different.

Indeed, the two sides of a case, such as the prosecution and the defense in criminal cases, will line up forensic experts that are likely to go head-to-head about their respective findings. This is an important point because the layperson often assumes that the forensics is cut-and-dried, black-and-white, and that there isn’t any ambiguity or room for debate. As a matter of fact, the odds are that for any complex case you can end up with seemingly diametrically opposed conclusions by two fully qualified forensics specialists, each of whom uses a particular technique or approach and has made certain assumptions based on whatever evidence has been gathered.

For car accidents, there usually isn’t a civil lawsuit because the insurance companies that represent the drivers will duke it out as to which side should pay what. Most of the time, they work this out somewhat amicably (but fiercely). There are occasions though when an injured party believes that they aren’t getting a fair shake, and so they proceed to file a civil lawsuit. Typically, the insurance companies will then represent the respective drivers for the case, but this depends upon various aspects of the case.

We all recognize that if you get caught driving drunk, or Driving Under the Influence (DUI), you are usually subject to criminal penalties and especially when involved in a car crash. If you’ve caused property damage and/or injuries, there can be some pretty severe penalties involved. What many don’t realize is that you can also be charged with reckless driving, a criminal offense, even if you weren’t DUI. Most states require that you drive your car in a sound manner, and so if you do not do so, it can be considered reckless driving. It won’t matter that you were perfectly sober. Driving in a manner that causes damages or injuries is considered a violation of the law. Wet reckless is when you are DUI at the time, while dry reckless is when you were reckless and not intoxicated.

In the state of California, reckless driving is considered a misdemeanor as per California Vehicle Code Section 23103: “A person who drives in a vehicle upon a highway in willful or wanton disregard for the safety of persons or property is guilty of reckless driving.” Conviction can lead to county jail time for up to 90 days plus potentially a fine of up to $1,000. If there are significant property damages or injuries, things can get much worse in terms of the charges leveled and the outcome for the guilty party.

Potential Charges

For the Uber incident, there could be criminal charges involved, and there could be civil lawsuits involved.

It is possible that criminal charges might be levied against Uber, if the formal investigation concludes there was some form of recklessness or other failing on the part of Uber to ensure that their AI and the self-driving car was operating properly and appropriately.

There could also be criminal charges levied against the Uber human back-up operator, if the investigation concludes that the human operator failed to perform their duty.

Even if there aren’t any criminal charges, the odds of a civil lawsuit are probably high, though supposedly the woman who was killed was homeless, and so the lawsuit would need to be undertaken by someone connected to her, and no one has yet come forward as such. We’ll probably see a civil suit launched within the next 30 days, I’d guess.

Data About What Happened

There are various potential pieces of evidence that can be used to try and ascertain what happened. I’m sure that the on-scene investigation gathered the physical evidence at the scene of the incident. It would be important to see whether there were any tire marks to indicate whether the Uber car was braking or not, and to study the damage done to the self-driving car, and other damages. The various aspects of the surrounding environment are crucial too, including the roadway surface, the layout of the roadway, nearby objects, and so on.

There haven’t been any human witnesses that have come forward as yet.

Meanwhile, two videos were released: one from a video camera on the Uber self-driving car that was pointed forward, and a second from a camera that was pointed inward at the human back-up operator.

It’s handy that the videos were released, but there’s a lot more we’d all like to know, including this:

a) Blackbox recorder in the Uber self-driving car

b) Processor memory of the on-board systems

c) Uber/Volvo over-the-air in-the-cloud system

If there’s a blackbox recorder in that Volvo, presumably it could be inspected and the recording of the car status might tell whether the AI system was engaged. This determination is partially based on whether or not the blackbox survived intact (it should have since the damage to the car was relatively minimal), whether or not it is readable, and whether or not it was appropriately recording the car status, along with whatever Volvo and Uber have opted to have recorded as status.

The processor memory of the on-board systems is another place to look. Once again, this presumes that those systems survived the crash (probably did so, since the crash impact did not seem to severely destroy the car), also whether or not the memory was intact, etc.

Another place to look is at any in-the-cloud system that communicates over-the-air with the self-driving car. This might or might not help, depending upon when the last communications with the car were, and what was captured from the self-driving car, and whether the cloud kept intact the data collected, etc.

As far as we know right now, the weather shouldn’t be a factor in this incident, since it appears via the videos that there wasn’t any rain (which could have made the roads slick), and there wasn’t any snow (which could have obscured the sensors), etc. It appeared to be a typical Tempe evening, consisting of dry roads.

The video also suggests that other traffic was not a factor. It appears that there weren’t other cars nearby during the incident. There don’t appear to be any obstructions on the roadway, and so debris or other such factors don’t seem to come into play in this case.

Scene Analysis

Let’s do a scene analysis, based on what we know so far (again, all preliminary).

See Figure 1.

According to reports, the Uber self-driving car was heading northbound on North Mill Avenue. The incident occurred at a substantial distance prior to an intersection, and the pedestrian was walking a bicycle across the street, doing so illegally by jaywalking. The Uber self-driving car was reported as moving at 40 miles per hour, which is about 60 feet per second. Some have said that the speed limit was 35, which implies that the Uber self-driving car was speeding, but others have said that the speed limit was 45, which would imply that the Uber self-driving car was abiding by the speed limit. I’ll not address the speed limit issue herein at this time.

The northbound route at the juncture of the incident was apparently a two lane roadway that had a sizable median to the left, separating the northbound traffic from the southbound traffic. The median appears to have shrubs and trees, which we’ll come back to in a few moments.

The Uber self-driving car appears to have been in the rightmost lane, and struck the pedestrian with the bicycle in a nearly direct head-on manner. The video seems to suggest that the Uber self-driving car was not braking nor taking any kind of evasive action.

Per the video, it takes about 2 seconds from the time that the pedestrian and bicycle appear until the Uber self-driving car strikes them. As shown in the diagram, I have placed the car at about 120 feet from the impact, which is presumably when the video suggests that the pedestrian and bicycle can first be seen by the camera. The official video is rather poorly illuminated, and it suggests that the area was relatively dark, but there is controversy over whether the on-board video was properly tuned to the lighting and whether the video sufficiently shows the actual lighting. Indeed, since the incident others have made their own videos by driving that same stretch at night, trying to showcase that it is much better lit there than was portrayed in the official video.

In any case, given a speed of 40 miles per hour, which is 60 feet per second, and since the video seems to suggest that from the point of being able to see the pedestrian and bicycle and to the impact that it was 2 seconds, we can guess that the distance was about 120 feet.

I show in the diagram the typical ranges for various kinds of sensors that are on self-driving cars. I’m not saying that these are the sensors that were on this particular self-driving car, and we’ll need to find out what actual sensors were loaded into this self-driving car.

Generally, a wide forward camera has about a 197-foot maximum image collection capability. The lighting obviously makes a significant difference. Inadequate lighting can dramatically decrease that distance. There is some controversy about the headlights on the Uber self-driving car, since they seemed to cast light only about 120 feet ahead, and yet we would normally expect headlights in proper working order to be able to shine ahead 160 feet (according to the NHTSA averages), and for a modern car perhaps even 200 to 220 feet. This is something that will need further exploration in this incident.

Some who saw the video were quick to say that the incident was “unavoidable” because they used solely the visual aspects to decide what was possible. This is what we would do if a human was driving the car. We know that humans essentially rely on only one sense to drive, their vision. Therefore, it would be easy to fall into the mental trap of assessing the situation by what a human driver does.

But, this is a self-driving car. As such, it is presumably loaded with lots of other kinds of sensors. As you can see from Figure 1, LIDAR (light detection and ranging) can detect out to about 656 feet, regular radar can reach about 524 feet, and so even if the visual cameras weren’t able to see anything sooner, these other sensors should have.

In essence, if the LIDAR and radar were on-board and working, and at a speed of 40 miles per hour for the self-driving car, the system should have had maybe 8-10 seconds of advance warning to have detected the pedestrian and bicycle. Now, I realize this is somewhat misleading because as far as we know the pedestrian was not just standing stationary there in the middle of the lane of the self-driving car.

We might assume that the pedestrian was over on the median and began to cross into the lanes of traffic, walking the bike as she was doing so. A normal walking speed is around 3.1 miles per hour, which is about 4.5 feet per second. The video seems to show her in the lane at the 2 seconds prior to the incident. We can deduce that if she was walking the bike, and we go backward in time, she presumably was on the median about 2-3 seconds sooner than when first seen in the video.

We would then say that, at 2-3 seconds before the camera sees her, the Uber self-driving car was perhaps another 120 to 180 feet back from the point at which the camera first spots her. We’re now at a distance of about 300 feet back from the point of impact.

What does this tell us? It suggests that even if the Uber self-driving car was back at 300 feet from the point of impact and the pedestrian was on the median and getting ready to go into the street, the distances for the radar and LIDAR to spot the pedestrian and the bicycle as they came into the street would still presumably be feasible, given the maximum distance capabilities of those devices.
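
To make the arithmetic above concrete, here is a minimal back-of-the-envelope sketch in Python. The speeds, walking pace, and sensor ranges are the rough figures discussed above, not confirmed values for this particular vehicle, so treat the output as illustrative only.

```python
# Rough timeline reconstruction using the approximate figures discussed above.
# All values are estimates for illustration, not confirmed data from the investigation.

MPH_TO_FPS = 5280 / 3600                 # 1 mph is about 1.47 ft/s

car_speed_fps = 40 * MPH_TO_FPS          # ~58.7 ft/s, rounded to ~60 in the text
walk_speed_fps = 3.1 * MPH_TO_FPS        # ~4.5 ft/s, a normal walking pace

# Distance the car covers during the ~2 seconds the pedestrian is visible on camera
camera_window_ft = 2 * car_speed_fps     # ~120 ft

# If the pedestrian left the median 2-3 seconds before appearing on camera,
# the car was roughly this much farther back at that moment:
extra_ft = 3 * car_speed_fps             # up to ~180 ft
total_ft = camera_window_ft + extra_ft   # ~300 ft from the point of impact

# Nominal maximum detection ranges mentioned above (not necessarily this car's sensors)
lidar_range_ft = 656
radar_range_ft = 524

print(f"Car position when pedestrian left the median: ~{total_ft:.0f} ft from impact")
print(f"LIDAR margin: {lidar_range_ft - total_ft:.0f} ft; radar margin: {radar_range_ft - total_ft:.0f} ft")
print(f"Advance warning at full LIDAR range: ~{lidar_range_ft / car_speed_fps:.0f} s")
```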

Some have suggested that the shrubbery and trees on the median could have made it hard for the radar and LIDAR to distinguish the pedestrian and the bicycle.  Yes, that’s definitely an issue. They could either have been behind something that would have made the radar and LIDAR unlikely to spot them, or it could be that the nature of their structure made it hard.

Let me explain that aspect. If you train a neural network, which is considered a form of machine learning, you might feed it lots of images of pedestrians, and so it gradually trains on how to spot a pedestrian (via their image of having a body, legs, arms, a head, etc.). If you train a neural network on looking at bicycles, it can find patterns to be able to spot a bicycle, such as the tires, the handle bars, the seat, and so on.

If you combine together a pedestrian and a bicycle, it creates a new kind of image that is neither a pedestrian alone nor a bike alone. We humans can readily realize that a person standing in front of a bicycle is two kinds of objects, namely a pedestrian and a bicycle. A neural network that’s not been trained for that image would not readily be able to realize what the combination is.

In essence, it is possible that even if the radar and LIDAR detected this “blob” consisting of a pedestrian and a bicycle, it was not logically able to determine what it was. This is crucial because if the AI was programmed to predict what might happen, and if it was established that a pedestrian could run into a street, or a bicycle could roll into a street, it might not have been able to discern what the intention of this blob was going to be.

Okay, let’s assume that maybe it was a blob that the radar and LIDAR detected. Even if that’s the case, it still would have been able to detect that the blob was moving. During those few seconds that the blob moved off the median, into the street, and then into the lane of the Uber self-driving car, it should still have been able to calculate that an object was moving into the oncoming path of the self-driving car. It might not have known what the blob was, but it could have at least determined that the blob was moving and moving into the path ahead.
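
Even without knowing what the object is, a tracker that keeps recent position estimates can project the object’s motion forward and test whether it will enter the vehicle’s lane. The sketch below is a simplified, hypothetical illustration of that idea; the function name, the lane geometry, and the constant-velocity assumption are mine, not anything from Uber’s actual design.

```python
# Hypothetical check: will an unclassified "blob" drift into our lane?
# Works in a simple road frame: x = lateral offset from our lane center (ft),
# y = distance ahead of the car (ft). Ego motion and time-to-impact are ignored
# here for brevity; a real planner would account for both.

def will_cross_path(track, lane_half_width_ft=6.0, horizon_s=5.0, step_s=0.1):
    """track: list of (t, x, y) recent position estimates for the unknown object."""
    if len(track) < 2:
        return False  # not enough history to estimate a velocity
    (t0, x0, y0), (t1, x1, y1) = track[-2], track[-1]
    dt = t1 - t0
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    t = 0.0
    while t <= horizon_s:
        x, y = x1 + vx * t, y1 + vy * t
        if abs(x) <= lane_half_width_ft and y > 0:
            return True   # projected into our lane, ahead of us
        t += step_s
    return False

# Example: a blob 30 ft left of lane center, ~200 ft ahead, drifting right at ~4.5 ft/s
track = [(0.0, -30.0, 200.0), (1.0, -25.5, 198.0)]
print(will_cross_path(track))  # True -> worth planning braking/evasion even without a label
```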

Timeline of Collision Detection

Let’s consider Figure 2.

Some people have said that the moment that the self-driving car detected the pedestrian and bicycle, the AI should have instantaneously taken evasive action. Whoa!  We need to consider that “time” is an element in any kind of system. Things don’t just magically happen instantaneously.

As shown in Figure 2, there is a time involved in doing sensor data collection and analysis; let’s call this amount of time t1. The sensor data and analysis get fed into the sensor fusion subsystem, which takes some amount of time to analyze all of the sensors together; we’ll call that amount of time t2. The sensor fusion analysis is fed into the virtual world model of the surrounding driving environment, and these updates take an amount of time t3. Then, the AI prepares an action plan based on what has been fed so far, and it takes some amount of time t4 for this to occur. Finally, the AI action plan flows to the controls activation subsystem, which takes some amount of time t5 to send commands to the driving controls.

Therefore, we have Total Time = t1 + t2 + t3 + t4 + t5, which occurs prior to the driving control of the car doing anything other than what they were last told to do. In essence, if the self-driving car was doing 40 miles per hour, and the accelerator was set for that, and the brakes weren’t being applied, and the steering was straight ahead, then until the Total Time occurs there won’t be any new changes applied to the driving controls.
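
As a toy illustration of this Total Time idea, the sketch below sums made-up stage latencies (the real t1 through t5 for this vehicle are not known) and compares them against the roughly 2 seconds suggested by the video:

```python
# Toy illustration of Total Time = t1 + t2 + t3 + t4 + t5.
# The stage latencies below are invented placeholders, not measured values.

stage_latencies_s = {
    "t1_sensor_collection_and_analysis": 0.30,
    "t2_sensor_fusion": 0.20,
    "t3_virtual_world_model_update": 0.15,
    "t4_ai_action_planning": 0.25,
    "t5_controls_activation": 0.10,
}

total_time_s = sum(stage_latencies_s.values())
time_to_impact_s = 2.0  # the ~2 seconds suggested by the forward-facing video

print(f"Total Time through the pipeline: {total_time_s:.2f} s")
if total_time_s >= time_to_impact_s:
    print("Pipeline is slower than the time-to-impact: the controls never get a new command.")
else:
    print(f"Only {time_to_impact_s - total_time_s:.2f} s remain for braking or steering to have any effect.")
```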

Studies of humans show that they typically take about 2.5 to 5 seconds to react to a sudden driving situation (from the point at which they first realize it), and it can be up to another 5-10 seconds before they fully take appropriate action to hit the brakes or steer the car. There is much debate about the norms of human reaction times in driving situations. Different people react differently, and different situations involve different reactions.

That being said, some studies claim that at a speed of 40 miles per hour, if a human realizes they need to stop the car on a suitable straightaway and they instantaneously jam on the brakes, the car itself could come to a stop in about 164 feet. This so-called stopping distance is a combination of “thinking time” (about 76 feet) and “braking time” (about 88 feet). In this case, that implies that at the moment the camera seems to reveal the pedestrian, if the brakes had ideally been applied immediately at the 2-second mark of 120 feet, and given that the roadway was dry and seemingly well paved, and assuming the Uber self-driving car had good tires and good brakes, the Uber self-driving car would have had a slim chance of coming to a complete halt prior to the pedestrian, but at least it would have struck the pedestrian with dramatically less force (likely leading to injury but not necessarily death). Alternatively, the car could possibly have been steered away from the pedestrian while also hitting the brakes (causing no blow at all, or perhaps a glancing blow). Keep in mind that these are all theoretical numbers at this stage of the analysis and we’ll need to see what the official investigation shows as to the actual distances and actual times involved. I also advise that everyone be careful using the word “unavoidable” because, as you can see from these numbers, there are “unavoidable” incidents that can have catastrophic results involving death, while there can be “unavoidable” incidents that might instead involve injury but not death.
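
The stopping-distance figures above can be reproduced with the usual thinking-distance plus braking-distance arithmetic. The reaction time and deceleration below are assumptions I chose to roughly match the 76-foot and 88-foot figures, not measured properties of this car or driver:

```python
import math

# Stopping distance = thinking distance + braking distance.
# The reaction time and deceleration are assumptions picked to roughly match the
# ~76 ft / ~88 ft figures quoted above; real values depend on the driver, tires,
# brakes, and road surface.

MPH_TO_FPS = 5280 / 3600

speed_fps = 40 * MPH_TO_FPS          # ~58.7 ft/s
reaction_time_s = 1.3                # assumed "thinking time"
decel_fps2 = 0.61 * 32.17            # assumed braking deceleration (~0.61 g)

thinking_ft = speed_fps * reaction_time_s          # ~76 ft
braking_ft = speed_fps ** 2 / (2 * decel_fps2)     # ~88 ft
stopping_ft = thinking_ft + braking_ft             # ~164 ft

available_ft = 120  # distance at which the camera appears to first show the pedestrian

print(f"Thinking: {thinking_ft:.0f} ft, braking: {braking_ft:.0f} ft, total: {stopping_ft:.0f} ft")

# Even when a full stop is impossible, braking bleeds off speed before impact.
# Assume the thinking distance is consumed before the brakes begin to bite:
braking_room_ft = max(available_ft - thinking_ft, 0.0)
remaining_v2 = max(speed_fps ** 2 - 2 * decel_fps2 * braking_room_ft, 0.0)
impact_speed_mph = math.sqrt(remaining_v2) / MPH_TO_FPS
print(f"Approximate impact speed after braking: ~{impact_speed_mph:.0f} mph")
```
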
Thus, if a human was actively driving the Uber self-driving car, and they were directly paying attention to the road, and if the video is accurately depicting that the pedestrian could not be seen other than the 2 seconds or so prior to impact, it seems unlikely that the human could have reacted in time to have completely stopped the car, though they might have been able to slow it. But, the video could be misleading, and an attentive human driver might have been able to see the pedestrian and bicycle on the median, and therefore had more time as a defensive driver to get ready to swerve or stop the car. Indeed, one might say that if the pedestrian had been without a bicycle, it might have been harder to spot her, but given the larger size of the “blob” by having both together, it would presumably have been easier to spot.

 

Furthermore, in the role as a back-up operator, in theory the human driver in the self-driving car is supposed to actively be watching for situations just like this. Unlike the average human driver that is just driving along in their own car, the back-up operator is purposely there to be aware and alert. And, they are supposed to be trained to do so. Unfortunately, what often happens is that the back-up operator becomes accustomed to nothing unusual happening, and so they become complacent. In this case, the inward-pointing video shows that the back-up operator was looking down and away from the roadway, and glanced up just as the impact occurred.

Some of the self-driving car companies have two operators in their cars. One sits at the controls, while the other one sits in the back and is monitoring the status of the AI system. Presumably, the second operator sitting in the backseat can be acting to keep the operator in the front seat alert, doing so by watching them and urging them to stay alert. Some wonder whether Uber should have had a second human operator that could have been aiding the operator driving the vehicle to remain attentive. Others also wonder what kind of systems were in the car to try and keep the operator alert, such as systems that force the driver to keep their hands on the wheel, and systems that watch the eyes of the driver to make sure they are looking ahead.

In terms of the Total Time = t1 + t2 + t3 + t4 + t5, we don’t yet know how long each of those steps took for this self-driving car. It depends on the speeds of the computer processors and the nature of the programming code and pattern matching systems, etc. The point is that the AI won’t react “instantaneously” and instead it takes time to figure out what is occurring and what to do about it.

Collision Detection Aspects

As shown in Figure 3, for self-driving cars there is right now an AI system that serves as the primary driver, and a human operator that serves as the secondary or back-up driver.

What’s supposed to happen is that the AI system detects a potential collision, and possibly the human back-up operator does too, but presumably the AI system will take the needed action and the back-up operator just goes along for the ride in that use case.

There’s also the circumstance of the AI making a detection of a possible collision, and the human back-up operator does not, in which case the AI system takes the needed action and the back-up operator is fortunate that the AI figured out what to do.

The not-so-good case is when the AI does not detect a potential collision. In theory, the back-up operator then takes over the controls, assuming they detect the upcoming collision. This is not as easy as it sounds. The human back-up operator might be reluctant to take over the controls, unsure of whether a potential collision is really going to happen, and can become over-confident in the AI system.

The worst-case scenario is when the AI doesn’t detect the potential collision and nor does the human back-up operator.

It seems that’s what happened in the Uber self-driving car instance in Tempe.

Take a look at Figure 4.

In the upper right corner of the Collision Detection Forensic Matrix, labeled as box P.2, we have the double protection of both the AI and the human operator detecting a potential collision. This is what we want to happen.

The upper left corner shows the instance of when the AI makes the detection, but the human operator does not, and it’s what we also expect to happen from time to time (labeled as P.1), namely that the AI has more advanced sensory capabilities and alertness than does a human, and so it should presumably be able to do a better job at detecting potential collisions. That’s the theory of it.

But, we know that today the AI is not fully at a human driver functioning capacity, and so we have the human operator there to serve as a back-up, and so the lower right corner (P.3) shows the instance when the human covers for the AI.

The toughest scenario of them all is in the lower left corner, labeled as P.4. This is the circumstance wherein the AI doesn’t detect a collision and nor does the human back-up operator. I know that many of the auto makers and tech firms say that this “should never happen,” but that’s wishful thinking. We seem to have a now well-publicized case in which it did happen. And, as I’ve predicted, we’ll have more.

No-Evasive-Action Matrix

Figure 5 shows the No-Evasive-Action Failure Matrix.

If a self-driving car takes no evasive action, neither by the AI nor by the human operator, we need to consider how this might have occurred.

There are four scenarios.

If the AI detected an upcoming collision, and the human back-up operator did so too, but neither took any evasive action, we would have essentially a catastrophic action failure (box N.2). It could be vexing to think that neither the AI nor the human took evasive action, even though they both detected that a collision was imminent.

If the AI detected a collision upcoming, and the human did not, and yet the AI did not take evasive action, we’d say that AI failed to do something even though it detected the collision (box N.1). If the human detected a collision upcoming, and the AI did not, and yet the human did not take evasive action, we’d say that the human failed to do something even though they detected the collision (box N.3). Finally, if both failed to detect, it pretty much stands to reason that no evasive action would be taken, and so we’d want to know why neither of them detected the collision.
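
One way to hold both matrices in mind at once is as a simple triage over four questions: did the AI detect the hazard, did the human, and did each take action. The labels below mirror the P and N boxes in the figures, though this encoding is just my own shorthand sketch, not anything from the official figures or from Uber:

```python
# Hypothetical shorthand for the two forensic matrices discussed above.
# The detection flags map to the P boxes; the action flags map to the N boxes.

def detection_box(ai_detected: bool, human_detected: bool) -> str:
    if ai_detected and human_detected:
        return "P.2: both detected (the double-protection case we want)"
    if ai_detected:
        return "P.1: AI detected, human did not"
    if human_detected:
        return "P.3: human covered for the AI"
    return "P.4: neither detected (the toughest scenario)"

def action_box(ai_detected: bool, human_detected: bool,
               ai_acted: bool, human_acted: bool) -> str:
    if ai_acted or human_acted:
        return "Evasive action was taken; not a no-evasive-action failure"
    if ai_detected and human_detected:
        return "N.2: both detected, neither acted (catastrophic action failure)"
    if ai_detected:
        return "N.1: AI detected but did not act"
    if human_detected:
        return "N.3: human detected but did not act"
    return "Neither detected, so no evasive action followed"

# What the Tempe incident appears to be, based on the reporting so far:
print(detection_box(ai_detected=False, human_detected=False))
print(action_box(False, False, ai_acted=False, human_acted=False))
```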

You might be wondering why the AI might not detect a potential collision.

Take a look at Figure 6 to see a Collision Detection Forensic Tree.

As shown, let’s consider the circumstance of the AI detecting a potential collision but not taking evasive action. It could be that there was insufficient time to react (labeled as D.1). If the Uber self-driving car actually spotted the pedestrian at 2 seconds to impact, and if t1+t2+t3+t4+t5 took, let’s say, 3 seconds, it would imply that the AI was in the midst of detection and had not yet sent commands to the controls of the car to take action. Therefore, from all outside appearances, it would look like the AI did nothing at all. In actuality, it might have been in the midst of deciding what to do.

Another possibility is that the AI detected the pedestrian and the bicycle but failed to predict that a collision would occur (labeled as D.2). Maybe the AI calculated that the “blob” was not going to intersect into its lane, or that they would pass through the lane and so no action was needed.  From outside appearances, we don’t know if the AI actively decided that the prudent course was to continue ahead and did so because there didn’t seem to be a collision arising.

There might have also been an error in the processing of the AI. Perhaps it was updating the virtual world model, and there’s an error in that part of the system that led to the placement of the pedestrian and bicycle further away from the lane. Or, maybe the AI action plan had no prior programming to cope with a circumstance like this, and so it was not able to come up with an evasive plan. From outside appearances, we don’t know if the inner workings of the AI might have made errors as it was undertaking t1, t2, t3, t4, t5.

There’s another twist to this too.

Suppose the AI did detect the pedestrian and bicycle, but classified the two as a blob, and therefore considered it to be an unknown object. Maybe it had no clue as to what it was and only surmised that there was something there. As such, another option would be to try and decide whether to do an emergency braking, or instead just drive through whatever it is. This is akin to when you sometimes see a tumbleweed on the roadway and maybe you decide that it is safer for you to just hit the tumbleweed, rather than doing a dangerous braking or swerving, or maybe you believe there is insufficient time to take any evasive action and so you just proceed ahead. From outside appearances, we only seem to know that the self-driving car proceeded ahead unabated, but it could have been a misguided intentional act under the assumption that there was just some kind of debris in the roadway and thus seemingly the best course of action was to strike it.

The other part of this tree is the part that involves the AI not detecting the pedestrian and bicycle.

This could have happened due to limitations of the sensors that were on this particular self-driving car (U.1), or it could be that the sensors failed (U.2). For example, suppose the LIDAR was not working correctly or had some malady during the time that it was scanning in that area, and the same for the radar. Now, presumably, the importance of having multiple sensors and of different types is that when one of them falters or fails, the others are there to help take up the slack.

The sensor fusion could have made an error (U.3). Suppose the LIDAR reported that it didn’t detect the pedestrian, while the radar did. How was the sensor fusion programmed? It might have been programmed that if the LIDAR doesn’t vote the same way as the radar, it considers the circumstance as a false reading by the radar. And, so the sensor fusion ignores the radar. If you then also add to this calculation that maybe it is programmed to wait and see what the cameras spot, you could easily get down to those last 2 seconds, in that the sensor fusion might have been deciding that the radar and LIDAR weren’t aligned with each other and it was going to therefore wait until the camera said something.
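
As a purely hypothetical sketch of how such a rule could consume precious seconds, consider a fusion step that only accepts an obstacle when at least two sensors agree; a lone radar detection gets discarded until the camera finally confirms it. None of this reflects Uber’s actual fusion logic, which is not public:

```python
# Hypothetical two-out-of-three voting rule for sensor fusion.
# This is NOT Uber's actual logic; it only illustrates how a disagreement-handling
# rule can delay recognition of a real obstacle.

def fused_obstacle(lidar_hit: bool, radar_hit: bool, camera_hit: bool) -> bool:
    votes = sum([lidar_hit, radar_hit, camera_hit])
    return votes >= 2

# Timeline where only the radar sees the object until the camera confirms it late
timeline = [
    {"t": -6.0, "lidar": False, "radar": True, "camera": False},
    {"t": -4.0, "lidar": False, "radar": True, "camera": False},
    {"t": -2.0, "lidar": False, "radar": True, "camera": True},   # camera confirms
]

for frame in timeline:
    accepted = fused_obstacle(frame["lidar"], frame["radar"], frame["camera"])
    print(f"t={frame['t']:+.1f}s  obstacle accepted by fusion: {accepted}")

# The lone radar hits at t=-6s and t=-4s are discarded; the obstacle is only
# accepted at t=-2s, leaving very little time for any evasive maneuver.
```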

There could be other systems errors that could have led to the lack of a detection (labeled as U.4). Keep in mind that there are a myriad of subsystems involved in a self-driving car, and likely millions of lines of programming code. Even with this being tested by lots of simulations, the auto makers and tech firms are saying that they cannot really test everything until they put these cars onto the roadways. There could be some other aspect internally that prevented the detection.

Human Operator Aspects

Take a look at Figure 7 to consider what occurs about the human operator and detection of a collision.

Suppose the human operator does detect a potential collision but opts to not take evasive action. Why would this be?

It could be that the human back-up operator assumed the AI would handle it (D.1). It could be that they weren’t able to react in time (D.2). It could be that they estimated there would not be an actual collision (D.3). Or, they might have just frozen up, maybe startled at what was going to happen (D.4).

Suppose the human operator did not detect a potential collision. How can this be?

It could be that their vision was blocked (U.1). It could be that they weren’t looking (U.2). It could be that they were looking elsewhere (U.3).

According to the inward-facing video, the human operator in this case seemed to be looking downward at something, so we’d call this a U.3 instance of looking elsewhere, and since they were not looking at the roadway for those crucial moments, it is also a U.2 instance (not looking).

Lessons To Be Learned

Overall, it’s still too early to figure out what actually happened.

But, I hope that we learn so far at least these lessons:

  •         AI is not infallible, and we should not anthropomorphize it into being all-knowing
  •         People developed the AI and so we need to consider what the system was developed to do
  •         We need to hold accountable the companies that made the AI and not shrug it off as just “woeful systems”
  •         There is no such thing as zero fatalities with self-driving cars, not now, not ever
  •         Self-driving cars are a combination of hardware and software, all of which can falter
  •         Human back-up operators will become complacent and we need to minimize this
  •         Reaction times for AI systems need to be determined and tuned to optimal levels
  •         What happens with one particular self-driving car model can occur similarly in others
  •         Finding and fixing one potential error or bug does not ergo mean that the AI is now perfected
  •         Other self-driving car models are just as likely to have other kinds of issues or errors, so don’t become fixated on one issue that happens to have first arisen with great visibility

And, we need to increase the importance of safety as a key factor in the design, development, and fielding of self-driving cars. This needs to be a mantra for all stakeholders and for all parts of the industry.

This content is originally posted on AI Trends.