Forensic Analysis of Tesla Crash Based on Preliminary NTSB June 2018 Report

By Lance Eliot, the AI Trends Insider

Based on the June 7, 2018 release of the preliminary NTSB report about the fatal car crash of a Tesla on March 23, 2018, I provide in today’s column an initial forensic analysis of the incident.

Keep in mind that the just-released NTSB report (issued about three days ago as of the writing of this column) is very slim on details at this juncture of the investigation, so there really isn’t much in the way of facts and evidence that would allow for a thorough analysis. Nonetheless, it is instructive to try to piece together the clues released to date and see if useful insights can be gleaned.

Faithful readers of my columns will recall that I did a similar forensic analysis of the Uber self-driving car incident that occurred in Arizona, and that later on, upon release of the initial report by the NTSB, it turned out that my predictions of what likely occurred were quite prescient. Indeed, there didn’t seem to be any other published news item about the incident at the time that had so aptly predicted what might have taken place.

Here’s the original Uber incident analysis that I had posted: https://aitrends.com/selfdrivingcars/initial-forensic-analysis/

Here’s the follow-up about the Uber incident analysis and how it matched to the NTSB investigatory report: https://aitrends.com/selfdrivingcars/ntsb-releases-initial-report-on-fatal-uber-pedestrian-crash-dr-lance-eliot-seen-as-prescient/

Learning From the “Past”

At the Cybernetic AI Self-Driving Car Institute, we are developing AI systems for self-driving cars. As a result, we are keenly interested in what happens in incidents involving self-driving cars, along with wanting to provide insights for auto makers and other tech firms that are in the midst of developing and maintaining such AI systems.

As per the famous quote of philosopher George Santayana: “Those who cannot remember the past are condemned to repeat it.” With the rapid advances taking place with AI self-driving cars, and with the use of our public roadways as part of a grand experiment to see whether self-driving cars are viable, it is especially crucial that we all look at every morsel of what is taking place and try to as rapidly as possible consider what action is most appropriate to ensure the safety of us all. Our collective desire as a society to have AI self-driving cars needs to be balanced by our collective desire to be reasonably safe on our public roadways too.

For some of you, you might say that the risks of any self-driving car endangerment are simply the presumed calculated risk of the person that has opted to drive such a car. This though is narrow thinking and does not encompass the proper larger scope. The human driver is certainly taking a risk, but so too are any occupants accompanying that human driver – suppose the driver has children in the car, are those children able to also calculate the risks involved? The self-driving car could ram into and injure or kill humans that are in other nearby cars – did they get a chance to assess and take on the risks of that self-driving car? Pedestrians could be injured or killed – did they have a say in accepting the risks of the self-driving car?

I mention this because at some of my presentations around the country there are at times self-driving car pundits that stand-up and say that it should be each person’s own choice as to whether or not they want to drive a self-driving car, and that the government and anyone else should stay out of it. If the driver of a self-driving car was able to drive the car in a manner that was completely isolated from all of the rest of us, I likely would agree that it could be an informed personal decision to make.

We must though keep in mind that driving on our public roadways is a societal interaction and therefore would seemingly be a societal decision. It is for this reason that driving by humans is deemed a “privilege” and not a “right” per se. The Department of Motor Vehicles (DMV) in most states lays out what a human driver must do and not do, in order to retain the privilege to drive a car on our public roadways. There isn’t an irrevocable right to drive on our public roadways. It’s a revocable privilege.

Context of the Tesla Incident

Here are some key aspects about the Tesla incident, based on the NTSB preliminary report, which noted this:

  •         Accident Location: Mountain View, CA
  •         Accident Date: 3/23/2018
  •         Accident ID: HWY18FH011
  •         Date Adopted: 6/7/2018
  •         NTSB Number: HWY18FH011 Preliminary

On a relatively sunny and dry Friday morning in California, March 23, 2018, at around 9:30 a.m. PDT, a 2017 Tesla Model X P100D was being driven southbound on the multi-lane US Highway 101 (US-101) in Mountain View and was nearing the interchange with State Highway 85 (SH-85). This is a typical California interchange that involves various paths to get onto each of the two intersecting major highways. Drivers can choose to go from the US-101 onto the SH-85, and drivers on the SH-85 can choose to go onto the US-101. Both of these highways are very commonly driven and carry heavy traffic, especially on a Friday morning.

Notice that I’ve mentioned some important aspects already. Was the incident occurring on a rainy day? No. It was sunny and the roads were dry. This is important because we might otherwise need to consider the weather as a factor in what occurred. I think it is relatively sound to assume that weather was not a factor since the roads were dry and it was sunny. Was it nighttime or daytime when the incident occurred? It was daytime, and nearing mid-morning, and so it would have been relatively well lit. I mention this because if it were say nighttime, we’d need to consider the impact of darkness and how it could have diminished the capability of the cameras used for the self-driving car.

Were the roads involved somewhat obscure or more commonly used? Answer: they were commonly used. This did not occur in some back-country location. These are modern-day highways. They are well paved and well maintained. They are multi-lane highways, not single-lane roads. Traffic is relatively abundant on these highways. If it had been the same time on a Sunday, I’d bet there would have been a lot less traffic. Friday morning is around the time that many drivers are going to work, so it would have been a generally busy highway, and generally busy in terms of cars trying to use the interchange to get to where they needed to go.

You might wonder why I am laboring to mention these nuances. I hope that you’ll see the “magic” behind doing a forensic analysis. It is crucial to consider each piece of evidence and try to use it in putting together the puzzle. This involves considering what we do know, what we can leverage to speculate about, and also what we know to not be the case – for example, we know it was not snowing. This then rules out that snow might somehow have been a factor in the incident. Now, since it essentially never snows in Mountain View, I suppose you might say it’s kind of ridiculous that I point out it wasn’t snowing, but it is important to point out what we can about any factors that might have come into play. Not everyone knows that it doesn’t snow in Mountain View, and it could be that some reading this would wonder whether, say, snow or rain or other conditions came into play.

The Tesla at the time of the incident was in the High Occupancy Vehicle (HOV) lane, which was the second lane from the left of the median that divides the southbound and northbound lanes. Normally you are permitted to travel in the HOV lane when you have multiple occupants; in this case there was a solo driver, but since the Tesla is an electric car it qualified to use the HOV lane. The human driver was a 38-year-old male and sadly he died as a result of the crash. There isn’t any detail yet in the NTSB preliminary report about the driver, but for this analysis let’s assume that he was versed in driving the Tesla, that he had likely gone this way before, and that he was familiar with the nature of the Tesla driving controls. We don’t know that any of this is the case, so please keep in mind that it is for the moment an assumption.

The NTSB indicates they have examined “performance data downloaded from the vehicle.” What we don’t yet know is what this actually means. Did the Tesla have an Event Data Recorder (EDR)? An EDR is a so-called black box, akin to the black boxes used in airplanes that are so helpful when determining why an airplane crashed. Or, did the NTSB have to get various memory dumps from other processors in the Tesla, and if so, which ones, and which ones weren’t available, perhaps due to being destroyed or damaged? Has the NTSB gotten access to data that might have been transmitted to the Tesla cloud, the same connection that allows for OTA (Over The Air) updates? We don’t yet know the depth and extent of the data that the NTSB has collected or intends to collect.

See my article about Event Data Recorders and AI self-driving cars: https://aitrends.com/selfdrivingcars/event-data-recorders-edr-self-driving-car-need-black-box-relook/

See my article about OTA and AI self-driving cars: https://aitrends.com/selfdrivingcars/air-ota-updating-ai-self-driving-cars/

The Crash Itself

The NTSB report indicates that the Tesla was going about 71 miles per hour when it struck the tail-end of a concrete median that had a “crash attenuator” as an additional barrier at its leading edge. You’ve probably seen these attenuators before and yet didn’t know they had a name. An attenuator is a special kind of barrier that is set up to help “soften” the blow of a car that crashes into the finger-point of a concrete median. Years ago, traffic studies showed that when a car directly hits the edge point of a concrete median, the result is pretty much like a knife slicing bread. Thus, the thought was to put something at the edge that would dampen the blow. In some cases, you’ll see those bright yellow barrels that are filled with sand or sometimes water. The attenuator used in this instance is specially built for the purpose of sitting at the edge and taking the blow of a car, along with trying to warn a car to not hit it, doing so by having yellow and black markings that visibly suggest don’t hit it.

What’s an interesting added twist is that the NTSB report says the attenuator had been previously damaged. It would be helpful if the NTSB could say more about this. Was the attenuator so damaged that it could no longer perform as intended? Suppose it was damaged to the extent that it could no longer soften the blow of a crash, or could only do so to a degree of, say, 50%? This is important to know when assessing whether the crash might have been survivable. Also, was the painted or marked aspect of the attenuator still visible and fully presented, or was it damaged such that the attenuator could no longer sufficiently convey its don’t-hit-me warning?

This could be crucial for human drivers, in terms of whether they could discern that it was a crash-warning attenuator, and in this case perhaps even more crucial, since the question is whether the Tesla cameras and vision processing could discern it.

I mention this aspect since, by-and-large, humans are generally able to visually figure out that an attenuator is indeed an attenuator, and humans have the “common sense” that it is there to save lives and that if you are driving toward it you are likely making a big mistake, because behind it is something fierce like a concrete median. Did the cameras of the Tesla capture images of the attenuator? Did the AI system examine the images and detect that within the image was an attenuator? We don’t yet know, since the NTSB report doesn’t say anything about this.

It is possible that the Tesla AI system has used machine learning to try and determine what attenuators look like. This can be done via the use of artificial neural networks that are trained on thousands and thousands of images of attenuators. Generally, one would like to assume that the Tesla AI has been trained sufficiently to recognize an attenuator, but suppose that this one was damaged in such a way that it no longer well-matched the training images? It is conceivable that the AI system did not categorize the attenuator as an attenuator, and might have classified it as some unknown object, if it detected it at all.
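
To make that last point a bit more concrete, here is a minimal sketch of how a confidence-thresholded classifier can end up labeling a damaged object as “unknown.” The class labels, logit values, and threshold are all hypothetical and purely for illustration; we have no visibility into Tesla’s actual perception pipeline.

```python
import numpy as np

# Hypothetical class labels; we have no visibility into Tesla's actual taxonomy.
LABELS = ["attenuator", "concrete_barrier", "vehicle", "lane_marking"]
CONFIDENCE_THRESHOLD = 0.75  # assumed cutoff below which an object is treated as unknown

def classify(logits):
    """Turn raw classifier scores into a label, falling back to 'unknown'
    when the top score is not confident enough."""
    scores = np.exp(logits - np.max(logits))   # numerically stable softmax
    probs = scores / scores.sum()
    best = int(np.argmax(probs))
    if probs[best] < CONFIDENCE_THRESHOLD:
        return "unknown", float(probs[best])
    return LABELS[best], float(probs[best])

# An undamaged attenuator might match its training examples strongly...
print(classify(np.array([4.0, 1.0, 0.5, 0.2])))   # ('attenuator', ~0.91)

# ...while a crushed, previously damaged one might only weakly match anything.
print(classify(np.array([1.2, 1.0, 0.9, 0.8])))   # ('unknown', ~0.31)
```

The point of the sketch is simply that a classifier trained on undamaged attenuators can produce low confidence across the board when shown a crushed one, and a cautious fallback to “unknown” is one plausible outcome.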

See my article about the lack of common sense reasoning in today’s AI self-driving cars: https://aitrends.com/selfdrivingcars/common-sense-reasoning-and-ai-self-driving-cars/

See my article about the need for AI self-driving cars to have defensive driving capabilities: https://aitrends.com/selfdrivingcars/art-defensive-driving-key-self-driving-car-success/

Something else about that damaged attenuator is worth mentioning. According to the NTSB report, the attenuator was damaged on March 12, 2018 by a crash involving a 2010 Toyota Prius. The attenuator, which the NTSB report says was an SCI smart cushion attenuator system, had not yet been repaired, and one might ask why. The Tesla incident happened on March 23, 2018. That’s about 11 days, or nearly two weeks, after it was first damaged on March 12, 2018. Was it scheduled to be repaired or replaced? Again, we don’t know that it would have made a difference, but it would be something that the NTSB will hopefully report on.

The NTSB report does not indicate what kind of sensors were used on this particular Tesla Model X. It would be crucial for the NTSB to indicate what sensors were used, and also whether it was known as to whether or not the sensors were in working order at the time leading up to the crash. We don’t yet know.

I am going to guess that it might have had these sensors (please don’t hold me to this, it is just an educated guess based on what normally would be included):

  •         Rearward Looking Side Cameras: 100 meters (about 328 feet)
  •         Wide Forward Camera: 60 meters (about 197 feet)
  •         Main Forward Camera: 150 meters (about 492 feet)
  •         Narrow Forward Camera: 250 meters (about 820 feet)
  •         Rear View Camera: 50 meters (about 164 feet)
  •         Ultrasonic: 8 meters (about 26 feet)
  •         Forward Looking Side Camera: 80 meters (about 262 feet)
  •         Radar: 160 meters (about 524 feet)

I’ve also indicated the typical ranges of the considered maximum detection for each of those devices. This is important to know since the analysis of a crash involves determining how many feet or meters away the AI system could potentially have detected something. In the case of the Uber incident in Arizona, I had used the reported speed of the car to try to ascertain how far in advance of the crash the AI could potentially have detected the object in the roadway (it was a pedestrian walking a bicycle). Knowing the type of sensors and their detection ranges is vital to such an analysis.
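
As a rough illustration of why those ranges matter, here is a small calculation of the theoretical warning time each sensor could provide at the speeds in the NTSB report. The ranges are my educated guesses from the list above, and the calculation assumes a stationary obstacle directly ahead with an unobstructed view, which is almost certainly more generous than what actually happened.

```python
# Rough time budget: how many seconds of warning each sensor could provide at the
# speeds in the NTSB report. The ranges are the guessed values from the list above,
# and the math assumes a stationary obstacle directly ahead with an unobstructed view.

METERS_PER_MILE = 1609.34

def seconds_of_warning(range_m, speed_mph):
    speed_mps = speed_mph * METERS_PER_MILE / 3600.0   # miles/hour -> meters/second
    return range_m / speed_mps

assumed_ranges_m = {
    "narrow forward camera": 250,
    "radar": 160,
    "main forward camera": 150,
    "wide forward camera": 60,
}

for sensor, range_m in assumed_ranges_m.items():
    print(f"{sensor:>21}: {seconds_of_warning(range_m, 71):.1f} s of warning at 71 mph")
```

Even under those generous assumptions, the shorter-range sensors provide only a couple of seconds of warning at 71 miles per hour, which will matter later in the timeline discussion.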

The Counterclockwise Spin

Per the NTSB report for the Mountain View incident, the Tesla struck the attenuator while going 71 miles per hour, and the impact “rotated the Tesla counterclockwise” and ultimately “caused a separation of the front portion of the vehicle.” I think we can all agree that hitting something like the attenuator at a speed of 71 miles per hour is going to exert a tremendous amount of force, and that certainly seems to have been the case since it caused the front of the Tesla to become separated. It was a hard hit. I don’t think anyone can dispute that.

What’s interesting is the notion that the Tesla spun counterclockwise. I’ve not yet seen anyone comment about this aspect. I’ll speculate about it. If the Tesla had hit the attenuator fully head-on, we would need to study the physics of the result, but it generally might not have led to a spin of the Tesla. More than likely, the Tesla probably hit at a front edge of the car, such as toward the left side of the front edge or the right side of the front edge. This would be more likely to generate a spinning action after the impact. Since the Tesla apparently spun counterclockwise, we’ll assume for the moment that it hit at the left side edge of the front of the Tesla, which jammed the left side against the attenuator while the right side of the Tesla continued forward, causing it to pivot around the left side and go counterclockwise. This is all speculation and we’ll need to see what the NTSB has to say about it.
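
To make the spin-direction reasoning a bit more tangible, here is a toy rigid-body check of the yaw torque produced by an offset frontal impact. The contact offsets and force are invented round numbers purely for illustration, not reconstructed values.

```python
import numpy as np

# Toy rigid-body check of the spin direction (a sketch, not a reconstruction).
# Frame: x points to the driver's right, y points forward, z points up.

def yaw_torque(contact_offset_m, impact_force_n):
    """Z-component of the torque about the center of mass; positive means
    counterclockwise rotation when viewed from above."""
    r = np.array(contact_offset_m)   # contact point relative to the center of mass
    f = np.array(impact_force_n)     # force applied to the car at that point
    return float(np.cross(r, f)[2])

# Left-front corner strike: contact roughly 0.9 m to the left and 2.4 m forward of
# the center of mass (rough guesses), with a purely rearward unit force.
print(yaw_torque([-0.9, 2.4, 0.0], [0.0, -1.0, 0.0]))   # positive -> counterclockwise

# A right-front corner strike flips the sign.
print(yaw_torque([0.9, 2.4, 0.0], [0.0, -1.0, 0.0]))    # negative -> clockwise
```

The sign of the vertical torque component is what matters here: a rearward impact force applied to the left of the center of mass tends to rotate the car counterclockwise when viewed from above, which is consistent with the left-front-corner scenario.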

I’ll explain in a moment why I think this counterclockwise spinning is a useful clue.

Aftermath of the Crash

After having crashed and spun, the Tesla was involved in subsequent collisions involving two other cars that were presumably driving southbound and got inadvertently caught up in the incident. I’ll assume for now that those additional crashes had nothing to do with the initial crash per se, and were just part of the aftermath.

I am saddened that those subsequent crashes occurred, and it also serves as a reminder about my earlier remarks that incidents involving self-driving cars aren’t necessarily confined to impacting just the driver but can also include other innocents that get caught up in the cascading impacts. According to the NTSB report, one of those other drivers suffered minor injuries, and the other was uninjured. I’d say that’s nearly a miracle. No matter what led to the initial crash, any aftermath can often be horrific. 

The NTSB report further mentions that the 400-volt lithium-ion high-voltage battery in the Tesla caught fire as it was breached during the incident, and a post-crash fire ensued (these can be intense fires; the fire department arrived and was able to put out the fire). It is uplifting to note that the NTSB report essentially suggests that bystanders got out of their cars and came to the rescue of the driver in the Tesla, bravely removing him from the vehicle, in spite of the dangers of the fire. The NTSB report indicates that the Tesla driver was transported to a hospital but died there from his injuries. In any case, I applaud those brave souls that rescued him from the crash.

Layout of the Crash Scene

The NTSB preliminary report does not depict a layout of the crash scene. I’ve used Google Maps to try to figure out what the crash scene was like. This is admittedly speculation. The final NTSB report should officially describe the crash scene and will usually be based on photos taken at the time and physical evidence at the crash scene, as documented by the investigative team that inspected the location.

Based on what I can discern, it appears that the crash occurred at the split of the US-101 continuing forward and an exit ramp to get onto the SH-85 as a left-side offshoot. There appears to be a triangular shaped “gore area” that divides the two. I’m sure you’ve seen this kind of thing before. You have two lanes that are running next to each other, and at some point up ahead they split from each other. At that splitting or fork, one lane goes straight ahead, while the other veers off. In-between the split is a triangular area that divides the two splitting lanes. In this case, there was a concrete median in that gore area and it had the attenuator at the front-edge of the concrete median.

It appears that, for a stretch leading up to the split, there were two HOV lanes. One HOV lane was for those continuing on the US-101, and the other was for those cars wanting to veer off to the left as part of the exit from the US-101 leading to the SH-85. I don’t know that this is exactly where the incident occurred. As I say, I’ve just looked at Google Maps and tried to guess based on what the NTSB’s slim details so far suggest.

See Figure 1.

Timing of the Crash

The vital aspects about the crash are contained in the timing aspects reported so far by the NTSB.

According to the NTSB, at 18 minutes 55 seconds before the crash, the Autopilot was engaged. It apparently remained engaged the entire time thereafter. This is important because if the Autopilot was not engaged then we likely wouldn’t be discussing any of this; it would generally have been a traditional crash since the Tesla would have been operating like any everyday car. Also, if the Autopilot had suddenly been engaged just seconds before the crash, we might be of a mind to say that the Autopilot had insufficient time to get started and so it was not especially a participant in the crash. In this case, it seems that the Autopilot had been on, it had been on for quite some time before the crash, and it was still on at the time of the crash.

Take a look at Figure 2.

The Tesla was reportedly following a lead car at 8 seconds before the crash (and might have been doing so even longer, but we don’t yet know), and at 4 seconds before the crash it was no longer following a lead car. For those of you not familiar with self-driving cars, they often use a pied piper approach to driving, as sketched in the example below. They spot a car ahead, and then try to match the pace of that car. If the car ahead speeds up, the self-driving car will speed up, but only to some maximum such as the speed limit or perhaps what has been set on the cruise control by the human driver in the self-driving car. If the car ahead slows down, the self-driving car tends to slow down.
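
Here is a minimal sketch of that pied piper behavior, just to show the shape of the logic. This is illustrative only; it is not Tesla’s Autopilot code, and the function name and parameters are my own invention.

```python
# A minimal sketch of the pied piper car-following behavior described above.
# This is illustrative only; it is not Tesla's Autopilot logic, and the function
# name and parameters are invented for the example.

def target_speed(lead_car_speed_mph, cruise_setting_mph, speed_limit_mph=None):
    """Pick the speed to aim for: pace the lead car if one is detected, otherwise
    accelerate toward the driver's cruise setting (optionally capped at the posted
    limit, if the system is configured to respect it)."""
    cap = cruise_setting_mph
    if speed_limit_mph is not None:
        cap = min(cap, speed_limit_mph)
    if lead_car_speed_mph is None:        # no lead car detected ahead
        return cap
    return min(lead_car_speed_mph, cap)

# Following a lead car doing 62 mph with a 75 mph cruise setting:
print(target_speed(62, 75))    # 62 -> pace the lead car
# Lead car no longer detected:
print(target_speed(None, 75))  # 75 -> accelerate toward the cruise setting
```

In the reported timeline, the lead car dropped out of detection at about 4 seconds before impact, and a controller shaped like this would then begin accelerating from 62 mph toward the driver’s 75 mph cruise setting, which is at least consistent with the 71 mph recorded at the moment of the crash.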

See my article about the pied piper approach to self-driving cars: https://aitrends.com/selfdrivingcars/pied-piper-approach-car-following-self-driving-cars/

As I’ve said many times before, this pied piper approach is extremely simplistic. It’s what a teenage novice driver does when they are first learning to drive, but they quickly realize there are many downsides to this approach. If the approach is not augmented by more sophisticated driving techniques, it’s something that has only limited utility. By the way, keep in mind that the Tesla models of today are considered at a Level 2 or Level 3, but are not yet at a true Level 5, which is a self-driving car for which the AI can entirely drive the car without any human intervention. At the levels less than 5, the human driver is considered the driver of the car, in spite of whatever else the “self-driving car” can do or is suggested or implied that it can do.

See my article about the levels of self-driving cars: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

Once the lead car was no longer ahead of the Tesla, the Tesla then reportedly increased speed, since it was going at 62 miles per hour while following the lead car, and then with a presumed clear space ahead it opted to increase speed to 71 miles per hour at the time of the crash. This “makes sense” in that the system will aim to go to the maximum allowed speed if it believes that there is no car ahead of it that would impede doing so. The NTSB says that the driver had set the cruise control to a maximum of 75 miles per hour, and so the Tesla was likely trying to accelerate to that stated speed. As a side note, the NTSB points out that the speed limit in that location is only 65 miles per hour.

What’s especially intriguing in the NTSB preliminary report is that supposedly the Tesla began steering to the left at 7 seconds before the crash. The implication is that this continued until the actual crash. What was causing the Tesla to steer left? We aren’t sure that it was the human driver, since the NTSB says that there weren’t any hands detected on the steering wheel from 6 seconds before the crash up until the crash. We also don’t know how much left steering was involved – was it a radical left or just a mild torque to the left? We really need to see what the NTSB final report says about this.

Per the NTSB, there was no pre-crash braking by the Tesla. This implies that neither the human driver nor the Autopilot system hit the brakes. Per the NTSB, there was no evasive steering. This implies that neither the human driver nor the Autopilot system tried to steer clear of the crash.

This then is the conundrum.

We have a car going about 70 miles per hour that plowed into the gore area and unabatedly slammed into the attenuator that was at the end of the concrete median and did so without any apparent indication by the behavior of the car that it was about to happen.

The human driver did not try to stop the car. The human driver did not try to avoid the attenuator by swerving the car. Either of these options was presumably available, in that we can reasonably assume the brakes were operational and the steering wheel was operational, as far as we know.

The Autopilot system did not try to stop the car. The Autopilot system did not try to avoid the attenuator by swerving the car. Either of these options was presumably available, in that we can reasonably assume the brakes were operational, the steering was operational, and the Autopilot system had the ability to command those controls, as far as we know.

Before I launch into speculation about how this occurred, let’s add some other elements to the situation. The speed of 62 miles per hour is about the same as going 91 feet per second. The speed of 71 miles per hour is about the same as going 104 feet per second. The usual rule-of-thumb for proper driving practices is to maintain a distance of about 1 car length for every 10 miles per hour of speed. Most human drivers don’t do this, and they often are very unsafe in terms of the distance they maintain from a car ahead of them. In any case, most self-driving car systems try to maintain the proper recommended distance.

The Tesla Model X is approximately 16 feet in length. So, going at about 60 miles per hour, it presumably was trying to maintain a distance of about (60 mph / 10) x 16 = 96 feet, or 6 car lengths. In the timeline, this would mean that the Tesla was about one second behind the lead car.
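
For those who like to see the arithmetic spelled out, here is the calculation. The rule-of-thumb gap and the roughly 16-foot car length are the same approximations used above, not values from the NTSB report.

```python
# The arithmetic behind the paragraph above, using the same rule-of-thumb values
# (one car length per 10 mph, and a roughly 16-foot Model X).

CAR_LENGTH_FT = 16
FT_PER_MILE = 5280

def mph_to_fps(mph):
    return mph * FT_PER_MILE / 3600.0

def rule_of_thumb_gap_ft(speed_mph, car_length_ft=CAR_LENGTH_FT):
    """One car length of following distance per 10 mph of speed."""
    return (speed_mph / 10.0) * car_length_ft

print(f"62 mph = {mph_to_fps(62):.0f} ft/s")    # ~91 ft/s
print(f"71 mph = {mph_to_fps(71):.0f} ft/s")    # ~104 ft/s

gap_ft = rule_of_thumb_gap_ft(60)               # rounding the speed to ~60 mph, as above
print(f"rule-of-thumb gap: {gap_ft:.0f} ft, or {gap_ft / CAR_LENGTH_FT:.0f} car lengths")
print(f"time gap at 62 mph: {gap_ft / mph_to_fps(62):.1f} s")   # about one second behind
```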

Where did the lead car go? Did it opt to get out of the HOV lane and maybe went into the lane to the right? In which case, this implies that in one second of time, from 5 seconds out to the 4 seconds out, it changed lanes and got into the next lane over. Or, did it maybe switch into the exit ramp lane to the left? We don’t know.

Or, did the Tesla move to the left from 7 seconds out to 4 seconds out, using up 3 seconds, moving so much that it was no longer directly in the HOV lane that it had been using to follow the lead car? In that scenario, the lead car never made any lane change; it was instead the Tesla that essentially did so, and it therefore no longer detected the lead car that was in the prior HOV lane that the Tesla had presumably been squarely in.

It could be that the Tesla was shifting to the left and ended up not yet fully in the exit ramp lane, nor any longer fully in the ongoing HOV lane. It was in-between. Suppose it had not yet fully ended up in the exit ramp to the left, and then struck the attenuator with the left side of its front. This fits with the aspect earlier that the Tesla then did a counterclockwise spin (I told you that I’d bring this back into the analysis).

Have you ever been in your car on a highway and couldn’t decide whether to take an exit or continue forward, and you began to straddle both options, which ultimately would mean that you’d ram into whatever was sitting between the two options? I’m guessing you’ve done this before. And sweated like mad afterwards at how close you cut the decision of which path to take.

In this case, one scenario is that the Tesla was following a lead car, which seemed perfectly normal and common, and then with just a few seconds before impact with the attenuator, and for reasons yet unknown, the Tesla began to shift to the left, as though it was going to get out of the existing HOV lane and into the exit ramp lane, but did so without sufficient urgency.

Did the Autopilot intend to actively switch lanes?

Or, did it somehow lose itself in terms of the markings on the roadway and it was unsure of where the lane really was?

Why didn’t the human driver take over the controls and do something? One explanation is that with only about 3 seconds left to go, which is the point at which the Tesla was apparently no longer following the lead car, the human driver might not have had sufficient time to realize what was happening. Up until then, perhaps the human driver was watching the car ahead and assumed that as long as the car ahead was ahead, it was safe to continue allowing the Autopilot to drive the car.

Was the human driver paying attention to the road? Maybe yes, maybe no – we don’t know. The NTSB says that for the 60 seconds before the crash, the human driver put their hands on the steering wheel on 3 occasions for a total of 34 seconds. This implies that within that last minute, the human driver possibly was paying attention to the driving task.

The Tesla has a steering wheel touch indicator, but does not have an eye-tracking capability nor a facial-tracking capability. These are aspects that some industry experts have asked about and which in this case could have provided further info about the situation; see my article describing this: https://aitrends.com/selfdrivingcars/responsibility-and-ai-self-driving-cars/

Why didn’t the Tesla detect the attenuator?

In other words, even if the Tesla somehow was veering to the left, whether due to the human driver or due to the Autopilot itself, presumably the Autopilot should have still been able to detect that there was an attenuator up ahead and that the Tesla was heading straight for it.

There are a multitude of possibilities to explain this.

It could be that on this sunny morning the sun was in a position that caused glare and the cameras on the Tesla could not get a clear enough image to detect the attenuator.

You might say that even if that happened, the forward-facing radar should have detected the attenuator.

Was the attenuator so low to the ground and positioned that the radar couldn’t get a solid radar return?

Or, could it be that the damaged attenuator made it less likely to be spotted by radar?

Another possibility is that the camera was reporting that it didn’t see anything ahead, and let’s pretend the radar was saying there was something ahead, but suppose the AI system is coded in a manner such that it needs to have both agree in order to take action. It might be that it was programmed so that if the radar says be wary, but the camera does not agree, then the car continues ahead and waits until the two concur. This is sometimes done to avoid ghosts that appear falsely either on the radar or in the visual processing. Presumably, the AI should not take overt action if it is not “abundantly” the case that action is needed.
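
To illustrate the kind of agreement policy I’m describing, here is a toy version of such a fusion check. It is purely hypothetical; we have no knowledge of how Tesla’s actual sensor fusion is written, and the names and thresholds here are made up.

```python
# A toy version of the "require agreement" fusion policy described above. This is
# purely hypothetical; we do not know how Tesla's actual sensor fusion is written,
# and the names and thresholds are made up.

from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str
    obstacle_ahead: bool
    confidence: float   # 0.0 to 1.0

def should_brake(camera: Detection, radar: Detection, min_confidence: float = 0.6) -> bool:
    """Command braking only when both modalities agree there is an obstacle.
    This suppresses single-sensor false alarms ("ghosts"), at the cost of
    inaction when one sensor misses a real object."""
    camera_agrees = camera.obstacle_ahead and camera.confidence >= min_confidence
    radar_agrees = radar.obstacle_ahead and radar.confidence >= min_confidence
    return camera_agrees and radar_agrees

# Radar flags something ahead, the camera does not: no braking under this policy.
print(should_brake(Detection("camera", False, 0.2), Detection("radar", True, 0.8)))  # False
# Both agree: braking is commanded.
print(should_brake(Detection("camera", True, 0.9), Detection("radar", True, 0.8)))   # True
```

The design trade-off is plain in the example: requiring agreement suppresses false alarms from a single noisy sensor, but it also guarantees inaction whenever one sensor misses a real obstacle.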

In this case, it could be that no action was taken under the assumption by the system that no action was better than taking the wrong action. In hindsight, we of course would say that the system should have taken evasive action, even if it only had partial indication about what was ahead, but this is only speculation about the events that transpired.

It is noteworthy too that the lead car was no longer ahead at the 4-second mark. This means that the Tesla system perhaps had only about 4 seconds in which to spot the attenuator. To some degree, the camera images and the radar could have been blocked by the lead car. With the “sudden” appearance of the attenuator, there is another possible explanation for the situation.

One further scenario is that the Tesla system ran out of time.

Suppose that it really did get solid images of the attenuator, and that it got solid radar. The question arises as to how much time is needed by the Autopilot to digest this information and then take action.

See my framework about AI self-driving cars: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

Here’s the standard framework of stages for a self-driving car:

  •         Sensor Data Collection
  •         Sensor Fusion Analysis
  •         Virtual World Model Updating
  •         AI Action Plan Updating
  •         Car Controls Commands

Suppose with 4 seconds left to impact, the sensor data collection took a chunk of that time. Then, assume that the sensor fusion of combining the sensor data took a chunk of that time, including maybe wrestling with a difference of opinion by the camera images versus the radar. Then, the virtual world model had to be updated to reflect the surroundings. Then, an AI action plan had to be updated as to what steps to next take in terms of the driving of the car. Finally, there is a chunk of time involved in issuing car controls commands and having the car respond and abide by the commands.

One aspect is that the Autopilot used up the available time and was still in the midst of determining what to do. Perhaps it was going to take, say, 5 seconds to figure out what to do and enact an evasive maneuver, but with just 4 seconds until impact it was too late by the time it figured out what action to take.
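
Here is a simple way to picture that time-budget argument. The per-stage latencies below are invented for the sake of the example; actual Autopilot stage timings are not public.

```python
# An illustrative latency budget for the five stages listed above. The per-stage
# times are invented for the sake of the example; actual Autopilot timings are
# not public.

stage_latency_s = {
    "sensor data collection": 0.5,
    "sensor fusion analysis": 1.5,       # includes resolving a camera/radar disagreement
    "virtual world model updating": 1.0,
    "AI action plan updating": 1.5,
    "car controls commands": 0.5,
}

time_to_impact_s = 4.0   # roughly when the lead car stopped blocking the view

elapsed = 0.0
for stage, latency in stage_latency_s.items():
    elapsed += latency
    status = "in time" if elapsed <= time_to_impact_s else "too late"
    print(f"{stage:<30} cumulative {elapsed:3.1f} s  ({status})")

# With these made-up numbers the full pipeline needs 5.0 seconds, so the car
# controls commands would only be issued after the 4-second window had closed.
```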

See my article about cognition timing and AI self-driving cars: https://aitrends.com/selfdrivingcars/cognitive-timing-for-ai-self-driving-cars/

Awaiting the NTSB Final Report

Until the NTSB provides more details in subsequent reports about the incident, we’re all in the dark about what actually happened.

In this analysis, I’ve opted to not get mired in the ongoing debate about who is responsible for the acts of a self-driving car in these kinds of incidents. As Tesla has made very clear, their view is that the human driver is ultimately responsible for the driving of the Tesla cars: “Autopilot is intended for use only with a fully attentive driver,” and furthermore it “does not prevent all accidents – such a standard would be impossible – but it makes them much less likely to occur.” http://abc7news.com/automotive/i-team-exclusive-tesla-crash-in-september-showed-similarities-to-fatal-mountain-view-accident/3302389/

I’ve written and spoken many times about the issue of the notion that self-driving cars and their human drivers are co-sharing responsibility. It’s something as a society we need to carefully consider.

See my article on responsibility and AI self-driving cars: https://aitrends.com/selfdrivingcars/responsibility-and-ai-self-driving-cars/

See my article about human back-up drivers and AI self-driving cars: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

For those of you that are interested in these kinds of self-driving car crash analyses, I’ll be updating my analysis once there is more reporting by the NTSB about this incident. The end result will hopefully make us all aware of the potential limitations of self-driving cars and allow us all as a society to make further informed decisions about what we expect of them. This also should aid auto makers and tech firms in determining what kind of safety aspects they should be including in their AI systems and how to try and make self-driving cars as safe as feasible.

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.