Normalization of Deviance Endangers AI Self-Driving Cars

By Lance Eliot, the AI Trends Insider

The movie “Deepwater Horizon” provides an entertaining and informative glimpse at what transpired in April 2010 that ultimately led to a floating oil drilling platform explosion and the worst oil spill ever in the United States. I suppose I should have said “spoiler alert” before telling you the outcome of the movie, but I am guessing you likely already know that the actual oil drilling platform was called Deepwater Horizon and that the movie of the same name depicted the historical events involving the crisis that occurred there.

If you don’t know anything about Deepwater Horizon, you might assume that the explosion and fire onboard the platform were likely due to slipshod work that failed to abide by safety practices and that the crew there had become complacent and careless in their efforts. Would you then be surprised to learn that the platform had one of the highest safety records and had gone a nearly unprecedented seven years without any significant accidents? Ironically, the platform crew had just received a special recognition award for their incredible safety accomplishment, receiving it just prior to the deadly explosion that killed nearly a dozen crew members and started a disastrous oil spill that went on for months.

The crew consisted of seasoned experts. They were known for their safety record. How could they have been so unaware that a major incident was soon to occur? Also, keep in mind that they had practiced and were presumably prepared for all sorts of calamities that could happen on the platform. They routinely drilled for various kinds of breakages, accidents, and other snafus that could disrupt the platform or cause a severe problem. They had equipment that provided all sorts of indicators about the status of the platform. They had automatic alerts throughout the systems there, helping to ensure that any kind of anomaly would be detected right away and that the humans running the platform would be warned something was amiss, so they could take action before a small problem erupted into a big one.

One answer to this confounding paradox is that it might have been due to normalization of deviance.

What’s normalization of deviance, you are likely asking? It is a phenomenon of human behavior in which we allow small deviations beyond the norm and shrug them off, often due to overconfidence, and those small variances begin to add up until they become an overwhelming deviation. You might think of this as a snowball effect, wherein one small aspect leads to another and another, until ultimately we have a gigantic snowball that plows us under.

Notice that I mentioned this can occur due to overconfidence. Another possibility is that someone might not be astute enough to even realize that the small deviations are occurring. If you have a novice doing a task, they might not realize when something is outside the norm, precisely because they don’t yet even know what the norm is. In the case of the Deepwater Horizon, as I mentioned earlier, the crew members were seasoned experts. They weren’t novices. So, unlike novices, whose failure to recognize small deviations is readily understandable, experts usually fall prey due to overconfidence in their own abilities. They are too smart for their own good, one might say.

Space Shuttle Challenger Lessons

You might be tempted to think that maybe these experts were actually so-called amoral calculators. That’s a phrase used to describe someone who does not take the morality of their decisions into account and who opts to purposely undermine or avoid safety precautions. The famous case of the space shuttle Challenger was at first thought to be an example of tossing safety to the wind by amoral calculating personnel.

Challenger was the shuttle that blew up shortly after launch. Rubber seals that had become hardened and brittle during cold weather on the launch day ended up allowing a leak that burned a hole into the fuel tank, and the rest is sad history. Much of the media attention afterward went toward the notion that, in spite of knowing about the dangers of cold weather, NASA’s decision to launch that day must have been the work of evildoers willing to sidestep safety for the sake of pleasing the media and others waiting for the launch.

Turns out that NASA had been aware of the seal issues over an extended period of time, but step by step had come to believe that they were not much of a big deal and could be readily managed (see a great book by sociologist Diane Vaughan entitled “The Challenger Launch Decision” that covers the NASA incident in fascinating detail). They had incrementally deluded themselves into believing that the rubber seals were simply a fact of life that had to be dealt with. And they believed it could be dealt with, since prior launches had gone off without a hitch.

For Deepwater Horizon, some analyses of the incident have concluded that the drilling platform destruction and resultant oil spill were the result of years of stretching the envelope. Over time, these floating platforms have been taken into deeper and deeper waters. The technology and the procedures were modestly adjusted to handle the increasingly severe pressures and other hazards in those deeper waters. It seemed as though they were successfully adjusting, since the Deepwater Horizon had such an unblemished record. We must be doing something right, they figured. Why else are we able to keep taking on higher risks and yet haven’t seen any substantial problems arise?

In the end, this overconfidence and snowball effect can lead to a sudden and dramatic disaster. Any organization today that tackles complex systems with the potential for life-and-death results should be cognizant of the normalization of deviance phenomenon and be on the lookout for it. Early detection is imperative: catching on early enough means explicitly recognizing that things are building toward a really bad result.
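
To make the detection idea a bit more concrete, here is a minimal sketch, in Python, of how a team might track the cumulative drift of a safety threshold across releases rather than judging each small relaxation in isolation. The threshold names, values, and drift budget are hypothetical assumptions for illustration, not drawn from any actual program:

```python
# Hypothetical sketch: tracking the cumulative drift of a safety threshold
# across releases, so that many "small" relaxations get judged against the
# original baseline rather than only against the previous release.

BASELINE_BRAKE_MARGIN_M = 30.0   # assumed original stopping-distance margin, in meters
MAX_ALLOWED_DRIFT = 0.15         # assumed budget: at most 15% total relaxation

def check_threshold_change(history, proposed_margin_m):
    """Warn when a proposed margin drifts too far from the original baseline."""
    drift_from_baseline = (BASELINE_BRAKE_MARGIN_M - proposed_margin_m) / BASELINE_BRAKE_MARGIN_M
    step_change_m = (history[-1] - proposed_margin_m) if history else 0.0
    if drift_from_baseline > MAX_ALLOWED_DRIFT:
        return (f"REVIEW REQUIRED: cumulative relaxation of {drift_from_baseline:.0%} "
                f"exceeds the budget, even though this step only trims {step_change_m:.1f} m")
    return "OK"

# Each individual change looks small, yet the series drifts well past the budget.
margin_history = [30.0, 28.5, 27.0, 26.0]
print(check_threshold_change(margin_history, proposed_margin_m=24.5))
```

The point is that each individual change can look harmless on its own, while the series as a whole has drifted well past what anyone would have accepted up front.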

What does this have to do with AI self-driving cars?

At the Cybernetic Self-Driving Car Institute, we believe that there are auto makers and tech firms developing AI self-driving cars that are potentially undergoing normalization of deviance, yet are currently unaware that they are doing so. It could lead to a disastrous result. We have been trying to bring attention to this real and alarming possibility, and also to do something about it.

Scampering to Lead the Way

First, keep in mind that right now there is an arms race of sorts taking place in the AI self-driving car arena. Each day, one auto maker or tech firm or another announces a claimed new breakthrough toward self-driving cars. We’ve seen that the self-driving car makers are each scampering to get their self-driving cars onto the roadways. The belief is that public perception of who is leading the way will determine which auto maker or tech firm becomes the chosen self-driving car model that everyone else will want. It’s the usual tech industry first-mover way of thinking. Whoever lands on the moon first will grab all the market share, it is assumed.

Unfortunately, this notion is also promoting very aggressive choices in how the AI systems work on self-driving cars. Sufficient time is not being allowed to make carefully considered choices. It’s a matter of developing the software, creating the neural network, or whatever, doing some amount of apparently sufficient testing, and then putting it into the real world.

Now, we are not suggesting that these are amoral calculators. Instead, we are suggesting that, just like in the Deepwater Horizon example, these are experts doing what they do, with an earnest belief in what they are doing. But, along the way, they are shaving corners here and there. Overall, their AI self-driving cars seem to be working, and this emboldens them to shave more corners and keep going. Inch by inch, they are building toward an adverse outcome that they don’t even realize is coming. They are deluding themselves into thinking that since things are going well, they will continue to go well.

To date, the incidents involving AI self-driving cars have been pretty much kept quiet. Yes, in some states there are requirements that self-driving cars being tested on public roads must report their incidents. But the reporting is usually at a high level and without sufficient detail for the general public to know what the incident truly consisted of. Furthermore, most of the self-driving cars have a human engineer on board who is there to take over the controls if something goes amiss. In that sense, it further masks what the incident might have been.
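
To illustrate how thin that public reporting can be, here is a hypothetical sketch of what a high-level disengagement record might look like. The field names and values are invented for illustration and are not drawn from any state’s actual reporting format; notice how little such a record reveals about what the AI actually did wrong:

```python
# Hypothetical sketch of a high-level disengagement record. The fields are
# invented for illustration; real state reporting formats differ. A record
# like this tells you a human took over, but not why the AI faltered.
from dataclasses import dataclass

@dataclass
class DisengagementReport:
    date: str              # e.g., "2017-03-12"
    location: str          # often just "city street" or "highway"
    initiated_by: str      # "safety driver" or "AI system"
    cause_category: str    # a broad bucket, e.g., "perception", "planning", "other"
    description: str       # a sentence or two, rarely more

report = DisengagementReport(
    date="2017-03-12",
    location="city street",
    initiated_by="safety driver",
    cause_category="planning",
    description="Driver took control as a precaution near a construction zone.",
)
print(report)
```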

As we get nearer to AI self-driving cars on the public roads that don’t have a human engineer waiting breathlessly to take over the vehicle, we are likely to finally begin to see some of the results of the normalization of deviance. The AI will get itself into a pickle and be unable to readily resolve the situation. The notion that the AI should merely slow down the self-driving car and pull off to the side of the road is not going to be sufficient in all cases. There will be cases wherein the self-driving car gets into a bad incident, and it will potentially be due to the AI system.
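
As a rough sketch of why the pull-over fallback isn’t a cure-all, consider the kind of decision logic involved. The inputs and thresholds below are assumptions for illustration, not any auto maker’s actual logic; the point is that the fallback itself has preconditions, such as a reachable shoulder and manageable traffic, that won’t always hold:

```python
# Minimal sketch of a "slow down and pull over" fallback. The inputs and
# thresholds are assumptions for illustration only. Note that the fallback
# itself has preconditions (a reachable shoulder, manageable traffic), so it
# cannot be the answer to every situation the AI gets itself into.

def fallback_maneuver(shoulder_available: bool, shoulder_distance_m: float,
                      traffic_clear: bool) -> str:
    if shoulder_available and shoulder_distance_m < 200 and traffic_clear:
        return "decelerate and pull onto the shoulder"
    if traffic_clear:
        return "decelerate and stop in lane with hazard lights"
    # No shoulder and traffic is not clear: there is no safe scripted answer,
    # which is exactly the kind of corner case that shaved corners leave exposed.
    return "NO SAFE FALLBACK: escalate for remote or human assistance"

# On a bridge with no shoulder and heavy traffic, the simple fallback runs out of options.
print(fallback_maneuver(shoulder_available=False, shoulder_distance_m=0.0,
                        traffic_clear=False))
```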

Hopefully, when these first bad incidents happen, it will be a wake-up call. If we can all catch the normalization of deviance before it gets too far along, it might be a means to ensure that all of the auto makers and tech firms become more introspective about their systems. We might also see more of a movement toward having third parties come in and review or audit the AI systems being devised for self-driving cars. If it gets bad enough, with a sizable number of serious incidents, we might even see regulators put regulations in place to combat this.

Right now, the AI self-driving car is the darling of new automation. It appears to have the promise of zero fatalities (a myth that I’ve spoken and written about many times), along with other tremendous positive societal impacts. Suppose, though, that we are in the midst of creating AI self-driving cars that each have hidden disaster points within them. If this continues without being caught early, we could end up with thousands upon thousands of self-driving cars on the roads that are waiting to have bad results.

I know you are thinking that we can always just do an over-the-air fix for anything that arises. In other words, most of the AI self-driving cars are being equipped with an ability to update or revise the software on-board the self-driving car via a network connection. Yes, this is a quite handy way to fix things, but imagine if there are thousands upon thousands of AI self-driving cars that need these over-the-air fixes. Will the self-driving cars be considered unusable until an adequate fix is available, leaving all of those owners unable to use their vaunted self-driving cars until the fix is perfected? How long will it take to derive such a fix? And if a self-driving car is unable to connect to a network, will it even know that it needs an update, and thus be cruising around with a known ticking software bomb within it?
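
Those questions show up directly in the update-check logic itself. Here is an illustrative sketch of an over-the-air check; the endpoint, version scheme, and restriction policy are hypothetical assumptions, not any auto maker’s actual OTA mechanism:

```python
# Hypothetical sketch of an over-the-air (OTA) update check. The URL, version
# scheme, and restriction policy are assumptions for illustration only.
import json
import urllib.request

INSTALLED_VERSION = "2.3.1"                        # assumed on-board software version
UPDATE_SERVER = "https://example.com/ota/latest"   # placeholder endpoint

def check_for_update() -> str:
    try:
        with urllib.request.urlopen(UPDATE_SERVER, timeout=5) as resp:
            latest = json.load(resp)
    except (OSError, ValueError):
        # No connectivity (or an unreadable response): the vehicle cannot even
        # learn that a critical fix exists. Policy question: keep driving, or
        # restrict operation until it can check in?
        return "UNKNOWN: could not reach update server; running on possibly stale software"

    if latest.get("critical") and latest.get("version") != INSTALLED_VERSION:
        # A critical fix exists but is not installed yet. Another policy
        # question: is the car usable until the fix is downloaded and verified?
        return f"CRITICAL UPDATE PENDING: version {latest['version']} (restrict the vehicle?)"
    return "UP TO DATE"

print(check_for_update())
```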

AI Self-Driving Cars Distribute Risk

Fortunately, the odds of a large-scale disaster similar to a shuttle launch or a drilling platform eruption are relatively low for self-driving cars, primarily because self-driving cars are of a distributed nature and not one large monolith. In that sense, we are distributing the risk. In saying this, though, I don’t want experts and non-experts alike to think that they shouldn’t therefore be concerned about normalization of deviance. It remains a true and valid concern for the developers and companies that are making AI self-driving cars.

Early successes with AI self-driving cars will continue to build-up support for pushing forward on this exciting technology. If we have some fatalities and they are due to poorly written code in the AI of the self-driving car, I would bet that the public perception of self-driving cars will rapidly erode. Once the public perception erodes, you can bet for sure that the elected government officials are also going to take a dim view of self-driving cars. It will dampen progress and maybe even derail where things are headed. Let’s not let that scenario play out.

Instead, let’s make sure that the AI self-driving car makers are watching out for the normalization of deviance. As they say, mistakes, mishaps, and disasters are often socially generated and can be traced to social structures and systems that failed to detect and avert them. Time for AI self-driving car developers and firms to make sure they are attuned, especially before someone gets hurt and the brakes are applied to our industry.

This content is originally posted to AI Trends.