By Lance Eliot, the AI Trends Insider
I’m sure that you are familiar with the term post-mortem.
We see in the news all the time that when someone dies, and there are suspicious circumstances, a post-mortem is done to identify what happened and how the person died. Did the bullet enter the front or the back of the person? Did they die directly from the penetration of the bullet, or did they die because they bled out from the bullet wound? Often it takes weeks for a proper post-mortem to be performed. It requires specialized skills and careful examination, and it can produce insights but also create new questions. Suppose there’s a knife wound, a bullet wound, and a blow to the head. A post-mortem might be inconclusive about what actually killed the person (any of those three might have done it), or might allow for a multitude of interpretations (one expert says it was the knife, another says it was the blow to the head).
The concept underlying a post-mortem has gradually found its way into businesses. When a major project falters or fails, a company post-mortem is often done to figure out what happened. Maybe the company failed to provide sufficient resources to get the project done. Maybe the project had an unrealistic deadline and could not be achieved in the stated timeline. Maybe the project failed to gauge what was actually wanted, and so the final results don’t do what was hoped for. And so on.
These post-mortems can be done internally, while in other cases an outsider is brought in. If a company thinks that its internal teams cannot be “unbiased” in their introspective assessment, it is often handy to bring in an outsider. There is also value in using an outsider in that they might have special skills at doing post-mortems of business projects; there are methods that can be used and a variety of specialized techniques. Of course, sometimes the internal teams worry that the outsider is merely being used to undertake a witch hunt. If a project has faltered or failed, there is often a price to pay, and an outsider might be the means to find or allege a guilty party, and then off with their heads. To the chagrin of some, at times the true guilty party gets off the hook and someone else takes the fall.
In my experience as a seasoned company leader and executive, I always try to focus the post-mortem on finding out what we can do to avert such a falter or failure in the future. This does not necessarily mean that a particular person or persons screwed up per se. It could be that the company processes were the culprit. Or maybe top management was just as culpable, and though painful to consider, that is something any good leader needs to be ready to see. Plus, the leadership should neither pre-determine the outcome (sometimes they tell an outsider what outcome they want) nor be reactive during the post-mortem effort.
I actually prefer to refer to the effort as a debriefing or post-project analysis. The word post-mortem has a quite negative connotation. It suggests someone died. In the case of projects, they are rarely of a life-and-death nature. A leader needs to be aware that some projects will fail, and in fact there are many who advocate that if you aren’t failing some of the time, you aren’t trying hard enough. The word post-mortem conjures up images of death, blood, and ugliness, and in a conventional post-mortem you are looking for the murder weapon and whodunit. I’d prefer that the debriefing or post-project analysis of a project be more positive in nature and be about lessons to be learned and changes to be made.
Allow me to introduce another term for a somewhat different aspect, a term that I’m betting you might not know: the pre-mortem.
Yes, the word is pre-mortem. What’s that, you might ask?
Pre-Mortem
It is a term used in business to suggest that one way to keep a project from faltering or failing is to try, beforehand, to predict why it might falter or fail. You do this before the project gets underway. This allows you to look at the project in a different way.
You normally perceive a project in a start-to-finish manner. You do one step, then the next step, and then the next. As much as possible, you are aiming to make sure that each of the steps does what it is intended to do. Anticipating potential errors or issues is important. So, you try to build various contingencies into the project, in case things go awry, and you collect metrics in order to detect as early as possible when something is amiss.
The sooner you can detect an error or issue, usually the less effort and cost to correct it. If you allow an error or issue to fester and permeate the rest of the project, it can become harder to deal with. It’s almost like aiming at the moon from the earth: if your rocket ship veers at the start of the journey, you could end up millions of miles away from the moon. If your rocket is on course for most of the distance, and veers toward the tail end, it is usually easier to do a course correction and get back on track.
For a pre-mortem, you need to think about the project from finish to start.
This sometimes sparks you to find potential errors and issues that could otherwise surprisingly arise. Suppose I ask you to read the alphabet in the letter order of A to Z. You have done it a thousand times. You can rattle off each letter without hesitation and without thought. Suppose I then ask you to read the alphabet in order from Z to A. Suddenly, you go much slower. You need to give the matter special attention. Studies have shown that people reading a list from A to Z that has a letter missing or out of sequence often do not catch the error, because they are reading at lightning speed. Meanwhile, when having to go in the reverse order, they mentally have to work out each step and are more likely to find a letter out of sequence.
That’s what can happen when you do a good pre-mortem. It forces you to carefully review each step and think about what could go wrong. You begin by trying to identify ways in which the project outcome could come out wrong. Suppose the project goes over-budget. OK, now, how could that arise? If there are five steps to the project, you’d want to look at which of the steps involves the largest expenditure. Aha, the third step will involve the purchase of some needed expensive materials, and if that cost has gone up by the time the third step occurs, it could bust the budget. Having thought of this beforehand via the pre-mortem, you might decide to go ahead and get the materials now, rather than waiting and getting stuck because the price has gone up when that third step, months from now, gets underway.
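To make the budget walk-back concrete, here is a minimal sketch in Python. It is purely illustrative: the step names, costs, worst-case figures, and the budget number are all hypothetical, not drawn from any real project.

```python
# Hypothetical pre-mortem check: which step, if its costs drift to the
# worst case, would by itself push the project over budget?

BUDGET = 100_000  # total approved budget (hypothetical)

# (step name, estimated cost, worst-case cost if prices drift upward)
steps = [
    ("1. Design",             10_000, 11_000),
    ("2. Prototype",          15_000, 17_000),
    ("3. Purchase materials", 40_000, 60_000),  # volatile material prices
    ("4. Assembly",           20_000, 22_000),
    ("5. Field testing",      10_000, 11_000),
]

baseline = sum(cost for _, cost, _ in steps)

# Walk backward from the adverse outcome (busted budget) to the step
# that could most plausibly produce it.
for name, cost, worst in steps:
    projected = baseline - cost + worst
    if projected > BUDGET:
        print(f"Pre-mortem flag: {name} could bust the budget "
              f"(projected total {projected:,} vs budget {BUDGET:,})")
```

In this toy example, only the materials-purchase step gets flagged, which is exactly the kind of early warning that might prompt buying the materials now rather than months from now.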
Notice that I said the phrase “a good pre-mortem,” which is an important distinction. Just like a post-mortem that gets twisted into a witch hunt, a pre-mortem can get twisted in untoward ways. In some companies, the pre-mortem turns into a political battle, whereby those who want the project are protective and resist any kind of suggested foul outcome, while those who don’t want the project to proceed will make up wildly adverse outcomes. If you can get a wildly adverse outcome to seem plausible and stick, it might scare the stakeholders into deciding not to proceed at all. As such, a pre-mortem carries a danger that it can inadvertently kill a project before it even gets underway.
That being said, there is certainly the chance that the project should not be taking place, and thus the pre-mortem might get everyone to have a more realistic sense of the risks involved. It could be that without a pre-mortem, nobody seriously considered what could go awry. The pre-mortem could save you from a big problem. Generally, though, the notion is that you want the project to succeed, and the pre-mortem helps to further assure that it will do so.
To recap, you do a pre-mortem by first trying to identify potentially adverse outcomes, and then you work backward through the steps of the project to try and ascertain how such an outcome could occur. When you find where it could occur, you then reconfigure the project so that it will either avoid that bad outcome or otherwise mitigate its chances of occurring.
In terms of identifying potentially adverse outcomes, some critics of the pre-mortem say that you could spend forever coming up with a zillion bad outcomes.
Suppose I am going to invent a new toothbrush, and suppose the toothbrush ends up harming people’s teeth because the bristles are too harsh. This seems like a reasonable kind of bad outcome, meaning that it is something that we all could reasonably agree could go wrong. Suppose someone says that the new toothbrush could have as an outcome that it causes cancer. I dare say that this seems rather farfetched. It’s hard to imagine how a toothbrush could cause cancer. Even if you can come up with something oddball to cover it (the toothbrush is made of cancerous materials), it is really a bit out there in terms of a reasonable kind of adverse outcome.
Therefore, I always try to brainstorm for what seem like reasonably plausible bad outcomes. We might list a comprehensive bunch of bad outcomes, and then review the list for reasonableness. The effort to figure out how each of the outcomes might arise can be substantial, so you don’t want to consume effort unless you think that a particular bad outcome seems plausible. On the other hand, don’t knock off the list bad outcomes that could truly happen, since you would then be undermining the point of doing the pre-mortem.
I had one executive who was irked when we suggested that one bad outcome could be that a new system being developed for a major project could create a security hole in their massive database and allow hackers to get into it. He insisted this was “not possible” and that we should strike it from the list. It was such an emotionally charged outcome that he refused to look at it in any impartial manner. Sometimes a pre-mortem needs a delicate hand to go well.
What does this have to do with AI self-driving cars?
At the Cybernetic Self-Driving Car Institute, we make use of the pre-mortem for our AI development efforts and we urge auto makers and tech firms that are also making AI software for self-driving cars to do the same.
Pre-Mortem Process
Take a look at Figure 1.
This diagram shows some important systems development processes.
The typical “waterfall” style development effort consists of doing a system design, then a system build, then system testing, then fixes based on the testing, and then fielding of the system. Like most dev shops these days, we use agile methods, so this portrayal of the classic method is somewhat of a simplification, but it gets across the overall points that I want to make.
I’ve circled the steps that involve the testing and the fixing of the bugs or errors discovered during testing. This is the part of the systems development process that involves trying to find errors or issues and then resolving them.
Suppose we’ve seeded an error or issue, unknowingly, and we don’t find it during the systems development process. In that case, the system gets fielded with a hidden error or issue embedded inside it. Let’s hope that the error or issue will not arise at an inopportune time. For your everyday systems like an online dating system, if an error arises it might not be especially life threatening (though maybe it pairs you with the worst date ever).
For AI self-driving cars, since they are life-and-death systems, the testing and the error fixing need to be extremely rigorous. Some firms are rigorous in this, some are not. Even the ones that are rigorous still face the chance that an error or issue was missed and now exists in the live system that is driving you around in that shiny new self-driving car. You can bet that there’s an error or issue hidden somewhere in there. Estimates are that self-driving car software consists of millions of lines of code. I assure you that the code is not going to be perfect. It will have imperfections, for sure.
I am sure that some of you are howling that even if there is an error or issue, it can be readily fixed by an OTA (Over The Air) update to the self-driving car. Yes, that’s a possibility. But meanwhile, I ask you, if the error or issue has to do with, say, preventing the self-driving car from smacking into a wall, and suppose this actually happens, what then? Sure, if the auto maker or tech firm later finds the error or issue, it can do an update to all such self-driving cars, assuming that the OTA is working properly and that the self-driving cars are doing their OTA updates. Nonetheless, we still have a dead person or people due to the error or issue, and maybe even more deaths until the error or issue is figured out and fixed.
Let’s go along with the notion that in fact a self-driving car does smack into a wall. We’d want to do a post-mortem of the AI system and the self-driving car.
In Figure 1, you can see the process for doing a post-mortem of the system.
You start with whatever you know about what actually happened. You then usually go into the code of the system to try and figure out how it could have led to the adverse outcome. This might also get you to relook at the design of the system. It could be that the error or issue is some isolated bug, or it could be that the system design itself was flawed, in which case it is a larger matter than simply changing some code.
For the post-mortem of the self-driving car smacking into a wall, we’d want to collect as much information about the nature of the incident as we could get. We’d want to get the black box that presumably resides in the self-driving car. We’d want whatever scene analysis has been done of the conditions leading up to and at the crash point, such as whether the streets were wet from rain, and so on. We’d want to examine the memory of the on-board devices. We’d want to see the OTA information on whatever the latest status of the self-driving car was as it communicated with the cloud-based updating system. Etc.
Based on whatever we can discover about the incident, the next step in the post-mortem involves searching in the AI system to try and figure out what led to the self-driving car willingly going into the wall. This might involve code inspection. It might involve examining neural networks being used by the AI. And so on.
The question arises as to whether whatever we find could possibly have been found sooner.
As shown in Figure 1, the pre-mortem might have led to discovering whatever the error or issue is, and had we found it during the pre-mortem it might have been corrected prior to the AI self-driving car being fielded.
The pre-mortem process is quite similar to the post-mortem process. You begin with an adverse outcome. In the case of the post-mortem, it’s an adverse outcome that actually occurred. In the case of the pre-mortem, it’s an adverse outcome that you predict could occur.
For the post-mortem, you usually look first into the guts of the system, and then, depending upon what you find, you take a look at the overall design. For a pre-mortem, we typically look at the design first, trying to find a means by which the design itself could allow the adverse outcome. If we find something amiss in the design, then it requires fixing the design and fixing whatever code or system elements are based on the design. Even if we cannot discern any means for the design to produce the adverse outcome, we still need to look at the code and the guts of the system, since it is feasible that the system itself has an error or issue that is otherwise not reflected in the design.
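Here is a minimal sketch of that ordering difference, under the assumption that the design review and code inspection are stand-in placeholder functions; a real review would of course be far more involved.

```python
# Hypothetical placeholders standing in for real design reviews and
# real code/neural-network inspections.

def review_design(outcome):
    # Placeholder: would examine whether the design allows the outcome.
    return [f"design finding related to: {outcome}"]

def inspect_code(outcome):
    # Placeholder: would examine code, neural networks, logs, and so on.
    return [f"code finding related to: {outcome}"]

def post_mortem(actual_outcome):
    # Adverse outcome actually occurred: dig into the guts first,
    # then revisit the design based on what turns up.
    return inspect_code(actual_outcome) + review_design(actual_outcome)

def pre_mortem(predicted_outcome):
    # Adverse outcome merely predicted: check the design first, then
    # still inspect the code, since the implementation can harbor
    # errors the design never reflected.
    return review_design(predicted_outcome) + inspect_code(predicted_outcome)

print(pre_mortem("self-driving car smacks into a wall"))
```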
Take a look at Figure 2.
When looking for culprits in either the guts of the AI system or in the design, you would usually do so based on the overarching architecture of the AI system that was developed for the self-driving car. This usually consists of at least five major system components, namely the sensors, the sensor fusion, the virtual world model, the AI action plan, and the controls activation.
The sensors provide data about the world surrounding the self-driving car. There is software that collects data from the sensors and tries to interpret the data. This is then fed into the sensor fusion component, which takes the various sensory data and tries to figure out how best to combine it, dealing with some data that is bad, some data that conflicts with other data, and so on. The sensor fusion then leads into updating of the virtual world model. The virtual world model provides a point-in-time indication of the overall status of the self-driving car and its surroundings, based on the inputs from the sensors and the sensor fusion. The AI then creates an action plan of what the self-driving car should do next, and sends commands via the controls activation to the car driving controls. This might include commands to brake, or to speed up, or to turn, etc.
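To illustrate the flow through the five components, here is a minimal Python sketch. Every function body is a hypothetical placeholder; a real self-driving stack is vastly more elaborate, and the readings and decisions shown are invented for illustration only.

```python
# Placeholder pipeline: sensors -> sensor fusion -> virtual world model
# -> AI action plan -> controls activation.

def read_sensors():
    # Placeholder readings from camera, radar, LIDAR, etc.
    return {"camera": "wall ahead", "radar_m": 12.0, "lidar_m": 12.1}

def sensor_fusion(raw):
    # Placeholder: would reconcile bad or conflicting readings.
    return {"obstacle_ahead": True, "distance_m": min(raw["radar_m"], raw["lidar_m"])}

def update_world_model(world, fused):
    # Point-in-time snapshot of the car and its surroundings.
    world.update(fused)
    return world

def plan_action(world):
    # Decide what the self-driving car should do next.
    return "brake" if world.get("obstacle_ahead") else "maintain speed"

def activate_controls(action):
    # Placeholder: would issue commands to the actual driving controls.
    print(f"command issued: {action}")

world_model = {}
fused = sensor_fusion(read_sensors())
world_model = update_world_model(world_model, fused)
activate_controls(plan_action(world_model))
```

The point of laying it out this way is that each stage is a candidate place for the culprit error or issue, which is exactly how the culprit labels below are organized.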
If we were trying to figure out why the self-driving car ran smack into a wall, the first approach would be to try and find a single culprit. Maybe one of the sensors failed and it led to the catastrophic result (which we’ve labeled error E1-1). Maybe the sensor fusion had an error and thus misled the rest of the AI (error E2-1). It could be that the virtual world model has an error or issue (E3-1). Or it could be that the AI action plan contained an error or issue (E4-1). Or it could be that the controls activation has some kind of error or issue (E5-1).
Sometimes there might indeed be a single culprit. This, though, is often not the case, and it might be that multiple elements were involved. The nature of the AI of the self-driving car is that it is a quite complex system. There are numerous portions and lots of interconnections. During normal testing, while in system development, many of the single-culprit errors or issues are more likely to be found. The tougher ones, the errors or issues involving multiple elements, are harder to find. Furthermore, some development teams get worn out testing, or use up whatever testing time or resources they had, and so trying to find really obscure errors or issues is often not in the cards.
Rather than focusing on a single culprit, the next level of analyzing would be to look for the double culprit circumstance.
See Figure 3.
In Figure 3, you can see that there are situations where the error or issue might be found within both the sensors and the sensor fusion (error E01-2). It could be that a sensor reported bad data, the software did not catch it, this was fed into the sensor fusion, the sensor fusion got confused by the bad data, it had no provision for what to do, and thus it fed into the virtual world model a false indication that the wall wasn’t there. This is a case where two wrongs don’t make a right.
You can have a situation where an error in one component happens to cause an error in a second component to arise. In other words, the second error would not have surfaced except for the fact that the first component had an error. The two errors might not be directly related to each other. They might have been developed completely separately. That being said, it’s also possible that whatever led to the error in the first component, during development, might have also led to an error in the second component. If a developer, Joe, made an error in the first component, and he is an error-prone developer who also worked on the second component, you might well have an error in the second component too.
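One way to keep the single-culprit and double-culprit hypotheses organized is simply to enumerate them across the five components. The sketch below does that; note that the way I extend the labeling (E01-2 through E10-2) is my own illustrative convention, not a standard scheme.

```python
# Enumerate single- and double-culprit hypotheses across the five
# components of the AI self-driving car architecture.

from itertools import combinations

components = [
    "sensors",
    "sensor fusion",
    "virtual world model",
    "AI action plan",
    "controls activation",
]

singles = list(combinations(components, 1))  # 5 single-culprit hypotheses
doubles = list(combinations(components, 2))  # 10 double-culprit hypotheses

for i, (c,) in enumerate(singles, start=1):
    print(f"E{i}-1: fault isolated to the {c}")

for i, (a, b) in enumerate(doubles, start=1):
    print(f"E{i:02d}-2: interacting faults in the {a} and the {b}")
```

Even this toy enumeration makes the testing burden visible: five single-culprit cases balloon to ten double-culprit combinations, and triples would be worse still, which is why the multi-element errors so often slip past normal testing.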
What kind of adverse outcomes should you be considering for an AI self-driving car?
As shown in Figure 4, there are adverse outcomes that are directly caused by the self-driving car. The AI self-driving car might hit a non-car, non-human object, such as a tree, a wall, or a fire hydrant. You would want to postulate this happening, as a predicted adverse outcome, and try to walk back through the AI and the self-driving car system, in order to try and detect how this could possibly happen.
There are other such adverse outcomes. The self-driving car might hit another car. The other car might be stationary or moving. The self-driving car might be stationary or moving. The self-driving car might hit a pedestrian, or it might hit a bicyclist, or it might hit a motorcyclist. It would be important to also start layering in the conditions that might exist.
For example, we might postulate a scenario whereby the AI self-driving car is driving at night, on dry roads, going at a speed of 40 miles per hour, and it runs into a pedestrian. These more specific conditions make the scenario more amenable to discerning how the AI system could allow this to occur.
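Here is a minimal sketch of how such predicted adverse outcomes and their conditions might be catalogued for a pre-mortem. The fields and example scenarios are hypothetical, not drawn from any real incident database.

```python
# A small, hypothetical catalogue of predicted adverse outcomes, each
# with the conditions layered in, ready for a backward walk through the
# design and the code.

from dataclasses import dataclass

@dataclass
class AdverseScenario:
    struck: str           # what is hit: pedestrian, wall, another car, ...
    time_of_day: str      # e.g., "night" or "day"
    road_condition: str   # e.g., "dry" or "wet"
    speed_mph: int        # speed of the self-driving car at impact
    directly_caused: bool = True  # False for indirect outcomes, e.g. forcing another car to swerve

scenarios = [
    AdverseScenario("pedestrian", "night", "dry", 40),
    AdverseScenario("wall", "day", "wet", 25),
    AdverseScenario("telephone pole (other car swerves)", "day", "dry", 45,
                    directly_caused=False),
]

# Each scenario becomes the starting point for asking how the AI system
# could possibly allow it to occur.
for s in scenarios:
    kind = "direct" if s.directly_caused else "indirect"
    print(f"Pre-mortem scenario ({kind}): hits {s.struck} at {s.speed_mph} mph, "
          f"{s.time_of_day}, {s.road_condition} roads")
```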
The pre-mortem can also involve examining adverse outcomes that aren’t necessarily incidents directly caused by the AI self-driving car.
See Figure 5.
Various indirect causes include that the AI self-driving car just suddenly seems to come to a stop, or suddenly seems to rush ahead, or suddenly seems to change lanes. Even though the self-driving car perhaps won’t hit and harm anyone through these actions, it might cause another car to react in a way that turns into a car crash. If the AI self-driving car suddenly switches lanes and cuts off a car coming in that lane, the other car might swerve to avoid hitting the AI self-driving car, and then the swerving car loses control and rams into a telephone pole. The AI self-driving car was “innocent” of being in the accident, but was a factor in producing it. These outcomes are worthy of a pre-mortem assessment too.
Few of the auto makers and tech firms that are making AI self-driving car systems are doing pre-mortems. It’s an approach not widely used overall. But for systems of a life-and-death nature, doing a pre-mortem adds more confidence that the fielded system will have less chance of harboring disastrous errors or issues.
Some AI developers say to me that you can never fully find all errors or issues beforehand, and thus they seem to imply that there’s no point in doing things like a pre-mortem. I don’t buy into that logic. It’s the proverbial throwing the baby out with the bath water. We need to try and do as much testing of AI self-driving cars as we can. Shrugging your shoulders and waving your hands is not a valid method of testing. The pre-mortem is not a guarantee of eliminating hidden errors or issues, but it is a handy tool to ferret out as many as we can. Better safe than sorry.
Copyright 2018 Dr. Lance Eliot
This content is originally posted on AI Trends.