Self-Adapting Resiliency for AI Self-Driving Cars

By Lance Eliot, the AI Trends Insider

Have you ever seen a brittle star? I’m not talking about stars in the nighttime sky – I’m referring to the ocean-going kind. The brittle star is an ophiuroid that crawls along the sea floor. The crawling motion is undertaken by the use of its five arms, each of which can ultimately grow to a length of about two feet. These arms are generally slender in shape and are used in a whip-like manner to bring about locomotion for the creature. You can take a look at YouTube videos of brittle stars crawling around if you’d like to get a better sense of how they move (I assure you this would be more interesting than watching another cat video!).

What makes the brittle star particularly intriguing is that if one of its arms gets torn off by a predator, the harmed brittle star is still able to crawl around. The brittle star appears to adjust itself to accommodate that there are only four arms left rather than five. This adjustment happens nearly immediately, and the brittle star can continue crawling along without having to do much about the loss of one arm. As far as we know, the brittle star does not have to ponder at length what to do about a lost limb. It seemingly readjusts almost spontaneously.

Researchers believe that the brittle star uses its decentralized control mechanism to self-coordinate the movement of its arms, and that mysteriously the decentralized control is able to readjust when an arm is no longer available. This is resiliency of design in nature. I say that it works mysteriously because there is still much open research going on about how this occurs biologically in the brittle star. In spite of our not knowing how it biologically happens, we can certainly witness that it does. A team of researchers in Japan, led by Professor Akio Ishiguro at Tohoku University’s Research Institute of Electrical Communication, recently developed a robot that acts like a brittle star and tries to showcase how to adapt to physical damage such as the loss of an arm. This kind of biomimicry is a handy means to extend the capability of robots, and we often borrow features and functions that we see in living organisms to improve what robots and AI can do.

The researchers used a synthetic approach to try to deduce how the decentralized control mechanism works. By creating a prototype brittle star robot, they were able to mimic the anatomical and behavioral aspects of a real brittle star. Someday we might be able to delve deeply into the neurons and muscles of real brittle stars and gauge how they really work, but in the meantime the robotic brittle star uses some in-depth mathematics to determine the angles and motions needed to handle five arms and to adjust when only four arms remain.

What does this have to do with AI self-driving cars?

At the Cybernetic Self-Driving Car Institute, we are developing self-adapting resiliency for AI self-driving cars.

Here’s what we mean by self-adapting resiliency for AI self-driving cars (allow me a moment to elaborate).

Suppose a self-driving car is going along a highway and all of a sudden the left front camera that detects nearby objects happens to fail.

Now, if you are questioning why it would fail, well, hardware sensory devices are going to fail from time to time on self-driving cars just as any other kind of hardware device can fail. A sensor could fail due to wearing out, due to weather conditions such as especially high heat or bitter cold, or because something hits it, like a rock thrown up from the roadway, and so on. You might as well face the harsh truth that the hardware on self-driving cars is going to fail over time. It will happen. Right now, self-driving cars are being pampered by the auto makers and tech firms, and so you never hear about sensors failing on those cars. Besides the fact that those self-driving cars are equipped with topnotch sensors that are nearly brand new, the cars are continually getting checked and rechecked like an airplane, and the crews maintaining those experimental self-driving cars try to ensure that a sensor will not end up failing in the field.

Once we have self-driving cars owned by the public, I assure you that those self-driving cars are not going to lead such a pampered life. Most people don’t take very good care of their conventional cars, including not making sure they do their oil changes regularly, and otherwise just assume their car will work until it doesn’t anymore. Self-driving cars are going to be chock full of dozens and dozens of finicky sensors and you can bet that those sensors will not be well kept and will ultimately fail while the self-driving car is in motion.

Okay, with that said, let’s go back to the notion that the left front camera suddenly fails while the self-driving car is rolling along a highway. We’ll assume that the camera was being used to detect near-term objects (meanwhile, let’s assume there are other near-term cameras on other parts of the self-driving car and also lots of other sensors such as sonar, LIDAR, radar, and the like).

What should the AI of the self-driving car do?

What Needs to Happen When a Component Fails

First, of course, it has to discover that the camera has failed. This should be a core aspect of the AI system, namely that it should be continually checking to see that the sensors are functioning. If a sensor does not respond to queries or is not providing sensor data, the AI system needs to know and needs to realize the impact of this failing sensor. Sadly, and scarily, some of the AI self-driving cars of today do not do much in the way of checking for failed sensor devices. Generally, they assume that all sensors are working all of the time, unless the sensor says otherwise.
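
To give a concrete flavor of that kind of continual checking, here is a minimal Python sketch of a sensor health check. The Sensor attributes (query_status, last_data_timestamp, sensor_id) and the staleness threshold are purely illustrative assumptions, not anyone’s actual interface:

```python
# Hypothetical sketch of a periodic sensor health check; the Sensor
# attributes and the staleness threshold are illustrative assumptions.
import time

STALE_DATA_SECONDS = 0.5  # assumed threshold for "no recent data"

def check_sensor_health(sensors, now=None):
    """Return a dict mapping each sensor id to 'ok', 'unresponsive', or 'stale'."""
    now = now if now is not None else time.time()
    report = {}
    for sensor in sensors:
        if not sensor.query_status():                        # no reply to a status query
            report[sensor.sensor_id] = "unresponsive"
        elif now - sensor.last_data_timestamp > STALE_DATA_SECONDS:
            report[sensor.sensor_id] = "stale"               # alive but not delivering data
        else:
            report[sensor.sensor_id] = "ok"
    return report
```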

Determining the failure of a sensor can be admittedly tricky. A sensor might be partially failing and so it is still providing data, but perhaps the data coming into the AI system is flaky. How is the AI system to realize that the data is flaky or faulty? Again, some of today’s AI self-driving cars don’t do any double-checking on the data coming from the sensors. There are some relatively simple ways to check and see if the data coming into the AI system from the sensor is at least “reasonable” in that it matches to what is expected to be coming from that sensor.
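
Here is one simple, hypothetical way such a reasonableness check might be expressed. The bounds and thresholds are assumptions for illustration rather than values from any particular sensor:

```python
# Illustrative reasonableness checks on incoming range readings; the bounds
# and thresholds are assumptions, not values from any real sensor.
def is_reasonable_range_reading(distance_m, max_range_m=200.0):
    """Reject readings that are NaN, negative, or beyond the sensor's rated range."""
    if distance_m != distance_m:          # NaN never equals itself
        return False
    return 0.0 <= distance_m <= max_range_m

def sensor_seems_flaky(recent_readings, min_good_fraction=0.8):
    """Mark a sensor suspect if too many of its recent readings fail the sanity check."""
    good = sum(1 for r in recent_readings if is_reasonable_range_reading(r))
    return (good / max(len(recent_readings), 1)) < min_good_fraction
```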

Some sensors are designed with their own internal error checking and so they are able to self-determine when they are faulting. This kind of capability is handy because the device can simply report to the AI system that it is not working properly. If the sensor device is faulty, the question then arises whether to continue to make use of whatever it provides, or whether the AI system should instead decide that the device is suspect and reject outright whatever it sends into the sensor fusion. Imagine the left front camera is faulty and reports that a dog is just a few feet ahead of the self-driving car. Does the AI system opt to believe the sensor and take evasive action, or, if it already suspects the device is faulty, should it just ignore the data?
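
Here is a hedged sketch of one possible fusion policy: a sensor flagged as suspect gets excluded (or down-weighted) so that a phantom dog cannot, by itself, trigger evasive action. The detection fields and trust weights are illustrative assumptions:

```python
# One possible policy for a self-reported or suspected fault: exclude the
# sensor from fusion entirely (trust of zero) rather than act on a phantom
# detection. The detection fields and trust weights are assumptions.
def fuse_detections(detections, sensor_trust):
    """Keep detections only from trusted sensors, scaling confidence by trust."""
    fused = []
    for det in detections:
        trust = sensor_trust.get(det["sensor_id"], 0.0)
        if trust <= 0.0:
            continue                                  # suspect device: ignore its reports
        fused.append(dict(det, confidence=det["confidence"] * trust))
    return fused

# Example: the faulty left front camera gets zero trust, so its "dog" report
# is dropped while the other sensors' detections still flow into fusion.
detections = [
    {"sensor_id": "camera_front_left", "label": "dog", "confidence": 0.9},
    {"sensor_id": "radar_left", "label": "clear", "confidence": 0.8},
]
print(fuse_detections(detections, {"camera_front_left": 0.0, "radar_left": 1.0}))
```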

As you can see, these kinds of aspects about an AI self-driving car are quite crucial. It can mean life or death for those occupants in the self-driving car and for those outside of the self-driving car. A self-driving car that cannot see a dog that is to the left of the self-driving car could run right into the dog. Or, a false reporting of a dog by a suspect sensor could cause the AI to take evasive actions that cause the car to run off the road and injure the occupants (while trying to save a dog ghost).

Our viewpoint is that AI self-driving cars need to be resilient, akin to the brittle star, and be able to actively ascertain when something has gone amiss. Furthermore, besides detecting that something is amiss, the AI needs to be prepared to do something about it. For every sensor on the self-driving car, the AI needs to have a strategy in advance for how it will handle a failure of that sensor. In short, the AI needs to be self-adapting to achieve resiliency, and do so to keep the self-driving car and its occupants safe from harm, or at least attempt to minimize harm if no other recourse is viable.
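
As a rough illustration, those in-advance strategies might live in a simple lookup table. The sensor names, fallback lists, and speed caps below are hypothetical placeholders, not anyone’s actual configuration:

```python
# A minimal sketch of pre-planned, per-sensor failure strategies; the sensor
# names, fallback lists, and speed caps are hypothetical placeholders.
FAILURE_STRATEGIES = {
    "camera_front_left": {"fallback": ["camera_front_right", "radar_left"],
                          "max_speed_mph": 45},
    "lidar_roof":        {"fallback": ["radar_front", "camera_front_center"],
                          "max_speed_mph": 35},
}

def strategy_for_failure(sensor_id):
    """Return the pre-planned response, or a conservative default if none was defined."""
    default = {"fallback": [], "max_speed_mph": 25, "seek_safe_stop": True}
    return FAILURE_STRATEGIES.get(sensor_id, default)
```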

We rate the AI of self-driving cars on a five-point scale:

  1. Not resilient
  2. Minimally resilient
  3. Moderately resilient
  4. Highly resilient
  5. Fully resilient

An advanced version of AI that has all the bells-and-whistles associated with resiliency, along with having been tested to show that it can actually work, earns the fully resilient top score. This requires a lot of effort and attention to go toward the resiliency factors of an AI self-driving car.

Not only does this pertain to the sensors, but it pertains to all other facets of the self-driving car. Keep in mind that a self-driving car is still a car, and therefore the AI needs to also be fully aware of whether the engine is working or faulty, whether the tires are working or faulty, whether the transmission is working or faulty, and so on. The whole kit and caboodle.

And, it even means that if the AI itself has faults, the AI needs to be aware of it and be ready to do something about it. You might question how the AI itself could become faulty, but you need to keep in mind that the AI is just software, and software will have bugs and problems, and so the AI needs to do a double-check on itself. Furthermore, the AI is running on microprocessors and using memory, all of which can have faults, and thus cause the AI system to also be faulty.
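
One common pattern for this kind of self-checking is a software watchdog, sketched below. A separate thread expects the main AI loop to check in regularly, and a missed check-in suggests the loop has hung or faulted. The timings and the escalation action are illustrative assumptions:

```python
# An illustrative software watchdog: a separate thread expects the main AI
# loop to check in ("pet" the watchdog) every cycle; a missed deadline
# suggests the loop has hung or faulted. Timings and the escalation action
# are assumptions for illustration only.
import threading
import time

class Watchdog:
    def __init__(self, timeout_s=0.2, on_fault=lambda: print("AI loop unresponsive")):
        self.timeout_s = timeout_s
        self.on_fault = on_fault
        self._last_pet = time.monotonic()
        threading.Thread(target=self._monitor, daemon=True).start()

    def pet(self):
        self._last_pet = time.monotonic()       # called from the main AI loop each cycle

    def _monitor(self):
        while True:
            time.sleep(self.timeout_s / 2)
            if time.monotonic() - self._last_pet > self.timeout_s:
                self.on_fault()                 # escalate: restart, fail over, or safe stop
```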

Some might say that it is impossible for the AI to be able to handle the unexpected. Suppose a tree falls down and clips the front of the self-driving car, and so now let’s assume that several sensors such as some radar and cameras are faulty or completely out of commission. How could the AI have known that a tree was going to fall on the self-driving car? Plus, how could it have predicted that the tree would knock out, say, six sensors specifically?

Well, unless we have AI that can see the future (don’t hold your breath on that one), I agree that it would not necessarily know that a tree was going to fall on the car, nor that the tree would knock out certain sensors. But let’s not think of the word “unexpected” as though it means generally being unpredictable.

The AI could have been developed with the notion that at times there will be one or more sensors that become faulty. It could be any combination of the various sensors on the self-driving car. The fact that a tree caused those sensors to become busted is not especially material to the overall point that the AI should be ready to deal with six sensors that are busted. We can make a prediction that someday six sensors will fail, and do so without having to know why or how it happens.

To explore the possibilities of self-driving car failure points, we make use of a System Element Failure Matrix (SEFM), which provides a cross indication of which system elements can make up for the loss of some other system element when it goes down. If a camera on the left that detects near-term objects is out, suppose that there is a side camera and a right-side camera that can be used to make up for the loss of the left camera. Those two cameras, in combination with, say, the radar unit on the left, might still be able to deal with the same aspects that the faulty camera can no longer detect.
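
To give a flavor of the idea, an SEFM can be represented as a simple mapping from each element to the elements that can partially cover for it. The specific entries below are illustrative assumptions, not the actual matrix:

```python
# A toy representation of a System Element Failure Matrix (SEFM): for each
# element, the other elements that can partially cover its job. The entries
# are illustrative assumptions, not an actual matrix.
SEFM = {
    "camera_front_left": ["camera_front_center", "camera_right_side", "radar_left"],
    "radar_left":        ["camera_front_left", "camera_left_side", "sonar_left"],
    "lidar_roof":        ["radar_front", "camera_front_center"],
}

def coverage_after_failures(failed_elements):
    """For each failed element, list the surviving elements that can compensate."""
    failed = set(failed_elements)
    return {e: [b for b in SEFM.get(e, []) if b not in failed] for e in failed}

# Example: with only the left camera out, two cameras plus the left radar
# can cover; if the left radar also fails, the remaining coverage shrinks.
print(coverage_after_failures(["camera_front_left"]))
print(coverage_after_failures(["camera_front_left", "radar_left"]))
```

Passing several failed elements at once also captures the simultaneous-failure case, since an already-failed backup drops out of the coverage list.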

The AI needs to be able to ascertain what can still be done with the self-driving car whenever any number of sensors or any other aspects of the self-driving car fail. I know that many of the self-driving car makers are saying that the self-driving car will be directed by the AI to slow down and pull over to the side of the road. That’s a nice approach, if it is actually feasible. In some cases, the self-driving car won’t be able to do such an idealized gradual slowdown and an idealized pulling off to the side of the road.

Situational Action Choices Need to be Ready to Go

Indeed, the AI needs to have Situational Action Choices (SAC) ready to go. Perhaps the AI should direct the self-driving car to actually speed up, maybe doing so to avoid an accident that would otherwise happen if it opted to slow down. The AI must have a range of choices available and determine, based on the situation, which of those choices makes the most sense to employ, including whether to use an evasive maneuver, a defensive maneuver, or whatever is applicable.
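
As a toy illustration of how SAC selection might work, the AI could score a handful of candidate maneuvers against the current situation and pick the least risky one. The candidate actions and the scoring heuristic below are placeholders, not a real motion planner:

```python
# A toy illustration of Situational Action Choices: score a handful of
# candidate maneuvers against the current situation and pick the least
# risky. The actions and the scoring heuristic are placeholders.
def choose_action(situation):
    """situation: dict of booleans such as 'rear_traffic_closing', 'shoulder_available'."""
    risk = {
        "gradual_slowdown_and_pull_over": 0.0,
        "maintain_speed_and_change_lane": 1.0,
        "brief_speed_up_to_clear_hazard": 2.0,
    }
    if situation.get("rear_traffic_closing"):
        risk["gradual_slowdown_and_pull_over"] += 3.0   # slowing invites a rear-end crash
        risk["brief_speed_up_to_clear_hazard"] -= 1.5
    if not situation.get("shoulder_available"):
        risk["gradual_slowdown_and_pull_over"] += 3.0   # nowhere safe to stop
    return min(risk, key=risk.get)                      # lowest risk score wins

# Example: with traffic closing fast from behind and no shoulder, speeding up
# briefly may score as less risky than slowing down.
print(choose_action({"rear_traffic_closing": True, "shoulder_available": False}))
```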

In addition, the AI needs to take time into account on whatever it does. Sometimes, a failing element on the self-driving car will go out nearly instantaneously. In other cases, the element might be gradually getting worse over time. This all implies that in some instances there will be time available due to being warned that an element is going bad, while in other cases there might be almost no reaction time available.

Furthermore, one faulty element might not be the only fault that occurs. In some cases, a faulty device might fail and be the only failure, while in other cases the failures might be sequential, with several devices failing one after another, and in still other cases there might be a cascade of both sequential and simultaneous failures. The AI cannot be set up to assume that only a singular failure will occur. Instead, it must be ready for multiple failures, and for the possibility that those failures happen all at once or take time to appear.

How do humans deal with car failures?

Our reaction time to failures while driving a car can be on the order of several seconds, which is precious time when a car is hurtling forward at seventy miles an hour or about to hit someone or something. Fortunately, the AI has the capacity to make decisions much faster than the human reaction time, but this is only a computational advantage, and one that will not necessarily materialize if the AI has not been readied for it. In other words, yes, the AI system might be able to do things in milliseconds or nanoseconds or even faster, though it all depends on what it is trying to do. How much processing is the AI going to do to ascertain what has happened and what must be done next?

Humans often react by trying to think things through, mentally sorting out what is happening and what to do about it. Race car drivers are trained for this kind of thinking and so are relatively well honed for it. The average car driver, though, is usually not mentally well prepared. They often just react by doing what seems natural, namely jamming on the brakes. Humans often get mentally confused when car elements fail and are at times thrown into shock.

We don’t want the AI for self-driving cars to go into “shock” or be baffled by what to do when self-driving car elements falter or fail. The AI needs to be more like the race car drivers. The AI system needs to have resiliency built into it and be able to self-adapt to whatever situation might arise.

If you’ve seen the movie Sully, there’s a great scene in the movie where Captain Chesley Sully Sullenberger (portrayed by Tom Hanks) makes some pointed remarks about simulations of the emergency landing that he made into the Hudson River after his US Airways flight had struck birds and both jet engines flamed out. The simulations seemed to show that he could have made it safely to a nearby airport, rather than ditching the plane into the water. He points out that the simulations were made to immediately turn back to the airport the moment the birds struck the plane. This is not the way of the real world, in that he and his co-pilot had to first determine what had happened and try to then decide what to do next. This took perhaps thirty seconds to do.

I liked this scene because it brings up in my mind the importance of AI being self-adapting and resilient with respect to a self-driving car — and that time is such a vital factor.

How much time will the AI have to figure out what has failed and what to do about it? The AI developers for self-driving cars need to push toward having the AI be able to do this kind of figuring out as fast as feasible, and yet do so while still trying to determine as much as possible about what has happened and what to do next.

These are aspects that the AI might not be able to ascertain with certainty, and likely will not, due to the time crunch involved. Probabilities will be required. In a sense, “judgments” will need to be made by the AI (meaning, in this case, selecting a course of action under imperfect information about the situation). As such, the AI will need to be able to make “educated hunches” in some circumstances, lest the time required to make a “full and complete” decision mean that the danger has gotten even worse or that there is no time left to avoid something quite untoward.
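
In programming terms, this is akin to an “anytime” decision procedure: keep refining the answer, but always have something ready the moment the time budget expires. Here is a minimal sketch, assuming a hypothetical evaluate_more() routine that performs one more increment of analysis:

```python
# A minimal "anytime" decision sketch: keep refining the choice of action,
# but always have an answer ready when the time budget expires. The
# evaluate_more() increment and the budget value are assumptions.
import time

def decide_under_deadline(fallback_action, evaluate_more, budget_s=0.05):
    """Return the best action found before the deadline, else the fallback hunch."""
    deadline = time.monotonic() + budget_s
    best_action, best_score = fallback_action, float("-inf")
    while time.monotonic() < deadline:
        action, score = evaluate_more()     # one more increment of analysis
        if score > best_score:
            best_action, best_score = action, score
    return best_action                      # an "educated hunch" if time ran out early
```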

The public probably will find it disturbing to realize that the AI in a self-driving car had to take “shortcuts” in its processing in order to solve the problem in the time allotted, and will instead want an all-knowing AI that computes everything to utter certainty. Don’t count on that being the case. In whatever manner the AI does it, we must make sure that we as humans know how it did it, even if only by being able to ask after-the-fact what the AI deliberated and decided to do. The auto makers and tech companies making AI self-driving cars need to consider how they will achieve self-adapting resilient self-driving cars. I’ll go for a drive in one, if it’s at the full resiliency level.

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.