Expert Systems and AI Self-Driving Cars: Crucial Innovative Techniques

By Michael B. Eliot

Foreword by Dr. Lance Eliot:  As the AI Insider and a regular columnist for AI Trends, I am pleased to provide you with a posting written by a guest author, Michael B. Eliot. In his article, he discusses several key aspects of Expert Systems, a long-time and formative topic in AI, and identifies innovative and intriguing aspects that tie directly to the topic of AI self-driving cars. He is currently majoring in Computer Science at UC Berkeley, one of the world’s top Computer Science programs, and has work experience relevant to self-driving cars, having done software engineering work for Tesloop (including machine learning systems) and at the Autonomous Drone lab at Cal. He is also an avid entrepreneur who has successfully competed in some of the top university-level hackathons, including the Yale Hackathon. He’s been quite helpful in my efforts toward innovating in the self-driving car field, and he happens to be my son. Yes, indeed, I couldn’t be prouder!

_______________

I’d like to introduce you to the AI-specialty of Expert Systems and then provide key indicators of how Expert Systems are pertinent to AI self-driving cars.

First, we must ask ourselves what an expert system is. Often considered one of the forerunners of modern-day AI, an expert system is computer software that attempts to mimic the decision-making expertise of an expert in a given field. In the past, this typically meant creating interrelated tree structures to represent decision models. Now, with the advent of machine learning, neural networks, and more advanced AI, expert systems have reached a level of capability they had not previously exhibited.

Developing an expert-oriented system is in many respects at the root of the self-driving car field, and it presents many challenges. Let’s take a close look at some of the key problems involved in developing an expert system for this particular domain (i.e., driving a car), and then examine a few of the important methods currently being used to solve these problems.

In the context of this domain, what do we mean by an “expert” when it comes to driving a car? Is your average adult driver on the road able to be considered an expert at what they do? I’ve driven through enough of the harried and congested Los Angeles traffic to veto that idea. What about a professional driver, such as a race car driver, a bus driver, or a truck driver? Possibly, but race car drivers don’t accurately depict the capabilities of everyday drivers, and the expertise of bus drivers and truck drivers doesn’t really equate to that of a race car driver.

Perhaps we can circumvent the need for a defined expert per se by simply understanding the necessary outputs. While this seems straightforward, it becomes problematic for two reasons. First, what are the outputs being referenced? If the outputs are generated by a human driver, you simply have the same problem as the previous points about race car drivers, bus drivers, and truck drivers. If they are a derived set of desired outputs, then how do you know they represent the optimal decision, and how do you generate the data accordingly? Second, how do you equate those desired outputs with a given set of inputs? Driving is a dynamically changing environment, so you can’t easily define a finite set of outputs for a given set of inputs in all situations.

This point is a significant problem in the self-driving car world and brings us to an area of AI known as fuzzy decision making, or more commonly fuzzy logic. Fuzzy logic derives outputs from degrees of truth and approximation, rather than the crisp true-or-false values of traditional logic. Fuzzy logic is the “human” factor in expert systems. It captures the gut feelings and hunches that separate the amateur from the expert, and for software engineers this can be quite difficult to identify and codify into a system. How does one code a gut feeling? One could utilize probability as an attempt to do so, but, when the life of the passenger in a self-driving car is on the line, is it safe to essentially guess at an answer? However we get there, adding in the human element of hunches and intuition is crucial to achieving a Level 4 or Level 5 self-driving car, and it is an extremely difficult aspect to pin down.
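
To make the idea a bit more concrete, here is a minimal sketch of a fuzzy rule in Python, assuming a hypothetical braking decision based on the gap to the car ahead and the current speed. The membership functions, thresholds, and rule are all illustrative, not drawn from any production system:

```python
def close_membership(gap_m):
    """Degree (0 to 1) to which a gap counts as 'close'."""
    if gap_m <= 5:
        return 1.0
    if gap_m >= 30:
        return 0.0
    return (30 - gap_m) / 25  # linear ramp between 5 m and 30 m

def fast_membership(speed_kph):
    """Degree (0 to 1) to which a speed counts as 'fast'."""
    if speed_kph <= 30:
        return 0.0
    if speed_kph >= 100:
        return 1.0
    return (speed_kph - 30) / 70

def braking_force(gap_m, speed_kph):
    """Fuzzy rule: IF gap is close AND speed is fast THEN brake hard.
    AND is taken as min(); the answer is a degree, not a yes/no."""
    return min(close_membership(gap_m), fast_membership(speed_kph))

print(braking_force(gap_m=10.0, speed_kph=80.0))  # partial braking, ~0.71
```

Notice that the output is not a binary brake/don’t-brake decision but a graded degree of braking, which is exactly the kind of “hunch-like” in-between answer that crisp logic struggles to express.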

These questions are at the heart of all expert systems, and they help us articulate the challenges faced with self-driving cars. As a recap, we’ve emphasized that there is a need to understand who the expert is, and what makes them an expert.

The major automakers are cognizant of this problem and have been tackling these issues for some time. The new twist is not the problems themselves, but rather articulating them through the reference frame of expert systems. It’s a new way of seeing old ideas. With that in mind, let’s now turn to how these companies are confronting these problems. We will dive into three key techniques being utilized in the self-driving car field and discuss them with reference to expert systems. The three key techniques are (1) Inverse Reinforcement Learning, (2) Generative Adversarial Networks (GANs), and (3) Dataset Aggregation, known as DAgger.

Let’s start with Inverse Reinforcement Learning, beginning with reinforcement learning overall and then bringing the “inverse” aspect into the model. Reinforcement learning is a simple process to describe: you reward behaviors that best yield the desired outcome. You don’t copy behaviors per se; instead, you favor them. This allows you to search for the most desirable behavior over many iterations.

One straightforward example is merging lanes on the highway. If the merge is smooth, at uniform speed, and no cars must suddenly shift out of the way, then one could consider this a desirable behavior. We can then create a cost function that articulates these parameters for the AI of the self-driving car and adjust probability weights in the model, as sketched below.
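
As a rough illustration, here is what such a hand-crafted reward function might look like in Python. The feature names and weights are purely hypothetical, chosen only to express the “smooth, uniform, non-disruptive” criteria just described:

```python
def merge_reward(speed_variance, lateral_jerk, forced_evasions):
    """Score a completed merge: higher (closer to zero) is better."""
    # Weights encode how much we penalize each aspect of a rough merge.
    w_speed, w_jerk, w_evasion = 1.0, 2.0, 10.0
    return -(w_speed * speed_variance
             + w_jerk * lateral_jerk
             + w_evasion * forced_evasions)

print(merge_reward(0.2, 0.1, 0))  # smooth merge: -0.4
print(merge_reward(1.5, 0.8, 1))  # rough merge forcing an evasion: -13.1
```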

Unfortunately, while some algorithms make great use of this, such as Google’s famous game-playing system AlphaGo, it is more complicated for self-driving cars. Let’s revisit the example of a car merging on a highway. There is a myriad of behaviors that dictate a successful merge, and you cannot reasonably simplify them to just a handful of key elements. Accounting for and fine-tuning all of these parameters is nearly impossible. This leads us to inverse reinforcement learning as another approach: we flip the model and use observed outputs to design our cost function.

We take a series of observed outputs, such as successful lane merges from human drivers, and have our model guess at a reward function. We then check this guessed model against the training instances. You might think this is a simple guess-and-check approach, but it’s more complicated in many ways. We still provide the optimal goal and give the system some key state identifiers. For a car merging, the goal might be to shift over X feet, and the nearby identifiers might be road markers or adjacent cars. The rest is up to the machine to figure out, though it still “learns” through given algorithms (such as max-margin based Discriminative Feature Learning, which uses probability mapping to determine key features from sample data).
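
The following is a heavily simplified sketch of this guess-and-check loop, in the spirit of feature-matching inverse reinforcement learning. The expert feature expectations are made up, and the rollout step is a toy stand-in (a real system would train a full policy on each guessed reward):

```python
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES = 3  # e.g., distance shifted, gap to nearest car, speed change

# Average feature counts observed across the expert's successful merges.
mu_expert = np.array([1.0, 0.8, 0.1])
mu_expert = mu_expert / np.linalg.norm(mu_expert)

def rollout_features(w):
    """Stand-in for: train a policy on reward r(s) = w . phi(s), then
    return that policy's average feature counts (here, faked with noise)."""
    return w / (np.linalg.norm(w) + 1e-8) + rng.normal(0, 0.05, N_FEATURES)

w = rng.normal(size=N_FEATURES)      # initial guess at the reward weights
for step in range(50):
    mu_policy = rollout_features(w)
    gap = mu_expert - mu_policy      # how far we are from the expert's behavior
    w = w + 0.1 * gap                # nudge the guessed reward toward the expert
    if np.linalg.norm(gap) < 0.1:    # behavior close enough to the expert's
        break

print("learned reward weights:", w)
```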

Let’s briefly explore how this solves the two key problems noted earlier for expert systems. It circumvents the need for an expert by allowing us to use only successful attempts. While the issue of imperfect data still persists, it is mitigated because you don’t have to select perfect data. As long as you provide successes exclusively, the car can level out the imperfections of each individual instance by observing all of the data. Thus, we don’t need an expert; we simply need success. Does it matter who performed a surgery if the surgery was successful? Perhaps it doesn’t sit well to not know the nature of the expert, but to a machine seeking to produce optimal output it doesn’t especially matter. We also avoid having to account for the human element, as the computer will intrinsically find features that allow it to mimic the process, in many cases performing better than a human.

Next, let’s move on to our second key technique, the Generative Adversarial Network (GAN). Let’s focus on the second word of that phrase, namely adversarial. The key idea of generative adversarial networks is the use of two competing neural networks. Both learn on a set of training data, but their roles differ. One attempts to create synthetic or so-called “fake” data, while the other attempts to discern between the generated fake data and real data. In this competitive environment, both networks constantly improve: one creating ever more realistic data, the other getting better at telling real from synthetic. In this context, generative simply means continually generating the fake data.
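
Here is a minimal GAN training loop sketched in PyTorch, assuming small fixed-size feature vectors stand in for the “real” data; an actual system would use far richer data and far larger networks:

```python
import torch
import torch.nn as nn

REAL_DIM, NOISE_DIM, BATCH = 8, 4, 64

generator = nn.Sequential(        # noise in, synthetic "fake" sample out
    nn.Linear(NOISE_DIM, 32), nn.ReLU(), nn.Linear(32, REAL_DIM))
discriminator = nn.Sequential(    # sample in, probability-it-is-real out
    nn.Linear(REAL_DIM, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(1000):
    real = torch.randn(BATCH, REAL_DIM) + 2.0      # stand-in for real data
    fake = generator(torch.randn(BATCH, NOISE_DIM))

    # Discriminator: learn to label real samples 1 and fake samples 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(BATCH, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(BATCH, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: learn to make the discriminator call fakes real.
    g_loss = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The two optimization steps are the “adversarial” part: each network’s improvement makes the other’s job harder, driving both to get better.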

Let’s now look at some applications of this in self-driving cars. One notable application is similar to inverse reinforcement learning: attempting to figure out what makes a human driver able to perform the driving task. We cannot necessarily tell what defines good driving, but perhaps we can discern what does not define good driving. This is where GANs work to our advantage. We use one network to create synthetic human driver data, while the other attempts to discern between real and fake data. This can allow us to hone in on which features best define a good driver. If you are having difficulty understanding how we can fake human data, just imagine a simple state transition matrix: the faking part would be the structure of the graph and the probabilities/rules associated with moving through it.
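
As a toy illustration of faking driver data with a state transition matrix, consider the following; the driving states and probabilities are invented purely for the example:

```python
import numpy as np

states = ["cruise", "accelerate", "brake", "change_lane"]

# Row i gives the probabilities of moving from state i to each next state.
transitions = np.array([
    [0.70, 0.10, 0.10, 0.10],   # from cruise
    [0.60, 0.20, 0.10, 0.10],   # from accelerate
    [0.50, 0.20, 0.25, 0.05],   # from brake
    [0.80, 0.05, 0.10, 0.05],   # from change_lane
])

rng = np.random.default_rng(42)
state = 0                                  # start in "cruise"
trace = [states[state]]
for _ in range(10):                        # generate a fake driving trace
    state = rng.choice(4, p=transitions[state])
    trace.append(states[state])
print(" -> ".join(trace))
```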

Another interesting use of GANs is generating fully simulated environments on which to train models. The objective is to create environments so close to reality that you can then generate all the data you want for different self-driving car use cases. This is a relatively new idea, stemming from Apple’s SimGAN, which uses simulated and unsupervised learning methods to generate excellent synthetic data. The beauty of this is the number of use cases: you can generate one round of synthetic data and utilize it for any number of individual problems in the self-driving car domain, ranging from merging to accident avoidance. You can even generate different perspectives on the same state to further observe features.

Let’s once again connect this to solving expert systems issues. Rather than focus on the experts themselves, we look primarily at the results of their work and attempt to determine what is indicative of their work and what is not. GANs give us a way to accomplish that goal.

Our last key technique is the Dataset Aggregation, or DAgger, algorithm. DAgger is a response to the difficulty of applying limited data to a new scenario.

Suppose you have a race car being driven on a racetrack. Imagine a racetrack A for which you have a single lap from an expert as training data. How could you apply this to a new racetrack B? With so little data, a neural network would have insufficient examples to learn from. In addition, if the neural network fails, it won’t be able to locate a minimum error, since almost every result ends in a failure state. This is a data mismatch issue. In terms of expert systems, the DAgger algorithm tackles how we can take an expert with only a scarce amount of data and apply that limited data to a new problem.

What DAgger does is aggregate data. It examines what the expert did and attempts to derive a policy from it. It then uses this policy to deterministically generate a new potential state for the next time it attempts a similar behavior. DAgger then adds these new potential states to its dataset and considers this the new “expert,” deciding the next state based upon the prior ones. This allows you to progressively build a dataset of inputs that the final policy is likely to encounter, as sketched below.
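
In the standard formulation of DAgger, the states visited by the learned policy are labeled by querying an expert and aggregated into the growing dataset. Here is a minimal sketch of that loop on a toy one-dimensional steering problem, where the expert is a hypothetical function and everything is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def expert_action(state):
    """Hypothetical expert: steer proportionally back toward the center."""
    return -0.5 * state

def rollout(policy_w, steps=20):
    """Run the *learned* policy and record the states it actually visits."""
    state, visited = 1.0, []
    for _ in range(steps):
        visited.append(state)
        state = state + policy_w * state + rng.normal(0, 0.05)
    return visited

# Start from a small expert-only dataset (one lap on racetrack A).
X = [1.0, 0.8, 0.5]
Y = [expert_action(s) for s in X]

for iteration in range(5):
    # Fit a one-parameter linear policy, action = w * state (least squares).
    w = np.dot(X, Y) / np.dot(X, X)
    # Run our own policy, label the states *we* reached, and aggregate.
    new_states = rollout(w)
    X += new_states
    Y += [expert_action(s) for s in new_states]

print("final policy weight:", w)  # converges to the expert's -0.5
```

The key point is that the dataset grows with states the learner itself encounters, not just states the expert happened to visit, which is what lets the final policy cope with situations outside the original expert data.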

As an example, suppose you are attempting to teach a self-driving car how to make a right turn on racetrack B and have set up a simulation to do so. You have the expert’s data collected from racetrack A, so the computer has a limited sense of what a right turn is. At iteration one of the simulation, the car is not quite sure how much it should turn the steering wheel, so it fails to adequately steer and crashes into the left side wall. Quite unfortunate, but the system has made a deterministic guess about what adjustments it should make to succeed in achieving the turn. Since it tried to make a right turn and failed, perhaps it might next guess that it should turn more to the right, and it adds a series of such guesses to the progressively expanding “expert” dataset. With enough simulated iterations, it should end up with a deep enough and finely tuned dataset to correctly make the right turn. In addition, it is better able to adapt to new situations based upon this collected dataset of learned experience.

Similar to the other two techniques, the DAgger algorithm aids in resolving the problems of developing expert systems for self-driving cars. You can see how this process mimics the trial and error that a human expert would undergo: you make a mistake, and then you make a correction. This algorithm helps build expert intuition into AI self-driving cars.

The aforementioned three techniques improve the capabilities of AI self-driving cars via an augmented expert systems approach. It is an interesting way to understand the problems faced by the auto and tech industries in developing true self-driving cars, and it indicates how computer science is advancing toward solving those problems.

This content is originally posted on AI Trends.