Occam’s Razor and AI Machine Learning Self-Driving Cars: Zebra Too

By Dr. Lance B. Eliot, the AI Trends Insider

Let me begin by saying that I believe in Occam’s razor. I also believe in a related notion known as the Zebra. I’ll explain both in a moment, but first, a bit of a preamble to warm you up for the rest of the story.

Self-driving cars are complex.

Besides all of the automotive parts and vehicular components needed for any conventional car, a self-driving car is also loaded down with dozens of specialized sensory devices, potentially hundreds of microprocessors and ECUs, onboard storage devices, a myriad of devices for internal and external communications, and so on. It’s a veritable bazaar of electronic and computational elements. Imagine the latest NASA rocket or an advanced jet fighter, and you are starting to see the magnitude of what is within the scope of a true self-driving car.

The big question is whether or not the complexity will undermine achieving a true self-driving car.

That’s right, I dared to say that we might be heading toward a system that becomes so complex that it either won’t work, or it will work but will have serious and potentially lethal problems, or that it might work but do so in a manner that no one can really know whether it has hidden within it some fatal flaw that will reveal itself at the worst of times.

I am not seeking to be alarmist. I am just pointing out that we are taking conventional cars and adding more and more complexity onto them. Some auto designers think we are building a skyscraper on top of an already tall building and are asking for trouble. They believe that self-driving car makers should go back to the beginning and redesign from the ground up what a car consists of. In that sense, they believe we need to reinvent the car, guided by what we desire a self-driving car to be able to do.

This is a very important and serious point. Right now, there are some auto makers and tech companies that are making add-ons for conventional cars that will presumably turn them into self-driving cars. Most of the auto makers and tech companies are integrating specialized systems into conventional cars to produce self-driving cars. Almost no one is taking the route of restarting altogether what a car should be and from scratch making it into a self-driving car (this is mainly an experimental or research approach).

It makes sense that we would want to just add a self-driving capability onto what we already can do with conventional cars. Rather than starting with nothing, why not use what we already have? We know that conventional cars work. If you try to start over, you face two daunting challenges, namely making a car that works and then also making it self-driving. From a cost perspective, it is less expensive to add self-driving capabilities onto a conventional car. From a time perspective, it is faster to take that same approach. A blank-slate approach to developing a self-driving car is going to take a lot longer to get to market. Besides, who would be able to support such a car, including getting parts for it?

That being said, a few contrarians say that we will never be able to graft onto a conventional car the capabilities needed for a true Level 5 self-driving car. They argue that the auto makers and tech companies will perhaps achieve a Level 4 self-driving car, but then get stymied and be unable to make it to Level 5. Meanwhile, those working in their garages and research labs who took the route of starting from scratch will suddenly step into the limelight of Level 5 achievement. They will have labored all those years in the darkness without accolades, perhaps even facing ridicule for their quiet efforts, and will find themselves the heroes who got us to Level 5.

Let’s get back, though, to the focus here, which is that self-driving cars are getting increasingly complex. We are barely into Level 2 and Level 3, and already self-driving cars have gone up dramatically in complexity. Level 4 is presumably another lurch upward. Level 5, well, we’re not sure how high up that might be in terms of complexity.

Why does complexity matter? As mentioned earlier, with immense complexity it becomes harder and harder to ascertain whether a self-driving car will work as intended. The testing done prior to putting the self-driving car on the road can only get you so far. The number of paths and variations in what a self-driving car and its AI will do is huge, and lab-based testing is only going to uncover a fraction of whatever weaknesses or bugs might lurk within the system.

The complexity gets even more obscured due to the machine learning aspects of the AI and the self-driving car. Test the self-driving car and AI as much as you like, but the moment it is driving on the roads, it is already changing. The learning aspects will lead to the system doing something differently than what you had earlier tested. A self-driving car with one hundred hours of roadway time is going to be quite different from the same self-driving car that has only one hour of roadway time. For those AI systems using neural networks, the neural network connections, weights, and the like, will be changing as the self-driving car collects more data and gleans more experiences under actual driving conditions and situations.

When a self-driving car and its AI go awry, how will the developers identify the source of the problem? The complex interactions among the sensory devices, the sensor fusion, the strategic AI driving elements, the tactical AI driving elements, the ECUs, and the other components will confound and hide where the problem resides.

Let’s say Zebra.

Allow me to explain.

In the medical domain, there is a saying known as the “Zebra” that traces back to the 1940s, when Dr. Theodore Woodward at the University of Maryland told interns: “When you hear hoofbeats, think of horses, not zebras.” What he was trying to convey was that when making a medical diagnosis, the interns were often reaching for the most obscure of medical illnesses to fit the symptoms.

Suppose a patient has a runny nose, fever, and rashes on their neck; this might be the rare Zamboni disease that only one-hundredth of one percent of people get. Hogwash, one might say. It is just someone with the common cold. Dr. Woodward emphasized that in Maryland, if you hear the sound of hoofs, the odds are much higher that it is a horse than a zebra (about the only chance of it being a zebra is if you were at the Maryland zoo).

A self-driving car will, for sure, have problems, and when one occurs the question will be whether it is something obvious that has gone astray, or something buried deep within a tiny component hidden in a stack of fifty other components. The inherent complexity of self-driving cars is going to make it hard to know. Will the sound of a hoofbeat mean it is a horse or a zebra? Unlike the medical domain, where the likelihoods of various illnesses are well documented, we won’t have the same kind of statistical basis to go on.

At the Cybernetic Self-Driving Car Institute, we are developing AI self-driving software and trying to abide by Occam’s razor as we do so.

Occam’s razor is a well-known principle that derives from the notion that simplicity matters. In the sciences, there have been many occasions where theories developed to explain some phenomenon of nature were quite complex. If someone could derive a theory that was simpler, yet still provided the same explanation, the simpler version was considered the better one. As Einstein is often credited with saying: “Everything should be made as simple as possible, but not simpler.”

William of Ockham, in the early 1300s, put forth long before Einstein that among competing hypotheses, the one with the fewest assumptions ought to be the winning hypothesis. In his own words: “Entities are not to be multiplied without necessity” (translated from the Latin non sunt multiplicanda entia sine necessitate). The razor part of Occam’s razor is that he advocated essentially shaving away at assumptions until you got down to the barest set needed. By the way, you may say Ockham’s razor if you want to abide closely by the spelling of his proper name, but by widespread acceptance it is usually written as Occam’s razor.

You can go even further back in time and attribute this same important concept to Aristotle. Based on translation, he had said that: “Nature operates in the shortest way possible.” If that’s not enough for you, he also was known for this: “We may assume the superiority ceteris paribus (other things being equal) of the demonstration which derives from fewer postulates or hypotheses.” Overall, there have been quite a number of well-known scientists, philosophers, architects, designers, and others that have warned about the dangers of over-complicating things.

For those of you that are AI developers, you likely already know that Bayesian inference, an important aspect of dealing with probabilities in AI systems, makes use of the same Occam’s razor principle. Indeed, each additional variable or assumption increases the potential for added errors. You can also look to the Turing machine as a kind of Occam’s razor: it makes use of a minimal set of instructions, enough to be a useful construct, but no more than needed to achieve it.
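
To make the Bayesian connection concrete, here is a minimal sketch, using hypothetical synthetic data, of the Bayesian Information Criterion (BIC), one common embodiment of the razor in model selection: it rewards goodness of fit but charges a penalty for every added parameter, so an elaborate model must earn its extra complexity.

```python
# A sketch of the Bayesian flavor of Occam's razor, on made-up data.
# BIC = n*ln(RSS/n) + k*ln(n) for a Gaussian-noise least-squares fit:
# lower is better, and each extra parameter k raises the penalty term.
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, size=x.shape)  # truly linear data

def bic(x, y, degree):
    """BIC of a least-squares polynomial fit, assuming Gaussian noise."""
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((y - np.polyval(coeffs, x)) ** 2))
    n, k = len(y), degree + 1  # k counts the fitted coefficients
    return n * np.log(rss / n) + k * np.log(n)

for degree in (1, 5, 10):
    print(f"degree {degree:2d}: BIC = {bic(x, y, degree):8.1f}")
# The linear model should score lowest: the higher-degree fits gain too
# little accuracy to pay for their extra parameters.
```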

In the realm of machine learning and neural networks, it is important to be mindful of Occam’s razor. I say this because with large data sets, and at times mindless attempts to use massive neural networks to catch onto patterns, there is the danger of overfitting. A complex neural network can latch onto statistical noise in the data. A less complex network might actually fit better and generalize more readily to other circumstances.
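
To make the overfitting danger tangible, here is a minimal sketch; it uses hypothetical synthetic data and plain polynomial fitting rather than an actual neural network, but the moral carries over: the high-capacity model hugs the noisy training points yet typically fares worse on held-out data.

```python
# A sketch of overfitting on made-up data: fit polynomials of modest and
# high degree to noisy samples of a smooth curve, then compare errors on
# the training points versus clean held-out points.
import numpy as np

rng = np.random.default_rng(7)
x_train = np.sort(rng.uniform(-1.0, 1.0, 15))
y_train = np.sin(3.0 * x_train) + rng.normal(0.0, 0.2, size=x_train.shape)
x_test = np.linspace(-1.0, 1.0, 200)
y_test = np.sin(3.0 * x_test)  # clean ground truth for evaluation

for degree in (3, 12):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
# The degree-12 fit chases the noise: its training error is tiny, but its
# held-out error is typically far worse than the simpler degree-3 fit's.
```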

For a self-driving car, we need to be cognizant of Occam’s razor.

The designers of the AI systems and the self-driving car should be continually assessing whether the complexity they are shaping is absolutely necessary. Might there be a more parsimonious way to structure the system? Can you accomplish the same actions with less code, fewer modules, or an otherwise reduced system size?

Much of the self-driving car AI code has arisen from AI researchers and research labs. In those circumstances, complexity hasn’t particularly been a topic of concern. When you are first trying to see whether you can construct something at all, it is likely to accumulate all sorts of variants as you experiment with one aspect after another. Rather than carrying those variants into a self-driving car that is going to actually be on the road and in mass production, it is helpful, and indeed crucial, to take a step back and relook at it.

I’ve personally inspected a lot of open source code for self-driving cars that is the proverbial spaghetti code. This is programming code that has been written, rewritten, rewritten again, and after a multitude of tries finally gotten to work. Within the morass, there is something that works, but it is hidden and obscured by other aspects that are no longer genuinely needed. Taking the time to prune it is worth doing. Of course, some would say that if it works, leave it alone, and only touch the things that are broken.

If you are under pressure to get the AI software going for a self-driving car, admittedly you aren’t going to be motivated to clean up your code and make it simpler and more pristine. All you care about is getting it to work. There’s an old saying in the programming profession: you don’t need style in a street fight. Do whatever is needed to win the fight. After toiling night after night and day after day to get the AI for the self-driving car to work, it’s hard to then also say let’s make it simpler and wring out the complexity. No one is likely to care at the time. But once the system is in production, and once problems surface, there will be many who care, since the effort and time to debug, ferret out the problems, and find solutions will be enormous.

There’s another popular expression in the software field that applies to self-driving cars and the complexity of their AI systems. It’s this: don’t pave the cow paths. If you’ve ever been to Boston, you might have noticed that the streets there are crazily laid out. There are one-way streets that zig and zag. Streets intersect with other streets in odd places and at strange angles. Compare the streets of Boston to those of New York, and you begin to appreciate how New York City uses a grid, with avenues and streets that resemble the layout of an Excel spreadsheet.

How did Boston’s streets get so oddly designed? The story is that during the early days of Boston, cows were brought into town. The cows would go whichever way they wanted, weaving here and there, and the dirt roads were made by the cows wanting to go this way or that. Later on, when cars came along, the easiest way to lay out streets was to pave the dirt paths that had already formed. Thus, rather than redesigning, they just paved what had been there before.

Are we doing the same with the AI systems for self-driving cars? Rather than starting from scratch, informed by what we now know about the needs and nature of such AI systems, we proceed as we are now, building upon what we have already forged. Doing so tends to push complexity up, and as we’ve seen, many believe that complexity should be reduced where feasible, and that simpler is better.

You might be surprised to know that there is a counter movement to Occam’s razor, the anti-razors, who say that the razor proponents have put an undue focus on complexity, which they argue is pretty much a red herring. They cite moments in history where a move toward a simpler explanation or theory backfired. Some point to theories of continental drift, and even theories about the atom, emphasizing that there were attempts at simplification that proved to be dead-ends and led us astray.

There are also those that question how you can even measure and determine complexity versus simplicity. If my AI software for a self-driving car has 50 modules, and yours has 100, does this ergo imply that mine is less complex than yours? Not really. It could be that I have 50 modules, each of which is tremendously complex, while you’ve flattened out the complexity and therefore have 100 modules. Or, of course, it could be the other way around, namely that I was able to reduce 100 complex modules into 50 simpler ones.

We need to be careful about what we mean by the words complexity and simplicity. I know many AI developers who say they know it when they see it, like art. Though this is catchy, it should also be pointed out that there are many well-developed software metrics that can help to identify complexity, and we can use those as a starting point for gauging complexity versus simplicity in self-driving car systems (a sketch of one such metric follows below).
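
As one example of such a metric, here is a rough sketch of cyclomatic complexity, which counts the branch points, and thus the independent paths, through a piece of code. The snippet being measured is purely hypothetical, and a real project would more likely reach for an established tool such as radon or lizard than for this toy.

```python
# A crude cyclomatic-complexity counter built on Python's standard ast
# module: complexity is approximated as 1 plus the number of branching
# constructs (ifs, loops, exception handlers, boolean operators, etc.).
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Approximate cyclomatic complexity of a Python source string."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES)
                   for node in ast.walk(tree))

# A purely hypothetical snippet, loosely in the spirit of driving logic.
snippet = """
def classify_obstacle(size, speed, distance):
    if size > 10 and speed > 5:
        return "urgent"
    elif distance < 2:
        return "brake"
    for margin in (1, 2, 3):
        if distance < margin:
            return "caution"
    return "ignore"
"""
print(cyclomatic_complexity(snippet))  # counts the ifs, the loop, the 'and'
```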

For auto makers and tech companies that are designing, developing, and planning to field self-driving cars, I urge you to take a look at the nature of the complexity you are putting in place. It might not seem important now, but when those self-driving cars are on the roads, and problems emerge whose source in the system cannot be discerned, it could be the death knell of the self-driving car. I don’t want to seem overly simplistic, but let’s go with the notion that complexity equals bad and simplicity equals good, assuming that all else is otherwise equal.

Now that I’ve said that, the anti-razors are going to be crying foul, and so let me augment my remarks. Sometimes complexity is bad, and simplicity is better, while sometimes complexity is good and simplicity is worse. Either way, you need to be cognizant of the roles of complexity and simplicity, and be aware of what you are doing. Don’t fall blindly into complexity, and don’t fall blindly into simplicity. Know what you are doing.

This content is originally posted to AI Trends.