By Lance Eliot, the AI Trends Insider
When my daughter was quite young, we would go over to a nearby playground, and she would gleefully enjoy the swings and the slides. One day it had been raining, and when we went over to the playground everything seemed too wet to play on. She noticed, though, that a little stream of water was making its way across the playground, fed by the leftover rainwater draining off the grass and flowing out to the street.
She decided to pick a particular spot along the small stream and began to build a tiny bridge from twigs that were on the ground. This seemed like quite an inventive way to turn our otherwise rained-out visit to the playground into something fun. I watched in fascination as she tried to hook the twigs together and form them into an arch over the inches-wide stream of rainwater. At her young age, this seemed like quite a mental feat, in that she had to discern how to intertwine the twigs, how to arch them over the water, and how to do so without having the structure fall into the stream itself.
Well, it turns out that the structure became top heavy and collapsed into the stream. Away went most of the twigs. I thought she might get upset, or at least stand up and walk away in disgust. Instead, she pondered the result of her efforts. For about a solid minute, she looked at the stream, she looked at the twigs, she looked around the playground, and was lost in deep thought. I had no idea what she might be thinking. Suddenly snapping into action, she went over and collected some scattered tree branches that had fallen during the storm, along with other of nature's leftovers such as fallen leaves. She then began to build a new bridge across that stream. This version was likely going to last a thousand years.
It was exciting to see that she had the tenacity to continue her quest. She did not give up. She reviewed what had occurred and carefully analyzed the situation. She re-examined the resources available to her. She re-planned what to do next. She carried out her plan. For a small child, these are wonderful signs of thoughtfulness and mindfulness. As you can imagine, I was elated to think that this was a precursor to what she would become as an adult. Indeed, this is how she turned out!
Anchors Can Weigh Us Down
Why did I tell this story? Sometimes, we need to start over. It could be a system you are coding at work that seems to have reached a dead end. Or maybe you have a home project that has gotten bent out of shape. You could try to continue with what you started, but at times this can be worse than just starting over. When you build upon something already started, you often need to bend over backwards to make the new stuff fit. You become anchored to what was already done. If the already-done aspects are not very good, you can become trapped and mired in what was there before.
It’s the curse of the legacy.
Whether it is just because the past exists, or due to tradition, or for whatever other reason, the past can cause our future to become pinched. Meanwhile, someone else comes along and sees a new future, and so they jump past the old ways. Some would say that Uber did this in the taxi business. The taxi business was mired in the old ways: you had to call to get a taxi, the taxi was often dirty and ugly, the driver was difficult to deal with, and so on. Uber opted to skip past the conventional taxi and use a new form of taxi, everyday people driving their everyday cars, whom you could summon via an app on your smartphone. It was a jump that many at the time thought was crazy. Uber was not alone in seeing a different future, and I want to make clear they did not have this vision by themselves, but nonetheless they were able to carry it out in a substantial way.
Let’s revisit this whole notion about starting over and recast it into the field of Artificial Intelligence (AI).
In an earlier era of the AI field, there was a great deal of fanfare and mania about so-called expert systems, sometimes also referred to as knowledge-based systems or rules-based systems. There was an entire sub-industry that sprang up to provide automated tools for knowledge acquisition, for knowledge encoding, and so on. The hope at the time was that this was the breakthrough towards building true AI systems that could exhibit intelligence and intelligent behavior.
For expert system and AI self-driving cars, please see the article: https://aitrends.com/selfdrivingcars/expert-systems-ai-self-driving-cars-crucial-innovative-techniques/
Though a lot was accomplished during that era, it eventually became apparent that expert systems were not going to get us to the true sense of AI. Some of you might recall my writings and speeches from that era that said as much, and in which I cautioned that we shouldn't jump the gun on what expert systems would ultimately gain us. Some now say that we went from an AI spring to an AI winter, wherein, after the let-down of expert systems, there was a sense that we were still a long way from true AI.
During the expert systems heyday, there was also some action in the machine learning realm. The use of artificial neural networks was just beginning to come out of the research labs. Most of the neural networks were being built as prototypes. They were relatively small in size. Just a handful of layers. The neurons numbered maybe in the hundreds at most. The mathematical properties were still being developed and explored. Tools to build neural networks tended to be clumsy and awkward to use. It was pretty much an arcane part of the AI field.
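To give a sense of that scale, here is a minimal sketch of the kind of small feedforward network typical of that era, written with modern NumPy for convenience (the tools of the day were nowhere near this pleasant); the layer sizes are illustrative assumptions.

```python
import numpy as np

# A minimal feedforward network of the modest scale described above:
# a handful of layers and a few hundred neurons in total.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative layer sizes: 64 inputs -> 128 hidden -> 64 hidden -> 10 outputs.
sizes = [64, 128, 64, 10]
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    # Propagate an input vector through each layer in turn.
    for w, b in zip(weights, biases):
        x = sigmoid(x @ w + b)
    return x

print(forward(rng.normal(size=64)).shape)  # (10,)
```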
The grand convergence of lower cost processors, higher performance processing, ready access to large data sets, and other factors then prompted artificial neural networks to regain attention. And, indeed, it has become the darling of the AI field. Seemingly impressive feats involving vision processing. Used for doing foreign language translation. Winning at games such as Go. Etc.
Generally, most who are in the know would agree that these artificial neural networks are not a breakthrough, in the sense that it has yet to be shown that this approach will lead us to truly intelligent systems. They are an incremental advance and appear to provide progress forward.
Where in this is the ability to exhibit common sense?
See my article about common sense reasoning and AI self-driving cars: https://aitrends.com/selfdrivingcars/common-sense-reasoning-and-ai-self-driving-cars/
Where in this is the “spark” that we consider the nature of human intelligence?
See my article about the Turing Test and AI self-driving cars: https://aitrends.com/selfdrivingcars/turing-test-ai-self-driving-cars/
Is It Just Scale Or Something More?
There are proponents of the artificial neural network approach who say we just haven't hit scale as yet. We have some seemingly large-scale neural networks involving thousands upon thousands of artificial neurons. But the human brain is estimated to have 100 billion neurons, and an estimated 100 trillion connections.
No man-made artificial neural network yet approaches that magnitude. The question many are asking: if we could actually create an artificial neural network of that same size, would we suddenly have ourselves a functioning mind?
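To make the gap concrete, here is a quick back-of-the-envelope comparison using the brain estimates just cited; the artificial network figures are purely illustrative assumptions, not measurements of any particular system.

```python
# Back-of-the-envelope scale comparison, using the figures cited above.
brain_neurons = 100e9        # ~100 billion neurons (estimate cited above)
brain_connections = 100e12   # ~100 trillion connections (estimate cited above)

# An assumed large artificial network: illustrative numbers only.
ann_neurons = 50_000
ann_connections = 10_000_000

print(f"Neuron gap:     {brain_neurons / ann_neurons:,.0f}x")          # 2,000,000x
print(f"Connection gap: {brain_connections / ann_connections:,.0f}x")  # 10,000,000x
```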
In essence, if you have two things that are about the same size, one a biological embodiment and the other some kind of machine or automation embodiment, will the machine version then be as capable as the biological version?
We don't know, but we do know that the way in which we simulate neurons in an artificial neural network is not the same as the biological implementation in an actual brain, and so already we have a difference between the two. Presumably, the artificial one is inherently inferior just by the "mechanics" of things alone, and so right away one might doubt that sheer counts of neurons and connections will be enough for this simulated approach to rise to the same pinnacle.
Even if we could get the housing to be the same, what about the contents?
Indeed, there is a belief that the human brain is more than just a bucket of neurons and connections. There is perhaps some kind of pre-wiring and pre-setup that turns this collection of Lego bricks into something special that can ultimately showcase intelligence. If that's the case, we need to somehow get our artificial version to become more like that. Or, we can hope that there is more than one way to skin a cat, meaning that maybe we can achieve intelligence via some means other than the one we are aware of today (the wetware brain).
Are we though maybe trapped already in our ways?
For the artificial neural networks of today, we sometimes need to present millions of data instances to get them to pattern onto something. Want to be able to find cats in photos? First run millions of cat photos through the neural network for training purposes. You might then get a good "cat" image detector. But it is possible that, with just a few pixel changes, you could present the trained neural network with an image that does contain a cat, one that a human would readily detect, and yet the neural network might not.
Neural networks as we know them so far are brittle. They also require an immense amount of training data samples. They are very narrow in their focus.
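One well-known way to demonstrate this brittleness is the fast gradient sign method (FGSM), which produces exactly the kind of few-pixel change described above. Below is a minimal sketch using PyTorch; the tiny untrained model is merely a stand-in for a real trained cat detector, and all names are illustrative.

```python
import torch
import torch.nn as nn

# Stand-in "cat detector": a tiny untrained classifier over 32x32 RGB images.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))  # 0 = cat, 1 = not-cat
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in cat photo
label = torch.tensor([0])                             # labeled as "cat"

# Compute the loss gradient with respect to the input pixels.
loss = nn.functional.cross_entropy(model(image), label)
loss.backward()

# FGSM: nudge every pixel a tiny amount in the direction that increases loss.
epsilon = 0.01
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

# A human sees virtually the same picture; the classifier's answer can flip.
print(model(image).argmax().item(), model(adversarial).argmax().item())
```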
Does a small child need to look at millions of cat images to figure out what a cat looks like? Don’t think so. How does a small child know what a cat looks like, if they haven’t seen millions or even thousands of cats to pattern after? Somehow, the small child accomplishes such a feat. No one knows how.
There has been a long-standing assumption that the human brain, when a baby is first born, contains nothing much in it. It is pretty much devoid of what we consider knowledge. Then, as the baby encounters the world around them, the brain soaks in the information and begins to formulate intelligence. In a seemingly magical way, the baby increases in intelligence and becomes a child, and the child gradually increases in intelligence and becomes the adult.
Is the baby's brain indeed a true blank slate? Is it simply a collection of neurons that start out empty and then become filled in as the intelligence fermentation process gets underway?
There are cognitive scientists who would say that even the youngest baby has some form of neural wiring that provides an innate ability for things like object representation, an approximate number sense for counting, some kind of built-in geometric navigation, something that enables the use of language, and so on. We might not be able to communicate with a young baby per se, since their language skills and motor skills don't readily allow it, but nonetheless inside that head is pre-wiring that, right out of the gate, gives that human a huge step towards intelligence.
It would be as though we made sure that every artificial neural network started up with an underlying structure and content essential for moving forward. A kind of bootstrap, as it were. It would be an "innate" capability, allowing that neural network to go beyond some narrow focus of only being able to play the game Go or detect the image of a cat.
But, we don’t have anything remotely like that as yet.
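To make the bootstrap idea concrete anyway, here is a purely illustrative sketch in which a network starts with a frozen, pre-wired "innate" block alongside ordinary trainable weights; what that innate wiring should actually contain is precisely the unsolved problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "innate" structure the network is born with (here, trivially an
# identity wiring; nobody knows what the real pre-wiring should look like).
INNATE = np.eye(32)

# Ordinary weights, randomly initialized and updated during training.
trainable = rng.normal(0, 0.1, (32, 32))

def forward(x):
    core = np.tanh(x @ INNATE)        # innate processing, never updated
    return np.tanh(core @ trainable)  # task-specific layer, learned on top

print(forward(rng.normal(size=32)).shape)  # (32,)
```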
This then returns me to my story about how my daughter took a look at the fallen twigs and stepped back to re-think how to solve the problem at hand. In the machine learning realm, maybe we should be putting our efforts towards figuring out the bootstrap. Until we have that figured out, the rest of what we are doing with neural networks might be essentially for naught. We are trying to use the neural network structures that we know today and building upon them to get towards intelligence.
Maybe we are building in a manner that requires a redo. Is there a means to develop the innate core, such that once we've nailed it, the rest of what we want to accomplish will be stepwise and come to us like so many dominoes falling one after another?
AI Self-Driving Cars Reboot
What does this have to do with AI self-driving cars?
At the Cybernetic AI Self-Driving Car Institute, we are pursuing both the conventional approach towards developing AI self-driving cars, and we are simultaneously pursuing the “outlier” notion that the only way we might really get to a true AI self-driving car is by a more radical approach to doing so.
Let's suppose that all the auto makers and tech firms trying to create AI for true Level 5 self-driving cars are not quite able to achieve it. We all keep pushing the existing approaches of machine learning as we know it today, we keep putting in faster and faster processors, and yet we don't get to true Level 5. A true Level 5 self-driving car is one that can do whatever driving a human can do, without needing any human intervention.
Maybe we get to a Level 5 self-driving car that's pretty good, and seems to cover most of what a human driver can do, but not really all of it. All of us reach a point of not getting any further. We end up with self-driving cars that can handle, say, 95% of the driving task, with 5% still left over. Upon realizing that we can't get that last 5%, we all agree that AI self-driving cars need to be separated into their own lanes and treated like a theme park ride, or that other protective measures must be taken.
As an overall framework, an AI self-driving car involves these major aspects:
- Sensor Data Collection
- Sensor Fusion
- Virtual World Model
- AI Action Plan Updating
- Car Controls Commands
See my article about a framework for AI self-driving cars: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/
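To show how those five stages might chain together, here is a skeletal sketch; every function, field, and sensor name is a hypothetical placeholder rather than code from any actual self-driving system.

```python
# Skeletal sketch of the five-stage framework above; all names are placeholders.

def collect_sensor_data(sensors):
    # Stage 1: gather raw readings (e.g., cameras, radar, LIDAR).
    return {name: read() for name, read in sensors.items()}

def fuse_sensors(raw):
    # Stage 2: reconcile the readings into one unified picture of the world
    # external to the car (trivially merged here as a stand-in).
    return {"obstacles": [r for r in raw.values() if r is not None]}

def update_world_model(world, fused):
    # Stage 3: track where the car is, where it is headed, and what is nearby.
    world["nearby"] = fused["obstacles"]
    return world

def update_action_plan(world):
    # Stage 4: revise or devise the driving plan based on the world model.
    return "brake" if world["nearby"] else "cruise"

def issue_car_controls(plan):
    # Stage 5: translate the plan into steering/brake/throttle commands.
    print(f"command: {plan}")

# One pass through the loop, with stand-in sensors.
sensors = {"camera": lambda: "pedestrian", "radar": lambda: None}
world = {"position": (0.0, 0.0), "destination": (1.0, 1.0)}
world = update_world_model(world, fuse_sensors(collect_sensor_data(sensors)))
issue_car_controls(update_action_plan(world))  # prints "command: brake"
```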
In terms of sensor data collection, this appears to be a principally physical kind of perception task. It might be likened to the human sensory capabilities. There is, though, mental processing involved when we humans see things with our eyes and hear sounds with our ears. Thus, mental processing comes into play even in something that otherwise seems like a straightforward peripheral device. Does a baby come with a brain that has already been pre-wired to best make use of its peripheral devices, and to integrate this into the rest of the mental processing of thinking?
For most of the AI self-driving cars to date, the sensory devices don't do much beyond collecting raw data, at times performing some minimal activities such as compression or transformations. When sensor fusion happens, the data collected from the myriad sensors needs to be reconciled and used to create some kind of unified indication of what is going on in the world external to the self-driving car. This is then fed into a virtual world model that keeps track of where the self-driving car is, where it is trying to go, and other facets. The AI action plans are then updated or devised, and ultimately the AI instructs the self-driving car to take some form of action.
If you take a teenager and try to teach them to drive, do they require thousands upon thousands of driving journeys to figure out how to drive a car? Nope. They can often readily mentally grasp the nature of the driving task, and it becomes more of an effort to coordinate their body to the driving task and less so the mental aspects of the driving task.
The conventional approach to developing an AI self-driving car is to take a blank-slate and try to make it into something that can drive a car. Without an underlying innate capability of the nature that the human brain seems to start with, we might be going at this in the wrong way. We might need to first solve the innate problem, and once that’s happened, layering on top to drive a car might be relatively easy and get us to the 100% goal of doing the driving task.
In that manner, perhaps we need to start over and solve the innate capability problem first, and then do the self-driving car aspects. This, though, would potentially mean that we wouldn't see self-driving cars right away, and we might all be discouraged, since it would not provide an immediate solution to the problem at hand of wanting to have self-driving cars. For the moment, we're all kind of locked into making a bridge with twigs. If it turns out that bridge isn't sturdy enough and won't really do the job, we might need to take another look around and try instead to solve the innate capability problem first. By the way, anyone who cracks the code on innate capability will likely win a Nobel Prize, and open the door to the AI we all envision someday seeing arise.
Copyright 2018 Dr. Lance Eliot
This content is originally posted on AI Trends.