Egocentric Design and AI Self-Driving Cars

By Lance Eliot, the AI Trends Insider

You might find of interest a concept from social psychology known as the actor-observer effect. Before I explain what it is, allow me to share a small story about something that happened the other day.

I was chatting with an AI developer who is creating software for an auto maker; he had approached me after I finished speaking at an industry conference. During my talk, I had mentioned several so-called “edge” cases involving AI self-driving cars. These edge cases involved aspects such as an AI self-driving car being able to safely and properly navigate a roundabout or traffic circle, being able to safely traverse an accident scene, and so on.

See my article about AI self-driving cars and roundabouts: https://aitrends.com/selfdrivingcars/solving-roundabouts-traffic-circle-traversal-problem-self-driving-cars/

See my article about accident scene traversal and AI self-driving cars: https://aitrends.com/selfdrivingcars/accident-scene-traversal-self-driving-cars/

See my article about edge problems for AI self-driving cars:  https://aitrends.com/selfdrivingcars/edge-problems-core-true-self-driving-cars-achieving-last-mile/

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars and also advising other firms about the matter too. Thus, we’re working on quite a number of edge problems.

Well, the AI developer was curious why I cared about the “edge” problems in AI self-driving cars.

An edge problem is one that is considered outside the core of a system. It is deemed less vital, an aspect you can presumably come back to and solve later, after you’ve finished the core. This is not a hard-and-fast rule, in the sense that something one person considers an edge might truly be part of the core. Or, something that really is at the edge rather than the core might still be essential, since without it you are going to have a very limited and potentially brittle core.

Edges are often in the eye of the beholder. Thus, be careful when someone asserts that some feature, capability, or issue is an “edge” problem. It’s an easy way to deflect attention and distract you from realizing that maybe the edge is truly needed, or that the core you are going to get will fall apart because it fails to solve an edge aspect. I’ve seen many people be dismissive of something important by labeling it an edge problem. This can be a sneaky way to avoid having to cover an aspect and instead pretend that it is inconsequential. Oh, that’s just an edge, someone will assert, and then walk away from the conversation or drop the microphone, as it were.

Anyway, back to my chat with the AI developer. I asked him why he was curious about my taking the various edge problems of AI self-driving cars so seriously. It seemed relatively self-evident that these are situations that can occur in real-world driving, and that if we are going to have AI self-driving cars on our roadways, we ought to be able to expect that those self-driving cars can handle everyday driving circumstances. Self-driving cars are a life-and-death matter. If an AI self-driving car cannot handle the driving tasks at hand, things can get mighty dangerous.

His reply was that there was no particular need to deal with these various “edge” problems. As an example, I asked him what his AI would do when it encountered a driving situation involving a roundabout or traffic circle (I’m sure you know what these are: areas where cars circle around a central island and then take an exit radiating from the circle).

He replied that it wouldn’t have to deal with it. The GPS would have alerted his AI that a roundabout was upcoming, and his AI would simply route itself another way. By avoiding the “edge” problem, he said that it no longer mattered.

Really, I asked?

I pointed out: suppose the GPS did not have the roundabout marked and thus the self-driving car went into it anyway? Or suppose the GPS wasn’t working properly and the AI self-driving car blindly arrived at the roundabout? Even if the GPS did indicate it was there, suppose there was no viable alternative route and the self-driving car had to proceed through the roundabout? Was it supposed to always take the long way, assuming such a path was even available? This reminded me of a teenage driver I knew who avoided roundabouts because he was scared of them.
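To make the point concrete, here is a minimal sketch in Python of the routing logic involved. It is purely illustrative and not anyone’s actual code; the function names (plan_route, alternative_route) and return values are my own stand-ins. What it shows is that “just route around it” still collapses back onto needing the roundabout skill whenever the map data is missing or no detour exists.

```python
# Illustrative sketch only: why "route around the roundabout" still needs a fallback.
# All names (plan_route, alternative_route, etc.) are hypothetical, not from any real system.

from typing import List, Optional

def alternative_route(origin: str, destination: str, avoid: str) -> Optional[List[str]]:
    """Stand-in for a real map/routing query; returns None when no detour exists,
    for instance when the roundabout is the only way through."""
    return None

def plan_route(origin: str, destination: str, map_flags_roundabout: bool) -> List[str]:
    # Case 1: the map/GPS data never flagged the roundabout (stale or missing data),
    # so the avoidance logic never triggers and the car reaches the roundabout anyway.
    if not map_flags_roundabout:
        return ["direct route", "must handle roundabout on the fly"]

    # Case 2: the map knows about the roundabout, but there may be no viable detour.
    detour = alternative_route(origin, destination, avoid="roundabout")
    if detour is not None:
        return detour

    # Either way, the driving AI still needs the roundabout skill as a fallback.
    return ["direct route", "must handle roundabout on the fly"]

print(plan_route("home", "office", map_flags_roundabout=True))
```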

He insisted that none of these scenarios would occur. He stood steadfast that there was no need to worry about them. He said I might as well ask what would happen if aliens from Mars came down to Earth; should his AI need to cope with that too?

The Pogo Stick Problem

This brings up another example that’s been making the rounds among AI developers at auto makers and tech firms working on self-driving car systems. It’s the pogo stick problem. The AI self-driving car is going down a street, minding its own business (so to speak), and all of a sudden a human on a pogo stick bounces into the road directly in front of the self-driving car. What does the AI do?

One answer some have asserted is that this will never happen. They retort that the odds of a person being on a pogo stick are extremely remote. If there were such a circumstance, the odds that the person on the pogo stick would go out into the street are even more remote. And the odds that they would do so just as a car was approaching are more remote still, since why would someone be stupid enough to pogo stick into traffic and risk getting hit?

In this viewpoint, the odds are like those of getting hit by lightning. In fact, they would say the odds are even lower, more like getting hit by lightning twice in a row.

I am not so sure that the probability of this happening is quite as low as they would claim. They are also suggesting or implying that the probability is zero. That seems a false suggestion, since I think we can all agree there is some chance it could happen. No matter how small that chance, it is definitely more than zero.
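A quick back-of-the-envelope calculation illustrates the point. The figures below are assumptions I’ve made up purely for illustration, not measured data, yet they show how a seemingly negligible per-mile probability turns into a non-trivial number of occurrences once a fleet racks up serious mileage.

```python
# Illustrative arithmetic only; both figures below are assumed, not measured.
per_mile_probability = 1 / 10_000_000   # assumed odds of a pogo-stick incursion on any given mile
fleet_miles_per_year = 30_000_000_000   # assumed annual miles for a large self-driving fleet

expected_events_per_year = per_mile_probability * fleet_miles_per_year
print(f"Expected encounters per year: {expected_events_per_year:,.0f}")  # about 3,000, not zero
```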

Those who buy into the zero-probability belief will then refuse to discuss the matter any further. They say it is like discussing the tooth fairy, so why waste time on something that will never happen. There are some, though, whom I can at least get to consider the question: suppose it did happen, however remote the odds. What then?

They then seem to divide into one of two camps. The first camp says that if the human was stupid enough to pogo stick into the road directly in front of the self-driving car, whatever happens next is their fault. If the AI detects them and screeches the car to a halt yet still hits them, because there wasn’t enough distance between them and the self-driving car, that’s the fault of the stupid human on the pogo stick. Case closed.
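Here is a small physics sketch of why “the AI detects them and still hits them” is entirely plausible. The speed, latency, and deceleration figures are assumptions chosen for illustration; the formula is simply reaction distance plus braking distance.

```python
# Illustrative stopping-distance sketch; the numeric values are assumptions, not vehicle specs.

def stopping_distance_m(speed_mps: float, reaction_s: float, decel_mps2: float) -> float:
    """Distance covered during the system's reaction time plus braking to a full stop."""
    return speed_mps * reaction_s + (speed_mps ** 2) / (2 * decel_mps2)

speed_mps = 13.4        # roughly 30 mph
reaction_s = 0.5        # assumed sensing plus actuation latency
decel_mps2 = 7.0        # assumed hard-braking deceleration on dry pavement

needed = stopping_distance_m(speed_mps, reaction_s, decel_mps2)
gap_to_person_m = 12.0  # assumed distance at which the pogo rider appears

print(f"Stopping distance needed: {needed:.1f} m, gap available: {gap_to_person_m} m")
print("Braking alone cannot avoid the collision" if needed > gap_to_person_m
      else "The car can stop in time")
```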

The other camp says that we shouldn’t allow humans on pogo sticks to go out onto the road. They believe the matter should be a legal one, outlawing people from using pogo sticks on streets. I point out that even if there were such a law, it is conceivable that a “law breaker” (say, a child on a pogo stick, who I guess would be facing a life of crime by riding it in the street) might wander unknowingly into the roadway. What then? The reply is that we need to put up barriers to prevent pogo-stick-riding humans from going out into the streets. All I can say is, imagine a world in which we have tall barriers along every street across the United States so that we won’t have pogo-stick-riding kids wandering in. Imagine that!

If you think these kinds of arguments seem somewhat foolish (why not just make the AI of the self-driving car able to deal with a pogo-stick-riding human?), you are perhaps starting to see what I call the egocentric design of AI self-driving cars.

There are some firms and some AI developers who look at the world through the eyes of the self-driving car. What’s best for the self-driving car is the way that the world should be, in their view. If pogo-stick-riding humans are a pest for self-driving cars, get rid of the pests, so to speak, by outlawing those humans or by erecting a barrier to keep them from becoming a problem. Why should the AI need to shoulder the hassle of those pogo-stick-riding humans? Solve the problem by instead controlling the environment.

For those of you standing outside this kind of viewpoint, it likely seems a somewhat bizarre perspective. It likely seems to you that controlling the environment is impractical in the real world. The environment is what it is. Take it as a given. Make your darned AI good enough to deal with it. Expect that humans on pogo sticks are going to happen. Live with it.

What’s even more damning is that there are lots of variants beyond just a pogo-stick-riding human that fall into the same classification of sorts. Suppose a human on a scooter suddenly went into the street in front of the self-driving car? Isn’t that the same class of problem? And, with the recent advent of ridesharing scooters, aren’t the odds pretty good that we’ll see this happening more and more?
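One way to see that these really are the same class of problem is that a planner does not have to enumerate every oddball conveyance. The sketch below is illustrative only; the category names and the select_behavior function are my own stand-ins, showing how a single conservative policy can cover pogo sticks, scooters, skateboards, and even unclassified moving objects alike.

```python
# Illustrative taxonomy sketch; class names and policy strings are invented for this example.

VULNERABLE_ROAD_USERS = {
    "pedestrian", "pogo_stick_rider", "scooter_rider", "skateboarder", "cyclist",
}

def select_behavior(detected_class: str) -> str:
    """Map a perceived object class to a driving policy."""
    if detected_class in VULNERABLE_ROAD_USERS or detected_class == "unknown_dynamic_object":
        # One conservative policy covers the whole class, including things never seen before.
        return "slow down, widen margins, prepare to stop"
    return "continue normal driving"

for obj in ("scooter_rider", "pogo_stick_rider", "unknown_dynamic_object"):
    print(obj, "->", select_behavior(obj))
```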

If you are perplexed that anybody in their right mind could somehow believe that the AI of a self-driving car does not need to deal with the pogo-stick-riding human, and worse still the scooter-riding human that is far more prevalent, you might be interested in the actor-observer effect.

Here’s the background about the actor-observer effect.

Suppose we put someone into a room to do some work; let’s make it office-type work. We’ll have a one-way mirror that allows you to stand outside the room and watch what the person is doing. Let’s pretend that the person in the room is unaware that they are being observed. We’ll refer to the person in the room as the “actor” and to you standing outside the room as the “observer.”

At first, there will be work brought into the room, some kind of paperwork to be done, and it will be given to the actor. They are supposed to work on this paperwork task. You are watching them and so far all seems relatively normal and benign. They do the work. You can see that they are doing the work. The work is getting accomplished.

Next, the amount of work brought into the room starts to increase. The actor begins to sweat as they are genuinely trying to keep up with the volume of paperwork to be processed. Even more paperwork is brought into the room. Now the actor starts to get frantic. It’s way too much work. It is beginning to pile up. The actor is getting strained and you can see that they are obviously unable to get the work completed.

We stop the experiment.

If we were to ask you what happened, as an observer you would likely say that the person doing the work was incapable of keeping up with the work required. The actor was a low performer. Had the actor done a better job, they presumably could have kept up. They didn’t seem to know of, or find, a means to be efficient enough to get the work done.

If we were to ask the actor what happened, they would likely say that they were doing well at the start, but then the environment went wacky. They were inundated with an unfair amount of paperwork. Nobody could have coped with it. They did the best they could.

Which of these is right – the actor or the observer?

Perspective Determines What is Seen

It’s not so much a matter of right or wrong as it is a matter of perspective. Usually, the actor, the person in the midst of an activity, tends to look at themselves as the stable part and the environment as the uncontrollable part. Meanwhile, the observer tends to see the environment as the given, and the actor as the focus of attention.

If you are a manager, you might have encountered this same kind of phenomenon when you first started managing other people. You have someone working for you who seems to not be keeping up. They argue that it is because they are being given an unfair amount of work to do. You meanwhile believe they are being given a fair amount of work and it is their performance that’s at fault. You, and the person you are managing, can end up endlessly going round and round about this, caught in a nearly hopeless deadlock, each of you likely becoming increasingly insistent that the other is not seeing things the right way.

It is likely due to the actor-observer effect, namely:

•  When you are in the observer position, you tend to see the environment as a given. The thing that needs to change is the actor.
•  When you are in the actor position, you tend to see the environment as something that needs to be changed, and you are the given.

Until both parties realize the impact of this effect, it becomes very hard to carry on a balanced discussion. Otherwise, it’s like looking at a painting that one of you insists is red, and the other insists is blue. Neither of you will be able to discuss the painting in other, more useful terms until you realize that each of you is seeing a particular color that perhaps makes sense given the nature of your own eyes.

Let’s then revisit the AI developer I spoke with at the conference. Recall that he was insistent that the edge problems were not important. In his view, for the pogo-stick-riding human example, the “problem” at hand was the stupid human. I was saying that the problem was that the AI was insufficient to cope with the pogo-stick-riding human. Why did we not see eye to eye?

His focus was on the self-driving car. In a sense, he’s like the actor in the actor-observer effect. His view was that the environment was the problem and so all you need to do is change the wacky environment. My view was that of the “observer” in that I assert the environment is a given, and you need to make the “actor” up to snuff to deal with that environment.

This then brings us to the egocentric design of AI self-driving cars. Many auto makers and tech firms are filled with AI developers and teams that view the world from the perspective of the AI self-driving car. They want the world to fit what their AI self-driving car can do. This could be considered “egocentric” because it elevates the AI of the self-driving car to being the focus. It does what it does. What it can’t do, that’s tough for the rest of us. Live with it.

The rest of us tend to say, wait a second, they need to make the AI self-driving car do whatever the environment requires. Putting an AI self-driving car onto our roadways is a privilege and they need to treat it as such. It is on the shoulders of the AI developers and the auto makers and tech firms to make that AI self-driving car deal with whatever comes its way.

Believe it or not, I’ve had some of these auto makers and tech firms tell me that we ought to have special roads just for AI self-driving cars. Whenever I point out that self-driving cars will need to mix with human-driven cars, and so the AI needs to know how to deal with cars around it being driven by “unpredictable” humans, the answer I get is that we should devote special roads to AI self-driving cars. Divide the AI self-driving cars from those pesky human drivers.

There are some AI developers who wishfully dream of the day when there are only AI self-driving cars on our roadways. I point out that’s not going to happen for a very long time. In the United States alone we have 200 million conventional cars. Those are not going away overnight. If we are going to be introducing true Level 5 self-driving cars onto our roadways, it is going to be done in a mixture with human-driven cars. As such, the AI has to assume there will be human-driven cars and needs to be able to cope with them.

The solution voiced by some AI developers is to separate the AI self-driving cars from the human-driven cars. For example, convert the HOV lanes into AI self-driving-car-only lanes. I then ask them what happens when a human-driven car decides to swerve into the HOV lane that has AI self-driving cars. Their answer is that the HOV lanes need to have barriers to prevent this from happening. And so on, with the reply always involving changing the environment to make this feasible. What about motorcycles? Answer: make sure the barriers will prevent motorcycles from going into the HOV lane. What about animals that wander onto the highway? Answer: the barriers should prevent animals, or put up additional barriers along the sides of the highway to keep animals from wandering in.

After seeing how far they’ll go on this, I eventually get them to the point where I ask whether maybe we ought to consider the AI self-driving car to be similar to a train. Right now, we usually cordon off train tracks. We put up barriers to prevent anything from wandering into the path of the train. We put up signs warning that a train is coming. Isn’t that what they are arguing for? Namely, that AI self-driving cars are to be treated like trains?

But if that’s the case, I don’t quite see where the AI part of the self-driving car enters into things. Why not just make some kind of simpleton software that treats each car as though it is part of a train? You then have semi-automated cars that come together and collect into a series of cars, as a train does, and proceed along as a train. Some have even proposed this, though I’ll grant them that at least they view it as something like a “smart” colony of self-driving cars that come together when needed, yet remain individually “intelligent” self-driving cars once they leave the hive.
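To show how little “intelligence” the train-like scheme actually demands, here is a toy follow-the-leader controller. It is a bare sketch with made-up names, gap targets, and gains, not a real platooning algorithm: each car merely adjusts its speed to hold a fixed gap behind the car ahead.

```python
# Toy platooning sketch; the Car class, gap target, and gain are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Car:
    position_m: float
    speed_mps: float

def follower_speed(leader: Car, follower: Car,
                   target_gap_m: float = 10.0, gain: float = 0.5) -> float:
    """Nudge the follower's speed toward holding a fixed gap behind the leader."""
    gap_error = (leader.position_m - follower.position_m) - target_gap_m
    return max(0.0, leader.speed_mps + gain * gap_error)

lead = Car(position_m=100.0, speed_mps=25.0)
follow = Car(position_m=85.0, speed_mps=25.0)

follow.speed_mps = follower_speed(lead, follow)
print(f"Follower speed command: {follow.speed_mps:.1f} m/s")  # closes the 15 m gap toward 10 m
```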

See my article about swarm intelligence and AI: https://aitrends.com/selfdrivingcars/swarm-intelligence-ai-self-driving-cars-stigmergy-boids/

See my framework about AI self-driving cars: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

Those who are making AI self-driving cars need to look past an egocentric view. We are not going to have true AI self-driving cars if we continue to try to limit the environment. A true Level 5 self-driving car is supposed to be able to drive a car as a human would. If that’s the case, we ought not have to change anything per se about the existing driving environment. If humans can drive it, the AI should be able to do the same.

I tried to explain this to the AI developer. I’m not sure that my words made much sense, since I think he was still seeing the painting as entirely red, while I was talking about the color blue. Maybe my words herein about the actor-observer effect might aid him in seeing the situation from both sides. I certainly hope so.

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.