Human Back-up Drivers for AI Self-Driving Cars

By Lance Eliot, the AI Trends Insider

How would you like to work as a human back-up driver in a state-of-the-art AI self-driving car?

Sounds glamorous. You can impress your friends and colleagues by bragging about going around town in the future of automobiles. You are the future. In a sense, you feel like an astronaut that is taking us to new planets and to new horizons.

Or, there’s another view.

You sit in a car all day long, waiting to see if you need to do anything. Most of the time, you essentially do nothing. You are a cog in the great AI machine. Machines are taking over, and you are helping this to happen. You are the enemy of humanity. In the parlance of Star Trek, you are a dunsel (a term used in the fictional series by the Federation to refer to something that serves no useful purpose).

What’s the truth?

The truth lies mostly toward the less glamorous side. Indeed, as I’ll explain next, it’s a thankless kind of job with high stress, and to do it right you need nerves of steel, incredible patience, and constant alertness. This is not for the faint of heart. It is often long stretches of monotonous boredom, punctuated by moments of pure terror and semi-panic.

Unfortunately, the manner in which some auto makers and tech firms select and field their human back-up drivers is not given the utmost attention. That’s troubling, since the whole notion of the human back-up driver is that when the AI cannot handle a situation or falters, the human back-up driver takes over the controls.

Thus, this is a life-or-death kind of job.

If the human back-up driver or operator does not do their task at the right time, it could mean that the self-driving car will crash or otherwise get into dire trouble. This could harm or kill the human back-up driver, it could harm or kill any other human occupants, it could harm or kill humans in other nearby cars by colliding with them, it could harm or kill pedestrians, it could harm or kill bicyclists, and so on.

You might at first say that any Uber or Lyft driver could readily do this job. Not exactly. If you are a hired ridesharing driver, your job consists of driving a car. You know that all of the time you are behind the wheel, you are driving the car. Your attention is likely focused on the driving task. It’s what we normally consider the act of driving.

In contrast, as a human back-up driver, you are behind the wheel, but you are not actively driving the car. You are supposed to act as though you are driving the car, in the sense that your attention is riveted to the road and the driving environment, and you are poised like a cat, ready to pounce and take over the controls. Let’s pretend that you try doing this for one hour. During that hour, you aren’t actively driving the car, but you know that at a moment’s notice you might need to do so.

If you cared about this, you’d likely be exhausted at the end of the hour. It’s like a deadly game of having knives thrown at you. You watch them coming, you need to decide in a split second whether any will hit you, and you might need to suddenly jump into action to catch one before it does. You can’t predict beforehand how many knives are going to be coming at you. It all happens in real-time. They are endlessly coming at you. One after another, after another, etc.

Suppose that you did this for one hour, and then I asked you to do it for say eight hours at a stretch. And, I asked you to do this for five days a week. And I asked you to do this week after week.

What would happen?

For some, the stress would be overwhelming and they’d likely have a difficult time continuing with this job.

For others, they might actually like being on-the-edge and maybe become proficient.

For many, it becomes a job in which you eventually start to let your guard down. Suppose that rather than a continuous stream of knives coming at you, I instead tossed one at you from time to time, but at completely unpredictable times. The odds are that you’d become complacent. You would know that most of the time you wouldn’t need to dodge or catch the knives. It would only happen on occasion, and so you would naturally begin to be less attentive to the task at hand.

Suppose further that I sometimes threw a knife at you, but other times I threw a relatively harmless water balloon. The water balloon might smack you, it might sting a little bit when it hits you, and you’d get wet. Overall, though, you’d be OK. Now, what happens to your attention span? You don’t know when something is coming at you, and in some cases it is scary and possibly a killer (the knife), while other times it’s going to hurt just mildly (the water balloon).

If you are the human back-up driver, there are going to be cases whereby you take over the controls of the self-driving car when it was heading into a near-death situation, such as veering over a cliff or ramming into another car. There might also be situations where the self-driving car starts to go faster than the speed limit, and perhaps you are supposed to prevent it from violating the traffic laws, and so you need to take over in those circumstances too. These are like the knives versus the water balloons.

You might be interested to know that there have been lots of studies of rats and what happens to them under stressful situations. Experiments have run similar kinds of low-stress and high-stress trials on rats, randomly administering a severe shock versus a puff of irritating air. As you might guess, the rats eventually become erratic in their responses since they can’t well anticipate what is going to happen or how and when to react to it. Please know that I am not suggesting that human back-up drivers are akin to rats; I am simply saying that the behavior of even the simplest creatures can become befuddled by these kinds of hit-and-miss, high-stress situations.

At the Cybernetic Self-Driving Car Institute, we are developing ways to systematically aid the human back-up drivers in their driving efforts, along with providing guidelines as to how to best identify, select, hire, train, field, and keep engaged these crucial self-driving car operators.

Notice that I said that these human back-up operators are crucial. Here’s why.

If we are going to have self-driving cars learning to drive while on our public roadways, we need to be comfortable that the risks of the self-driving car going awry are low enough that we are willing to have the self-driving cars mixing with society. You can think of this like having novice teenage drivers on the roadways, but with even less proficiency and awareness. Though the teenager might have emotions that inadvertently cause them to make a wrong move, the emotionless AI software does not have the same smarts as a teenager and so can make mistakes in plenty of other ways.

Usually with a beginning novice teenage driver, there will be an adult in the car with them (a normal requirement in most states). The adult is supposed to be ready to take over the controls of the car. In any practical sense this is unlikely, as the adult is seated away from the actual controls of the car. Yes, the adult can reach over and try to operate the controls, but in reality this is not very easy to do. In that sense, we generally accept that the adult is there more so to provide coaching to the student driver, rather than truly being there to immediately handle the controls when needed.

For the self-driving car, the human back-up driver is sitting directly at the driving controls, which is much better than having the adult seated next to a student driver. The human back-up driver has unfettered access to drive the car. They can respond immediately as needed, assuming they are paying attention and alert enough to do so.

The human back-up operator is our last line of defense.

If the AI of the self-driving car falters or fails, it is the human back-up driver that is intended to prevent a calamity. Without the human back-up driver, we would all be at much higher risk of having unattended AI self-driving cars that could go wacky with no immediate way to stop them. As an aside, I know that some of you will bring up the topic of remote operators as an alternative to the in-the-car human back-up driver – please note I’ve previously covered the topic of remote operators in other of my writings and won’t repeat those aspects herein.

So, we definitely need the human back-up drivers for us to proceed with AI self-driving cars on our public roadways. Without the human back-up drivers, we either would need to accept a much higher risk of calamity, or we would need to decide that self-driving cars can only go on private roads until they are so well proven that they are permitted onto public roadways. There are some who believe we should be confining self-driving cars to private roads, but the counter-argument by the auto makers and the tech firms is that you’d either need years and years of this before self-driving cars were perfected, or that you’d never be able to “perfect” a self-driving car at all without its encountering the variety of situations faced on our public roadways.

Disengagements

One of the most dreaded words for human back-up drivers is the word “disengagement” since it means that a human operator had to take over the AI self-driving car.

You might think that the human back-up operator should be happy to count disengagements, because it presumably means that the human driver was able to prevent an AI self-driving car from doing something untoward. The operator did what they were supposed to do. They saved themselves and the rest of us from a calamity.

The bad news is that disengagements are considered a problem for the auto makers and tech firms because it is considered a black mark. In theory, if the AI self-driving car is working perfectly, there should never be any disengagements. Therefore, the goal is to have zero disengagements. Thus, if the human driver brings about a disengagement, it tacitly is a sign that the AI is not perfected. It means that there’s something wrong with the AI self-driving car.

You might say that it is unfair to consider disengagements in this manner. For example, suppose that the AI self-driving car blew a tire, which has presumably nothing to do with the car being a self-driving car, and suppose the human driver took over. Well, the counter-argument is that the AI should have figured out how to deal with the blown tire. The AI is supposed to be able to do anything a human driver can do, therefore, there should never be a need for a human driver to take over. If the human driver takes over, it means that the AI wasn’t as good as a human driver.

This produces added stress onto the shoulders of the human back-up driver. They know that if they do a disengagement, it generally means that something is amiss with the AI self-driving car, but the auto maker and tech firm don’t want that to happen. On the other hand, if the human driver does not do a disengagement, and if the self-driving car slams into a wall, it’s death and destruction as a result. Certainly the human back-up driver does not want that to happen.

Shouldn’t the AI developers be enthusiastic about a disengagement, in the sense that it possibly tells them that there’s something in the AI that needs to be further developed or enhanced? Wouldn’t they want to know? The whole point of the roadway testing is to find the bugs and imperfections, learn from them, and fix and improve the AI, until the point at which it becomes a true self-driving car. The more clues provided, the sooner this can get accomplished.

Well, it’s not that easy. Right now, many of the regulations require the auto makers and tech firms to publish the counts of disengagements. Sadly, the media has grabbed hold of these numbers and uses them to pound away at whether the auto maker or tech firm is getting closer or further away from having a true self-driving car. As such, it behooves the auto maker and tech firm to try and keep the number of disengagements as small as possible. Of course, they already presumably want as small a number of disengagements as possible anyway, since it suggests that their self-driving car is getting closer to being ready to be a true self-driving car.

But this also possibly distorts the nature of the testing. It’s reminiscent of the public relations nightmares faced by companies that make rockets. When they do a rocket test, the media will howl to the rafters when the rocket goes amiss or explodes on the pad. This proves that the rocket is not ready for prime time, says the media. The stock price of the rocket company plummets. How are they supposed to be able to do genuine testing if they are going to get castigated each time that a test shows something useful to aid them toward perfecting the rocket?

You might then be tempted to only test rockets that you know will work perfectly, even if it means that you aren’t readily making progress toward making a better rocket. This same logic can be applied to self-driving cars. It might be easier to just have your self-driving car driven in situations where there’s little chance of a disengagement. Have the self-driving car drive around a small town that has little variety in terms of pedestrians darting into the street or wild human drivers, which will hopefully reduce the number of disengagements.

The media, when reporting disengagements, would then make it seem that one self-driving car is obviously better than another, simply due to the lesser number of disengagements. This can be misleading and foolish because we aren’t comparing disengagements on a normalized basis, such as per mile driven or some such metric. Even then, though, miles driven in a small town are not the same as miles driven in a big city with tight streets and tons of traffic.
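To make the normalization point concrete, here is a minimal Python sketch of comparing disengagement counts on a per-mile basis; the campaign figures, field names, and the 1,000-mile denominator are hypothetical choices for illustration, not any regulator’s actual metric.

```python
from dataclasses import dataclass

@dataclass
class TestCampaign:
    """Hypothetical summary of one company's road-testing campaign."""
    miles_driven: float
    disengagements: int
    environment: str  # e.g., quiet small town vs. dense big city

def disengagements_per_1000_miles(campaign: TestCampaign) -> float:
    """Normalize the raw disengagement count by driving exposure."""
    if campaign.miles_driven <= 0:
        return float("nan")
    return 1000.0 * campaign.disengagements / campaign.miles_driven

# Same raw count, very different rates once exposure is considered.
small_town = TestCampaign(miles_driven=50_000, disengagements=40, environment="small town")
big_city = TestCampaign(miles_driven=5_000, disengagements=40, environment="big city")
print(disengagements_per_1000_miles(small_town))  # 0.8
print(disengagements_per_1000_miles(big_city))    # 8.0
```

Even so, a mile in a small town is not the same as a mile in a big city, so a per-mile rate still only partially levels the comparison.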

One issue is that there’s no clear-cut, standardized definition that everyone is using for a disengagement. The simplest definition is that it involves the human taking over the control of the self-driving car. But that does not capture why they did so. Some states require the why, some do not. Some allow it to be any kind of open text, and so it is difficult to gauge what the reason really was and it is problematic to compare it to others that are also reporting disengagements.

We also would likely want to know what the circumstance was and the length of time of the disengagement. If the human driver took over for a split second, it presumably might mean that the AI was just needing a nudge, while if the human driver took over for 20 minutes it might mean that something more serious was afoot with the AI.  But, this is also hard to compare, since some firms have a policy that once a disengagement occurs, the human driver is supposed to continue doing the driving and bring the self-driving car to a spot where the developers can inspect it or otherwise review the self-driving car.
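As a rough illustration of what a more comparable disengagement report might capture, here is a sketch of a record that carries a reason category and the length of the takeover; the categories, field names, and the two-second “nudge” threshold are my own assumptions, not any state’s reporting schema.

```python
from dataclasses import dataclass
from enum import Enum

class DisengagementReason(Enum):
    """Illustrative reason categories; actual reporting requirements vary by state."""
    SAFETY_CRITICAL = "imminent collision or road departure"
    TRAFFIC_LAW = "speeding or other rule violation"
    HARDWARE_FAULT = "blown tire, sensor dropout, etc."
    PRECAUTIONARY = "driver judgment call, cause unclear"

@dataclass
class DisengagementRecord:
    timestamp_s: float            # seconds since the start of the shift
    reason: DisengagementReason
    takeover_duration_s: float    # how long the human kept control
    free_text: str = ""           # the open-text "why" that some states allow

def is_brief_nudge(record: DisengagementRecord, threshold_s: float = 2.0) -> bool:
    """A split-second takeover may mean the AI only needed a nudge;
    a long takeover suggests something more serious was afoot."""
    return record.takeover_duration_s <= threshold_s
```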

Consider another important aspect about disengagements, namely, was the disengagement a valid one or an invalid one?

Suppose the human driver opts to do a disengagement, doing so because they perceived that an accident was about to occur. How do they prove this? The AI developers might say that there was nothing wrong with the AI and it could have handled the situation. The human driver insists that they felt that the AI wasn’t slowing down or swerving, or whatever, and so they judged that it was time to take over. But, the AI team might insist that this was mistaken by the human and the human should have allowed the AI to see things through.

You tell me, who’s right and who’s wrong?

It is hard to “prove” that something bad could have happened, and so the human driver is once again under great stress. They not only don’t know when the moment to take over will arise, they might also be second-guessed as to why they did the takeover. Furthermore, they will likely be considered skittish if they do too many takeovers. The odds are that a high number of takeovers or disengagements could lead to them getting fired.

The auto maker or tech firm would likely say that someone with excessive disengagements is not a good back-up driver because they are needlessly stopping the AI from driving the car. Ideally, the human back-up driver should only be doing valid disengagements and not doing any invalid disengagements.

This is the formula that is at times used:

Optimal # of disengagements = Maximum (valid disengagements) – Minimum (invalid disengagements)

In theory, we want the human back-up driver to always do a valid disengagement, presumably therefore saving the self-driving car from getting into a calamity, and we want to minimize the number of invalid disengagements, preferably being zero. We’d of course also like to have the maximum number of valid disengagements be zero, since this means that no disengagements were needed at all.
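One way to make that informal formula concrete is a simple tally of reviewed takeovers, where each is labeled “valid” or “invalid” after the fact; keep in mind, as discussed above, that the labeling itself is contentious. This is a sketch under that assumption, not a standard industry metric.

```python
from typing import Dict, Iterable

def disengagement_tally(review_labels: Iterable[str]) -> Dict[str, int]:
    """Count takeovers judged 'valid' or 'invalid' in after-the-fact review.

    The stated ideal: zero invalid takeovers, every needed takeover performed,
    and in the long run even the valid count trending to zero because no
    takeovers are needed at all.
    """
    valid = sum(1 for label in review_labels if label == "valid")
    invalid = sum(1 for label in review_labels if label == "invalid")
    return {"valid": valid, "invalid": invalid, "total": valid + invalid}

print(disengagement_tally(["valid", "invalid", "valid"]))
# {'valid': 2, 'invalid': 1, 'total': 3}
```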

Take a look at Figure 1.

I show a disengagement curve that depicts, over time, the number of driving incidents and the frequency of disengagements. What should happen is that at first the frequency is high, and gradually, after those issues are fixed, the number begins to drop. At some point, there are fewer and fewer left. The remaining ones are often in some obscure aspects of the AI that rarely occur.

There is a stress line that rises from left to right. The stress at the start is not quite so high because the disengagements are occurring with high frequency and are easy for the human driver to identify and undertake. Gradually, as the frequency drops, the stress rises because now the human driver does not know when the takeover will need to happen. As mentioned earlier, they reach a point of not knowing when to be alert and when they can relax.

There are two zones. The first zone, zone A, is the predictable repeats. The second zone, zone B, is the unpredictable intermittent instances. When a human driver gets into zone B, they are at a high stress level as they await that sudden moment at which they’ll need to spring into action. It probably won’t be something obvious. It will likely be a driving situation that’s oddball and will occur seemingly out of left field.

At this juncture too, the AI developers are likely hoping that there won’t be any disengagements. The perception is that things are coming along swimmingly. This adds pressure to the human driver. The human wants to avoid making an invalid disengagement, while not incurring the injury or death that might happen if they fail to make a valid disengagement.

I’ve seen some crazy things, like one company that tied the pay of its human drivers to the number of disengagements. Imagine that you know you’ll get paid more if you avoid doing a disengagement, and so now you figure it’s worth taking chances, like running with the bulls in Pamplona, in order to have a fatter paycheck (you calculate the odds of getting injured or killed differently due to the pay aspects). Trying to tie pay to having a high number of disengagements is equally problematic, because then the human driver will just keep doing disengagements right and left to get paid more.

There was one firm that set a quota on disengagements. During your 8-hour shift, we expect 2.5 disengagements, the human drivers were told. What do you do with that? Do you wait as long as possible during your shift, and then force the 2 or 3 disengagements if you’ve otherwise had none? A quota or threshold for this kind of work has little practical value.

Human Engineers

Some firms opt to have the human back-up driver be accompanied by a human engineer in the self-driving car. The engineer usually sits in the backseat and has some form of monitoring equipment to detect what the self-driving car is doing. This can be handy since the engineer can quickly assess what the self-driving car was doing when a disengagement occurred. The engineer might not be able to do a full diagnosis by themselves, but at least they’ll have had direct exposure to whatever was going on with the self-driving car and the driving situation.

I say this because it can be hard after a self-driving car journey to recreate what happened when the human driver took over. Sure, you can inspect the camera footage and the radar data, and so on, but there’s an element of having been there, being in the moment, which can add valuable insight that any second guesser sitting in an office or lab three days later is not going to have handy.

These engineers serve another purpose which is often unstated and unheralded. They can talk with the human driver and keep them company. This can be a big boost to the human back-up driver. It can spur the back-up driver to be more attentive and stay attuned to the driving situation. Otherwise, things can get pretty lonely for the human back-up driver. The human back-up driver is more apt to let their mind wander when there isn’t an engineer present.

You could counter-argue that maybe the engineer will distract the human back-up driver. Maybe it’s better to allow the human back-up driver to be solitary and remain utterly focused on the driving task. I’d say that might make sense for very short periods of time, but when you are thinking about a 4-hour shift or an 8-hour shift, I’d tend to go with having that engineer in there.

Some auto makers or tech firms that view the engineer as only there for purposes of monitoring are tempted to say that they should get rid of those engineers from being in the self-driving car. Just record whatever happens and you can always play it back later on. Plus, having that engineer in the self-driving car drives up your costs, seemingly needlessly. The problem is that this viewpoint sees the engineer only as a monitoring tool, and not as a fellow human that can interact with the human back-up driver. You need to consider the full range of benefits of having the engineer and weigh that against the added cost; if you undervalue the benefits, then it does mistakenly seem like the added cost is not worthwhile.

Attention of the Back-up Driver

Some say that the solution to ensuring the attention of the human back-up driver involves adding automation to keep track of the human driver and spark them to remain focused on the driving task. For example, the steering wheel can have a mechanism that keeps track of the driver’s hands and alerts them to keep their hands on the steering wheel. Another is facial recognition to detect that their head is facing forward. Another is eye movement recognition to detect that their eyes are locked on the road ahead and not looking off to the side or downward.
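Below is a simplified sketch of how those three monitoring signals might be combined into a single attention alert; the signal names, the 0.8 gaze threshold, and the alert messages are illustrative assumptions, not any vendor’s actual interface.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DriverState:
    """Illustrative readings from hands-on-wheel, facial, and eye-tracking sensors."""
    hands_on_wheel: bool
    head_facing_forward: bool
    eyes_on_road_fraction: float  # share of the last few seconds spent looking at the road

def attention_alert(state: DriverState, gaze_threshold: float = 0.8) -> Optional[str]:
    """Return an alert message if any monitored signal suggests inattention."""
    if not state.hands_on_wheel:
        return "Place your hands on the steering wheel"
    if not state.head_facing_forward:
        return "Face forward"
    if state.eyes_on_road_fraction < gaze_threshold:
        return "Keep your eyes on the road"
    return None  # physically engaged, though not necessarily cognitively engaged
```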

These are certainly valuable ways to help keep the human driver glued to the driving of the self-driving car. We are seeing these same kinds of systems being placed into Level 2 and Level 3 cars, for which the human driver is still responsible for the driving task, even if the automation is performing some aspects of the driving task.

Still, as mentioned before, it’s hard to remain alert when you are not actually driving the car. Yes, you are seated in the driver’s seat. Yes, your head is facing forward. Yes, your eyes are on the road. Yes, your hands are at the ready on the steering wheel (but not actually steering). Does this though provide sufficient engagement to ensure that the human back-up driver is ready to take over the self-driving car?

We also need to consider the Human Computer Interface (HCI) aspects of the human back-up driver and the AI of the self-driving car. Will the AI alert the human back-up driver when something is starting to go amiss, or is the back-up driver expected to figure this out on their own? If the AI does alert the human back-up driver, in what manner does it do so, such as via audio tone, flashing lights, or verbal messages? What is the time delay between trying to inform the back-up driver and them being able to comprehend what the AI is trying to tell them?
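To illustrate the timing question, here is a hedged sketch of an alert escalation ladder that keeps prompting the back-up driver until a takeover is detected; the modalities, delays, and polling interval are assumptions for illustration, not a real system’s interface.

```python
import time
from typing import Callable, List, Tuple

# Illustrative escalation ladder: tone first, then flashing light, then a spoken message.
ESCALATION: List[Tuple[str, float]] = [
    ("audio_tone", 0.0),       # immediately
    ("flashing_light", 1.0),   # if no takeover after 1.0 s
    ("verbal_message", 2.5),   # if still no takeover after 2.5 s
]

def alert_until_takeover(takeover_detected: Callable[[], bool],
                         issue_alert: Callable[[str], None]) -> bool:
    """Escalate alerts until the back-up driver takes over or the ladder runs out.

    The gap between issuing an alert and the human comprehending and acting on it
    is exactly the time delay discussed in the text.
    """
    start = time.monotonic()
    for modality, delay_s in ESCALATION:
        while time.monotonic() - start < delay_s:
            if takeover_detected():
                return True
            time.sleep(0.05)  # poll for the driver's response
        issue_alert(modality)
    return takeover_detected()
```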

There is the possibility that the AI will try to convey one thing, such as that there’s a kid on a bike to the left of the car so watch out, when maybe the real problem is that a huge truck is coming at the self-driving car from the right and will take out everyone and everything. The back-up driver won’t know for sure that the AI knows what is really happening, nor whether it is conveying something relevant to the back-up driver.

If the AI doesn’t provide any kind of warnings to the back-up driver, this means that the back-up driver has no idea whether the AI is comfortable with the driving situation or not. The back-up operator needs to second guess the AI. Maybe the AI knows what to do. Maybe the AI has no idea what to do. The back-up operator has no immediate means to ascertain those aspects.

We have been experimenting with having the back-up operator carry on a conversation with the AI of the self-driving car, similar to how the back-up driver might talk to a teenage novice driver; this allows the human driver to find out what the AI is doing and also further engages the human driver in the driving task.

There have also been suggestions of using gamification to engage the human driver. One approach involves having a Heads Up Display (HUD), and the human driver is watching it and kind of playing a game of being able to keep up with what it shows. This HUD is showing aspects of the roadway and so it directly pertains to the task at hand of keeping aware of what the situation of the self-driving car is.

These are ways to keep the human back-up driver physically engaged, such as their hands and their head posture, and ways to keep the human back-up driver cognitively engaged (keeping their mind on the driving situation).

For training purposes, some of the auto makers or tech firms do barely any training of their human back-up drivers. Pretty much, if you can breathe and can drive a car, they let you do this task. Others take this a bit more seriously and train their drivers on what the self-driving car is doing, thus increasing the chances of sounder decisions about disengagements. I tend toward wanting the human back-up driver to feel that they are indeed part of the solution toward achieving self-driving cars, rather than just a kind of bus driver that maybe will take the wheel but otherwise has no real importance to the matter. I’d say that motivation can be a big plus for having an engaged human back-up driver.

How safe are we?

If the auto makers and tech firms don’t do a good job of identifying, selecting, training, fielding, and updating their human back-up drivers, they are pretty much putting us all at a heightened risk. There will be a false sense of being “risk free” simply because a human is sitting in the self-driving car and ready to drive. The reality is that these human back-up drivers are key to preventing calamities, which can make-or-break the advent of self-driving cars. Tossing anyone into this role, paying them minimum wage, and pretending that you have human back-up operators is both a sham and a shame.

It’s not much of a glamorous job. We all will only hear about the human back-up operators when a self-driving car goes awry and the human driver did nothing or took the wrong action. The rest of the time, they are out-of-sight and out-of-mind. I’d implore the auto makers and tech firms to not treat this role as something insignificant. They are the back-up to your self-driving car, and to the future of self-driving cars, along with my safety and everyone else’s safety while you are testing your AI self-driving cars on our public roadways. That’s a big deal.

This content is originally posted on AI Trends.