Canarying and AI Autonomous Cars

By Lance Eliot, the AI Trends Insider

The coal miner looked furtively at the cage holding a canary.

There was a chance that in this new part of the mine, there could be seeping carbon monoxide gas.

Carbon monoxide is colorless and odorless, and miners had died after unknowingly breathing it in. The hope was that the canary would be the first to signal the presence of the poisonous gas, possibly (sadly) even dying, but at least the workers in the mine would be able to scuttle out and stay alive; the canary would have saved human lives.

You probably are aware that canaries have been used in mines, as depicted on TV shows and movies that showcase mining as it used to be.

Starting around 1911, it was John Scott Haldane, considered the father of oxygen therapy, who proposed that canaries be used as an early detector of poisonous gases for miners. Miners often enjoyed the canary’s whistling and took comfort in knowing the bird was there to help keep them alive. Around the 1980s and 1990s, canaries were gradually phased out in favor of automated gas-detection methods incorporated into mining.

If you are wondering why use canaries rather than, say, mice, it was Haldane’s research that emphasized the anatomical advantages of using a canary as a detector.

For flight purposes, the canary has extra air sacs and draws in air on both the inhale and the exhale. This abundance of air sampling, along with the bird’s likely rapid deterioration in the presence of poisonous gas, made canaries nearly ideal for the task. They are also lightweight and relatively small, making them easy to carry into the mines. They didn’t require much care, were relatively inexpensive, and could readily be seen in the semi-darkness of the mines. At a glance, you can pretty much tell whether the canary is alive or not.

Today, we often use the analogy of having a canary in a cage to suggest that it is important to have an early warning whenever we might be in a potentially dangerous situation.

This doesn’t literally mean that you have to use a canary, and instead implies that something should be put in place to act as a detector.

The detector will hopefully prevent a calamity or at least forewarn when something untoward might soon occur.

Use of Canary Analysis In The Computer Field

In the computer field, you might already know about the use of so-called canary analysis.

This is a technique of trying to reduce the risks associated with moving something from a test environment into a live production environment.

We’ve all had code that we updated and pushed into production, only to find out that, oops, there were bugs that hadn’t been caught during testing, or that the new code introduced conflicts or other difficulties into the production environment. In theory, testing should have caught those bugs beforehand and also determined whether the new code is compatible with the production environment. But the world is not a perfect place, and in spite of even very exhaustive testing and preparation, it is still possible to have problems once an update has gone into live use.

The normal approach to canary analysis is to parcel out a small slice of your production users (say, 1%) and route them to the changed system, which becomes the canary. A baseline instance running the current code on the same setup is deployed alongside it, while the existing production instance remains as is. You then collect and compare various performance metrics between the baseline and the canary. If the canary seems to be OK, you can proceed with the full roll-out into production. It’s akin to classic A/B testing.
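
As a rough illustration, here is a minimal Python sketch of the traffic split and the canary-versus-baseline comparison; the metric names, thresholds, and handler functions are hypothetical and not tied to any particular tool.

```python
import random

CANARY_FRACTION = 0.01  # roughly 1% of traffic exercises the new build

def route(request, canary_handler, prod_handler):
    """Send a small random slice of traffic to the canary deployment."""
    if random.random() < CANARY_FRACTION:
        return canary_handler(request)
    return prod_handler(request)

def canary_looks_healthy(baseline_metrics, canary_metrics, tolerance=0.10):
    """Pass only if every canary metric stays within tolerance of baseline."""
    for name, base_value in baseline_metrics.items():
        if abs(canary_metrics[name] - base_value) > tolerance * abs(base_value):
            return False  # e.g., latency or error rate regressed too far
    return True

# Compare the canary against a baseline running the current code on the
# same setup, so environment and warm-up effects do not skew the verdict.
baseline = {"p99_latency_ms": 180.0, "error_rate": 0.002}
canary = {"p99_latency_ms": 195.0, "error_rate": 0.002}
print(canary_looks_healthy(baseline, canary))  # True: within the 10% band
print(route("req-1", lambda r: "canary", lambda r: "prod"))  # usually "prod"
```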

I realize you might debate whether this is truly similar to the notion of the canary in the coal mine.

Here, this canary is not going to “die” per se.

The canary in this case presumably reveals issues or other facets when the performance benchmarks are compared. I know you might complain that the canary analogy is only loosely applicable, but hey, it is vivid imagery and kind of handy to borrow the canary tale as the concept underlying any early-detection mechanism.

There are various automated canary analysis systems in the computer field.

Perhaps the most notable and popularized is Kayenta, an open-source automated canary analysis system developed by Google and Netflix. The concept is to be able to release software changes at what is considered “high velocity,” meaning that you push changes into production more or less continuously.

This is the agile way of doing systems.

In the olden days, we’d bunch together tons of changes and try a big-bang approach to placing them into production.

Nowadays, it is more on-the-fly: get new changes into production ASAP. This, though, also introduces the potential for lots of bad code getting into production, given the pressures to test quickly and get things out the door, and given the difficulty of knowing how a change will really interact with the myriad other elements of the production system. When I say bad code, it isn’t necessarily that the code itself has bugs; it could be that the code introduces a new conflict with other existing aspects of the production system. Thus the value of using automated canary analysis to try to detect and prevent unintentional issues from emerging in the production environment.

AI Autonomous Cars And Canary Analysis

What does this discussion about canaries have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are leveraging the canary notion but for a slightly different angle on how it can be applied to technology and AI.

AI self-driving cars are increasingly becoming complex machinery.

A slew of processors and a large body of AI software run on-board the vehicle. For a Level 5 self-driving car, the level at which the self-driving car is supposed to be able to drive itself without any human intervention, the human occupants are totally dependent upon the AI to drive the car. For levels less than 5, even though a human driver is required, and presumably responsible for the self-driving car’s actions, the human driver is still quite dependent upon the AI. If the AI falters, it might hand over the car controls at the worst of times, and the human driver might not be able to take corrective action in time.

Every time that you get into a self-driving car, you’ll need to ask yourself one question – do you trust the AI of that self-driving car?

Right now, most polls show that people are dubious about the trustworthiness of AI self-driving cars.

Once self-driving cars are prevalent, people will daily be putting their lives into the hands of the AI on-board the car. You might assume that the AI can generally drive the self-driving car, but what about rare instances, such as being on a mountain road at night, with the road wet from rain, when a deer suddenly appears in front of the self-driving car? Just because the AI can handle the general aspects of driving does not necessarily guarantee that it can handle more obscure use cases.

Many of the existing AI systems being developed for self-driving cars tend to focus on trying to catch issues at the time they arise.

That’s important, but it can still leave the self-driving car in a situation that is dire or untoward. You’d rather catch beforehand that something is amiss or could soon go amiss.

For airplanes, it is standard practice to do a preflight check.

During the preflight check, there is an inspection of the exterior of the airplane to make sure it appears to be airworthy. There is also an interior check. The controls are checked. The wings are checked. Etc. This is sometimes done in a somewhat cursory manner if the plane has been operational and is simply getting ready to continue a journey it had already started. In other cases, the preflight check is quite in-depth, either because the plane has not been in use lately or because it has accumulated lots of flying time and is periodically examined for subtler cracks and clues of wear. The airlines refer to various levels of checks, ranging from level A to level D.

Our approach is to undertake what we consider an automated “canary” precheck of the AI self-driving car.

It is an added system layer that tries to analyze and exercise the AI self-driving car to ensure, as best possible, that the AI self-driving car is ready for use. Similar to an airline preflight check, the canary can do a full-length, deep analysis, or it can do a lighter partial analysis. The fuller version takes longer to do. The human owner who wants to use the self-driving car can choose which magnitude of pre-check to undertake. We call this added feature PFCC (Pre-Flight Canary Check).

This is essentially a self-diagnostic to try and validate and verify that the AI system and the self-driving car are seemingly ready for travel.

I say seemingly because there is only so much that can be pre-checked. As with an airplane, in spite of whatever pre-check is undertaken, there is still the chance that once underway something will emerge that disrupts the journey. Some aspects might have been detectable via the pre-check, while others might arise only later, during the journey, and no pre-check would have detected them.

Let’s consider the major stages of an AI self-driving car’s actions and how the pre-check takes place (a code sketch of this pipeline follows the list). There are these core stages:

  • Sensor Data Collection & Analysis
  • Sensor Fusion
  • Virtual World Model Updating
  • AI Action Plans Updating
  • Car Control Commands Issuance
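
To make the flow concrete, here is a hedged Python sketch of how these stages chain together and how a pre-check can exercise the whole chain with canned data; the stage functions mirror the list above, but all names and data shapes are illustrative assumptions, not an actual implementation.

```python
# Illustrative pipeline of the core self-driving stages; each stage is a
# plain function so a pre-check can drive the chain with canned inputs.
def collect_sensor_data(raw_inputs):
    return {"camera": raw_inputs.get("camera"), "radar": raw_inputs.get("radar")}

def fuse_sensors(sensor_data):
    return {"obstacles": [sensor_data["radar"]] if sensor_data["radar"] else []}

def update_world_model(world_model, fused):
    world_model["obstacles"] = fused["obstacles"]
    return world_model

def update_action_plan(world_model):
    return "brake" if world_model["obstacles"] else "cruise"

def issue_car_controls(plan):
    return {"brake": 1.0} if plan == "brake" else {"throttle": 0.3}

def run_pipeline(raw_inputs, world_model):
    fused = fuse_sensors(collect_sensor_data(raw_inputs))
    model = update_world_model(world_model, fused)
    return issue_car_controls(update_action_plan(model))

# A pre-check feeds canned inputs and compares against an expected output.
assert run_pipeline({"camera": "frame0", "radar": "object_ahead"}, {}) == {"brake": 1.0}
```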

Look at my article about a framework for AI self-driving cars as helpful background: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

Applying Canary Analysis To Self-Driving Cars

First, the PFCC tries to test each of the sensory devices and detect whether they are in working order.

Some of the sensory devices have their own built-in self-check, which the PFCC invokes, determining what the outcome is. Other sensory devices might not have anything already in place, so the PFCC needs specialized components to exercise those devices. In addition to checking device by device, it is also useful to check multiple devices being used at the same time. A one-at-a-time check might not reveal that a conflict or other issue arises when more than one device is operating at once.
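
Here is a minimal Python sketch of that device-level portion of the check; the device names, the self_test interface, and the concurrent probe are all illustrative assumptions.

```python
# Hypothetical device-level portion of a PFCC-style pre-check.
class SensorDevice:
    def __init__(self, name, has_builtin_self_test):
        self.name = name
        self.has_builtin_self_test = has_builtin_self_test

    def self_test(self):
        """Stand-in for a vendor-provided built-in self-check."""
        return True

def external_exercise(device):
    """Stand-in probe for devices lacking a built-in self-check."""
    return True

def run_concurrent_probe(devices):
    """Stand-in for activating all devices at once to surface conflicts."""
    return True

def check_device(device):
    if device.has_builtin_self_test:
        return device.self_test()      # invoke the built-in check
    return external_exercise(device)   # otherwise exercise it ourselves

def check_all_devices(devices):
    # Device-by-device first, then all together, since devices that pass
    # individually may still conflict when operated concurrently.
    return all(check_device(d) for d in devices) and run_concurrent_probe(devices)

devices = [SensorDevice("lidar", True), SensorDevice("front_radar", False)]
print(check_all_devices(devices))  # True when everything passes
```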

Next, the PFCC tries to check the sensor fusion modules.

This involves feeding pre-canned sensor data as though the self-driving car were already underway. It is a simulated set of data used to see whether the sensor fusion is working properly. Known results are compared to what the sensor fusion currently says about the data. Some non-deterministic aspects can arise, so comparing the latest results to the expected results needs to be done with a certain amount of latitude. It is not necessarily a simplistic exact-match comparison.
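
A hedged sketch of that replay-and-compare step follows; the toy fusion function, tolerance value, and data fields are assumptions for illustration.

```python
def within_latitude(expected, actual, rel_tol=0.05):
    """Fusion output can vary slightly run to run, so compare loosely."""
    return abs(expected - actual) <= rel_tol * abs(expected)

def precheck_fusion(fuse_fn, canned_cases):
    """Replay canned sensor inputs and check outputs within a band."""
    for sensor_inputs, expected in canned_cases:
        actual = fuse_fn(sensor_inputs)
        for key, expected_value in expected.items():
            if not within_latitude(expected_value, actual[key]):
                return False  # outside latitude: flag the fusion module
    return True

# Example canned case: a fused distance estimate for an object ahead.
def toy_fusion(inputs):
    return {"distance_m": (inputs["radar_m"] + inputs["lidar_m"]) / 2}

cases = [({"radar_m": 50.2, "lidar_m": 49.8}, {"distance_m": 50.0})]
print(precheck_fusion(toy_fusion, cases))  # True
```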

For the virtual world model, the pre-check is similar to the sensor fusion check: a pre-canned virtual world model is momentarily established, and then updates are pumped into the modules responsible for updating the virtual world model. The results are compared to a pre-canned expected set of results.

The AI action plan modules are more challenging to test.

They have the greatest variability in terms of what the expected outputs will be for any given set of inputs. So the PFCC provides a range of canned paths and goals in order to see whether the AI action-plan updates seem to be reasonable. This is a reasonableness form of testing.
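
Here is what reasonableness testing could look like as a Python sketch: rather than exact expected outputs, broad sanity predicates are applied to a proposed plan. The predicates, plan fields, and limit values are invented for illustration.

```python
def plan_is_reasonable(plan, road_speed_limit_kph):
    """Check broad sanity predicates instead of exact expected outputs."""
    checks = [
        0 <= plan["target_speed_kph"] <= road_speed_limit_kph,  # no speeding
        abs(plan["steering_deg"]) <= 35,                        # plausible steering
        plan["path_length_m"] > 0,                              # a real path exists
    ]
    return all(checks)

canned_plan = {"target_speed_kph": 45, "steering_deg": 3, "path_length_m": 120}
print(plan_is_reasonable(canned_plan, road_speed_limit_kph=50))  # True
```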

In terms of the car control commands, those are more straightforward for the canary check. Based on the AI action plan directives, the car control commands are relatively predictable. This does, though, require pre-seeding the modules with the status of the car so that the car control commands fall within the allowed limits expected.
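
As a sketch, the limit check might amount to validating each issued command against an allowed envelope; the limit values below are made up for illustration.

```python
# Hypothetical allowed envelopes for issued car control commands.
CONTROL_LIMITS = {
    "throttle": (0.0, 1.0),
    "brake": (0.0, 1.0),
    "steering_deg": (-35.0, 35.0),
}

def commands_within_limits(commands):
    for name, value in commands.items():
        low, high = CONTROL_LIMITS[name]
        if not (low <= value <= high):
            return False  # a directive fell outside its allowed envelope
    return True

print(commands_within_limits({"throttle": 0.3, "steering_deg": -4.0}))  # True
print(commands_within_limits({"brake": 1.4}))                           # False
```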

When the PFCC has finished, the question arises as to what to do next. If all is well, as best as can be ascertained, this should be conveyed so that the human owner or occupants know that the self-driving car and the AI seem ready to proceed. If all is not well, this raises the question of not only notification but also whether the PFCC should indicate that the AI of the self-driving car is so out-of-whack that the self-driving car should not be permitted to proceed at all.

For some minor aspects, the AI of the self-driving car might already have been developed to handle minor anomalies. Thus, the PFCC can inform the AI that there are now known issues and let the AI proceed accordingly. If the AI itself has issues, the PFCC might need to override the AI system and prevent it from trying to drive when it is not suitable to do so.
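
That decision logic could be boiled down to a three-way verdict, as in this sketch; the severity labels and outcomes are an illustrative simplification.

```python
def handle_precheck_result(issues):
    """Map PFCC findings to a verdict: ready, degraded, or blocked."""
    if not issues:
        return "ready"     # notify occupants the car seems good to go
    if all(issue["severity"] == "minor" for issue in issues):
        # Feed the known minor anomalies to the driving AI so it can compensate.
        return "degraded"
    return "blocked"       # severe findings: override and refuse to drive

print(handle_precheck_result([]))                        # ready
print(handle_precheck_result([{"severity": "minor"}]))   # degraded
print(handle_precheck_result([{"severity": "severe"}]))  # blocked
```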

The PFCC is something that does not remain static.

When the AI of the self-driving car is updated, usually via OTA (Over-The-Air) updates, the PFCC is unlikely to still match the updated AI system, and therefore the PFCC will likely need updates too. The PFCC also taps into the OTA capability so that it can be updated as needed to stay in step with the AI system.
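
One way to keep the two in step is a version-compatibility gate that runs before the pre-check itself; the version fields and update callback below are assumptions.

```python
def ensure_pfcc_matches(ai_version, pfcc_version, fetch_pfcc_update):
    """Refuse to run a stale PFCC; pull a matching build via OTA first."""
    if pfcc_version == ai_version:
        return pfcc_version
    # A mismatched PFCC may test modules that no longer exist or miss new ones.
    return fetch_pfcc_update(ai_version)

print(ensure_pfcc_matches("4.2.0", "4.1.3", lambda version: version))  # 4.2.0
```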

Each type of AI self-driving car, varying by brand and model, has its own set of sensory devices, sensor fusion, virtual world model structures, AI action plan structures and processing, and car control commands. As such, there isn’t a universal PFCC per se. Instead, the PFCC needs to be established for each particular brand and model.

One consideration about a pre-flight canary check involves whether it might produce a false positive, meaning a false alarm.

Suppose the PFCC reports that the LIDAR is not functioning, but it really is able to function properly. What then? The notion is that it is likely safer to err on the side of caution. The human owner or occupant will be notified and might end up taking the self-driving car to the repair shop, only to discover that the PFCC falsely reported an issue. That false alarm can itself be reported, collected via the OTA capability, and assessed as to whether a global change to the PFCC or other changes are needed.

The more worrisome aspect would be a false negative, meaning a missed fault. Let’s suppose the canary could not detect any issues with the forward-facing radar, but there really are issues. This is bad. Of course, as stated earlier, the canary cannot guarantee that it will find all anomalies. In any case, during a journey the AI system is intended to keep a log of anomalies discovered along the way, and the PFCC later uses this log to determine whether any issues arose during the journey that could have been detected earlier.
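
A sketch of that post-journey cross-check: anomalies logged while driving are compared with what the pre-check flagged, surfacing the missed faults. The record fields are illustrative.

```python
def find_missed_faults(journey_log, precheck_findings):
    """Return in-journey anomalies that the pre-check failed to flag."""
    flagged_before = {finding["component"] for finding in precheck_findings}
    return [a for a in journey_log if a["component"] not in flagged_before]

journey_log = [{"component": "front_radar", "issue": "intermittent dropouts"}]
print(find_missed_faults(journey_log, precheck_findings=[]))
# -> the radar issue surfaced in-journey but was not caught at pre-check
```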

Conclusion

Coal miners loved having their canaries.

For AI self-driving cars, right now the notion of having a “canary” that can do a pre-flight check is considered an “edge” problem.

An edge problem is one that sits at the periphery rather than at the core. For the core, most automakers and tech firms are focused on getting an AI self-driving car to properly drive on the streets, navigate traffic, and so on. Yet an extensive and devoted effort at doing a pre-flight check is prudent and ultimately will be valued. Right now, most of the AI self-driving cars on our streets are pampered by the automaker or tech firm, but we’ll eventually have AI self-driving cars being used day-to-day by everyday consumers.

Getting a professional-quality pre-flight check is bound to make them as happy as a cheery chirping bird.

Copyright 2019 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]