Coopetition and AI Self-Driving Cars

By Lance Eliot, the AI Trends Insider

Competitors usually fight tooth and nail for every inch of ground they can gain over one another. It’s a dog-eat-dog world, and the more of an advantage you can gain over your competition, the better off you will be. If you can even somehow drive your competition out of business, well, as long as it happened legally, there’s more of the pie for you.

Given this rather obvious and strident desire to beat your competition, it might seem like heresy to suggest that you might at times consider backing down from being at each other’s throats and instead, dare I say, possibly cooperate with your competition. You might not be aware that the US Postal Service (USPS) has cooperative arrangements with FedEx and UPS – on the surface it seems wild that these firms, all directly competing as shippers, would consider working together rather than solely battling each other.

Here’s another example, Wintel. For those of you in the tech arena, you know well that Microsoft and Intel have seemingly forever cooperated with each other. The Windows and Intel mash-up, Wintel, has been pretty good for each of them respectively and collectively. When Intel’s chips became more powerful, it allowed Microsoft to speed up Windows and to add more, and more demanding, features. As people used Windows and wanted faster speed and greater capabilities, it sparked Intel to boost their chips, knowing there was a place to sell them and more money to be made by doing so. You could say it is a synergistic relationship between those two firms that in combination has aided them both.

Now, I realize you might object somewhat and insist that Microsoft and Intel are not competitors per se, and thus the suggestion that this was two competitors that found a means to cooperate seems either an unfair or a false characterization. You’d be somewhat on the mark to have noticed that they don’t seem to be direct competitors, though they could be if they wanted to (Microsoft could easily get into the chip business, Intel could easily get into the OS business, and they’ve both dabbled in each other’s pond from time to time). Certainly, though, it’s not as strong an example of straight-ahead competitors cooperating as the USPS, FedEx, and UPS arrangement.

There’s a word used to depict the mash-up of competition and cooperation, namely coopetition.

The word coopetition grew into prominence in the 1990s. Some people instantly react to the notion of being both a competitor and a cooperator as though it’s a crazy idea. What, give away my secrets to my competition, are you nuts? Indeed, trying to pull off a coopetition can be tricky, as I’ll describe further herein. Please also be aware that occasionally you’ll see the use of the more informal word "frenemy" to depict a similar notion (another kind of mash-up, this one between the word "friend" and the word "enemy").

There are those that instantly recoil in horror at the idea of coopetition, and their knee-jerk reaction is that it must be utterly illegal. They assume that there must be laws that prevent such a thing. Generally, depending upon how the coopetition is arranged, there’s nothing illegal about it per se. A coopetition can, though, veer in a direction that raises legal concerns, and thus the participants need to be especially careful about what they do, how they do it, and what impact it has on the marketplace.

It’s not particularly the potential for legal difficulties that tends to keep coopetition from happening. By and large, structuring a coopetition arrangement, say by putting together a consortium, can be done with relatively little effort and cost. The real question and the bigger difficulty is whether the competing firms are able to find middle ground that allows them to enter into a coopetition agreement.

Think about today’s major high-tech firms.

Most of them are run by strong CEOs or founders that relish being bold and love smashing their competition. They often drive their firm to have a kind of intense "hatred" for the competition and want their firm to crush it. Within a firm, a cultural milieu often forms that their firm is far superior and the competition is unquestionably inferior. Your firm is a winner, the competing firm is a loser. That being said, they don’t want you to let down your guard: though the other firm is an alleged loser, it can pop up at any moment and be on the attack, so you need to stay alert. To some degree, there’s a begrudging respect for the competition, paradoxically mixed with disdain for it.

These strong personalities will generally tend to keep the competitive juices going and not permit the possibility of a coopetition option. On the other hand, even these strong personalities can be motivated to consider the coopetition approach if the circumstances or the deal look attractive enough. With a desire to get bigger and stronger, if it seems like a coopetition could get you there, even the most egocentric of leaders is willing to give the matter some thought. Of course, the deal has to be incredibly compelling, but at least the idea is worth considering and not out of hand to float.

What could be compelling?

Here’s a number for you: $7 trillion.

Allow me to explain.

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. We do so because it’s going to be a gargantuan market, and because it’s exciting to be creating something that’s on par with a moonshot.

See my article about how making AI self-driving cars is like a moonshot: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

See my article that provides a framework about AI self-driving cars: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/

Total AI Self-Driving Car Market Estimated at $7 Trillion

Suppose you were the head of a car maker, or the head of a high-tech firm that makes, or wants to make, tech for cars, and I told you that the potential market for AI self-driving cars is estimated at $7 trillion by the year 2050 (as predicted in Fortune magazine, see: http://fortune.com/2017/06/03/autonomous-vehicles-market/).

That’s right, I said $7 trillion. It’s a lot of money. It’s a boatload, and more, of money. The odds are that you would want to do whatever you could to get a piece of that action. Even a small slice, let’s say just a few percentage points, would make your firm huge.

Furthermore, consider things from the other side of that coin. Suppose you don’t get a piece of that pie. Whatever else you are doing is likely to become crumbs. If you are making conventional cars, the odds are that few will want to buy them anymore. Some AI self-driving car pundits are even suggesting that conventional cars will be outlawed by 2050. The logic is that if you have conventional cars being driven by humans on our roadways in the 2050s, it will muck up the potential nirvana of having all AI self-driving cars that presumably will be able to work in unison and thus get us to the vaunted zero fatalities goal.

For my article that debunks the zero fatalities goal, see: https://aitrends.com/selfdrivingcars/self-driving-cars-zero-fatalities-zero-chance/

If you are a high-tech firm and you’ve not gotten into the AI self-driving car realm, your fear is that you’ll also miss out on the $7 trillion prize. Suppose that your high-tech competitor got into AI self-driving cars early on and became the standard, kind of like the fight between VHS and Betamax. Maybe it’s wisest to get into things early and become the standard.

Or, alternatively, maybe the early arrivers will waste a lot of money trying to figure out what to do, so instead of falling into that trap, you wait on the periphery, avoiding the drain of resources, and then jump in once the others have flailed around. Many in Silicon Valley seem to believe that you have to be the first into a new realm. This is actually a misconception, since many of the most prominent firms in many areas weren’t there first; they came along somewhat after others had poked around and tried things, and on the heels of those true first attempts they stepped in and became household names.

Let’s return to the notion of coopetition. I assume we can agree that generally the auto makers aren’t very likely to want to be cooperative with each other and usually consider themselves head-on competitors. I realize there have been exceptions, such as the deal that PSA Peugeot Citroen and Toyota made to produce the Peugeot 107 and the Toyota Aygo, but such arrangements are somewhat sparse. Likewise, the high-tech firms tend to strive towards being competitive with each other, rather than cooperative. Again, there are exceptions, such as a willingness to serve on groups that are putting together standards and protocols for various architectural and interface aspects (think of the World Wide Web Consortium, W3C, as an example).

We’ve certainly already seen that auto makers and high-tech firms are willing to team-up for the AI self-driving cars realm.

In that sense, it’s kind of akin to the Wintel type of arrangement. I don’t think we’d infer they are true coopetition arrangements since they weren’t especially competing to begin with. Google’s Waymo has teamed up with Chrysler to outfit the Pacifica minivans with AI self-driving car capabilities. Those two firms weren’t especially competitors. I realize you could assert that Google could get into the car business and be an auto maker if it wanted to, which is quite the case, and it could buy its way in or even start something from scratch. You could also assert that Chrysler is doing its own work on high-tech aspects for AI self-driving cars and in that manner might be competing with Waymo. It just doesn’t quite add up to them being true competitors per se, at least not right now.

So, let’s put to the side the myriad of auto maker and high-tech firm cooperatives underway and say that we aren’t going to label those as coopetitions. Again, I realize you can argue the point and might say that even if they aren’t competitors today, they could become competitors a decade from now. Yes, I get that. Just go along with me on this for now and we can keep in mind the future possibilities too.

Consider these thought provoking questions:

• Could we get the auto makers to come together into a coopetition arrangement to establish the basis for AI self-driving cars?
• Could we get the high-tech firms to come together into a coopetition arrangement to establish the basis for AI self-driving cars?
• Could we get the auto makers and tech firms that are already in bed with each other to altogether come together to enter into a coopetition arrangement?

I get asked these questions during a number of my industry talks. There are some that believe the goal of achieving AI self-driving cars is so crucial for society, so important for the benefit of mankind, that it would be best if all of these firms could come together, shake hands, and forge the basis for AI self-driving cars.

For my article about idealists in AI self-driving cars, see: https://aitrends.com/selfdrivingcars/idealism-and-ai-self-driving-cars/

Why would these firms be willing to do this? Shouldn’t they instead be wanting to "win" and become the standard for AI self-driving cars? The tempting $7 trillion is a pretty alluring pot of gold. It seems premature to already throw in the towel and allow other firms to grab a piece of the pie. Maybe your efforts will knock them out of the picture. You’ll have the whole kit and caboodle yourself.

Those proposing a coopetition notion for AI self-driving cars are worried that the rather "isolated" attempts by each of the auto makers and the tech firms are going to either lead to failure in terms of true AI self-driving cars, or stretch out for a much longer time than needed. Suppose you could have true AI self-driving cars by the year 2030 if you did a coopetition deal, versus not until 2050 or 2060 otherwise. This means that for perhaps 20 or 30 years there could have been true AI self-driving cars, benefiting us all, and yet we let it slip away due to being "selfish" and allowing the AI self-driving car makers to duke it out.

For selfishness and AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/selfishness-self-driving-cars-ai-greed-good/

You’ve likely seen science fiction movies about a giant meteor that is going to strike Earth and destroy all that we have, or an alien force from Mars that is heading to Earth and likely to enslave us all. In those cases, there has been a larger foe to contend with. As such, it got all of the countries of the world to set aside their differences and band together to try and defeat the larger foe. I’m not saying that would happen in real life, and perhaps instead everyone would tear each other apart, but anyway, let’s go with the happy-face scenario and say that when faced with tough times, we could get together those that otherwise despise each other or see each other as enemies, and they would become cooperative.

That’s what some want to have happen in the AI self-driving cars realm. The bigger foe is the number of annual fatalities due to car accidents. The bigger foe also includes the lack of democratization of mobility, which AI self-driving cars are hoped to remedy by bringing forth greater democratization. The bigger foe is the need to increase mobility for those that aren’t able to be mobile. In other words, given the basket of benefits of AI self-driving cars, and the basket of woes they will overturn, the belief is that the auto makers and tech firms should band together into a coopetition.

Zero-Sum Versus Coopetition in Game Theory

Game theory comes into play in coopetition.

If you believe in a zero-sum game, whereby the pie is just one size and those that get a bigger piece of the pie are doing so at the loss of others that will get a smaller piece of the pie, the win-lose perspective makes it hard to consider participating in a coopetition. On the other hand, if it could be a win-win possibility, whereby the pie can be made bigger, and thus the participants each get sizable pieces of pie, it makes being in the coopetition seemingly more sensible.
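To make the contrast concrete, here is a toy illustration in a few lines of Python. Only the $7 trillion headline figure comes from the article; the number of rival teams, the split, and the amount by which cooperation might grow or accelerate the pie are entirely hypothetical.

```python
# Toy illustration of zero-sum versus win-win, using made-up numbers.
FIRMS = 4                      # hypothetical number of rival teams

fixed_pie = 7.0                # trillions of dollars, a fixed-size pie (zero-sum view)
zero_sum_share = fixed_pie / FIRMS

grown_pie = 9.0                # hypothetical: suppose coopetition enlarges the pie
win_win_share = grown_pie / FIRMS

print(f"Zero-sum share per firm: ${zero_sum_share:.2f} trillion")
print(f"Win-win share per firm:  ${win_win_share:.2f} trillion")
```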

How would things fare in the AI self-driving cars realm? Suppose that auto maker X has teamed up with high-tech firm Y, forming the XY team, and they are frantically trying to be the first with a true AI self-driving car. Meanwhile, we’ve got auto maker Q and its high-tech partner firm Z, and the QZ team is also frantically trying to put together a true AI self-driving car.

Would XY be willing to get into a coopetition with QZ, and would QZ want to get into a coopetition with XY?

If XY believes they need no help and will be able to achieve an AI self-driving car and do so on a timely basis and possibly beat the competition, it seems unlikely they would perceive value in doing the coopetition. You can say the same about QZ, namely, if they think they are going to be the winner, there’s little incentive to get into the coopetition.

Some would argue that they could potentially shave the costs of trying to achieve an AI self-driving car by joining together. Pool resources. Do R&D together. They could possibly do some kind of technology transfer amongst each other, with one having gotten more advanced in some area than the other, and thus they trade with each other on the things each has gotten farthest along on. There’s a steep learning curve on the latest in AI, and so XY and QZ could perhaps boost each other up that learning curve. It seems like the benefits of being in a coopetition are convincing.

And, it is already the case that these auto makers and tech firms are eyeing each other. They each are intently desirous of knowing how far along the other is. They are hiring away key people from each other. Some would even say there is industrial espionage underway. Plus, in some cases, there are AI self-driving car developers that appear to have stepped over the line and stolen secrets about AI self-driving cars.

See my article about the stealing of secrets of AI self-driving cars: https://aitrends.com/selfdrivingcars/stealing-secrets-about-ai-self-driving-cars/

This coopetition is not so easy to arrange, let alone to even consider. Suppose you are the CEO of auto maker X, which has already forged a relationship with high-tech firm Y. The marketplace perceives that you are doing the right thing and moving forward with AI self-driving cars. This is a crucial perception for any auto maker, since we’ve already seen that auto makers will get drubbed by the marketplace, such as their shares dropping, if they don’t seem committed to achieving an AI self-driving car. It’s become a key determiner for the auto maker and its leadership.

The marketplace figures that your firm, you the auto maker, will be able to achieve AI self-driving cars and that consumers will flock to your cars. Consumers will be delighted that you have AI self-driving cars. The other auto makers will fall far behind in terms of sales as everyone switches over to you. In light of that expectation, it would be somewhat risky to come out and say that you’ve decided to do a coopetition with your major competitors.

I’d bet that there would be a stock drop as the marketplace reacted to this approach. If all the auto makers were in the coopetition, I suppose you could say that the money couldn’t flow anywhere else anyway.

On the other hand, if only some of the auto makers were in the coopetition, it would force the marketplace into making a bet. You might put your money into the auto makers that are in the coopetition, under the belief they will succeed first, or you might put your money into the other auto makers that are outside the coopetition, under the belief they will win and win bigger because they aren’t having to share the pie.

Speaking of which, what would be the arrangement for the coopetition? Would all of the members participating have equal use of the AI self-driving car technologies developed? Would they be in the coopetition forever or only until a true AI self-driving car was achieved, or until some other time or ending state? Could they take whatever they got from the coopetition and use it in whatever they wanted, or would there be restrictions? And so on.

I’d bet that the coopetition would have a lot of tension. There is always bound to be professional differences of opinion. A member of the coopetition might believe that LIDAR is essential to achieving a true AI self-driving car, while some other member says they don’t believe in LIDAR and see it as a false hope and a waste of time. How would the coopetition deal with this?

For other aspects about differences in opinions about AI self-driving car designs, see my article: https://aitrends.com/selfdrivingcars/crossing-the-rubicon-and-ai-self-driving-cars/

Also, see my article about egocentric designs: https://aitrends.com/selfdrivingcars/egocentric-design-and-ai-self-driving-cars/

Normally, a coopetition is likely to be formulated when the competitors are willing to find a common means to contend with something that is relatively non-strategic to their core business. If you believe that AI self-driving cars are the future of the automobile, it’s hard to see that it wouldn’t be considered strategic to the core business. Indeed, even though today we don’t necessarily think of AI self-driving cars as a strategic core per se, because it’s still so early in the life cycle, anyone with a bit of vision can see that soon enough it will be.

If the auto makers did get together in a coopetition, and they all ended up with the same AI self-driving car technology, how else would they differentiate themselves in the marketplace? I realize you can say that even today the auto makers are pretty much the same in the sense that they offer a car that has an engine and has a transmission, etc. The "technology," you might say, is about the same, and yet they do seem to differentiate themselves from each other. Often, the differentiation is more about the style and looks of the car than the tech side of things.

For how auto makers might be marketing AI self-driving cars in the future, see my article: https://aitrends.com/selfdrivingcars/marketing-self-driving-cars-new-paradigms/

For those that believe that the AI part of the self-driving car will end up being the same across cars of the future, and won’t be a differentiator to the marketplace, this admittedly makes the case for banding into a coopetition on the high-tech stuff. If the auto makers believe that the AI will be a commodity item, why not get into a coopetition, figure this arcane high-tech AI stuff out, and be done with it? No sense in fighting over something that is going to be generic across the board anyway.

At this time, it appears that the auto makers believe they can reach a higher value by creating their own AI self-driving car, doing so in conjunction with a particular high-tech firm that they’ve chosen, rather than doing so via a coopetition. Some have wondered if we’ll see a high-tech firm that opts to build its own car, maybe from scratch, but so far that doesn’t seem to be the case (in spite of the rumors about Apple, for example). There are some firms that are developing both the car and the high-tech themselves, such as Tesla, and see no need to band with another firm, as yet.

Right now, the forces appear to be swayed toward not doing a coopetition. Things could change. Suppose that no one is able to achieve a true AI self-driving car? It could be that the pressures become large enough (the bigger foe) that the auto makers and tech firms consider the coopetition notion. Or, maybe the government decides to step in and forces some kind of coopetition, doing so under the belief that it is a societal matter and regulatory guidance is needed to get us to true AI self-driving cars. Or, maybe indeed aliens from Mars start to head here and we realize that if we just had AI self-driving cars we’d be able to fend them off.

For my piece about conspiracy theories and AI self-driving cars, see: https://aitrends.com/selfdrivingcars/conspiracy-theories-about-ai-self-driving-cars/

There’s the old line about if you can’t beat them, join them. For the moment, it’s assumed that the ability to beat them is greater than the join-them alternative. The year 2050 is still off in the future and anything might happen on the path to that $7 trillion.

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.

Ensemble Machine Learning for AI Self-Driving Cars

By Lance Eliot, the AI Trends Insider

How do you learn something?

That’s the same question that we need to ask when trying to achieve Machine Learning (ML). In what way can we undertake "learning" for a computer and seek to "teach" the system to do things of an intelligent nature? That’s a holy grail for those in AI that are aiming to avoid having to program their way into intelligent behavior. Instead, the notion is to somehow get a computer to learn what to do, without needing to explicitly write out every step or knowledge aspect required.

Allow me a moment to share with you a story about the nature of learning.

Earlier in my career, I started out as a professor and was excited to teach classes for both undergraduate and graduate students. Those first few lectures were my chance to aid those students in learning about computer science and AI. Before each lecture I spent a lot of time preparing my lecture notes and was ready to fill the classroom whiteboard with all the key principles they’d need to know. Sure enough, I’d stride into the classroom and start writing on the board, and kept doing so until the bell rang to signal that the class session was finished.

After doing this for about a week or two, a student came to my office hours and asked if there was a textbook they could use to study from. I was taken aback since I had purposely not chosen a textbook in order to save the students money. I figured that my copious notes on the board would be better than some stodgy textbook and spared them from having to spend a fortune on costly books. The student explained that though they welcomed my approach, they were the type of person that found it easier to learn by reading a book. Trying not to offend me, the student gingerly inquired as to whether my lecture notes could be augmented by a textbook.

I considered this suggestion and sure enough found a textbook that I thought would be pretty good to recommend, and at the next session of the class mentioned it to the students, indicating that it was optional and not mandatory for the class.

While walking across the campus after a class session, another student came up to me and asked if there were any videos of my lectures. I was suspicious that the student wanted to skip coming to lecture and figured they could just watch a video instead, but this student sincerely convinced me that she found that watching a video allowed her to start and stop the lecture while trying to study the material after class sessions. She said that my fast pace during class didn’t allow time for her to really soak in the points and that by having a video she would be able to do so at a measured pace on her own time.

I considered this suggestion and provided to the class links to some videos that were pertinent to the lectures that I was giving.

Yet another student came to see me about another facet of my classes. For the undergrad lectures, I spoke the entire time and didn’t allow for any classroom discussion or interaction. This seemed sensible because the classes were large lecture halls that had hundreds of students attending. I figured it would not be feasible to carry on a Socratic dialogue similar to what I was doing in the graduate level courses, where I had maybe 15-20 students per class. I had even been told by some of the senior faculty that trying to engage undergrads in discussion was a waste of time anyway since those newbie students were neophytes and it would be ineffective to allow any kind of Q&A with them.

Well, an undergrad student came to see me and asked if I was ever going to allow Q&A during my lectures. When I started to discuss this with the student, I inquired as to what kinds of questions he was thinking of asking. It turns out that we had a very vigorous back-and-forth on some meaty aspects of AI, and it made me realize that there were perhaps students in the lecture hall that could indeed engage in a hearty dialogue during class. At my next lecture, I opted to stop every twenty minutes and gauge the reaction from the students and see if I could get a brief and useful interaction going with them. It worked, and I noticed that many of the students became much more interested in the lectures by this added feature of allowing for Q&A (even for so-called "lowly" undergraduate students, which was how my fellow faculty seemed to think of them).

Why do I tell you this story about my initial days of being a professor?

I found out pretty quickly that using only one method or approach to learning is not necessarily very wise. My initial impetus to do fast paced all-spoken lectures was perhaps sufficient for some students, but not for all. Furthermore, even the students that were OK with that narrow singular approach were likely to tap into other means of learning if I was able to provide it. By augmenting my lectures with videos, with textbooks, and by allowing for in-classroom discussion, I was providing a multitude of means to learn.

You’ll be happy to know that I learned that learning is best done via offering multiple ways to learn. Allow the learner to select which approach best fits to them. When I say this, also keep in mind that the situation might determine which mode is best at that time. In other words, don’t assume that someone that prefers learning via in-person lecture is always going to find that to be the best learning method for them. They might switch to a preference for say video or textbook, depending upon the circumstance.

And, don’t assume that each learner will learn via only one method. Student A might find that using lectures and the textbook is their best fit. Student B might find lectures to be unsuitable for learning and prefer dialogue and videos. Each learner will have their own one-or-more learning approaches that work best for them, and this varies by the nature of the topic being learned.

I kept all of this in mind for the rest of my professorial days and always tried to provide multiple learning methods to the students, so they could choose the best fit for them.

Ensemble Learning Employs Multiple Methods, Approaches

A phrase sometimes used to refer to this notion of multiple learning methods is ensemble learning. When you consider the word "ensemble" you tend to think of multiples of something, such as multiple musicians in an orchestra or multiple actors in a play. They each have their own role, and yet they also combine together to create a whole.

Ensemble machine learning is the same kind of concept. Rather than using only one method or approach to “teach” a computer to do something, we might use multiple methods or approaches. These multiple methods or approaches are intended to somehow ultimately work together so as to form a group. In other words, we don’t want the learning methods to be so disparate that they don’t end-up working together. It’s like musicians that are supposed to play the same song together. The hope is that the multiple learning methods are going to lead to a greater chance at having the learner learn, which in this case is the computer system as the learner.

At the Cybernetic AI Self-Driving Car Institute, we are using ensemble machine learning as part of our approach to developing AI for self-driving cars.

Allow me to further elaborate.

Suppose I was trying to get a computer system to learn some aspect of how to drive a car. One approach might be to use artificial neural networks (ANN). This is very popular and a relatively standardized way to “teach” the computer about certain driving task aspects. That’s just one approach though. I might also try to use genetic algorithms (GA). I might also use support vector machines (SVM). And so on. These could be done in an ensemble manner, meaning that I’m trying to “teach” the same thing but using multiple learning techniques to do so.
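As a rough sketch of that idea, here is how several distinct learners might be trained on the same task using scikit-learn (mentioned later in this piece). The data is synthetic and merely stands in for real street-sign features, and since scikit-learn has no stock genetic algorithm learner, a decision tree stands in as the third learner.

```python
# Sketch: train several distinct learners on the same task and compare them.
# Synthetic data stands in for real sensor/image features.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier   # stands in for an ANN
from sklearn.svm import SVC                        # support vector machine
from sklearn.tree import DecisionTreeClassifier    # stand-in third learner

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

learners = {
    "ANN": MLPClassifier(max_iter=500, random_state=0),
    "SVM": SVC(random_state=0),
    "Tree": DecisionTreeClassifier(random_state=0),
}

for name, model in learners.items():
    model.fit(X_train, y_train)
    print(name, "test accuracy:", round(model.score(X_test, y_test), 3))
```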

For the use of genetic algorithms in AI self-driving cars see my article: https://aitrends.com/selfdrivingcars/genetic-algorithms-self-driving-cars-darwinism-optimization/

For my article about support vector machines in AI self-driving cars see: https://aitrends.com/selfdrivingcars/support-vector-machines-svm-ai-self-driving-cars/

For my articles about machine learning for AI self-driving cars see:

Benchmarks and machine learning: https://aitrends.com/ai-insider/machine-learning-benchmarks-and-ai-self-driving-cars/

Federated machine learning: https://aitrends.com/selfdrivingcars/federated-machine-learning-for-ai-self-driving-cars/

Explanation-based machine learning: https://aitrends.com/selfdrivingcars/explanation-ai-machine-learning-for-ai-self-driving-cars/

Deep reinforcement learning: https://aitrends.com/ai-insider/human-aided-training-deep-reinforcement-learning-ai-self-driving-cars/

Deep compression pruning in machine learning: https://aitrends.com/selfdrivingcars/deep-compression-pruning-machine-learning-ai-self-driving-cars-using-convolutional-neural-networks-cnn/

Simulations and machine learning: https://aitrends.com/selfdrivingcars/simulations-self-driving-cars-machine-learning-without-fear/

Training data and machine learning: https://aitrends.com/machine-learning/machine-learning-data-self-driving-cars-shared-proprietary/

Now, you don’t normally just toss together an ensemble. When you put together a musical band, you would probably be astute to pick musicians that have particular musical skills and play particular musical instruments. You’d want them to end up being complementary with each other. Sure, some might be duplicative, such as having more than one guitar player, but that could be because one guitarist will play lead guitar and the other perhaps bass.

The same is said for doing ensemble machine learning. You’ll want to select machine learning approaches or methods that seem to make sense when considered in the totality as a group of such machine learning approaches. What is the strength of each ML chosen for the ensemble? What is the weakness of the ML chosen? By having multiple learning methods, hopefully you’ll be able to either find the “best” one for the given learning circumstance at hand, or you might be able to combine them together in a manner that offers a synergistic outcome beyond each of them performing individually.

So, you could select some N number of machine learning approaches, train them on some data, and then see which of them learned the best, based on some kind of metrics. You might after training feed the MLs new data and see which does the best job. For example, suppose I’m trying to train toward being able to discern street signs. So, I feed a bunch of pictures of street signs into each of the MLs in my ensemble. After they’ve each used their own respective learning approach, I then test them. I do so by feeding in new pictures of street signs and seeing which of them most consistently can identify a stop sign versus a speed limit sign.

See my article about street signs and AI self-driving cars: https://aitrends.com/selfdrivingcars/making-ai-sense-of-road-signs/

Out of my N number of machine learning approaches that I selected for this street sign learning task, suppose that the SVM turns out to be the “best” as based on my testing after the learning has occurred. I might then decide that for the street sign interpretation I’m going to exclusively use SVM for my AI self-driving car system. This aspect of selecting a particular model out of a set of models is sometimes referred to as the “bucket of models” approach, wherein you have a bucket of models in the ensemble and you choose one out of them. Your selection is based on a kind of “bake-off” as to which is the better choice.
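A minimal sketch of that bucket-of-models bake-off, continuing the synthetic data and learners from the earlier sketch (the function name is invented, and the test set is used here as a stand-in validation set purely for illustration):

```python
# Bucket-of-models sketch: score every candidate and keep only the winner.
# Reuses X_train, y_train, X_test, y_test and the learners dict from above.
def pick_best(candidates, X_tr, y_tr, X_val, y_val):
    scores = {}
    for name, model in candidates.items():
        model.fit(X_tr, y_tr)
        scores[name] = model.score(X_val, y_val)
    best = max(scores, key=scores.get)
    return best, candidates[best], scores

best_name, best_model, all_scores = pick_best(learners, X_train, y_train, X_test, y_test)
print("Bake-off winner:", best_name, all_scores)
```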

But, suppose that I discover that of the N machine learning approaches, sometimes the SVM is the "best" and at other times the GA is better. I don’t necessarily need to confine myself to choosing only one of the learning methods for the system. What I might do is opt to use both SVM and GA, and be aware beforehand of when each is preferred to come into play. This is akin to having the two guitarists in my musical band, each with their own strengths and weaknesses, so if I’m thoughtful about how to arrange my band when they play a concert, I’ll put each of them into the part of the music that seems best suited to their capabilities. Maybe one of them starts the song, and the other ends the song. Or however arranging them seems most suitable to their capabilities.

Thus, we might choose N number of machine learning approaches for our ensemble, train them, and then decide that some subset Q are chosen to become part of the actual system we are putting together. Q might be 1, in that maybe there’s only one of the machine learning approaches that seemed appropriate to move forward with, or Q might be 2, or 3, and so on up to the number N. If we do select more than just one, the question then arises as to when and how to use the Q number of chosen machine learning approaches.

In some cases, you might use each separately, such as maybe machine learning approach Q1 is good at detecting stop signs, while Q2 is good at detecting speed limit signs. Therefore, you put Q1 and Q2 into the real system and when it is working you are going to rely upon Q1 for stop sign detection and Q2 for speed limit sign detection.

In other cases, you might decide to combine together the machine learning approaches that have been successful to get into the set Q. I might decide that whenever a street sign is being analyzed, I’ll see what Q1 has to indicate about it, and what Q2 has to indicate about it. If they both agree that it is a stop sign, I’ll be satisfied that it’s likely a stop sign, and especially if Q1 is very sure of it. If they both agree that it is speed limit sign, and especially if Q2 is very sure of it, I’ll then be comfortable assuming that it is a speed limit sign.

Various Ways to Combine the Q Sets

There are various ways you might combine together the Q’s. You could simply consider them all equal in terms of their voting power, which is generally called "bagging" or bootstrap aggregation. Or, you could consider them to be unequal in their voting power. In this case, we’re going with the idea that Q1 is better at stop sign detection, so I’ll add a weighting to its results such that if its interpretation is a stop sign then I’ll give it a lot of weight, while if Q2 detects a stop sign I’ll give it a lower weighting because I already know beforehand it’s not so good at stop sign detection.
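Both flavors exist as stock scikit-learn constructs; a rough sketch, again continuing the earlier synthetic data, with the 2-to-1 voting weights being purely illustrative rather than tuned:

```python
# Equal-vote bagging versus unequal (weighted) voting -- a rough sketch.
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Bagging: many copies of one base learner, each trained on a bootstrap
# sample of the data, each with an equal say in the final vote.
bagged = BaggingClassifier(DecisionTreeClassifier(), n_estimators=25, random_state=0)

# Weighted voting: different learners, with more weight given to the one
# we trust more for this task (the 2-to-1 weighting is purely illustrative).
weighted = VotingClassifier(
    estimators=[("svm", SVC(random_state=0)),
                ("ann", MLPClassifier(max_iter=500, random_state=0))],
    voting="hard",
    weights=[2, 1],
)

for model in (bagged, weighted):
    model.fit(X_train, y_train)
    print(type(model).__name__, "accuracy:", round(model.score(X_test, y_test), 3))
```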

These machine learning approaches that are chosen for the ensemble are often referred to as individual learners. You can have any N number of these individual learners and it all depends on what you are trying to achieve and how many machine learning approaches you want to consider for the matter at-hand. Some also refer to these individual learners as base learners. A base or individual learner can be whatever machine learning approach you know and are comfortable with, and that matches to the learning task at hand, and as mentioned earlier can be ANN, SVM, GA, decision trees, etc.

Some believe that to make the learning task fair, you should provide essentially the same training data to the machine learning approaches that you’ve chosen for the matter at hand. Thus, I might select one sample of training data that I feed into each of the N machine learning approaches. I then see how each of those machine learning approaches did based on the sample data. For example, I select a thousand street sign images and feed them into my N machine learning approaches, which in this case are, say, three: ANN, SVM, and GA.

Or, instead, I might take a series of samples of the training data. Let’s refer to one such sample as S1, consisting of a thousand images randomly chosen from a population of 50,000 images, and feed the sample S1 into machine learning approach Q1. I might then select another sample of training data, let’s call it S2, consisting of another randomly selected set of a thousand images, and feed it into machine learning approach Q2. And so on for each of the N machine learning approaches that I’ve selected.
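A small sketch of that per-learner sampling, continuing the earlier synthetic data (the pool and sample sizes are placeholders, not recommendations):

```python
# Give each learner its own randomly drawn sample of the training pool.
import numpy as np

rng = np.random.default_rng(seed=0)

def draw_sample(X_pool, y_pool, size):
    idx = rng.choice(len(X_pool), size=size, replace=False)
    return X_pool[idx], y_pool[idx]

X_s1, y_s1 = draw_sample(X_train, y_train, size=1000)  # sample S1 for learner Q1
X_s2, y_s2 = draw_sample(X_train, y_train, size=1000)  # sample S2 for learner Q2
```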

I could then see how each of the machine learning approaches did on their respective sample data. I might then opt to keep all of the machine learning approaches for my actual system, or I might selectively choose which ones will go into my actual system. And, as mentioned earlier, if I have selected multiple machine learning approaches for the actual system then I’ll want to figure out how to possibly combine together their results.

You can further advance the ensemble learning technique by adding learning upon learning. Suppose I have a base set of individual learners. I might feed their results into a second level of machine learning approaches that act as meta-learners. In a sense, you can use the first level to do some initial screening and scanning, and then potentially have a second level that aims at further refining what the first level found. For example, suppose my first level identified that a street sign is a speed limit sign, but the first level isn’t capable of then determining what the speed limit numbers are. I might feed the results into a second level that is adept at ascertaining the numbers on the speed limit sign and able to detect the actual speed limit as posted on the sign.
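scikit-learn ships a stock construct for this two-level idea, usually called stacking; here is a minimal sketch, again continuing the earlier synthetic data (in a real street-sign system the second level would be a far more specialized digit reader than a logistic regression):

```python
# Two-level (stacked) ensemble sketch: base learners feed a meta-learner.
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

stacked = StackingClassifier(
    estimators=[("svm", SVC(random_state=0)),
                ("tree", DecisionTreeClassifier(random_state=0))],
    final_estimator=LogisticRegression(),  # the second-level meta-learner
)
stacked.fit(X_train, y_train)
print("Stacked accuracy:", round(stacked.score(X_test, y_test), 3))
```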

The ensemble approach to machine learning allows for a lot of flexibility in how you undertake it. There’s no particular standardized way in which you are supposed to do ensemble machine learning. It’s an area still evolving as to what works best and how to most effectively and efficiently use it.

Some might be tempted to throw every machine learning approach into an ensemble under the blind hope that it will then showcase which is the best for your matter at-hand. This is not as easy as it seems. You need to know what the machine learning approach does and there’s an effort involved in setting it up and giving it a fair chance. In essence, there are costs to undertaking this and you shouldn’t be using a scattergun style way of doing so.

For any particular matter, there are going to be so-called weak learners and strong learners. Some of the machine learning approaches are very good in some situations and quite poor in others. You also need to be thinking about the generalizability of the machine learning approaches. You could be fooled when feeding sample data into the machine learning approaches that say one of them looks really good, but it turns out maybe it has overfitted to the sample data. This might not then do you much good once you start feeding new data into the mix.

Another aspect is the value of diversity. If you have no diversity, such as using only one machine learning approach, there are likely to be situations wherein it isn’t as good as some other machine learning approach, and you should consider having diversity. Therefore, by having more than one machine learning approach in your mix, you are gaining diversity, which will hopefully pay off for varying circumstances. As with anything else, though, if you have too many machine learning approaches it can lead to muddled results and you might not be able to know which one to believe for a given result provided.

Keep in mind that any ensemble that you put together will require computational effort, in essence computing power, not only to do the training but, more importantly, when receiving new data and responding accordingly. Thus, if you opt to have a slew of machine learning approaches that are going to become part of your final set Q, and if you are expecting them to run in real-time on-board an AI self-driving car, this is something you need to carefully assess. The amount of memory consumed and the processing power consumed might be prohibitive. There’s a big difference between using an ensemble for a research-oriented task, wherein you might not have any particular time constraints, versus using it in an AI self-driving car that has severe time constraints and also limits on available computational processing.

For those of you familiar with Python, you might consider trying the Python-oriented scikit-learn machine learning library and experimenting with various ensemble machine learning aspects to get an understanding of how to use an ensemble learning approach.
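For instance, the library’s ensemble module ships several ready-made ensemble learners that can be compared on your own data in only a few lines; a quick sketch, continuing the earlier synthetic data:

```python
# Two stock scikit-learn ensembles, compared side by side.
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier

for model in (RandomForestClassifier(random_state=0), AdaBoostClassifier(random_state=0)):
    model.fit(X_train, y_train)
    print(type(model).__name__, "accuracy:", round(model.score(X_test, y_test), 3))
```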

If we’re going to have true AI systems, and especially AI self-driving cars, the odds are that we’ll need to deploy multiple machine learning models. Trying to directly program our way to full AI is unlikely to be feasible. As Benjamin Franklin is famous for saying: "Tell me and I forget. Teach me and I remember. Involve me and I learn." Using an ensemble learning approach is to-date a vital technique to get us toward that involve-me-and-learn goal. We might still need even better machine learning models, but the chances are that no matter what we discover for better MLs, we’ll end up needing to combine them into an ensemble. That’s how the music will come out sounding robust and fulfilling for achieving ultimate AI.

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.

 

Code Obfuscation for AI Self-Driving Cars

By Lance Eliot, the AI Trends Insider

Earlier in my career, I was hired to reverse engineer a million lines of code for a system that the original developer had long since disappeared. He had left behind no documentation. The firm had at least gotten him to provide a copy of the source code. Nobody at the firm knew anything about how the code itself worked. The firm was dependent upon the compiled code executing right and they simply hoped and prayed that they would not need to make any changes to the system.

Not a very good spot to be in.

I was told that the project was a hush-hush one and that I should not tell anyone else what I was doing. They would only let me see the source code while physically at their office, and otherwise I wasn’t to make a copy of it or take it off the premises. They even gave me a private room to work in, rather than sitting in a cubicle or other area where fellow staffers were. I became my own miniature skunk works, of sorts.

There was a mixture of excitement and trepidation for me about this project. I had done other reverse engineering efforts before and knew how tough it could be to figure out someone else’s code. Any morsels of “documentation” were always welcomed, even if the former developer(s) had only written things onto napkins or the back of recycled sheets of paper. Also, I usually had someone that kind of knew something about the structure of the code or at least had heard rumors by water cooler chats with the tech team. In this case, the only thing I had available were the end-users that used the system. I was able to converse with them and find out what the system was supposed to do, how they interacted with it, the outputs it produced, etc.

For a million lines of code with supposedly just one developer, he presumably was churning out an enormous amount of code for a single person. I was told that he was a "coding genius" and that he was always able to "magically" make the system do whatever they needed. He was a great resource, they said. He was willing to make changes on the fly. He would come in during weekends to make changes. They felt like they had been given the "hacker from heaven" (with the word hacker in this case meaning a proficient programmer, and not the nowadays more common use as a criminal or cyber hacker).

I gently pointed out that if he was such a great developer, dare I say software engineer, how come he hadn’t documented his work? How come no one else was ever able to lay eyes on his work? How come he was the only one that knew what it did? I pointed out that they had painted themselves into a corner. If this heavenly hacker got hit by a bus (and floated upstairs, if you know what I mean), what then?

Well, they sheepishly admitted that I must be some kind of mind reader because he had one day just gotten up and left the company. There were stories that his girlfriend had gotten kidnapped in some foreign country and that he had arranged for mercenaries to rescue her, and that he personally was going there to be part of the rescue team. My mouth gaped open at this story. Sure, I suppose it could be true. I kind of doubted it. Seemed bogus.

The whole thing smelled like the classic case of someone that was protective of their work, and also maybe wanted a bit of job security. It’s pretty common that some developers will purposely aim to not document their code and make it as obscure as they can, in hopes of staving off losing their job. The idea is that if you are the only one that knows the secret sauce, the firm won’t dare get rid of you. You will have them trapped. Many companies have gotten themselves into that same predicament. And, though it seems like an obvious ploy to you and me, these firms often are clueless about what is taking place and fall into the trap without any awareness. When the person suddenly departs, the firm wakes up “shockingly” to what they’ve allowed to happen.

Some developers that get themselves into this posture will also at times try to push their luck. They demand that the firm pay them more money. They demand that the firm let them have some special perks. They keep upping the ante figuring that they’ll see how far they can push their leverage. This will at times trigger a firm to realize that things aren’t so kosher. At that point, they often aren’t sure of what to do. I’ve been hired as a “code mercenary” to parachute into such situations and try to help bail out the firm. As you might guess, the original developer, if still around, becomes nearly impossible to deal with and will refuse to lift a finger to help share or explain the secret sauce.

When I’ve discussed these situations with the programmer that had led things in that direction, they usually justified it. They would tell me that the firm at first paid them less than what a McDonald’s hamburger slinger would get. They got no respect for having finely honed programming skills. If the firm was stupid enough to then allow things to get into a posture whereby the programmer now had the upper hand, it seems like fair play. The company was willing to “cheat” him, so why shouldn’t he do likewise back to the company. The world’s a tough place and we each need to make our own choices, is what I was usually told.

Besides, it often played out over months and sometimes years, and the firm could have at any time opted to do something to prevent the continuing and deepening dependency. One such programmer told me that he had “saved” the company a lot of money. The doing of documentation would have required more hours and more billable time. The act of showing the code to others and teaching them about how it worked, once again more billable time. Furthermore, just like the case that I began to describe herein, he had worked evenings and weekends, being at the beck and call of the firm. They had gotten a great deal and had no right to complain.

Anyway, I’ll put to the side for the moment the ethics involved in all of this.

For those of you interested in the ethical aspects of programmers, please see my article: https://aitrends.com/selfdrivingcars/algorithmic-transparency-self-driving-cars-call-action/

When I took a look at the code of the "man that went to save his girlfriend in a strange land," here’s what I found: Ludwig van Beethoven, Wolfgang Amadeus Mozart, Johann Sebastian Bach, Richard Wagner, Joseph Haydn, Johannes Brahms, Franz Schubert, Peter Ilyich Tchaikovsky, etc.

Huh?

Allow me to elaborate. The entire source code consisted of variables with names of famous musical composers, and likewise all of the structures and objects and subroutines were named after such composers or were based on titles of their compositions. Instead of seeing something like LoopCounter = LoopCounter + 1, it would say Mozart = Mozart + 1. Imagine a financial banking application that instead of referring to Account Name, Account Balance, Account Type, it instead said Bach, Wagner, and Brahms, respectively.

So, when trying to figure out the code, you’d need to tease out of the code that whenever you see the use of “Bach” it really means the Account Name field. When you see the use of Wagner it really means the Account Balance. And so on.
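To make that concrete, here is a hypothetical fragment in the spirit of what I found, next to what a transparently written version would have said (the values and identifier names are invented for illustration):

```python
# What the obfuscated source effectively said (names after composers):
Bach = "J. Smith"      # actually the account name
Wagner = 2500.00       # actually the account balance
Mozart = 0             # actually a loop counter
Mozart = Mozart + 1

# What a readable version would have said:
account_name = "J. Smith"
account_balance = 2500.00
loop_counter = 0
loop_counter = loop_counter + 1
```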

I was kind of curious about this seeming fascination with musical composers. When I asked if the developer was known for perhaps having a passion for classical music, I was told that maybe so, but not that anyone noticed.

I’d guess that it wasn’t so much his personal tastes in composers, and instead it was more likely his interest in code obfuscation.

You might not be aware that some programmers will purposely write their code in a manner that obfuscates it. They will do exactly what this developer had done. Instead of using naming that would logically befit the circumstance, they make up other names. The idea is that this makes it much harder for anyone else to figure out the code. This ties back to my earlier point about the potential desire to become the only person that can do the maintenance and upkeep on the code. By making things as obfuscated as you can, you cause anyone else to either be baffled or have to climb a steep learning curve to divine your secret sauce code.

If the person’s hand was forced by the company insisting that they share the code with Joe or Samantha, the programmer could say, sure, I’ll do so, and then hand them something that seems like utter mush. Here you go, have fun, the developer would say. If Joe and Samantha had not seen this kind of trickery before, they would likely roll their eyes and report back to management that it was going to be a long time to ferret out how the thing works.

I knew the CEO of a software company to whom this very thing happened, and when I told him that the programmer had obfuscated the code, the CEO nearly blew his top. We’ll sue him for every dime we ever paid him, the CEO exclaimed. We’ll hang him out to dry and tell any future prospective employer that he’s poison and don’t ever hire him. And so on. Of course, trying to go after the programmer for this is going to be somewhat problematic. Did the code work? Yes. Did it do what the firm wanted? Yes. Did the firm ever say anything about the code having to be more transparently written? No.

Motivations for Code Obfuscation Vary

I realize that some of you have dealt with code that appears to be the product of obfuscation, and yet you might say that it wasn’t done intentionally. Yes, I agree that sometimes the code obfuscation can occur by happenstance. A programmer that doesn’t consider the ramifications of their coding practices might indeed write such code. They maybe didn’t intend to write something obfuscated, it just turned out that way. Suppose this programmer loved the classics and the composers, and when he started the coding he opted to use their names. That was well and good for say the first thousand lines of code.

He then kept building upon the initial base of code. Might as well continue the theme of using composer names. After a while, the whole darned thing is shaped in that way. It can happen, bit by bit. At each point in time, you think it doesn’t make sense to redo what you’ve already done, and so you just keep going. It might be like constructing a building that you first laid down some wood beams for, and even if maybe you should be using steel instead because that building is actually ultimately going to be a skyscraper, you started with wood, you kept adding into it with wood, and so wood it is.

For those of you that have pride as a software engineer, these stories often make you sick to your stomach. It’s those seat-of-the-pants programmers that give software development and software developers a bad name. Code obfuscation for a true software engineer is the antithesis of what they try to achieve. It’s like seeing a bridge with rivets and struts made of paper, where you know the whole thing was done in a jury-rigged manner. That’s not how you believe good and proper software should be written.

I think we can anyway say this, code obfuscation can happen for a number of reasons, including possibly:

• Unintentionally and without awareness of it as a concern
• Unintentionally, by falling into it a step at a time
• Intentionally and with some loathsome intent to obfuscate
• Intentionally but with an innocent or well-meaning intent

So far, the intent to obfuscate has been suggested as something being done for job security or other personal reasons that have seemed somewhat untoward. There’s another reason to want to obfuscate the code, namely for code security or privacy, and rightfully so.

Suppose you are worried that someone else might find the code. This someone is not supposed to have it. You want the code to remain relatively private and you are hopeful of securing it so that no one else can rip it off or otherwise see what’s in it. This could rightfully be the case, since you’ve written the code and the Intellectual Property (IP) rights to it belong to you. Companies often invest millions of dollars into developing proprietary code and they obviously would like to prevent others from readily taking it or stealing it.

You might opt to encrypt the file that contains the source code. Thus, if someone gets the file, they need to find a means to decrypt it to see the contents. You can use a really strong form of encryption, and hopefully anyone wanting to inappropriately decrypt the file will have a hard time doing so and might be unable to, or will give up trying.

Using encryption is pretty much an on-or-off kind of thing. In the encrypted state, no sense can be made of the contents, presumably. Suppose though that you realize that one way or another, someone has a chance of actually getting to the source code and being able to read what it says. Either they decrypt the file, or they happen to come along when it is otherwise in a decrypted state and grab a copy of it; maybe they wander over to the programmer’s desktop, plug in a USB stick, and quickly get a copy while it is in plaintext format.

So, another layer of protection would be to obfuscate the code. You render the code less understandable. This can be done by altering the semantics of the code. The example of the musical composer names showcases how you might do this obfuscation. The musical composer names are written in English and readily read. But, from a logical perspective, in the context of this code, it wouldn’t have any meaning to someone else. The programmer(s) working on the code might have agreed that they all accept the idea that Bach means Account Name and Wagner means Account Balance.
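To make that kind of naming agreement concrete, here’s a minimal sketch in Python; the account-handling snippet is purely illustrative and not drawn from any actual system:

# Readable version: the intent is obvious at a glance.
def apply_deposit(account_name, account_balance, amount):
    print("Depositing into " + account_name)
    return account_balance + amount

# Obfuscated-by-naming version: the exact same computation, but the composer
# names only mean something to the team that agreed that Bach is the account
# name, Wagner is the account balance, and Chopin is the deposit amount.
def brahms(bach, wagner, chopin):
    print("Depositing into " + bach)
    return wagner + chopin

To anyone lacking the agreed mapping, the second version computes correctly yet reads like nonsense, which is exactly the point.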

Anyone else that somehow gets their hands on the code will be perplexed. What does Bach mean here? What does Wagner refer to? It puts those interlopers at a disadvantage. Rather than just picking up the code and immediately comprehending it, now they need to carefully study it and try to “reverse engineer” what it seems to be doing and how it is working.

This might require a laborious line-by-line inspection. It might take lots of time to figure out. Maybe it is so well obfuscated that there’s no reasonable way to figure it out at all.

The code obfuscation can also act like a watermark. Suppose that someone else grabs your code, and they opt to reuse it in their own system. They go around telling everyone that it is their own code, written from scratch, and no one else’s. Meanwhile, you come along and are able to take a look at their code. Imagine that you look at their code and observe that the code has musical composer names for all of the key objects in the code. Coincidence? Maybe, maybe not. It could be a means to try and argue that the code was ripped off from your code.

There are ways to programmatically make code obfuscated. Thus, you don’t necessarily need to do so by hand. You can use a tool to do the code obfuscation. Likewise, there are tools to help you crack a code obfuscation. Thus, you don’t necessarily need to do so entirely by hand.
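For a flavor of how a tool-based rename pass might work, here’s a minimal sketch using Python’s standard ast module (ast.unparse requires Python 3.9 or later); the renaming map is hypothetical, and real obfuscation tools do considerably more than this:

import ast

# Hypothetical mapping from meaningful identifiers to opaque composer names.
RENAMES = {"apply_deposit": "brahms", "account_balance": "wagner"}

class Renamer(ast.NodeTransformer):
    """Renames selected identifiers; the computation itself is untouched."""
    def visit_FunctionDef(self, node):
        node.name = RENAMES.get(node.name, node.name)
        self.generic_visit(node)
        return node
    def visit_arg(self, node):
        node.arg = RENAMES.get(node.arg, node.arg)
        return node
    def visit_Name(self, node):
        node.id = RENAMES.get(node.id, node.id)
        return node

source = "def apply_deposit(account_balance, amount):\n    return account_balance + amount\n"
print(ast.unparse(Renamer().visit(ast.parse(source))))
# Prints the renamed, behaviorally identical function definition.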

In the case of the musical composer names, I might simply substitute the word “Bach” with the words “Account Name” and so on, which might make the code more comprehensible. The reality is that it isn’t quite that easy, and there are lots of clever ways to make the code so obfuscated that it is very hard to render it fully un-obfuscated. There is still often a lot of by-hand effort required.

In this sense, the use of code obfuscation can be by purposeful design. You are trying to achieve the so-called “security by obscurity” kind of trickery. If you can make something obscure, it tends to make it harder to figure out and break into. At my house, I might put a key outside in my backyard so that I can get in whenever I want, but of course a burglar can now do the same. I might put the key under the doormat, but that’s pretty minimal obscurity. If I instead put the key inside a fake rock and I put it amongst a whole dirt area of rocks, the obfuscation is a lot stronger.

One thing about source code obfuscation that needs to be kept in mind is that you don’t want to alter the code such that it computationally does something different than what it otherwise was going to do. That’s not usually considered to be in the realm of obfuscation. In other words, you can change the appearance of the code, and you can possibly rearrange the code so that it doesn’t seem as recognizable, but if you’ve now made it so that the code can no longer calculate the person’s banking balance, or if you’ve changed it such that the banking balance now gets calculated in a different way, you are doing more than just code obfuscation.

In quick recap, here are some key aspects of code obfuscation:

  •         You are changing up the semantics and the look, but not the computational effect
  •         Code obfuscation can be done by-hand and/or by the use of tools
  •         Trying to reverse engineer the obfuscation can be done by-hand and/or by the use of tools
  •         There is weak obfuscation that only lightly disguises the code
  •         There is strong obfuscation that makes the code deeply arcane and hard to unwind
  •         Code obfuscation can serve an additional purpose of trying to act like a watermark

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. And, like many of the auto makers and tech firms, we consider the source code to be proprietary and worthy of protecting.

One means for the auto makers and tech firms to try and achieve some “security via obscurity” is to go ahead and apply code obfuscation to their precious and highly costly source code.

This will help too for circumstances where someone somehow gets a copy of the source code. It could be an insider that opts to leak it to another firm or sell it to a competitor. Or, it could be that a breach took place into the systems holding the source code and a determined attacker managed to grab it. At some later point in time, if the matter gets exposed and there is a legal dispute, it’s possible that the code obfuscation aspects could come into play as a type of watermark of the original code.

For my article about the stealing of secrets and AI self-driving cars, see: https://aitrends.com/selfdrivingcars/stealing-secrets-about-ai-self-driving-cars/

For my article about the egocentric designs of AI self-driving cars, see:  https://aitrends.com/selfdrivingcars/egocentric-design-and-ai-self-driving-cars/

If you are considering using code obfuscation for this kind of purpose, you’ll obviously want to make sure that the rest of the team involved in the code development is on-board with the notion too. Some developers will like the idea, some will not. Some firms will say that when you check out the code from a versioning system, it will automatically be un-obfuscated, and only when it is resting in the code management system will it be in its obfuscated form. Anyway, there are lots of issues to be considered before jumping into this.

For my article about AI developers and groupthink, see: https://aitrends.com/selfdrivingcars/groupthink-dilemmas-for-developing-ai-self-driving-cars/

For the dangers of making an AI system into a Frankenstein, see my article: https://aitrends.com/selfdrivingcars/frankenstein-and-ai-self-driving-cars/

Let’s also remember that there are other ways that one can end up with code obfuscation. For some of the auto makers and tech firms, and with some of the open source code that has been posted for AI self-driving cars, I’ve noticed right away a certain amount of code obfuscation that has crept into the code when I’ve gotten an opportunity to inspect it.

As mentioned earlier, it could be that the natural inclination of the programmers or AI developers involves writing code that has code obfuscation in it. This can be especially true for some of the AI developers that were working in university research labs and have now taken a job at an auto maker or tech firm that is creating AI software for self-driving cars. In the academic environment, often any kind of code you want to sling is fine, with no need to “pretty it up” since it usually is done as a one-off to do an experiment or provide some kind of proof about an algorithm.

Self-Driving Car Software Needs to be Well-Built

The software intended to run a self-driving car ought to be better made than that – lives are at stake.

In some cases, the AI developers are under such immense pressure to churn out code for a self-driving car, due to the auto maker or tech firm having unimaginable or unattainable deadlines, that they write code without regard to whether it is clear-cut or not. As often has been said, there is no style in a knife fight. There can also be AI developers that aren’t given guidance to write clearer code, or aren’t given the time to do so, or aren’t rewarded for doing so, and all of those reasons can come into play in code obfuscation too.

See my article about AI developer burnout: https://aitrends.com/selfdrivingcars/developer-burnout-and-ai-self-driving-cars/

See my article about API’s and AI self-driving cars: https://aitrends.com/selfdrivingcars/apis-and-ai-self-driving-cars/

Per my framework about AI self-driving cars, these are the major tasks involved in the AI driving the car:

  •         Sensor data collection and interpretation
  •         Sensor fusion
  •         Virtual world model updating
  •         AI action plan formulation
  •         Car controls command issuance

See my framework at: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

There is a lot of code involved in each of those tasks. This is a real-time system that must be able to act and react quickly. The code needs to be tightly done so that it can run in optimal time. Meanwhile, the code needs to be understandable since the humans that wrote the code will need to find bugs in it, when they appear (which they will), and the humans need to update the code (such as when new sensors are added), and so on.

Some of the elements are based on “non-code” such as a machine learning model. Let’s agree to carve that out of the code obfuscation topic for the moment, though there are certainly ways to craft a machine learning model that can be more transparent or less transparent. In any case, taking out those pre-canned portions, I assure you that there’s a lot of code still leftover.

See my article about machine learning models and AI self-driving cars: https://aitrends.com/selfdrivingcars/machine-learning-benchmarks-and-ai-self-driving-cars/

The auto makers and tech firms are in a mixed bag right now, with some of them developing AI software for self-driving cars that is well written, robust, and ready for being maintained and updated. Others are rushing to write the code, or are unaware of the ramifications of writing obfuscated code, and might not realize the error of their ways until further along in the life cycle of advancing their self-driving cars. There are even some AI developers that are like the music-loving programmer that wrote his code with musical composers in mind, whether as an unintentional act or an intentional act. In any case, it might be “good” for them right now, but later on it will most likely turn out to be “bad” for them and others too.

Here then are the final rules for today’s discussion on code obfuscation for AI self-driving cars:

  •         If it is happening and you don’t realize it, please wake up and decide what you ought overtly to be doing
  •         If you are using it as a rightful technique for security by obscurity, please make sure you do so aptly
  •         If you are using it for nefarious purposes, just be aware that what goes around comes around
  •         If you aren’t using it, decide explicitly whether to consider it or not, making a calculated decision about the value and ROI of using code obfuscation

For those of you reading this article, please be aware that in thirty seconds this text will self-obfuscate into English language obfuscation and the article will no longer appear to be about code obfuscation and instead will be about underwater basket weaving. The secrets of code obfuscation herein will no longer be visible. Voila!

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.

 

Affordability of AI Self-Driving Cars

By Lance Eliot, the AI Trends Insider They’ll cost too much. They will only be for the elite. Having one will be a sign of prestige. It’s a rich person’s toy. The “have nots” will not be able to get one. People are going to rise-up in resentment that the general population can’t get one. […]

By Lance Eliot, the AI Trends Insider

They’ll cost too much. They will only be for the elite. Having one will be a sign of prestige. It’s a rich person’s toy. The “have nots” will not be able to get one. People are going to rise-up in resentment that the general population can’t get one. Maybe the government should step in and control the pricing. Refuse to get into one as a form of protest. Ban them because if the rest of us cannot have one, nobody should.

What’s this all about?

It’s some of the comments that are already being voiced about the potential affordability (or lack thereof) of AI self-driving cars.

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars, and we get asked quite frequently about whether AI self-driving cars will be affordable or not. I thought you might find of interest my answer (read on).

When people clamor about the potential sky-high cost of AI self-driving cars, you might at first wonder if people are maybe talking about flying cars, rather than AI self-driving cars. I mention this because there are some that say that flying cars will be very pricey, and I think we all pretty much accept that notion. We know that jet planes are pricey, so why shouldn’t a flying car be pricey? But for an earth-bound car that rolls on the ground, cannot fly in the air, and cannot submerge like a submarine, we openly question how much such a seemingly “ordinary” car should cost.

It is said that a Rolls-Royce Sweptail is priced upwards of $13 million. Have there been mass protests about this? Are we upset that only a few that are wealthy can afford such a car? Not really. It is pretty much taken for granted that there are cars that are indeed very expensive. Of course, we might all consider it rather foolish of those that are willing to pump hard-earned millions of dollars into such a car. We might think them pretentious for doing so. Or, we might envy them that they have the means to buy such a car. Either way, the Rolls-Royce and other such top-end cars are over-the-top pricey, and most people don’t especially complain or argue about it.

Part of the reason that people seem to object to the possible high price tag on an AI self-driving car is that the AI self-driving car is being touted as a means to benefit society. AI self-driving cars are ultimately, hopefully, going to cut down on the number of annual driving-related deaths. AI self-driving cars will provide mobility to those that need it and cannot otherwise achieve it, such as the poor and the elderly. If an AI self-driving car has such tremendous societal benefits, then we as a society want to ensure that society as a whole gets those benefits and that those benefits apply across the board. It’s a car of the people, for the people.

What kind of pricing, then, for an AI self-driving car are people apparently thinking of? Some that don’t have any clue of what the price might be are leaving the price tag unknown, and thus it becomes easier to get into a lather about how expensive it is. It could be a zillion dollars. Or more. This though seems like a rather vacuous way to discuss the topic. It would seem that we might be better off if we start tossing around some actual numbers and then see whether that’s prohibitive or not for buying an AI self-driving car.

The average transaction price (ATP) for a traditional passenger car in the United States for this year is so far around $36,000 according to various published statistics. That’s the national average.

When AI self-driving cars first got started a few years ago, the cost of the added sensors and other specialized gear for achieving self-driving capabilities was estimated at somewhere around $100,000. Since then, the price of those specialized self-driving car components has steadily come down. As with most high-tech, the cost starts “high” and then, as the technology is perfected and the costs to make it are wrung out of the process, the price heads downward. In any case, some at the time were saying that an AI self-driving car might be around $150,000 to $200,000, though that’s a wild guess and we don’t yet know what the real pricing will be. Will it be a million dollars for an AI self-driving car? That doesn’t seem to be in anyone’s estimates at this time.

Of course, any time a new car comes out, particularly one that has new innovations, there is usually a premium price placed on the car. It’s a novelty item at first. Such cars are usually scarce initially, and so the usual laws of supply and demand help to punch up the price. If the car is eventually able to be mass produced, gradually the price starts to come down as more of those cars enter the marketplace. If there are competitors that provide equivalent alternatives, the competition of the marketplace tends to drive down the price. You can refer to the Tesla models as prime examples of this kind of marketplace phenomenon.

Will True AI Self-Driving Cars Be Within Financial Reach?

Suppose indeed that the first true AI self-driving cars are priced in the low hundreds of thousands of dollars. Does that mean that those cars are out of the reach of the everyday person?

Before we jump into the answer for that question, let’s clarify what I mean by true AI self-driving cars. There are levels of self-driving cars. The topmost level is Level 5. A Level 5 AI self-driving car is able to be driven by the AI without any human intervention. In fact, there is no human driver needed in a Level 5 car. So much so that there are unlikely to be any driving controls in a Level 5 self-driving car for a human to operate even if the human wanted to try and drive it. In theory, the AI of the Level 5 self-driving car is supposed to be able to drive the car as a human could.

Let’s therefore not consider in this affordability discussion the AI self-driving cars that are less than a Level 5. A less than Level 5 self-driving car is a lot like a conventional car, though augmented in a manner that allows for co-sharing of the driving task. This means that there must be a human driver in a car that is classified as a less than Level 5 self-driving car. In spite of having whatever kind of AI in such a self-driving car, the driving task is still considered the responsibility of the human driver. Even if the human driver opts to take their eyes off the road, which can be an easy trap to fall into when in a less than Level 5 self-driving car, and even if the AI were to suddenly toss control back to that human driver, it is nonetheless the human driver who is considered responsible for the driving. I’ve warned many times about the dangers this creates in the driving task.

For my article about the levels of AI self-driving cars, see: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

For my framework about AI self-driving cars, see: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the dangers of co-shared driving and AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

We’ll focus herein on the true Level 5 self-driving car. This is the self-driving car that has the full bells and whistles and really is a self-driving car. No human driver needed. This is the one that those referring to a driving utopia are actually meaning to bring up. The less than level 5’s aren’t quite so exciting, though they might well be important and perhaps stepping stones to the level 5.

Now, let’s get back to the question at hand – will a true Level 5 AI self-driving car be affordable?

We can first quibble about the word “affordable” in this context. If by affordability we mean that it should be around the same price tag as the ATP of $36,000 for today’s average passenger car in the United States, I’d say that we aren’t going to see Level 5 AI self-driving cars at that price for a long time, likely not until after they are quite prevalent. In other words, out of the gate, it isn’t going to be that kind of price (it will be much higher). After years of growth of more and more AI self-driving cars coming into the marketplace, sure, it could possibly eventually come down to that range. Keep in mind that today there are around 200 million conventional cars in the United States, and presumably over time those cars will get replaced by AI self-driving cars. It won’t happen overnight. It will be a gradual wind-down of the old ways, and a gradual wind-up of the new ways.

Imagine that the first sets of AI self-driving cars will cost in the neighborhood of several hundreds of thousands of dollars. Obviously, that price is outside the range of the average person. No argument there.

But that’s only if you look at the problem or question in just one simple way, namely purchasing the car purely for personal use. That’s the mental trap that most fall into. They perceive the AI self-driving car as a personal car and nothing more. I’d suggest you reconsider that notion.

It is generally predicted and accepted that AI self-driving cars are likely to be running 24×7. You can have your self-driving car going all the time, pretty much. Today’s conventional cars are only used around 5% of their available time. This makes sense because you drive your personal car to work, you park it, you work all day, you drive home. Over ninety percent of the day it is sitting and not doing anything other than being a paperweight, if you will.
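As a quick back-of-the-envelope sketch of that oft-cited utilization figure (the 1.2 hours of daily driving below is an assumed number for illustration, not a measured statistic):

# Rough utilization arithmetic with assumed, illustrative numbers.
driving_hours_per_day = 1.2              # assumed: commute plus a few errands
utilization = driving_hours_per_day / 24
idle_hours = 24 - driving_hours_per_day
print(f"Utilization: {utilization:.0%}")          # roughly 5%
print(f"Idle capacity: {idle_hours:.1f} hours per day")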

For AI self-driving cars, you have an electronic chauffeur that will drive the car whenever you want. But, are you actually going to want to be going in your AI self-driving car all day long? I doubt it. So, you will have extra available driving capacity that is unused. You could just chalk it up and say that’s the way the ball bounces. More than likely, you would realize that you could turn that idle time into personal revenue.

See my article about the non-stop use of AI self-driving cars: https://aitrends.com/selfdrivingcars/non-stop-ai-self-driving-cars-truths-and-consequences/

Here’s what is most likely to actually happen.

We all generally agree that the advent of the AI self-driving car will spur the ridesharing industry. In fact, some say that the AI self-driving car will shift our society into a ridesharing-as-an-economy model. This is why Uber and Lyft and other existing ridesharing firms are so frantic about AI self-driving cars. Right now, ridesharing firms are able to justify what they do because they connect human drivers who have cars with those that need a lift. If you eliminate the human driver from the equation, what then is the ridesharing firm doing? That’s the scary proposition for the ridesharing firms.

This all implies that ridesharing-as-a-service will now be possible for the masses. It doesn’t matter if you have a full-time job and cannot spare the time to be a ridesharing driver, because instead you just let your AI self-driving car be your ridesharing service. You mainly need to get connected up with people that need a ridesharing lift. How will that occur? Uber and Lyft are hopeful it will occur via their platforms, but it could instead be, say, a Facebook, where the people are already there in the billions. There is a big shakeout coming.

Meanwhile, you buy yourself an AI self-driving car, and you use it for some portion of the time, and the rest of the time you have it earning some extra dough as a ridesharing vehicle. Nice!

This then ties into the affordability question posed earlier.

If you are going to have revenue generated by your AI self-driving car, you can then look at it as a small business of sorts. You then should consider your AI self-driving car as an investment. You are making an investment in an asset that you can put to work and earn revenue. As such, you should then consider what the revenue might be and what the cost might be to achieve that revenue.

Self-Driving Car Revenue Potential Opens Door to Affordability

This opens the door towards being able to afford an otherwise seemingly unaffordable car. Even if the AI self-driving car costs you, say, several hundred thousand dollars, which seems doubtful as a price tag but let’s use it as an example, you can weigh against that the revenue you can earn from the car.
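To make that investment framing concrete, here’s a minimal payback sketch; every figure in it (purchase price, fares, operating costs) is an assumption for illustration only, not a forecast:

# Simple payback-period sketch for an AI self-driving car run as a
# ridesharing side business. All figures are assumed for illustration.
purchase_price = 200_000            # assumed up-front cost of the vehicle
annual_fare_revenue = 60_000        # assumed gross ridesharing fares per year
annual_operating_costs = 25_000     # assumed maintenance, energy, insurance, fees
net_annual_income = annual_fare_revenue - annual_operating_costs
payback_years = purchase_price / net_annual_income
print(f"Net income per year: ${net_annual_income:,}")
print(f"Simple payback period: {payback_years:.1f} years")

Under those assumed numbers the car pays for itself in roughly six years; change the assumptions and the answer swings widely, which is precisely why the arithmetic is worth running before you buy.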

For tax purposes (depending on how taxes will be regulated in the era of AI self-driving cars), you can usually deduct car loan interest when using a car for business purposes (the deduction applies only to the portion of use that is for business purposes). So, suppose you use your AI self-driving car personally 15% of the time, and the other 85% of the time you use it for your ridesharing business; you can then normally deduct the car loan interest for the 85% portion.

You can also take other deductions for tax purposes, sometimes using the federal standard mileage rate, or instead itemizing actual vehicle expenses, including:

  •         Depreciation
  •         Licenses
  •         Gas and oil
  •         Tolls
  •         Lease payments
  •         Insurance
  •         Garage rent
  •         Parking fees
  •         Registration fees
  •         Repairs
  •         Tires

Therefore, you need to rethink the cost of an AI self-driving car. It becomes a potential money maker and you need to consider the cost to purchase the car, the cost of ongoing maintenance and support, the cost of special taxes, the cost of undertaking the ridesharing services, and other such associated costs.

These costs are weighed in comparison to the potential revenue. You might at first only be thinking of the revenue derived from the riders that use your AI self-driving car. You might also consider that there is the opportunity for in-car entertainment that you could possibly charge a fee for (access to streaming movies, etc.), perhaps in-car provided food (you might stock the self-driving car with a small refrigerator and have other food in it), etc. You can also possibly use your AI self-driving car for doing advertising and get money from advertisers based on how many eyeballs see their ads while people are going around in your AI self-driving car.

And, this all then becomes part of your budding small business. You get various tax breaks. You might also then expand your business into other areas of related operations or even beyond AI self-driving cars entirely.

One related tie-in might be with the companies that are providing ridesharing scooters and bicycles. Suppose someone gets into your AI self-driving car and they indicate that when they reach their destination, they’d like to have a bicycle to rent. Your ridesharing service might have an arrangement with a firm that does those kinds of ridesharing services, and you get a piece of the action accordingly.

Will the average person be ready to be their own AI self-driving car mogul?

Likely not. But, fear not, a cottage industry will quickly arise that will support the emergence of small businesses that are doing ridesharing with AI self-driving cars. I’ll bet there will be seminars on how to set up your own corporation for these purposes. How to keep your ridesharing AI self-driving car always on the go. Accountants will promote their tax services to the ridesharing start-ups. There will be auto maintenance and repair shops that will seek to be your primary go-to for keeping your ridesharing money maker going. And so on.

In that sense, there will be a ridesharing-as-a-business boom to help new entrepreneurs tap into the ridesharing-as-a-service economy. Make millions off your AI self-driving car, the late-night TV infomercials will say. You’ll see ads on YouTube of a smiling person who says that until they got their AI self-driving car they were stuck in a dead-end job, but now, with their money-producing AI self-driving car, they are so wealthy they don’t know where to put all the money they are making. The big bonanza is on its way.

This approach of being a solo entrepreneur to afford an AI self-driving car is only one of several possible approaches. I’d guess it will be perhaps the most popular.

I’ll caution though that it is not a guaranteed path to riches. There will be some that manage to get themselves an AI self-driving car and then discover that it is not being put to ridesharing use as much as they thought. It could be that they live in an area swamped with other AI self-driving cars and so they get just leftover crumbs of ridesharing requests. Or, they are in an area that has other mass transit and no one needs ridesharing. Or, maybe few will trust using an AI self-driving car and so there won’t be many that are willing to use it for ridesharing. Another angle is that you get such a car and do so under the assumption it will be ridesharing for 85% of the time, but you instead use it for personal purposes 70% of the time and this leaves only 30% of the time for the ridesharing (cutting down on the revenue potential).

Meanwhile, there are some other alternatives, let’s briefly consider them:

  •         Solo ridesharing business as a money maker (discussed so far) of an AI self-driving car
  •         Pooling an AI self-driving car
  •         Timeshare an AI self-driving car
  •         Personal use exclusively of an AI self-driving car
  •         Other

In the case of pooling an AI self-driving car, imagine that your next door neighbor would like an AI self-driving car and so would you. The two of you realize that since the neighbor starts work at 7 a.m., while you start work at 8 a.m., and the kids of both families start school at 9 a.m., here’s what you could do. You and the neighbor split the cost of an AI self-driving car. It takes your neighbor to work at 7 a.m., comes back and takes you to work at 8 a.m., comes back and takes the kids to school by 9 a.m. In essence, you all pool the use of the AI self-driving car. There are no revenue aspects; it’s all just being used for personal use, on a group basis. This could be done with more than just one neighbor.

The pooling would then allow you to split the cost of the AI self-driving car, making it more affordable per person. Suppose you have three people and they decide to evenly split the cost; this would mean that you’d only need to afford one-third of whatever the prevailing cost of an AI self-driving car is at that time. Voila, the cost is less, seemingly so. But, you’d need to figure out the sharing aspects and I realize it could get heated as to who gets to use the AI self-driving car when needed. It’s like having only one TV; it can be difficult at times to balance the fact that someone wants to watch one show and someone else wants another one – say you need the AI self-driving car to take you to the store, while the kids need it to get to the ballpark.

In the case of the timeshare approach, you buy into an AI self-driving car like you would if buying into a condo in San Carlo. You purchase a time-based portion of the AI self-driving car. You can use it for whatever is the agreed amount of time. Potentially, you can opt to “invest” in more than one at a time, perhaps getting a timeshare in a passenger car that’s an AI self-driving car, and also investing in an RV that’s an AI self-driving vehicle. You would use them each at different times for their suitable purposes. With any kind of timesharing arrangement, watch out for the details and whether you can get out of it or it might have other such limitations.

There’s the purely personal use of an AI self-driving car option too, which we started this discussion by saying it might be too much for the average person to afford. Even that is somewhat malleable in that there are likely to be car loans that take into account that you are buying an AI self-driving car. The loans might be very affordable in the sense that there’s the collateral of the car, plus the AI self-driving car if needed can be repossessed and then turned into a potential money maker. The auto makers and the banks and others might be willing to cut some pretty good loans to get you into your very own AI self-driving car. As always, watch out for the interest and any onerous loan terms!

Well, before we get too far ahead of ourselves, the main point to be made is that even if AI self-driving cars are priced “high” in comparison to today’s conventional cars, it does not necessarily mean that those AI self-driving cars are going to be only for the very rich. Instead, those AI self-driving cars are actually going to be a means to help augment the wealth of those that see this as an opportunity. Not everyone will be ready or willing to go the small business route. For many, though, it will be a means to not only enjoy the benefits of AI self-driving cars, but also spark them towards becoming entrepreneurs. Let’s see how this all plays out and maybe it adds another potential benefit to the emergence of AI self-driving cars.

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.

 

Family Road Trip and AI Self-Driving Cars

By Lance Eliot, the AI Trends Insider Have you ever taken a road trip across the United States with your family? It’s considered a core part of Americana to make such a trip. Somewhat immortalized by the now classic movie National Lampoon’s Vacation, the film showcased the doting scatter brained father Clark Griswold with his […]

By Lance Eliot, the AI Trends Insider

Have you ever taken a road trip across the United States with your family? It’s considered a core part of Americana to make such a trip. Somewhat immortalized by the now classic movie National Lampoon’s Vacation, the film showcased the doting scatter brained father Clark Griswold with his caring wife, Ellen, and their vacation-with-your-parents trapped children, Rusty and Audrey, as they all at times either enjoyed or managed to endure a cross-country expedition of a lifetime.

As is typically portrayed in such situations, the father drives the car for most of the trip and serves as the taskmaster to keep the trip moving forward, the mother provides soothing care for the family and tries to keep things on an even keel, and the children must contend with parents that are out-of-touch with reality and that are jointly determined that come heck-or-high-water their kids will presumably have a good time (at least by the definition of the parents). The movie was released in 1983 and became a blockbuster that spawned other variants. Today, we can find fault with how the nuclear family is portrayed and the stereotypes used throughout the movie, but nonetheless it put on film what generally is known as the family road trip.

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI systems for self-driving cars and doing so with an eye towards how people will want to use AI self-driving cars. It is important to consider how human occupants will behave while inside an AI self-driving car and therefore astutely design and build AI self-driving cars accordingly.

In a conventional car, for a family road trip, it is pretty much the case that the parents sit in the front seats of the car. This makes sense since either the father or the mother will be the driver of the car, oftentimes switching off the driving task from one to the other. In prior times the driving task was considered to be “manly” and so usually the husband was shown driving the car. In contemporary times, whatever the nature and gender of the parents, the point is that the licensed driving adults are most likely to be seated in the front of the car.

If there are two parents, why have both in the front seat, you might ask? Couldn’t you put one of the children up in the front passenger seat, next to the parent or adult that is driving the car? You can certainly arrange things that way, but the usual notion about having the front passenger be another adult or parent is that they can be watching the roadway, serving as an extra pair of eyes for the driver. The driver might be preoccupied with the traffic in front of the car, and meanwhile the front passenger notices that further up ahead there is a bridge-out sign warning that approaching cars need to be cautious. The front passenger is a kind of co-pilot, though they don’t have ready access to the car controls and must instead verbally provide advice to the driver.

The front passenger is not always shown in movies as a dispassionate observer that thoughtfully aids the driver, though. Humorous anecdotes are often shown wherein the front passenger suddenly points at a cow and screams out loud for everyone to look. The driver could be distracted by such an exclamation and inadvertently drive off the road at the sudden yelling and pointing. Another commonly portrayed scenario is the front passenger that insists the driver take the next right turn ahead, but offers such a verbal instruction only once the car is nearly past the available turn. The driver is then torn between making a radical and dangerous turn, or passing the turn entirely and then likely getting berated by the front seat passenger.

Does this seem familiar to you?

If so, you are likely a veteran of family road trips. Congratulations.

What about the children that are seated in the back seat of the car? One portrayal would be of young children with impressionable minds that are carefully studying their parents and learning the wise ways of life during the vacation, becoming more learned young adults because of the experience. Of course, this is not the stuff of reality.

Kids Converse with Out-of-Touch Parents

Instead, the movies show something that pertains more closely to reality. The kids often feel trapped. Their parents are forcing them along on a trip. It’s a trip the parents want, but not necessarily what the kids want. At times feeling like prisoners, they need to occupy themselves for hours at a time on long stretches of highway. Though at first it might be keen to see an open highway and the mountains and blue skies, it is something that won’t hold your attention for hours upon hours, days upon days. Boredom sets in. Conversation with the parents also can only last so long. The parents are out-of-touch with the interests, musical tastes, and other facets of the younger generation.

The classic indication is that ultimately the kids will get into a fight. Not a fisticuffs fight per se, more like an arms-waving and hands-slapping kind of fight. And the parents then need to turn their heads and look at the kids with laser-like eyes, and tell the kids in overtly stern terms, stop that fighting back there or there will be heck to pay. No more ice cream, no more allowance, or whatever other levers the parents can use to threaten the kids to behave. Don’t make me come back there, is the usual refrain.

Sometimes one or more of the kids will start crying. Could be for just about any reason. They are tired of the trip and want it to end. They got hit by their brother or sister and want the parents to know it. Etc. The parents will often retort that the kids need to stop crying. Or, as they are wont to say, they’ll give them a true reason to cry (a veiled threat). If the kids are complaining incessantly about the trip, this will likely produce the other classic veiled threat of “I’d better not hear another peep out of you!”

Does the above suggest that the togetherness of the family road trip is perhaps hollow and we should abandon the pretense of having a family trip? I don’t think so. It’s more like showing how family trips really happen. In that sense, the movie National Lampoon’s Vacation was a more apt portrayal than a Leave It To Beaver kind of portrayal, at least in more modern times.

Indeed, today’s family road trips are replete with gadgets and electronics in the car. The kids are likely to be focusing on their smartphones and tablets. The car probably has WiFi, though at times only getting intermittent reception as the trip crosses some of the more barren parts of the United States. There might be TVs built into the headrests so the kids can watch movies that way. One of the more popular and cynical portrayals of today’s family road trips is that there is no actual human-to-human interaction inside the car, since everyone is tuned into their own electronic device.

Given the above description of how the family road trip seems to occur, what can we anticipate for the future?

First, it is important to point out that there are varying levels of self-driving cars. The topmost level, a level 5 self-driving car, consists of having AI that can drive the car without any human intervention. This means there is no need for a human driver. The AI should be able to do all of the driving, in the same manner that a human could drive the car. At the levels less than 5, there is and must be a human driver in the car. The self-driving car is not expected to be able to drive entirely on its own and relies upon having a human driver that is at-the-ready to take over the car controls.

See my article about the levels of AI self-driving cars: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

See my article that indicates my framework for AI self-driving cars: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the levels less than 5, the AI self-driving car is essentially going to be a lot like a conventional car in terms of what happens during the family road trip. Admittedly, the human driver will be able to have a direct “co-pilot” of sorts to co-share in the driving task via the AI, but otherwise the car design is pretty much the same as a conventional car. This is because you need to have the human driver seated at the front of the car, and the human driver has to have access to car controls to then drive the car. With that essential premise, you can’t otherwise change too much of the interior design of the car.

As an aside, there are some that have suggested maybe we don’t need the human driver to be looking out the windshield and that we can change the car design accordingly. We could put the human driver in the back seat and have them wear a Virtual Reality headset and be connected to the controls of the car via some kind of handheld devices or foot-operated nearby devices. Cameras on the hood and top of the car would beam the visual images to the VR headset. Yes, I suppose this is all possible, but I really doubt we are going to see cars go in that direction. I would say it is a likelier bet that cars less than a level 5 will be designed to look like a conventional car, and only will the level 5 self-driving cars have a new design. We’ll see.

For a level 5 self-driving car, since there is no need for a human driver, we can completely remake the interior of the car. No need to put a fixed place at the front of the car for the human driver to sit. No need for the human driver to look out the windshield. Some of the new designs suggest that one approach would be to have swivel seats for let’s say four passengers in the normal sized self-driving car. The four swivel seats can be turned to face each other, allowing a togetherness of discussion and interaction. At other times, you can rotate the seats so that you have let’s say two facing forward as though the front seats of the car, and the two behind those that are also facing forward.

Other ideas include allowing the seats to become beds. It could be that two seats can connect together and their backs be lowered, thus allowing for a bed, one that is essentially at the front of the car and another at the back of the car. Part of the reason that some are considering designing beds into an AI self-driving car is the belief that AI self-driving cars might be used 24×7, and people might sleep in their cars while on their way to work or while on their vacations.

See my article about the non-stop 24×7 nature of AI self-driving cars: https://aitrends.com/selfdrivingcars/non-stop-ai-self-driving-cars-truths-and-consequences/

Another design aspect involves lining the interior of the self-driving car with some kind of TV or LEDs that would allow for the interior to be a kind of movie theatre. This would allow for watching of movies, shows, live streaming, and even for doing online education. This also raises the question as to whether any kind of glass windows are needed at all. Some assert that we don’t need windows anymore for a Level 5 self-driving car. Instead, the cameras on the outside of the car can show what would otherwise be seen if you looked out a window. The interior screens would show what the cameras show, unless you then wanted to watch a movie and thus the interior screens would switch to displaying that instead.

Are we really destined to have people sitting in self-driving car shells that have no actual windows? It seems somewhat farfetched. You would think that people will still want to look out a real window. You would think that people would want to be able to roll down their window when they wish to do so. Now, you could of course have true windows and make the glass out of material that can become transparent at times, and then become blocked at other times, thus potentially having the best of both worlds. We’ll see.

Interior Seat Configuration to be Determined

For a family road trip, you could configure the seats so that all four are facing each other, and have family discussions or play games or otherwise directly interact. This might not seem attractive to some people, or might be something that they sparingly do when trying to have a family chat. As mentioned, the seats could swivel to allow more of a conventional sense of privacy while sitting in your seat. I’d suggest though that the days of the parents saying don’t make us come back there are probably numbered. The “there” will be the same place that the parents are sitting. Maybe too much togetherness? Or, maybe it will spark a renewal of togetherness?

Another factor to consider is that none of the human occupants needs to be a driver. In theory, a family road trip has always consisted of one or more drivers, and the rest were occupants. Now, everyone is going to be an occupant. Will parents feel less “useful” since they are no longer undertaking the driving task directly? Or, will parents find this a relief since they can use the time to interact with their children or catch-up on their reading or whatever?

This has another potentially profound impact on the family road trip, namely that no one needs to know how to drive a car. Thus, in theory, you could even have just and only the children in the self-driving car and have no parents or adults at all. I’d agree that this doesn’t feel like a “family” trip at that point, but it could be that the parents are at the hotel and the kids want to go see the nearby theme park, and so the parents tell the kids they can take the self-driving car there.

How should the interior of the self-driving car be reshaped or re-designed if you have only children inside the car for lengths of time? Would there be interior aspects that you’d want to be able to close off from use or slide away to be hidden from use? Perhaps you would not want the children to swivel the swivel seats, and would instead lock the swivel seats in place during their journey. Via a Skype-like communication capability, you would likely want to interact with the kids, they seeing you and you seeing them via cameras pointed inward into the self-driving car.

Without a human driver, the AI is expected to do all of the driving. When you go on a cross-country road trip, you often discover “hidden” places to visit that are remote and not on the normal beaten path. The question will be how good the AI is when confronted with driving in an area for which perhaps no GPS mapping exists per se. Driving on city roads that have been well mapped is one thing. Driving on dirt roads that are not mapped or for which no map is available can be a trickier aspect. Suppose too that you want to have the self-driving car purposely go off-road. The AI has to be able to do that kind of driving, assuming that there is no provision for a human driver and only the AI is able to drive the car.

An AI self-driving car at a Level 5 will normally have some form of Over-The-Air (OTA) capability. This allows the AI to get updated by the auto maker or tech firm, and also allows the AI to report what it has discovered to the auto maker or tech firm cloud for collective learning purposes. On a cross-country road trip, the odds are that there will be places that have no immediate electronic communication available. Suppose there’s an urgent patch that the OTA needs to provide to the AI self-driving car? This can be dicey when doing a family road trip to off-road locations.

See my article about OTA: https://aitrends.com/selfdrivingcars/air-ota-updating-ai-self-driving-cars/

Suppose the family car, an AI self-driving car, suffers some kind of mechanical breakdown during the trip? What then? Keep in mind that a self-driving car is still a car. This means that parts can break or wear out. This means that you’ll need to get the car to a repair shop. And, with the sophisticated sensors on an AI self-driving car, it will likely have more frequent breakdowns and will require more sophisticated repair specialists and cost more to be repaired. The road trip could be marred by not being able to find someone in a small town that can deal with your broken down AI self-driving car.

See my article about automotive recalls and AI self-driving cars: https://aitrends.com/ai-insider/auto-recalls/

The AI of the self-driving car will become crucial as your driving “pilot” and companion, as it were. Take us to the next town, might be a command that the human occupants utter. One of the children might suddenly blurt out “I need to go to the bathroom” – in the olden days the parents would say hold it until you reach the next suitable place. What will the AI say? Presumably, if it’s good at what it does, it would have looked up where the next bathroom might be, and offer to stop there. This though is trickier than it seems. We cannot assume that the entire United States will be so well mapped that every bathroom can be looked up. The AI might need to be using its sensors to identify places that might appear to have a bathroom, in the same manner that a parent would furtively look out the window for a gas station or a rest stop.

See my article about NLP and voice commands for AI self-driving cars: https://aitrends.com/selfdrivingcars/car-voice-commands-nlp-self-driving-cars/

There is also the possibility of using V2V (vehicle to vehicle communications) to augment the family road trip. With V2V, an AI self-driving car can potentially electronically communicate with another AI self-driving car. Maybe up ahead there is an AI self-driving car that has discovered that the paved road has large ruts and it is dangerous to drive there. This might be relayed to AI self-driving cars a mile back, so those AI self-driving cars can avoid the area or at least be prepared for what is coming. The AI of those self-driving cars could even warn the family (the human occupants) to be ready for a bumpy ride for the mile up ahead.
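As a rough illustration of the kind of hazard relay being described (the message fields and the broadcast stand-in below are invented for illustration; real V2V systems use standardized message sets such as the SAE J2735 Basic Safety Message):

from dataclasses import dataclass

@dataclass
class HazardAlert:
    """Hypothetical V2V hazard message, for illustration only."""
    kind: str        # e.g., "rutted_pavement"
    latitude: float
    longitude: float
    advisory: str    # note the AI can relay to the occupants

def relay_hazard(alert, broadcast):
    # The car that detects the hazard broadcasts it rearward; receiving cars
    # can re-plan their route or warn their occupants of the bumpy mile ahead.
    broadcast(alert)

relay_hazard(
    HazardAlert("rutted_pavement", 36.1699, -115.1398,
                "Large ruts ahead; expect a bumpy stretch"),
    broadcast=print,  # stand-in for an actual V2V radio transmit function
)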

There is too the possibility of V2I (vehicle to infrastructure communications). This involves having the roadway infrastructure electronically communicate with the AI self-driving car. It could be that a bridge is being repaired, but you wouldn’t know this from simply looking at a map. The bridge itself might be beaming out a signal that would forewarn cars within a few miles that the bridge is inoperable. Once again the AI self-driving car could thus re-plan the journey, and also warn the occupants about what’s going on.

One aspect that the AI can provide that might or might not have been done by a parent would be to explain the historical significance and other useful facets about where you are. Have you been on a family road trip and researched the upcoming farm that was once run by a U.S. president, or maybe there’s a museum where the first scoop of ice cream was ever dished out? A family road trip is often done to see and understand our heritage. What came before us? How did the country get formed? The AI can be a tour guide, in addition to driving the car.

See my article about AI as tour guide for a self-driving car: https://aitrends.com/selfdrivingcars/extra-scenery-perception-esp2-self-driving-cars-beyond-norm/

As perhaps is evident, the interior of the self-driving car has numerous possibilities in terms of how it might be reshaped for the advent of true Level 5 AI self-driving cars. For a family road trip, the interior can hopefully foster togetherness, while also allowing for privacy. It might accommodate sleeping while driving from place to place. The AI will be the driver, and be guided by where the human occupants want to go. In addition to driving, the AI can be a tour guide and perform various other handy tasks too. This is not all rosy though, and the potential for lack of electronic communications could hamper the ride, along with the potential for mechanical breakdowns that might be hard to get repaired.

No more veiled threats from the front seats to the back seats. I suppose some other veiled threats will culturally develop to replace those. Maybe you tell the children, behave yourselves or I won’t let you use the self-driving car to go to the theme park. Will we have AI self-driving cars possibly zipping along our byways with no adults present and only children, as they do a “family” road trip? That’s a tough one to ponder for now. In any case, enjoy the family road trips of today, using a conventional car or even a self-driving car up to the level 5. Once we have level 5 AI self-driving cars, it will be a whole new kind of family road trip experience.

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.

 

Shiggy Challenge and Dangers of an In-Motion AI Self-Driving Car

By Lance Eliot, the AI Trends Insider I’m hoping that you have not tried to do the so-called Shiggy Challenge. If you haven’t done it, I further hope that my telling you about it does not somehow spark you to go ahead and try doing it. For those of you that don’t know about it […]

By Lance Eliot, the AI Trends Insider

I’m hoping that you have not tried to do the so-called Shiggy Challenge. If you haven’t done it, I further hope that my telling you about it does not somehow spark you to go ahead and try doing it. For those of you that don’t know about it and have not a clue about what it is, be ready to be “amazed” at what is emerging as a social media generated fad. It’s a dangerous one.

Here’s the deal.

You are supposed to get out of a moving car, leaving the driver’s seat vacant, and do a dance while nearby to the continually moving forward car, and video record your dancing (you are also moving forward at the same pace as the moving car), and then jump back into the car to continue driving it.

If you ponder this for a moment, I trust that you instantly recognize the danger of this and (if I might say) the stupidity of it (or does that make me appear to be old-fashioned?).

As you might guess, already there have been people that hurt themselves while trying to jump out of the moving car, spraining an ankle, hurting a knee, banging their legs on the door, etc. Likewise, they have gotten hurt while trying to jump back into the moving car (collided with the steering wheel or the seat arm, etc.).

There are some people that while dancing outside the moving car became preoccupied and didn’t notice that their moving car was heading toward someone or something. Or, they weren’t themselves moving forward fast enough to keep pace with the moving car. And so on. There have been reported cases of the moving car blindly hitting others and also in some cases hitting a parked car or other objects near or in the roadway.

Some of the videos show the person having gotten out of their car and then the car door closing unexpectedly, and, guess what, all of the car doors turn out to now be locked. Thus, the person could not readily get back into the car to stop it from going forward and potentially hitting someone or something.

This is one of those seemingly bizarre social media fads that began somewhat innocently and then the ante got upped with each person seeking fame by adding more danger to it. As you know, people will do anything to try and get views. The bolder your video, the greater the chance it will go viral.

This challenge began in a somewhat simple way. The song “In My Feelings” by Drake was released and at about the same time an on-line personality named Shiggy posted a video of himself dancing to the tune (on his Instagram site). Other personalities and celebrities then opted to do the same dance, video recording themselves dancing to the Drake song, and they posted their versions. This spawned a mild viral sensation.

But, as with most things on social media, there became a desire to do something more outlandish. At first, this involved being a passenger in a slowly moving car, getting out, doing the Shiggy inspired dance, and then jumping back in. This is obviously not recommended, though at least there was still a human driver at the wheel. This then morphed into the driver being the one to jump out, and either having a passenger to film it, or setting up the video to do a selfie recording of themselves performing the stunt.

Some of the early versions had the cars moving at a really low speed. It seems now that some people are attempting this with cars moving at a much faster speed. It further seems that some people don’t think about the dangers of this activity and they just “go for it” and figure that it will all work out fine and dandy. It often doesn’t. Not surprising to most of us, I’d dare say.

The craze is referred to as either the Shiggy Challenge or the In My Feelings challenge (#InMyFeelings), and some more explicitly call it the moving car dance challenge. This craze has even got the feds involved. The National Transportation Safety Board (NTSB) issued a tweet that said this: “#OntheBlog we’re sharing concerns about the #InMyFeelings challenge while driving. #DistractedDriving is dangerous and can be deadly. No call, no text, no update, and certainly no dance challenge is worth a human life.”

Be forewarned that this antic can get you busted, including a distracted driving ticket, or worse still a reckless driving charge.

Now that I’ve told you about this wondrous and trending challenge, I want to emphasize that I only refer to it as an indicator of something otherwise worthy of discussion herein, namely the act of getting out of or into a moving car. I suppose it should go without saying that getting into a moving car is highly dangerous and discouraged. The equally valid corollary is that getting out of a moving car is highly dangerous and discouraged.

I’m sure someone will instantly retort that hey, Lance, there are times that it is necessary to get out of or into a moving car. Yes, I’ve seen the same spy movies as you, and I realize that when James Bond is in a moving car and being held at gun point, maybe the right spy action is to leap out of the car. Got it. Seriously, I’ll be happy to concede that there are rare situations whereby getting into a moving car or out of a moving car might be needed, let’s say the car is on fire while in motion or you are being kidnapped. By-and-large, I would hope we all agree that those are rarities.

Sadly, there are annually a number of reported incidents of people getting run over by their own car. Somewhat recently, a person left their car engine running, got out of the car to do something such as drop a piece of mail into a nearby mailbox, and the car inadvertently shifted into gear and ran them over. These oddities do happen from time to time. Again, extremely rare, but it further illustrates the dangers of getting out of even a non-moving car for which the engine is running.

Prior to the advent of seat belts, and the gradual mandatory use and acceptance of seat belts in cars, there were a surprisingly sizable number of reported incidents of people “falling” out of their cars. Now, it could be that some of them jumped out while the car was moving and so it wasn’t particularly the lack of a seat belt involved. On the other hand, there are documented cases of people sitting in a moving car, not wearing a seat belt, whose car door opened unexpectedly, and who then proceeded to accidentally hang outside of the car (often clinging to the door), or fell entirely out of the car onto the street.

This is why you should always wear your seat belt. Tip for the day.

For the daredevils among you, it might not be apparent why it is so bad to leave a moving car. If you are a passenger, you have a substantial chance of falling to the street and getting injured. Or, maybe you fall to the street and get killed by hitting the street with your head. Or, maybe you hit an object like a fire hydrant and get injured or killed. Or, maybe another car runs you over. Or, maybe the car you exited manages to drive over you. I think that paints the picture pretty well.

I’d guess that the human driver of the car might be shocked to have you suddenly leave the moving car. This could cause the human driver to make some kind of panic or erratic maneuver with the car. Thus, your “innocent” act of leaving the moving car could cause the human driver to swerve into another car, maybe injuring or killing other people. Or, maybe you roll onto the ground and seem OK, but then the human driver turns the car to try and somehow catch you and actually hits you, injuring you or killing you. There are numerous acrobatic variations to this.

Suppose that it’s the human driver that opts to leave the moving car? In that case, the car is now a torpedo ready to strike someone or something. It’s an unguided missile. Sure, the car will likely start to slow down because the human driver is no longer pushing on the accelerator pedal, but depending upon the speed when the driver ejected, the multi-ton car still has a lot of momentum and a real chance of hitting, injuring, or killing someone or something. If there are any human occupants inside the car, they too are now at the mercy of a car that is proceeding without anyone directing the driving.

Risks of Exiting a Moving Car

Let’s recap, you can exit from a moving car and these things could happen:

  •         You directly get injured (by say hitting the street)
  •         You directly get killed (by hitting the street with your head, let’s say)
  •         You indirectly get injured (another car comes along and hits you)
  •         You indirectly get killed (the other car runs you over)
  •         Your action gets someone else injured (another car crashes trying to avoid you)
  •         Your action gets someone else killed (the other car rams a car and everyone gets killed)

I’m going to carve out a bit of an exception to this aspect of leaving a moving car. If you choose to leave the moving car or do so by happenstance, let’s call that a “normal” exiting of a moving car. On the other hand, suppose the car gets into a car accident, unrelated for the moment to your exiting, and during the accident you are involuntarily thrown out of the car due to the car crash. That’s kind of different than choosing to exit the moving car per se. Of course, this happens often when people that aren’t wearing seat belts get into severe car crashes.

Anyway, let’s consider that there’s the bad news of exiting a moving car, and we also want to keep in mind that trying to get into a moving car has its own dangers too. I remember a friend of mine in college that opted to try jumping into the back passenger seat of a moving car (I believe some drinking had been taking place). His pal opened the back door, and urged him to jump in. He was lucky to have landed into the seat. He could have easily been struck by the moving car. He could have fallen to the street and gotten run over by the car. Again, injuries and potential death, for him, and for other occupants of the car, and for other nearby cars too.

I’d like to enlarge the list of moving car aspects to these:

  •         Exiting a moving car
  •         Entering a moving car
  •         Riding on a moving car
  •         Hanging onto a moving car
  •         Facing off with a moving car
  •         Chasing after a moving car
  •         Other

I’ve already covered the first two items, so let’s consider the others on the list.

There are reports from time-to-time of people that opted to ride on the hood of a car, usually for fun, and unfortunately they fell off and got hurt or killed once the car got into motion.

Hanging onto a moving car was somewhat popularized by the “Back To The Future” movie series when Marty McFly (Michael J. Fox) opts to grab onto the back of a car while he’s riding his skateboard. I’m not blaming the movie for this and realize it is something people already had done, but the movie did momentarily increase the popularity of trying this dangerous act.

Facing off with a moving car has sometimes been done by people that perhaps watch too many bull fights. They seem to think that they can hold a red cape and challenge the bull (the car). In my experience, the car is likely to win over the human standing in the street and facing off with the car. It’s a weight thing.

Chasing after a moving car happens somewhat commonly in places like New York City. You see a cab, it fails to stop, you are in a hurry, so you run after the cab, yelling at the top of your lungs. With the advent of Uber and other ridesharing services, this doesn’t happen as much as it used to. Instead, we let our mobile apps do our cab or rideshare hailing for us.

What does all of this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars, and one aspect that many auto makers and tech firms are not yet considering deals with the aforementioned things that people do regarding moving cars.

Some of the auto makers and tech firms would say that these various actions by humans, such as exiting a moving car or trying to get into a moving car, are considered an “edge” problem. An edge problem is one that is not at the core of the overarching problem being solved. If you are in the midst of trying to get AI to drive a car, you likely consider these cases of people exiting and entering a moving car to be such a remote possibility that you don’t put much attention to it right now. You figure it’s something to ultimately deal with, but getting the car to drive is foremost in your mind right now.

I’ve had some AI developers tell me that if a human is stupid enough to exit from a moving car, they get what they deserve. Same for all of the other possibilities, such as trying to enter a moving car, chasing after a moving car, etc. The perspective is that the AI has enough to do already, and dealing with stupid human tricks (aka David Letterman!), that’s just not very high priority. Humans do stupid things, and these AI developers shrug their shoulders and say that an AI self-driving car is not going to ever be able to stop people from being stupid.

This narrow view by those AI developers is unfortunate.

I can already predict that there will be an AI self-driving car that while driving on the public roadways will have an occupant that opts to jump out of the moving self-driving car. Let’s say that indeed this is a stupid act and the person had no particularly justifiable cause to do so. If the AI self-driving car proceeds along and does not realize that the person jumped out, and the AI blindly continues to drive ahead, I’ll bet there will be backlash about this. Backlash against the particular self-driving car maker. Backlash against possibly the entire AI self-driving car industry. It could get ugly.

For my explanation of the egocentric designs of AI self-driving cars, see: https://aitrends.com/selfdrivingcars/egocentric-design-and-ai-self-driving-cars/

For lawsuits about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/first-salvo-class-action-lawsuits-defective-self-driving-cars/

For why AI self-driving cars need to be able to do defensive driving, see my article: https://aitrends.com/selfdrivingcars/art-defensive-driving-key-self-driving-car-success/

Let’s take a moment and clarify too what is meant by an AI self-driving car. There are various levels of capabilities of AI self-driving cars. The topmost level is considered Level 5. A Level 5 AI self-driving car is one in which the AI is fully able to drive the car, and there is no requirement for a human driver to be present. Indeed, often a Level 5 self-driving car has no provision for human driving, meaning that there aren’t any pedals nor a steering wheel available for a human to use. For self-driving cars less than a Level 5, it is expected that a human driver will be present and that the AI and the human driver will co-share the driving task. I’ve mentioned many times that this co-sharing arrangement allows for dangerous situations and adverse consequences.

For more about the co-sharing of the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

For human factors aspects of AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/not-fast-enough-human-factors-ai-self-driving-cars-control-transitions/

The level of an AI self-driving car is a crucial consideration in this discussion about people leaping out of a moving self-driving car or taking other such actions.

Consider first the self-driving cars less than a Level 5. If the human driver that’s supposed to be in the self-driving car is the one that jumps out, this leaves the AI alone to continue driving the car (assuming that no other human driver is an occupant and able to step into the human driving role of the co-sharing task). We likely don’t want the AI to now be alone as the driver, since for levels less than 5 it is considered a precondition that there be a human driver present. As such, the AI needs to ascertain that the human driver is no longer present, and at a minimum take some concerted effort to safely bring the self-driving car to a proper and appropriate halt.

Would we want the AI in the less-than Level 5 self-driving car to take any special steps about the exited human? This is somewhat of an open question because the AI at the less-than Level 5 is not expected to be fully sophisticated yet. It could be that we might agree that at the less-than Level 5, the most we can expect is that the AI will try to safely bring the self-driving car to a halt. It won’t try to somehow go around and pick-up the person or take other actions that we would expect a human driver to possibly undertake.

This brings us to the Level 5 self-driving car. It too should be able to detect that someone has left the moving self-driving car. In this case, it doesn’t matter whether the person that left is a driver or not, because no human driver is needed anyway. In that sense, in theory, the driving can continue. It’s now a question of what to do about the human that left the moving car.

In essence, with the Level 5 self-driving car, we have more options of what to have the AI do in this circumstance. It could just ignore that a human abruptly left the car, and continue along, acting as though nothing happened at all. Or, it could have some kind of provision of action to take in such situations, and invoke that action. Or, it could act similar to the less-than Level 5 self-driving cars and merely seek to safely and appropriately bring the self-driving car to a halt.

One would question the approach of being aware that a human left the self-driving car while in motion and yet doing nothing; this seems counterintuitive to what we would expect or hope that the AI would do. If the AI is acting like a human driver, we would certainly expect that the human driver would do something overtly about the occupant that has left the moving car. Call 911. Slow down. Turn around. Do something. Unless the occupants somehow made a pact beforehand about leaving the self-driving car while in motion, it would seem prudent and expected that a human driver would do something to come to the aid of the other person. Thus, so should the AI.

You might wonder how would the AI even realize that a human has left the car?

Consider that there are these key aspects of the driving task by the AI:

  •         Sensor data collection and interpretation
  •         Sensor fusion
  •         Virtual world model updating
  •         AI action planning
  •         Car controls commands issuance

See my article about the framework of AI self-driving cars: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/
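
To make those stages more concrete, here is a minimal sketch, in Python, of how such a processing cycle might be strung together. Every class and method name below is a hypothetical placeholder invented for illustration; this is not any auto maker’s or tech firm’s actual API.

    # Minimal illustrative sketch of the driving-task cycle described above.
    # All names here are hypothetical placeholders, not a real system's API.

    class SelfDrivingCycle:
        def __init__(self, sensors, fusion, world_model, planner, controls):
            self.sensors = sensors          # outward and inward facing sensors
            self.fusion = fusion            # reconciles the various sensor readings
            self.world_model = world_model  # virtual model of the surroundings
            self.planner = planner          # AI action planning
            self.controls = controls        # issues the car controls commands

        def run_once(self):
            readings = [s.collect() for s in self.sensors]   # sensor data collection and interpretation
            fused = self.fusion.reconcile(readings)          # sensor fusion
            self.world_model.update(fused)                   # virtual world model updating
            plan = self.planner.decide(self.world_model)     # AI action planning
            self.controls.issue(plan)                        # car controls commands issuance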

The AI self-driving car will likely have sensors pointing outward of the car, such as the use of radar, cameras, LIDAR, sonar, and the like. These provide an indication of what is occurring outside of the self-driving car in the surrounding environment.

It is likely that there will also be sensors pointing inward into the car compartment. For example, it is anticipated that there will be cameras and an audio microphone in the car compartment. The microphone allows for the human occupants to verbally interact with the AI system, similar to interacting with a Siri or Alexa. The cameras would allow those within the self-driving car to be seen, such that if the self-driving car is being used to drive your children to school, you could readily see that they are doing OK inside the AI self-driving car.
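
As a rough illustration of how that inward-facing sensing could be used to flag that someone has left or entered the car, here is a tiny sketch; the occupant count, the door-open signal, and the event labels are all assumptions made for exposition, not a description of a production system.

    # Hedged sketch: combining an inward camera's occupant count with a door
    # signal to label what just happened. All inputs and labels are assumed.

    def occupant_change_event(previous_count, current_count, door_opened, car_is_moving):
        if car_is_moving and door_opened and current_count < previous_count:
            return "exit_while_moving"
        if car_is_moving and door_opened and current_count > previous_count:
            return "enter_while_moving"
        return "no_change"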

For more about the natural language interaction with human occupants in a self-driving car, see my article: https://aitrends.com/features/socio-behavioral-computing-for-ai-self-driving-cars/

I’ll walk you through a scenario of an AI self-driving car at a Level 5 and the case of someone that opts to exit from the self-driving car while it is in motion.

Joe and Samantha have opted to use the family AI self-driving car to go to the beach. They both gather up their beach towels and sunscreen, and get into the AI self-driving car. Joe tells the AI to take them to the beach. Dutifully, the AI system repeats back that it will head to the beach and indicates an estimated arrival time. Samantha and Joe settle into their seats and opt to watch a live video stream of a volleyball tournament taking place at the beach and for which they hope to arrive there before it ends.

At this juncture, the AI system would have used the inward facing camera to detect that two people are in the self-driving car. In fact, it would recognize them since it is the family car and they have been in it many times before. The AI sets the internal environment to their normal preferences, such as the temperature, the lighting, and the rest. It proceeds to drive the car to the beach.

Once the self-driving car gets close to the beach, it turns out there’s lots of traffic as many other people opted to drive to the beach that day. Joe starts to get worried that he’s going to miss seeing the end of the volleyball game in-person. So, while the self-driving car is crawling along at about five to eight miles per hour in solid traffic, Joe suddenly decides to open the car door and leap out. He then runs over to the volleyball game to see the last few moments of the match.

Level 5 Self-Driving Car Thinks About Passenger Who Jumped Out

The AI system would have detected that the car door had opened and closed. The inward facing cameras would have detected that Joe had moved toward the door and exited through it. The outward facing cameras, the sonar, the radar, and the LIDAR would all have detected him once he got out of the self-driving car. The sensor fusion would have put together the data from those outward facing sensors and been able to ascertain that a human was near to the self-driving car, and proceeding away from the self-driving car at a relatively fast pace.

The virtual world model would have contained an indicator of a human near to the self-driving car, once Joe had gotten out of the self-driving car. And, it would also have indicators of the other nearby cars. It is plausible then that the AI would via the sensors be aware that Joe had been in the self-driving car, had gotten out of it, and was then moving away from it.

The big question then is what should the AI action planning do? If Joe’s exit does not pose a threat to the AI self-driving car, in the sense that Joe moved rapidly away from it and so is not at risk of being inadvertently struck as the self-driving car moves forward, presumably there’s not much that needs to be done. The AI doesn’t need to slow down or stop the car. But, this is unclear since it could be that Joe somehow fell out of the car, and so maybe the self-driving car should come to a halt safely.

Here’s where the interaction part comes to play. The AI could potentially ask the remaining human occupant, Samantha, about what has happened and what to do. It could have even called out to Joe, when he first opened the door to exit, and asked what he’s doing. Joe, had he been thoughtful, could have even beforehand told the AI that he was planning on jumping out of the car while it is in motion, and thus a kind of “pact” would have been established.

These aspects are not so easily decided upon. Suppose the human occupant is unable to interact with the AI, or refuses to do so? This is a contingency that the AI needs to contend with. Suppose the human is purposely doing something highly dangerous? Perhaps in this case that when Joe jumped out, there was another car coming up that the AI could detect and knew might hit Joe, what should the AI have done?
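
To illustrate the kind of contingency handling at issue, here is one possible sketch of the AI action planning logic for an occupant exiting a moving Level 5 self-driving car. The function names, the notion of a pre-registered “pact,” and the three-meter margin are all hypothetical assumptions for discussion, not a recommended or actual policy.

    # Hypothetical sketch of Level 5 action planning after an occupant exits
    # a moving car; names and thresholds are illustrative assumptions only.

    def handle_occupant_exit(world_model, interaction, prior_pact_registered):
        exited = world_model.exited_occupant()   # person detected leaving the car
        if exited is None:
            return "continue"

        # If the person is in or near the car's projected path, stop at once.
        if world_model.in_projected_path(exited, margin_meters=3.0):
            return "emergency_safe_stop"

        # If the occupant told the AI beforehand (a "pact"), honor the plan.
        if prior_pact_registered:
            return "continue"

        # Otherwise, ask any remaining occupants what happened.
        answer = interaction.ask_remaining_occupants(
            "A person just left the car while it was moving. Should I stop?")
        if answer in (None, "stop", "help"):
            # No answer, or a request to stop: pull over safely and offer aid.
            return "pull_over_and_offer_assistance"
        return "continue"

The point of the sketch is simply that the AI needs a conservative default for the cases where the remaining occupants cannot or will not respond.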

Some say that maybe the best way to deal with this aspect of leaping out of the car involves preventing the human occupants from opening the car doors on their own while inside the AI self-driving car. This might seem appealing, as an easy answer, but it fails to recognize the complexity of the real-world. Will people accept the idea that they are locked inside an AI self-driving car and cannot get out on their own? Doubtful. If you say that just have the humans tell the AI to unlock the door when they want to get out, and the AI can refuse when the car is in motion, this again will likely be met with skepticism by humans as a viable means of human control over the automation.

A similar question though does exist about self-driving cars and children.

If AI self-driving cars are going to be used to send your children to school or play, do you want those children to be able to get out of the self-driving car whenever they wish? Probably not. You would want the children to be forced to stay inside. But, there’s no adult present to help determine when unlocking the doors is appropriate. Some say that by having inward facing cameras and a Skype-like feature, the parents could be the ones that instruct the AI via live streaming to go ahead and unlock the doors when appropriate. This of course has downsides since it makes the assumption that there will be a responsible adult available for this purpose and that they’ll have a real-time connection to the self-driving car, etc.

Each of the other actions by humans, such as entering the car while in motion, chasing after a self-driving car, hanging onto a self-driving car, riding on top of a self-driving car, and so on, has its own particulars as to what the AI should and maybe should not do.

Being able to detect any of these human actions is the “easier” part since it involves finding objects and tracking those objects (when I say easy, I am not saying that the sensors will work flawlessly, nor that such detections will necessarily be reliable; I am simply saying that the programming for this is clearer than the AI action planning is).

Using machine learning or similar kinds of automation to figure out what to do is unlikely to get us out of the pickle of deciding how the AI should respond. There are generally few instances of these kinds of events, and each instance tends to have its own unique circumstances. It would be hard to have a large enough training set. There would also be the concern that the learning would overfit to the limited data and thus not be viable in the generalizable situations that are likely to arise.

Our view of this is that it is something requiring templates and programmatic solutions, rather than an artificial neural network or similar. Nonetheless, allow me to emphasize that we still see these as circumstances that once encountered should go up to the cloud of the AI system for purposes of sharing with the rest of the system and for enhancing the abilities of the on-board AI systems that otherwise have not yet encountered such instances.
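
In that spirit, a sketch of what such a template lookup might amount to appears below, including a hook for a cloud-provided update; the event names, responses, and function names are illustrative assumptions only.

    # Simple sketch of a template-based (rule-based) response table, as opposed
    # to a learned model, for these rare human-action events. All entries are
    # assumptions made up for illustration.

    RESPONSE_TEMPLATES = {
        "exit_while_moving":   "pull_over_and_offer_assistance",
        "enter_while_moving":  "emergency_safe_stop",
        "riding_on_exterior":  "emergency_safe_stop",
        "hanging_onto_car":    "emergency_safe_stop",
        "facing_off_with_car": "stop_and_wait",
        "chasing_after_car":   "continue_with_caution",
    }

    def respond_to_human_action(event_type):
        # Fall back to the most conservative behavior for anything unrecognized.
        return RESPONSE_TEMPLATES.get(event_type, "emergency_safe_stop")

    def apply_cloud_update(new_templates):
        # Cases encountered by one car could be shared fleet-wide via the cloud.
        RESPONSE_TEMPLATES.update(new_templates)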

For understanding the OTA capabilities of AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/air-ota-updating-ai-self-driving-cars/

The odds are high that human occupants will be tempted to jump out of a moving AI self-driving car more so than a human driven car, or similarly try to get into one that is moving. I say this because at first, humans will likely be timid with the AI and be hesitant to do anything untoward, but after a while the AI will become more accepted and humans will become bolder. If your friend or parent is driving the car, you are likely more socially bound to not do strange tricks, you would worry that they might get in trouble. With the AI driving the car, you have no such social binding per se. I’m sure that many maverick teenagers will delight in “tricking” the AI self-driving car into doing all sorts of Instagram-worthy untoward things.

Of course, it’s not always just maverick kinds of actions that would occur. I’ve had situations wherein I was driving in an area that was unfamiliar, and a friend walked ahead of my car, guiding the way. If you owned an AI self-driving car of Level 5, you might want it to do the same — you get out of the self-driving car and have it follow you. In theory, the self-driving car should come to a stop before you get out, and likewise be stopped when you want to get in, but is this always going to be true? Do we want to have such unmalleable rules for our AI self-driving cars?

Should your AI self-driving car enable you to undertake the Shiggy Challenge?

In theory, a Level 5 AI self-driving car could do so and even help you do so. It could do the video recording of your dancing. It could respond to your verbal commands to slow down or speed-up the car. It could make sure to avoid any upcoming cars and thus avert the possibility of ramming into someone else while you are dancing wildly to “In My Feelings.” This is relatively straightforward.

But, as a society, do we want this to be happening? Will it encourage behavior that ultimately is likely to lead to human injury and possibly death? We can add this to a long list of the ethics aspects of AI self-driving cars. Meanwhile, it’s something that cannot be neglected, else we’ll for sure have AI that’s unaware, those “stupid” humans will get themselves into trouble, and the AI might get axed because of it.

As the song says: “Gotta be real with it, yup.”

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.

 

Crossing the Rubicon and AI Self-Driving Cars

By Lance Eliot, the AI Trends Insider

Julius Caesar is famously known for his radical act in 49 BC of defying authority by marching his army across the Rubicon river. Unless you happen to be a historian, you might not be aware that the Roman Senate had explicitly ordered Caesar to disband his army, return to Rome, and not to bring his troops across the Rubicon. His doing so was an outright act of defiance.

Not only was Caesar defiant, he was risking everything by taking such a bold and unimaginable act. The government of Rome and its laws were very clear cut that any imperium (a person appointed with the right to command) that dared to cross the Rubicon would forfeit their imperium, meaning they would no longer hold the right to command troops. Furthermore, it was considered a capital offense that would cause the commander to become an outlaw. The commander would be condemned to death, and, just to give the commander some pause for thought, all of the troops that followed the commander across the Rubicon would also be condemned to death. Presumably, the troops would not be willing to risk their own lives, even if the commander was willing to risk his life.

As we now know, Caesar made the crossing. When he did so, he reportedly exclaimed “alea iacta est” which loosely translated means that the die has been cast. We use today the idiom “crossing the Rubicon” to suggest a circumstance where you’ve opted to go beyond a point of no return. There is no crossing back. You can’t undo what you’ve done. In the case of Caesar, his gamble ultimately kind of paid off, as he was never punished per se for his act of rebellion, and he went on to rule Rome, doing so until his assassination in 44 BC.

I’m sure that most of us have had situations where we felt like we were crossing the Rubicon.

One time I was out in the wilderness as a scout master and decided to take the scouts over to a mountain area that could be readily hiked to. While doing the hike, I began to realize that we were going across a dry streambed. Sure enough, when we reached the base of the mountain, rain began to fall, and the streambed began to fill with water. Getting back across it would not have been easy. The more the rain fell, the faster the stream became. Eventually, the stream was so active that we were now stuck on the other side of it. We had crossed our own Rubicon.

At work, you’ve probably had projects that involved making some difficult go or no-go decisions. At one company, I had a team of developers and we were going to create a new system to keep track of VHS video tapes, but we also knew that DVD was emerging. Should we make the system for VHS or for DVD? We only had enough resources to do one. After considering the matter, we opted to hope that DVD was going to catch on and so we proceeded to focus on DVDs. We got lucky and it turned out to be one of the first such systems and even earned an award for its innovation. Crossed the Rubicon and luckily landed on the right side.

Of course, crossing the Rubicon can lead to bad results. Caesar was fortunate that he was not right away killed for his insubordination. Maybe his own troops might have even tried to kill him, since there were bound to be some that didn’t want to get caught up in the whole you-are-condemned-to-death thing. Consider the recent news story about the teenage soccer team in Thailand that went into the caves and became lost when the rain closed off their exit; they all easily could have died in those caves, were it not for the tremendous and lucky rescue effort that ultimately saved them.

What does this all have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI for self-driving cars. As we do so, there are often very serious and crucial “crossing the Rubicon” kinds of decisions to be made. These same decisions are being made right now by auto makers and tech firms also developing AI self-driving cars.

Let’s take a look at some of those kinds of difficult and nearly undoable decisions that need to be made.

  •         LIDAR

LIDAR is a type of sensor that can be used for an AI self-driving car. It works somewhat like radar, but with light: beams of light are sent out from the sensor, the light bounces back, and the sensor is able to gauge the shapes and distances of nearby objects by the length of time involved in the returns of the light waves. This can be a handy means to have the AI determine that there is a pedestrian standing ahead of the self-driving car at a distance of, say, 15 feet. Or that there is a fire hydrant over to the right of the self-driving car at a distance of 20 feet. And so on.
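
For a sense of the timing involved, here is the back-of-the-envelope round-trip arithmetic behind a LIDAR return; real LIDAR units do far more processing than this, so treat it only as an illustration of the principle.

    # Time-of-flight illustration: distance from the round-trip time of a pulse.

    SPEED_OF_LIGHT_M_PER_S = 299_792_458

    def distance_from_return_time(round_trip_seconds):
        # The pulse travels out and back, so divide the round trip by two.
        return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2

    # A pedestrian roughly 15 feet (about 4.6 meters) away returns the pulse
    # in roughly 30 nanoseconds round trip:
    print(distance_from_return_time(30e-9))   # approximately 4.5 meters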

For my assessment of LIDAR for AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/lidar-secret-sauce-self-driving-cars/

AI self-driving cars tend to use conventional radar to try and identify the surroundings, sonic sensors to do likewise, and cameras to capture visual images that are then analyzed via vision-related processing. They can also use LIDAR. There is no stated requirement that an AI self-driving car has to use any of those kinds of sensors. It is up to whatever the designers of the self-driving car decide to do.

That being said, it is hard to imagine that a self-driving car could properly operate in the real-world if you didn’t have cameras on it and weren’t doing vision processing of the images. You could maybe decide you’ll only use cameras, but that’s a potential drawback since there are going to be situations where vision alone won’t provide a sufficient ability to sense the real-world around the self-driving car. Thus, you’d likely want to add at least radar. Now, with the cameras and radar, you have a fighting chance of being able to have a self-driving car that can operate in the real-world. Adding sonar would help further.

What about LIDAR? Well, if you only had LIDAR, you’d probably not have much of an operational self-driving car, so you’d likely want to add cameras too. Now, with LIDAR and cameras, you have a fighting chance. If you also add radar, you’ve further increased the abilities. Add sonic sensors and you’ve got even more going for you.

Indeed, you might say to yourself, hey, I want my self-driving car to have as many kinds of sensors as will increase the capabilities of the self-driving car to the maximum possible. Therefore, if you already had cameras, radar, and sonar, you’d likely be inclined to add LIDAR. That being said, you also need to be aware that nothing in life is free. If you add LIDAR, you are adding the costs associated with the LIDAR sensor. You are also increasing the amount and complexity of the AI programming required to collect and analyze the LIDAR data.

There are these major stages of processing for self-driving cars:

  •         Sensor data collection and interpretation
  •         Sensor fusion
  •         Virtual model updating
  •         AI action plan updating
  •         Car controls commands issuance

See my framework about AI self-driving cars: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

If you add LIDAR to the set of sensors for your self-driving car, you also presumably need to add the software needed to do the sensor data collection and interpretation of the LIDAR. You also presumably need to boost the sensor fusion to be able to handle trying to figure out how to reconcile the LIDAR results, the radar results, the camera vision processing results, and the sonar results. Some would say that makes sense because it’s like reconciling your sense of smell, sense of sight, sense of touch, sense of hearing, and that if you lacked one of those senses you’d have a lesser ability to sense the world. You would likely argue that the overhead of doing the sensor fusion is worth what you’d gain.
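
As a toy example of what that reconciling might amount to, consider a confidence-weighted blend of overlapping distance estimates. The weights and readings below are assumptions chosen purely for exposition; an actual sensor fusion stack is far more elaborate.

    # Toy sensor fusion: blend overlapping distance estimates by confidence.
    # The sensor weights and readings are illustrative assumptions.

    def fuse_distance_estimates(estimates):
        # estimates: list of (sensor_name, distance_meters, confidence in 0..1)
        total_weight = sum(conf for _, _, conf in estimates)
        if total_weight == 0:
            return None
        return sum(dist * conf for _, dist, conf in estimates) / total_weight

    readings = [
        ("camera", 15.2, 0.6),
        ("radar",  14.8, 0.8),
        ("lidar",  15.0, 0.9),
        ("sonar",  16.0, 0.3),
    ]
    print(fuse_distance_estimates(readings))   # one reconciled distance estimate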

Nearly all of the auto makers and tech firms would agree that LIDAR is essential to achieving a true AI self-driving car. A true AI self-driving car is considered by industry standards to be a self-driving car of a Level 5. There are levels less than 5 that are self-driving cars requiring a human driver. These involve co-sharing of the driving task with a human driver. For a Level 5 self-driving car, the idea is that the self-driving car is driven only by the AI, and there is no need for a human driver. The Level 5 self-driving car is even likely to omit entirely any driving controls for humans, and the Level 5 is expected to be able to drive the car as a human would (in terms of being able to handle any driving task to the same degree a human could do so).

For my article about the levels of AI self-driving cars, see: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

Tesla Foregoes LIDAR

It might then seem obvious that of course all self-driving cars would use LIDAR. Not so for Tesla. Tesla and Elon Musk have opted to go without LIDAR. One of Elon Musk’s most famous quotes for those in the self-driving car field is this one:

“In my view, it’s a crutch that will drive companies to a local maximum that they will find very hard to get out of. Perhaps I am wrong, and I will look like a fool. But I am quite certain that I am not.”

https://www.theverge.com/2018/2/7/16988628/elon-musk-lidar-self-driving-car-tesla

This is the crossing of the Rubicon for Tesla.

Right now and for the foreseeable future, they are not making use of LIDAR. It could be that they’ve made a good bet and everyone else will later on realize they’ve needlessly deployed LIDAR. Or, maybe there’s more than one way to skin a cat, and it will turn out that Tesla was right about being able to forego LIDAR, while the other firms were right to not forego it. Perhaps both such approaches will achieve the same ends of getting us to a Level 5 self-driving car.

For Tesla, if they are betting wrong, it would imply that they will be unable to achieve a Level 5 self-driving car. And if that’s the case, and the only way to get there is to add LIDAR, they would then need to add it to their self-driving cars. Retrofitting would likely be a costly endeavor and might or might not be viable. They might instead opt to redesign future models and write off the prior ones as unalterable, but at that point they will be behind other auto makers and will need to figure out, after the fact, how to integrate LIDAR into everything else. Either way, it’s going to be costly and could cause significant delays and a falling behind of the rest of the marketplace.

It would also cause Tesla to have to eat crow, as it were, since they’ve all along advertised that your Tesla has “Full Self-Driving Hardware on All Cars” – which might even get them caught in lawsuits by Tesla owners that argue they were ripped-off and did not actually get all the hardware truly needed for a self-driving car. This could lead to class action lawsuits. It could drain the company of money and focus. It would likely cause the stock to drop like a rock.

For my article about product liability for AI self-driving cars, see: https://aitrends.com/selfdrivingcars/product-liability-self-driving-cars-looming-cloud-ahead/

For my article about class action lawsuits against AI self-driving car makers see: https://aitrends.com/selfdrivingcars/first-salvo-class-action-lawsuits-defective-self-driving-cars/

This does not mean that Tesla couldn’t re-cross the Rubicon and opt to add LIDAR, but it just shows that when you’ve made the decision to cross the Rubicon, going back is often somewhat infeasible or going to be darned hard to do.

Perhaps Elon Musk had uttered “alea iacta est” when he made this rather monumental decision.

  •         Straight to Level 5

Another potential crossing of the Rubicon involves deciding whether to get to Level 5 by going straight to it, or instead to get there by progressing via Level 3 and Level 4 first.

Some believe that you need to crawl before you walk, and walk before you run, in order to progress in this world. For self-driving cars, this translates into achieving Level 3 self-driving cars first. Then, after maturing with Level 3, move into Level 4. After maturing with Level 4, move into Level 5. This is the proverbial “baby steps” at a time kind of approach.

Others assert that there’s no need to do this progressively. You can skip past the intermediary levels. Just aim directly to get to Level 5. Some would say it is a waste of time to do the intermediary levels. Others would claim you’ll not get to Level 5 if you don’t cut your teeth first on the lower levels. No one knows for sure.

Meanwhile, Waymo has pretty much made a bet that you can get straight to Level 5 and there’s no need to do the intermediaries. They rather blatantly eschew the intermediary steps approach. They have taken the bold route of get to the moon or bust. No need to land elsewhere beforehand. Will they be right? Suppose their approach falls flat, and it turns out that those who first got to Level 4 are able to make the leap to Level 5, while the efforts aimed directly at Level 5 never get finalized.

For more about the notion that Level 5 is like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

Does this mean that Waymo cannot re-cross the Rubicon and opt to first settle for a Level 4? As with all of these crossings, they could certainly back down, though it would likely involve added effort, costs, and so on.

  •         Machine Learning Models

When developing the AI for self-driving cars, by-and-large it involves making use of various machine learning models. Tough choices are made about which kinds of neural networks to craft and what forms of learning algorithms to employ. Decisions, decisions, decisions.

Trying to later on change these decisions can be difficult and costly. It’s another crossing of the Rubicon.

For my article about machine learning and AI self-driving cars, see: https://aitrends.com/ai-insider/machine-learning-benchmarks-and-ai-self-driving-cars/

  •         Virtual World Model

At the crux of most AI self-driving car systems there is a virtual world model. It is used to bring together all of the information and interpretations about the world surrounding the self-driving car. It embodies the latest status gleaned from the sensors and the sensor fusion. It is used for the creation of AI action plans. It is crucial for doing what-if scenarios in real-time for the AI to try and anticipate what might happen next.

In that sense, it’s like having to decide whether to use a Rubik’s cube or use a Rubik’s snake or a Rubik’s domino. Each has its own merits. Whichever one you pick, everything else gets shaped around it. Thus, if you put at the core a virtual world model structure that is of shape Q, you are going to base the rest of the AI on that structure. It’s no easy thing to then undo and suddenly shift to shape Z. It would be costly and involve gutting much of the AI system you’d already built.
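
To make the “shape Q” point more tangible, here is a bare-bones sketch of one possible virtual world model structure; the field names are invented for illustration. The takeaway is that the sensor fusion, the AI action planning, and the real-time what-if analysis all end up coded against whatever structure you commit to here.

    # Bare-bones sketch of a virtual world model; field names are assumptions.

    from dataclasses import dataclass, field

    @dataclass
    class TrackedObject:
        object_id: int
        kind: str             # e.g., "pedestrian", "car", "fire hydrant"
        position_m: tuple     # (x, y) relative to the self-driving car
        velocity_mps: tuple   # estimated velocity vector
        confidence: float     # how sure the sensor fusion is about this object

    @dataclass
    class VirtualWorldModel:
        timestamp: float = 0.0
        ego_speed_mps: float = 0.0
        tracked_objects: dict = field(default_factory=dict)   # id -> TrackedObject

        def update(self, fused_detections, timestamp):
            # Refresh the model with the latest output of the sensor fusion.
            self.timestamp = timestamp
            for detection in fused_detections:
                self.tracked_objects[detection.object_id] = detection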

It’s once again a crossing of the Rubicon.

  •         Particular Brand/Model of Car

Another tough choice in some cases is which brand/model of car to use as the core car underlying your AI self-driving car. For the auto makers, they are of course going to choose their own brand/model. For the tech firms that are trying to make the AI of the self-driving car, the question arises as to whom you get into bed with. The AI you craft will be, to a certain extent, specific to that particular car.

I know that some of you will object and say that the AI, if properly written, should be readily ported over to some other self-driving car. This is much harder than it seems. I assure you it’s not just a matter of re-compiling your code and voila, it works on a different kind of car.

Furthermore, many of these tech firms are painting themselves into a corner. They are writing their AI code with magic numbers and other facets that will make porting the AI system nearly impossible. Without good commenting and thinking ahead about generalizing your system, it’s going to be stuck on whatever brand/model you started with. The rush right now to get the stuff to work is more important than making it portable. There are many that will be shocked down the road when they suddenly realize they cannot overnight shift onto some other model of car.
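
To show the difference in concrete terms, here is a small contrast between the magic-number style and a per-vehicle profile; the parameter names and values are made up solely to illustrate the portability point.

    # Illustration of "magic numbers" versus externalized per-vehicle parameters.
    # All names and values are hypothetical.

    # Non-portable style: constants tuned for one brand/model baked into the code.
    def brake_distance_hardcoded(speed_mps):
        return speed_mps * 1.2 + 4.5   # why 1.2? why 4.5? tied to one vehicle

    # More portable style: the vehicle-specific numbers live in a profile that
    # can be swapped out when moving to another brand/model.
    VEHICLE_PROFILE = {
        "reaction_allowance_s": 1.2,   # assumed latency budget for this platform
        "brake_margin_m": 4.5,         # assumed stopping cushion for this platform
    }

    def brake_distance(speed_mps, profile=VEHICLE_PROFILE):
        return speed_mps * profile["reaction_allowance_s"] + profile["brake_margin_m"]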

See my article about kits for AI self-driving cars: https://aitrends.com/selfdrivingcars/kits-and-ai-self-driving-cars/

See my article about idealism and AI self-driving cars: https://aitrends.com/selfdrivingcars/idealism-and-ai-self-driving-cars/

  •         Premature Roadway Release

This last example of crossing the Rubicon has to do with putting AI self-driving cars onto public roadways, perhaps doing so prematurely.

The auto makers and tech firms are eager to put their self-driving cars onto public roadways. It is a sign to the world that there is progress being made. It helps boost stock prices. It helps for the AI itself to gain “experience” from being driven miles upon miles. It helps the AI developers as they tune and fix the AI systems and do so based on real-world encounters by the self-driving car.

That’s all well and good, except for the fact that it is a grand experiment upon the public. If the self-driving cars have problems and get into accidents, it’s not going to be good times for self-driving cars. Indeed, it’s the bad apple in the barrel in that even if only one specific brand of self-driving car gets into trouble, the public will perceive the entire barrel as bad.

If the public becomes disenchanted with AI self-driving cars, you can bet that regulators will change their tune and no longer be so supportive of self-driving cars. A backlash will most certainly occur. This could slow down AI self-driving car progress. It could somewhat curtail it, but it seems unlikely to stop it entirely. Right now, we’re playing a game of dice and just hoping that few enough of the AI self-driving cars on the roadways have incidents that it won’t become a nightmare for the whole industry.

For more about this rolling of the dice, see my article about responsibility and AI self-driving cars: https://aitrends.com/ai-insider/responsibility-and-ai-self-driving-cars/

This then is another example of crossing the Rubicon.

If putting AI self-driving cars onto the public roadways turns out to have been premature, it might become difficult to continue forward with self-driving cars, at least at the pace underway today.

For the AI self-driving car field, there are a plethora of crossings of the Rubicon. Some decision makers are crossing the Rubicon and doing so like Caesar, fully aware of the chances they are taking, and betting that in the end they’ve made the right choice. There are some decision makers that are blissfully unaware that they have crossed the Rubicon, and only once something untoward happens will they realize that oops, they made decisions earlier that now haunt them. Each of these decisions is not necessarily immutable per se; it’s more that there is a cost and adverse impact if you’ve made the wrong choice and need to backtrack or redo what you’ve done.

I’d ask that all of you involved in AI self-driving cars make sure to be cognizant of the Rubicons you’ve already crossed, and which ones are still up ahead. I’m hoping that by my raising your awareness, in the end you’ll be able to recite the immortal words of Caesar: Veni, vidi, vici (which translates loosely into I came, and I saw, and I conquered).

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.

 

Developer Burnout and AI Self-Driving Cars

By Lance Eliot, the AI Trends Insider

Did you realize that apparently more than half of all United States medical doctors are suffering from burnout?

You might at first glance not be overly surprised, since you’ve likely seen how harried most medical doctors are. Often, their patient load is at the max and they barely have time to say hello before they move onto the next patient. Many medical doctors bitterly complain that the nature of the healthcare system prevents them from spending quality time with their patients as they are under strict time guidelines and have little choice in the matter of how they use their time with patients.

You might also be thinking that it doesn’t matter that medical doctors might be suffering burnout. The perception is that they are well-paid anyway, and so if they have to work long and hard hours, so be it. Some might think of them as whiners that don’t realize how good they really have it. This though would be a misunderstanding about the impacts of burnout.

Nearly one in ten medical doctors reported having committed one or more significant medical mistakes or errors in the three-month period prior to the poll being taken. It is generally well proven already that burnout leads to medical doctors making mistakes or errors, and we now know the alarming frequency with which it can occur.

There can be errors in ascertaining the ailment that a patient has, or maybe a mistake in a prescription issued for a patient, and so on. The burnout therefore can directly and adversely impact the nature and quality of the healthcare provided to patients. In addition, medical doctors can become depressed, have high fatigue, and otherwise be less effective and efficient in performing their medical tasks.

Presumably, burnout is a reversible work-related matter.

If you can detect early enough that someone is suffering from burnout at work, you can potentially provide them guidance on how to alleviate the burnout. Some try stress management techniques to reduce their burnout. Some use the latest in so-called mindfulness training. Some try to seek a balance between the demands of work and their other life pursuits, carving out more time and attention to efforts outside work that enable them to better contend with the work situation.

It is usually unlikely that changes by the burned-out individual alone will be sufficient to curtail the burnout. The work situation often needs to also adjust. An organization has to realize what factors are leading to the burnout, and potentially readjust work schedules, or adjust the nature of the work being performed, and so on. Someone that is otherwise well prepared to contend with burnout is still going to have a tough time not getting burned out if the work environment that presumably is causing the burnout does not make adjustments too. It takes two to tango, as they say.

When I mention that burnout is potentially reversible, I’d like to clarify that for some people in some companies it is not reversible. When someone reaches a certain threshold, they can be so far gone that they cannot find within themselves the desire nor the need to re-commit themselves to work, even if the company offers to try and find a means to do so. I’ve seen some workers that got burned out at a firm and they left in disgust and with no intention of ever returning. That being said, I’ve seen some firms that claimed they wanted to save someone that was burned out, but the firm did nothing more than token attempts to keep the person, which for that person made them even more determined to leave the firm.

When considering medical doctors that get burned out, you need to consider not just the impact on them and their patients, but also take a wider view and consider the larger ramifications. The odds are that if the patient gets less capable medical care due to the burnout, it might also indirectly impact their family and friends. Those family and friends might need to provide other outside care or additional care to make up for whatever medical errors or omissions occur. The odds are that fellow staff at the medical facility will also suffer, having to either deal day-to-day with a medical doctor that might be difficult to deal with, or needing to deal with patients that become irate when they realize they are not getting their desired care. Overall, you could make the claim that medical doctor burnout raises costs for the medical delivery system and all of society accordingly, and also reduces the available medical care for others by needlessly consuming limited medical resources.

There are some workers that drive their own burnout. You’ve likely dealt with a workaholic that seems to work all of the time. They say that it makes them stronger and they enjoy it. This can sometimes be true, but more often than not it is the path towards burnout. A workaholic can work themselves to the bone. Some junior managers think that having a workaholic under them is great, since the person gets so much work done. But, in the end, the person might be a prime candidate for burnout and thus the junior manager has likely done a disservice by not having done something about the matter earlier.

Besides the workaholic, there are other types of workers that can be especially susceptible to burnout. There’s the lone ranger that tries to take on all the work themselves and doesn’t appropriately make use of their fellow team mates. There’s the perfectionist that wants to do everything to the nth degree and often goes overboard in terms of their work. There’s the superhero type that relishes coming to the rescue on efforts and will become overwhelmed with work. There’s the martyr that likes to do tons of work to be able to let others know that they are doing so. Etc.

Besides medical doctors, there are other professions that involve substantial amounts of burnout.

One such occupation are the AI developers that are working on AI self-driving cars.

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars and besides our own AI developers we also keep in touch with other self-driving car AI developers.

Generally, burnout is pervasive among such AI developers.

Why?

Why Burnout is Pervasive Among Self-Driving Car Developers

You might at first think that it would be an exciting and enjoyable job to be an AI developer for AI self-driving cars. It’s like trying to achieve a moonshot and having been there in the early days of developing the Apollo spacecraft that got us to the moon. There’s a thrill about doing something that could change society. It has the potential for great benefits to all of us. That kind of a job should be joyous!

For my article about why AI self-driving cars are like a moonshot, see: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

For my article about idealism in AI self-driving car developers: https://aitrends.com/selfdrivingcars/idealism-and-ai-self-driving-cars/

Though it is true that you could perceive the job as striving to achieve new ends and changing our lives, getting there is not all fun and games.

Let’s consider why there is a high chance for burnout for AI developers that are developing AI self-driving cars:

  •         New Ground

AI self-driving cars are pushing the boundaries of what we can do with AI. This is not run-of-the-mill stuff. We are using the latest AI techniques, the latest Machine Learning (ML) capabilities, etc. Many everyday developers often just reuse what they have done before. In this case, it’s new ground with every step you take.

  •         Life-or-Death System

It’s one thing if you are pushing the boundaries of AI for let’s say a financial system, but in the case of a self-driving car, it’s a life-or-death matter. If your software hiccups at the wrong time, it could mean that the car will hurtle itself into a wall and kill the human occupants. Or, it could swerve unintentionally and kill pedestrians. And so on. This is serious stuff.

  •         Real-time System

Whenever you are developing software for a real-time system, it tends to increase the difficulty factor. One of the first real-time systems I was involved in, years ago, involved a real-time controller for a roller coaster. I can tell you that we sweated quite a bit about how to get the timing just right and make sure the software was always able to handle whatever happened in real-time.

  •         Intense Pressure

The pressure by the auto makers and tech firms to get their self-driving cars on the roadways is intense. Every day you see new announcements that one self-driving car maker is going to get to market sooner than another. This kind of gamesmanship often takes place without regard to what the actual AI developers can do – it’s about what the leaders are telling the marketplace. Deadlines aplenty. Irrational deadlines aplenty.

  •         Lack of Specs

Many of the auto makers and tech firms are developing their self-driving cars on-the-fly in an agile manner and doing so without a definitive set of specs. To some degree, it’s make it up as you go. Some of the ideas being folded into these projects are dreams rather than something that can actually be achieved. AI developers are often told, rather than asked, what can be done.

  •         Spotty Peer Expertise

There aren’t many that have industrial-style expertise in developing software for cars, let alone for AI self-driving cars. Thus, it is somewhat unlikely that an AI developer can depend upon a fellow AI developer on their team to lend a hand. The odds are that they are all mainly in-the-dark and trying to figure out things as they go along.

  •         Highly Secretive

The self-driving car efforts by each auto maker and tech firm are typically being done in a skunk works operation that’s considered for-your-eyes-only. This secretive manner makes sense because everyone is trying to do their own thing and they don’t want others to steal it. But, this also makes things harder for the AI developers since it narrows whom else they can turn to for assistance. In many cases, they aren’t supposed to talk about their work with family and friends – it’s like being in the CIA.

  •         Shift From R&D

Many of the AI developers in the self-driving car field were most recently working at an AI research lab at a university. That’s a whole different kind of work environment than industry. For example, at a university, there is often the view that failing on something is OK since you are doing experimentation and not everything will work out. No matter what you hear about Silicon Valley saying to fail first and fail fast, I assure you that with the pressures to get self-driving cars going, the “let’s try failing” model is verboten.

  •         Long Hours

With the vast amount of work to be done for the AI of a self-driving car, there are long hours involved. It can be frustrating too because as mentioned it is punctuated with trying new things and hoping they will work. And, you can’t readily explain to family and friends why you are working late and on the weekends, other than they know vaguely you are working on something important and hush-hush.

  •         Other

There are myriad other factors involved too. For example, even the tools used to develop the AI systems are at times brittle and still untried. It would be like trying to build a house with hammers and screwdrivers that no one knows for sure will work properly.

Now, I realize that many of these AI developers are getting paid big bucks. As such, similar to the perception about medical doctors, you might have little sympathy about these AI developers possibly getting burned out. You might say they should relish their moment in the sun. Now’s the time to make enough bucks to then retire.

Well, maybe, but let’s also consider the impacts of burnout, similar to the concerns when medical doctors experience burnout.

In the case of AI self-driving cars, burnout can lead to the AI developers making errors or mistakes, more so than they might have otherwise. Perhaps mistakes are made in the machine learning and so the AI system is unable to properly interpret a road sign. Or perhaps there’s a bug in the code such that when the self-driving car reaches a particular speed, the code burps and gets stuck in a loop that it can’t get out of.
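To make that kind of defect concrete, here is a contrived Python snippet, entirely my own invention rather than anything drawn from an actual self-driving car codebase, showing how an exact-equality exit condition can leave a speed-adjustment loop spinning forever once a particular speed is crossed:

# Contrived illustration of the kind of bug described above: the loop's exit
# condition quietly fails once the speed crosses a threshold. Not real code.

def adjust_speed_toward(target_kph, current_kph):
    while current_kph != target_kph:   # bug: exact equality may never hold
        current_kph += 0.3             # once the 0.3 steps straddle the target,
    return current_kph                 # the loop spins forever

# adjust_speed_toward(65.0, 60.0) would hang; a correct version would stop when
# abs(current_kph - target_kph) falls within a small tolerance such as 0.5.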

Here are the major actions that an AI self-driving car undertakes (a minimal sketch of how they chain together appears right after the list):

  •         Sensor data collection and interpretation
  •         Sensor fusion
  •         Virtual world model updating
  •         AI action plan updating
  •         Car controls commands issuance
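To make the interplay between these stages a bit more tangible, here is a minimal, purely illustrative Python sketch of the driving cycle; every function name and data shape is an assumption of mine, not any auto maker’s actual code, and the point is simply that an error introduced in an early stage flows into every later one:

# Illustrative only: a bare-bones driving cycle showing how the five stages
# feed one another. All names and structures are assumptions for this sketch.

def collect_and_interpret_sensors(raw_feeds):
    # Stage 1: turn raw camera/radar/LIDAR feeds into detected objects.
    return [{"type": "pedestrian", "distance_m": 12.0}] if raw_feeds else []

def fuse_sensors(detections):
    # Stage 2: reconcile overlapping detections into a single view.
    return {"objects": detections}

def update_virtual_world(world_model, fused):
    # Stage 3: fold the fused view into the persistent world model.
    world_model["objects"] = fused["objects"]
    return world_model

def update_action_plan(world_model):
    # Stage 4: decide what the car should do next given the world model.
    return "brake" if world_model["objects"] else "maintain_speed"

def issue_car_controls(plan):
    # Stage 5: translate the plan into actuator commands.
    return {"brake": 0.8} if plan == "brake" else {"throttle": 0.2}

def driving_cycle(raw_feeds, world_model):
    # A mistake introduced in an early stage propagates into every later one,
    # which is why a burned-out developer's error can cascade.
    fused = fuse_sensors(collect_and_interpret_sensors(raw_feeds))
    world_model = update_virtual_world(world_model, fused)
    return issue_car_controls(update_action_plan(world_model))

print(driving_cycle(raw_feeds=["camera_frame"], world_model={}))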

See my framework about AI self-driving cars:

https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

Impacts of Burned-Out AI Self-Driving Car Developers

A burned-out AI developer can be “lazy” when it comes to testing and decide that they’ve done too much testing already. Or, they might have an attitude of “why test it” since they don’t believe the whole thing will work anyway. Or, they might fix something that they find broken, and in the effort of rendering the fix they inadvertently and unknowingly introduce another problem into the code.

Of course, any of these aspects can happen to any developer. And they do. But, as mentioned earlier, it is magnified in the case of AI developers for self-driving cars due to the pressures involved, and the pushing of new boundaries, and so on. Plus, this is a real-time system that involves life-and-death aspects. Thus, this happening for AI developers of self-driving cars has especially important and significant ramifications.

Imagine the problems of AI code that is half-hearted and does the sensor data collection and interpretation. Or that does the sensor fusion. Or that does the virtual world modeling. Or that does the AI action plan updating. Or that does the car controls commands issuance. What also can hamper things is that an error in one of those crucial components can compound itself by then misleading the other components. It can have an adverse cascading impact. This includes the potential for the Freezing Robot Problem.

See my article about the dangers of the Freezing Robot Problem in AI self-driving cars: https://aitrends.com/ai-insider/freezing-robot-problem-and-ai-self-driving-cars/

What can be done about the burnout of AI developers that are creating the next generation of self-driving cars?

First, it’s vital to acknowledge that burnout can and does exist. There are some firms that are blind to burnout and don’t know it happens. They often just say that Joe or Samantha needs to take a day off, and when they get back they’ll be fine again. This kind of Band-Aid approach fails to recognize the depth and seriousness of true burnout, and the lengthy and complex process typically needed to undo it.

Next, watch out for the burnout culture that some firms seem to foster. I say this because there are many Silicon Valley firms that actually tout their burnout rates. They like to chew up people. They make it into a macho kind of atmosphere and try to project an image that only the strong survive. In their viewpoint, if you aren’t already on the verge of burnout, get there or get out. I’ve been waiting to see what happens when the employees so treated decide to finally lawyer-up and go after those firms for the cruddy work environment. We’ll see.

In some firms, they buy a ping pong table or a foosball table and that’s their way of telling the employees not to burn out. Somehow, you are supposed to take time off from your non-stop high-pressure AI development work, and by playing ping pong a few times a day you’ll not get burned out. Doubtful.

Firms that are serious about detecting, mitigating, and preventing burnout will go out of their way to try and arrange the nature of the work and the work situations to deal with burnout. They need to hire the right people, put in place the right managers, provide the right kind of leadership, and otherwise aim to gauge how much work can be reasonably done and by whom. There are many key decisions being made about self-driving car designs, and the coding, which will either aid the fulfillment of self-driving cars, or will have the opposite impact and completely undermine the advent of self-driving cars.

See my article forewarning the groupthink occurring about AI self-driving car development: https://aitrends.com/selfdrivingcars/groupthink-dilemmas-for-developing-ai-self-driving-cars/

Some say that burnout in the workplace leads to the erosion of the soul. I know one promising AI developer that nearly collapsed at the pace that he was going, and decided that it was just too unhealthy to continue on this kind of work. He’s since switched to another occupation entirely. That’s a shame as we already have too few well-qualified AI developers to start with. And we need a lot more of them to ramp up for achieving a true Level 5 self-driving car.

I’ve seen some AI developers that have emotional exhaustion from work burnout. One went home and took it out on his wife and kids. Another became so cynical that he was pretty much approving any kind of code going into production. He had lost any sense of caring about his work. Now, you might say that there should be double-checks to catch these kinds of things in terms of faulty designs and faulty code, but with the go-go atmosphere and high pressure to produce, there are developers that look the other way and figure that it’s up to the other person to make sure their stuff works properly.

I had mentioned earlier that if a medical doctor makes mistakes due to burnout, the patient suffers and also so do a lot of other stakeholders. The same can be said of the AI developers for self-driving cars. They can each in their own way lead to self-driving cars that just aren’t ready for prime time. Unfortunately, those at the upper levels of an auto maker or tech firm might not know or care to know, and just insist that the self-driving car be put onto our roadways. If those self-driving cars harm humans, it’s bad and it will also produce a backlash against self-driving cars overall.

And if that happens, if we have haywire AI that stops or stunts the advent of self-driving cars, it undermines or delays the potential benefits to society that we’re all hoping self-driving cars will deliver. Thus, in that manner, even just one burnout can be like the butterfly that flaps its wings on one side of the earth and whose effects are ultimately felt on the other side of the globe.

I implore the auto makers and tech firms to carefully assess how they are treating their AI developers, and if it’s “burnout city” then they would be wise to step back, take another look, and see what can be done to overcome it. All of us need to watch out for that last straw on the camel’s back that will break the spirit of our most prized workers, those AI developers tasked with creating the future of society via the advent of self-driving cars.

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.

 

API’s and AI Self-Driving Cars


By Lance Eliot, the AI Trends Insider

API’s have become the darling of the high-tech software world. There are conferences devoted to the topic of API’s. Non-tech business-oriented magazines and journals gush about the importance of API’s. Anyone that makes a software package nowadays is nearly forced into providing API’s.

It’s the rise of the Application Programming Interface (API).

Rather than being something magical, it’s actually just the providing of a portal into a software system that otherwise might be difficult to communicate with. One major advantage of a portal is that it can allow various extensions that add on to the software system and go beyond what the original software system itself can accomplish. You can also interface to the original software system and allow it to become interconnected with other software. And, you can avoid potentially having to reinvent the wheel, so to speak, by leveraging whatever capabilities the original software system has. Some would say it also allows the software to be at a higher level of abstraction.

I’ve written extensively about API’s for AI systems, which you can read about in Chapter 3 and Chapter 4 of my book “AI Guardian Angel Bots for Deep AI Trustworthiness: Practical Advances in Artificial Intelligence (AI) and Machine Learning”  (available on Amazon at https://www.amazon.com/Guardian-Angel-Bots-Deep-Trustworthiness/dp/0692800611)

By providing a healthy set of API’s, the developers of the original software system are able to encourage the emergence of a third-party add-on ecosystem. This in turn will help make the original software system more popular as others connect to it and rely upon it. Eventually, with some luck and skill, the original software system becomes immersed in so many other areas of life that it becomes undeniably necessary. What might have begun as a small effort can snowball into a widespread and highly known cornerstone for an entire marketplace radiating outward from the original software core.

With great promise often comes great peril. In the case of API’s, there is a chance that the use of the API’s can boomerang on a company that made the original software system. These portals can be used as intended and yet cause undesirable results, along with them being used for unintended nefarious reasons and also causing undesirable results.

Let’s consider the case of an API used as intended, but that has caused what some perceive as an undesirable result. This particular example involving Gmail has been in the news recently and led to some untoward attention and concerns.

Google allows for API’s to connect to Gmail. This would seem useful since it would allow other software developers to connect their software with Gmail. This can provide handy new capabilities for Gmail that otherwise would have never existed. Meanwhile, those software developers that might have written something that would have never seen the light of day might be able to piggyback onto the popularity of Gmail and hit a home run.

When an app that is able to connect to Gmail via the API is first run, it usually asks the user whether they are OK with the app connecting into their Gmail. Many users don’t read the fine print on these kinds of messages and are so eager to get the app that they just say yes to anything that the app displays when being installed. Or, the user might be tempted to read what the conditions are, but it is so lengthy and written in arcane legalese that they don’t do so, and often wonder whether they maybe have given up their first born child by agreeing to the app’s conditions. It’s a combination of the app at times being tricky about explaining what’s up, and the end-user not diligently making sure that they know what they are signing up for.

Typically, once the user agrees to the app request at first install, Google then grants the app access to that user’s Gmail. This includes being able to access their emails. The app can potentially read the contents of those emails. It can potentially delete emails. It can potentially send emails on behalf of the user.

In recent widespread news reports, the media caused a stir by finding some companies that read the user’s Gmail emails via AI, doing so to try and figure out what interests the person has and possibly then hit them with ads. In some cases, the emails are even read by humans at the software company, presumably for purposes of being able to gauge how well the AI is doing the reading of the emails. There are also some firms that provide the emails or snapshots of the emails to other third-parties that they have deals with. All in all, it was a bit of a shock to many people that they had provided such access to their “private” email.

I realize that many software developers would blame the user on this – how dumb can you be to go ahead and agree to have your emails accessed and then later on complain that it is taking place? As I mentioned earlier, many users aren’t aware they are doing so, or might be vaguely aware but not really putting two and two together to fully understand the implications of what they have allowed to happen. There are some software developers that insist their app is doing a service for the user, and that by reading their emails it is helping to target them with things that the person is interested in. That’s a bit of a stretch and for many users the logic doesn’t ring true.

You might remember the case of McDonald’s in India and the API that allowed personal information of the McDelivery mobile app to be leaked out. The API connection, normally intended for useful and proper uses, also allowed access to the name, phone numbers, home address, email addresses, and other private info. This was unintended and was an undesirable result.

Hackers Love API’s

As you might guess, hackers love it when there are API’s. It gives them hope that there might be a means to sneakily “break into” a system. I’ve likened this to a fortress that has all sorts of fortified locked doors, yet also provides a window that someone with a bit of extra effort can use to get into the fort. Software companies often spend a tremendous amount of effort to try and make their software impervious to security breaches and attacks, and yet then provide an API that exposes aspects that undermine all the rest of their security.

How could that happen? Wouldn’t the API’s get as much scrutiny as the rest of the system in terms of becoming secure? The answer is that no, the API’s often don’t get as much scrutiny. The perception of the company making the software is that the API’s are some kind of techie detail and there’s no need to make sure those are tight. In my experience, most software firms happily provide the API’s in hopes that someone will want to use them, and aren’t nearly as concerned that those that might use them would do so for nefarious reasons.

The API’s are often classified into these three groupings:

  •         Private
  •         Partner
  •         Public

API’s that are considered private are usually intended to be used solely by the firm making the software. They setup the API’s for their own convenience. This also though often means that the API’s have a lot of power and can access all sorts of aspects of the software. The firm figures that’s Okay since only the firm itself will presumably be using the API’s. These are often either undocumented and just known amongst those that developed the software, or there is written documentation but it is kept inside the firm and written for those that are insiders.

API’s that are oriented toward partners are intended to be used by allied firms that the firm making the software decides to cut some kind of deal with. Maybe I make a software package that does sales and marketing kinds of functions, while a firm I cut a deal with has a software package for accounting and wants to connect with my package. Once again, the assumption is that only authorized developers at properly engaged partner firms will use these API’s. The power of the access by these API’s is once again relatively high, but usually less than the private API’s since the original developers often don’t want the third-party to mess up and do great harm. The documentation is often a bit more elaborate than for the private API’s since the partner firm and its developers need to know what the API’s do.

API’s of a public nature are intended to be used by anyone that wants to access the software. These are often very limited in their access capabilities and are considered potential threats to the system. Thus, only the need-to-know aspects are usually made available. The documentation can sometimes be very elaborate and extensive, while in other cases the documentation is slim and the assumption is that people will figure it out on their own or they might share amongst each other as they figure out what the API’s do.
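As a rough illustration of how these three tiers are often enforced, here is a small Python sketch keyed off a caller’s tier; the tier names, endpoint catalog, and inheritance scheme are all assumptions I’ve made for the sketch, not any particular vendor’s design:

# Illustrative tiering of API endpoints by caller type. The scope names and
# the endpoint catalog are assumptions made purely for this sketch.

ENDPOINT_TIERS = {
    "public":  {"get_vehicle_status"},
    "partner": {"get_vehicle_status", "get_diagnostics"},
    "private": {"get_vehicle_status", "get_diagnostics", "update_firmware"},
}

def is_call_allowed(caller_tier: str, endpoint: str) -> bool:
    # A partner caller gets everything a public caller gets, plus more;
    # a private (in-house) caller gets the most powerful endpoints.
    return endpoint in ENDPOINT_TIERS.get(caller_tier, set())

print(is_call_allowed("public", "update_firmware"))   # False
print(is_call_allowed("private", "update_firmware"))  # True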

What sometimes happens is that a firm provides say public API’s, and secretly has partner API’s and private API’s. Those developers that opt to use the public API’s become curious about the partner API’s, and either figure them out on their own, or convince a partner to leak details about what they are. If the partner API’s can be used, the next step is to go after the private API’s. It can become a stepwise progression to figuring out the whole set of API’s.

API’s are also often classified by whether they:

  •         Perform an action
  •         Provide object access

Let’s first consider the action-performing type of API. This allows an app invoking the API to request that the original software perform an action that has been made available via the API. For example, suppose there’s a car that has an electronic on-board system and there’s an API associated with the system. You develop a mobile app that connects to the on-board electronic system and you opt to use the API to invoke an action that the electronic system is capable of performing. Suppose the action consists of honking the horn. Your mobile app then connects to the on-board electronic system and via the API requests the electronic system to honk the horn, which it then dutifully does. Honk, honk.
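A bare-bones version of such an action-style call might look like the following Python sketch; the base URL, endpoint path, and token handling are hypothetical, invented only to show the general shape of the request:

# Hypothetical action-style API call: ask the on-board system to honk the horn.
# The base URL, path, and token are illustrative assumptions, not a real API.
import requests

def honk_horn(base_url: str, token: str) -> bool:
    response = requests.post(
        f"{base_url}/vehicle/actions/honk_horn",
        headers={"Authorization": f"Bearer {token}"},
        timeout=5,
    )
    return response.status_code == 200  # True if the system accepted the action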

Or, an app might seek to get access to an object and do so via the API. Suppose the electronic on-board system of the car has data in it that includes the name of the car owner and vehicle info such as the make, model, and number of miles driven. The developers of the electronic on-board system might make available an API that allows for access to the “car owner object” that has that data. You then create an app that connects to the electronic on-board car system and asks via the API to access the car owner object. Once the object is provided, your app then reads the data and now can display it on the screen of the mobile app.
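And a matching object-access call, again with made-up paths and field names, might be as simple as this sketch:

# Hypothetical object-access API call: fetch the "car owner object" described
# above. Paths and field names are assumptions made for this illustration.
import requests

def get_owner_object(base_url: str, token: str) -> dict:
    response = requests.get(
        f"{base_url}/vehicle/objects/owner",
        headers={"Authorization": f"Bearer {token}"},
        timeout=5,
    )
    response.raise_for_status()
    return response.json()  # e.g. {"owner": "...", "make": "...", "miles": ...}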

How does this apply to AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. This includes providing API’s, and also involves making use of API’s provided by other allied software systems and components.

If you’ve ever played with the API for the Tesla, you likely know that you can get access to vehicle information, vehicle settings, and the like. You can also invoke actions such as honking the horn, waking up the car, starting the charging of the car, setting the car climate controls, opening the trunk, and so on. It’s fun and exciting to create your own mobile app to do these things. That being said, there is already a mobile app provided by Tesla that does these things, so it really doesn’t pay off to create them yourself, other than for the personal satisfaction involved and to explore the nature of API’s on cars, or if you are trying to develop your own third-party app and want to avoid or circumvent the official one.

One of the crucial aspects about API’s for cars is that a car is a life-or-death matter. It’s one thing to provide API’s to an on-board entertainment center, allowing you to write an app that can connect to it and play your favorite songs. Not much of a life-or-death matter there. On the other hand, if the car provides API’s that allow for actual car control aspects, it could be something much more dangerous and of concern.

Now that I’ve dragged you through the fundamentals of API’s, it gets us to some important points:

  •         What kind of API’s, if any, should an AI self-driving car provide?
  •         If the API’s are provided for an AI self-driving car, how will they be protected from misuse?
  •         If the API’s are provided for an AI self-driving car, how will they be tested to ensure their veracity?
  •         Etc.

Some auto makers and tech firms are indicating they will not provide any API’s regarding their AI self-driving cars. That’s their prerogative and we’ll have to see if that’s a good strategy.

Some are making private API’s and trying to be secretive about it. The question always arises, how can you keep it secret and what happens if the secret gets discovered?

Some are making partner API’s and letting their various business partners know about it. This can be handy, though as mentioned earlier it might start other third-parties down the path of figuring out the partner API’s and then next aiming at the private API’s.

Overall, it’s a mixed bag as to how the various AI self-driving car firms are opting to deal with API’s.

There’s also another twist to the API topic for AI self-driving cars, namely:

  •         API’s for Self-Driving Car On-Board System

o   API for the AI portion of self-driving car on-board system

o   API for non-AI portions of the self-driving on-board system

  •         API’s for Self-Driving Car Cloud-Based System

o   API for AI portion of self-driving car cloud-based system

o   API for non-AI portions of the self-driving car cloud-based system

There can be API’s for the on-board systems of the self-driving car, and there can be other API’s for the cloud-based system of the self-driving car. Most AI self-driving cars are going to have OTA (Over The Air) capabilities to interact with a cloud-based system established by the auto maker or tech firm. From a third-party perspective, it would be handy to be able to communicate with the software that’s in the cloud over OTA, in addition to the software that’s on-board the self-driving car.

See my article about the OTA in AI self-driving cars: https://aitrends.com/selfdrivingcars/air-ota-updating-ai-self-driving-cars/

Some AI developers think it is crazy talk to allow API’s for the self-driving car on-board systems. They believe that the on-board systems are sacrosanct and that nobody but nobody should be poking around in them. Likewise, there are AI developers that believe fervently that there should not be API’s allowed for the cloud-based systems associated with self-driving cars. They perceive that this could lead to incredible troubles, since it might somehow allow someone to do something untoward that could then get spread to all of the self-driving cars that connect to the cloud-based system.

See my article about kits for AI self-driving cars: https://aitrends.com/selfdrivingcars/kits-and-ai-self-driving-cars/

See my article about security and AI self-driving cars: https://aitrends.com/selfdrivingcars/ai-deep-learning-backdoor-security-holes-self-driving-cars-detection-prevention/

There are some auto makers and tech firms that want to provide API’s, which they do so in hopes that their AI self-driving car will become more popular than their competition. As mentioned earlier, if you can get a thriving third-party ecosystem going, it can greatly help boost your core system and get it to become more enmeshed into the marketplace. Also, if you have only one hundred developers in your company, they can only do so much, but if you can have thousands upon thousands of “developers” that write more software to connect to your system, you have magnified greatly your programming reach.

Innocent API’s Promise Not to Endanger

It is believed by some that API’s can be provided for aspects that don’t endanger the self-driving car and its occupants; these are the so-called innocent API’s.

Suppose for example that the API’s only allow for retrieval of information from the AI and the self-driving car. This presumably would prevent someone from getting the AI self-driving car to perform an undue action. Just make available API’s for object access, but none that allow for performing an action. You can still criticize this and suggest there might be a loss of information privacy due to the object-access API’s, but at least it isn’t going to directly commandeer the AI self-driving car.
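One way to picture such an innocent API surface is a gateway that simply refuses anything other than read operations; here is a minimal Python sketch, with endpoint names of my own choosing:

# Minimal sketch of an "innocent" API gateway: only object-retrieval calls are
# exposed, and any action-performing call is rejected. Names are assumptions.

READ_ONLY_ENDPOINTS = {"get_battery_level", "get_odometer", "get_tire_pressure"}

def handle_request(endpoint: str, handlers: dict):
    if endpoint not in READ_ONLY_ENDPOINTS:
        raise PermissionError(f"{endpoint} is not exposed by this API")
    return handlers[endpoint]()

handlers = {"get_odometer": lambda: 48213}
print(handle_request("get_odometer", handlers))      # 48213
# handle_request("honk_horn", handlers) would raise PermissionError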

See my article on privacy of AI self-driving cars: https://aitrends.com/selfdrivingcars/privacy-ai-self-driving-cars/

Another viewpoint is that it is Okay to allow for action performing API’s, but those API’s would be constrained to only narrow and presumably safe actions. Suppose you have an API that allows for honking the horn or for flashing the lights of the car? Those seem innocuous. That being said, I suppose if you honk the horn at the wrong time it can confuse pedestrians and maybe also scare people. Similarly, flashing the lights of the car at the wrong time might be alarming to another human driver of a human driven car. Generally, those don’t seem overly unsafe per se.

There are five core stages of an AI self-driving car while in action:

  •         Sensors data collection and interpretation
  •         Sensor fusion
  •         Virtual world model updating
  •         AI action planning
  •         Car controls commands issuance

See my article about my framework for AI self-driving cars: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

If there was an API for the retrieval of information from the sensor data collection and interpretation, this would seem to be innocuous. Indeed, it might allow a clever third-party to develop add-ons that could do some impressive augmentation to the sensor analysis. You could potentially also grab the data and push it through other machine learning models to try and find better ways to interpret the data. As mentioned before, this could though have privacy and other complications.
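For instance, a third-party add-on might poll such a retrieval-only API and run the returned frames through its own interpretation model; a hedged sketch follows, in which every URL, field, and function name is an assumption:

# Hedged sketch: poll a hypothetical sensor-retrieval API and feed the frames
# into a separate, third-party interpretation step. All names are assumptions.
import requests

def fetch_sensor_frame(base_url: str, token: str) -> dict:
    resp = requests.get(
        f"{base_url}/sensors/latest_frame",
        headers={"Authorization": f"Bearer {token}"},
        timeout=2,
    )
    resp.raise_for_status()
    return resp.json()

def reinterpret(frame: dict) -> str:
    # Stand-in for a third-party machine learning model; a real add-on would
    # run its own classifier here rather than a hard-coded rule.
    return "object_ahead" if frame.get("objects") else "clear"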

For the sensor fusion, suppose you provided an API that would allow for invoking some subroutines that combine the radar data and the LIDAR data. This raises all sorts of potential issues. Will this undermine the validity of the system? Will this consume on-board computer resources and possibly starve other mission-critical elements? And so on.

The same concerns can be raised about API’s that might invoke actions of the virtual world model, or actions involving the AI action plan updating. The same is the case for toying with the car controls commands issuance. Indeed, any kind of taxing of those components, even if only for data retrieval, would have to be done in such a manner that it does not simultaneously slow down or distract those aspects while they are working.

We must also consider that there can be a difference between what an API was intended to do, and what it actually does. If the auto maker or tech firm was not careful, they could have provided an API that is only supposed to honk the horn, but that if used in some other manner it can suddenly (let’s pretend) change the steering direction of the self-driving car. This shouldn’t happen, of course, and could produce deadly consequences. It wasn’t intended to happen. But inadvertently, while creating the API, the developers made a hole that allowed for this to occur. Some determined hackers might discover that the API has this other purpose.
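A contrived sketch of how such a hole can creep in: the handler below is meant only to honk the horn, yet a sloppy pass-through of caller-supplied parameters quietly reaches the steering as well. This is entirely invented for illustration and is not how any real on-board API is built:

# Contrived illustration of an API handler with an unintended side effect.
# The blanket keyword pass-through is the "hole"; nothing here is real code.

def apply_controls(horn: bool = False, steering_angle: float = 0.0) -> dict:
    return {"horn": horn, "steering_angle": steering_angle}

def honk_horn_api(**params) -> dict:
    # Intended use: honk_horn_api() just honks. But because every
    # caller-supplied parameter is forwarded, a crafted call such as
    # honk_horn_api(steering_angle=15.0) also changes the steering.
    return apply_controls(horn=True, **params)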

Now, I am sure that some of you will say that even if there is something untoward in an API capability, all the auto maker or tech firm needs to do is send out an update via the OTA and close off that back-door. Yes, kind of. First, the auto maker or tech firm has to even find out that the back-door exists. Then, they need to create the plug or fix, and test it to make sure it doesn’t produce some other untoward result. They then need to push it out to the self-driving cars via the OTA. The self-driving cars have to have their OTA enabled and download the plug or fix, and install it. All of this can take time, and meanwhile the self-driving cars are “exposed” in terms of someone taking a nefarious advantage of the hole.

The API’s are often set up with authentication that requires any connecting system to have proper authority to access the API. This is a handy and important security feature. That being said, it is not necessarily an impenetrable barrier to using the API. Remember the story of the app that gains access to your Gmail when you first install it, by getting your permission to do so. Suppose you are installing an app on your smartphone, which you’ve already connected to your AI self-driving car, and you are asked by the app to allow it to access the API’s in your self-driving car. You indicate yes, not knowing what ramifications this could have.
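A minimal sketch of why that consent step matters so much: once a scope has been granted to the app’s token, the API checks only the token, not the user’s ongoing intent. The token values and scope names below are assumptions made for this illustration:

# Minimal sketch of scope-based authorization: the API trusts whatever scopes
# were granted at install time. Token values and scope names are assumptions.

GRANTED_TOKENS = {
    "app-token-123": {"read_vehicle_status", "issue_horn_command"},
}

def authorize(token: str, required_scope: str) -> bool:
    # The check is purely mechanical; if the user consented once, the app keeps
    # the capability until the token or scope is revoked.
    return required_scope in GRANTED_TOKENS.get(token, set())

print(authorize("app-token-123", "issue_horn_command"))  # True
print(authorize("app-token-123", "update_firmware"))     # False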

Will AI self-driving car makers provide API’s? Will they provide SDK’s (Software Development Kits)? Will they discourage or encourage so-called “hot wiring” of AI self-driving cars? Perhaps the path will be to limit any such capabilities to only on-board entertainment systems and not at all to any kind of car control or driving task elements.

Without such API’s, presumably the AI self-driving car might be safer, but will it also lose out on the possible bonanza of all sorts of third-party add-ons that would make your AI self-driving car superior to others and become the de facto standard AI self-driving car that everyone wants? We’ll have to wait and see how the API wars play out.

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.

 

Egocentric Design and AI Self-Driving Cars


By Lance Eliot, the AI Trends Insider

You might find of interest the social psychology aspect known as the actor-observer effect. Before I explain what it is, allow me to provide you with a small story of something that happened the other day.

I was chatting with an AI developer that is creating software for an auto maker and he had approached me after I had finished talking at an industry conference. During my speech, I had mentioned several so-called “edge” cases involving AI self-driving cars. These edge cases involved aspects such as an AI self-driving car being able to navigate safely and properly a roundabout or traffic circle, and being able to navigate safely an accident scene, and so on.

See my article about AI self-driving cars and roundabouts: https://aitrends.com/selfdrivingcars/solving-roundabouts-traffic-circle-traversal-problem-self-driving-cars/

See my article about accident scene traversal and AI self-driving cars: https://aitrends.com/selfdrivingcars/accident-scene-traversal-self-driving-cars/

See my article about edge problems for AI self-driving cars:  https://aitrends.com/selfdrivingcars/edge-problems-core-true-self-driving-cars-achieving-last-mile/

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars and also advising other firms about the matter too. Thus, we’re working on quite a number of edge problems.

Well, the AI developer was curious why I cared about the “edge” problems in AI self-driving cars.

An edge problem is one that is not considered at the core of a system. It is considered less vital and an aspect that you can presumably come around and solve at a later time, after you’ve finished up the core. This is not a hard-and-fast rule, in the sense that something one person thinks is an edge might truly be part of the core. Or, something might indeed not be at the core, yet without that edge you are going to have a very limited and potentially brittle core.

Edges are often in the eyes of the beholder. Thus, be careful when someone tosses out there that some feature or capability or issue is an “edge” problem. It’s an easy means to deflect attention and distract you from realizing that maybe the edge is truly needed, or that the core you are going to be getting will fall apart because it fails to solve an edge aspect. I’ve seen many people be dismissive of something important by trying to label it as an edge problem. This is a sneaky way at times to avoid having to cover an aspect, and instead pretend that it is inconsequential. Oh, that’s just an edge, someone will assert, and then walk away from the conversation or drop the microphone, as it were.

Anyway, back to my chat with the AI developer. I asked him why he was curious that I was so serious about the various edge problems of AI self-driving cars. It seemed relatively self-evident that these are aspects that can occur in real-world driving situations and that if we are going to have AI self-driving cars that are on our roadways we ought to be able to expect that those self-driving cars can handle everyday driving circumstances. Self-driving cars are a life-and-death matter. If an AI self-driving car cannot handle the driving tasks at-hand, things can get mighty dangerous.

His reply was that there was no particular need to deal with these various “edge” problems. As an example, I asked him what would his AI do when it encountered a driving situation involving a roundabout or traffic circle (I’m sure you know what these are — they are areas where cars go around a circle to then get to an exit radiating from the circle)?

He replied that it wouldn’t have to deal with it. The GPS would have alerted his AI that a roundabout was upcoming, and his AI would simply route itself another way. By avoiding the “edge” problem, he said that it no longer mattered.

Really, I asked?

I pointed out that suppose the GPS did not have it marked and thus the self-driving car went into the roundabout anyway? Or, suppose the GPS wasn’t working properly and so the AI self-driving car blindly went to the roundabout. Even if the GPS did indicate it was there, suppose that there was no viable alternative route and that the self-driving car would have to proceed through the roundabout? Was it supposed to always take the long way, assuming that such a path was even available? This reminded me of a teenage driver that I knew that avoided roundabouts because he was scared of them.

He insisted that none of these aspects would occur. He stood steadfast that there was no need to worry about it. He said I might as well say that suppose aliens from Mars came down to earth. Should he need to have his AI cope with that too?

The Pogo Stick Problem

This brings up another example that’s been making the rounds in the hallways of the AI developers at auto makers and tech firms doing self-driving car systems. It’s the pogo stick problem. The AI self-driving car is going down a street, minding its own business (so to speak), and all of a sudden a human on a pogo stick bounces into the road and directly in front of the self-driving car. What does the AI do?

One answer that some have asserted is that this will never happen. They retort that the odds of a person being on a pogo stick are extremely remote. If there was such a circumstance, the odds that the person on the pogo stick would go out into the street are even more remote. And the odds that they would do this just as a car was approaching are more remote still, since why would someone be stupid enough to pogo stick into traffic and risk getting hit?

In this viewpoint, we are at some kind of odds that are like getting hit by lightning. In fact, they would say the odds are even lower, more like getting hit by lightning twice in a row.

I am not so sure that the probability of this happening is quite as low as they would claim. They are also suggesting or implying that the probability is zero. This seems like a false suggestion since I think we can all agree there is a chance it could happen. No matter how small a chance, it is definitely more than zero.

Those that buy into the zero-probability belief will then refuse to discuss the matter any further. They say it is like discussing the tooth fairy, so why waste time on something that will never happen. There are some whom I can at least get to consider what would happen if it did occur, even at really remote odds. What then?

They then seem to divide into one of two camps. There’s the camp that says if the human was stupid enough to pogo stick into the road and directly in front of the self-driving car, whatever happens next is their fault. If the AI detects them and screeches the car to a halt and still hits them, because there wasn’t enough distance between them and the self-driving car, that’s the fault of the stupid human on the pogo stick. Case closed.

The other camp says that we shouldn’t allow humans on pogo sticks to go out onto the road. They believe that the matter should be a legal one, outlawing people from using pogo sticks on streets. I point out that even if there were such a law, it is conceivable that a “law breaker” (like say a child on a pogo stick, who I guess might be facing a life of crime by using it in the streets) might wander unknowingly into the street. What then? The reply to that is that we need to put up barriers to prevent pogo stick riding humans from going out into the streets. All I can say is imagine a world in which we have tall barriers on all streets across all of the United States so that we won’t have pogo stick wandering kids. Imagine that!

If you think these kinds of arguments seem somewhat foolish in that why not just make the AI of the self-driving car so it can deal with a pogo stick riding human, you are perhaps starting to see what I call egocentric design of AI self-driving cars.

There are some firms and some AI developers that look at the world through the eyes of the self-driving car. What’s best for the self-driving car is the way that the world should be, in their view. If pogo riding humans are a pest for self-driving cars, get rid of the pests, so to speak, by outlawing those humans or do something like erecting a barrier to keep them from becoming a problem. Why should the AI need to shoulder the hassle of those pogo stick riding humans? Solve the problem by instead controlling the environment.

For those of you that are standing outside of this kind of viewpoint, you likely find it to be somewhat a bizarre perspective. It likely seems to you that it is real-world impractical to consider controlling the environment. The environment is what it is. Take it as a given. Make your darned AI good enough to deal with it. Expect that humans on pogo sticks are going to happen. Live with it.

What’s even more damning is that there are lots of variants beyond just a pogo stick riding human that could fall into the same classification of sorts. Suppose a human on a scooter suddenly went into the street in front of the self-driving car? Isn’t that the same class of problem? And, isn’t it pretty good odds that with the recent advent of ridesharing scooters we’ll see this happening more and more?

If you are perplexed that anybody of their right mind could somehow believe that the AI of a self-driving car does not need to deal with the pogo stick riding human, and worse still the scooter riding human that is more likely prevalent, you might be interested in the actor-observer effect.

Here’s the background about the actor-observer effect.

Suppose we put someone into a room to do some work, let’s make it office type of work. We’ll have a one-way mirror that allows you to stand outside the room and watch what the person is doing. Let’s pretend that the person in the room is unaware that they are being observed. We’ll refer to the person in the room as an “actor” and we’ll refer to you standing outside the room as the “observer.”

At first, there will be work brought into the room, some kind of paperwork to be done, and it will be given to the actor. They are supposed to work on this paperwork task. You are watching them and so far all seems relatively normal and benign. They do the work. You can see that they are doing the work. The work is getting accomplished.

Next, the amount of work brought into the room starts to increase. The actor begins to sweat as they are genuinely trying to keep up with the volume of paperwork to be processed. Even more paperwork is brought into the room. Now the actor starts to get frantic. It’s way too much work. It is beginning to pile up. The actor is getting strained and you can see that they are obviously unable to get the work completed.

We stop the experiment.

If we were to ask you what happened, as an observer you would likely say that the person doing the work was incapable of keeping up with the work required. The actor was low performing. Had the actor done a better job, they presumably could have kept up. They didn’t seem to know or find a means to be efficient enough to get the work done.

If we were to ask the actor what happened, they would likely say that they were doing fine at the start, but then the environment went wacky. They were inundated with an unfair amount of paperwork. Nobody could have coped with it. They did the best they could do.

Which of these is right – the actor or the observer?

Perspective Determines What is Seen

It’s not so much about right or wrong, as it is the perspective of the matter. Usually, an actor or the person in the middle or midst of an activity tends to look at themselves as the stable part and the environment as the uncontrollable part. Meanwhile, the observer tends to see the environment as the part that is given, and it is the actor that becomes the focus of attention.

If you are a manager, you might have encountered this same kind of phenomenon when you first started managing other people. You have someone working for you that seems to not be keeping up. They argue that it is because they are being given an unfair amount of work to do. You meanwhile believe they are being given a fair amount of work and it is their performance that’s at fault. You, and the person you are managing, can end up endlessly going round and round about this, caught in a nearly hopeless deadlock. Each of you likely becomes increasingly insistent that the other one is not seeing things the right way.

It is likely due to the actor-observer effect, namely:

  •         When you are in an observer position, you tend to see the environment as a given. The thing that needs to change is the actor.
  •         When you are in the actor position, you tend to see the environment as something that needs to be changed, and you are the given.

Until both parties realize the impact of this effect, it becomes very hard to carry on a balanced discussion. Otherwise, it’s like looking at a painting that one of you insists is red, and the other insists is blue. Neither of you will be able to discuss the painting in other more useful terms until you realize that you each are seeing a particular color that maybe makes sense depending upon the nature of your eyes and your cornea.

Let’s then revisit the AI developer that I spoke with at the conference. Recall that he was insistent that the edge problems were not important. For the pogo stick riding human example, the “problem” at hand was the stupid human. I was saying that the problem was that the AI was insufficient to cope with the pogo stick riding human.  Why did we not see eye to eye?

His focus was on the self-driving car. In a sense, he’s like the actor in the actor-observer effect. His view was that the environment was the problem and so all you need to do is change the wacky environment. My view was that of the “observer” in that I assert the environment is a given, and you need to make the “actor” up to snuff to deal with that environment.

This then brings us to the egocentric design of AI self-driving cars. There are many auto makers and tech firms that are filled with AI developers and teams that view the world from the perspective of the AI self-driving car. They want the world to fit to what their AI self-driving car can do. This could be considered “egocentric” because it elevates the AI of the self-driving car in terms of being the focus. It does what it does. What it can’t do, that’s tough for the rest of us. Live with it.

For the rest of us, we tend to say wait a second, they need to make the AI self-driving car do whatever the environment requires. Putting an AI self-driving car onto our roadways is something that is a privilege and they need to consider it as such. It is on the shoulders of the AI developers and the auto makers and tech firms to make that AI self-driving car deal with whatever comes its way.

Believe it or not, I’ve had some of these auto makers and tech firms that have said we ought to have special roads just for AI self-driving cars. The reason for this is that whenever I point out that self-driving cars will need to mix with human driven cars, and so the AI needs to know how to deal with cars around it that are being driven by “unpredictable” humans, the answer I get is that we should devote special roads for AI self-driving cars. Divide the AI self-driving cars from those pesky human drivers.

There are some AI developers that dream wishfully of the day that there are only AI self-driving cars on our roadways. I point out that’s not going to happen for a very long time. In the United States alone we have 200 million conventional cars. Those are not going away overnight. If we are going to be introducing true Level 5 self-driving cars onto our roadways, it is going to be done in a mixture with human driven cars. As such, the AI has to assume there will be human driven cars and needs to be able to cope with those human driven cars.

The solution voiced by some AI developers is to separate the AI self-driving cars from the human driven cars. For example, convert the HOV lanes into AI self-driving car only lanes. I then ask them what happens when a human driven car decides to swerve into the HOV lane that has AI self-driving cars? Their answer is that the HOV lanes need to have barriers to prevent this from happening. And so on, with the reply always dealing with changing the environment to make this feasible. What about motorcycles? Answer, make sure the barriers will prevent motorcycles from going into the HOV lane. What about animals that wander onto the highway? Answer, the barriers should prevent animals or put up other additional barriers on the sides of the highway to prevent animals from wandering in.

After seeing how far they’ll go on this, I eventually get them to a point where I ask if maybe we ought to consider the AI self-driving car to be similar to a train. Right now, we usually cordon off train tracks. We put barriers to prevent anything from wandering into the path of the train. We put up signs warning that the train is coming. Isn’t that what they are arguing for? Namely, that AI self-driving cars are to be treated like trains?

But, if that’s the case, I don’t quite see where the AI part of the self-driving cars enters into things. Why not just make some kind of simpleton software that treats each car like it is part of a train? You then have these semi-automated cars that come together and collect into a series of cars like a train does. They then proceed along as a train. There are some that have even proposed this, though I’ll grant them that at least they view this as something like a “smart” colony of self-driving cars that come together when needed, but then still are individual “intelligent” self-driving cars once they leave the hive.

See my article about swarm intelligence and AI: https://aitrends.com/selfdrivingcars/swarm-intelligence-ai-self-driving-cars-stigmergy-boids/

See my framework about AI self-driving cars: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

Those that are making AI self-driving cars need to look past an egocentric view. We are not going to have true AI self-driving cars if we continue to try and limit the environment. A true Level 5 self-driving car is supposed to be able to drive a car like a human would. If that’s the case, we then ought to not have to change anything per se about the existing driving environment. If humans can drive it, the AI should be able to do the same.

I tried to explain this to the AI developer. I’m not sure that my words made much sense, since I think he was still seeing the painting as entirely in the color of red, while I was talking about the color blue. Maybe my words herein about the actor-observer effect might aid him in seeing the situation from both sides. I certainly hope so.

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.