Coopetition and AI Self-Driving Cars


By Lance Eliot, the AI Trends Insider

Competitors usually fight tooth and nail for every inch of ground they can gain over each other. It’s a dog-eat-dog world, and the more of an advantage you can gain over your competition, the better off you’ll be. If you can even somehow drive your competition out of business, well, as long as it happened legally, there’s more of the pie for you.

Given this rather obvious and strident desire to beat your competition, it might seem like heresy to suggest that you might at times consider backing down from being at each other’s throats and instead, dare I say, possibly cooperate with your competition. You might not be aware that the US Postal Service (USPS) has cooperative arrangements with FedEx and UPS. On the surface it seems wild that these competitors, all directly competing as shippers, would consider working together rather than solely battling each other.

Here’s another example: Wintel. For those of you in the tech arena, you know well that Microsoft and Intel have seemingly forever cooperated with each other. The Windows and Intel mash-up, Wintel, has been pretty good for each of them, respectively and collectively. When Intel’s chips became more powerful, it aided Microsoft in speeding up Windows and adding more and heavier features. As people used Windows and wanted faster speed and greater capabilities, it sparked Intel to boost its chips, knowing there was a place to sell them and more money to be made by doing so. You could say it is a synergistic relationship between those two firms that in combination has aided them both.

Now, I realize you might object somewhat and insist that Microsoft and Intel are not competitors per se, and thus the suggestion that this was two competitors that found a means to cooperate seems either an unfair characterization or a false one. You’d be somewhat on the mark to have noticed that they don’t seem to be direct competitors, though they could be if they wanted to be (Microsoft could easily get into the chip business, Intel could easily get into the OS business, and they’ve both dabbled in each other’s pond from time to time). Certainly, though, it’s not as strong a straight-ahead competition example as the USPS, FedEx, and UPS kind of cooperative arrangement.

There’s a word used to depict the mash-up of competition and cooperation, namely coopetition.

The word coopetition grew into prominence in the 1990s. Some people instantly react to the notion of being both a competitor and a cooperator as though it’s a crazy idea. What, give away my secrets to my competition, are you nuts? Indeed, trying to pull off a coopetition can be tricky, as I’ll describe further herein. Please also be aware that occasionally you’ll see the use of the more informal phrasing of “frenemy” to depict a similar notion (another kind of mash-up, this one being between the word “friend” and the word “enemy”).

There are those that instantly recoil in horror at the idea of coopetition and their knee-jerk reaction is that it must be utterly illegal. They assume that there must be laws that prevent such a thing. Generally, depending upon how the coopetition is arranged, there’s nothing illegal about it per se. A coopetition can, though, veer in a direction that raises legal concerns, and thus the participants need to be especially careful about what they do, how they do it, and what impact it has on the marketplace.

It’s not particularly the potential for legal difficulties that tends to keep coopetition from happening. By and large, structuring a coopetition arrangement, say by putting together a consortium, can be done with relatively little effort and cost. The real question and the bigger difficulty is whether the competing firms are able to find middle ground that allows them to enter into a coopetition agreement.

Think about today’s major high-tech firms.

Most of them are run by strong CEOs or founders that relish being bold and love smashing their competition. They often drive their firm to have a kind of intense “hatred” for the competition and want their firm to crush the competition. Within a firm, there is often a cultural milieu formed that their firm is far superior, and the competition is unquestionably inferior. Your firm is a winner, the competing firm is a loser. That being said, these leaders don’t want you to let down your guard: though the other firm is an alleged loser, it can pop up at any moment and be on the attack. To some degree, there’s a begrudging respect for the competition, paradoxically mixed with disdain for the competition.

These strong personalities will generally tend to keep the competitive juices going and not permit the possibility of a coopetition option. On the other hand, even these strong personalities can be motivated to consider the coopetition approach if the circumstances or the deal look attractive enough. With a desire to get bigger and stronger, if it seems like a coopetition could get you there, even the most egocentric of leaders is willing to give the matter some thought. Of course, it’s got to be incredibly compelling, but at least it is worthy of consideration and not out of the question to float the idea.

What could be compelling?

Here’s a number for you: $7 trillion.

Allow me to explain.

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. We do so because it’s going to be a gargantuan market, and because it’s exciting to be creating something that’s on par with a moonshot.

See my article about how making AI self-driving cars is like a moonshot: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

See my article that provides a framework about AI self-driving cars: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/

Total AI Self-Driving Car Market Estimated at $7 Trillion

Suppose you were the head of a car maker, or the head of a high-tech firm that wants to make or is making tech for cars, and I told you that the potential market for AI self-driving cars is estimated at $7 trillion by the year 2050 (as predicted in Fortune magazine, see: http://fortune.com/2017/06/03/autonomous-vehicles-market/).

That’s right, I said $7 trillion. It’s a lot of money. It’s a boatload, and more, of money. The odds are that you would want to do whatever you could to get a piece of that action. Even a small slice, let’s say just a few percentage points, would make your firm huge.

Furthermore, consider things from the other side of that coin. Suppose you don’t get a piece of that pie. Whatever else you are doing is likely to become crumbs. If you are making conventional cars, the odds are that few will want to buy them anymore. Some AI self-driving car pundits are even suggesting that conventional cars will be outlawed by 2050. The logic is that if you have conventional cars being driven by humans on our roadways in the 2050s, it will muck up the potential nirvana of having all AI self-driving cars that presumably will be able to work in unison and thus get us to the vaunted zero fatalities goal.

For my article that debunks the zero fatalities goal, see: https://aitrends.com/selfdrivingcars/self-driving-cars-zero-fatalities-zero-chance/

If you are a high-tech firm and you’ve not gotten into the AI self-driving car realm, your fear is that you’ll also miss out on the $7 trillion prize. Suppose that your high-tech competitor got into AI self-driving cars early on and they became the standard, kind of like the fight between VHS and Betamax. Maybe it’s wisest to get into things early and become the standard.

Or, alternatively, maybe the early arrivers will waste a lot of money trying to figure out what to do, so instead of falling into that trap, you wait on the periphery, avoiding the drain of resources, and then jump in once the others have flailed around. Many in Silicon Valley seem to believe that you have to be the first into a new realm. This is actually a misconception, since many of the most prominent firms in many areas weren’t there first; they came along somewhat after others had poked and tried, and on the heels of those true first attempts they stepped in and became household names.

Let’s return to the notion of coopetition. I assume we can agree that generally the auto makers aren’t very likely to want to be cooperative with each other and usually consider themselves head-on competitors. I realize there have been exceptions, such as the deal that PSA Peugeot Citroen and Toyota made to produce the Peugeot 107 and the Toyota Aygo, but such arrangements are somewhat sparse. Likewise, the high-tech firms tend to strive toward being competitive with each other, rather than cooperative. Again, there are exceptions, such as a willingness to serve on groups that are putting together standards and protocols for various architectural and interface aspects (think of the World Wide Web Consortium, W3C, as an example).

We’ve certainly already seen that auto makers and high-tech firms are willing to team up in the AI self-driving cars realm.

In that sense, it’s kind of akin to the Wintel type of arrangement. I don’t think we’d infer these are true coopetition arrangements since the parties weren’t especially competing to begin with. Google’s Waymo has teamed up with Chrysler to outfit the Pacifica minivans with AI self-driving car capabilities. Those two firms weren’t especially competitors. I realize you could assert that Google could get into the car business and be an auto maker if it wanted to, which is quite the case, and it could buy its way in or even start something from scratch. You could also assert that Chrysler is doing its own work on high-tech aspects for AI self-driving cars and in that manner might be competing with Waymo. It just doesn’t quite add up, though, to them being true competitors per se, at least not right now.

So, let’s put to the side the myriad of auto maker and high-tech firm cooperatives underway and say that we aren’t going to label those as coopetitions. Again, I realize you can argue the point and might say that even if they aren’t competitors today, they could become competitors a decade from now. Yes, I get that. Just go along with me on this for now and we can keep in mind the future possibilities too.

Consider these thought-provoking questions:

  •         Could we get the auto makers to come together into a coopetition arrangement to establish the basis for AI self-driving cars?
  •         Could we get the high-tech firms to come together into a coopetition arrangement to establish the basis for AI self-driving cars?
  •         Could we get the auto makers and tech firms that are already in bed with each other to altogether come together to enter into a coopetition arrangement?

I get asked these questions during a number of my industry talks. There are some that believe the goal of achieving AI self-driving cars is so crucial for society, so important for the benefit of mankind, that it would be best if all of these firms could come together, shake hands, and forge the basis for AI self-driving cars.

For my article about idealists in AI self-driving cars, see: https://aitrends.com/selfdrivingcars/idealism-and-ai-self-driving-cars/

Why would these firms be willing to do this? Shouldn’t they instead want to “win” and become the standard for AI self-driving cars? The tempting $7 trillion is a pretty alluring pot of gold. It seems premature to already throw in the towel and allow other firms to grab a piece of the pie. Maybe your efforts will knock them out of the picture. You’ll have the whole kit and caboodle yourself.

Those proposing a coopetition notion for AI self-driving cars are worried that the rather “isolated” attempts by each of the auto makers and the tech firms are going to either lead to failure in terms of true AI self-driving cars, or stretch out for a much longer time than needed. Suppose you could have true AI self-driving cars by the year 2030 if you did a coopetition deal, versus not until 2050 or 2060 otherwise. This means that for perhaps 20 or 30 years there could have been true AI self-driving cars, benefiting us all, and yet we let it slip away due to being “selfish” and allowing the AI self-driving car makers to duke it out.

For selfishness and AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/selfishness-self-driving-cars-ai-greed-good/

You’ve likely seen science fiction movies about a giant meteor that is going to strike the earth and destroy all that we have, or an alien force from Mars that is heading to earth and likely to enslave us all. In those cases, there has been a larger foe to contend with. As such, it got all of the countries of the world to set aside their differences and band together to try and defeat the larger foe. I’m not saying that would happen in real life, and perhaps instead everyone would tear each other apart, but anyway, let’s go with the happy-face scenario and say that when faced with tough times, we could get together those that otherwise despise each other or see each other as enemies, and they would become cooperative.

That’s what some want to have happen in the AI self-driving cars realm. The bigger foe is the number of annual fatalities due to car accidents. The bigger foe also includes the lack of democratization of mobility, which it is hoped AI self-driving cars will remedy by bringing forth greater democratization. The bigger foe is the need to increase mobility for those that aren’t able to be mobile. In other words, the belief is that the basket of benefits from AI self-driving cars, and the basket of woes they would overturn, are reason enough for the auto makers and tech firms to band together into a coopetition.

Zero-Sum Versus Coopetition in Game Theory

Game theory comes into play in coopetition.

If you believe in a zero-sum game, whereby the pie is just one size and those that get a bigger piece of the pie are doing so at the loss of others that will get a smaller piece of the pie, the win-lose perspective makes it hard to consider participating in a coopetition. On the other hand, if it could be a win-win possibility, whereby the pie can be made bigger, and thus the participants each get sizable pieces of pie, it makes being in the coopetition seemingly more sensible.
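To make the arithmetic concrete, here is a minimal Python sketch; the dollar figures, the assumed bigger pie, and the winner/loser split are purely illustrative assumptions, not forecasts:

```python
# Toy comparison of a zero-sum split versus a win-win "bigger pie" split.
# All numbers are hypothetical assumptions, purely for illustration.

solo_pie = 7.0   # trillions: the market if firms fight it out alone
coop_pie = 9.0   # assumed larger market if coopetition speeds adoption

# Zero-sum view: the winner takes most of a fixed pie, the loser gets scraps.
winner_share = 0.80 * solo_pie
loser_share = 0.20 * solo_pie

# Win-win view: two coopetition partners split a bigger pie evenly.
partner_share = coop_pie / 2

print(f"Zero-sum: winner {winner_share:.1f}T, loser {loser_share:.1f}T")
print(f"Coopetition: each partner gets {partner_share:.1f}T")
# A firm sure it will win prefers 5.6T over 4.5T; a firm unsure of
# winning prefers a likelier 4.5T over risking ending up with 1.4T.
```

The sketch simply shows that the appeal of coopetition hinges on whether you believe the pie grows and on your perceived odds of being the outright winner.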

How would things fare in the AI self-driving cars realm? Suppose that auto maker X has teamed up with high-tech firm Y; they are the XY team, and they are frantically trying to be the first with a true AI self-driving car. Meanwhile, we’ve got auto maker Q and its high-tech partner firm Z, and so the QZ team is also frantically trying to put together a true AI self-driving car.

Would XY be willing to get into a coopetition with QZ, and would QZ want to get into a coopetition with XY?

If XY believes they need no help and will be able to achieve an AI self-driving car and do so on a timely basis and possibly beat the competition, it seems unlikely they would perceive value in doing the coopetition. You can say the same about QZ, namely, if they think they are going to be the winner, there’s little incentive to get into the coopetition.

Some would argue that they could potentially shave the costs of trying to achieve an AI self-driving car by joining together. Pool resources. Do R&D together. They could possibly do some kind of technology transfer with each other, with one having gotten more advanced in some area than the other, and thus they trade on the things each has gotten farthest along on. There’s a steep learning curve on the latest in AI, and so XY and QZ could perhaps boost each other up that learning curve. It seems like the benefits of being in a coopetition are convincing.

And, it is already the case that these auto makers and tech firms are eyeing each other. They each are intently desirous of knowing how far along the other is. They are hiring away key people from each other. Some would even say there is industrial espionage underway. Plus, in some cases, there are AI self-driving car developers that appear to have stepped over the line and stolen secrets about AI self-driving cars.

See my article about the stealing of secrets of AI self-driving cars: https://aitrends.com/selfdrivingcars/stealing-secrets-about-ai-self-driving-cars/

This coopetition is not so easy to arrange, let alone to even consider. Suppose you are the CEO of auto maker X, which has already forged a relationship with high-tech firm Y. The marketplace perceives that you are doing the right thing and moving forward with AI self-driving cars. This is a crucial perception for any auto maker, since we’ve already seen that auto makers will get drubbed by the marketplace, such as their shares dropping, if they don’t seem committed to achieving an AI self-driving car. It’s become a key determiner for the auto maker and its leadership.

The marketplace figures that your firm, you the auto maker, will be able to achieve AI self-driving cars and that consumers will flock to your cars. Consumers will be delighted that you have AI self-driving cars. The other auto makers will fall far behind in terms of sales as everyone switches over to you. In light of that expectation, it would be somewhat risky to come out and say that you’ve decided to do a coopetition with your major competitors.

I’d bet that there would be a stock drop as the marketplace reacted to this approach. If all the auto makers were in the coopetition, I suppose you could say that the money couldn’t flow anywhere else anyway.

On the other hand, if only some of the auto makers were in the coopetition, it would force the marketplace into making a bet. You might put your money into the auto makers that are in the coopetition, under the belief they will succeed first, or you might put your money into the other auto makers that are outside the coopetition, under the belief they will win and win bigger because they aren’t having to share the pie.

Speaking of which, what would be the arrangement for the coopetition? Would all of the members participating have equal use of the AI self-driving car technologies developed? Would they be in the coopetition forever or only until a true AI self-driving car was achieved, or until some other time or ending state? Could they take whatever they got from the coopetition and use it in whatever they wanted, or would there be restrictions? And so on.

I’d bet that the coopetition would have a lot of tension. There is always bound to be professional differences of opinion. A member of the coopetition might believe that LIDAR is essential to achieving a true AI self-driving car, while some other member says they don’t believe in LIDAR and see it as a false hope and a waste of time. How would the coopetition deal with this?

For other aspects about differences in opinions about AI self-driving car designs, see my article: https://aitrends.com/selfdrivingcars/crossing-the-rubicon-and-ai-self-driving-cars/

Also, see my article about egocentric designs: https://aitrends.com/selfdrivingcars/egocentric-design-and-ai-self-driving-cars/

Normally, a coopetition is likely to be formulated when the competitors are willing to find a common means to contend with something that is relatively non-strategic to their core business. If you believe that AI self-driving cars are the future of the automobile, it’s hard to see that it wouldn’t be considered strategic to the core business. Indeed, even though today we don’t necessarily think of AI self-driving cars as a strategic core per se, because it’s still so early in the life cycle, anyone with a bit of vision can see that soon enough it will be.

If the auto makers did get together in a coopetition, and they all ended up with the same AI self-driving car technology, how else would they differentiate themselves in the marketplace? I realize you can say that even today the auto makers are pretty much the same in the sense that they offer a car that has an engine and has a transmission, etc. The “technology” you might say is about the same, and yet they do seem to differentiate themselves. Often, the differentiation is more on the style and looks of the car than on the tech side of things.

For how auto makers might be marketing AI self-driving cars in the future, see my article: https://aitrends.com/selfdrivingcars/marketing-self-driving-cars-new-paradigms/

For those that believe that the AI part of the self-driving car will end up being the same for cars of the future, and that it won’t be a differentiator to the marketplace, this admittedly makes the case for banding into a coopetition on the high-tech stuff. If the auto makers believe that the AI will be a commodity item, why not get into a coopetition, figure this arcane high-tech AI stuff out, and be done with it? No sense in fighting over something that is going to be generic across the board anyway.

At this time, it appears that the auto makers believe they can reach a higher value by creating their own AI self-driving car, doing so in conjunction with a particular high-tech firm that they’ve chosen, rather than doing so via a coopetition. Some have wondered if we’ll see a high-tech firm that opts to build its own car, maybe from scratch, but so far that doesn’t seem to be the case (in spite of the rumors about Apple, for example). There are some firms that are developing both the car and the high-tech themselves, such as Tesla, and see no need to band with another firm, as yet.

Right now, the forces appear to be swayed toward the “don’t” side of doing a coopetition. Things could change. Suppose that no one is able to achieve a true AI self-driving car? It could be that the pressures become large enough (the bigger foe) that the auto makers and tech firms consider the coopetition notion. Or, maybe the government decides to step in and forces some kind of coopetition, doing so under the belief that it is a societal matter and regulatory guidance is needed to get us to true AI self-driving cars. Or, maybe indeed aliens from Mars start to head here and we realize that if we just had AI self-driving cars we’d be able to fend them off.

For my piece about conspiracy theories and AI self-driving cars, see: https://aitrends.com/selfdrivingcars/conspiracy-theories-about-ai-self-driving-cars/

There’s the old line about if you can’t beat them, join them. For the moment, it’s assumed that the ability to beat them outweighs the join-them alternative. The year 2050 is still off in the future and anything might happen on the path to that $7 trillion.

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.

Ensemble Machine Learning for AI Self-Driving Cars


By Lance Eliot, the AI Trends Insider

How do you learn something?

That’s the same question that we need to ask when trying to achieve Machine Learning (ML). In what way can we undertake “learning” for a computer and seek to “teach” the system to do things of an intelligent nature? That’s a holy grail for those in AI that are aiming to avoid having to program their way into intelligent behavior. Instead, the notion is to be able to somehow get a computer to learn what to do and not need to explicitly write out every step or knowledge aspect required.

Allow me a moment to share with you a story about the nature of learning.

Earlier in my career, I started out as a professor and was excited to teach classes for both undergraduate and graduate level students. Those first few lectures were my chance to aid those students in learning about computer science and AI. Before each lecture I spent a lot of time preparing my lecture notes and was ready to fill the classroom whiteboard with all the key principles they’d need to know. Sure enough, I’d stride into the classroom and start writing on the board and kept doing so until the bell rang to signal that the class session was finished.

After doing this for about a week or two, a student came to my office hours and asked if there was a textbook they could use to study from. I was taken aback, since I had purposely not chosen a textbook in order to save the students money. I figured that my copious notes on the board would be better than some stodgy textbook and would spare them from having to spend a fortune on costly books. The student explained that though they welcomed my approach, they were the type of person that found it easier to learn by reading a book. Trying not to offend me, the student gingerly inquired as to whether my lecture notes could be augmented by a textbook.

I considered this suggestion and sure enough found a textbook that I thought would be pretty good to recommend, and at the next session of the class mentioned it to the students, indicating that it was optional and not mandatory for the class.

While walking across the campus after a class session, another student came up to me and asked if there were any videos of my lectures. I was suspicious that the student wanted to skip coming to lecture and figured they could just watch a video instead, but this student sincerely convinced me that she found that watching a video allowed her to start and stop the lecture while trying to study the material after class sessions. She said that my fast pace during class didn’t allow time for her to really soak in the points and that by having a video she would be able to do so at a measured pace on her own time.

I considered this suggestion and provided to the class links to some videos that were pertinent to the lectures that I was giving.

Yet another student came to see me about another facet of my classes. For the undergrad lectures, I spoke the entire time and didn’t allow for any classroom discussion or interaction. This seemed sensible because the classes were large lecture halls that had hundreds of students attending. I figured it would not be feasible to carry on a Socratic dialogue similar to what I was doing in the graduate level courses, where I had maybe 15-20 students per class. I had even been told by some of the senior faculty that trying to engage undergrads in discussion was a waste of time anyway, since those newbie students were neophytes and it would be ineffective to allow any kind of Q&A with them.

Well, an undergrad student came to see me and asked if I was ever going to allow Q&A during my lectures. When I started to discuss this with the student, I inquired as to what kinds of questions he was thinking of asking. It turns out that we had a very vigorous back-and-forth on some meaty aspects of AI, and it made me realize that there were perhaps students in the lecture hall that could indeed engage in a hearty dialogue during class. At my next lecture, I opted to stop every twenty minutes and gauge the reaction from the students and see if I could get a brief and useful interaction going with them. It worked, and I noticed that many of the students became much more interested in the lectures by this added feature of allowing for Q&A (even for so-called “lowly” undergraduate students, which was how my fellow faculty seemed to think of them).

Why do I tell you this story about my initial days of being a professor?

I found out pretty quickly that using only one method or approach to learning is not necessarily very wise. My initial impetus to do fast-paced, all-spoken lectures was perhaps sufficient for some students, but not for all. Furthermore, even the students that were OK with that narrow singular approach were likely to tap into other means of learning if I was able to provide them. By augmenting my lectures with videos, with textbooks, and by allowing for in-classroom discussion, I was providing a multitude of means to learn.

You’ll be happy to know that I learned that learning is best done via offering multiple ways to learn. Allow the learner to select which approach best fits to them. When I say this, also keep in mind that the situation might determine which mode is best at that time. In other words, don’t assume that someone that prefers learning via in-person lecture is always going to find that to be the best learning method for them. They might switch to a preference for say video or textbook, depending upon the circumstance.

And, don’t assume that each learner will learn via only one method. Student A might find that using lectures and the textbook is their best fit. Student B might find lectures to be unsuitable for learning and prefer dialogue and videos. Each learner will have their own one-or-more learning approaches that work best for them, and this varies by the nature of the topic being learned.

I kept all of this in mind for the rest of my professorial days and always tried to provide multiple learning methods to the students, so they could choose the best fit for them.

Ensemble Learning Employs Multiple Methods, Approaches

The phrase sometimes used to refer to this notion of multiple learning methods is ensemble learning. When you consider the word “ensemble” you tend to think of multiples of something, such as multiple musicians in an orchestra or multiple actors in a play. They each have their own role, and yet they also combine together to create a whole.

Ensemble machine learning is the same kind of concept. Rather than using only one method or approach to “teach” a computer to do something, we might use multiple methods or approaches. These multiple methods or approaches are intended to somehow ultimately work together so as to form a group. In other words, we don’t want the learning methods to be so disparate that they don’t end up working together. It’s like musicians that are supposed to play the same song together. The hope is that the multiple learning methods will lead to a greater chance of having the learner learn, which in this case is the computer system as the learner.

At the Cybernetic AI Self-Driving Car Institute, we are using ensemble machine learning as part of our approach to developing AI for self-driving cars.

Allow me to further elaborate.

Suppose I was trying to get a computer system to learn some aspect of how to drive a car. One approach might be to use artificial neural networks (ANN). This is very popular and a relatively standardized way to “teach” the computer about certain driving task aspects. That’s just one approach though. I might also try to use genetic algorithms (GA). I might also use support vector machines (SVM). And so on. These could be done in an ensemble manner, meaning that I’m trying to “teach” the same thing but using multiple learning techniques to do so.

For the use of genetic algorithms in AI self-driving cars see my article: https://aitrends.com/selfdrivingcars/genetic-algorithms-self-driving-cars-darwinism-optimization/

For my article about support vector machines in AI self-driving cars see: https://aitrends.com/selfdrivingcars/support-vector-machines-svm-ai-self-driving-cars/

For my articles about machine learning for AI self-driving cars see:

Benchmarks and machine learning: https://aitrends.com/ai-insider/machine-learning-benchmarks-and-ai-self-driving-cars/

Federated machine learning: https://aitrends.com/selfdrivingcars/federated-machine-learning-for-ai-self-driving-cars/

Explanation-based machine learning: https://aitrends.com/selfdrivingcars/explanation-ai-machine-learning-for-ai-self-driving-cars/

Deep reinforcement learning: https://aitrends.com/ai-insider/human-aided-training-deep-reinforcement-learning-ai-self-driving-cars/

Deep compression pruning in machine learning: https://aitrends.com/selfdrivingcars/deep-compression-pruning-machine-learning-ai-self-driving-cars-using-convolutional-neural-networks-cnn/

Simulations and machine learning: https://aitrends.com/selfdrivingcars/simulations-self-driving-cars-machine-learning-without-fear/

Training data and machine learning: https://aitrends.com/machine-learning/machine-learning-data-self-driving-cars-shared-proprietary/

Now, you don’t normally just toss together an ensemble. When you put together a musical band, you would probably be astute to pick musicians that have particular musical skills and play particular musical instruments. You’d want them to end up being complementary with each other. Sure, some might be duplicative, such as having more than one guitar player, but that could be because one will be the lead guitarist and the other perhaps the bass guitarist.

The same can be said for doing ensemble machine learning. You’ll want to select machine learning approaches or methods that seem to make sense when considered in their totality as a group. What is the strength of each ML chosen for the ensemble? What is the weakness of each ML chosen? By having multiple learning methods, hopefully you’ll be able to either find the “best” one for the given learning circumstance at hand, or you might be able to combine them in a manner that offers a synergistic outcome beyond each of them performing individually.

So, you could select some N number of machine learning approaches, train them on some data, and then see which of them learned the best, based on some kind of metrics. You might, after training, feed the MLs new data and see which does the best job. For example, suppose I’m trying to train toward being able to discern street signs. I feed a bunch of pictures of street signs into each of the MLs of my ensemble. After they’ve each used their own respective learning approach, I then test them. I do so by feeding in new pictures of street signs and seeing which of them most consistently can identify a stop sign versus a speed limit sign.

See my article about street signs and AI self-driving cars: https://aitrends.com/selfdrivingcars/making-ai-sense-of-road-signs/

Out of my N number of machine learning approaches that I selected for this street sign learning task, suppose that the SVM turns out to be the “best” as based on my testing after the learning has occurred. I might then decide that for the street sign interpretation I’m going to exclusively use SVM for my AI self-driving car system. This aspect of selecting a particular model out of a set of models is sometimes referred to as the “bucket of models” approach, wherein you have a bucket of models in the ensemble and you choose one out of them. Your selection is based on a kind of “bake-off” as to which is the better choice.
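As a rough illustration, here is what such a bake-off might look like using scikit-learn; this is a minimal sketch in which synthetic data stands in for street sign features, and the particular models in the bucket are merely assumed examples:

```python
# Bucket-of-models "bake-off": train several learners on the same data,
# then keep whichever scores best on held-out validation data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for street sign features (e.g., stop vs. speed limit).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

bucket = {
    "svm": SVC(),
    "tree": DecisionTreeClassifier(),
    "ann": MLPClassifier(max_iter=500),
}

scores = {}
for name, model in bucket.items():
    model.fit(X_train, y_train)
    scores[name] = model.score(X_val, y_val)  # validation accuracy

best = max(scores, key=scores.get)
print(scores, "-> selected:", best)
```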

But, suppose that I discover that of the N machine learning approaches, sometimes the SVM is the “best” and meanwhile there are other times that the GA is better. I don’t necessarily need to confine myself to choosing only one of the learning methods for the system. What I might do is opt to use both SVM and GA, and be aware beforehand of when each is preferred to come into play. This is akin to having the two guitarists in my musical band, each with their own strengths and weaknesses; if I’m thoughtful about how to arrange my band when they play a concert, I’ll put each into the part of the music that seems best for their capabilities. Maybe one of them starts the song, and the other ends the song. Or however arranging them seems most suitable to their capabilities.

Thus, we might choose N number of machine learning approaches for our ensemble, train them, and then decide that some subset Q are chosen to become part of the actual system we are putting together. Q might be 1, in that maybe there’s only one of the machine learning approaches that seemed appropriate to move forward with, or Q might be 2, or 3, and so on up to the number N. If we do select more than just one, the question then arises as to when and how to use the Q number of chosen machine learning approaches.

In some cases, you might use each separately, such as maybe machine learning approach Q1 is good at detecting stop signs, while Q2 is good at detecting speed limit signs. Therefore, you put Q1 and Q2 into the real system and when it is working you are going to rely upon Q1 for stop sign detection and Q2 for speed limit sign detection.

In other cases, you might decide to combine together the machine learning approaches that have been successful to get into the set Q. I might decide that whenever a street sign is being analyzed, I’ll see what Q1 has to indicate about it, and what Q2 has to indicate about it. If they both agree that it is a stop sign, I’ll be satisfied that it’s likely a stop sign, and especially if Q1 is very sure of it. If they both agree that it is speed limit sign, and especially if Q2 is very sure of it, I’ll then be comfortable assuming that it is a speed limit sign.

Various Ways to Combine the Q Sets

There are various ways you might combine together the Q’s. You could simply consider them all equal in terms of their voting power, which, when paired with training each learner on a bootstrap sample of the data, is generally called “bagging” or bootstrap aggregation. Or, you could consider them to be unequal in their voting power. In this case, we’re going with the idea that Q1 is better at stop sign detection, so I’ll add a weighting to its results: if its interpretation is a stop sign, I’ll give it a lot of weight, while if Q2 detects a stop sign I’ll give it a lower weighting because I already know beforehand it’s not so good at stop sign detection.
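In scikit-learn terms, a minimal sketch of equal versus weighted voting could look like the following; the particular learners and the weights are illustrative assumptions, standing in for whatever your street sign testing revealed:

```python
# Combine learners by vote: equal say for everyone, or explicit weights
# encoding that Q1 is trusted more than Q2. Weights are assumptions here.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, random_state=1)

learners = [
    ("q1", LogisticRegression(max_iter=500)),
    ("q2", SVC(probability=True)),
    ("q3", DecisionTreeClassifier()),
]

equal_vote = VotingClassifier(learners, voting="hard")  # one vote each
weighted_vote = VotingClassifier(
    learners,
    voting="soft",      # average the predicted probabilities
    weights=[3, 1, 1],  # trust q1 three times as much (assumed)
)

for clf in (equal_vote, weighted_vote):
    clf.fit(X, y)
    print(clf.voting, "voting accuracy:", clf.score(X, y))
```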

These machine learning approaches that are chosen for the ensemble are often referred to as individual learners. You can have any N number of these individual learners and it all depends on what you are trying to achieve and how many machine learning approaches you want to consider for the matter at-hand. Some also refer to these individual learners as base learners. A base or individual learner can be whatever machine learning approach you know and are comfortable with, and that matches to the learning task at hand, and as mentioned earlier can be ANN, SVM, GA, decision trees, etc.

Some believe that to make the learning task fair, you should provide essentially the same training data to the machine learning approaches that you’ve chosen for the matter at hand. Thus, I might select one sample of training data that I feed into each of the N machine learning approaches. I then see how each of those machine learning approaches did based on the sample data. For example, I select a thousand street sign images and feed them into my N machine learning approaches, which in this case are, say, three: ANN, SVM, GA.

Or, instead, I might take a series of samples of the training data. Let’s refer to one such sample as S1, consisting of a thousand images randomly chosen from a population of 50,000 images, and feed the sample S1 into machine learning approach Q1. I might then select another sample of training data, let’s call it S2, consisting of another randomly selected set of a thousand images, and feed it into machine learning approach Q2. And so on for each of the N machine learning approaches that I’ve selected.
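Here is a short sketch of that per-learner sampling, with numpy arrays standing in for the population of 50,000 images:

```python
# Draw a separate random sample for each learner: S1 for Q1, S2 for Q2.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50_000, 32))    # placeholder features for 50,000 images
y = rng.integers(0, 2, size=50_000)  # placeholder labels (two sign classes)

def draw_sample(n=1_000):
    idx = rng.choice(len(X), size=n, replace=False)  # 1,000 random images
    return X[idx], y[idx]

S1 = draw_sample()  # fed to machine learning approach Q1
S2 = draw_sample()  # fed to machine learning approach Q2
print(S1[0].shape, S2[0].shape)  # (1000, 32) (1000, 32)
```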

I could then see how each of the machine learning approaches did on their respective sample data. I might then opt to keep all of the machine learning approaches for my actual system, or I might selectively choose which ones will go into my actual system. And, as mentioned earlier, if I have selected multiple machine learning approaches for the actual system then I’ll want to figure out how to possibly combine together their results.

You can further advance the ensemble learning technique by adding learning upon learning. Suppose I have a base set of individual learners. I might feed their results into a second level of machine learning approaches that act as meta-learners. In a sense, you can use the first level to do some initial screening and scanning, and then potentially have a second level that aims at further refining what the first level found. For example, suppose my first level identified that a street sign is a speed limit sign, but the first level isn’t capable of then determining what the speed limit numbers are. I might feed the results into a second level that is adept at ascertaining the numbers on the speed limit sign and can detect the actual speed limit as posted on the sign.
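scikit-learn offers a StackingClassifier that captures this two-level idea; here is a minimal sketch, with the choice of base learners and meta-learner being assumptions for illustration:

```python
# Two-level ensemble: base learners feed their outputs into a
# meta-learner that learns how to combine and refine their results.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, random_state=2)

base_learners = [("svm", SVC()), ("tree", DecisionTreeClassifier())]
meta_learner = LogisticRegression()  # second level, trained on base outputs

stack = StackingClassifier(estimators=base_learners,
                           final_estimator=meta_learner)
stack.fit(X, y)
print("stacked accuracy:", stack.score(X, y))
```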

The ensemble approach to machine learning allows for a lot of flexibility in how you undertake it. There’s no particular standardized way in which you are supposed to do ensemble machine learning. It’s an area still evolving as to what works best and how to most effectively and efficiently use it.

Some might be tempted to throw every machine learning approach into an ensemble under the blind hope that it will then showcase which is the best for your matter at-hand. This is not as easy as it seems. You need to know what the machine learning approach does and there’s an effort involved in setting it up and giving it a fair chance. In essence, there are costs to undertaking this and you shouldn’t be using a scattergun style way of doing so.

For any particular matter, there are going to be so-called weak learners and strong learners. Some of the machine learning approaches are very good in some situations and quite poor in others. You also need to be thinking about the generalizability of the machine learning approaches. You could be fooled when feeding sample data into the machine learning approaches: one of them might look really good, but it turns out it has overfitted to the sample data. It might not then do you much good once you start feeding new data into the mix.

Another aspect is the value of diversity. If you have no diversity, such as only one machine learning approach in use, there are likely to be situations wherein it isn’t as good as some other machine learning approach, and so you should consider having diversity. By having more than one machine learning approach in your mix, you are gaining diversity, which will hopefully pay off for varying circumstances. As with anything else, though, if you have too many machine learning approaches, it can lead to muddled results and you might not be able to know which one to believe for a given result.

Keep in mind that any ensemble you put together will require computational effort, in essence computing power, not only to do the training but, more importantly, when receiving new data and responding accordingly. Thus, if you opt to have a slew of machine learning approaches that are going to become part of your final set Q, and if you are expecting them to run in real-time on-board an AI self-driving car, this is something you need to carefully assess. The amount of memory consumed and the processing power consumed might be prohibitive. There’s a big difference between using an ensemble for a research-oriented task, wherein you might not have any particular time constraints, versus using one in an AI self-driving car that has severe time constraints and limits on the computational processing available.

For those of you familiar with Python, you might consider trying the Python-oriented scikit-learn machine learning library and experimenting with various ensemble machine learning aspects to get an understanding of how to use an ensemble learning approach.
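For instance, a first experiment might be scikit-learn’s built-in bootstrap aggregation; a minimal sketch:

```python
# Bagging (bootstrap aggregation) in one call: scikit-learn trains many
# copies of a base learner on bootstrap samples and lets them vote.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, random_state=3)

bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=25,
                        random_state=3)
bag.fit(X, y)
print("bagged accuracy:", bag.score(X, y))
```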

If we’re going to have true AI systems, and especially AI self-driving cars, the odds are that we’ll need to deploy multiple machine learning models. Trying to only program our way directly to full AI is unlikely to be feasible. As Benjamin Franklin is often credited with saying: “Tell me and I forget. Teach me and I remember. Involve me and I learn.” Using an ensemble learning approach is, to date, a vital technique to get us toward that involve-me-and-learn goal. We might still need even better machine learning models, but the chances are that no matter what we discover for better MLs, we’ll end up needing to combine them into an ensemble. That’s how the music will come out sounding robust and fulfilling for achieving ultimate AI.

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.


Code Obfuscation for AI Self-Driving Cars


By Lance Eliot, the AI Trends Insider

Earlier in my career, I was hired to reverse engineer a million lines of code for a system that the original developer had long since disappeared. He had left behind no documentation. The firm had at least gotten him to provide a copy of the source code. Nobody at the firm knew anything about how the code itself worked. The firm was dependent upon the compiled code executing right and they simply hoped and prayed that they would not need to make any changes to the system.

Not a very good spot to be in.

I was told that the project was a hush-hush one and that I should not tell anyone else what I was doing. They would only let me see the source code while physically at their office, and otherwise I wasn’t to make a copy of it or take it off the premises. They even gave me a private room to work in, rather than sitting in a cubicle or other area where fellow staffers were. I became my own miniature skunk works, of sorts.

There was a mixture of excitement and trepidation for me about this project. I had done other reverse engineering efforts before and knew how tough it could be to figure out someone else’s code. Any morsels of “documentation” were always welcomed, even if the former developer(s) had only written things onto napkins or the backs of recycled sheets of paper. Also, I usually had someone that kind of knew something about the structure of the code or at least had heard rumors via water cooler chats with the tech team. In this case, the only thing I had available were the end-users that used the system. I was able to converse with them and find out what the system was supposed to do, how they interacted with it, the outputs it produced, etc.

For a million lines of code, and with supposedly just one developer, he presumably was churning out a lot of lines of code for being just one person. I was told that he was a “coding genius” and that he was always able to “magically” make the system do whatever they needed. He was a great resource, they said. He was willing to make changes on the fly. He would come in during weekends to make changes. They felt like they had been given the “hacker from heaven” (with the word hacker in this case meaning a proficient programmer, and not the nowadays more common use as a criminal or cyber hacker).

I gently pointed out that if he was such a great developer, dare I say software engineer, how come he hadn’t documented his work? How come no one else was ever able to lay eyes on his work? How come he was the only one that knew what it did? I pointed out that they had painted themselves into a corner. If this heavenly hacker got hit by a bus (and floated upstairs, if you know what I mean), what then?

Well, they sheepishly admitted that I must be some kind of mind reader because he had one day just gotten up and left the company. There were stories that his girlfriend had gotten kidnapped in some foreign country and that he had arranged for mercenaries to rescue her, and that he personally was going there to be part of the rescue team. My mouth gaped open at this story. Sure, I suppose it could be true. I kind of doubted it. Seemed bogus.

The whole thing smelled like the classic case of someone that was protective of their work, and also maybe wanted a bit of job security. It’s pretty common that some developers will purposely aim to not document their code and make it as obscure as they can, in hopes of staving off losing their job. The idea is that if you are the only one that knows the secret sauce, the firm won’t dare get rid of you. You will have them trapped. Many companies have gotten themselves into that same predicament. And, though it seems like an obvious ploy to you and me, these firms often are clueless about what is taking place and fall into the trap without any awareness. When the person suddenly departs, the firm wakes up “shockingly” to what they’ve allowed to happen.

Some developers that get themselves into this posture will also at times try to push their luck. They demand that the firm pay them more money. They demand that the firm let them have some special perks. They keep upping the ante figuring that they’ll see how far they can push their leverage. This will at times trigger a firm to realize that things aren’t so kosher. At that point, they often aren’t sure of what to do. I’ve been hired as a “code mercenary” to parachute into such situations and try to help bail out the firm. As you might guess, the original developer, if still around, becomes nearly impossible to deal with and will refuse to lift a finger to help share or explain the secret sauce.

When I’ve discussed these situations with the programmer that had led things in that direction, they usually justified it. They would tell me that the firm at first paid them less than what a McDonald’s hamburger slinger would get. They got no respect for having finely honed programming skills. If the firm was stupid enough to then allow things to get into a posture whereby the programmer now had the upper hand, it seems like fair play. The company was willing to “cheat” him, so why shouldn’t he do likewise back to the company. The world’s a tough place and we each need to make our own choices, is what I was usually told.

Besides, it often played out over months and sometimes years, and the firm could have at any time opted to do something to prevent the continuing and deepening dependency. One such programmer told me that he had “saved” the company a lot of money. Writing documentation would have required more hours and more billable time. Showing the code to others and teaching them how it worked, once again more billable time. Furthermore, just like the case that I began to describe herein, he had worked evenings and weekends, being at the beck and call of the firm. They had gotten a great deal and had no right to complain.

Anyway, I’ll put to the side for the moment the ethics involved in all of this.

For those of you interested in the ethical aspects of programmers, please see my article: https://aitrends.com/selfdrivingcars/algorithmic-transparency-self-driving-cars-call-action/

When I took a look at the code of the “man that went to save his girlfriend in a strange land,” here’s what I found: Ludwig van Beethoven, Wolfgang Amadeus Mozart, Johann Sebastian Bach, Richard Wagner, Joseph Haydn, Johannes Brahms, Franz Schubert, Peter Ilyich Tchaikovsky, etc.

Huh?

Allow me to elaborate. The entire source code consisted of variables with names of famous musical composers, and likewise all of the structure and objects and subroutines were named after such composers or were based on titles of their songs. Instead of seeing something like LoopCounter = LoopCounter + 1, it would say Mozart = Mozart + 1. Imagine a financial banking application that instead of referring to Account Name, Account Balance, Account Type, it instead said Bach, Wagner, and Brahms, respectively.

So, when trying to figure out the code, you’d need to tease out of the code that whenever you see the use of “Bach” it really means the Account Name field. When you see the use of Wagner it really means the Account Balance. And so on.

I was kind of curious about this seeming fascination with musical composers. When I asked if the developer was known for perhaps having a passion for classical music, I was told that maybe so, but not that anyone noticed.

I’d guess that it wasn’t so much his personal tastes in composers, and instead it was more likely his interest in code obfuscation.

You might not be aware that some programmers will purposely write their code in a manner to obfuscate it. They will do exactly what this developer had done. Instead of using naming that would logically befit the circumstance, they make up other names. The idea is that this makes it much harder for anyone else to figure out the code. This ties back to my earlier point about the potential desire to become the only person that can do the maintenance and upkeep on the code. By making things as obfuscated as you can, you cause anyone else to either be baffled or have to climb up a steep learning curve to divine your secret sauce code.

If the person’s hand was forced by the company insisting that they share the code with Joe or Samantha, the programmer could say, sure, I’ll do so, and then hand them something that seems like utter mush. Here you go, have fun, the developer would say. If Joe and Samantha had not seen this kind of trickery before, they would likely roll their eyes and report back to management that it was going to take a long time to ferret out how the thing worked.

I knew the CEO of a software company to whom this very thing happened, and when I told him that the programmer had made the code obfuscated, the CEO nearly blew his top. We’ll sue him for every dime we ever paid him, the CEO exclaimed. We’ll hang him out to dry and tell any future prospective employer that he’s poison and don’t ever hire him. And so on. Of course, trying to go after the programmer for this is going to be somewhat problematic. Did the code work? Yes. Did it do what the firm wanted? Yes. Did the firm ever say anything about the code having to be more transparently written? No.

Motivations for Code Obfuscation Vary

I realize that some of you have dealt with code that appears to be the product of obfuscation, and yet you might say that it wasn’t done intentionally. Yes, I agree that sometimes the code obfuscation can occur by happenstance. A programmer that doesn’t consider the ramifications of their coding practices might indeed write such code. They maybe didn’t intend to write something obfuscated, it just turned out that way. Suppose this programmer loved the classics and the composers, and when he started the coding he opted to use their names. That was well and good for say the first thousand lines of code.

He then kept building upon the initial base of code. Might as well continue the theme of using composer names. After a while, the whole darned thing is shaped in that way. It can happen, bit by bit. At each point in time, you think it doesn’t make sense to redo what you’ve already done, and so you just keep going. It might be like constructing a building for which you first laid down some wood beams; even if you should perhaps be using steel instead, because the building is actually ultimately going to be a skyscraper, you started with wood, you kept adding to it with wood, and so wood it is.

For those of you that have pride as a software engineer, these stories often make you sick to your stomach. It’s those seat-of-the-pants programmers that give software development and software developers a bad name. Code obfuscation for a true software engineer is the antithesis of what they try to achieve. It’s like seeing a bridge with rivets and struts made of paper: you know the whole thing was done in a jury-rigged manner. That’s not how you believe good and proper software is written.

I think we can say this anyway: code obfuscation can happen for a number of reasons, including possibly:

  •         Unintentionally and without awareness of it as a concern
  •         Unintentionally and by step at a time falling into it
  •         Intentionally and with some loathsome intent to obfuscate
  •         Intentionally but with an innocent or good meaning intent

So far, the intent to obfuscate has been suggested as something being done for job security or other personal reasons that have seemed somewhat untoward. There’s another reason to want to obfuscate the code, namely for code security or privacy, and rightfully so.

Suppose you are worried that someone else might find the code. This someone is not supposed to have it. You want the code to remain relatively private and you are hopeful of securing it so that no one else can rip it off or otherwise see what’s in it. This could rightfully be the case, since you’ve written the code and the Intellectual Property (IP) rights to it belong to you. Companies often invest millions of dollars into developing proprietary code and they obviously would like to prevent others from readily taking it or stealing it.

You might opt to encrypt the file that contains the source code. Thus, if someone gets the file, they need to find a means to decrypt it to see the contents. You can use some really strong form of encryption, and hopefully the person wanting to inappropriately decrypt the file will have a hard time doing so, and might be unable to do so or give up trying.
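To make that concrete, here’s a minimal sketch of encrypting source code at rest. I’m assuming the third-party Python cryptography package merely as an illustration; any strong encryption library would serve, and the snippet of “source code” here is made-up:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # keep the key far away from the encrypted file
cipher = Fernet(key)

# A stand-in for your proprietary source file's contents:
source_code = b"def post_deposit(name, balance, amount): return balance + amount"

token = cipher.encrypt(source_code)  # unreadable gibberish without the key

assert cipher.decrypt(token) == source_code   # only the key holder gets it back
```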

Using encryption is pretty much an on-or-off kind of thing. In the encrypted state, no sense can be made of the contents, presumably. Suppose though that you realize that one way or another, someone has a chance of actually getting to the source code and being able to read what it says. Either they decrypt the file, or they happen to come along when it is otherwise in a decrypted state and grab a copy of it; maybe they wander over to the programmer’s desktop, put in a USB stick, and quickly get a copy while it is in plaintext format.

So, another layer of protection would be to obfuscate the code. You render the code less understandable. This can be done by altering the semantics of the code. The example of the musical composer names showcases how you might do this obfuscation. The musical composer names are written in English and readily read. But, from a logical perspective, in the context of this code, it wouldn’t have any meaning to someone else. The programmer(s) working on the code might have agreed that they all accept the idea that Bach means Account Name and Wagner means Account Balance.
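As a hypothetical illustration of that kind of agreement, here’s a made-up banking snippet written twice, once plainly and once under the composer scheme; the function and variable names are invented for this example:

```python
# What maintainers would hope to read:
def post_deposit(account_name, account_balance, amount):
    """Apply a deposit and return the new balance."""
    print(f"Deposit posted to {account_name}")
    return account_balance + amount

# The very same logic, obfuscated with composer names:
def liszt(bach, wagner, chopin):
    print(f"Deposit posted to {bach}")
    return wagner + chopin

print(post_deposit("Smith", 100, 25))   # 125
print(liszt("Smith", 100, 25))          # 125 -- identical behavior, opaque names
```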

Anyone else that somehow gets their hands on the code will be perplexed. What does Bach mean here? What does Wagner refer to? It puts those interlopers at a disadvantage. Rather than just picking up the code and immediately comprehending it, now they need to carefully study it and try to “reverse engineer” what it seems to be doing and how it is working.

This might require a laborious line-by-line inspection. It might take lots of time to figure out. Maybe it is so well obfuscated that there’s no reasonable way to figure it out at all.

The code obfuscation can also act like a watermark. Suppose that someone else grabs your code, and they opt to reuse it in their own system. They go around telling everyone that it is their own code, written from scratch, and no one else’s. Meanwhile, you come along and are able to take a look at their code. Imagine that you look at their code and observe that the code has musical composer names for all of the key objects in the code. Coincidence? Maybe, maybe not. It could be a means to try and argue that the code was ripped off from your code.
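If you ever needed to check for that kind of watermark, a crude sketch might scan a suspect file for your team’s signature identifiers. The names below are the hypothetical composer scheme from above, and of course a match is suggestive rather than conclusive:

```python
import re

# Hypothetical "signature" identifiers that only your team would have chosen.
SIGNATURE_NAMES = {"bach", "wagner", "liszt", "chopin"}

def watermark_hits(suspect_source: str) -> set:
    """Return which signature identifiers show up in someone else's code."""
    found = set(re.findall(r"[A-Za-z_][A-Za-z0-9_]*", suspect_source.lower()))
    return SIGNATURE_NAMES & found

print(watermark_hits("def liszt(bach, wagner, chopin): return wagner + chopin"))
# -> {'liszt', 'bach', 'wagner', 'chopin'}
```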

There are ways to programmatically make code obfuscated, so you don’t necessarily need to do so by hand; you can use a tool to do the code obfuscation. Likewise, there are tools to help you crack a code obfuscation, so the reverse effort doesn’t need to be entirely by hand either.

In the case of the musical composer names, I might simply substitute the word “Bach” with the words “Account Name” and so on, which might make the code more comprehensible. The reality is that it isn’t quite that easy; there are lots of clever ways to make the code so obfuscated that it is very hard to render it fully un-obfuscated. There is still often a lot of by-hand effort required.
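For the simple renaming case, the substitution really can be mechanized. This sketch assumes you’ve already divined the mapping, which is the genuinely hard part:

```python
import re

# An assumed mapping you've worked out (or been given); real obfuscation
# rarely unwinds this easily.
RENAME_MAP = {"bach": "account_name", "wagner": "account_balance"}

def deobfuscate(source: str, rename_map: dict) -> str:
    # \b restricts matches to whole identifiers, so "bachelor" is left alone.
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, rename_map)) + r")\b")
    return pattern.sub(lambda m: rename_map[m.group(1)], source)

print(deobfuscate("wagner = wagner + deposit(bach)", RENAME_MAP))
# -> account_balance = account_balance + deposit(account_name)
```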

In this sense, the use of code obfuscation can be by purposeful design. You are trying to achieve the so-called “security by obscurity” kind of trickery. If you can make something obscure, it tends to make it harder to figure out and break into. At my house, I might put a key outside in my backyard so that I can get in whenever I want, but of course a burglar can now do the same. I might put the key under the doormat, but that’s pretty minimal obscurity. If I instead put the key inside a fake rock and I put it amongst a whole dirt area of rocks, the obfuscation is a lot stronger.

One thing about source code obfuscation that needs to be kept in mind is that you don’t want to alter the code such that it computationally does something different than what it otherwise was going to do. That’s not usually considered in the realm of obfuscation. In other words, you can change the appearance of the code, and you can possibly rearrange the code so that it doesn’t seem as recognizable, but if you’ve now made it so that the code can no longer calculate the person’s banking balance, or if you’ve changed it such that the banking balance now gets calculated in a different way, you aren’t doing just code obfuscation.
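One way to guard against accidentally changing behavior is to run the original and the obfuscated builds against the same test inputs. A bare-bones sketch, with hypothetical balance functions standing in for real code:

```python
def check_equivalent(original_fn, obfuscated_fn, test_inputs):
    """The obfuscated build must produce the same results as the original."""
    for args in test_inputs:
        assert original_fn(*args) == obfuscated_fn(*args), f"diverged on {args}"

# Hypothetical example: both compute a balance after a deposit.
check_equivalent(lambda balance, amount: balance + amount,   # clear version
                 lambda wagner, chopin: wagner + chopin,     # obfuscated version
                 [(100, 50), (0, 0), (-25, 5)])
```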

In quick recap, here are some aspects of code obfuscation:

  •         You are changing up the semantics and the look, but not the computational effect
  •         Code obfuscation can be done by-hand and/or by the use of tools
  •         Trying to reverse engineer the obfuscation can be done by-hand and/or by the use of tools
  •         There is weak obfuscation that makes only superficial changes
  •         There is strong obfuscation that is deep and arcane to unwind
  •         Code obfuscation can serve an additional purpose of trying to act like a watermark

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. And, like many of the auto makers and tech firms, we consider the source code to be proprietary and worthy of protecting.

One means for the auto makers and tech firms to try and achieve some “security via obscurity” is to go ahead and apply code obfuscation to their precious and highly costly source code.

This will help too for circumstances where someone somehow gets a copy of the source code. It could be an insider that opts to leak it to another firm or sell it to a competitor. Or, it could be that a breach took place into the systems holding the source code and a determined attacker managed to grab it. At some later point in time, if the matter gets exposed and there is a legal dispute, it’s possible that the code obfuscation aspects could come into play as a type of watermark of the original code.

For my article about the stealing of secrets and AI self-driving cars, see: https://aitrends.com/selfdrivingcars/stealing-secrets-about-ai-self-driving-cars/

For my article about the egocentric designs of AI self-driving cars, see:  https://aitrends.com/selfdrivingcars/egocentric-design-and-ai-self-driving-cars/

If you are considering using code obfuscation for this kind of purpose, you’ll obviously want to make sure that the rest of the team involved in the code development is on-board with the notion too. Some developers will like the idea, some will not. Some firms will arrange that when you check out the code from a versioning system, the code obfuscation is automatically undone, and only when the code is resting in the code management system is it in its obfuscated form. Anyway, there are lots of issues to be considered before jumping into this.

For my article about AI developers and groupthink, see: https://aitrends.com/selfdrivingcars/groupthink-dilemmas-for-developing-ai-self-driving-cars/

For the dangers of making an AI system into a Frankenstein, see my article: https://aitrends.com/selfdrivingcars/frankenstein-and-ai-self-driving-cars/

Let’s also remember that there are other ways that one can end up with code obfuscation. For some of the auto makers and tech firms, and with some of the open source code that has been posted for AI self-driving cars, I’ve noticed right away a certain amount of code obfuscation that has crept into the code when I’ve gotten an opportunity to inspect it.

As mentioned earlier, it could be that the natural inclination of the programmers or AI developers involves writing code that has code obfuscation in it. This can be especially true for some of the AI developers that were working in university research labs and now they have taken a job at an auto maker or tech firm that is creating AI software for self-driving cars. In the academic environment, often any kind of code you want to sling is fine, no need to “pretty it up” since it usually is done as a one-off to do an experiment or provide some kind of proof about an algorithm.

Self-Driving Car Software Needs to be Well-Built

The software intended to run a self-driving car ought to be better made than that – lives are at stake.

In some cases, the AI developers are under such immense pressure to churn out code for a self-driving car, due to the auto maker or tech firm having unimaginable or unattainable deadlines, that they write code without regard to whether it is clear cut or not. As has often been said, there is no style in a knife fight. There can also be AI developers that aren’t given guidance to write clearer code, or not given the time to do so, or not rewarded for doing so, and all of those reasons can come into play in code obfuscation too.

See my article about AI developer burnout: https://aitrends.com/selfdrivingcars/developer-burnout-and-ai-self-driving-cars/

See my article about API’s and AI self-driving cars: https://aitrends.com/selfdrivingcars/apis-and-ai-self-driving-cars/

Per my framework about AI self-driving cars, these are the major tasks involved in the AI driving the car:

  •         Sensor data collection and interpretation
  •         Sensor fusion
  •         Virtual world model updating
  •         AI action plan formulation
  •         Car controls command issuance

See my framework at: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

There is a lot of code involved in each of those tasks. This is a real-time system that must be able to act and react quickly. The code needs to be tightly written so that it can run in optimal time. Meanwhile, the code needs to be understandable, since the humans that wrote the code will need to find bugs in it when they appear (which they will), and the humans will need to update the code (such as when new sensors are added), and so on.

Some of the elements are based on “non-code” such as a machine learning model. Let’s agree to carve that out of the code obfuscation topic for the moment, though there are certainly ways to craft a machine learning model that is more transparent or less transparent. In any case, taking out those pre-canned portions, I assure you that there’s a lot of code still left over.

See my article about machine learning models and AI self-driving cars: https://aitrends.com/selfdrivingcars/machine-learning-benchmarks-and-ai-self-driving-cars/

The auto makers and tech firms are in a mixed bag right now, with some of them developing AI software for self-driving cars that is well written, robust, and ready to be maintained and updated. Others are rushing to write the code, or are unaware of the ramifications of writing obfuscated code, and might not realize the error of their ways until further along in the life cycle of advancing their self-driving cars. There are even some AI developers that are like the music man that wrote his code with musical composers in mind, for which it could be an unintentional act or an intentional act. In any case, it might be “good” for them right now, but it will most likely turn out later on to be “bad” for them and others too.

Here then are the final rules for today’s discussion on code obfuscation for AI self-driving cars:

  •         If it is happening and you don’t realize it, please wake up and decide overtly what you ought to be doing
  •         If you are using it as a rightful technique for security by obscurity, please make sure you do so aptly
  •         If you are using it for nefarious purposes, just be aware that what goes around comes around
  •         If you aren’t using it, decide explicitly whether to consider it or not, making a calculated decision about the value and ROI of using code obfuscation

For those of you reading this article, please be aware that in thirty seconds this text will self-obfuscate into English language obfuscation and the article will no longer appear to be about code obfuscation and instead will be about underwater basket weaving. The secrets of code obfuscation herein will no longer be visible. Voila!

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.


Affordability of AI Self-Driving Cars

By Lance Eliot, the AI Trends Insider

They’ll cost too much. They will only be for the elite. Having one will be a sign of prestige. It’s a rich person’s toy. The “have nots” will not be able to get one. People are going to rise-up in resentment that the general population can’t get one. Maybe the government should step in and control the pricing. Refuse to get into one as a form of protest. Ban them because if the rest of us cannot have one, nobody should.

What’s this all about?

It’s some of the comments that are already being voiced about the potential affordability (or lack thereof) of AI self-driving cars.

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars, and we get asked quite frequently about whether AI self-driving cars will be affordable or not. I thought you might find of interest my answer (read on).

When people clamor about the potential sky-reaching cost of AI self-driving cars, you might at first wonder if people are maybe talking about flying cars, rather than AI self-driving cars. I mention this because there are some that say that flying cars will be very pricey, and I think we all pretty much accept that notion. We know that jet planes are pricey, so why shouldn’t a flying car be pricey? But for an earth-based car that rolls on the ground, cannot fly in the air, and cannot submerge like a submarine, we openly question how much such a seemingly “ordinary” car should cost.

It is said that a Rolls-Royce Sweptail is priced upwards of $13 million. Have there been mass protests about this? Are we upset that only a few that are wealthy can afford such a car? Not really. It is pretty much taken for granted that there are cars that are indeed very expensive. Of course, we might all consider it rather foolish of those that are willing to pump hard-earned millions of dollars into such a car. We might think them pretentious for doing so. Or, we might envy them that they have the means to buy such a car. Either way, the Rolls-Royce and other such top-end cars are over-the-top pricey and most people don’t especially complain or argue about it.

Part of the reason that people seem to object to the possible high price tag on an AI self-driving car is that the AI self-driving car is being touted as a means to benefit society. AI self-driving cars are ultimately hoped to cut down the number of annual driving-related deaths. AI self-driving cars will provide mobility to those that need it and cannot otherwise achieve it, such as the poor and the elderly. If an AI self-driving car has such tremendous societal benefits, then we as a society want to ensure that society as a whole gets those benefits, presumably across the board. It’s a car of the people, for the people.

What kind of pricing then, for an AI self-driving car, are people apparently thinking of? Some that don’t have any clue of what the price might be are leaving the price tag unknown, which makes it easier to get into a lather about how expensive it is. It could be a zillion dollars. Or more. This though seems like a rather vacuous way to discuss the topic. It would seem that we might be better off if we start tossing around some actual numbers and then see whether that’s prohibitive or not for buying an AI self-driving car.

The average transaction price (ATP) for a traditional passenger car in the United States for this year is so far around $36,000 according to various published statistics. That’s the national average.

When AI self-driving car efforts first got started a few years ago, the cost of the added sensors and other specialized gear for achieving self-driving capabilities was estimated at somewhere around $100,000. Meanwhile, since then, the price on those specialized self-driving car components has steadily come down. As with most high-tech, the cost starts “high” and then, as the tech is perfected and the costs to make it are wrung out of the process, the price heads downward. In any case, some at the time were saying that an AI self-driving car might be around $150,000 to $200,000, though that’s a wild guess and we don’t yet know what the real pricing will be. Will it be a million dollars for an AI self-driving car? That doesn’t seem to be in anyone’s estimates at this time.

Of course, any time a new car comes out, particularly one that has new innovations, there is usually a premium price placed on the car. It’s a novelty item at first. Such cars are usually scarce initially, and so the usual laws of supply and demand help to punch up the price. If the car is able to be eventually mass produced, gradually the price starts to come down as more of those cars enter into the marketplace. If there are competitors that provide equivalent alternatives, the competition of the marketplace tends to drive down the price. You can refer to the Tesla models as prime examples of this kind of marketplace phenomenon.

Will True AI Self-Driving Cars Be Within Financial Reach?

Suppose indeed that the first true AI self-driving cars are priced in the low hundreds of thousands of dollars. Does that mean that those cars are out of the reach of the everyday person?

Before we jump into the answer for that question, let’s clarify what I mean by true AI self-driving cars. There are levels of self-driving cars. The topmost level is Level 5. A Level 5 AI self-driving car is able to be driven by the AI without any human intervention. In fact, there is not a human driver needed in a Level 5 car. So much so that there is unlikely to be any driving controls in a Level 5 self-driving car for a human to operate even if the human wanted to try and drive it. In theory, the AI of the Level 5 self-driving car is supposed to be able to drive the car as a human could.

Let’s therefore not consider in this affordability discussion the AI self-driving cars that are less than a Level 5. A less than Level 5 self-driving car is a lot like a conventional car, though augmented in a manner that allows for co-sharing of the driving task. This means that there must be a human driver in a car that is classified as a less than Level 5 self-driving car. In spite of having whatever kind of AI in such a self-driving car, the driving task is still considered the responsibility of the human driver. Even if the human driver opts to take their eyes off the road, which can be an easy trap to fall into when in a less than Level 5 self-driving car, and even if the AI were to suddenly toss the control aspects over to that human driver, it is nonetheless the human driver that is considered responsible for the driving. I’ve warned many times about the dangers this creates in the driving task.

For my article about the levels of AI self-driving cars, see: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

For my framework about AI self-driving cars, see: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the dangers of co-shared driving and AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

We’ll focus herein on the true Level 5 self-driving car. This is the self-driving car that has the full bells and whistles and really is a self-driving car. No human driver needed. This is the one that those referring to a driving utopia actually mean to bring up. The less than Level 5s aren’t quite so exciting, though they might well be important and perhaps stepping stones to the Level 5.

Now, let’s get back to the question at hand – will a true Level 5 AI self-driving car be affordable?

We can first quibble about the word “affordable” in this context. If by affordability we mean that it should be around the same price tag as the ATP of $36,000 for today’s average passenger car in the United States, I’d say that we aren’t going to see Level 5 AI self-driving cars at that price until likely a long time after they are quite prevalent. In other words, out of the gate, it isn’t going to be that kind of price (it will be much higher). After years of growth of more and more AI self-driving cars coming into the marketplace, sure, it could possibly eventually come down to that range. Keep in mind that today there are around 200 million conventional cars in the United States, and presumably over time those cars will get replaced by AI self-driving cars. It won’t happen overnight. It will be a gradual wind-down of the old ways, and a gradual wind-up of the new ways.

Imagine that the first sets of AI self-driving cars will cost in the neighborhood of several hundreds of thousands of dollars. Obviously, that price is outside the range of the average person. No argument there.

But, that’s if you only look at the problem or question in just one simple way, namely purchasing the car for purely personal use. That’s the mental trap that most fall into. They perceive of the AI self-driving car as a personal car and nothing more. I’d suggest you reconsider that notion.

It is generally predicted and accepted that AI self-driving cars are likely to be running 24×7. You can have your self-driving car going all the time, pretty much. Today’s conventional cars are only used around 5% of their available time. This makes sense because you drive your personal car to work, you park it, you work all day, you drive home. Over ninety percent of the day it is sitting and not doing anything other than being a paperweight, if you will.

For AI self-driving cars, you have an electronic chauffeur that will drive the car whenever you want. But are you actually going to want to be going in your AI self-driving car all day long? I doubt it. So, you will have extra available driving capacity that is unused. You could just chalk it up and say that’s the way the ball bounces. More than likely, you would realize that you could turn that idle time into personal revenue.

See my article about the non-stop use of AI self-driving cars: https://aitrends.com/selfdrivingcars/non-stop-ai-self-driving-cars-truths-and-consequences/

Here’s what is most likely to actually happen.

We all generally agree that the advent of the AI self-driving car will spur the ridesharing industry. In fact, some say that the AI self-driving car will shift our society into a ridesharing-as-an-economy model. This is why Uber and Lyft and the other existing ridesharing firms are so frantic about AI self-driving cars. Right now, ridesharing firms are able to justify what they do because they are able to connect human drivers with cars to those that need a lift. If you eliminate the human driver from the equation, what then is the ridesharing firm doing? That’s the scary proposition for the ridesharing firms.

This all implies that ridesharing-as-a-service will now be possible for the masses. It doesn’t matter if you have a full-time job and cannot spare the time to be a ridesharing driver, because instead you just let your AI self-driving car be your ridesharing service. You mainly need to get connected up with people that need a ridesharing lift. How will that occur? Uber and Lyft are hopeful it will occur via their platforms, but it could instead be, say, Facebook, wherein the people are already there in the billions. There is a big shakeout coming.

Meanwhile, you buy yourself an AI self-driving car, and you use it for some portion of the time, and the rest of the time you have it earning some extra dough as a ridesharing vehicle. Nice!

This then ties into the affordability question posed earlier.

If you are going to have revenue generated by your AI self-driving car, you can then look at it as a small business of sorts. You then should consider your AI self-driving car as an investment. You are making an investment in an asset that you can put to work and earn revenue. As such, you should then consider what the revenue might be and what the cost might be to achieve that revenue.

Self-Driving Car Revenue Potential Opens Door to Affordability

This opens the door toward being able to afford an otherwise seemingly unaffordable car. Even if the AI self-driving car costs you, say, several hundreds of thousands of dollars, which seems doubtful as a price tag but let’s use it as an example, you can weigh against that the revenue you can earn from that car.

For tax purposes (depending on how taxes will be regulated in the era of AI self-driving cars), you can usually deduct car loan interest when using a car for business purposes (the deduction is only with respect to the portion used for business purposes). So, suppose you use your AI self-driving car personally 15% of the time, and the other 85% of the time you use it for your ridesharing business; you can then normally deduct the car loan interest for the 85% portion.
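As a quick back-of-the-envelope illustration, using made-up figures (this is not a price forecast and not tax advice):

```python
# Illustrative figures only -- not a price forecast and not tax advice.
annual_loan_interest = 12_000   # hypothetical interest paid on the car loan this year
business_share = 0.85           # portion of use devoted to the ridesharing business

deductible = annual_loan_interest * business_share
print(f"Deductible portion of the loan interest: ${deductible:,.0f}")  # $10,200
```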

You can also take deductions for tax purposes, sometimes using the federal standard mileage rate, or alternatively actual vehicle expenses, including:

  •         Depreciation
  •         Licenses
  •         Gas and oil
  •         Tolls
  •         Lease payments
  •         Insurance
  •         Garage rent
  •         Parking fees
  •         Registration fees
  •         Repairs
  •         Tires

Therefore, you need to rethink the cost of an AI self-driving car. It becomes a potential money maker and you need to consider the cost to purchase the car, the cost of ongoing maintenance and support, the cost of special taxes, the cost of undertaking the ridesharing services, and other such associated costs.

These costs are weighed in comparison to the potential revenue. You might at first only be thinking of the revenue derived from the riders that use your AI self-driving car. You might also consider that there is the opportunity for in-car entertainment that you could possibly charge a fee for (access to streaming movies, etc.), perhaps in-car provided food (you might stock the self-driving car with a small refrigerator and have other food in it), etc. You can also possibly use your AI self-driving car for doing advertising and get money from advertisers based on how many eyeballs see their ads while people are going around in your AI self-driving car.

And, this all then becomes part of your budding small business. You get various tax breaks. You might also then expand your business into other areas of related operations or even beyond AI self-driving cars entirely.

One related tie-in might be with the companies that are providing ridesharing scooters and bicycles. Suppose someone gets into your AI self-driving car and they indicate that when they reach their destination, they’d like to have a bicycle to rent. Your ridesharing service might have an arrangement with a firm that does those kinds of ridesharing services, and you get a piece of the action accordingly.

Will the average person be ready to be their own AI self-driving car mogul?

Likely not. But, fear not, a cottage industry will quickly arise that will support the emergence of small businesses that are doing ridesharing with AI self-driving cars. I’ll bet there will be seminars on how to set up your own corporation for these purposes. How to keep your ridesharing AI self-driving car always on the go. Accountants will promote their tax services to the ridesharing start-ups. There will be auto maintenance and repair shops that will seek to be your primary go-to for keeping your ridesharing money maker going. And so on.

In that sense, there will be a ridesharing-as-a-business business that booms to help new entrepreneurs on how to tap into the ridesharing-as-a-service economy. Make millions off your AI self-driving car, will be the late night TV infomercials. You’ll see ads on YouTube of a smiling person that says until they got their AI self-driving car they were stuck in a dead-end job, but now, with their money producing AI self-driving car, they are so wealthy they don’t know where to put all the money they are making. The big bonanza is on its way.

This approach of being a solo entrepreneur to afford an AI self-driving car is only one of several possible approaches. I’d guess it will be perhaps the most popular.

I’ll caution though that it is not a guaranteed path to riches. There will be some that manage to get themselves an AI self-driving car and then discover that it is not being put to ridesharing use as much as they thought. It could be that they live in an area swamped with other AI self-driving cars and so they get just leftover crumbs of ridesharing requests. Or, they are in an area that has other mass transit and no one needs ridesharing. Or, maybe few will trust using an AI self-driving car and so there won’t be many that are willing to use it for ridesharing. Another angle is that you get such a car and do so under the assumption it will be ridesharing for 85% of the time, but you instead use it for personal purposes 70% of the time and this leaves only 30% of the time for the ridesharing (cutting down on the revenue potential).

Meanwhile, there are some other alternatives; let’s briefly consider them:

  •         Solo ridesharing business using an AI self-driving car as a money maker (discussed so far)
  •         Pooling an AI self-driving car
  •         Timeshare an AI self-driving car
  •         Personal use exclusively of an AI self-driving car
  •         Other

In the case of pooling an AI self-driving car, imagine that your next-door neighbor would like an AI self-driving car and so would you. The two of you realize that since the neighbor starts work at 7 a.m., while you start work at 8 a.m., and the kids of both families start school at 9 a.m., here’s what you could do. You and the neighbor split the cost of an AI self-driving car. It takes your neighbor to work at 7 a.m., comes back and takes you to work at 8 a.m., comes back and takes the kids to school by 9 a.m. In essence, you all pool the use of the AI self-driving car. There are no revenue aspects; it’s all just being used for personal use, on a group basis. This could be done with more than just one neighbor.

The pooling would then allow you to split the cost of the AI self-driving car, making it more affordable per person. Suppose you have three people that decide to evenly split the cost; each would only need to afford one-third of whatever the prevailing cost of an AI self-driving car is at that time. Voila, the cost is less, seemingly so. But, you’d need to figure out the sharing aspects, and I realize it could get heated as to who gets to use the AI self-driving car when needed. It’s like having only one TV; it might be difficult at times to balance the fact that someone wants to watch one show and someone else wants another, say when you need the AI self-driving car to take you to the store, while the kids need it to get to the ballpark.

In the case of the timeshare approach, you buy into an AI self-driving car like you would if buying into a condo in San Carlo. You purchase a time-based portion of the AI self-driving car. You can use it for whatever is the agreed amount of time. Potentially, you can opt to “invest” in more than one at a time, perhaps getting a timeshare in a passenger car that’s an AI self-driving car, and also investing in an RV that’s an AI self-driving vehicle. You would use them each at different times for their suitable purposes. With any kind of timesharing arrangement, watch out for the details and whether you can get out of it or it might have other such limitations.

There’s the purely personal use option for an AI self-driving car too, which we started this discussion by saying might be too much for the average person to afford. Even that is somewhat malleable, in that there are likely to be car loans that take into account that you are buying an AI self-driving car. The loans might be very affordable in the sense that there’s the collateral of the car, plus the AI self-driving car, if needed, can be repossessed and then turned into a potential money maker. The auto makers and the banks and others might be willing to cut some pretty good loans to get you into your very own AI self-driving car. As always, watch out for the interest and any onerous loan terms!

Well, before we get too far ahead of ourselves, the main point to be made is that even if AI self-driving cars are priced “high” in comparison to today’s conventional cars, it does not necessarily mean that those AI self-driving cars are only going to be for the very rich. Instead, those AI self-driving cars are actually going to be a means to help augment the wealth of those that see this as an opportunity. Not everyone will be ready or willing to go the small business route. For many, it will be a means to not only enjoy the benefits of AI self-driving cars, but also spark them toward becoming entrepreneurs. Let’s see how this all plays out; maybe it adds another potential benefit to the emergence of AI self-driving cars.

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.


Here are 8 Myths About AI in the Workplace Debunked – With Infographic

By Jeff Desjardins, The Visual Capitalist

The interplay between technology and work has always been a hot topic.

While technology has typically created more jobs than it has destroyed on a historical basis, this context rarely stops people from believing that things are “different” this time around.

In this case, it’s the potential impact of artificial intelligence (AI) that is being hotly debated by the media and expert commentators. Although there is no doubt that AI will be a transformative force in business, the recent attention on the subject has also led to many common misconceptions about the technology and its anticipated effects.

DISPROVING COMMON MYTHS ABOUT AI

Today’s infographic comes to us from Raconteur and it helps paint a clearer picture about the nature of AI, while attempting to debunk various myths about AI in the workplace.

AI is going to be a seismic shift in business – and it’s expected to create a $15.7 trillion economic impact globally by 2030.

But understandably, monumental shifts like this tend to make people nervous, resulting in many unanswered questions and misconceptions about the technology and what it will do in the workplace.

DEMYSTIFYING MYTHS

Here are the eight debunked myths about AI:

1. Automation will completely displace employees
Truth: 70% of employers see AI as supporting humans in completing business processes. Meanwhile, only 11% of employers believe that automation will take over the work found in jobs and business processes to a “great extent”.

2. Companies are primarily interested in cutting costs with AI
Truth: 84% of employers see AI as a way of obtaining or sustaining a competitive advantage, and 75% see AI as a way to enter into new business areas. 63% see pressure to reduce costs as a reason to use AI.

3. AI, machine learning, and deep learning are the same thing 
Truth: AI is a broader term, while machine learning is a subset of AI that enables “intelligence” by using training algorithms and data. Deep learning is an even narrower subset of machine learning inspired by the interconnected neurons of the brain.

4. Automation will eradicate more jobs than it creates 
Truth: At least according to one recent study by Gartner, there will be 1.8 million jobs lost to AI by 2020 and 2.3 million jobs created. How this shakes out in the longer term is much more debatable.

5. Robots and AI are the same thing
Truth: Even though there is a tendency to link AI and robots, most AI actually works in the background and is unseen (think Amazon product recommendations). Robots, meanwhile, can be “dumb” and just automate simple physical processes.

6. AI won’t affect my industry 
Truth: AI is expected to have a significant impact on almost every industry in the next five years.

7. Companies implementing AI don’t care about workers
Truth: 65% of companies pursuing AI are also investing in the reskilling of current employees.

8. High productivity equals higher profits and less employment
Truth: AI and automation will increase productivity, but this could also translate to lower prices, higher wages, higher demand, and employment growth.

Read the source article at The Visual Capitalist.

Family Road Trip and AI Self-Driving Cars

By Lance Eliot, the AI Trends Insider

Have you ever taken a road trip across the United States with your family? It’s considered a core part of Americana to make such a trip. Somewhat immortalized by the now classic movie National Lampoon’s Vacation, the film showcased the doting scatter brained father Clark Griswold with his caring wife, Ellen, and their vacation-with-your-parents trapped children, Rusty and Audrey, as they all at times either enjoyed or managed to endure a cross-country expedition of a lifetime.

As is typically portrayed in such situations, the father drives the car for most of the trip and serves as the taskmaster to keep the trip moving forward, the mother provides soothing care for the family and tries to keep things on an even keel, and the children must contend with parents that are out-of-touch with reality and that are jointly determined that come heck-or-high-water their kids will presumably have a good time (at least by the definition of the parents). The movie was released in 1983 and became a blockbuster that spawned other variants. Today, we can find fault with how the nuclear family is portrayed and the stereotypes used throughout the movie, but nonetheless it put on film what is generally known as the family road trip.

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI systems for self-driving cars and doing so with an eye toward how people will want to use AI self-driving cars. It is important to consider how human occupants will behave while inside an AI self-driving car, and therefore to astutely design and build AI self-driving cars accordingly.

In a conventional car, for a family road trip, it is pretty much the case that the parents sit in the front seats of the car. This makes sense since either the father or the mother will be the driver of the car, oftentimes switching off the driving task from one to the other. In prior times the driving task was considered to be “manly” and so usually the husband was shown driving the car. In contemporary times, whatever the nature and gender of the parents, the point is that the licensed driving adults are most likely to be seated in the front of the car.

If there are two parents, why have both in the front seat, you might ask? Couldn’t you put one of the children up in the front passenger seat, next to the parent or adult that is driving the car? You can certainly arrange things that way, but the usual notion about having the front passenger be another adult or parent is that they can be watching the roadway, serving as an extra pair of eyes for the driver. The driver might be preoccupied with the traffic in front of the car, and meanwhile the front passenger notices that further up ahead there is a bridge-out sign warning that approaching cars need to be cautious. The front passenger is a kind of co-pilot, though they don’t have ready access to the car controls and must instead verbally provide advice to the driver.

The front passenger is not always shown in movies, though, as a dispassionate observer that thoughtfully aids the driver. Humorous anecdotes are often shown in which the front passenger suddenly points at a cow and screams out loud for everyone to look. The driver could be distracted by such an exclamation and inadvertently drive off the road at the sudden yelling and pointing. Another commonly portrayed scenario is the front passenger that insists the driver take the next right turn ahead, but offers such a verbal instruction once the car is nearly past the available turn. The driver is then torn between making a radical and dangerous turn, or passing the turn entirely and then likely getting berated by the front seat passenger.

Does this seem familiar to you?

If so, you are likely a veteran of family road trips. Congratulations.

What about the children that are seated in the back seat of the car? One portrayal would be of young children with impressionable minds that are carefully studying their parents and learning the wise ways of life, doing so during the vacation and they will become more learned young adults because of the experience. Of course, this is not the stuff of reality.

Kids Converse with Out-of-Touch Parents

Instead, the movies show something that pertains more closely to reality. The kids often feel trapped. Their parents are forcing them along on a trip. It’s a trip the parents want, but not necessarily what the kids want. At times feeling like prisoners, they need to occupy themselves for hours at a time on long stretches of highway. Though at first it might be keen to see an open highway and the mountains and blue skies, it is something that won’t hold your attention for hours upon hours, days upon days. Boredom sets in. Conversation with the parents also can only last so long. The parents are out-of-touch with the interests, musical tastes, and other facets of the younger generation.

The classic indication is that ultimately the kids will get into a fight. Not a fisticuffs fight per se, more like an arms-waving and hands-slapping kind of fight. And the parents then need to turn their heads and look at the kids with laser-like eyes, and tell the kids in overtly stern terms, stop that fighting back there or there will be heck to pay. No more ice cream, no more allowance, or whatever other levers the parents can use to threaten the kids to behave. Don’t make me come back there, is the usual refrain.

Sometimes one or more of the kids will start crying. It could be for just about any reason. They are tired of the trip and want it to end. They got hit by their brother or sister and want the parents to know. Etc. The parents will often retort that the kids need to stop crying. Or, as they are wont to say, they’ll give them a true reason to cry (a veiled threat). If the kids are complaining incessantly about the trip, this will likely produce the other classic veiled threat of “I’d better not hear another peep out of you!”

Does the above suggest that the togetherness of the family road trip is perhaps hollow and we should abandon the pretense of having a family trip? I don’t think so. It’s more like showing how family trips really happen. In that sense, the movie National Lampoon’s Vacation was a more apt portrayal than a Leave It To Beaver kind of portrayal, at least in more modern times.

Indeed, today’s family road trips are replete with gadgets and electronics in the car. The kids are likely to be focusing on their smartphones and tablets. The car probably has WiFi, though at times only getting intermittent reception as the trip crosses some of the more barren parts of the United States. There might be TVs built into the headrests so the kids can watch movies that way. One of the more popular and cynical portrayals of today’s family road trips is that there is no actual human-to-human interaction inside the car, since everyone is tuned into their own electronic device.

Given the above description of how the family road trip seems to occur, what can we anticipate for the future?

First, it is important to point out that there are varying levels of self-driving cars. The topmost level, a level 5 self-driving car, consists of having AI that can drive the car without any human intervention. This means there is no need for a human driver. The AI should be able to do all of the driving, in the same manner that a human could drive the car. At the levels less than 5, there is and must be a human driver in the car. The self-driving car is not expected to be able to drive entirely on its own and relies upon having a human driver that is at-the-ready to take over the car controls.

See my article about the levels of AI self-driving cars: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

See my article that indicates my framework for AI self-driving cars: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the levels less than 5, the AI self-driving car is essentially going to be a lot like a conventional car in terms of what happens during the family road trip. Admittedly, the human driver will be able to have a direct “co-pilot” of sorts to co-share in the driving task via the AI, but otherwise the car design is pretty much the same as a conventional car. This is because you need to have the human driver seated at the front of the car, and the human driver has to have access to car controls to then drive the car. With that essential premise, you can’t otherwise change too much of the interior design of the car.

As an aside, there are some that have suggested maybe we don’t need the human driver to be looking out the windshield and that we can change the car design accordingly. We could put the human driver in the back seat and have them wear a Virtual Reality headset and be connected to the controls of the car via some kind of handheld devices or foot-operated nearby devices. Cameras on the hood and top of the car would beam the visual images to the VR headset. Yes, I suppose this is all possible, but I really doubt we are going to see cars go in that direction. I would say it is a likelier bet that cars less than a Level 5 will be designed to look like a conventional car, and only the Level 5 self-driving cars will have a new design. We’ll see.

For a level 5 self-driving car, since there is no need for a human driver, we can completely remake the interior of the car. No need to put a fixed place at the front of the car for the human driver to sit. No need for the human driver to look out the windshield. Some of the new designs suggest that one approach would be to have swivel seats for let’s say four passengers in the normal sized self-driving car. The four swivel seats can be turned to face each other, allowing a togetherness of discussion and interaction. At other times, you can rotate the seats so that you have let’s say two facing forward as though the front seats of the car, and the two behind those that are also facing forward.

Other ideas include allowing the seats to become beds. It could be that two seats can connect together and their backs be lowered, thus allowing for a bed, one that is essentially at the front of the car and another at the back of the car. Part of the reason that some are considering designing beds into an AI self-driving car is the belief that AI self-driving cars might be used 24×7, and people might sleep in their cars while on their way to work or while on their vacations.

See my article about the non-stop 24×7 nature of AI self-driving cars: https://aitrends.com/selfdrivingcars/non-stop-ai-self-driving-cars-truths-and-consequences/

Another design aspect involves lining the interior of the self-driving car with some kind of TV or LEDs that would allow for the interior to be a kind of movie theatre. This would allow for watching of movies, shows, live streaming, and even for doing online education. This also raises the question as to whether any kind of glass windows are needed at all. Some assert that we don’t need windows anymore for a Level 5 self-driving car. Instead, the cameras on the outside of the car can show what would otherwise be seen if you looked out a window. The interior screens would show what the cameras show, unless you then wanted to watch a movie and thus the interior screens would switch to displaying that instead.

Are we really destined to have people sitting in self-driving car shells that have no actual windows? It seems somewhat farfetched. You would think that people will still want to look out a real window. You would think that people would want to be able to roll down their window when they wish to do so. Now, you could of course have true windows and make the glass out of material that can become transparent at times and blocked at other times, thus potentially having the best of both worlds. We’ll see.

Interior Seat Configuration to be Determined

For a family road trip, you could configure the seats so that all four are facing each other, and have family discussions or play games or otherwise directly interact. This might not seem attractive to some people, or might be something that they sparingly do when trying to have a family chat. As mentioned, the seats could swivel to allow more of a conventional sense of privacy while sitting in your seat. I’d suggest though that the days of the parents saying don’t make us come back there are probably numbered. The “there” will be the same place that the parents are sitting. Maybe too much togetherness? Or, maybe it will spark a renewal of togetherness?

Another factor to consider is that none of the human occupants needs to be a driver. In theory, a family road trip has always consisted of one or more drivers, and the rest were occupants. Now, everyone is going to be an occupant. Will parents feel less “useful” since they are no longer undertaking the driving task directly? Or, will parents find this a relief since they can use the time to interact with their children or catch-up on their reading or whatever?

This has another potentially profound impact on the family road trip, namely that no one needs to know how to drive a car. Thus, in theory, you could even have just the children in the self-driving car and no parents or adults at all. I’d agree that this doesn’t feel like a “family” trip at that point, but it could be that the parents are at the hotel and the kids want to go see the nearby theme park, and so the parents tell the kids they can take the self-driving car there.

How should the interior of the self-driving car be reshaped or re-designed if you have only children inside the car for lengths of time? Would there be interior aspects that you’d want to be able to close off from use or slide away to be hidden from use? Perhaps you would not want the children to swivel the swivel seats, and you’d lock the seats in place during their journey. Via a Skype-like communication capability, you would likely want to interact with the kids, they seeing you and you seeing them via cameras pointed inward into the self-driving car.

Without a human driver, the AI is expected to do all of the driving. When you go on a cross-country road trip, you often discover “hidden” places to visit that are remote and not on the normal beaten path. The question will be how good the AI is when confronted with driving in an area for which perhaps no GPS mapping exists per se. Driving on city roads that have been well mapped is one thing. Driving on dirt roads that are not mapped, or for which no map is available, can be a trickier aspect. Suppose too that you want to have the self-driving car purposely go off-road. The AI has to be able to do that kind of driving, assuming that there is no provision for a human driver and only the AI is able to drive the car.

An AI self-driving car at a Level 5 will normally have some form of Over-The-Air (OTA) capability. This allows the AI to get updated by the auto maker or tech firm, and also allows the AI to report what it has discovered to the auto maker or tech firm cloud for collective learning purposes. On a cross-country road trip, the odds are that there will be places that have no immediate electronic communication available. Suppose there’s an urgent patch that the OTA needs to provide to the AI self-driving car? This can be dicey when doing a family road trip to off-road locations.

See my article about OTA: https://aitrends.com/selfdrivingcars/air-ota-updating-ai-self-driving-cars/

Suppose the family car, an AI self-driving car, suffers some kind of mechanical breakdown during the trip? What then? Keep in mind that a self-driving car is still a car. This means that parts can break or wear out. This means that you’ll need to get the car to a repair shop. And, with the sophisticated sensors on an AI self-driving car, it will likely have more frequent breakdowns and will require more sophisticated repair specialists and cost more to be repaired. The road trip could be marred by not being able to find someone in a small town that can deal with your broken down AI self-driving car.

See my article about automotive recalls and AI self-driving cars: https://aitrends.com/ai-insider/auto-recalls/

The AI of the self-driving car will become crucial as your driving “pilot” and companion, as it were. Take us to the next town, might be a command that the human occupants utter. One of the children might suddenly blurt out “I need to go to the bathroom” – in the olden days the parents would say hold it until you reach the next suitable place. What will the AI say? Presumably, if it’s good at what it does, it would have looked up where the next bathroom might be, and offer to stop there. This though is trickier than it seems. We cannot assume that the entire United States will be so well mapped that every bathroom can be looked up. The AI might need to use its sensors to identify places that might appear to have a bathroom, in the same manner that a parent would furtively look out the window for a gas station or a rest stop.

See my article about NLP and voice commands for AI self-driving cars: https://aitrends.com/selfdrivingcars/car-voice-commands-nlp-self-driving-cars/

There is also the possibility of using V2V (vehicle to vehicle communications) to augment the family road trip. With V2V, an AI self-driving car can potentially electronically communicate with another AI self-driving car. Maybe up ahead there is an AI self-driving car that has discovered that the paved road has large ruts and it is dangerous to drive there. This might be relayed to AI self-driving cars a mile back, so those AI self-driving cars can avoid the area or at least be prepared for what is coming. The AI of those self-driving cars could even warn the family (the human occupants) to be ready for a bumpy ride for the mile up ahead.

There is too the possibility of V2I (vehicle to infrastructure communications). This involves having the roadway infrastructure electronically communicate with the AI self-driving car. It could be that a bridge is being repaired, but you wouldn’t know this from simply looking at a map. The bridge itself might be beaming out a signal that would forewarn cars within a few miles that the bridge is inoperable. Once again the AI self-driving car could thus re-plan the journey, and also warn the occupants about what’s going on.
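To give a feel for the kind of information such a V2V or V2I message might carry, here’s an illustrative sketch; the structure and field names are assumptions for discussion, not any actual V2V or V2I standard:

```python
from dataclasses import dataclass
import time

@dataclass
class HazardWarning:
    """Illustrative hazard message; fields are assumed, not a real standard."""
    source_id: str      # broadcasting car, or infrastructure such as a bridge
    hazard_type: str    # e.g., "rutted_pavement", "bridge_out"
    latitude: float
    longitude: float
    issued_at: float    # epoch seconds, so stale warnings can be discarded

warning = HazardWarning("bridge-17", "bridge_out", 36.10, -115.20, time.time())
# A receiving AI self-driving car could re-plan its route and warn its occupants.
```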

One aspect that the AI can provide that might or might not have been done by a parent would be to explain the historical significance and other useful facets about where you are. Have you been on a family road trip and researched the upcoming farm that was once run by a U.S. president, or maybe there’s a museum where the first scoop of ice cream was ever dished out? A family road trip is often done to see and understand our heritage. What came before us? How did the country get formed? The AI can be a tour guide, in addition to driving the car.

See my article about AI as tour guide for a self-driving car: https://aitrends.com/selfdrivingcars/extra-scenery-perception-esp2-self-driving-cars-beyond-norm/

As perhaps is evident, the interior of the self-driving car has numerous possibilities in terms of how it might be reshaped for the advent of true Level 5 AI self-driving cars. For a family road trip, the interior can hopefully foster togetherness, while also allowing for privacy. It might accommodate sleeping while driving from place to place. The AI will be the driver, and be guided by where the human occupants want to go. In addition to driving, the AI can be a tour guide and perform various other handy tasks too. This is not all rosy though, and the potential for lack of electronic communications could hamper the ride, along with the potential for mechanical breakdowns that might be hard to get repaired.

No more veiled threats from the front seats to the back seats. I suppose some other veiled threats will culturally develop to replace those. Maybe you tell the children, behave yourselves or I won’t let you use the self-driving car to go to the theme park. Will we have AI self-driving cars possibly zipping along our byways with no adults present and only children, as they do a “family” road trip? That’s a tough one to ponder for now. In any case, enjoy the family road trips of today, using a conventional car or even a self-driving car below Level 5. Once we have Level 5 AI self-driving cars, it will be a whole new kind of family road trip experience.

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.

 

Shiggy Challenge and Dangers of an In-Motion AI Self-Driving Car


By Lance Eliot, the AI Trends Insider

I’m hoping that you have not tried to do the so-called Shiggy Challenge. If you haven’t done it, I further hope that my telling you about it does not somehow spark you to go ahead and try doing it. For those of you that don’t know about it and have not a clue about what it is, be ready to be “amazed” at what is emerging as a social media generated fad. It’s a dangerous one.

Here’s the deal.

You are supposed to get out of a moving car, leaving the driver’s seat vacant, do a dance alongside the still-moving car while video recording yourself (you must keep pace with the car as it rolls forward), and then jump back into the car to continue driving it.

If you ponder this for a moment, I trust that you instantly recognize the danger of this and (if I might say) the stupidity of it (or does that make me appear to be old-fashioned?).

As you might guess, already there have been people that hurt themselves while trying to jump out of the moving car, spraining an ankle, hurting a knee, banging their legs on the door, etc. Likewise, they have gotten hurt while trying to jump back into the moving car (collided with the steering wheel or the seat arm, etc.).

There are some people that while dancing outside the moving car became preoccupied and didn’t notice that their moving car was heading toward someone or something. Or, they weren’t themselves moving forward fast enough to keep pace with the moving car. And so on. There have been reported cases of the moving car blindly hitting others and also in some cases hitting a parked car or other objects near or in the roadway.

Some of the videos show the person having gotten out of their car, the car door then closing unexpectedly, and, guess what, all the car doors turning out to be locked. Thus, the person could not readily get back into the car to stop it from going forward and potentially hitting someone or something.

This is one of those seemingly bizarre social media fads that began somewhat innocently and then the ante got upped as each person sought fame by adding more danger to it. As you know, people will do anything to try and get views. The bolder your video, the greater the chance it will go viral.

This challenge began in a somewhat simple way. The song “In My Feelings” by Drake was released, and at about the same time an online personality named Shiggy posted a video of himself dancing to the tune (on his Instagram site). Other personalities and celebrities then opted to do the same dance, video recording themselves dancing to the Drake song, and they posted their versions. This spawned a mild viral sensation.

But, as with most things on social media, there became a desire to do something more outlandish. At first, this involved being a passenger in a slowly moving car, getting out, doing the Shiggy inspired dance, and then jumping back in. This is obviously not recommended, though at least there was still a human driver at the wheel. This then morphed into the driver being the one to jump out, and either having a passenger to film it, or setting up the video to do a selfie recording of themselves performing the stunt.

Some of the early versions had the cars moving at a really low speed. It seems now that some people are doing it with cars moving at much faster speeds. It further seems that some people don’t think about the dangers of this activity and just “go for it,” figuring that it will all work out fine and dandy. It often doesn’t. Not surprising to most of us, I’d dare say.

The craze is referred to as either the Shiggy Challenge or the In My Feelings challenge (#InMyFeelings), and some more explicitly call it the moving car dance challenge. This craze has even got the feds involved. The National Transportation Safety Board (NTSB) issued a tweet that said this: “#OntheBlog we’re sharing concerns about the #InMyFeelings challenge while driving. #DistractedDriving is dangerous and can be deadly. No call, no text, no update, and certainly no dance challenge is worth a human life.”

Be forewarned that this antic can get you busted, including a distracted driving ticket, or worse still a reckless driving charge.

Now that I’ve told you about this wondrous and trending challenge, I want to emphasize that I only refer to it as an indicator of something otherwise worthy of discussion herein, namely the act of getting out of or into a moving car. I suppose it should go without stating that getting into a moving car is highly dangerous and discouraged. An equally valid corollary is that getting out of a moving car is highly dangerous and discouraged.

I’m sure someone will instantly retort that hey, Lance, there are times that it is necessary to get out of or into a moving car. Yes, I’ve seen the same spy movies as you, and I realize that when James Bond is in a moving car and being held at gun point, maybe the right spy action is to leap out of the car. Got it. Seriously, I’ll be happy to concede that there are rare situations whereby getting into a moving car or out of a moving car might be needed, let’s say the car is on fire and in motion or you are being kidnapped, there will be rare such moments. By-and-large, I would hope we all agree that those are rarities.

Sadly, there are annually a number of reported incidents of people getting run over by their own car. Somewhat recently, a person left their car engine running and got out of the car to do something such as drop a piece of mail into a nearby mailbox, and the car inadvertently shifted into gear and ran them over. These oddities do happen from time to time. Again, extremely rare, but they further illustrate the dangers of getting out of even a non-moving car whose engine is running.

Prior to the advent of seat belts, and the gradual mandatory use and acceptance of seat belts in cars, there were a surprisingly sizable number of reported incidents of people “falling” out of their cars. Now, it could be that some of them jumped out while the car was moving and so it wasn’t particularly the lack of a seat belt involved. On the other hand, there are documented cases of people sitting in a moving car, not wearing a seat belt, whose car door opened unexpectedly, with them then proceeding to accidentally hang outside of the car (often clinging to the door), or falling entirely out of the car onto the street.

This is why you should always wear your seat belt. Tip for the day.

For the daredevils among you, it might not be apparent why it is so bad to leave a moving car. If you are a passenger, you have a substantial chance of falling to the street and getting injured. Or, maybe you fall to the street and get killed by hitting the street with your head. Or, maybe you hit an object like a fire hydrant and get injured or killed. Or, maybe another car runs you over. Or, maybe the car you exited manages to drive over you. I think that paints the picture pretty well.

I’d guess that the human driver of the car might be shocked to have you suddenly leave the moving car. This could cause the human driver to make some kind of panic or erratic maneuver with the car. Thus, your “innocent” act of leaving the moving car could cause the human driver to swerve into another car, maybe injuring or killing other people. Or, maybe you roll onto the ground and seem OK, but then the human driver turns the car to try and somehow catch you and actually hits you, injuring you or killing you. There are numerous acrobatic variations to this.

Suppose that it’s the human driver that opts to leave the moving car? In that case, the car is now a torpedo ready to strike someone or something. It’s an unguided missile. Sure, the car will likely start to slow down because the human driver is no longer pushing on the accelerator pedal, but depending upon the speed when the driver ejected, the multi-ton car still has a lot of momentum and chances of injuring or killing or hitting someone or something. If there are any human occupants inside the car, they too are now at the mercy of a car that is going without any direct driving direction.

Risks of Exiting a Moving Car

Let’s recap, you can exit from a moving car and these things could happen:

  •         You directly get injured (by say hitting the street)
  •         You directly get killed (by hitting the street with your head, let’s say)
  •         You indirectly get injured (another car comes along and hits you)
  •         You indirectly get killed (the other car runs you over)
  •         Your action gets someone else injured (another car crashes trying to avoid you)
  •         Your action gets someone else killed (the other car rams a car and everyone gets killed)

I’m going to carve out a bit of an exception to this aspect of leaving a moving car. If you choose to leave the moving car or do so by happenstance, let’s call that a “normal” exiting of a moving car. On the other hand, suppose the car gets into a car accident, unrelated for the moment to your exiting, and during the accident you are involuntarily thrown out of the car due to the car crash. That’s kind of different than choosing to exit the moving car per se. Of course, this happens often when people that aren’t wearing seat belts get into severe car crashes.

Anyway, let’s consider that there’s the bad news of exiting a moving car, and we also want to keep in mind that trying to get into a moving car has its own dangers too. I remember a friend of mine in college that opted to try jumping into the back passenger seat of a moving car (I believe some drinking had been taking place). His pal opened the back door, and urged him to jump in. He was lucky to have landed into the seat. He could have easily been struck by the moving car. He could have fallen to the street and gotten run over by the car. Again, injuries and potential death, for him, and for other occupants of the car, and for other nearby cars too.

I’d like to enlarge the list of moving car aspects to these:

  •         Exiting a moving car
  •         Entering a moving car
  •         Riding on a moving car
  •         Hanging onto a moving car
  •         Facing off with a moving car
  •         Chasing after a moving car
  •         Other

I’ve covered already the first two items, so let’s consider the others on the list.

There are reports from time-to-time of people that opted to ride on the hood of a car, usually for fun, and unfortunately they fell off and got hurt or killed once the car got into motion.

Hanging onto a moving car was somewhat popularized by the “Back To The Future” movie series when Marty McFly (Michael J. Fox) opts to grab onto the back of a car while he’s riding his skateboard. I’m not blaming the movie for this and realize it is something people already had done, but the movie did momentarily increase the popularity of trying this dangerous act.

Facing off with a moving car has sometimes been done by people that perhaps watch too many bull fights. They seem to think that they can hold a red cape and challenge the bull (the car). In my experience, the car is likely to win over the human standing in the street and facing off with the car. It’s a weight thing.

Chasing after a moving car happens somewhat commonly in places like New York City. You see a cab, it fails to stop, you are in a hurry, so you run after the cab, yelling at the top of your lungs. With the advent of Uber and other ridesharing services, this doesn’t happen as much as it used to. Instead, we let our mobile apps do our cab or rideshare hailing for us.

What does all of this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars, and one aspect that many auto makers and tech firms are not yet considering deals with the aforementioned things that people do regarding moving cars.

Some of the auto makers and tech firms would say that these various actions by humans, such as exiting a moving car or trying to get into a moving car, are considered an “edge” problem. An edge problem is one that is not at the core of the overarching problem being solved. If you are in the midst of trying to get AI to drive a car, you likely consider these cases of people exiting and entering a moving car to be such a remote possibility that you don’t pay much attention to it right now. You figure it’s something to ultimately deal with, but getting the car to drive is foremost in your mind right now.

I’ve had some AI developers tell me that if a human is stupid enough to exit from a moving car, they get what they deserve. Same for all of the other possibilities, such as trying to enter a moving car, chasing after a moving car, etc. The perspective is that the AI has enough to do already, and dealing with stupid human tricks (aka David Letterman!) is just not a very high priority. Humans do stupid things, and these AI developers shrug their shoulders and say that an AI self-driving car is never going to be able to stop people from being stupid.

This narrow view by those AI developers is unfortunate.

I can already predict that there will be an AI self-driving car that while driving on the public roadways will have an occupant that opts to jump out of the moving self-driving car. Let’s say that indeed this is a stupid act and the person had no particularly justifiable cause to do so. If the AI self-driving car proceeds along and does not realize that the person jumped out, and the AI blindly continues to drive ahead, I’ll bet there will be backlash about this. Backlash against the particular self-driving car maker. Backlash against possibly the entire AI self-driving car industry. It could get ugly.

For my explanation of the egocentric designs of AI self-driving cars, see: https://aitrends.com/selfdrivingcars/egocentric-design-and-ai-self-driving-cars/

For lawsuits about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/first-salvo-class-action-lawsuits-defective-self-driving-cars/

For why AI self-driving cars need to be able to do defensive driving, see my article: https://aitrends.com/selfdrivingcars/art-defensive-driving-key-self-driving-car-success/

Let’s take a moment and clarify too what is meant by an AI self-driving car. There are various levels of capabilities of AI self-driving cars. The topmost level is considered Level 5. A Level 5 AI self-driving car is one in which the AI is fully able to drive the car, and there is no requirement for a human driver to be present. Indeed, often a Level 5 self-driving car has no provision for human driving, meaning there aren’t any pedals nor a steering wheel available for a human to use. For self-driving cars less than a Level 5, it is expected that a human driver will be present and that the AI and the human driver will co-share the driving task. I’ve mentioned many times that this co-sharing arrangement allows for dangerous situations and adverse consequences.

For more about the co-sharing of the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

For human factors aspects of AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/not-fast-enough-human-factors-ai-self-driving-cars-control-transitions/

The level of an AI self-driving car is a crucial consideration in this discussion about people leaping out of a moving self-driving car or taking other such actions.

Consider first the self-driving cars less than a Level 5. If the human driver that’s supposed to be in the self-driving car is the one that jumps out, this leaves the AI alone to continue driving the car (assuming that no other human driver is an occupant and able to step into the human driving role of the co-sharing task). We likely don’t want the AI to now be alone as the driver, since for levels less than 5 it is considered a precondition that there be a human driver present. As such, the AI needs to ascertain that the human driver is no longer present, and as a minimum proceed to take some concerted effort to safely bring the self-driving car to a proper and appropriate halt.

Would we want the AI in the less-than Level 5 self-driving car to take any special steps about the exited human? This is somewhat of an open question because the AI at less-than Level 5 is not yet expected to be fully sophisticated. It could be that we might agree that at the less-than Level 5, the most we can expect is that the AI will try to safely bring the self-driving car to a halt. It won’t try to somehow go around and pick up the person or take other actions that we would expect a human driver to possibly undertake.

This brings us to the Level 5 self-driving car. It too should be established to detect that someone has left the moving self-driving car. In this case, it doesn’t matter whether the person that left is a driver or not, because no human driver is needed anyway. In that sense, in theory, the driving can continue. It’s now a question of what to do about the human that left the moving car.

In essence, with the Level 5 self-driving car, we have more options for what to have the AI do in this circumstance. It could just ignore that a human abruptly left the car, and continue along, acting as though nothing happened at all. Or, it could have some kind of provision of action to take in such situations, and invoke that action. Or, it could act similar to the less-than Level 5 self-driving cars and merely seek to safely and appropriately bring the self-driving car to a halt.

One would question the approach of doing nothing while being aware that a human left the self-driving car while in motion; this seems counterintuitive to what we would expect or hope the AI would do. If the AI is acting like a human driver, we would certainly expect that the human driver would do something overtly about the occupant that has left the moving car. Call 911. Slow down. Turn around. Do something. Unless the human driver and the occupants are somehow in agreement about leaving the self-driving car, and maybe they made some pact to do so, it would seem prudent and expected that a human driver would do something to come to the aid of the other person. Thus, so should the AI.

You might wonder how the AI would even realize that a human has left the car?

Consider that there are these key aspects of the driving task by the AI (a brief sketch of this processing loop follows the list):

  •         Sensor data collection and interpretation
  •         Sensor fusion
  •         Virtual world model updating
  •         AI action planning
  •         Car controls commands issuance
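Here is the promised sketch of one pass through those stages. The interfaces (read, fuse, update, next_action, issue) are hypothetical placeholders, not any particular vendor’s API:

```python
def driving_cycle(sensors, fuser, world_model, planner, controls):
    """One simplified pass through the five stages listed above.
    All interfaces are hypothetical; real systems run these stages
    concurrently and at different rates."""
    readings = {s.name: s.read() for s in sensors}  # sensor data collection and interpretation
    fused = fuser.fuse(readings)                    # sensor fusion
    world_model.update(fused)                       # virtual world model updating
    plan = planner.next_action(world_model)         # AI action planning
    controls.issue(plan)                            # car controls commands issuance
```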

See my article about the framework of AI self-driving cars: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

The AI self-driving car will likely have sensors pointing outward of the car, such as the use of radar, cameras, LIDAR, sonar, and the like. These provide an indication of what is occurring outside of the self-driving car in the surrounding environment.

It is likely that there will also be sensors pointing inward into the car compartment. For example, it is anticipated that there will be cameras and an audio microphone in the car compartment. The microphone allows the human occupants to verbally interact with the AI system, similar to interacting with a Siri or Alexa. The camera would allow those within the self-driving car to be seen, such that if the self-driving car is being used to drive your children to school, you could readily see that they are doing OK inside the AI self-driving car.

For more about the natural language interaction with human occupants in a self-driving car, see my article: https://aitrends.com/features/socio-behavioral-computing-for-ai-self-driving-cars/

I’ll walk you through a scenario of an AI self-driving car at a Level 5 and the case of someone that opts to exit from the self-driving car while it is in motion.

Joe and Samantha have opted to use the family AI self-driving car to go to the beach. They both gather up their beach towels and sunscreen, and get into the AI self-driving car. Joe tells the AI to take them to the beach. Dutifully, the AI system repeats back that it will head to the beach and indicates an estimated arrival time. Samantha and Joe settle into their seats and opt to watch a live video stream of a volleyball tournament taking place at the beach, which they hope to arrive at before it ends.

At this juncture, the AI system would have used the inward facing camera to detect that two people are in the self-driving car. In fact, it would recognize them since it is the family car and they have been in it many times before. The AI sets the internal environment to their normal preferences, such as the temperature, the lighting, and the rest. It proceeds to drive the car to the beach.

Once the self-driving car gets close to the beach, it turns out there’s lots of traffic, as many other people opted to drive to the beach that day. Joe starts to get worried that he’s going to miss seeing the end of the volleyball game in person. So, while the self-driving car is crawling along at about five to eight miles per hour in solid traffic, Joe suddenly decides to open the car door and leap out. He then runs over to the volleyball game to see the last few moments of the match.

Level 5 Self-Driving Car Thinks About Passenger Who Jumped Out

The AI system would have detected that the car door had opened and closed. The inward facing cameras would have detected that Joe had moved toward the door and exited. The outward facing cameras, the sonar, the radar, and the LIDAR would all have detected him once he got out of the self-driving car. The sensor fusion would have put together the data from those outward facing sensors and been able to ascertain that a human was near to the self-driving car, and proceeding away from it at a relatively fast pace.

The virtual world model would have contained an indicator of a human near to the self-driving car, once Joe had gotten out of the self-driving car. And, it would also have indicators of the other nearby cars. It is plausible then that the AI would via the sensors be aware that Joe had been in the self-driving car, had gotten out of it, and was then moving away from it.

The big question then is what should the AI action planning do? If Joe’s exit does not pose a threat to the AI self-driving car, in the sense that Joe moved rapidly away from it, and so he’s not a potential inadvertent target of the self-driving car by its moving forward, presumably there’s not much that needs to be done. The AI doesn’t need to slow down or stop the car. But, this is unclear since it could be that Joe somehow fell out of the car, and so maybe the self-driving car should come to a halt safely.
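One way to picture this action-planning dilemma is as a triage function over the fused evidence. The sketch below is hypothetical; the input flags would come from the door sensors, interior cameras, and outward-facing sensors just described, and the response strings stand in for real maneuvers:

```python
def plan_after_exit(count_before: int, count_after: int,
                    person_moving_away: bool, exit_was_announced: bool) -> str:
    """Hypothetical triage for a Level 5 AI once the sensors report an exit.
    Inputs would come from door sensors, interior cameras, and the fused
    outward-facing sensors; the strings stand in for real actions."""
    if count_after >= count_before:
        return "no exit detected: continue"
    if not person_moving_away:
        # The person may have fallen out or be in the car's path.
        return "safe stop, hazard lights, check on the person"
    if exit_was_announced:
        # Joe told the AI beforehand: the "pact" case from the scenario.
        return "continue journey; briefly track the person"
    return "slow down and ask the remaining occupants what happened"
```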

Here’s where the interaction part comes into play. The AI could potentially ask the remaining human occupant, Samantha, about what has happened and what to do. It could even have called out to Joe, when he first opened the door to exit, and asked what he was doing. Joe, had he been thoughtful, could even have told the AI beforehand that he was planning on jumping out of the car while it was in motion, and thus a kind of “pact” would have been established.

These aspects are not so easily decided upon. Suppose the human occupant is unable to interact with the AI, or refuses to do so? This is a contingency that the AI needs to contend with. Suppose the human is purposely doing something highly dangerous? Perhaps in this case that when Joe jumped out, there was another car coming up that the AI could detect and knew might hit Joe, what should the AI have done?

Some say that maybe the best way to deal with this aspect of leaping out of the car involves making the car doors unable to be opened by the human occupants while inside the AI self-driving car. This might seem appealing, as an easy answer, but it fails to recognize the complexity of the real world. Will people accept the idea that they are locked inside an AI self-driving car and cannot get out on their own? Doubtful. If you say just have the humans tell the AI to unlock the door when they want to get out, and the AI can refuse when the car is in motion, this again will likely be met with skepticism by humans as a viable means of human control over the automation.

A similar question though does exist about self-driving cars and children.

If AI self-driving cars are going to be used to send your children to school or play, do you want those children to be able to get out of the self-driving car whenever they wish? Probably not. You would want the children to be forced to stay inside. But, there’s no adult present to help determine when unlocking the doors is good or not to do. Some say that by having inward facing cameras and a Skype-like feature, the parents could be the ones that instruct the AI via live streaming to go ahead and unlock the doors when appropriate. This of course has downsides, since it assumes that there will be a responsible adult available for this purpose and that they’ll have a real-time connection to the self-driving car, etc.
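A door-unlock policy along these lines might look like the following sketch. The inputs and thresholds are invented for illustration, and any real policy would need far more nuance (and regulatory blessing):

```python
def may_unlock_doors(car_speed_mph: float,
                     occupants_are_minors: bool,
                     remote_adult_approved: bool,
                     emergency_detected: bool) -> bool:
    """Hypothetical door-unlock policy for the child-passenger case.
    An emergency (fire, crash) should override everything else."""
    if emergency_detected:
        return True                   # never trap occupants in an emergency
    if car_speed_mph > 0.5:
        return False                  # car in motion: keep doors locked
    if occupants_are_minors:
        return remote_adult_approved  # rely on live adult sign-off
    return True                       # adults at a standstill: unlock freely
```

Note the ordering: the emergency override comes first, precisely because a blanket lock-the-doors rule is what people would (rightly) refuse to accept.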

Each of the other human actions, such as entering the car while in motion, chasing after a self-driving car, hanging onto a self-driving car, riding on top of a self-driving car, and so on, has its own particulars as to what the AI should and maybe should not do.

Being able to detect any of these human actions is the “easier” part since it involves finding objects and tracking those objects (when I say easy, I am not saying that the sensors will work flawlessly, nor that such detections will necessarily be reliable; I am simply saying that the programming for this is clearer than the AI action planning is).

Using machine learning or similar kinds of automation to figure out what to do is unlikely to get us out of the pickle of what the AI should do. There are generally few instances of this kind, and each instance would tend to have its own unique circumstances. It would be hard to assemble a large enough training set. There would also be the concern that the learning would overfit to the limited data and thus not be viable in the generalizable situations that are likely to arise.

Our view of this is that it is something requiring templates and programmatic solutions, rather than an artificial neural network or similar. Nonetheless, allow me to emphasize that we still see these as circumstances that once encountered should go up to the cloud of the AI system for purposes of sharing with the rest of the system and for enhancing the abilities of the on-board AI systems that otherwise have not yet encountered such instances.
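To illustrate what a template-and-programmatic approach could look like, here is a toy event-to-response table. The events and responses are invented placeholders, not a vetted safety policy:

```python
# Hypothetical template table: rare moving-car events mapped to responses.
# The point above is that with too few training examples, hand-built
# templates like these are more defensible than a learned model.
EVENT_TEMPLATES = {
    "occupant_exited_moving_car":  "safe stop or track person; report to cloud",
    "person_entering_moving_car":  "brake gently; unlock only at standstill",
    "person_riding_on_car":        "safe stop; refuse to resume until clear",
    "person_hanging_onto_car":     "slow gradually; avoid abrupt maneuvers",
    "person_facing_off_with_car":  "stop; treat as pedestrian obstruction",
    "person_chasing_car":          "continue; monitor in case of emergency",
}

def respond_to_event(event: str) -> str:
    # Unknown events get a conservative default and get uploaded for review.
    return EVENT_TEMPLATES.get(event, "safe stop; escalate to cloud for analysis")
```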

For understanding the OTA capabilities of AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/air-ota-updating-ai-self-driving-cars/

The odds are high that human occupants will be tempted to jump out of a moving AI self-driving car more so than a human-driven car, or similarly to try to get into one that is moving. I say this because at first humans will likely be timid with the AI and hesitant to do anything untoward, but after a while the AI will become more accepted and humans will become bolder. If your friend or parent is driving the car, you are likely more socially bound to not do strange tricks; you would worry that they might get in trouble. With the AI driving the car, you have no such social binding per se. I’m sure that many maverick teenagers will delight in “tricking” the AI self-driving car into doing all sorts of Instagram-worthy untoward things.

Of course, it’s not always just maverick kinds of actions that would occur. I’ve had situations wherein I was driving in an area that was unfamiliar, and a friend walked ahead of my car, guiding the way. If you owned an AI self-driving car of Level 5, you might want it to do the same — you get out of the self-driving car and have it follow you. In theory, the self-driving car should come to a stop before you get out, and likewise be stopped when you want to get in, but is this always going to be true? Do we want to have such unmalleable rules for our AI self-driving cars?

Should your AI self-driving car enable you to undertake the Shiggy Challenge?

In theory, a Level 5 AI self-driving car could do so and even help you do so. It could do the video recording of your dancing. It could respond to your verbal commands to slow down or speed-up the car. It could make sure to avoid any upcoming cars and thus avert the possibility of ramming into someone else while you are dancing wildly to “In My Feelings.” This is relatively straightforward.

But, as a society, do we want this to be happening? Will it encourage behavior that ultimately is likely to lead to human injury and possibly death? We can add this to the long list of ethics aspects of AI self-driving cars. Meanwhile, it’s something that cannot be neglected, else we’ll for sure have AI that’s unaware, those “stupid” humans will get themselves into trouble, and the AI might get axed because of it.

As the song says: “Gotta be real with it, yup.”

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.

 

Crossing the Rubicon and AI Self-Driving Cars


By Lance Eliot, the AI Trends Insider

Julius Caesar is famously known for his radical act in 49 BC of defying authority by marching his army across the Rubicon river. Unless you happen to be a historian, you might not be aware that the Roman Senate had explicitly ordered Caesar to disband his army, return to Rome, and not to bring his troops across the Rubicon. His doing so was an outright act of defiance.

Not only was Caesar defiant, he was risking everything by taking such a bold and unimaginable act. The government of Rome and its laws were very clear cut that any holder of imperium (the appointed right to command troops) who dared to cross the Rubicon would forfeit that imperium, meaning they would no longer hold the right to command. Furthermore, it was considered a capital offense that would cause the commander to become an outlaw. The commander would be condemned to death, and, just to give the commander some pause for thought, all of the troops that followed the commander across the Rubicon would also be condemned to death. Presumably, the troops would not be willing to risk their own lives, even if the commander was willing to risk his.

As we now know, Caesar made the crossing. When he did so, he reportedly exclaimed “alea iacta est,” which loosely translated means the die has been cast. We use today the idiom “crossing the Rubicon” to suggest a circumstance where you’ve opted to go beyond a point of no return. There is no crossing back. You can’t undo what you’ve done. In the case of Caesar, his gamble ultimately kind of paid off, as he was never punished per se for his act of rebellion, and he went on to rule Rome until his assassination in 44 BC.

I’m sure that most of us have had situations where we felt like we were crossing the Rubicon.

One time I was out in the wilderness as a scoutmaster and decided to take the scouts over to a mountain area that could be readily hiked to. While doing the hike, I began to realize that we were going across a dry streambed. Sure enough, when we reached the base of the mountain, rain began to fall, and the streambed began to fill with water. Getting back across it would not have been easy. The more the rain fell, the faster the stream became. Eventually, the stream was so active that we were stuck on the other side of it. We had crossed our own Rubicon.

At work, you’ve probably had projects that involved making some difficult go or no-go decisions. At one company, I had a team of developers and we were going to create a new system to keep track of VHS video tapes, but we also knew that DVD was emerging. Should we make the system for VHS or for DVD? We only had enough resources to do one. After considering the matter, we opted to hope that DVD was going to catch on, and so we proceeded to focus on DVDs. We got lucky and it turned out to be one of the first such systems and even earned an award for its innovation. We crossed the Rubicon and luckily landed on the right side.

Of course, crossing the Rubicon can lead to bad results. Caesar was fortunate that he was not right away killed for his insubordination. Maybe his own troops might even have tried to kill him, since there were bound to be some that didn’t want to get caught up in the whole you-are-condemned-to-death thing. Consider the recent news story about the teenage soccer team in Thailand that went into the caves and became lost, with the rain then closing off their exit; they all easily could have died in those caves were it not for the tremendous and lucky effort that ultimately saved them.

What does this all have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI for self-driving cars. As we do so, there are often very serious and crucial “crossing the Rubicon” kinds of decisions to be made. These same decisions are being made right now by auto makers and tech firms also developing AI self-driving cars.

Let’s take a look at some of those kinds of difficult and nearly undoable decisions that need to be made.

  •         LIDAR

LIDAR (light detection and ranging) is a type of sensor that can be used for an AI self-driving car. Beams of light are sent out from the sensor, the light bounces back in a radar-like manner, and the sensor is able to gauge the shapes and distances of nearby objects by how long the light takes to return. This can be a handy means to have the AI determine that there is a pedestrian standing ahead of the self-driving car at a distance of say 15 feet. Or that there is a fire hydrant over to the right of the self-driving car at a distance of 20 feet. And so on. A brief sketch of the time-of-flight arithmetic appears below.
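The underlying time-of-flight arithmetic is simple: distance is the speed of light times the round-trip time, divided by two (the pulse travels out and back). A quick sketch:

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def lidar_distance_m(round_trip_seconds: float) -> float:
    """Time-of-flight ranging: the pulse travels out and back,
    so divide the round trip by two."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2.0

# A return after 100 nanoseconds puts the object about 15 meters away.
print(lidar_distance_m(100e-9))  # ~14.99 meters
```

At 15 feet (about 4.6 meters), the round trip is only roughly 30 nanoseconds, which is why LIDAR units need very fast timing electronics.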

For my assessment of LIDAR for AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/lidar-secret-sauce-self-driving-cars/

AI self-driving cars tend to use conventional radar to try and identify the surroundings; they use sonic sensors to do likewise; and they use cameras to capture visual images and try to analyze what’s around via vision-related processing. They can also use LIDAR. There is no stated requirement that an AI self-driving car has to use any of those kinds of sensors. It is up to whatever the designers of the self-driving car decide to do.

That being said, it is hard to imagine that a self-driving car could properly operate in the real-world if you didn’t have cameras on it and weren’t doing vision processing of the images. You could maybe decide you’ll only use cameras, but that’s a potential drawback since there are going to be situations where vision alone won’t provide a sufficient ability to sense the real-world around the self-driving car. Thus, you’d likely want to add at least radar. Now, with the cameras and radar, you have a fighting chance of being able to have a self-driving car that can operate in the real-world. Adding sonar would help further.

What about LIDAR? Well, if you only had LIDAR, you’d probably not have much of an operational self-driving car, so you’d likely want to add cameras too. Now, with LIDAR and cameras, you have a fighting chance. If you also add radar, you’ve further increased the abilities. Add sonic sensors and you’ve got even more going for you.

Indeed, you might say to yourself, hey, I want my self-driving car to have as many kinds of sensors as will increase the capabilities of the self-driving car to the maximum possible. Therefore, if you already had cameras, radar, and sonar, you’d likely be inclined to add LIDAR. That being said, you also need to be aware that nothing in life is free. If you add LIDAR, you are adding the costs associated with the LIDAR sensor. You are also increasing the amount of AI programming required to collect the LIDAR data and analyze it.

There are these major stages of processing for self-driving cars:

  •         Sensor data collection and interpretation
  •         Sensor fusion
  •         Virtual model updating
  •         AI action plan updating
  •         Car controls commands issuance

See my framework about AI self-driving cars: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

If you add LIDAR to the set of sensors for your self-driving car, you also presumably need to add the software needed to do the sensor data collection and interpretation of the LIDAR. You also presumably need to boost the sensor fusion to reconcile the LIDAR results with the radar results, the camera vision processing results, and the sonar results. Some would say that makes sense because it’s like reconciling your sense of smell, sense of sight, sense of touch, and sense of hearing: if you lacked one of those senses you’d have a lesser ability to sense the world. You would likely argue that the overhead of doing the sensor fusion is worth what you’d gain.
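As a toy illustration of what “reconciling” sensor results can mean, here is a confidence-weighted average of per-sensor range estimates for a single object. Real sensor fusion (Kalman filters and the like) is far more involved, and the numbers below are invented:

```python
def fuse_range_estimates(estimates):
    """Toy confidence-weighted fusion of distance estimates for one object.
    `estimates` maps sensor name -> (distance_m, confidence in [0, 1]).
    Real fusion (e.g., Kalman filtering) is far more involved."""
    total_weight = sum(conf for _, conf in estimates.values())
    if total_weight == 0:
        raise ValueError("no usable sensor estimates")
    return sum(dist * conf for dist, conf in estimates.values()) / total_weight

# LIDAR and camera mostly agree; the low-confidence radar barely shifts the result.
print(fuse_range_estimates({
    "lidar":  (14.8, 0.9),
    "camera": (15.3, 0.6),
    "radar":  (13.9, 0.2),
}))  # ~14.87 meters
```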

Nearly all of the auto makers and tech firms would agree that LIDAR is essential to achieving a true AI self-driving car. A true AI self-driving car is considered by industry standards to be a self-driving car of a Level 5. There are levels less than 5 that are self-driving cars requiring a human driver. These involve co-sharing of the driving task with a human driver. For a Level 5 self-driving car, the idea is that the self-driving car is driven only by the AI, and there is no need for a human driver. The Level 5 self-driving car is even likely to omit entirely any driving controls for humans, and the Level 5 is expected to be able to drive the car as a human would (in terms of being able to handle any driving task to the same degree a human could do so).

For my article about the levels of AI self-driving cars, see: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

Tesla Foregoes LIDAR

It might then seem obvious that of course all self-driving cars would use LIDAR. Not so for Tesla. Tesla and Elon Musk have opted to go without LIDAR. One of Elon Musk’s most famous quotes for those in the self-driving car field is this one:

“In my view, it’s a crutch that will drive companies to a local maximum that they will find very hard to get out of. Perhaps I am wrong, and I will look like a fool. But I am quite certain that I am not.”

https://www.theverge.com/2018/2/7/16988628/elon-musk-lidar-self-driving-car-tesla

This is the crossing of the Rubicon for Tesla.

Right now and for the foreseeable future, they are not making use of LIDAR. It could be that they’ve made a good bet and everyone else will later on realize they’ve needlessly deployed LIDAR. Or, maybe there’s more than one way to skin a cat, and it will turn out that Tesla was right about being able to forego LIDAR, while the other firms were right to not forego it. Perhaps both such approaches will achieve the same ends of getting us to a Level 5 self-driving car.

For Tesla, if they are betting wrong, it would imply that they will be unable to achieve a Level 5 self-driving car. And if that’s the case, and the only way to get there is to add LIDAR, they would then need to add it to their self-driving cars. Retrofitting would likely be costly and might or might not be viable. They might instead opt to redesign future models and write off the prior models as unalterable, but at that point they will be behind other auto makers, and will need to figure out after the fact how to integrate LIDAR into everything else. Either way, it’s going to be costly and could cause significant delays and a falling behind of the rest of the marketplace.

It would also cause Tesla to have to eat crow, as it were, since they’ve all along advertised that your Tesla has “Full Self-Driving Hardware on All Cars” – which might even get them caught in lawsuits by Tesla owners that argue they were ripped off and did not actually get all the hardware truly needed for a self-driving car. This could lead to class action lawsuits. It could drain the company of money and focus. It would likely cause the stock to drop like a rock.

For my article about product liability for AI self-driving cars, see: https://aitrends.com/selfdrivingcars/product-liability-self-driving-cars-looming-cloud-ahead/

For my article about class action lawsuits against AI self-driving car makers see: https://aitrends.com/selfdrivingcars/first-salvo-class-action-lawsuits-defective-self-driving-cars/

This does not mean that Tesla couldn’t re-cross the Rubicon and opt to add LIDAR, but it just shows that when you’ve made the decision to cross the Rubicon, going back is often somewhat infeasible or going to be darned hard to do.

Perhaps Elon Musk had uttered “alea iacta est” when he made this rather monumental decision.

  •         Straight to Level 5

Another potential crossing of the Rubicon involves deciding whether to get to Level 5 by going straight to it, or instead to get there by progressing via Level 3 and Level 4 first.

Some believe that you need to crawl before you walk, and walk before you run, in order to progress in this world. For self-driving cars, this translates into achieving Level 3 self-driving cars first. Then, after maturing with Level 3, move into Level 4. After maturing with Level 4, move into Level 5. This is the proverbial “baby steps” kind of approach.

Others assert that there’s no need to do this progressively. You can skip past the intermediary levels. Just aim directly to get to Level 5. Some would say it is a waste of time to do the intermediary levels. Others would claim you’ll not get to Level 5 if you don’t cut your teeth first on the lower levels. No one knows for sure.

Meanwhile, Waymo has pretty much made a bet that you can get straight to Level 5 and there’s no need to do the intermediaries. They rather blatantly eschew the intermediary steps approach. They have taken the bold route of get to the moon or bust. No need to land elsewhere beforehand. Will they be right? Suppose their approach falls flat, and it turns out that those who got to Level 4 are able to make the leap to Level 5, while the straight-to-Level-5 efforts never get finalized.

For more about the notion that Level 5 is like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

Does this mean that Waymo cannot re-cross the Rubicon and opt to first settle for a Level 4? As with all of these crossings, they could certainly back down, though it would likely involve added effort, costs, and so on.

  •         Machine Learning Models

When developing the AI for self-driving cars, by-and-large it involves making use of various machine learning models. Tough choices are made about which kinds of neural networks to craft and what forms of learning algorithms to employ. Decisions, decisions, decisions.

Trying to later on change these decisions can be difficult and costly. It’s another crossing of the Rubicon.

For my article about machine learning and AI self-driving cars, see: https://aitrends.com/ai-insider/machine-learning-benchmarks-and-ai-self-driving-cars/

  •         Virtual World Model

At the crux of most AI self-driving car systems there is a virtual world model. It is used to bring together all of the information and interpretations about the world surrounding the self-driving car. It embodies the latest status gleaned from the sensors and the sensor fusion. It is used for the creation of AI action plans. It is crucial for doing what-if scenarios in real-time for the AI to try and anticipate what might happen next.

In that sense, it’s like having to decide whether to use a Rubik’s cube or use a Rubik’s snake or a Rubik’s domino. Each has its own merits. Whichever one you pick, everything else gets shaped around it. Thus, if you put at the core a virtual world model structure that is of shape Q, you are going to base the rest of the AI on that structure. It’s no easy thing to then undo and suddenly shift to shape Z. It would be costly and involve gutting much of the AI system you’d already built.
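To see why the chosen structure is so sticky, consider a minimal hypothetical world-model core. Once sensor fusion, the planner, and the real-time what-if simulation all code against this shape, swapping in a different representation (an occupancy grid, say) means rewriting all of them:

```python
from dataclasses import dataclass, field

@dataclass
class TrackedObject:
    kind: str            # "car", "pedestrian", "fire_hydrant", ...
    position_m: tuple    # (x, y) relative to the self-driving car
    velocity_mps: tuple  # estimated velocity vector

@dataclass
class VirtualWorldModel:
    """Hypothetical world-model core. Everything downstream (planning,
    what-if simulation) ends up coded against this exact shape: the
    'shape Q to shape Z' problem described above."""
    tracked: list = field(default_factory=list)

    def objects_within(self, radius_m: float):
        return [o for o in self.tracked
                if (o.position_m[0] ** 2 + o.position_m[1] ** 2) ** 0.5 <= radius_m]
```

The point is not this particular structure; it is that whichever structure you pick plays the role of shape Q, and everything else gets molded around it.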

It’s once again a crossing of the Rubicon.

  •         Particular Brand/Model of Car

Another tough choice in some cases is which brand/model of car to use as the core car underlying your AI self-driving car. The auto makers are of course going to choose their own brand/model. For the tech firms that are trying to make the AI of the self-driving car, the question arises as to whom you get into bed with. The AI you craft will be to a certain extent particular to that specific car.

I know that some of you will object and say that the AI, if properly written, should be readily ported over to some other self-driving car. This is much harder than it seems. I assure you it’s not just like re-compiling your code and voila it works on a different kind of car.

Furthermore, many of these tech firms are painting themselves into a corner. They are writing their AI code with magic numbers and other facets that will make porting the AI system nearly impossible. Without good commenting and thinking ahead about generalizing your system, it’s going to be stuck on whatever brand/model you started with. The rush right now to get the stuff to work is outweighing making it portable. Many will be shocked down the road when they suddenly realize they cannot overnight shift onto some other model of car.

See my article about kits for AI self-driving cars: https://aitrends.com/selfdrivingcars/kits-and-ai-self-driving-cars/

See my article about idealism and AI self-driving cars: https://aitrends.com/selfdrivingcars/idealism-and-ai-self-driving-cars/

  •         Premature Roadway Release

This last example of crossing the Rubicon has to do with putting AI self-driving cars onto public roadways, perhaps doing so prematurely.

The auto makers and tech firms are eager to put their self-driving cars onto public roadways. It is a sign to the world that there is progress being made. It helps boost stock prices. It helps for the AI itself to gain “experience” from being driven miles upon miles. It helps the AI developers as they tune and fix the AI systems and do so based on real-world encounters by the self-driving car.

That’s all well and good, except for the fact that it is a grand experiment upon the public. If the self-driving cars have problems and get into accidents, it’s not going to be good times for self-driving cars. Indeed, it’s the bad apple in the barrel: even if only one specific brand of self-driving car gets into trouble, the public will perceive the entire barrel as bad.

If the public becomes disenchanted with AI self-driving cars, you can bet that regulators will change their tune and no longer be so supportive of self-driving cars. A backlash will most certainly occur. This could slow down AI self-driving car progress. It could somewhat curtail it, but it seems unlikely to stop it entirely. Right now, we’re playing a game of dice and just hoping that few enough of the AI self-driving cars on the roadways have incidents that it won’t become a nightmare for the whole industry.

For more about this rolling of the dice, see my article about responsibility and AI self-driving cars: https://aitrends.com/ai-insider/responsibility-and-ai-self-driving-cars/

This then is another example of crossing the Rubicon.

Putting AI self-driving cars onto the roadways, if it turns out to be premature, might make it difficult to continue forward with self-driving cars, at least not at the pace we see today.

For the AI self-driving car field, there are a plethora of crossings of the Rubicon. Some decision makers are crossing the Rubicon like Caesar, fully aware of the chances they are taking, and betting that in the end they’ve made the right choice. Other decision makers are blissfully unaware that they have crossed the Rubicon, and only once something untoward happens will they realize that, oops, they made decisions earlier that now haunt them. Each of these decisions is not necessarily irreversible per se; it’s more that there is a cost and adverse impact if you’ve made the wrong choice and need to backtrack or redo what you’ve done.

I’d ask that all of you involved in AI self-driving cars make sure to be cognizant of the Rubicons you’ve already crossed, and which ones are still up ahead. I’m hoping that by raising your awareness, in the end you’ll be able to recite the immortal words of Caesar: Veni, vidi, vici (which translates loosely into I came, I saw, I conquered).

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.

 

Catalia Health Tries Free Interactive Robots for In-Home Patient Care


A little more than three-and-a-half years ago, Cory Kidd founded Catalia Health based on the work he did at the MIT Media Lab and Boston University Medical Center.

Headquartered in San Francisco, the company’s overarching goal is to improve patient engagement and launch behavior change. But the way it goes about meeting that mission is unique.

Through Catalia Health’s model, each patient is equipped with an interactive robot to put in their home. Named Mabu, the robot learns about each patient and their needs, including medications and treatment circumstances.

Mabu can then have tailored conversations with a patient about their routine and how they’re feeling. The information from those talks securely goes back to the patient’s pharmacist or healthcare provider, giving them an update on the individual’s progress and alerting them if something goes wrong.

Right now, the company is focused on bringing Mabu to patients with congestive heart failure. It is currently working with Kaiser Permanente on that front. But Catalia Health is also doing work on other disease states, such as rheumatoid arthritis and late-stage kidney cancer.

“We’re not replacing a person,” Kidd, the startup’s CEO, said in a recent phone interview. “[Providers have] the ability now to have a lot more insight on all their patients on a much more frequent basis.”

Why use a robot as a means to gather such insight?

Kidd explained: “We get intuitively that face-to-face [interaction] makes a difference. Psychologically, we know what that difference is: We create a stronger relationship and we find the person to be more credible. The robot can literally look someone in the eyes, and we get the psychological effects of face-to-face interaction.”

The robot — and face-to-face interaction — helps keep patients engaged over a long period of time, Kidd added.

As for its business model, Catalia Health works directly with pharma companies and health systems. These organizations pay the startup on a per patient, per month basis. The patient using Mabu doesn’t have to pay.

The company is also currently offering interested heart failure patients a free trial of Mabu. The patient simply has to give Catalia feedback on their experience.

“That’s ongoing and very active right now,” Kidd said of the free trial effort.

In late 2017, the company closed a $4 million seed round, following two previous funding rounds amounting to more than $7.7 million. Ion Pacific led the $4 million round. Khosla Ventures, NewGen Ventures, Abstract Ventures and Tony Ling also participated.

Read the source article at MedCityNews.

Meet the Man Who Invented the Self-Driving Car – in 1986


The other drivers wouldn’t have noticed anything unusual as the two sleek limousines with German license plates joined the traffic on France’s Autoroute 1.

But what they were witnessing — on that sunny, fall day in 1994 — was something many of them would have dismissed as just plain crazy.

It had taken a few phone calls from the German car lobby to get the French authorities to give the go-ahead. But here they were: two gray Mercedes 500 SELs, accelerating up to 130 kilometers per hour, changing lanes and reacting to other cars — autonomously, with an onboard computer system controlling the steering wheel, the gas pedal and the brakes.

Decades before Google, Tesla and Uber got into the self-driving car business, a team of German engineers led by a scientist named Ernst Dickmanns had developed a car that could navigate French commuter traffic on its own.


The story of Dickmanns’ invention, and how it came to be all but forgotten, is a neat illustration of how technology sometimes progresses: not in small steady steps, but in booms and busts, in unlikely advances and inevitable retreats, “one step forward and three steps back,” as one AI researcher put it.

It’s also a warning of sorts, about the expectations we place on artificial intelligence and the limits of some of the data-driven approaches being used today.

“I’ve stopped giving general advice to other researchers,” said Dickmanns, now 82 years old. “Only this much: One should never completely lose sight of approaches that were once very successful.”

From the skies to the street

Before becoming the man “who actually invented self-driving cars,” as Berkeley computer scientist Jitendra Malik put it, Dickmanns spent the first decade of his professional life analyzing the trajectories spaceships take when they reenter the Earth’s atmosphere.

Trained as an aerospace engineer, he quickly rose through the ranks of West Germany’s ambitious aerospace community so that in 1975, still under 40, he secured a position at a new research university of Germany’s armed forces.

By this point, he had already started mulling what would soon become his life mission: teaching vehicles how to see. The place to start, Dickmanns became increasingly convinced, was not spaceships but cars. Within a few years, he had bought a Mercedes van, outfitted it with computers, cameras and sensors, and began running tests on the university premises in 1986.

“The colleagues at the university said, well, he’s an oddball, but he’s got a track record [of achievements in aerospace technology,] so let’s just let him do it,” Dickmanns said during an interview at his family house, located steps from an onion-domed church in Hofolding, a small town outside of Munich.

In 1986, Dickmanns’ van became the first vehicle to drive autonomously — on the skidpan at his university. The next year, he sent it down an empty section of a yet-to-be-opened Bavarian autobahn at speeds approaching 90 kilometers per hour. Soon afterward, Dickmanns was approached by the German carmaker Daimler. Together, they secured funding from a massive pan-European project, and in the early 1990s, the company came up with an idea that first seemed “absurd” to Dickmanns.

“Can’t you equip one of our large passenger cars for the final demonstration of the project in Paris in October [of 1994], and then drive on the three-lane motorway in public traffic?” he remembered officials asking.

He had to take a deep breath, “but then I told them that with my team, and the methods we’re using, I think we’re capable of doing that.”

Read the source article at Politico.