Coopetition and AI Self-Driving Cars

By Lance Eliot, the AI Trends Insider

Competitors usually fight tooth and nail for every inch of ground they can gain over the other. It’s a dog-eat-dog world, and if you can gain an advantage over your competition, the better off you will be. If you can somehow even drive your competition out of business, well, as long as it happened legally, there’s more of the pie for you.

Given this rather obvious and strident desire to beat your competition, it might seem like heresy to suggest that you might at times consider backing down from being at each other’s throats and instead, dare I say, possibly cooperate with your competition. You might not be aware that the US Postal Service (USPS) has cooperative arrangements with FedEx and UPS. On the surface, it seems wild to think that these competitors, all directly competing as shippers, would consider working together rather than solely battling each other.

Here’s another example: Wintel. For those of you in the tech arena, you know well that Microsoft and Intel have seemingly forever cooperated with each other. The Windows and Intel mash-up, Wintel, has been pretty good for each of them, respectively and collectively. When Intel’s chips became more powerful, it aided Microsoft in speeding up Windows and being able to add more, and heavier, features. As people used Windows and wanted faster speed and greater capabilities, it spurred Intel to boost its chips, knowing there was a place to sell them and more money to be made by doing so. You could say it is a synergistic relationship between those two firms that in combination has aided them both.

Now, I realize you might object somewhat and insist that Microsoft and Intel are not competitors per se, and thus the suggestion that this was two competitors finding a means to cooperate seems either an unfair characterization or a false one. You’d be somewhat on the mark to have noticed that they don’t seem to be direct competitors, though they could be if they wanted to (Microsoft could readily get into the chip business, Intel could readily get into the OS business, and they’ve both dabbled in each other’s pond from time to time). Certainly, though, it’s not as strong a straight-ahead competition example as the USPS, FedEx, and UPS kind of cooperative arrangement.

There’s a word used to depict the mash-up of competition and cooperation, namely coopetition.

The word coopetition grew into prominence in the 1990s. Some people instantly react to the notion of being both a competitor and a cooperator as though it’s a crazy idea. What, give away my secrets to my competition, are you nuts? Indeed, trying to pull off a coopetition can be tricky, as I’ll describe further herein. Please also be aware that occasionally you’ll see the more informal phrasing “frenemy” used to depict a similar notion (another kind of mash-up, this one between the word “friend” and the word “enemy”).

There are those that instantly recoil in horror at the idea of coopetition, and their knee-jerk reaction is that it must be utterly illegal. They assume that there must be laws that prevent such a thing. Generally, depending upon how the coopetition is arranged, there’s nothing illegal about it per se. A coopetition can, though, veer in a direction that raises legal concerns, and thus the participants need to be especially careful about what they do, how they do it, and what impact it has on the marketplace.

It’s not particularly the potential for legal difficulties that tends to keep coopetition from happening. By and large, structuring a coopetition arrangement, say by putting together a consortium, can be done with relatively little effort and cost. The real question, and the bigger difficulty, is whether the competing firms are able to find middle ground that allows them to enter into a coopetition agreement.

Think about today’s major high-tech firms.

Most of them are run by strong CEOs or founders that relish being bold and love smashing their competition. They often drive their firm to have a kind of intense “hatred” for the competition and want their firm to crush it. Within a firm, a cultural milieu often forms in which their firm is far superior and the competition is unquestionably inferior. Your firm is a winner; the competing firm is a loser. That being said, they don’t want you to let down your guard: though the other firm is an alleged loser, it can pop up at any moment and be on the attack. To some degree, there’s a begrudging respect for the competition, paradoxically mixed with disdain for it.

These strong personalities will generally tend to keep the competitive juices going and not permit the possibility of a coopetition option. On the other hand, even these strong personalities can be motivated to consider the coopetition approach, if the circumstances or the deal looks attractive enough. With a desire to get bigger and stronger, if it seems like a coopetition could get you there, the most egocentric of leaders is willing to give the matter some thought. Of course, it’s got to be incredibly compelling, but at least it is worthy of consideration and not out of hand to float the idea.

What could be compelling?

Here’s a number for you: $7 trillion.

Allow me to explain.

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. We do so because it’s going to be a gargantuan market, and because it’s exciting to be creating something that’s on par with a moonshot.

See my article about how making AI self-driving cars is like a moonshot: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

See my article that provides a framework about AI self-driving cars: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/

Total AI Self-Driving Car Market Estimated at $7 Trillion

Suppose you were the head of a car maker, or the head of a high-tech firm that wants to make or is making tech for cars, and I told you that the potential market for AI self-driving cars is estimated at $7 trillion by the year 2050 (as predicted in Fortune magazine, see: http://fortune.com/2017/06/03/autonomous-vehicles-market/).

That’s right, I said $7 trillion. It’s a lot of money. It’s a boatload, and more, of money. The odds are that you would want to do whatever you could to get a piece of that action. Even a small slice, let’s say just a few percentage points, would make your firm huge.

Furthermore, consider things from the other side of that coin. Suppose you don’t get a piece of that pie. Whatever else you are doing is likely to become crumbs. If you are making conventional cars, the odds are that few will want to buy them anymore. Some AI self-driving car pundits are even suggesting that conventional cars will be outlawed by 2050. The logic is that if you have conventional cars being driven by humans on our roadways in the 2050s, it will muck up the potential nirvana of having all AI self-driving cars, which presumably will be able to work in unison and thus get us to the vaunted zero-fatalities goal.

For my article that debunks the zero fatalities goal, see: https://aitrends.com/selfdrivingcars/self-driving-cars-zero-fatalities-zero-chance/

If you are a high-tech firm and you’ve not gotten into the AI self-driving car realm, your fear is that you’ll also miss out on the $7 trillion prize. Suppose that your high-tech competitor got into AI self-driving cars early on and became the standard, kind of like the fight between VHS and Betamax. Maybe it’s wisest to get into things early and become the standard.

Or, alternatively, maybe the early arrivers will waste a lot of money trying to figure out what to do, so instead of falling into that trap, you wait on the periphery, avoiding the drain of resources, and then jump in once the others have flailed around. Many in Silicon Valley seem to believe that you have to be the first into a new realm. This is actually a false belief, since many of the most prominent firms in many areas weren’t there first; they instead came along after others had poked and tried, and on the heels of those true first attempts stepped in and became household names.

Let’s return to the notion of coopetition. I assume we can agree that generally the auto makers aren’t very likely to want to be cooperative with each other and usually consider themselves head-on competitors. I realize there have been exceptions, such as the deal that PSA Peugeot Citroen and Toyota made to produce the Peugeot 107 and the Toyota Aygo, but such arrangements are somewhat sparse. Likewise, the high-tech firms tend to strive toward being competitive with each other, rather than cooperative. Again, there are exceptions, such as a willingness to serve on groups that are putting together standards and protocols for various architectural and interface aspects (think of the World Wide Web Consortium, W3C, as an example).

We’ve certainly already seen that auto makers and high-tech firms are willing to team up in the AI self-driving cars realm.

In that sense, it’s kind of akin to the Wintel type of arrangement. I don’t think we’d infer these are true coopetition arrangements, since the firms weren’t especially competing to begin with. Google’s Waymo has teamed up with Chrysler to outfit the Pacifica minivans with AI self-driving car capabilities. Those two firms weren’t especially competitors. I realize you could assert that Google could get into the car business and be an auto maker if it wanted to, which is quite the case, and it could buy its way in or even start something from scratch. You could also assert that Chrysler is doing its own work on high-tech aspects for AI self-driving cars and in that manner might be competing with Waymo. It just doesn’t quite add up to them being true competitors per se, at least not right now.

So, let’s put to the side the myriad auto maker and high-tech firm cooperatives underway and say that we aren’t going to label those as coopetitions. Again, I realize you can argue the point and might say that even if they aren’t competitors today, they could become competitors a decade from now. Yes, I get that. Just go along with me on this for now, and we can keep in mind the future possibilities too.

Consider these thought-provoking questions:

• Could we get the auto makers to come together into a coopetition arrangement to establish the basis for AI self-driving cars?
• Could we get the high-tech firms to come together into a coopetition arrangement to establish the basis for AI self-driving cars?
• Could we get the auto makers and tech firms that are already in bed with each other to altogether enter into a coopetition arrangement?

I get asked these questions during a number of my industry talks. There are some that believe the goal of achieving AI self-driving cars is so crucial for society, so important for the benefit of mankind, that it would be best if all of these firms could come together, shake hands, and forge the basis for AI self-driving cars.

For my article about idealists in AI self-driving cars, see: https://aitrends.com/selfdrivingcars/idealism-and-ai-self-driving-cars/

Why would these firms be willing to do this? Shouldn’t they instead want to “win” and become the standard for AI self-driving cars? The tempting $7 trillion is a pretty alluring pot of gold. It seems premature to already throw in the towel and allow other firms to grab a piece of the pie. Maybe your efforts will knock them out of the picture. You’ll have the whole kit and caboodle yourself.

Those proposing a coopetition notion for AI self-driving cars are worried that the rather “isolated” attempts by each of the auto makers and the tech firms are going to either lead to failure in terms of true AI self-driving cars, or stretch out for a much longer time than needed. Suppose you could have true AI self-driving cars by the year 2030 if you did a coopetition deal, versus true AI self-driving cars not emerging until 2050 or 2060. This means that for perhaps 20 or 30 years there could have been true AI self-driving cars, to the benefit of us all, and yet we let it slip away due to being “selfish” and allowing the AI self-driving car makers to duke it out.

For selfishness and AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/selfishness-self-driving-cars-ai-greed-good/

You’ve likely seen science fiction movies about a giant meteor that is going to strike Earth and destroy all that we have, or an alien force from Mars that is heading to Earth and likely to enslave us all. In those cases, there has been a larger foe to contend with. As such, it got all of the countries of the world to set aside their differences and band together to try to defeat the larger foe. I’m not saying that would happen in real life, and perhaps instead everyone would tear each other apart, but anyway, let’s go with the happy-face scenario and say that when faced with tough times, we could get together those that otherwise despise each other or see each other as enemies, and they would become cooperative.

That’s what some want to have happen in the AI self-driving cars realm. The bigger foe is the number of annual fatalities due to car accidents. The bigger foe also includes the lack of democratization of mobility, which it is hoped AI self-driving cars will remedy by bringing forth greater democratization. The bigger foe is the need to increase mobility for those that aren’t able to be mobile. In other words, given the basket of benefits that AI self-driving cars offer, and the basket of woes they would overturn, the belief is that the auto makers and tech firms should band together into a coopetition.

Zero-Sum Versus Coopetition in Game Theory

Game theory comes into play in coopetition.

If you believe in a zero-sum game, whereby the pie is just one size and those that get a bigger piece of the pie are doing so at the loss of others that will get a smaller piece of the pie, the win-lose perspective makes it hard to consider participating in a coopetition. On the other hand, if it could be a win-win possibility, whereby the pie can be made bigger, and thus the participants each get sizable pieces of pie, it makes being in the coopetition seemingly more sensible.
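
To make the zero-sum versus win-win distinction concrete, here is a minimal Python sketch; the payoff numbers are purely illustrative assumptions, not market estimates.

```python
# Illustrative payoff tables for two rival firms deciding whether to compete
# or to enter a coopetition. All numbers are made up for illustration.

# Zero-sum view: the pie is fixed at 1.0, so cooperation changes nothing.
zero_sum = {
    ("compete", "compete"): (0.5, 0.5),
    ("cooperate", "cooperate"): (0.5, 0.5),
}

# Win-win view: coopetition grows the pie (say, to 1.8), so both gain.
positive_sum = {
    ("compete", "compete"): (0.5, 0.5),
    ("cooperate", "cooperate"): (0.9, 0.9),
}

for name, game in [("zero-sum", zero_sum), ("win-win", positive_sum)]:
    compete_total = sum(game[("compete", "compete")])
    cooperate_total = sum(game[("cooperate", "cooperate")])
    print(f"{name}: pie if both compete = {compete_total}, "
          f"pie if both cooperate = {cooperate_total}")
```

Under the zero-sum table there is no payoff-based reason to cooperate; under the win-win table, cooperation enlarges each firm’s slice, which is the scenario that coopetition proponents have in mind.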

How would things fare in the AI self-driving cars realm? Suppose that auto maker X has teamed up with high-tech firm Y; they are the XY team, and they are frantically trying to be the first with a true AI self-driving car. Meanwhile, we’ve got auto maker Q and its high-tech partner firm Z, and so the QZ team is also frantically trying to put together a true AI self-driving car.

Would XY be willing to get into a coopetition with QZ, and would QZ want to get into a coopetition with XY?

If XY believes they need no help and will be able to achieve an AI self-driving car and do so on a timely basis and possibly beat the competition, it seems unlikely they would perceive value in doing the coopetition. You can say the same about QZ, namely, if they think they are going to be the winner, there’s little incentive to get into the coopetition.

Some would argue that they could potentially shave the costs of trying to achieve an AI self-driving car by joining together. Pool resources. Do R&D together. They could possibly do some kind of technology transfer between each other, with one having gotten more advanced in some area than the other, and thus they trade on the things each has gotten farthest along on. There’s a steep learning curve on the latest in AI, and so XY and QZ could perhaps boost each other up that learning curve. It seems like the benefits of being in a coopetition are convincing.

And it is already the case that these auto makers and tech firms are eyeing each other. They are each intent on knowing how far along the other is. They are hiring away key people from each other. Some would even say there is industrial espionage underway. Plus, in some cases, there are AI self-driving car developers that appear to have stepped over the line and stolen secrets about AI self-driving cars.

See my article about the stealing of secrets of AI self-driving cars: https://aitrends.com/selfdrivingcars/stealing-secrets-about-ai-self-driving-cars/

This coopetition is not so easy to arrange, let alone to even consider. Suppose you are the CEO of auto maker X, which has already forged a relationship with high-tech firm Y. The marketplace perceives that you are doing the right thing and moving forward with AI self-driving cars. This is a crucial perception for any auto maker, since we’ve already seen that auto makers will get drubbed by the marketplace, such as their shares dropping, if they don’t seem committed to achieving an AI self-driving car. It’s become a key determiner for the auto maker and its leadership.

The marketplace figures that your firm, you the auto maker, will be able to achieve AI self-driving cars and that consumers will flock to your cars. Consumers will be delighted that you have AI self-driving cars. The other auto makers will fall far behind in terms of sales as everyone switches over to you. In light of that expectation, it would be somewhat risky to come out and say that you’ve decided to do a coopetition with your major competitors.

I’d bet that there would be a stock drop as the marketplace reacted to this approach. If all the auto makers were in the coopetition, I suppose you could say that the money couldn’t flow anywhere else anyway.

On the other hand, if only some of the auto makers were in the coopetition, it would force the marketplace into making a bet. You might put your money into the auto makers that are in the coopetition, under the belief they will succeed first, or you might put your money into the other auto makers that are outside the coopetition, under the belief they will win and win bigger because they aren’t having to share the pie.

Speaking of which, what would be the arrangement for the coopetition? Would all of the members participating have equal use of the AI self-driving car technologies developed? Would they be in the coopetition forever or only until a true AI self-driving car was achieved, or until some other time or ending state? Could they take whatever they got from the coopetition and use it in whatever they wanted, or would there be restrictions? And so on.

I’d bet that the coopetition would have a lot of tension. There are always bound to be professional differences of opinion. A member of the coopetition might believe that LIDAR is essential to achieving a true AI self-driving car, while some other member says they don’t believe in LIDAR and see it as a false hope and a waste of time. How would the coopetition deal with this?

For other aspects about differences in opinions about AI self-driving car designs, see my article: https://aitrends.com/selfdrivingcars/crossing-the-rubicon-and-ai-self-driving-cars/

Also, see my article about egocentric designs: https://aitrends.com/selfdrivingcars/egocentric-design-and-ai-self-driving-cars/

Normally, a coopetition is likely to be formulated when the competitors are willing to find a common means to contend with something that is relatively non-strategic to their core business. If you believe that AI self-driving cars are the future of the automobile, it’s hard to see that it wouldn’t be considered strategic to the core business. Indeed, even though today we don’t necessarily think of AI self-driving cars as a strategic core per se, because it’s still so early in the life cycle, anyone with a bit of vision can see that soon enough it will be.

If the auto makers did get together in a coopetition, and they all ended up with the same AI self-driving car technology, how else would they differentiate themselves in the marketplace? I realize you can say that even today the auto makers are pretty much the same in the sense that they offer a car that has an engine and a transmission, etc. The “technology,” you might say, is about the same, and yet they do seem to differentiate themselves. Often, the differentiation is more about the style and looks of the car, rather than the tech side of things.

For how auto makers might be marketing AI self-driving cars in the future, see my article: https://aitrends.com/selfdrivingcars/marketing-self-driving-cars-new-paradigms/

For those that believe the AI part of the self-driving car will end up being the same for cars of the future, and won’t be a differentiator to the marketplace, this admittedly makes the case for banding into a coopetition on the high-tech stuff. If the auto makers believe that the AI will be a commodity item, why not get into a coopetition, figure this arcane high-tech AI stuff out, and be done with it? No sense in fighting over something that is going to be generic across the board anyway.

At this time, it appears that the auto makers believe they can reach a higher value by creating their own AI self-driving car, doing so in conjunction with a particular high-tech firm that they’ve chosen, rather than doing so via a coopetition. Some have wondered if we’ll see a high-tech firm that opts to build its own car, maybe from scratch, but so far that doesn’t seem to be the case (in spite of the rumors about Apple, for example). There are some firms that are developing both the car and the high-tech themselves, such as Tesla, and see no need to band with another firm, as yet.

Right now, the forces appear to be swayed toward the “don’t” side of doing a coopetition. Things could change. Suppose that no one is able to achieve a true AI self-driving car. It could be that the pressures become large enough (the bigger foe) that the auto makers and tech firms consider the coopetition notion. Or, maybe the government decides to step in and forces some kind of coopetition, doing so under the belief that it is a societal matter and regulatory guidance is needed to get us to true AI self-driving cars. Or, maybe indeed aliens from Mars start to head here and we realize that if we just had AI self-driving cars we’d be able to fend them off.

For my piece about conspiracy theories and AI self-driving cars, see: https://aitrends.com/selfdrivingcars/conspiracy-theories-about-ai-self-driving-cars/

There’s the old line about if you can’t beat them, join them. For the moment, it’s assumed that the ability to beat them outweighs the join-them alternative. The year 2050 is still off in the future, and anything might happen on the path to that $7 trillion.

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.

A Look Inside Facebook’s AI Machine

By Steven Levy, Wired

When asked to head Facebook’s Applied Machine Learning group — to supercharge the world’s biggest social network with an AI makeover — Joaquin Quiñonero Candela hesitated. It was not that the Spanish-born scientist, a self-described “machine learning (ML) person,” hadn’t already witnessed how AI could help Facebook. Since joining the company in 2012, he had overseen a transformation of the company’s ad operation, using an ML approach to make sponsored posts more relevant and effective. Significantly, he did this in a way that empowered engineers in his group to use AI even if they weren’t trained to do so, making the ad division richer overall in machine learning skills. But he wasn’t sure the same magic would take hold in the larger arena of Facebook, where billions of people-to-people connections depend on fuzzier values than the hard data that measures ads. “I wanted to be convinced that there was going to be value in it,” he says of the promotion.

Despite his doubts, Candela took the post. And now, after barely two years, his hesitation seems almost absurd.

How absurd? Last month, Candela addressed an audience of engineers at a New York City conference. “I’m going to make a strong statement,” he warned them. “Facebook today cannot exist without AI. Every time you use Facebook or Instagram or Messenger, you may not realize it, but your experiences are being powered by AI.”

Last November I went to Facebook’s mammoth headquarters in Menlo Park to interview Candela and some of his team, so that I could see how AI suddenly became Facebook’s oxygen. To date, much of the attention around Facebook’s presence in the field has been focused on its world-class Facebook Artificial Intelligence Research group (FAIR), led by renowned neural net expert Yann LeCun. FAIR, along with competitors at Google, Microsoft, Baidu, Amazon, and Apple (now that the secretive company is allowing its scientists to publish), is one of the preferred destinations for coveted grads of elite AI programs. It’s one of the top producers of breakthroughs in the brain-inspired digital neural networks behind recent improvements in the way computers see, hear, and even converse. But Candela’s Applied Machine Learning group (AML) is charged with integrating the research of FAIR and other outposts into Facebook’s actual products—and, perhaps more importantly, empowering all of the company’s engineers to integrate machine learning into their work.

Because Facebook can’t exist without AI, it needs all its engineers to build with it.

My visit occurs two days after the presidential election and one day after CEO Mark Zuckerberg blithely remarked that “it’s crazy” to think that Facebook’s circulation of fake news helped elect Donald Trump. The comment would turn out to be the equivalent of driving a fuel tanker into a growing fire of outrage over Facebook’s alleged complicity in the orgy of misinformation that plagued its News Feed in the last year. Though much of the controversy is beyond Candela’s pay grade, he knows that ultimately Facebook’s response to the fake news crisis will rely on machine learning efforts in which his own team will have a part.

But to the relief of the PR person sitting in on our interview, Candela wants to show me something else—a demo that embodies the work of his group. To my surprise, it’s something that performs a relatively frivolous trick: It redraws a photo or streams a video in the style of an art masterpiece by a distinctive painter. In fact, it’s reminiscent of the kind of digital stunt you’d see on Snapchat, and the idea of transmogrifying photos into Picasso’s cubism has already been accomplished.

“The technology behind this is called neural style transfer,” he explains. “It’s a big neural net that gets trained to repaint an original photograph using a particular style.” He pulls out his phone and snaps a photo. A tap and a swipe later, it turns into a recognizable offshoot of Van Gogh’s “The Starry Night.” More impressively, it can render a video in a given style as it streams. But what’s really different, he says, is something I can’t see: Facebook has built its neural net so it will work on the phone itself.
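
For readers curious what that looks like in code, here is a minimal sketch of neural style transfer in the spirit of the original Gatys et al. formulation that the term refers to; it is emphatically not Facebook’s on-device implementation. It assumes PyTorch with a recent torchvision (0.13+ for the weights API), and the file names, layer choices, and loss weight are illustrative.

```python
# Minimal neural style transfer sketch (Gatys-style): optimize the pixels of
# an image so its VGG features match the content photo while its Gram
# matrices match the style painting. Illustrative, not production code.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

load = transforms.Compose([transforms.Resize((256, 256)), transforms.ToTensor()])
content = load(Image.open("content.jpg").convert("RGB")).unsqueeze(0).to(device)
style = load(Image.open("style.jpg").convert("RGB")).unsqueeze(0).to(device)

# Conv layers commonly used: conv1_1..conv5_1 for style, conv4_2 for content.
layer_roles = {0: "style", 5: "style", 10: "style", 19: "style",
               28: "style", 21: "content"}

def features(x):
    feats = {"style": [], "content": []}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layer_roles:
            feats[layer_roles[i]].append(x)
    return feats

def gram(f):  # style similarity is measured via Gram matrices of features
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return (f @ f.t()) / (c * h * w)

target_content = [f.detach() for f in features(content)["content"]]
target_style = [gram(f).detach() for f in features(style)["style"]]

img = content.clone().requires_grad_(True)  # start from the photo itself
opt = torch.optim.Adam([img], lr=0.02)

for step in range(300):
    opt.zero_grad()
    feats = features(img)
    c_loss = sum(F.mse_loss(f, t) for f, t in zip(feats["content"], target_content))
    s_loss = sum(F.mse_loss(gram(f), t) for f, t in zip(feats["style"], target_style))
    (c_loss + 1e6 * s_loss).backward()  # style weight is a tunable guess
    opt.step()

transforms.ToPILImage()(img.detach().squeeze(0).clamp(0, 1).cpu()).save("stylized.jpg")
```

The on-phone version Candela describes would presumably rely on a small feed-forward network trained per style (as in later follow-up work), since this per-image optimization loop is far too slow for streaming video.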

Read the source article in Wired.

Ensemble Machine Learning for AI Self-Driving Cars

By Lance Eliot, the AI Trends Insider

How do you learn something?

That’s the same question that we need to ask when trying to achieve Machine Learning (ML). In what way can we undertake “learning” for a computer and seek to “teach” the system to do things of an intelligent nature? That’s a holy grail for those in AI that are aiming to avoid having to program their way into intelligent behavior. Instead, the notion is to somehow get a computer to learn what to do, without needing to explicitly write out every step or knowledge aspect required.

Allow me a moment to share with you a story about the nature of learning.

Earlier in my career, I started out as a professor and was excited to teach classes for both undergraduate and graduate students. Those first few lectures were my chance to aid those students in learning about computer science and AI. Before each lecture I spent a lot of time preparing my lecture notes and was ready to fill the classroom whiteboard with all the key principles they’d need to know. Sure enough, I’d stride into the classroom and start writing on the board, and kept doing so until the bell rang to signal that the class session was finished.

After doing this for about a week or two, a student came to my office hours and asked if there was a textbook they could use to study from. I was taken aback, since I had purposely not chosen a textbook in order to save the students money. I figured that my copious notes on the board would be better than some stodgy textbook and would spare the students from having to spend a fortune on costly books. The student explained that though they welcomed my approach, they were the type of person that found it easier to learn by reading a book. Trying not to offend me, the student gingerly inquired as to whether my lecture notes could be augmented by a textbook.

I considered this suggestion and sure enough found a textbook that I thought would be pretty good to recommend, and at the next session of the class mentioned it to the students, indicating that it was optional and not mandatory for the class.

While I was walking across campus after a class session, another student came up to me and asked if there were any videos of my lectures. I was suspicious that the student wanted to skip coming to lecture, figuring she could just watch a video instead, but this student sincerely convinced me that watching a video allowed her to start and stop the lecture while trying to study the material after class sessions. She said that my fast pace during class didn’t allow time for her to really soak in the points, and that with a video she would be able to do so at a measured pace on her own time.

I considered this suggestion and provided to the class links to some videos that were pertinent to the lectures that I was giving.

Yet another student came to see me about another facet of my classes. For the undergrad lectures, I spoke the entire time and didn’t allow for any classroom discussion or interaction. This seemed sensible because the classes were large lecture halls with hundreds of students attending. I figured it would not be feasible to carry on a Socratic dialogue similar to what I was doing in the graduate-level courses, where I had maybe 15-20 students per class. I had even been told by some of the senior faculty that trying to engage undergrads in discussion was a waste of time anyway, since those newbie students were neophytes and it would be ineffective to allow any kind of Q&A with them.

Well, an undergrad student came to see me and asked if I was ever going to allow Q&A during my lectures. When I started to discuss this with the student, I inquired as to what kinds of questions he was thinking of asking. It turns out that we had a very vigorous back-and-forth on some meaty aspects of AI, and it made me realize that there were perhaps students in the lecture hall that could indeed engage in a hearty dialogue during class. At my next lecture, I opted to stop every twenty minutes, gauge the reaction from the students, and see if I could get a brief and useful interaction going with them. It worked, and I noticed that many of the students became much more interested in the lectures thanks to this added feature of allowing for Q&A (even for so-called “lowly” undergraduate students, which was how my fellow faculty seemed to think of them).

Why do I tell you this story about my initial days of being a professor?

I found out pretty quickly that using only one method or approach to learning is not necessarily very wise. My initial impetus to do fast-paced, all-spoken lectures was perhaps sufficient for some students, but not for all. Furthermore, even the students that were OK with that narrow, singular approach were likely to tap into other means of learning if I provided them. By augmenting my lectures with videos and textbooks, and by allowing for in-classroom discussion, I was providing a multitude of means to learn.

You’ll be happy to know that I learned that learning is best done by offering multiple ways to learn. Allow the learner to select which approach best fits them. When I say this, also keep in mind that the situation might determine which mode is best at that time. In other words, don’t assume that someone that prefers learning via in-person lecture is always going to find that to be the best learning method for them. They might switch to a preference for, say, video or textbook, depending upon the circumstance.

And, don’t assume that each learner will learn via only one method. Student A might find that using lectures and the textbook is their best fit. Student B might find lectures to be unsuitable for learning and prefer dialogue and videos. Each learner will have their own one-or-more learning approaches that work best for them, and this varies by the nature of the topic being learned.

I kept all of this in mind for the rest of my professorial days and always tried to provide multiple learning methods to the students, so they could choose the best fit for them.

Ensemble Learning Employs Multiple Methods, Approaches

The phrase sometimes used to refer to this notion of multiple learning methods is ensemble learning. When you consider the word “ensemble,” you tend to think of multiples of something, such as multiple musicians in an orchestra or multiple actors in a play. They each have their own role, and yet they also combine to create a whole.

Ensemble machine learning is the same kind of concept. Rather than using only one method or approach to “teach” a computer to do something, we might use multiple methods or approaches. These multiple methods or approaches are intended to ultimately work together so as to form a group. In other words, we don’t want the learning methods to be so disparate that they don’t end up working together. It’s like musicians that are supposed to play the same song together. The hope is that the multiple learning methods lead to a greater chance of having the learner learn, which in this case is the computer system as the learner.

At the Cybernetic AI Self-Driving Car Institute, we are using ensemble machine learning as part of our approach to developing AI for self-driving cars.

Allow me to further elaborate.

Suppose I was trying to get a computer system to learn some aspect of how to drive a car. One approach might be to use artificial neural networks (ANN). This is a very popular and relatively standardized way to “teach” the computer about certain driving task aspects. That’s just one approach, though. I might also try to use genetic algorithms (GA). I might also use support vector machines (SVM). And so on. These could be done in an ensemble manner, meaning that I’m trying to “teach” the same thing but using multiple learning techniques to do so.

For the use of genetic algorithms in AI self-driving cars see my article: https://aitrends.com/selfdrivingcars/genetic-algorithms-self-driving-cars-darwinism-optimization/

For my article about support vector machines in AI self-driving cars see: https://aitrends.com/selfdrivingcars/support-vector-machines-svm-ai-self-driving-cars/

For my articles about machine learning for AI self-driving cars see:

Benchmarks and machine learning: https://aitrends.com/ai-insider/machine-learning-benchmarks-and-ai-self-driving-cars/

Federated machine learning: https://aitrends.com/selfdrivingcars/federated-machine-learning-for-ai-self-driving-cars/

Explanation-based machine learning: https://aitrends.com/selfdrivingcars/explanation-ai-machine-learning-for-ai-self-driving-cars/

Deep reinforcement learning: https://aitrends.com/ai-insider/human-aided-training-deep-reinforcement-learning-ai-self-driving-cars/

Deep compression pruning in machine learning: https://aitrends.com/selfdrivingcars/deep-compression-pruning-machine-learning-ai-self-driving-cars-using-convolutional-neural-networks-cnn/

Simulations and machine learning: https://aitrends.com/selfdrivingcars/simulations-self-driving-cars-machine-learning-without-fear/

Training data and machine learning: https://aitrends.com/machine-learning/machine-learning-data-self-driving-cars-shared-proprietary/

Now, you don’t normally just toss together an ensemble. When you put together a musical band, you would be astute to pick musicians that have particular musical skills and play particular musical instruments. You’d want them to end up being complementary with each other. Sure, some might be duplicative, such as having more than one guitar player, but that could be because one guitarist will play lead guitar and the other perhaps bass.

The same can be said for doing ensemble machine learning. You’ll want to select machine learning approaches or methods that make sense when considered in totality as a group. What is the strength of each ML chosen for the ensemble? What is its weakness? By having multiple learning methods, hopefully you’ll be able to either find the “best” one for the given learning circumstance at hand, or combine them in a manner that offers a synergistic outcome beyond each of them performing individually.

So, you could select some number N of machine learning approaches, train them on some data, and then see which of them learned the best, based on some kind of metrics. After training, you might feed the MLs new data and see which does the best job. For example, suppose I’m trying to train toward being able to discern street signs. I feed a bunch of pictures of street signs into each ML of my ensemble. After they’ve each used their own respective learning approach, I then test them. I do so by feeding in new pictures of street signs and seeing which of them most consistently can identify a stop sign versus a speed limit sign.

See my article about street signs and AI self-driving cars: https://aitrends.com/selfdrivingcars/making-ai-sense-of-road-signs/

Out of my N machine learning approaches selected for this street sign learning task, suppose that the SVM turns out to be the “best,” based on my testing after the learning has occurred. I might then decide that for street sign interpretation I’m going to exclusively use SVM in my AI self-driving car system. This aspect of selecting a particular model out of a set of models is sometimes referred to as the “bucket of models” approach, wherein you have a bucket of models in the ensemble and you choose one out of them. Your selection is based on a kind of “bake-off” as to which is the better choice.
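
As a rough illustration of such a bake-off, here is a minimal sketch using scikit-learn (the library this article suggests trying later). A small MLP stands in for the ANN and, since scikit-learn has no off-the-shelf genetic algorithm estimator, a decision tree fills the third slot; the bundled digits dataset stands in for street sign images.

```python
# "Bucket of models" bake-off: train several learners on the same task and
# keep the one that scores best on held-out data. Learners and data are
# illustrative stand-ins.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

bucket = {
    "ANN (MLP)": MLPClassifier(max_iter=1000, random_state=0),
    "SVM": SVC(random_state=0),
    "Decision tree": DecisionTreeClassifier(random_state=0),
}

scores = {name: model.fit(X_train, y_train).score(X_test, y_test)
          for name, model in bucket.items()}
winner = max(scores, key=scores.get)
print(scores, "-> deploy:", winner)
```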

But suppose that I discover that among the N machine learning approaches, sometimes the SVM is the “best,” while at other times the GA is better. I don’t necessarily need to confine myself to choosing only one of the learning methods for the system. What I might do is opt to use both SVM and GA, and be aware beforehand of when each is preferred to come into play. This is akin to having the two guitarists in my musical band: each has their own strengths and weaknesses, so if I’m thoughtful about how to arrange my band when they play a concert, I’ll put each into the part of the music playing that seems best for their capabilities. Maybe one of them starts the song, and the other ends the song. Or however arranging them seems most suitable to their capabilities.

Thus, we might choose N machine learning approaches for our ensemble, train them, and then decide that some subset Q is chosen to become part of the actual system we are putting together. Q might be 1, in that maybe there’s only one of the machine learning approaches that seemed appropriate to move forward with, or Q might be 2, or 3, and so on up to N. If we select more than one, the question then arises as to when and how to use the Q chosen machine learning approaches.

In some cases, you might use each separately, such as maybe machine learning approach Q1 is good at detecting stop signs, while Q2 is good at detecting speed limit signs. Therefore, you put Q1 and Q2 into the real system and when it is working you are going to rely upon Q1 for stop sign detection and Q2 for speed limit sign detection.

In other cases, you might decide to combine the machine learning approaches that have been successful enough to get into the set Q. I might decide that whenever a street sign is being analyzed, I’ll see what Q1 has to indicate about it, and what Q2 has to indicate about it. If they both agree that it is a stop sign, I’ll be satisfied that it’s likely a stop sign, especially if Q1 is very sure of it. If they both agree that it is a speed limit sign, especially if Q2 is very sure of it, I’ll be comfortable assuming that it is a speed limit sign.

Various Ways to Combine the Q Sets

There are various ways you might combine the Q’s. You could simply consider them all equal in terms of their voting power, which is generally called “bagging,” or bootstrap aggregation. Or, you could consider them to be unequal in their voting power. In this case, we’re going with the idea that Q1 is better at stop sign detection, so I’ll add a weighting to its results: if its interpretation is a stop sign, I’ll give it a lot of weight, while if Q2 detects a stop sign I’ll give it a lower weighting, because I already know beforehand that it’s not so good at stop sign detection.
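
Here is a minimal sketch of both voting schemes using scikit-learn’s VotingClassifier; the learners and the weights are illustrative assumptions.

```python
# Equal-vote aggregation versus weighted voting across the chosen Q learners.
from sklearn.datasets import load_digits
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
q1 = ("svm", SVC(probability=True, random_state=0))     # trusted more
q2 = ("tree", DecisionTreeClassifier(random_state=0))   # trusted less

equal_votes = VotingClassifier([q1, q2], voting="soft")
weighted_votes = VotingClassifier([q1, q2], voting="soft", weights=[3, 1])

for name, clf in [("equal", equal_votes), ("weighted", weighted_votes)]:
    print(name, cross_val_score(clf, X, y, cv=3).mean())
```

Note that scikit-learn’s BaggingClassifier covers the fuller sense of bagging, in which each copy of a learner is also trained on its own bootstrap resample of the data.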

These machine learning approaches chosen for the ensemble are often referred to as individual learners. You can have any number N of these individual learners; it all depends on what you are trying to achieve and how many machine learning approaches you want to consider for the matter at hand. Some also refer to these individual learners as base learners. A base or individual learner can be whatever machine learning approach you know and are comfortable with, and that matches the learning task at hand; as mentioned earlier, it can be an ANN, SVM, GA, decision tree, etc.

Some believe that to make the learning task fair, you should provide essentially the same training data to the machine learning approaches you’ve chosen for the matter at hand. Thus, I might select one sample of training data and feed it into each of the N machine learning approaches. I then see how each of those machine learning approaches did on the sample data. For example, I select a thousand street sign images and feed them into my N machine learning approaches, which in this case, say, are three: ANN, SVM, and GA.

Or, instead, I might take a series of samples of the training data. Let’s refer to one such sample as S1, consisting of a thousand images randomly chosen from a population of 50,000 images, and feed the sample S1 into machine learning approach Q1. I might then select another sample of training data, let’s call it S2, consisting of another randomly selected set of a thousand images, and feed it into machine learning approach Q2. And so on for each of the N machine learning approaches that I’ve selected.

I could then see how each of the machine learning approaches did on their respective sample data. I might then opt to keep all of the machine learning approaches for my actual system, or I might selectively choose which ones will go into my actual system. And, as mentioned earlier, if I have selected multiple machine learning approaches for the actual system then I’ll want to figure out how to possibly combine together their results.
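
Here is a minimal sketch of the per-learner sampling regime just described; the sample sizes and learners are illustrative, with the digits dataset again standing in for a large pool of street sign images.

```python
# Draw a separate random sample S1, S2, ... for each learner, then train.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
rng = np.random.default_rng(0)
learners = [SVC(), DecisionTreeClassifier()]

for i, model in enumerate(learners, start=1):
    idx = rng.choice(len(X), size=1000, replace=False)  # sample S_i
    model.fit(X[idx], y[idx])
    print(f"S{i}: trained {type(model).__name__} on {len(idx)} examples")
```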

You can further advance the ensemble learning technique by adding learning upon learning. Suppose I have a base set of individual learners. I might feed their results into a second level of machine learning approaches that act as meta-learners. In a sense, you can use the first level to do some initial screening and scanning, and then potentially have a second level that aims at further refining what the first level found. For example, suppose my first level identified that a street sign is a speed limit sign, but the first level isn’t capable of determining what the speed limit numbers are. I might feed the results into a second level that is adept at ascertaining the numbers on the speed limit sign, and thereby detect the actual speed limit posted on the sign.
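
Scikit-learn’s StackingClassifier captures this two-level idea directly: the base learners make first-pass predictions, and a second-level meta-learner learns how to combine and refine them. A minimal sketch, with illustrative learner choices:

```python
# Two-level ensemble: base learners feed a meta-learner (stacking).
from sklearn.datasets import load_digits
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
stack = StackingClassifier(
    estimators=[("svm", SVC()), ("tree", DecisionTreeClassifier())],
    final_estimator=LogisticRegression(max_iter=1000),  # the meta-learner
)
print("stacked accuracy:", cross_val_score(stack, X, y, cv=3).mean())
```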

The ensemble approach to machine learning allows for a lot of flexibility in how you undertake it. There’s no particular standardized way in which you are supposed to do ensemble machine learning. It’s an area still evolving as to what works best and how to most effectively and efficiently use it.

Some might be tempted to throw every machine learning approach into an ensemble in the blind hope that it will then showcase which is the best for the matter at hand. This is not as easy as it seems. You need to know what each machine learning approach does, and there’s effort involved in setting it up and giving it a fair chance. In essence, there are costs to undertaking this, and you shouldn’t go about it in a scattergun style.

For any particular matter, there are going to be so-called weak learners and strong learners. Some of the machine learning approaches are very good in some situations and quite poor in others. You also need to be thinking about the generalizability of the machine learning approaches. You could be fooled when feeding sample data into the machine learning approaches: one of them may look really good, but it turns out it has overfitted to the sample data. It might not then do you much good once you start feeding new data into the mix.

Another aspect is the value of diversity. If you have no diversity, such as only one machine learning approach in use, there are likely to be situations wherein it isn’t as good as some other machine learning approach would have been, so you should consider having diversity. By having more than one machine learning approach in your mix, you gain diversity, which will hopefully pay off in varying circumstances. As with anything else, though, if you have too many machine learning approaches, it can lead to muddled results, and you might not be able to know which one to believe for a given result.

Keep in mind that any ensemble you put together will require computational effort, in essence computing power, not only to do the training but, more importantly, when receiving new data and responding accordingly. Thus, if you opt to have a slew of machine learning approaches become part of your final set Q, and you expect them to run in real-time on-board an AI self-driving car, this is something you need to carefully assess. The amount of memory and processing power consumed might be prohibitive. There’s a big difference between using an ensemble for a research-oriented task, wherein you might not have any particular time constraints, versus using it in an AI self-driving car that has severe time constraints and limits on available computational processing.

For those of you familiar with Python, consider trying the Python-oriented scikit-learn machine learning library (as in the sketches above) and experimenting with its various ensemble facilities to get an understanding of how to use an ensemble learning approach.

If we’re going to have true AI systems, and especially AI self-driving cars, the odds are that we’ll need to deploy multiple machine learning models. Trying to only program our way directly to full AI is unlikely to be feasible. As Benjamin Franklin famously said: “Tell me and I forget. Teach me and I remember. Involve me and I learn.” Using an ensemble learning approach is, to date, a vital technique to get us toward that involve-me-and-learn goal. We might still need even better machine learning models, but the chances are that no matter what we discover for better MLs, we’ll end up needing to combine them into an ensemble. That’s how the music will come out sounding robust and fulfilling for achieving ultimate AI.

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.

AI Researchers Have a Plan to Save Coral Reefs

Climate change has been bleaching coral reefs, decimating the local marine species that call them home, since at least the first major observations were recorded in the Caribbean in 1980. Thankfully, a new A.I. cataloguing effort designed to identify the geographic regions where coral is still thriving aims to reverse the trend, saving some of the world’s most dense and varied aquatic ecosystems from all-but-certain extinction.

There are numerous reasons why we need to care about saving coral reefs, from the ethical to the economic. In addition to housing about a quarter of marine species, these reefs provide $375 billion in revenue to the world economy, according to the Guardian, and food security to half a billion people. Without them, researchers say, countless species and the entire ocean fishing industry that depends on them would simply evaporate.

The problem is that there’s only so much money and so much time to devote to mitigating the damage already in progress, while the 172 nations who ratified the United Nations Framework Convention on Climate Change “Paris Agreement” race to cut back on their carbon emissions. But an international consortium of researchers say they hope that artificial intelligence can fill in the gaps, and help the reefs get the attention and resources they need to survive.

The solution involved a team of researchers deploying underwater scooters with 360-degree cameras, photographing 1,487 square miles of reef off the coast of Sulawesi Island in Indonesia. (Sulawesi, nestled in the middle of the Coral Triangle, is surrounded by the highest concentration of marine biodiversity on the planet.)

Those images were then fed into a form of deep learning A.I. that had been trained on 400 to 600 images to identify types of coral and other reef invertebrates, in order to assess the region’s ecological health.

“The use of A.I. to rapidly analyze photographs of coral has vastly improved the efficiency of what we do,” Emma Kennedy, PhD., a benthic marine ecologist at the University of Queensland, said in a statement. “What would take a coral reef scientist 10 to 15 minutes now takes the machine a few seconds.”

“The machine learns in a similar way to a human brain, weighing up lots of minute decisions about what it’s looking at until it builds up a picture and is confident about making an identification.”

Kennedy and other researchers have also been using a custom, iterative clustering algorithm to identify coral reefs across the world that seem most likely to benefit from conservation resources. Their formula is based on 30 metrics known to impact coral reef ecology, broadly divided into categories like historical activity, thermal conditions, cyclone wave damage, and coral larvae behavior. A map of these prime sites for future coral conservation was published in Conservation Letters, a journal of the Society for Conservation Biology, late this July.
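
The study’s exact algorithm and 30 metrics aren’t spelled out here, so the following is only a generic stand-in sketch: scikit-learn’s KMeans applied to hypothetical site-by-metric data, with four illustrative columns named after the metric categories mentioned above.

```python
# Generic illustration of clustering reef sites by ecological metrics; the
# data are random placeholders, not the study's real 30 metrics.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Rows = candidate reef sites; columns = example metric categories:
# historical activity, thermal conditions, cyclone damage, larvae behavior.
sites = rng.random((500, 4))

X = StandardScaler().fit_transform(sites)  # put metrics on a common scale
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

# Sites in the same cluster share similar profiles; planners could then rank
# clusters by how much they would benefit from conservation resources.
print("sites per cluster:", np.bincount(labels))
```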

The research was made possible by generous donations from the Australian government, the Nature Conservancy, Bloomberg Philanthropies, the Tiffany & Co. Foundation, and the Paul G. Allen Family Foundation, whose namesake’s pleasure barge has a notable record in the field of coral reef depletion.

Kennedy and her team hope that these A.I. techniques will be further refined to help manage coral reefs at the more local level, as well as at several ecologically significant sites, including the Meso-American Barrier Reef and the corals in Hawaii, both of which had to be excluded from their study.

Local versions of their global study, they believe, would benefit from data that is not uniformly available for reefs internationally: information about ocean chemistry, the ‘adaptive capacity’ of local reefs to withstand climate change or other stress on their systems, or the particulars of the local economic dependence on these coral reefs.

Read the source article at Inverse.com.

Executive Interview: Dr. Russell Greiner, Professor CS and founding Scientific Director of the Alberta Machine Intelligence Institute

After earning a PhD from Stanford, Russ Greiner worked in both academic and industrial research before settling at the University of Alberta, where he is now a Professor in Computing Science and the founding Scientific Director of the Alberta Innovates Centre for Machine Learning (now the Alberta Machine Intelligence Institute), which won the ASTech Award for “Outstanding Leadership in Technology” in 2006. He has been Program Chair for the 2004 “Int’l Conf. on Machine Learning”, Conference Chair for the 2006 “Int’l Conf. on Machine Learning”, Editor-in-Chief for “Computational Intelligence”, and is serving on the editorial boards of a number of other journals. He was elected a Fellow of the AAAI (Association for the Advancement of Artificial Intelligence) in 2007, and was awarded a McCalla Professorship in 2005-06 and a Killam Annual Professorship in 2007. He has published over 200 refereed papers and patents, most in the areas of machine learning and knowledge representation, including 4 that have been awarded Best Paper prizes. The main foci of his current work are (1) bioinformatics and medical informatics; (2) learning and using effective probabilistic models; and (3) formal foundations of learnability. He recently spoke with AI Trends.

Dr. Russell Greiner, Professor in Computing Science and founding Scientific Director of the Alberta Machine Intelligence Institute

Q: Who do you collaborate with in your work?

I work with many very talented medical researchers and clinicians, on projects that range from psychiatric disorders, to stroke diagnosis, to diabetes management, to transplantation, to oncology, everything from breast cancer to brain tumors. And others — I get many cold calls from yet other researchers who have heard about this “Artificial Intelligence” field and want to explore whether this technology can help them on their task.

Q: How do you see AI playing a role in the fields of oncology, metabolic disease, and neuroscience?

There’s a lot of excitement right now for machine learning (a subfield of Artificial Intelligence) in general, and especially in medicine, largely due to its many recent successes.  These wins are partly because we now have large data sets, including lots of patients — in some cases, thousands, or even millions of individuals, each described using clinical features, and perhaps genomics and metabolomics data, or even neurological information and imaging data. As these are historical patients, we know which of these patients did well with a specific treatment and which ones did not.  

I’m very interested in applying supervised machine learning techniques to find patterns in such datasets, to produce models that can make accurate predictions about future patients. This is very general — this approach can produce models that can be used to diagnose, or screen novel subjects, or to identify the best treatment — across a wide range of diseases.
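
To make that concrete, here is a minimal sketch of the supervised-learning workflow being described, using scikit-learn on synthetic data. The features, labels, and numbers are invented for illustration and are not drawn from Dr. Greiner’s projects.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    # Synthetic "historical patients": 1,000 individuals, 20 clinical/omics features.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 20))
    # Outcome label: 1 = did well with a specific treatment, 0 = did not.
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 0).astype(int)

    # Learn a model from historical cases; hold out some patients to estimate
    # how well it will predict for future patients.
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    print("held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))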

It’s important to contrast this approach with other ways to analyze such data sets. The field of biostatistics includes many interesting techniques to find “biomarkers” — single features that are correlated with the outcomes — as a way to try to understand the etiology, trying to find the causes of the disease. This is very interesting, very relevant, very useful. But it does not directly lead to models that can decide how to treat Mr. Smith when he comes in with his particular symptoms.  

At a high level: I’m exploring ways to find personalized treatments — identifying the treatment that is best for each individual. These treatment decisions are based on evidence-based models, as they are learned from historical cases — that is, where there is evidence that the model will work effectively.

In more detail, our team has found patterns in neurological imaging, such as functional MRI scans, to determine who has a psychiatric disorder — here, for ADHD, or autism, or schizophrenia, or depression, or Alzheimer’s disease.

Another body of work has looked at how brain tumors will grow by looking at brain scans of people, using standard structural MRI imaging.  Other projects learn screening models that determine which people have adenoma (from urine metabolites), or models that predict which liver patients will most benefit from a liver transplant (from clinical features), or which cancer patients will have cachexia, etc.

Q: How can machine learning be useful in the field of Metabolomics?

Machine learning can be very useful here. Metabolomics has relied on technologies like mass spec and NMR spectroscopy to identify and quantify small molecules in a biofluid (like blood or urine); this previously was done in a very labor-intensive way, by skilled spectroscopists.

My collaborator, Dr. Dave Wishart (here at the University of Alberta), and some of our students have designed tools to automate this process — tools that can effectively find the molecules present in, say, blood. This means metabolic profiling is now high-throughput and automated, making it relatively easy to produce datasets that include the metabolic profiles from a set of patients, along with their outcomes. Machine learning tools can then use this labeled dataset to produce models for predicting who has a disease, for screening or for diagnosis. This has led to models that can detect cachexia (muscle wasting) and adenoma (with a local company, MTI).

Q: Can you go in to some detail on the work you have done designing algorithms to predict patient-specific survival times?

This is my current passion; I’m very excited about it.

The challenge is building models that can predict the time until an event will happen. For example, given a description of a patient with some specific disease, predict the time until his death (that is, how long he will live). This seems very similar to the task of regression, which also tries to predict a real value for each instance: for example, predicting the price of a house based on its location, the number of rooms, and their sizes; or, given a description of a kidney patient (age, height, BMI, urine metabolic profile, etc.), predicting that patient’s glomerular filtration rate a day later.

Survival prediction looks very similar because both try to predict a number for each instance. For example, I describe a patient by his age, gender, height, and weight, and his genetic information, and metabolic information, and now I want to predict how long until his death — which is a real number.  

The survival analysis task is more challenging due to “censoring”. To explain, consider a 5-year study that began in 1990. Over those five years, many patients passed away, including some who lived for three years, others for 2.7 years, or 4.9 years. But many patients didn’t pass away during those 5 years, which is a good thing; I’m delighted these people haven’t died! It does, however, make the analysis much harder: for the many patients alive at the end of the study, we know only that they lived at least 5 years. Whether they lived 5 years and a day, or another 30 years, we don’t know and never will know.
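
For readers who want to see how censored data is handled in practice, here is a minimal sketch using the open-source lifelines library; the durations and event flags below are invented to mirror the 5-year study described above.

    from lifelines import KaplanMeierFitter

    # Each patient contributes a duration plus a flag saying whether the event
    # (death) was actually observed; 0 means censored at the end of the study.
    durations = [3.0, 2.7, 4.9, 5.0, 5.0, 5.0]   # years observed
    observed  = [1,   1,   1,   0,   0,   0]     # last three still alive at 5 years

    kmf = KaplanMeierFitter()
    kmf.fit(durations, event_observed=observed)
    print(kmf.survival_function_)  # estimated probability of surviving past each time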

This makes the problem completely different from the standard regression tasks. The tools that work for predicting glomerular filtration rate or for predicting the price of a house just don’t apply here. You have to find other techniques. Fortunately, the field of survival analysis provides many relevant tools. Some tools predict something called “risk”, which gives a number to each patient, with the understanding that patients with higher risks are predicted to die before those with lower risk. So if Mr A’s risk for cancer is 7.2 and Mr B’s is 6.3 — that is, Mr A has a higher risk — this model predicts that Mr A will die of cancer before Mr B will. But does this mean that Mr A will die 3 days before Mr B, or 10 years before? The risk score doesn’t say.

Let me give a slightly different way to use this. Recall that Mr A’s risk of dying of cancer is 7.2.  There are many websites that can do “what if” analysis: perhaps if he stops smoking, his risk reduces to 5.1.  This is better, but by how much? Will this add 2 more months to his life, or 20 years? Is this change worth the challenge of not smoking?

Other survival analysis tools predict probabilities: perhaps Ms C’s chance of 5-year disease-free survival is currently 65%, but if she changes her diet in a certain way, this chance goes up to 78%. Of course, she wants to increase her five-year survival. But again, this is not as tangible as learning, “If I continue my current lifestyle then this tool predicts I will develop cancer in 12 years, but if I stop smoking, it goes from 12 to 30 years.” I think this is much more tangible, and hence will be more effective in motivating people to change their lifestyle, than changing their risk score or their 5-year survival probability.

So my team and I have provided a tool that does exactly that, by giving each person his or her individualized survival curve, which shows that person’s expected time to event. I think that will help motivate people to change their lifestyle. In addition, my colleagues and I have applied this to a liver transplant dataset, to produce a model that can determine which patients with end-stage liver failure will benefit the most from a new liver, and so should be added to the waitlist.
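
As a rough illustration of what an individualized survival curve looks like in code (a generic Cox proportional-hazards sketch, not Dr. Greiner’s specific algorithm), the lifelines library can produce a per-patient survival function from its bundled sample dataset:

    from lifelines import CoxPHFitter
    from lifelines.datasets import load_rossi

    df = load_rossi()  # sample data: durations in 'week', event flags in 'arrest'
    cph = CoxPHFitter()
    cph.fit(df, duration_col='week', event_col='arrest')

    # Survival curve for one individual, given that person's covariates.
    one_person = df.drop(columns=['week', 'arrest']).iloc[[0]]
    print(cph.predict_survival_function(one_person).head())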

Those examples all deal with time to death, but in general, survival analysis can deal with time to any event. So it can be used to model a patient’s expected time to readmission. Here, we can seek a model that, given a description of a patient being discharged from a hospital, can predict when that patient will be readmitted — e.g., whether she will return to the hospital, for the same problem, soon or not.

Imagine this tool predicted that, given Ms Jones’ current status, if she leaves the hospital today, she will return within a week.   But if we keep her one more day and give some specific medications, we then predict her readmission time is 3 years. Here, it’s probably better to keep her that one more day and give one more medication. It will help the patient, and will also reduce costs.

Q: What do you see are the challenges ahead for the healthcare space in adopting machine learning and AI?

There are two questions: what machine learning can do effectively, and what it should do.

The second involves a wide range of topics, including social, political, and legal issues. Can any diagnostician — human or machine — be perfect? If not, what are the tradeoffs? How do we verify the quality of a computer’s predictions? If it makes a mistake, who is accountable? The learning system? Its designer? The data on which it was trained? Under what conditions should a learned system be accepted, and eventually incorporated into the standard of care? Does the program need to be “convincing”, in the sense of being able to explain its reasoning — that is, explain why it asked for some specific bit of information, or why it reached a particular conclusion? While I do think about these topics, I am not an expert here.

My interest is more in figuring out what these systems can do — how accurate and comprehensive can they be? This requires getting bigger data sets — which is happening as we speak. And defining the tasks precisely — is the goal to produce a treatment policy that works in Alberta, or one that works for any patient, anywhere in the world? This helps determine the diversity of training data that is required, as well as the number of instances. (Hint: building an Alberta-only model is much easier than a universal one.) A related issue is defining exactly what the learned tool should do. In general, the learned performance system will return a “label” for each patient — which might be a diagnosis (e.g., does the patient have ADHD) or a specific treatment (e.g., give an SSRI [that is, a selective serotonin reuptake inhibitor]). Many clinicians assume the goal is a tool that does what they do. That would be great if there were an objective answer and the doctor were perfect, but this is rarely the case. First, in many situations, there is significant disagreement between clinicians (e.g., some doctors may think that a specific patient has ADHD, while others may disagree) — if so, which clinician should the tool attempt to emulate? It would be better if the label instead were some objective outcome — such as “3-year disease-free survival” or “progression within 1 year” (where there is an objective measure for “progression”, etc.).

This can get more complicated when the label is the best treatment — for example, given a description of the patient, determine whether that patient should get drug-A or drug-B. (That is, the task is prognostic, not diagnostic.)  While it is relatively easy to ask the clinician what she would do, for each patient, recall that clinicians may have different treatment preferences… and those preferences might not lead to the best outcome. This is why we advocate, instead, first defining what “best” means, by having a well-defined objective score for evaluating a patient’s status, post treatment.  We then define the goal of the learned performance system as finding the treatment, for each patient, that optimizes that score.

One issue here is articulating this difference between “doing what I do” and optimizing an objective function. A follow-up challenge is determining this objective scoring function, as it may involve trading off, say, treatment efficacy against side-effects. Fortunately, clinicians are very smart, and typically get it! We are making inroads.

Of course, after understanding and defining this objective scoring function, there are other challenges — including collecting data from a sufficient number of patients and possibly controls, from the appropriate distributions, then building a model from that data, and validating it, perhaps on another dataset. Fortunately, there are an increasing number of available datasets, covering a wide variety of diseases, with subjects (cases and controls) described by many different types of features (clinical, omics, imaging, and so on). Finally comes the standard machine learning challenge of producing a model from that labeled data. Here, too, the future is bright: there are faster machines, and more importantly, I have many brilliant colleagues developing ingenious new algorithms to deal with many different types of information.

All told, this is a great time to be in this important field!  I’m excited to be a part of it.

Thank you Dr. Greiner!

Learn more at the Alberta Machine Intelligence Institute.

Affordability of AI Self-Driving Cars

By Lance Eliot, the AI Trends Insider

They’ll cost too much. They will only be for the elite. Having one will be a sign of prestige. It’s a rich person’s toy. The “have nots” will not be able to get one. People are going to rise-up in resentment that the general population can’t get one. Maybe the government should step in and control the pricing. Refuse to get into one as a form of protest. Ban them because if the rest of us cannot have one, nobody should.

What’s this all about?

It’s some of the comments that are already being voiced about the potential affordability (or lack thereof) of AI self-driving cars.

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars, and we get asked quite frequently about whether AI self-driving cars will be affordable or not. I thought you might find of interest my answer (read on).

When people clamor about the potentially sky-high cost of AI self-driving cars, you might at first wonder if they are talking about flying cars rather than AI self-driving cars. I mention this because there are some that say flying cars will be very pricey, and I think we all pretty much accept that notion. We know that jet planes are pricey, so why shouldn’t a flying car be pricey? But for an earth-bound car that rolls on the ground, and can neither fly in the air nor submerge like a submarine, we openly question how much such a seemingly “ordinary” car should cost.

It is said that a Rolls-Royce Sweptail is priced upwards of $13 million. Have there been mass protests about this? Are we upset that only a few that are wealthy can afford such a car? Not really. It is pretty much taken for granted that there are cars that are indeed very expensive. Of course, we might all consider it rather foolish of those that are willing to pump hard-earned millions of dollars into such a car. We might think them pretentious for doing so. Or, we might envy them that they have the means to buy such a car. Either way, the Rolls-Royce and other such top-end cars are over-the-top pricey, and most people don’t especially complain or argue about it.

Part of the reason that people seem to object to the possible high price tag on an AI self-driving car is that the AI self-driving car is being touted as a means to benefit society. AI self-driving cars are ultimately, hopefully, going to cut down on the number of annual driving-related deaths. AI self-driving cars will provide mobility to those that need it and cannot otherwise achieve it, such as the poor and the elderly. If an AI self-driving car has such tremendous societal benefits, then we as a society want to ensure that society as a whole gets those benefits, across the board. It’s a car of the people, for the people.

What kind of pricing then, for an AI self-driving car, are people apparently thinking of? Some who have no clue what the price might be leave the price tag as an unknown, which makes it easier to get into a lather about how expensive it will be. It could be a zillion dollars. Or more. That, though, seems like a rather vacuous way to discuss the topic. We’d be better off tossing around some actual numbers and then seeing whether buying an AI self-driving car would be prohibitive or not.

The average transaction price (ATP) for a traditional passenger car in the United States for this year is so far around $36,000 according to various published statistics. That’s the national average.

When AI self-driving car efforts first got started a few years ago, the cost of the added sensors and other specialized gear for achieving self-driving capabilities was estimated at somewhere around $100,000. Since then, the price of those specialized self-driving components has steadily come down. As with most high-tech, the cost starts “high” and then, as the technology is perfected and the production costs are wrung out of the process, the price heads downward. In any case, some at the time were saying that an AI self-driving car might be around $150,000 to $200,000, though that’s a wild guess and we don’t yet know what the real pricing will be. Will it be a million dollars for an AI self-driving car? That doesn’t seem to be in anyone’s estimates at this time.

Of course, any time a new car comes out, particularly one that has new innovations, there is usually a premium price placed on the car. It’s a novelty item at first. The number of such cars is usually scarce initially, and so the usual laws of supply and demand help to punch up the price. If the car can eventually be mass-produced, the price gradually starts to come down as more of those cars enter the marketplace. If there are competitors that provide equivalent alternatives, the competition of the marketplace tends to drive down the price. You can refer to the Tesla models as prime examples of this kind of marketplace phenomenon.

Will True AI Self-Driving Cars Be Within Financial Reach?

Suppose indeed that the first true AI self-driving cars are priced in the low hundreds of thousands of dollars. Does that mean that those cars are out of the reach of the everyday person?

Before we jump into the answer to that question, let’s clarify what I mean by true AI self-driving cars. There are levels of self-driving cars. The topmost level is Level 5. A Level 5 AI self-driving car is able to be driven by the AI without any human intervention. In fact, there is no human driver needed in a Level 5 car. So much so that there are unlikely to be any driving controls in a Level 5 self-driving car for a human to operate, even if the human wanted to try and drive it. In theory, the AI of the Level 5 self-driving car is supposed to be able to drive the car as a human could.

Let’s therefore not consider in this affordability discussion the AI self-driving cars that are less than a Level 5. A less-than-Level-5 self-driving car is a lot like a conventional car, though augmented in a manner that allows for co-sharing of the driving task. This means that there must be a human driver in a car that is classified as less than Level 5. In spite of having whatever kind of AI in such a self-driving car, the driving task is still considered the responsibility of the human driver. Even if the human driver opts to take their eyes off the road, an easy trap to fall into in a less-than-Level-5 self-driving car, and even if the AI were to suddenly toss control back to that human driver, the human driver is nonetheless considered responsible for the driving. I’ve warned many times about the dangers this creates in the driving task.

For my article about the levels of AI self-driving cars, see: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

For my framework about AI self-driving cars, see: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the dangers of co-shared driving and AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

We’ll focus herein on the true Level 5 self-driving car. This is the self-driving car that has the full bells and whistles and really is a self-driving car. No human driver needed. This is the one that those referring to a driving utopia actually mean to bring up. The less-than-Level-5 cars aren’t quite so exciting, though they may well be important, and perhaps stepping stones to Level 5.

Now, let’s get back to the question at hand – will a true Level 5 AI self-driving car be affordable?

We can first quibble about the word “affordable” in this context. If by affordability we mean that it should carry around the same price tag as the $36,000 ATP of today’s average passenger car in the United States, I’d say that we aren’t going to see Level 5 AI self-driving cars at that price until long after they are quite prevalent. In other words, out of the gate, it isn’t going to be that kind of price (it will be much higher). After years of more and more AI self-driving cars coming into the marketplace, sure, it could possibly come down to that range. Keep in mind that today there are around 200 million conventional cars in the United States, and presumably over time those cars will get replaced by AI self-driving cars. It won’t happen overnight. It will be a gradual wind-down of the old ways, and a gradual wind-up of the new ways.

Imagine that the first sets of AI self-driving cars will cost in the neighborhood of several hundreds of thousands of dollars. Obviously, that price is outside the range of the average person. No argument there.

But that’s only if you look at the problem or question in just one simple way, namely purchasing the car for purely personal use. That’s the mental trap that most fall into. They perceive the AI self-driving car as a personal car and nothing more. I’d suggest you reconsider that notion.

It is generally predicted and accepted that AI self-driving cars are likely to be running 24×7. You can have your self-driving car going all the time, pretty much. Today’s conventional cars are only used around 5% of their available time. This makes sense because you drive your personal car to work, you park it, you work all day, you drive home. Over ninety percent of the day it is sitting and not doing anything other than being a paperweight, if you will.

For AI self-driving cars, you have an electronic chauffeur that will drive the car whenever you want. But are you actually going to want to be going in your AI self-driving car all day long? I doubt it. So, you will have extra available driving capacity that is unused. You could just chalk it up and say that’s the way the ball bounces. More than likely, you would realize that you could turn that idle time into personal revenue.

See my article about the non-stop use of AI self-driving cars: https://aitrends.com/selfdrivingcars/non-stop-ai-self-driving-cars-truths-and-consequences/

Here’s what is most likely to actually happen.

We all generally agree that the advent of the AI self-driving car will spur the ridesharing industry. In fact, some say that the AI self-driving car will shift our society into a ridesharing-as-an-economy model. This is why Uber and Lyft and the other existing ridesharing firms are so frantic about AI self-driving cars. Right now, ridesharing firms are able to justify what they do because they connect human drivers who have cars with those that need a lift. If you eliminate the human driver from the equation, what then is the ridesharing firm doing? That’s the scary proposition for the ridesharing firms.

This all implies that ridesharing-as-a-service will now be possible for the masses. It doesn’t matter if you have a full-time job and cannot spare the time to be a ridesharing driver, because instead you just let your AI self-driving car be your ridesharing service. You mainly need to get connected up with people that need a ridesharing lift. How will that occur? Uber and Lyft are hopeful it will occur via their platforms, but it could instead be, say, Facebook, where billions of people already congregate. A big shakeout is coming.

Meanwhile, you buy yourself an AI self-driving car, and you use it for some portion of the time, and the rest of the time you have it earning some extra dough as a ridesharing vehicle. Nice!

This then ties into the affordability question posed earlier.

If you are going to have revenue generated by your AI self-driving car, you can then look at it as a small business of sorts. You then should consider your AI self-driving car as an investment. You are making an investment in an asset that you can put to work and earn revenue. As such, you should then consider what the revenue might be and what the cost might be to achieve that revenue.

Self-Driving Car Revenue Potential Opens Door to Affordability

This opens the door toward being able to afford an otherwise seemingly unaffordable car. Even if the AI self-driving car costs you, say, several hundred thousand dollars (which seems doubtful as a price tag, but let’s use it as an example), you can weigh against that the revenue you can earn from that car.

For tax purposes (depending on how taxes will be regulated in the era of AI self-driving cars), you can usually deduct car loan interest when using a car for business purposes (the deduction applies only to the portion used for business purposes). So, suppose you use your AI self-driving car for personal purposes 15% of the time, and the other 85% of the time you use it for your ridesharing business; you could then normally deduct the car loan interest for the 85% portion.
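
As a back-of-the-envelope illustration of how the revenue and the deduction might pencil out, here is a small sketch; every figure in it is hypothetical, not a price estimate from this article or any market source.

    # All numbers below are made up purely for illustration.
    purchase_price       = 200_000   # hypothetical Level 5 price tag ($)
    annual_loan_interest = 12_000    # hypothetical car loan interest ($/yr)
    annual_costs         = 30_000    # hypothetical maintenance, insurance, fees ($/yr)
    net_fare_per_hour    = 25        # hypothetical net ridesharing revenue ($/hr)
    business_share       = 0.85      # 85% ridesharing use, 15% personal use

    # Real-world utilization would surely be lower; tweak the inputs to explore.
    annual_revenue = net_fare_per_hour * (business_share * 24) * 365
    payback_years  = purchase_price / (annual_revenue - annual_costs)
    deductible     = annual_loan_interest * business_share

    print(f"estimated payback: {payback_years:.1f} years")
    print(f"deductible loan interest: ${deductible:,.0f}")  # the 85% portion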

You can also take other deductions for tax purposes, sometimes using the federal standard mileage rate, or alternatively actual vehicle expenses, including:

  •         Depreciation
  •         Licenses
  •         Gas and oil
  •         Tolls
  •         Lease payments
  •         Insurance
  •         Garage rent
  •         Parking fees
  •         Registration fees
  •         Repairs
  •         Tires

Therefore, you need to rethink the cost of an AI self-driving car. It becomes a potential money maker and you need to consider the cost to purchase the car, the cost of ongoing maintenance and support, the cost of special taxes, the cost of undertaking the ridesharing services, and other such associated costs.

These costs are weighed in comparison to the potential revenue. You might at first only be thinking of the revenue derived from the riders that use your AI self-driving car. You might also consider that there is the opportunity for in-car entertainment that you could possibly charge a fee for (access to streaming movies, etc.), perhaps in-car provided food (you might stock the self-driving car with a small refrigerator and have other food in it), etc. You can also possibly use your AI self-driving car for doing advertising and get money from advertisers based on how many eyeballs see their ads while people are going around in your AI self-driving car.

And, this all then becomes part of your budding small business. You get various tax breaks. You might also then expand your business into other areas of related operations or even beyond AI self-driving cars entirely.

One related tie-in might be with the companies that are providing ridesharing scooters and bicycles. Suppose someone gets into your AI self-driving car and they indicate that when they reach their destination, they’d like to have a bicycle to rent. Your ridesharing service might have an arrangement with a firm that does those kinds of ridesharing services, and you get a piece of the action accordingly.

Will the average person be ready to be their own AI self-driving car mogul?

Likely not. But fear not, a cottage industry will quickly arise to support the emergence of small businesses that are doing ridesharing with AI self-driving cars. I’ll bet there will be seminars on how to set up your own corporation for these purposes. How to keep your ridesharing AI self-driving car always on the go. Accountants will promote their tax services to the ridesharing start-ups. There will be auto maintenance and repair shops that will seek to be your primary go-to for keeping your ridesharing money maker going. And so on.

In that sense, there will be a boom in ridesharing-as-a-business businesses that help new entrepreneurs tap into the ridesharing-as-a-service economy. “Make millions off your AI self-driving car,” the late-night TV infomercials will promise. You’ll see ads on YouTube of a smiling person who says that until they got their AI self-driving car they were stuck in a dead-end job, but now, with their money-producing AI self-driving car, they are so wealthy they don’t know where to put all the money they are making. The big bonanza is on its way.

This approach of being a solo entrepreneur to afford an AI self-driving car is only one of several possible approaches. I’d guess it will be perhaps the most popular.

I’ll caution though that it is not a guaranteed path to riches. There will be some that manage to get themselves an AI self-driving car and then discover that it is not being put to ridesharing use as much as they thought. It could be that they live in an area swamped with other AI self-driving cars and so they get just leftover crumbs of ridesharing requests. Or, they are in an area that has other mass transit and no one needs ridesharing. Or, maybe few will trust using an AI self-driving car and so there won’t be many that are willing to use it for ridesharing. Another angle is that you get such a car and do so under the assumption it will be ridesharing for 85% of the time, but you instead use it for personal purposes 70% of the time and this leaves only 30% of the time for the ridesharing (cutting down on the revenue potential).

Meanwhile, there are some other alternatives; let’s briefly consider them:

  •         A solo ridesharing business using an AI self-driving car as a money maker (discussed so far)
  •         Pooling an AI self-driving car
  •         Timesharing an AI self-driving car
  •         Exclusively personal use of an AI self-driving car
  •         Other

In the case of pooling an AI self-driving car, imagine that your next door neighbor would like an AI self-driving car and so would you. The two of you realize that since the neighbor starts work at 7 a.m., while you start work at 8 a.m., and the kids of both families start school at 9 a.m., here’s what you could do. You and the neighbor split the cost of an AI self-driving car. It takes your neighbor to work at 7 a.m., comes back and takes you to work at 8 a.m., comes back and takes the kids to school by 9 a.m. In essence, you all pool the use of the AI self-driving car. There’s no revenue aspects, it’s all just being used for personal use, on a group basis. This could be done with more than just one neighbor.

The pooling would then allow you to split the cost of the AI self-driving car, making it more affordable per person. Suppose three people decide to split the cost evenly; each would then only need to afford one-third of whatever the prevailing price of an AI self-driving car is at the time. Voila, the cost is less, seemingly so. But you’d need to figure out the sharing aspects, and I realize it could get heated as to who gets to use the AI self-driving car when needed. It’s like having only one TV: at times it might be difficult to balance that someone wants to watch one show and someone else wants another. Say you need the AI self-driving car to take you to the store, while the kids need it to get to the ballpark.

In the case of the timeshare approach, you buy into an AI self-driving car like you would when buying into a condo in San Carlo. You purchase a time-based portion of the AI self-driving car. You can use it for whatever is the agreed amount of time. Potentially, you can opt to “invest” in more than one at a time, perhaps getting a timeshare in a passenger car that’s an AI self-driving car, and also investing in an RV that’s an AI self-driving vehicle. You would use each at different times for their suitable purposes. With any kind of timesharing arrangement, watch out for the details, such as whether you can get out of it, and other such limitations.

There’s also the purely personal use option, which, as we said at the start of this discussion, might be too much for the average person to afford. Even that is somewhat malleable, in that there are likely to be car loans that take into account that you are buying an AI self-driving car. The loans might be very affordable in the sense that there’s the collateral of the car, plus the AI self-driving car can, if needed, be repossessed and then turned into a potential money maker. The auto makers and the banks and others might be willing to cut some pretty good loans to get you into your very own AI self-driving car. As always, watch out for the interest and any onerous loan terms!

Well, before we get too far ahead of ourselves, the main point is that even if AI self-driving cars are priced “high” in comparison to today’s conventional cars, it does not necessarily mean that those AI self-driving cars are only going to be for the very rich. Instead, those AI self-driving cars are actually going to be a means to help augment the wealth of those that see this as an opportunity. Not everyone will be ready or willing to go the small business route. For many, though, it will be a means to not only enjoy the benefits of AI self-driving cars, but also a spark toward becoming entrepreneurs. Let’s see how this all plays out; maybe it adds another potential benefit to the emergence of AI self-driving cars.

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.

 

Family Road Trip and AI Self-Driving Cars

By Lance Eliot, the AI Trends Insider

Have you ever taken a road trip across the United States with your family? It’s considered a core part of Americana to make such a trip. Somewhat immortalized by the now classic movie National Lampoon’s Vacation, the film showcased the doting, scatterbrained father Clark Griswold with his caring wife, Ellen, and their vacation-with-your-parents trapped children, Rusty and Audrey, as they all at times either enjoyed or managed to endure a cross-country expedition of a lifetime.

As is typically portrayed in such situations, the father drives the car for most of the trip and serves as the taskmaster to keep the trip moving forward, the mother provides soothing care for the family and tries to keep things on an even keel, and the children must contend with parents that are out-of-touch with reality and jointly determined that, come heck-or-high-water, their kids will presumably have a good time (at least by the definition of the parents). The movie was released in 1983 and became a blockbuster that spawned other variants. Today, we can find fault with how the nuclear family is portrayed and the stereotypes used throughout the movie, but nonetheless it put on film what is generally known as the family road trip.

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI systems for self-driving cars and doing so with an eye toward how people will want to use AI self-driving cars. It is important to consider how human occupants will behave while inside an AI self-driving car, and to astutely design and build AI self-driving cars accordingly.

In a conventional car, for a family road trip, it is pretty much the case that the parents sit in the front seats of the car. This makes sense since either the father or the mother will be the driver of the car, oftentimes switching off the driving task from one to the other. In prior times the driving task was considered to be “manly” and so usually the husband was shown driving the car. In contemporary times, whatever the nature and gender of the parents, the point is that the licensed driving adults are most likely to be seated in the front of the car.

If there are two parents, why have both in the front seat, you might ask? Couldn’t you put one of the children up in the front passenger seat, next to the parent or adult that is driving the car? You can certainly arrange things that way, but the usual notion about having the front passenger be another adult or parent is that they can be watching the roadway, serving as an extra pair of eyes for the driver. The driver might be preoccupied with the traffic in front of the car, and meanwhile the front passenger notices that further up ahead there is a bridge-out sign warning that approaching cars need to be cautious. The front passenger is a kind of co-pilot, though they don’t have ready access to the car controls and must instead verbally provide advice to the driver.

The front passenger is not always shown in movies as a dispassionate observer that thoughtfully aids the driver, though. Humorous anecdotes often show the front passenger suddenly pointing at a cow and screaming out loud for everyone to look. The driver could be distracted by such an exclamation and inadvertently drive off the road at the sudden yelling and pointing. Another commonly portrayed scenario is the front passenger that insists the driver take the next right turn ahead, but offers such a verbal instruction once the car is nearly past the available turn. The driver is then torn between making a radical and dangerous turn, or passing the turn entirely and then likely getting berated by the front seat passenger.

Does this seem familiar to you?

If so, you are likely a veteran of family road trips. Congratulations.

What about the children seated in the back seat of the car? One portrayal would be of young children with impressionable minds carefully studying their parents and learning the wise ways of life during the vacation, becoming more learned young adults because of the experience. Of course, this is not the stuff of reality.

Kids Converse with Out-of-Touch Parents

Instead, the movies show something that pertains more closely to reality. The kids often feel trapped. Their parents are forcing them along on a trip. It’s a trip the parents want, but not necessarily what the kids want. At times feeling like prisoners, they need to occupy themselves for hours at a time on long stretches of highway. Though at first it might be keen to see an open highway and the mountains and blue skies, it is something that won’t hold your attention for hours upon hours, days upon days. Boredom sets in. Conversation with the parents also can only last so long. The parents are out-of-touch with the interests, musical tastes, and other facets of the younger generation.

The classic indication is that ultimately the kids will get into a fight. Not a fisticuffs fight per se, more like an arms-waving and hands-slapping kind of fight. And the parents then need to turn their heads and look at the kids with laser-like eyes, and tell the kids in overtly stern terms to stop that fighting back there or there will be heck to pay. No more ice cream, no more allowance, or whatever other levers the parents can use to threaten the kids to behave. Don’t make me come back there, is the usual refrain.

Sometimes one or more of the kids will start crying. It could be for just about any reason. They are tired of the trip and want it to end. They got hit by their brother or sister and want the parents to know. Etc. The parents will often retort that the kids need to stop crying. Or, as they are wont to say, they’ll give them a true reason to cry (a veiled threat). If the kids are complaining incessantly about the trip, this will likely produce the other classic veiled threat of “I’d better not hear another peep out of you!”

Does the above suggest that the togetherness of the family road trip is perhaps hollow and we should abandon the pretense of having a family trip? I don’t think so. It’s more like showing how family trips really happen. In that sense, the movie National Lampoon’s Vacation was a more apt portrayal than a Leave It To Beaver kind of portrayal, at least in more modern times.

Indeed, today’s family road trips are replete with gadgets and electronics in the car. The kids are likely to be focusing on their smartphones and tablets. The car probably has WiFi, though at times only getting intermittent reception as the trip crosses some of the more barren parts of the United States. There might be TVs built into the headrests so the kids can watch movies that way. One of the more popular and cynical portrayals of today’s family road trips is that there is no actual human-to-human interaction inside the car, since everyone is tuned into their own electronic device.

Given the above description of how the family road trip seems to occur, what can we anticipate for the future?

First, it is important to point out that there are varying levels of self-driving cars. The topmost level, a level 5 self-driving car, consists of having AI that can drive the car without any human intervention. This means there is no need for a human driver. The AI should be able to do all of the driving, in the same manner that a human could drive the car. At the levels less than 5, there is and must be a human driver in the car. The self-driving car is not expected to be able to drive entirely on its own and relies upon having a human driver that is at-the-ready to take over the car controls.

See my article about the levels of AI self-driving cars: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

See my article that indicates my framework for AI self-driving cars: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the levels less than 5, the AI self-driving car is essentially going to be a lot like a conventional car in terms of what happens during the family road trip. Admittedly, the human driver will be able to have a direct “co-pilot” of sorts to co-share in the driving task via the AI, but otherwise the car design is pretty much the same as a conventional car. This is because you need to have the human driver seated at the front of the car, and the human driver has to have access to car controls to then drive the car. With that essential premise, you can’t otherwise change too much of the interior design of the car.

As an aside, there are some that have suggested maybe we don’t need the human driver to be looking out the windshield and that we can change the car design accordingly. We could put the human driver in the back seat and have them wear a Virtual Reality headset, connected to the controls of the car via some kind of handheld devices or foot-operated nearby devices. Cameras on the hood and top of the car would beam the visual images to the VR headset. Yes, I suppose this is all possible, but I really doubt we are going to see cars go in that direction. I would say it is a likelier bet that cars less than a Level 5 will be designed to look like a conventional car, and only the Level 5 self-driving cars will have a new design. We’ll see.

For a Level 5 self-driving car, since there is no need for a human driver, we can completely remake the interior of the car. No need to put a fixed place at the front of the car for the human driver to sit. No need for the human driver to look out the windshield. Some of the new designs suggest that one approach would be to have swivel seats for, let’s say, four passengers in a normal-sized self-driving car. The four swivel seats can be turned to face each other, allowing for togetherness of discussion and interaction. At other times, you can rotate the seats so that, let’s say, two face forward as though they were the front seats of the car, with the two behind those also facing forward.

Other ideas include allowing the seats to become beds. It could be that two seats can connect together and their backs be lowered, thus allowing for a bed, one that is essentially at the front of the car and another at the back of the car. Part of the reason that some are considering designing beds into an AI self-driving car is the belief that AI self-driving cars might be used 24×7, and people might sleep in their cars while on their way to work or while on their vacations.

See my article about the non-stop 24×7 nature of AI self-driving cars: https://aitrends.com/selfdrivingcars/non-stop-ai-self-driving-cars-truths-and-consequences/

Another design aspect involves lining the interior of the self-driving car with some kind of TV or LED screens that would allow the interior to become a kind of movie theatre. This would allow for watching movies, shows, and live streams, and even for doing online education. This also raises the question as to whether any kind of glass windows are needed at all. Some assert that we don’t need windows anymore for a Level 5 self-driving car. Instead, the cameras on the outside of the car can show what would otherwise be seen if you looked out a window. The interior screens would show what the cameras see, unless you then wanted to watch a movie, in which case the interior screens would switch to displaying that instead.

Are we really destined to have people sitting in self-driving car shells that have no actual windows? It seems somewhat farfetched. You would think that people will still want to look out a real window. You would think that people would want to be able to roll down their window when they wish to do so. Now, you could of course have true windows and make the glass out of material that can become transparent at times and blocked at other times, thus potentially having the best of both worlds. We’ll see.

Interior Seat Configuration to be Determined

For a family road trip, you could configure the seats so that all four are facing each other, and have family discussions or play games or otherwise directly interact. This might not seem attractive to some people, or might be something they do sparingly when trying to have a family chat. As mentioned, the seats could swivel to allow more of a conventional sense of privacy while sitting in your seat. I’d suggest, though, that the days of the parents saying “don’t make us come back there” are probably numbered. The “there” will be the same place that the parents are sitting. Maybe too much togetherness? Or, maybe it will spark a renewal of togetherness?

Another factor to consider is that none of the human occupants needs to be a driver. In theory, a family road trip has always consisted of one or more drivers, and the rest were occupants. Now, everyone is going to be an occupant. Will parents feel less “useful” since they are no longer undertaking the driving task directly? Or, will parents find this a relief since they can use the time to interact with their children or catch-up on their reading or whatever?

This has another potentially profound impact on the family road trip, namely that no one needs to know how to drive a car. Thus, in theory, you could even have only the children in the self-driving car and no parents or adults at all. I’d agree that this doesn’t feel like a “family” trip at that point, but it could be that the parents are at the hotel and the kids want to go see the nearby theme park, and so the parents tell the kids they can take the self-driving car there.

How should the interior of the self-driving car be reshaped or re-designed if you have only children inside the car for lengths of time? Would there be interior aspects that you’d want to be able to close off from use, or slide away to be hidden from use? Perhaps you would want to lock the swivel seats in place so that the children cannot rotate them during the journey. Via a Skype-like communication capability, you would likely want to interact with the kids, they seeing you and you seeing them via cameras pointed inward into the self-driving car.

Without a human driver, the AI is expected to do all of the driving. When you go on a cross-country road trip, you often discover “hidden” places to visit that are remote and off the normal beaten path. The question will be how good the AI is when confronted with driving in an area for which perhaps no GPS mapping exists. Driving on city roads that have been well mapped is one thing; driving on dirt roads that are not mapped, or for which no map is available, can be trickier. Suppose too that you want to have the self-driving car purposely go off-road. The AI has to be able to do that kind of driving, assuming that there is no provision for a human driver and only the AI is able to drive the car.

An AI self-driving car at Level 5 will normally have some form of Over-The-Air (OTA) capability. This allows the AI to get updated by the auto maker or tech firm, and also allows the AI to report what it has discovered to the auto maker’s or tech firm’s cloud for collective learning purposes. On a cross-country road trip, the odds are that there will be places that have no immediate electronic communication available. Suppose there’s an urgent patch that the OTA needs to provide to the AI self-driving car? This can be dicey when doing a family road trip to off-road locations.

See my article about OTA: https://aitrends.com/selfdrivingcars/air-ota-updating-ai-self-driving-cars/

Suppose the family car, an AI self-driving car, suffers some kind of mechanical breakdown during the trip? What then? Keep in mind that a self-driving car is still a car. This means that parts can break or wear out. This means that you’ll need to get the car to a repair shop. And, with the sophisticated sensors on an AI self-driving car, it will likely have more frequent breakdowns, require more sophisticated repair specialists, and cost more to repair. The road trip could be marred by not being able to find someone in a small town that can deal with your broken-down AI self-driving car.

See my article about automotive recalls and AI self-driving cars: https://aitrends.com/ai-insider/auto-recalls/

The AI of the self-driving car will become crucial as your driving “pilot” and companion, as it were. “Take us to the next town” might be a command that the human occupants utter. One of the children might suddenly blurt out “I need to go to the bathroom”; in the olden days the parents would say hold it until you reach the next suitable place. What will the AI say? Presumably, if it’s good at what it does, it would have already looked up where the next bathroom might be, and offer to stop there. This, though, is trickier than it seems. We cannot assume that the entire United States will be so well mapped that every bathroom can be looked up. The AI might need to use its sensors to identify places that appear to have a bathroom, in the same manner that a parent would furtively look out the window for a gas station or a rest stop.

See my article about NLP and voice commands for AI self-driving cars: https://aitrends.com/selfdrivingcars/car-voice-commands-nlp-self-driving-cars/

There is also the possibility of using V2V (vehicle to vehicle communications) to augment the family road trip. With V2V, an AI self-driving car can potentially electronically communicate with another AI self-driving car. Maybe up ahead there is an AI self-driving car that has discovered that the paved road has large ruts and it is dangerous to drive there. This might be relayed to AI self-driving cars a mile back, so those AI self-driving cars can avoid the area or at least be prepared for what is coming. The AI of those self-driving cars could even warn the family (the human occupants) to be ready for a bumpy ride for the mile up ahead.

There is too the possibility of V2I (vehicle to infrastructure communications). This involves having the roadway infrastructure electronically communicate with the AI self-driving car. It could be that a bridge is being repaired, but you wouldn’t know this from simply looking at a map. The bridge itself might be beaming out a signal that would forewarn cars within a few miles that the bridge is inoperable. Once again the AI self-driving car could thus re-plan the journey, and also warn the occupants about what’s going on.
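
To make the V2V/V2I idea slightly more concrete, here is a purely invented sketch of the kind of hazard message one vehicle might relay to another; it is illustrative only and not based on any actual V2X message standard.

    from dataclasses import dataclass

    @dataclass
    class HazardMessage:
        source_id: str     # which car (or bridge, in the V2I case) sent the warning
        latitude: float
        longitude: float
        hazard_type: str   # e.g., "rutted_pavement" or "bridge_out"
        advisory: str      # human-readable note the AI can relay to occupants

    # A car a mile ahead warns the cars behind it about rutted pavement.
    msg = HazardMessage("veh-1234", 36.1699, -115.1398,
                        "rutted_pavement", "Expect a bumpy ride for the next mile")
    print(msg)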

One aspect that the AI can provide that might or might not have been done by a parent would be to explain the historical significance and other useful facets about where you are. Have you been on a family road trip and researched the upcoming farm that was once run by a U.S. president, or maybe there’s a museum where the first scoop of ice cream was ever dished out? A family road trip is often done to see and understand our heritage. What came before us? How did the country get formed? The AI can be a tour guide, in addition to driving the car.

See my article about AI as tour guide for a self-driving car: https://aitrends.com/selfdrivingcars/extra-scenery-perception-esp2-self-driving-cars-beyond-norm/

As perhaps is evident, the interior of the self-driving car has numerous possibilities in terms of how it might be reshaped for the advent of true Level 5 AI self-driving cars. For a family road trip, the interior can hopefully foster togetherness, while also allowing for privacy. It might accommodate sleeping while driving from place to place. The AI will be the driver, and be guided by where the human occupants want to go. In addition to driving, the AI can be a tour guide and perform various other handy tasks too. This is not all rosy though, and the potential for lack of electronic communications could hamper the ride, along with the potential for mechanical breakdowns that might be hard to get repaired.

No more veiled threats from the front seats to the back seats. I suppose some other veiled threats will culturally develop to replace those. Maybe you tell the children, behave yourselves or I won’t let you use the self-driving car to go to the theme park. Will we have AI self-driving cars possibly zipping along our byways with no adults present and only children, as they do a “family” road trip? That’s a tough one to ponder for now. In any case, enjoy the family road trips of today, using a conventional car or even a self-driving car up to the level 5. Once we have level 5 AI self-driving cars, it will be a whole new kind of family road trip experience.

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.

 

Scientists Trained AI to Write Poetry; Now It’s Toe-to-Toe With Shakespeare

If science fiction has taught us anything it’s that artificial intelligence will one day lead to the downfall of the entirety of mankind. That day is (probably) still a long way away, if it ever actually happens, but for now we get to enjoy some of the nicer aspects of AI, such as its ability to write poetic masterpieces.

Researchers in Australia, in partnership with the University of Toronto, have developed an algorithm capable of writing poetry. Far from your generic rhymes, this AI actually follows the rules, taking metre into account as it weaves its words. The AI is good. Really good. And it’s even capable of tricking humans into thinking that its poems were penned by a person instead of a machine.

According to the researchers, the AI was trained extensively on the rules it needed to follow to craft an acceptable poem. It was fed nearly 3,000 sonnets as training, and the algorithm tore them apart to teach itself how the words worked with each other. Once the bot was brought up to speed it was tasked with crafting some poems of its own. Here’s a sample:

With joyous gambols gay and still array
No longer when he twas, while in his day
At first to pass in all delightful ways
Around him, charming and of all his days

Not bad, huh? Of course, knowing that an AI made it might make it feel more stilted and dry than if you had read it without any preconceptions, but there’s no denying that it’s a fine poem. In fact, the poems written by the AI follow the rules of poetry even more closely than human poets like Shakespeare. I guess that’s the cold machine precision kicking in.

When the bot’s verses were mixed with human-written poems, and then scoured by volunteers, the readers were split 50-50 over who wrote them. That’s a pretty solid vote of confidence in the AI’s favor, but there were still some things that gave the bot away, including errors in wording and grammar.

Still, it’s a mighty impressive achievement. Perhaps when our robot overlords enslave humanity we’ll at least be treated to some nice poetry.

Read the source article in BGR.

Shiggy Challenge and Dangers of an In-Motion AI Self-Driving Car

By Lance Eliot, the AI Trends Insider I’m hoping that you have not tried to do the so-called Shiggy Challenge. If you haven’t done it, I further hope that my telling you about it does not somehow spark you to go ahead and try doing it. For those of you that don’t know about it […]

By Lance Eliot, the AI Trends Insider

I’m hoping that you have not tried to do the so-called Shiggy Challenge. If you haven’t done it, I further hope that my telling you about it does not somehow spark you to go ahead and try doing it. For those of you that don’t know about it and have not a clue about what it is, be ready to be “amazed” at what is emerging as a social media generated fad. It’s a dangerous one.

Here’s the deal.

You are supposed to get out of a moving car, leaving the driver’s seat vacant, dance alongside the car as it continues rolling forward, video record your dancing (you are also moving forward at the same pace as the car), and then jump back into the car to continue driving it.

If you ponder this for a moment, I trust that you instantly recognize the danger of this and (if I might say) the stupidity of it (or does that make me appear to be old-fashioned?).

As you might guess, there have already been people who hurt themselves while trying to jump out of the moving car, spraining an ankle, hurting a knee, banging their legs on the door, etc. Likewise, they have gotten hurt while trying to jump back into the moving car (colliding with the steering wheel or the seat arm, etc.).

There are some people who, while dancing outside the moving car, became preoccupied and didn’t notice that their moving car was heading toward someone or something. Or, they weren’t moving forward fast enough to keep pace with the car. And so on. There have been reported cases of the moving car blindly hitting others, and in some cases hitting a parked car or other objects near or in the roadway.

Some of the videos show the person having gotten out of their car, the car door then closing unexpectedly, and, guess what, all the car doors turning out to be locked. Thus, the person could not readily get back into the car to stop it from rolling forward and potentially hitting someone or something.

This is one of those seemingly bizarre social media fads that began somewhat innocently, and then the ante got upped as each person seeking fame added more danger to it. As you know, people will do anything to try to get views. The bolder your video, the greater the chance it will go viral.

This challenge began in a somewhat simple way. The song “In My Feelings” by Drake was released, and at about the same time an online personality named Shiggy posted a video of himself dancing to the tune on his Instagram. Other personalities and celebrities then opted to do the same dance, video recording themselves dancing to the Drake song, and they posted their versions. This spawned a mild viral sensation.

But, as with most things on social media, there arose a desire to do something more outlandish. At first, this involved being a passenger in a slowly moving car, getting out, doing the Shiggy-inspired dance, and then jumping back in. This is obviously not recommended, though at least there was still a human driver at the wheel. This then morphed into the driver being the one to jump out, either having a passenger film it or setting up the video to make a selfie recording of the stunt.

Some of the early versions had the cars moving at a really low speed. It seems now that some people attempt it with their cars moving at a much faster clip. It further seems that some people don’t think about the dangers of this activity and just “go for it,” figuring it will all work out fine and dandy. It often doesn’t. Not surprising to most of us, I’d dare say.

The craze is referred to as either the Shiggy Challenge or the In My Feelings challenge (#InMyFeelings), and some more explicitly call it the moving car dance challenge. This craze has even gotten the feds involved. The National Transportation Safety Board (NTSB) issued a tweet that said this: “#OntheBlog we’re sharing concerns about the #InMyFeelings challenge while driving. #DistractedDriving is dangerous and can be deadly. No call, no text, no update, and certainly no dance challenge is worth a human life.”

Be forewarned that this antic can get you busted, including a distracted driving ticket, or worse still a reckless driving charge.

Now that I’ve told you about this wondrous and trending challenge, I want to emphasize that I only refer to it as an indicator of something otherwise worthy of discussion herein, namely the act of getting out of or into a moving car. I suppose it should go without stating that getting into a moving car is highly dangerous and discouraged. The equally valid corollary is that getting out of a moving car is highly dangerous and discouraged.

I’m sure someone will instantly retort that hey, Lance, there are times that it is necessary to get out of or into a moving car. Yes, I’ve seen the same spy movies as you, and I realize that when James Bond is in a moving car and being held at gunpoint, maybe the right spy action is to leap out of the car. Got it. Seriously, I’ll be happy to concede that there are rare situations whereby getting into or out of a moving car might be needed; let’s say the car is on fire and in motion, or you are being kidnapped. By and large, I would hope we all agree that those are rarities.

Sadly, there are a number of reported incidents each year of people getting run over by their own car. Somewhat recently, a person left their car engine running and got out to do something such as drop a piece of mail into a nearby mailbox, and the car inadvertently shifted into gear and ran them over. These oddities do happen from time to time. Again, extremely rare, but they further illustrate the dangers of getting out of even a non-moving car whose engine is running.

Prior to the advent of seat belts, and the gradual mandatory use and acceptance of seat belts in cars, there were a surprisingly sizable number of reported incidents of people “falling” out of their cars. Now, it could be that some of them jumped out while the car was moving, and so it wasn’t particularly the lack of a seat belt involved. On the other hand, there are documented cases of people sitting in a moving car without a seat belt whose car door opened unexpectedly, with them then proceeding to accidentally hang outside of the car (often clinging to the door), or falling entirely out of the car onto the street.

This is why you should always wear your seat belt. Tip for the day.

For the daredevils among you, it might not be apparent why it is so bad to leave a moving car. If you are a passenger, you have a substantial chance of falling to the street and getting injured. Or, maybe you fall to the street and get killed by hitting your head. Or, maybe you hit an object like a fire hydrant and get injured or killed. Or, maybe another car runs you over. Or, maybe the car you exited manages to drive over you. I think that paints the picture pretty well.

I’d guess that the human driver of the car might be shocked to have you suddenly leave the moving car. This could cause the human driver to make some kind of panic or erratic maneuver with the car. Thus, your “innocent” act of leaving the moving car could cause the human driver to swerve into another car, maybe injuring or killing other people. Or, maybe you roll onto the ground and seem OK, but then the human driver turns the car to try and somehow catch you and actually hits you, injuring you or killing you. There are numerous acrobatic variations to this.

Suppose it’s the human driver who opts to leave the moving car? In that case, the car is now a torpedo ready to strike someone or something. It’s an unguided missile. Sure, the car will likely start to slow because the human driver is no longer pushing on the accelerator pedal, but depending upon the speed when the driver ejected, the multi-ton car still carries a lot of momentum and a real chance of hitting, injuring, or killing someone. If there are any human occupants left inside the car, they too are now at the mercy of a car that is moving without any direct driving direction.

Risks of Exiting a Moving Car

Let’s recap, you can exit from a moving car and these things could happen:

  •         You directly get injured (by say hitting the street)
  •         You directly get killed (by hitting the street with your head, let’s say)
  •         You indirectly get injured (another car comes along and hits you)
  •         You indirectly get killed (the other car runs you over)
  •         Your action gets someone else injured (another car crashes trying to avoid you)
  •         Your action gets someone else killed (the other car rams a car and everyone gets killed)

I’m going to carve out a bit of an exception to this aspect of leaving a moving car. If you choose to leave the moving car or do so by happenstance, let’s call that a “normal” exiting of a moving car. On the other hand, suppose the car gets into a car accident, unrelated for the moment to your exiting, and during the accident you are involuntarily thrown out of the car due to the car crash. That’s kind of different than choosing to exit the moving car per se. Of course, this happens often when people that aren’t wearing seat belts get into severe car crashes.

Anyway, let’s consider that there’s the bad news of exiting a moving car, and we also want to keep in mind that trying to get into a moving car has its own dangers too. I remember a friend of mine in college that opted to try jumping into the back passenger seat of a moving car (I believe some drinking had been taking place). His pal opened the back door, and urged him to jump in. He was lucky to have landed into the seat. He could have easily been struck by the moving car. He could have fallen to the street and gotten run over by the car. Again, injuries and potential death, for him, and for other occupants of the car, and for other nearby cars too.

I’d like to enlarge the list of moving car aspects to these:

  •         Exiting a moving car
  •         Entering a moving car
  •         Riding on a moving car
  •         Hanging onto a moving car
  •         Facing off with a moving car
  •         Chasing after a moving car
  •         Other

I’ve covered already the first two items, so let’s consider the others on the list.

There are reports from time-to-time of people that opted to ride on the hood of a car, usually for fun, and unfortunately they fell off and got hurt or killed once the car got into motion.

Hanging onto a moving car was somewhat popularized by the “Back To The Future” movie series when Marty McFly (Michael J. Fox) opts to grab onto the back of a car while he’s riding his skateboard. I’m not blaming the movie for this and realize it is something people already had done, but the movie did momentarily increase the popularity of trying this dangerous act.

Facing off with a moving car has sometimes been done by people that perhaps watch too many bull fights. They seem to think that they can hold a red cape and challenge the bull (the car). In my experience, the car is likely to win over the human standing in the street and facing off with the car. It’s a weight thing.

Chasing after a moving car happens somewhat commonly in places like New York City. You see a cab, it fails to stop, you are in a hurry, so you run after the cab, yelling at the top of your lungs. With the advent of Uber and other ridesharing services, this doesn’t happen as much as it used to. Instead, we let our mobile apps do our cab or rideshare hailing for us.

What does all of this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars, and one aspect that many auto makers and tech firms are not yet considering deals with the aforementioned things that people do regarding moving cars.

Some of the auto makers and tech firms would say that these various actions by humans, such as exiting a moving car or trying to get into a moving car, are considered an “edge” problem. An edge problem is one that is not at the core of the overarching problem being solved. If you are in the midst of trying to get AI to drive a car, you likely consider these cases of people exiting and entering a moving car to be such a remote possibility that you don’t pay much attention to it right now. You figure it’s something to ultimately deal with, but getting the car to drive is foremost in your mind right now.

I’ve had some AI developers tell me that if a human is stupid enough to exit from a moving car, they get what they deserve. Same for all the other possibilities, such as trying to enter a moving car, chasing after a moving car, etc. The perspective is that the AI has enough to do already, and dealing with stupid human tricks (a la David Letterman!) is just not a very high priority. Humans do stupid things, and these AI developers shrug their shoulders and say that an AI self-driving car is never going to be able to stop people from being stupid.

This narrow view by those AI developers is unfortunate.

I can already predict that there will be an AI self-driving car that while driving on the public roadways will have an occupant that opts to jump out of the moving self-driving car. Let’s say that indeed this is a stupid act and the person had no particularly justifiable cause to do so. If the AI self-driving car proceeds along and does not realize that the person jumped out, and the AI blindly continues to drive ahead, I’ll bet there will be backlash about this. Backlash against the particular self-driving car maker. Backlash against possibly the entire AI self-driving car industry. It could get ugly.

For my explanation of the egocentric designs of AI self-driving cars, see: https://aitrends.com/selfdrivingcars/egocentric-design-and-ai-self-driving-cars/

For lawsuits about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/first-salvo-class-action-lawsuits-defective-self-driving-cars/

For why AI self-driving cars need to be able to do defensive driving, see my article: https://aitrends.com/selfdrivingcars/art-defensive-driving-key-self-driving-car-success/

Let’s take a moment and clarify too what is meant by an AI self-driving car. There are various levels of capabilities of AI self-driving cars. The topmost level is considered Level 5. A Level 5 AI self-driving car is one in which the AI is fully able to drive the car, and there is no requirement for a human driver to be present. Indeed, often a Level 5 self-driving car has no provision for human driving, meaning there are no pedals and no steering wheel available for a human to use. For self-driving cars less than a Level 5, it is expected that a human driver will be present and that the AI and the human driver will co-share the driving task. I’ve mentioned many times that this co-sharing arrangement allows for dangerous situations and adverse consequences.

For more about the co-sharing of the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

For human factors aspects of AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/not-fast-enough-human-factors-ai-self-driving-cars-control-transitions/

The level of an AI self-driving car is a crucial consideration in this discussion about people leaping out of a moving self-driving car or taking other such actions.

Consider first the self-driving cars less than a Level 5. If the human driver that’s supposed to be in the self-driving car is the one that jumps out, this leaves the AI alone to continue driving the car (assuming that no other human occupant is able to step into the human driving role of the co-sharing task). We likely don’t want the AI to be alone as the driver, since for levels less than 5 it is considered a precondition that there be a human driver present. As such, the AI needs to ascertain that the human driver is no longer present and, at a minimum, make a concerted effort to bring the self-driving car safely to a proper and appropriate halt.

Would we want the AI in the less-than Level 5 self-driving car to take any special steps about the exited human? This is somewhat of an open question, because at less than Level 5 the AI is not yet expected to be fully sophisticated. It could be that we might agree that at less than Level 5, the most we can expect is that the AI will try to safely bring the self-driving car to a halt. It won’t try to somehow go around and pick up the person or take other actions that we would expect a human driver to possibly undertake.

This brings us to the Level 5 self-driving car. It too should be able to detect that someone has left the moving self-driving car. In this case, it doesn’t matter whether the person that left was the driver or not, because no human driver is needed anyway. In that sense, in theory, the driving can continue. It’s now a question of what to do about the human that left the moving car.

In essence, with the Level 5 self-driving car, we have more options for what to have the AI do in this circumstance. It could just ignore that a human abruptly left the car and continue along, acting as though nothing happened at all. Or, it could have some kind of provisioned action to take in such situations, and invoke that action. Or, it could act like the less-than Level 5 self-driving cars and merely seek to safely and appropriately bring the self-driving car to a halt.

One would question the approach of doing nothing while being aware that a human left the self-driving car in motion; this seems counterintuitive to what we would expect or hope the AI would do. If the AI is acting like a human driver, we would certainly expect a human driver to do something overt about the occupant that has left the moving car. Call 911. Slow down. Turn around. Do something. Unless the driver and the occupants are somehow in agreement about leaving the self-driving car, having made some pact to do so, it would seem prudent and expected that a human driver would do something to come to the aid of the other person. Thus, so should the AI.

You might wonder how the AI would even realize that a human has left the car.

Consider that there are these key aspects of the driving task by the AI (a minimal sketch of this loop appears after the list):

  •         Sensor data collection and interpretation
  •         Sensor fusion
  •         Virtual world model updating
  •         AI action planning
  •         Car controls commands issuance

See my article about the framework of AI self-driving cars: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

The AI self-driving car will likely have sensors pointing outward of the car, such as the use of radar, cameras, LIDAR, sonar, and the like. These provide an indication of what is occurring outside of the self-driving car in the surrounding environment.

It is likely that there will also be sensors pointing inward into the car compartment. For example, it is anticipated that there will be cameras and an audio microphone in the car compartment. The microphone allows the human occupants to verbally interact with the AI system, similar to interacting with a Siri or Alexa. The camera would allow those within the self-driving car to be seen, so that if the self-driving car is being used to drive your children to school, you could readily see that they are doing OK inside the AI self-driving car.

For more about the natural language interaction with human occupants in a self-driving car, see my article: https://aitrends.com/features/socio-behavioral-computing-for-ai-self-driving-cars/

I’ll walk you through a scenario of an AI self-driving car at a Level 5 and the case of someone that opts to exit from the self-driving car while it is in motion.

Joe and Samantha have opted to use the family AI self-driving car to go to the beach. They both gather up their beach towels and sunscreen, and get into the AI self-driving car. Joe tells the AI to take them to the beach. Dutifully, the AI system repeats back that it will head to the beach and indicates an estimated arrival time. Samantha and Joe settle into their seats and opt to watch a live video stream of a volleyball tournament taking place at the beach, which they hope to arrive at before it ends.

At this juncture, the AI system would have used the inward facing camera to detect that two people are in the self-driving car. In fact, it would recognize them since it is the family car and they have been in it many times before. The AI sets the internal environment to their normal preferences, such as the temperature, the lighting, and the rest. It proceeds to drive the car to the beach.

Once the self-driving car gets close to the beach, it turns out there’s lots of traffic, as many other people opted to drive to the beach that day. Joe starts to get worried that he’s going to miss seeing the end of the volleyball game in person. So, while the self-driving car is crawling along at about five to eight miles per hour in solid traffic, Joe suddenly decides to open the car door and leap out. He then runs over to the volleyball game to see the last few moments of the match.

Level 5 Self-Driving Car Thinks About Passenger Who Jumped Out

The AI system would have detected that the car door had opened and closed. The inward facing cameras would have detected that Joe had moved toward the door and exited. The outward facing cameras, the sonar, the radar, and the LIDAR would all have detected him once he got out of the self-driving car. The sensor fusion would have put together the data from those outward facing sensors and been able to ascertain that a human was near to the self-driving car and proceeding away from it at a relatively fast pace.

The virtual world model would have contained an indicator of a human near to the self-driving car, once Joe had gotten out of the self-driving car. And, it would also have indicators of the other nearby cars. It is plausible then that the AI would via the sensors be aware that Joe had been in the self-driving car, had gotten out of it, and was then moving away from it.

The big question then is what should the AI action planning do? If Joe’s exit does not pose a threat to the AI self-driving car, in the sense that Joe moved rapidly away from it, and so he’s not a potential inadvertent target of the self-driving car by its moving forward, presumably there’s not much that needs to be done. The AI doesn’t need to slow down or stop the car. But, this is unclear since it could be that Joe somehow fell out of the car, and so maybe the self-driving car should come to a halt safely.

Here’s where the interaction part comes into play. The AI could potentially ask the remaining human occupant, Samantha, about what has happened and what to do. It could even have called out to Joe when he first opened the door to exit, and asked what he was doing. Joe, had he been thoughtful, could have told the AI beforehand that he was planning on jumping out of the car while it was in motion, and thus a kind of “pact” would have been established.

These aspects are not so easily decided upon. Suppose the human occupant is unable to interact with the AI, or refuses to do so? This is a contingency that the AI needs to contend with. Suppose the human is purposely doing something highly dangerous? Perhaps in this case, when Joe jumped out, there was another car coming up that the AI could detect and knew might hit Joe; what should the AI have done then?

Some say that maybe the best way to deal with this aspect of leaping out of the car is to make the car doors impossible for the human occupants to open while the AI self-driving car is in motion. This might seem appealing, as an easy answer, but it fails to recognize the complexity of the real world. Will people accept the idea that they are locked inside an AI self-driving car and cannot get out on their own? Doubtful. If you say just have the humans tell the AI to unlock the door when they want to get out, and the AI can refuse when the car is in motion, this again will likely be met with skepticism by humans as a viable means of human control over the automation.

A similar question though does exist about self-driving cars and children.

If AI self-driving cars are going to be used to send your children to school or play, do you want those children to be able to get out of the self-driving car whenever they wish? Probably not. You would want the children to be forced to stay inside. But, there’s no adult present to help determine when unlocking the doors is good or not to do. Some say that by having inward facing cameras and a Skype like feature, the parents could be the ones that instruct the AI via live streaming to go ahead and unlock the doors when appropriate. This of course has downsides since it makes the assumption that there will be a responsible adult available for this purpose and that they’ll have a real-time connection to the self-driving car, etc.

Each of the other actions by humans, such as entering the car while in motion, chasing after a self-driving car, hanging onto a self-driving car, or riding on top of a self-driving car, has its own particulars as to what the AI should and maybe should not do.

Being able to detect any of these human actions is the “easier” part, since it involves finding objects and tracking those objects (when I say easy, I am not saying that the sensors will work flawlessly, nor that they can necessarily make such detections reliably; I am simply saying that the programming for this is clearer than the AI action planning is).

Machine learning or similar kinds of automation are unlikely to get the AI out of this pickle of figuring out what to do. There are generally few instances of these kinds of events, and each instance tends to have its own unique circumstances. It would be hard to assemble a large enough training set. There would also be the concern that the learning would overfit to the limited data and thus not be viable in the generalizable situations that are likely to arise.

Our view is that this is something requiring templates and programmatic solutions (sketched below), rather than an artificial neural network or similar. Nonetheless, allow me to emphasize that once encountered, these circumstances should still go up to the AI system’s cloud for sharing with the rest of the fleet and for enhancing the abilities of the on-board AI systems that have not yet encountered such instances.

For understanding the OTA capabilities of AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/air-ota-updating-ai-self-driving-cars/

The odds are high that human occupants will be more tempted to jump out of a moving AI self-driving car than out of a human-driven car, or similarly to try to get into one that is moving. I say this because at first humans will likely be timid with the AI and hesitant to do anything untoward, but after a while the AI will become more accepted and humans will become bolder. If your friend or parent is driving the car, you are socially bound to avoid strange tricks; you would worry that they might get in trouble. With the AI driving the car, you have no such social binding per se. I’m sure that many maverick teenagers will delight in “tricking” the AI self-driving car into doing all sorts of Instagram-worthy untoward things.

Of course, it’s not always maverick kinds of actions that would occur. I’ve had situations wherein I was driving in an unfamiliar area, and a friend walked ahead of my car, guiding the way. If you owned a Level 5 AI self-driving car, you might want it to do the same: you get out of the self-driving car and have it follow you. In theory, the self-driving car should come to a stop before you get out, and likewise be stopped when you want to get in, but will this always be true? Do we want such unmalleable rules for our AI self-driving cars?

Should your AI self-driving car enable you to undertake the Shiggy Challenge?

In theory, a Level 5 AI self-driving car could do so and even help you do so. It could do the video recording of your dancing. It could respond to your verbal commands to slow down or speed-up the car. It could make sure to avoid any upcoming cars and thus avert the possibility of ramming into someone else while you are dancing wildly to “In My Feelings.” This is relatively straightforward.

But, as a society, do we want this to be happening? Will it encourage behavior that is ultimately likely to lead to human injury and possibly death? We can add this to the long list of ethics aspects of AI self-driving cars. Meanwhile, it’s something that cannot be neglected, else we’ll surely end up with AI that’s unaware, those “stupid” humans will get themselves into trouble, and the AI might get axed because of it.

As the song says: “Gotta be real with it, yup.”

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.

 

AI in Edmonton: Home to Reinforcement Learning

Edmonton, Home to Reinforcement Learning, now a Foundation of AI, is Retaining AI Talent, Attracting Investment Including DeepMind The capital of Edmonton in the Canadian province of Alberta, like its counterparts in Toronto and Montreal, has a number of strengths in AI research that are attracting engineering talent and private investors. These include: — The […]

Edmonton, Home to Reinforcement Learning, now a Foundation of AI,
is Retaining AI Talent, Attracting Investment Including DeepMind

Edmonton, the capital of the Canadian province of Alberta, like its counterparts Toronto and Montreal, has a number of strengths in AI research that are attracting engineering talent and private investors. These include:

— The University of Alberta, considered a bedrock of Reinforcement Learning (RL) thanks to pioneering work done by Prof. Richard Sutton. The Royal Bank of Canada’s RBC Research arm announced in early 2017 it would hire Prof. Sutton to advise on a new research lab opening in Alberta to research the application of AI in banking.

RBC CEO Dave McKay stated at the time, “There is a lot of investment discussion about AI creating new capabilities. And it is a tool we are very excited about harnessing it within our own organization.”

Amii (Alberta Machine Intelligence Institute), a research group set up by Prof. Sutton, has continued to attract top students from around the world.

Borealis AI is a research center funded by RBC and aligned with U Alberta and Amii, aimed at technology transfer from AI research to commercial business opportunities. Prof. Matthew Taylor, an RL expert from Washington State University, leads research at Borealis and currently has 15 researchers focused on solving RL problems.

ACAMP (Alberta Center for Advanced Micro Nano Technology) is an industry-led product development center founded in 2007 and used by advanced technology entrepreneurs to move their innovations from proof-of-concept to manufactured product. The center provides entrepreneurs access to multidisciplinary engineers, technology experts, unique specialized equipment, and industry expertise.

Located in Edmonton’s Research Park, ACAMP has a focus on electronics hardware, firmware, sensors, and embedded systems. The center’s product development group provides a range of support at each stage of the product development process.

The firm cites client testimonials from Xtel International, Ltd., Symroc, Nanolog Audio, the University of Dayton, Medella Health and Hifi Engineering.

Prof. Sutton Recognized for Reinforcement Learning Research

Dr. Sutton is recognized for his work in reinforcement learning, an area of machine learning in which a system learns behavior through trial and error, guided by reward signals rather than explicit labeled examples. Reinforcement learning techniques have been shown to be powerful in determining ideal behaviors in complex environments. For example, the techniques were used to secure a first-ever victory over a human world champion in the game of Go, and they have been used in recent applications in robotics and self-driving cars.

“The collaboration between RBC Research and Amii will help support the development of an AI ecosystem in Canada that will push the boundaries of academic knowledge,” stated Dr. Sutton in a press release. “With RBC’s continued support, we will cultivate the next generation of computer scientists who will develop innovative solutions to the toughest challenges facing Canada and beyond. We’ve only scratched the surface of what reinforcement learning can do in finance.”

“We are thrilled to be opening a lab in Edmonton and to collaborate with world-class scientists like Dr. Sutton and the other researchers at Amii,” stated Dr. Foteini Agrafioti, head of RBC Research. “RBC Research has built strong capabilities in deep-learning, and with this expansion, we are well poised to play a major role in advancing research in AI and impact the future of banking.”

Gabriel Woo, VP of innovation at RBC Research in Toronto, stated in  the Financial Post that while Toronto’s and Montreal’s AI ecosystems are further along, “you have a comparable academic lab at AMII, and it is home to Sutton, who literally wrote the textbook on reinforcement learning that is being read around the world. Because of that, we are partnering with them to create and fuel opportunities to help that talent stay in Edmonton.”

Woo believes the community can expect to see more investors and startups in the near future. “If we are able to provide opportunities for them to apply their research, it will attract more attention from VCs and others and increase the opportunities for commercialization.”

Edmonton Startups Have Access to Capital, Work Space

That notion was seconded by Shawn Abbott, a general partner at iNovia Capital, which backs early stage companies. “The rising tide in AI has been due to the avalanche of large-scale cloud computing capacity, which has made the techniques of scientific AI development practical,” he said in an interview with AI Trends. “AI helps make a commodity of prediction; the ability to forecast what will happen next is now available in many industries. It’s a new way to build software and to provide cognitive augmentation, the ability to support intellectual or human endeavors with software.”

The advances of Dr. Sutton have been pivotal to the progress of AI generally and in Edmonton in particular. “Dr. Sutton’s group has turned out more PhDs in AI than any other group in Canada,” Abbott said.

Keeping that talent in Canada has been the focus of Startup Edmonton, funded by the Edmonton Economic Development Corp., since its founding in 2009. The group supports entrepreneurs with mentorship programs, coworking space and community events, bringing together developers, students, founders and investors. The effort has helped to some degree to stem the brain drain of AI talent from Canada. “I don’t think it’s completely stopped but it has slowed down,” said Tiffany Linke-Boyko, CEO of Startup Edmonton, in an interview with AI Trends. A more favorable cost of living in Edmonton also helps. “The expense of living in some of the US high tech cities is insane,” she said.

She said the effort to raise awareness of Edmonton as a good location to build new AI companies is off to a good start but still at an early stage. “We still need more companies; it’s a young ecosystem with interesting momentum,” she said.

DeepMind Commitment a Boost

Edmonton got a boost with the announcement in July 2017 that DeepMind would open its first international AI research lab in downtown Edmonton. The 10-person lab, to operate in partnership with the University of Alberta, will be headed by three University of Alberta PhDs: Richard Sutton, Michael Bowling and Patrick Pilarski.

From left, Dr. Rich Sutton, Dr. Michael Bowling and Dr. Patrick Pilarski, all professors of AI at the University of Alberta, will run the Edmonton research lab of DeepMind, an AI research division of Google.

“This is a huge reputational win for the University of Alberta,” stated U of A’s dean of science, Jonathan Schaeffer, himself an AI pioneer, in an account in the Edmonton Journal.  “We’ve been one of the best AI research centres in the world for more than 10 years. The academic world knows this, but the business community doesn’t. The DeepMind announcement puts us on the map in a big way. It’s going to wake up a lot of people.”

Bowling is a leading expert on AI and games. He and his team created computer programs that beat champion human poker players. Pilarski, an engineer, specializes in adapting AI to medical uses, from helping to create intelligent prosthetic limbs to reading and screening medical tests. DeepMind of London wanted them, but the three didn’t want to leave Edmonton to move to London. So DeepMind decided to come to them.

“We’ve reached a critical mass here. There’s a kind of stickiness,” stated Pilarski. “This is the right place at the right time. It’s like nowhere else in the world.”

Now the three are in a good position to attract some of their best students back to Edmonton and to recruit more top students. “A lot of our graduates are dying for a chance to use their education in Edmonton,” stated Bowling. “We’re hoping this is a catalyst for more of a tech build-up in Edmonton.”

Over the last 15 years, the Alberta government has invested $40 million in AI and machine learning research, mostly at the U of Alberta.  That steady funding lured Sutton and Bowling to Edmonton initially.

DeepMind in January announced funding for an endowed chair at the University of Alberta’s department of computer science. The person who fills the position will be given academic freedom to explore any interest that could advance the field of AI.

“The DeepMind endowed chair, together with additional funding to support AI research at the department of computing science, is a sign of our continued commitment to this cause, and we look forward to the research breakthroughs this deep collaboration will bring,” stated Demis Hassabis, founder and CEO of DeepMind, in a press release.

Interesting AI Startups in and Around Edmonton

Here is a look at selected Edmonton-area startups that incorporate AI in their products or services.

Testfire Labs: Machine Learning Underlies the Hendrix AI Assistant

Testfire Labs, founded in 2017, is a startup that uses machine learning and artificial intelligence to build productivity solutions that modernize the way people work. Testfire’s flagship product, Hendrix.ai, is an AI assistant that captures meeting notes, action items and data points by listening via a microphone.

Currently in its beta test phase, Hendrix is said to produce meeting summaries that leave out “chit chat” for clarity.

“The demands to do more with less in modern business keep increasing,” stated Dave Damer, founder and CEO, in an account on Testfire recently published in AI Trends. “AI gives us an opportunity to legitimately take things off people’s hands that are generally mundane tasks so they can focus on higher-value work.”

Testfire has had three rounds of funding, with the amount raised undisclosed, according to Crunchbase.

Stream Technologies, Inc.

Stream combines the power of spectroscopy and machine learning to make detection quick and easy. Test results that would normally come from a lab, or from someone with a certain level of expertise, are now produced in near-real time.

Within the agriculture sector, customers may want to identify anything from an invasive species, to a disease, to a nutrient deficiency, to levels of oil in plants, seeds and fertilizers.

Stream uses a three-stage system of capture, analyze and visualize to deliver its services. The capture is executed by a multispectral camera or spectrometer; that data is fed into the Stream Analytics Engine, which creates an application to analyze the spectral data; in the visualization stage, the data is ready in minutes, either as colored images or as levels of the detected element.

The Analytics Engine combines machine learning techniques and neural net designs specifically to present test results from spectral images and spectrometer scans.

One example is the ability to detect the difference between organic and polyethylene leaves. After the analysis, the polyethylene leaves are colored red and the organic leaves are colored blue.  

Learn more at Stream Technologies.

DrugBank is a Leading Online Drug Database

DrugBank is a curated pharmaceutical knowledge base for precision medicine, electronic health records and drug development.

“Our mission is to enable advances in precision medicine and drug discovery,” said co-founder and CEO Mike Wilson of OMx Personal Health Analytics, Inc., which operates DrugBank, in comments to AI Trends.

DrugBank founders Craig Knox, left, and Mike Wilson.

DrugBank provides structured drug information that covers drugs from discovery stage to approval stage. It includes comprehensive molecular information about drugs, their mechanisms, their interactions and their targets as well as detailed regulatory information including indications, marketing status and clinical trials. DrugBank has become one of the world’s most widely used reference drug resources. It is routinely used by the general public, educators, pharmacists, pharmacologists, the pharmaceutical industry and regulatory agencies.

The first version of DrugBank was released in 2006. Version 5.1.1 was released in July 2018. The online database started as a project of computer science professor Dr. David Wishart of the U of Alberta. Craig Knox and Mike Wilson helped develop the tool as undergraduates. The two later made a deal with the university to commercialize the database and set up shop at Startup Edmonton, which provides workspace and support for entrepreneurs.

“The first weekend we released it, the servers crashed because there was so much traffic coming in,” stated co-founder Craig Knox in an account in Startup Edmonton. “It was quite popular and grew in its popularity over the years.” Over the next decade, DrugBank became ubiquitous in the pharma world, with millions of global users.

“We sell subscriptions for our datasets and software for precision medicine, electronic health records, and drug development. We also provide datasets for academic researchers for free,” Wilson told AI Trends.

Now DrugBank’s commercial clients include some of the largest pharmaceutical companies in the world, as well as mid-sized companies, a growing number of pharma startups, and companies providing scientific reference software. “The value for the users is saving time by finding the information in one place,” stated Wilson.

Each month, a million users visit the site, making DrugBank the most popular drug database in the world. It has information on more than 20,000 individual drugs, including approved drugs, drugs in clinical trials and drug formulas that show potential.

With pharma research advancing rapidly, the database must be continually updated with new information. To do this, the company uses a team of nine ‘bio curators’ — representing pharmacy, medicine, biochemistry, and other fields — who comb the academic literature for new information to add to the resource daily.

New offerings use AI to provide insights for precision medicine and pharmaceutical analytics. “Our latest offering analyzes an individual’s medical history and medications and provides important insights based on an analysis of various factors including side effects, interactions and comparisons to similar medications,” Wilson said. “The offering leverages our extremely detailed structured knowledge base and a proprietary AI algorithm to provide the analysis.”

The founders spoke highly of the support they get from Startup Edmonton, which has helped them lay a foundation for a global, scalable technology product. They enjoy being located in the downtown facility with its network of entrepreneurs. “You learn from each other which is a really cool benefit,” stated Wilson.

— By John P. Desmond, AI Trends Editor

Next: AI in Vancouver