Coopetition and AI Self-Driving Cars


By Lance Eliot, the AI Trends Insider

Competitors usually fight tooth and nail for every inch of ground they can gain over one another. It’s a dog-eat-dog world, and if you can gain an advantage over your competition, the better off you will be. If you can even somehow drive your competition out of business, well, as long as it happened legally, there’s more of the pie for you.

Given this rather obvious and strident desire to beat your competition, it might seem like heresy to suggest that you might at times consider backing down from being at each other’s throats and instead, dare I say, possibly cooperate with your competition. You might not be aware that the US Postal Service (USPS) has cooperative arrangements with FedEx and UPS. On the surface, it seems wild that these competitors, all directly competing as shippers, would consider working together rather than solely battling each other.

Here’s another example: Wintel. For those of you in the tech arena, you know well that Microsoft and Intel have seemingly forever cooperated with each other. The Windows and Intel mash-up, Wintel, has been pretty good for each of them respectively and collectively. When Intel’s chips became more powerful, it aided Microsoft in speeding up Windows and being able to add more and heavier features. As people used Windows and wanted faster speed and greater capabilities, it spurred Intel to boost their chips, knowing there was a place to sell them and money to be made by doing so. You could say it is a synergistic relationship between those two firms that in combination has aided them both.

Now, I realize you might object somewhat and insist that Microsoft and Intel are not competitors per se; thus, the suggestion that this was two competitors that found a means to cooperate seems either an unfair characterization or a false one. You’d be somewhat on the mark to have noticed that they don’t seem to be direct competitors, though they could be if they wanted to be (Microsoft could easily get into the chip business, Intel could easily get into the OS business, and they’ve both dabbled in each other’s pond from time to time). Certainly, though, it’s not as strong a straight-ahead competition example as the USPS, FedEx, and UPS kind of cooperative arrangement.

There’s a word used to depict the mash-up of competition and cooperation, namely coopetition.

The word coopetition grew into prominence in the 1990s. Some people instantly react to the notion of being both a competitor and a cooperator as though it’s a crazy idea. What, give away my secrets to my competition, are you nuts? Indeed, trying to pull off a coopetition can be tricky, as I’ll describe further herein. Please also be aware that occasionally you’ll see the use of the more informal phrasing of “frenemy” to depict a similar notion (another kind of mash-up, this one between the word “friend” and the word “enemy”).

There are those that instantly recoil in horror at the idea of coopetition and their knee jerk reaction is that it must be utterly illegal. They assume that there must be laws that prevent such a thing. Generally, depending upon how the coopetition is arranged, there’s nothing illegal about it per se. The coopetition can though veer in a direction that raises legal concerns and thus the participants need to be especially careful about what they do, how they do it, and what impact it has on the marketplace.

It’s not primarily the potential for legal difficulties that tends to keep coopetition from happening. By and large, a coopetition arrangement can be structured, say by putting together a consortium, with relatively little effort and cost. The real question, and the bigger difficulty, is whether the competing firms are able to find middle ground that allows them to enter into a coopetition agreement.

Think about today’s major high-tech firms.

Most of them are run by strong CEOs or founders that relish being bold and love smashing their competition. They often drive their firm to have a kind of intense “hatred” for the competition and want their firm to crush it. Within a firm, there is often a cultural milieu that holds their firm to be far superior and the competition unquestionably inferior. Your firm is a winner; the competing firm is a loser. That being said, they don’t want you to let down your guard: though the other firm is an alleged loser, it can pop up at any moment and be on the attack, so you need to stay vigilant. To some degree, there’s a begrudging respect for the competition, paradoxically mixed with disdain for it.

These strong personalities will generally tend to keep the competitive juices going and not permit the possibility of a coopetition option. On the other hand, even these strong personalities can be motivated to consider the coopetition approach, if the circumstances or the deal looks attractive enough. With a desire to get bigger and stronger, if it seems like a coopetition could get you there, the most egocentric of leaders is willing to give the matter some thought. Of course, it’s got to be incredibly compelling, but at least it is worthy of consideration and not out of hand to float the idea.

What could be compelling?

Here’s a number for you: $7 trillion.

Allow me to explain.

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. We do so because it’s going to be a gargantuan market, and because it’s exciting to be creating something that’s on par with a moonshot.

See my article about how making AI self-driving cars is like a moonshot:

See my article that provides a framework about AI self-driving cars:

Total AI Self-Driving Car Market Estimated at $7 Trillion

Suppose you were the head of a car maker, or the head of a high-tech firm that is making or wants to make tech for cars, and I told you that the potential market for AI self-driving cars is estimated at $7 trillion by the year 2050 (as predicted in Fortune magazine).

That’s right, I said $7 trillion. It’s a lot of money. It’s a boatload, and more, of money. The odds are that you would want to do whatever you could to get a piece of that action. Even a small slice, let’s say just a few percent, would make your firm huge.

Furthermore, consider things from the other side of that coin. Suppose you don’t get a piece of that pie. Whatever else you are doing is likely to become crumbs. If you are making conventional cars, the odds are that few will want to buy them anymore. Some AI self-driving car pundits are even suggesting that conventional cars will be outlawed by 2050. The logic is that if you have conventional cars being driven by humans on our roadways in the 2050s, it will muck up the potential nirvana of having all AI self-driving cars, which presumably will be able to work in unison and thus get us to the vaunted zero-fatalities goal.

For my article that debunks the zero fatalities goal, see:

If you are a high-tech firm and you’ve not gotten into the AI self-driving car realm, your fear is that you’ll also miss out on the $7 trillion prize. Suppose your high-tech competitor got into AI self-driving cars early on and became the standard, kind of like the fight between VHS and Betamax. Maybe it’s wisest to get into things early and become the standard.

Or, alternatively, maybe the early arrivers will waste a lot of money trying to figure out what to do, so instead of falling into that trap, you wait on the periphery, avoiding the drain of resources, and then jump in once the others have flailed around. Many in Silicon Valley seem to believe that you have to be the first into a new realm. This is actually a misconception, since many of the most prominent firms in many areas weren’t there first; they came along somewhat after others had poked and tried, and, on the heels of those true first attempts, stepped in and became household names.

Let’s return to the notion of coopetition. I assume we can agree that generally the auto makers aren’t very likely to want to be cooperative with each other and usually consider themselves head-on competitors. I realize there have been exceptions, such as the deal that PSA Peugeot Citroen and Toyota made to produce the Peugeot 107 and the Toyota Aygo, but such arrangements are somewhat sparse. Likewise, the high-tech firms tend to strive towards being competitive with each other, rather than cooperative. Again, there are exceptions, such as a willingness to serve on groups that are putting together standards and protocols for various architectural and interface aspects (think of the World Wide Web Consortium, W3C, as an example).

We’ve certainly already seen that auto makers and high-tech firms are willing to team-up for the AI self-driving cars realm.

In that sense, it’s kind of akin to the Wintel type of arrangement. I don’t think we’d infer they are true coopetition arrangements, since they weren’t especially competing to begin with. Google’s Waymo has teamed up with Chrysler to outfit the Pacifica minivans with AI self-driving car aspects. Those two firms weren’t especially competitors. I realize you could assert that Google could get into the car business and be an auto maker if it wanted to, which is quite the case, and it could buy its way in or even start something from scratch. You could also assert that Chrysler is doing its own work on high-tech aspects for AI self-driving cars and in that manner might be competing with Waymo. It just doesn’t quite add up, though, to them being true competitors per se, at least not right now.

So, let’s put to the side the myriad of auto maker and high-tech firm cooperatives underway and say that we aren’t going to label those as coopetitions. Again, I realize you can argue the point and might say that even if they aren’t competitors today, they could become competitors a decade from now. Yes, I get that. Just go along with me on this for now and we can keep in mind the future possibilities too.

Consider these thought-provoking questions:

  • Could we get the auto makers to come together into a coopetition arrangement to establish the basis for AI self-driving cars?
  • Could we get the high-tech firms to come together into a coopetition arrangement to establish the basis for AI self-driving cars?
  • Could we get the auto makers and tech firms that are already in bed with each other to altogether come together to enter into a coopetition arrangement?

I get asked these questions during a number of my industry talks. There are some that believe the goal of achieving AI self-driving cars is so crucial for society, so important for the benefit of mankind, that it would be best if all of these firms could come together, shake hands, and forge the basis for AI self-driving cars.

For my article about idealists in AI self-driving cars, see:

Why would these firms be willing to do this? Shouldn’t they instead want to “win” and become the standard for AI self-driving cars? The tempting $7 trillion is a pretty alluring pot of gold. It seems premature to already throw in the towel and allow other firms to grab a piece of the pie. Maybe your efforts will knock them out of the picture. You’ll have the whole kit and caboodle yourself.

Those proposing a coopetition notion for AI self-driving cars are worried that the rather “isolated” attempts by each of the auto makers and the tech firms are either going to lead to failure in terms of true AI self-driving cars, or stretch out for a much longer time than needed. Suppose you could have true AI self-driving cars by the year 2030 if you did a coopetition deal, versus not until 2050 or 2060 otherwise. That would mean that for perhaps 20 or 30 years we could have had true AI self-driving cars, to the benefit of us all, and yet we let it slip away by being “selfish” and allowing the AI self-driving car makers to duke it out.

For selfishness and AI self-driving cars, see my article:

You’ve likely seen science fiction movies about a giant meteor that is going to strike Earth and destroy all that we have, or an alien force from Mars that is heading to Earth and likely to enslave us all. In those cases, there has been a larger foe to contend with. As such, it got all of the countries of the world to set aside their differences and band together to try to defeat the larger foe. I’m not saying that would happen in real life, and perhaps instead everyone would tear each other apart, but anyway, let’s go with the happy-face scenario and say that when faced with tough times, we could get together those that otherwise despise each other or see each other as enemies, and they would become cooperative.

That’s what some want to have happen in the AI self-driving cars realm. The bigger foe is the number of annual fatalities due to car accidents. The bigger foe also includes the lack of democratization of mobility, which it is hoped AI self-driving cars will remedy by bringing forth greater democratization. The bigger foe is the need to increase mobility for those that aren’t able to be mobile. In other words, given the basket of benefits that AI self-driving cars offer, and the basket of woes they would overturn, the belief is that the auto makers and tech firms should band together into a coopetition.

Zero-Sum Versus Coopetition in Game Theory

Game theory comes into play in coopetition.

If you believe in a zero-sum game, whereby the pie is just one size and those that get a bigger piece of the pie are doing so at the loss of others that will get a smaller piece of the pie, the win-lose perspective makes it hard to consider participating in a coopetition. On the other hand, if it could be a win-win possibility, whereby the pie can be made bigger, and thus the participants each get sizable pieces of pie, it makes being in the coopetition seemingly more sensible.
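To make the zero-sum versus win-win framing concrete, here’s a toy sketch in Python. The payoff numbers are entirely hypothetical and exist only to illustrate the structure: in the zero-sum view every outcome splits the same fixed $7 trillion pie, while in the win-win view mutual cooperation grows the pie.

```python
# Hypothetical payoffs (team A share, team B share) of the market, in
# $ trillions, for two rival teams each choosing "cooperate" or "compete".

# Zero-sum view: the pie is fixed at $7T; one side's gain is the other's loss.
zero_sum = {
    ("compete", "compete"): (3.5, 3.5),
    ("compete", "cooperate"): (5.0, 2.0),
    ("cooperate", "compete"): (2.0, 5.0),
    ("cooperate", "cooperate"): (3.5, 3.5),
}

# Win-win view: cooperating grows the pie (pooled R&D, faster arrival),
# so mutual cooperation pays both sides more than mutual competition.
win_win = {
    ("compete", "compete"): (2.5, 2.5),      # duplicated effort shrinks the pie
    ("compete", "cooperate"): (4.0, 1.5),
    ("cooperate", "compete"): (1.5, 4.0),
    ("cooperate", "cooperate"): (4.5, 4.5),  # coopetition grows the pie
}

def total_pie(payoffs, moves):
    # Combined market size realized under a given pair of moves.
    a, b = payoffs[moves]
    return a + b

# Under zero-sum, every outcome splits the same fixed $7T pie.
assert all(abs(total_pie(zero_sum, m) - 7.0) < 1e-9 for m in zero_sum)

# Under win-win, mutual cooperation yields a strictly bigger pie.
print(total_pie(win_win, ("cooperate", "cooperate")))  # 9.0
print(total_pie(win_win, ("compete", "compete")))      # 5.0
```

The point is that a leader who models the market as zero-sum has little reason to join a coopetition, whereas a leader who believes cooperation grows the pie can see mutual cooperation as the best joint outcome.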

How would things fare in the AI self-driving cars realm? Suppose auto maker X has teamed up with high-tech firm Y; they are the XY team, and they are frantically trying to be the first with a true AI self-driving car. Meanwhile, we’ve got auto maker Q and its high-tech partner firm Z, and the QZ team is also frantically trying to put together a true AI self-driving car.

Would XY be willing to get into a coopetition with QZ, and would QZ want to get into a coopetition with XY?

If XY believes they need no help and will be able to achieve an AI self-driving car and do so on a timely basis and possibly beat the competition, it seems unlikely they would perceive value in doing the coopetition. You can say the same about QZ, namely, if they think they are going to be the winner, there’s little incentive to get into the coopetition.

Some would argue that they could potentially shave the costs of trying to achieve an AI self-driving car by joining together. Pool resources. Do R&D together. They could possibly do some kind of technology transfer between each other, with one having gotten more advanced in some area than the other, and thus they trade on the things each has gotten farthest along on. There’s a steep learning curve on the latest in AI, and so XY and QZ could perhaps boost each other up that learning curve. It seems like the benefits of being in a coopetition are convincing.

And, it is already the case that these auto makers and tech firms are eyeing each other. They each are intently desirous of knowing how far along the other is. They are hiring away key people from each other. Some would even say there is industrial espionage underway. Plus, in some cases, there are AI self-driving car developers that appear to have stepped over the line and stolen secrets about AI self-driving cars.

See my article about the stealing of secrets of AI self-driving cars:

This coopetition is not so easy to arrange, let alone to even consider. You are the CEO of the auto maker X, which has already forged a relationship with the high-tech firm Y. The marketplace perceives that you are doing the right thing and moving forward with AI self-driving cars. This is a crucial perception for any auto maker, since we’ve already seen that the auto makers will get drummed by the marketplace, such as their shares dropping, if they don’t seem to be committed to achieving an AI self-driving car. It’s become a key determiner for the auto maker and its leadership.

The marketplace figures that your firm, you the auto maker, will be able to achieve AI self-driving cars and that consumers will flock to your cars. Consumers will be delighted that you have AI self-driving cars. The other auto makers will fall far behind in terms of sales as everyone switches over to you. In light of that expectation, it would be somewhat risky to come out and say that you’ve decided to do a coopetition with your major competitors.

I’d bet that there would be a stock drop as the marketplace reacted to this approach. If all the auto makers were in the coopetition, I suppose you could say that the money couldn’t flow anywhere else anyway.

On the other hand, if only some of the auto makers were in the coopetition, it would force the marketplace into making a bet. You might put your money into the auto makers that are in the coopetition, under the belief they will succeed first, or you might put your money into the other auto makers that are outside the coopetition, under the belief they will win and win bigger because they aren’t having to share the pie.

Speaking of which, what would be the arrangement for the coopetition? Would all of the members participating have equal use of the AI self-driving car technologies developed? Would they be in the coopetition forever or only until a true AI self-driving car was achieved, or until some other time or ending state? Could they take whatever they got from the coopetition and use it in whatever they wanted, or would there be restrictions? And so on.

I’d bet that the coopetition would have a lot of tension. There is always bound to be professional differences of opinion. A member of the coopetition might believe that LIDAR is essential to achieving a true AI self-driving car, while some other member says they don’t believe in LIDAR and see it as a false hope and a waste of time. How would the coopetition deal with this?

For other aspects about differences in opinions about AI self-driving car designs, see my article:

Also, see my article about egocentric designs:

Normally, a coopetition is likely to be formulated when the competitors are willing to find a common means to contend with something that is relatively non-strategic to their core business. If you believe that AI self-driving cars are the future of the automobile, it’s hard to see that it wouldn’t be considered strategic to the core business. Indeed, even though today we don’t necessarily think of AI self-driving cars as a strategic core per se, because it’s still so early in the life cycle, anyone with a bit of vision can see that soon enough it will be.

If the auto makers did get together in a coopetition, and they all ended up with the same AI self-driving car technology, how else would they differentiate themselves in the marketplace? I realize you can say that even today the auto makers are pretty much the same in the sense that they each offer a car that has an engine and a transmission, etc. The “technology,” you might say, is about the same, and yet they do seem to differentiate themselves. Often, the differentiation is more about the style and looks of the car, rather than the tech side of things.

For how auto makers might be marketing AI self-driving cars in the future, see my article:

For those that believe that the AI part of the self-driving car will end up being the same for cars of the future, and it won’t be a differentiator to the marketplace, this admittedly makes the case for banding into a coopetition on the high-tech stuff. If the auto makers believe that the AI will be a commodity item, why not get into a coopetition, figure this arcane high-tech AI stuff out, and be done with it? No sense in fighting over something that is going to be generic across the board anyway.

At this time, it appears that the auto makers believe they can reach a higher value by creating their own AI self-driving car, doing so in conjunction with a particular high-tech firm that they’ve chosen, rather than doing so via a coopetition. Some have wondered if we’ll see a high-tech firm that opts to build its own car, maybe from scratch, but so far that doesn’t seem to be the case (in spite of the rumors about Apple, for example). There are some firms that are developing both the car and the high-tech themselves, such as Tesla, and see no need to band with another firm, as yet.

Right now, the forces appear to be swayed toward not doing a coopetition. Things could change. Suppose no one is able to achieve a true AI self-driving car? It could be that the pressures become large enough (the bigger foe) that the auto makers and tech firms consider the coopetition notion. Or maybe the government decides to step in and forces some kind of coopetition, doing so under the belief that it is a societal matter and regulatory guidance is needed to get us to true AI self-driving cars. Or maybe indeed aliens from Mars start to head here and we realize that if we just had AI self-driving cars we’d be able to fend them off.

For my piece about conspiracy theories and AI self-driving cars, see:

There’s the old line about if you can’t beat them, join them. For the moment, it’s assumed that the ability to beat them outweighs the join-them alternative. The year 2050 is still off in the future and anything might happen on the path to that $7 trillion.

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.

A Look Inside Facebook’s AI Machine


By Steven Levy, Wired

When asked to head Facebook’s Applied Machine Learning group — to supercharge the world’s biggest social network with an AI makeover — Joaquin Quiñonero Candela hesitated. It was not that the Spanish-born scientist, a self-described “machine learning (ML) person,” hadn’t already witnessed how AI could help Facebook. Since joining the company in 2012, he had overseen a transformation of the company’s ad operation, using an ML approach to make sponsored posts more relevant and effective. Significantly, he did this in a way that empowered engineers in his group to use AI even if they weren’t trained to do so, making the ad division richer overall in machine learning skills. But he wasn’t sure the same magic would take hold in the larger arena of Facebook, where billions of people-to-people connections depend on fuzzier values than the hard data that measures ads. “I wanted to be convinced that there was going to be value in it,” he says of the promotion.

Despite his doubts, Candela took the post. And now, after barely two years, his hesitation seems almost absurd.

How absurd? Last month, Candela addressed an audience of engineers at a New York City conference. “I’m going to make a strong statement,” he warned them. “Facebook today cannot exist without AI. Every time you use Facebook or Instagram or Messenger, you may not realize it, but your experiences are being powered by AI.”

Last November I went to Facebook’s mammoth headquarters in Menlo Park to interview Candela and some of his team, so that I could see how AI suddenly became Facebook’s oxygen. To date, much of the attention around Facebook’s presence in the field has been focused on its world-class Facebook Artificial Intelligence Research group (FAIR), led by renowned neural net expert Yann LeCun. FAIR, along with competitors at Google, Microsoft, Baidu, Amazon, and Apple (now that the secretive company is allowing its scientists to publish), is one of the preferred destinations for coveted grads of elite AI programs. It’s one of the top producers of breakthroughs in the brain-inspired digital neural networks behind recent improvements in the way computers see, hear, and even converse. But Candela’s Applied Machine Learning group (AML) is charged with integrating the research of FAIR and other outposts into Facebook’s actual products—and, perhaps more importantly, empowering all of the company’s engineers to integrate machine learning into their work.

Because Facebook can’t exist without AI, it needs all its engineers to build with it.

My visit occurs two days after the presidential election and one day after CEO Mark Zuckerberg blithely remarked that “it’s crazy” to think that Facebook’s circulation of fake news helped elect Donald Trump. The comment would turn out to be the equivalent of driving a fuel tanker into a growing fire of outrage over Facebook’s alleged complicity in the orgy of misinformation that plagued its News Feed in the last year. Though much of the controversy is beyond Candela’s pay grade, he knows that ultimately Facebook’s response to the fake news crisis will rely on machine learning efforts in which his own team will have a part.

But to the relief of the PR person sitting in on our interview, Candela wants to show me something else—a demo that embodies the work of his group. To my surprise, it’s something that performs a relatively frivolous trick: It redraws a photo or streams a video in the style of an art masterpiece by a distinctive painter. In fact, it’s reminiscent of the kind of digital stunt you’d see on Snapchat, and the idea of transmogrifying photos into Picasso’s cubism has already been accomplished.

“The technology behind this is called neural style transfer,” he explains. “It’s a big neural net that gets trained to repaint an original photograph using a particular style.” He pulls out his phone and snaps a photo. A tap and a swipe later, it turns into a recognizable offshoot of Van Gogh’s “The Starry Night.” More impressively, it can render a video in a given style as it streams. But what’s really different, he says, is something I can’t see: Facebook has built its neural net so it will work on the phone itself.
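For readers curious what “trained to repaint” means under the hood, the style side of neural style transfer is commonly captured with Gram matrices of convolutional feature maps. The following is a minimal NumPy sketch of that style-loss idea, not Facebook’s on-device implementation; the shapes and random features are stand-ins for real CNN activations.

```python
import numpy as np

def gram_matrix(features):
    # features: (channels, height*width) activation map from one CNN layer.
    # The Gram matrix records which channels co-activate, summarizing
    # "style" (textures, brushstrokes) independent of spatial layout.
    c, hw = features.shape
    return features @ features.T / (c * hw)

def style_loss(feats_a, feats_b):
    # Mean squared difference between the two Gram matrices.
    return float(np.mean((gram_matrix(feats_a) - gram_matrix(feats_b)) ** 2))

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 64))  # stand-in for real activations

assert gram_matrix(feats).shape == (8, 8)
assert style_loss(feats, feats) == 0.0  # identical style, zero loss
assert style_loss(feats, rng.standard_normal((8, 64))) > 0.0
```

In full style transfer, a loss of this form (summed over several layers and combined with a content loss) is minimized by gradient descent to produce the repainted image.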

Read the source article in Wired.

Entrepreneurs Taking on Bias in Artificial Intelligence


Whether it’s a navigation app such as Waze, a music recommendation service such as Pandora or a digital assistant such as Siri, odds are you’ve used artificial intelligence in your everyday life.

“Today 85 percent of Americans use AI every day,” says Tess Posner, CEO of AI4ALL.

AI has also been touted as the new must-have for business, for everything from customer service to marketing to IT. However, for all its usefulness, AI also has a dark side. In many cases, the algorithms are biased.

Some of the examples of bias are blatant, such as Google’s facial recognition tool tagging black faces as gorillas or an algorithm used by law enforcement to predict recidivism disproportionately flagging people of color. Others are more subtle. When Beauty.AI held an online contest judged by an algorithm, the vast majority of “winners” were light-skinned. Search Google for images of “unprofessional hair” and the results you see will mostly be pictures of black women (even searching for “man” or “woman” brings back images of mostly white individuals).

While more light has been shined on the problem recently, some feel it’s not an issue addressed enough in the broader tech community, let alone in research at universities or the government and law enforcement agencies that implement AI.

“Fundamentally, bias, if not addressed, becomes the Achilles’ heel that eventually kills artificial intelligence,” says Chad Steelberg, CEO of Veritone. “You can’t have machines where their perception and recommendation of the world is skewed in a way that makes its decision process a non-sequitur from action. From just a basic economic perspective and a belief that you want AI to be a powerful component to the future, you have to solve this problem.”

As artificial intelligence becomes ever more pervasive in our everyday lives, there is now a small but growing community of entrepreneurs, data scientists and researchers working to tackle the issue of bias in AI. I spoke to a few of them to learn more about the ongoing challenges and possible solutions.

Cathy O’Neil, founder of O’Neil Risk Consulting & Algorithmic Auditing

Solution: Algorithm auditing

Back in the early 2010s, Cathy O’Neil was working as a data scientist in advertising technology, building algorithms that determined what ads users saw as they surfed the web. The inputs for the algorithms included innocuous-seeming information like what search terms someone used or what kind of computer they owned.


However, O’Neil came to realize that she was actually creating demographic profiles of users. Although gender and race were not explicit inputs, O’Neil’s algorithms were discriminating against users of certain backgrounds, based on the other cues.

As O’Neil began talking to colleagues in other industries, she found this to be fairly standard practice. These biased algorithms weren’t just deciding what ads a user saw, but arguably more consequential decisions, such as who got hired or whether someone would be approved for a credit card. (These observations have since been studied and confirmed by O’Neil and others.)

What’s more, in some industries — for example, housing — if a human were to make decisions based on the specific set of criteria, it likely would be illegal due to anti-discrimination laws. But, because an algorithm was deciding, and gender and race were not explicitly the factors, it was assumed the decision was impartial.
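One common way auditors quantify this kind of proxy-driven disparity is the selection-rate ratio between groups, often checked against the informal “four-fifths” rule of thumb. Here is a minimal Python sketch on made-up data; the numbers and group labels are hypothetical, and this is not ORCAA’s actual methodology.

```python
# Hypothetical outcomes from a model that never sees the protected
# attribute directly but relied on a correlated proxy feature.
selected = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
group    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

def selection_rate(selected, group, g):
    # Fraction of members of group g that the model selected.
    rows = [s for s, gg in zip(selected, group) if gg == g]
    return sum(rows) / len(rows)

rate_a = selection_rate(selected, group, "A")  # 4/5 = 0.8
rate_b = selection_rate(selected, group, "B")  # 1/5 = 0.2
disparate_impact = rate_b / rate_a             # 0.25

# The "four-fifths" rule of thumb flags ratios below 0.8 for review.
print(disparate_impact < 0.8)  # True
```

A ratio this far below 0.8 would prompt an auditor to dig into which input features are acting as proxies for the protected attribute.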

“I had left the finance [world] because I wanted to do better than take advantage of a system just because I could,” O’Neil says. “I’d entered data science thinking that it was less like that. I realized it was just taking advantage in a similar way to the way finance had been doing it. Yet, people were still thinking that everything was great back in 2012. That they were making the world a better place.”

O’Neil walked away from her adtech job. She wrote a book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, about the perils of letting algorithms run the world, and started consulting.

Eventually, she settled on a niche: auditing algorithms.

“I have to admit that it wasn’t until maybe 2014 or 2015 that I realized this is also a business opportunity,” O’Neil says.

Right before the election in 2016, that realization led her to found O’Neil Risk Consulting & Algorithmic Auditing (ORCAA).

“I started it because I realized that even if people wanted to stop those unfair or discriminatory practices, they wouldn’t actually know how to do it,” O’Neil says. “I didn’t actually know. I didn’t have good advice to give them.” But she wanted to figure it out.

So, what does it mean to audit an algorithm?

Alibaba to Challenge Amazon with a Cloud Service Push in Europe

Alibaba Group Holding Ltd. is in talks with BT Group PLC about a cloud services partnership as the Chinese internet giant challenges Amazon.com Inc.’s dominance in Europe.

An agreement between Alibaba and the IT consulting unit of Britain’s former phone monopoly could be similar to Alibaba’s existing arrangement with Vodafone Group Plc in Germany, according to a person familiar with the matter, who asked not to be identified as the talks are private.

A BT spokeswoman confirmed by email that the U.K. telecom company is in talks with Alibaba Cloud and declined to give details. A spokesman for Alibaba declined to comment.

Started in 2009, Alibaba Cloud has expanded fast beyond China in a direct challenge to Amazon Web Services, the e-commerce giant’s division that dominates cloud computing. Alibaba Cloud is now the fourth-biggest global provider of cloud infrastructure and related services, behind Amazon, Microsoft Corp. and Alphabet Inc.’s Google, according to a report last month by Synergy Research Group.

Europe has become key to Alibaba Cloud’s success outside China, with prospects in the U.S. made murky by President Donald Trump’s America First agenda. Alibaba has pulled back in the U.S. just as tensions between America and China have escalated under Trump.

Alibaba started the German partnership with Vodafone in 2016. The Hangzhou, China-based company put its first European data center in Frankfurt, allowing Vodafone to resell Alibaba Cloud services such as data storage and analytics. Last week, Alibaba Cloud moved into France, agreeing to work with transport and communications company Bollore SA in cloud computing, big data and artificial intelligence.

Telecom dilemma

BT’s talks with Alibaba underscore a dilemma for the telecom industry. As big tech companies and consulting firms muscle in on the telecoms’ business of installing and maintaining IT networks for large corporations, the carriers must choose whether to resist the newcomers, or accept their help and decide which to ally with.

BT Global Services has struck up partnerships with Amazon, Microsoft and Cisco Systems Inc., while Spain’s Telefonica SA works with Amazon. In Germany, while Deutsche Telekom AG’s T-Systems has partners including China’s Huawei Technologies Co. and Cisco, it has structured its public cloud offering as an alternative to U.S. giants Amazon and Google—touting its ability to keep data within Germany where there are strict data-protection laws, 100% out of reach of U.S. authorities.

A deal with Alibaba could bolster BT’s cloud computing and big data skills as clients shift more of their IT capacity offsite to cut costs.

BT is undertaking a digital overhaul of its Global Services business in a restructuring involving thousands of job cuts after revenue at the division fell 9% last year. The poor performance of Global Services and the ouster last month of BT CEO Gavin Patterson have fueled speculation among some analysts that BT may sell the division. Still, the unit is seen by some investors as critical for BT’s relationships with multinational clients.

Read the source article in Digital Commerce 360.

Key Considerations in AI Vendor Selection, Deployment

The world of artificial intelligence is frightening. No, not the danger of an army of AI-powered robots taking over the world (though that is a bit concerning). The real fear is that the wrong vendor is chosen or the rollout handled poorly. After all, AI is complex, not fully mature, in some cases poorly understood, and involves great changes to how an organization thinks and operates.

Much of the complexity stems from the fact that AI has no single meaning or definition. It is a combination of several elements (machine learning, natural language processing, computer vision and others). This means that use cases tend to be unique and complex. Companies not big enough to hire expertise rely deeply on consultants and vendors, likely more than in more familiar areas. AI is not for the corporate faint of heart.

So how should organizations approach AI?

The first step in any corporate initiative is to fully understand what is on the table. It seems almost needless to say that organizations must educate themselves about AI before taking the plunge. But, in this case, it’s so important that it is worth stating the obvious. They must assess what data they have to feed into the system and if remedial work is necessary to enable that data to be used.

Tractica Research Director Aditya Kaul suggests that organizations understand the difference between the AI platforms that process raw data to reach conclusions and perception-driven approaches that focus on the intricacies and nuances of language and vision. The next step is to experiment on a wide variety of use cases and settle on those that bring the greatest value to the organization. It is important to understand the metrics that will be used to gauge success, such as increased productivity or reduced costs.

Moving Ahead with AI

At that point, they are set to move ahead aggressively. “Once companies have a good understanding of the AI technologies and use cases, they can go [choose] a third-party enterprise-grade AI platform and build a robust framework around data and model warehousing that allows for efficient production-grade AI that can be swiftly deployed into client-facing products and services,” Kaul wrote to IT Business Edge in response to emailed questions.

This suggests deep changes, which makes choosing a vendor an even more vital decision than for better-understood, limited technology deployments. The stakes are high. It is a nascent field in which some companies no doubt are selling vaporware and some perhaps haven’t figured out their own value proposition. It’s best to be very careful.

“If your AI vendor won’t promise you real ROI, it’s because they can’t deliver,” wrote Ben Lamm, the co-founder and CEO of Hypergiant. “If a vendor is trying to skirt around a clear statement of value, you know they won’t serve you well in the long run.”

Organizations should do the same blocking and tackling that is done for any other significant investment. Credentials should be checked, deep conversations conducted and a high comfort level achieved.

“One of the most important things enterprises can look for in an AI vendor is understanding the success of their customer base,” wrote Paresh Kharya, the director of Accelerated Computing for NVIDIA. “Don’t be afraid to ask which of their customers are successful and how their new AI solution has benefited their business. Asking this question will help you gauge the tangible business value the vendor is promoting.”

Organizations can take steps to increase the odds that they will choose the right vendor. Dave Damer, the founder and CEO of Testfire Labs, offers three tips. The first two focus on precisely what the vendor will be providing. Companies should ask if the prospective vendor delivers packaged solutions, custom solutions or both, and if it has the necessary expertise in house or must outsource. Finally, the organization must understand what will happen after the deployment is done. “A lack of employee training or further customization of models can lead to unusable and/or ineffective technology,” Damer wrote.

Best of Breed or Single Vendor?

A longstanding debate in telecom and IT circles is whether platforms are better coming from a single vendor or from “best of breed” arrangements in which the top elements are cherry-picked and strung together. The single-vendor platforms presumably are better integrated and have deeper, easier-to-use management functions, while the best-of-breed approach potentially offers better performance.

The pendulum is swinging toward multiple vendors, at least according to Tracy Malingo, the senior vice president of Product Strategy at Verint, which bought AI firm Next IT last December. “This is actually one of the biggest shifts that we’ve seen in AI,” Malingo wrote. “As major players have sought to lock in ecosystems and as companies have evolved in their understanding and needs for AI, we’ve seen the market begin to shift toward best of breed over single-source vendors. That trend will continue in the future.”

The bottom line is that AI is a slippery slope: That slope can arc toward more efficient operations and a healthier bottom line – or toward confusion, failed implementations and all the headaches that those results bring on. “Organizations should have a clear understanding of what business issues they’re trying to solve with AI,” wrote Guy Yehiav, the CEO of Profitect. “How will the technology they’re evaluating make an impact to both top and bottom line and what is the approach to roll it out across the business? If analytics and AI are done well, the impact should be quick and results tangible.”

Read the source article at IT Business Edge.

The Banking Industry Has a $1 Trillion Opportunity with AI

There are about 7.5 billion people on the planet, give or take a few. But that number pales in comparison to the number of connected devices worldwide. According to Autonomous, a financial research firm, people are outnumbered three-to-one by their smart computing devices — an estimated 22 billion in total. And the number of smart devices will continue to explode, with venture capital firms pouring $10 billion annually into AI-powered companies focusing on digitally-connected devices.

For financial institutions, their slice of this massive AI pie represents upwards of $1 trillion in projected cost savings. By 2030, traditional financial institutions can shave 22% in costs, says Autonomous in an 84-page report on AI in the financial industry. Here’s how they break down those cost savings:

  • Front Office – $490 billion in savings. Almost half of this ($199 billion) will come from reductions in the scale of retail branch networks, security, tellers, cashiers and other distribution staff.
  • Middle Office – $350 billion in savings. Just simply applying AI to compliance, KYC/AML, authentication and other forms of data processing will save banks and credit unions a staggering $217 billion.
  • Back Office – $200 billion in savings. $31 billion of this will be attributed to underwriting and collections systems.

These numbers align with what other analysts and research firms have forecast. Bain & Company has pegged the savings at around $1.1 trillion, while Accenture estimates that AI will add $1.2 trillion in value to the financial industry by 2035.

In the U.S. banking sector, 1.2 million employees have already been exposed to AI in the front-, middle- and back office, with almost three-quarters of workers in the front office using AI (even if they don’t know it). If you include the investment and insurance industry, there are 2.5 million U.S. financial services workers whose jobs are already being directly impacted by AI.

Use Cases for AI

Autonomous sees three primary ways in which artificial intelligence will transform the banking industry:

  1. AI technology companies such as Google and Amazon will add financial services skills to their smart home assistants, then leverage this data-plus-interface advantage via relationships with traditional banking providers.
  2. Technology and finance firms merge/collaborate to build full psychographic profiles of consumers across social, commercial, personal and financial data (e.g., like Tencent coupling with Ant Financial in China).
  3. The crypto community builds decentralized, autonomous organizations using open source components with the goal of shifting power back to consumers.

AI-enabled devices are already using vision and sound to gather information even more accurately than humans, and the software continues to get more human-like.

“Not only can software understand the contents of inputs and categorize it at scale,” Autonomous explains, “it has exhibited the ability to generate new examples of those inputs. Artists are now as endangered as lawyers and bankers.”

But AI still has a way to go before a computer will become the next van Gogh or Pollock. Today’s AI is “narrow,” meaning that the machines are built to react to specific events and lack general reasoning capability. That said, there are plenty of practical applications for AI that banks and credit unions are taking advantage of today.

The most mature use cases are in chatbots in the front office, antifraud and risk and KYC/AML in the middle office, and credit underwriting in the back office.

Financial institutions can use AI to power conversational interfaces that integrate financial data and account actions with algorithm-powered automatic “agents” that can hold life-like conversations with consumers.

Bank of America has announced that it is aggressively rolling out Erica, its virtual assistant, to all of its 25 million mobile banking consumers. Using voice commands, texts or touch, BofA customers can instruct Erica to give account balances, transfer money between accounts, send money with Zelle, and schedule meetings with real representatives at financial centers.

Biometrics and workflow and compliance automation are other strong use cases for AI. To improve the consumer experience, AI can allow a bank or credit union to authenticate a mobile payment using a fingerprint or replace a numerical passcode with voice recognition.

In the middle office, AI can perform real-time regulatory checks for KYC/AML on all transactions rather than rely on more traditional methods of using batch processing to analyze only samples of consumers.

Perhaps the most promising application, says Autonomous, is using AI to incorporate social media, free text fields and even machine vision into the development of lending, investment and insurance products.

Read the source article at The Financial Brand.

Four Suggestions for Using a Kaggle Competition to Test AI in Business

According to a McKinsey report, only 20% of companies consider themselves adopters of AI technology while 41% remain uncertain about the benefits that AI provides. Considering the cost of implementing AI and the organizational challenges that come with it, it’s no surprise that smart companies seek ways to test the solutions before implementing them and get a sneak peek into the AI world without making a leap of faith.

That’s why more and more organizations are turning to data science competition platforms like Kaggle, CrowdAI and DrivenData. Making a data science-related challenge public and inviting the community to tackle it comes with many benefits:

  • Low initial cost – the company needs only to provide data scientists with data, pay the entrance fee and fund the award. There are no further costs.
  • Validating results – participants provide the company with verifiable, working solutions.
  • Establishing contacts – A lot of companies and professionals take part in Kaggle competitions. The ones who tackled the challenge may be potential vendors for your company.
  • Brainstorming the solution – data science is a creative field, and there’s often more than one way to solve a problem. Sponsoring a competition means you’re sponsoring a brainstorming session with thousands of professional and passionate data scientists, including the best of the best.
  • No further investment or involvement – the company gets immediate feedback. If an AI solution is deemed efficacious, the company can move forward with it; otherwise, its involvement ends with funding the award, avoiding further costs.

While numerous organizations – big e-commerce websites and state administrations among them – sponsor competitions and leverage the power of the data science community, running a competition is not at all simple. An excellent example is the competition the US National Oceanic and Atmospheric Administration sponsored when it needed a solution that would recognize and differentiate individual right whales from the herd. Ultimately, what proved the most efficacious was the principle of facial recognition, but applied to the topsides of the whales, which were obscured by weather, water and the distance between the photographer above and the whales far below. To check if this was even possible, and how accurate a solution might be, the organization ran a Kaggle competition, which we won.

Having won several such competitions, we have encountered both brilliant and not-so-brilliant ones. That’s why we decided to prepare a guide for every organization interested in testing potential AI solutions in Kaggle, CrowdAI or DrivenData competitions.

Recommendation 1. Deliver participants high-quality data

The quality of your data is crucial to attaining a meaningful outcome. Without the data, even the best machine learning model is useless. This also applies to data science competitions: without quality training data, the participants will not be able to build a working model. This is a great challenge when it comes to medical data, where obtaining enough information is problematic for both legal and practical reasons.

  • Scenario: A farming company wants to build a model to identify soil type from photos and probing results. Although there are six classes of farming soil, the company is able to deliver sample data for only four. Considering that, running the competition would make no sense – the machine learning model wouldn’t be able to recognize all the soil types.

Advice: Ensure your data is complete, clear and representative before launching the competition.

Recommendation 2. Build clear and descriptive rules

Competitions are put together to achieve goals, so the model has to produce a useful outcome. And “useful” is the key word here. Because those participating in the competition are not professionals in the field they’re producing a solution for, the rules need to be based strictly on the case and the model’s further use. Including even basic guidelines will help them address the challenge properly. Lacking these foundations, the outcome may be technically right but totally useless.

  • Scenario: Mapping the distribution of children below the age of 7 in a city will be used to optimize social, educational and healthcare policies. To make the mapping work, it is crucial to include additional guidelines in the rules: the areas mapped need to be bordered by streets, rivers, rail lines, district boundaries and other topographical features of the city. Lacking these, many of the models may map the distribution by cutting the city into 10-meter-wide, kilometer-long strips, where the segmentation is done but the outcome is totally useless due to the lack of proper guidelines in the competition rules.

Advice: Think about usage and include the respective guidelines within the rules of the competition to make it highly goal-oriented and common sense driven.

Read the source article at

Ensemble Machine Learning for AI Self-Driving Cars

By Lance Eliot, the AI Trends Insider

How do you learn something?

That’s the same question that we need to ask when trying to achieve Machine Learning (ML). In what way can we undertake “learning” for a computer and seek to “teach” the system to do things of an intelligent nature? That’s a holy grail for those in AI who are aiming to avoid having to program their way into intelligent behavior. Instead, the notion is to be able to somehow get a computer to learn what to do and not need to explicitly write out every step or knowledge aspect required.

Allow me a moment to share with you a story about the nature of learning.

Earlier in my career, I started out as a professor and was excited to teach classes for both undergraduate students and graduate-level students. Those first few lectures were my chance to aid those students in learning about computer science and AI. Before each lecture I spent a lot of time preparing my lecture notes and was ready to fill the classroom whiteboard with all the key principles they’d need to know. Sure enough, I’d stride into the classroom and start writing on the board, and kept doing so until the bell rang to signal that the class session was finished.

After doing this for about a week or two, a student came to my office hours and asked if there was a textbook they could use to study from. I was taken aback, since I had purposely not chosen a textbook in order to save the students money. I figured that my copious notes on the board would be better than some stodgy textbook and would spare the students from having to spend a fortune on costly books. The student explained that though they welcomed my approach, they were the type of person who found it easier to learn by reading a book. Trying not to offend me, the student gingerly inquired as to whether my lecture notes could be augmented by a textbook.

I considered this suggestion and sure enough found a textbook that I thought would be pretty good to recommend, and at the next session of the class mentioned it to the students, indicating that it was optional and not mandatory for the class.

While walking across the campus after a class session, another student came up to me and asked if there were any videos of my lectures. I was suspicious that the student wanted to skip coming to lecture and figured they could just watch a video instead, but this student sincerely convinced me that she found that watching a video allowed her to start and stop the lecture while trying to study the material after class sessions. She said that my fast pace during class didn’t allow time for her to really soak in the points and that by having a video she would be able to do so at a measured pace on her own time.

I considered this suggestion and provided to the class links to some videos that were pertinent to the lectures that I was giving.

Yet another student came to see me about another facet of my classes. For the undergrad lectures, I spoke the entire time and didn’t allow for any classroom discussion or interaction. This seemed sensible because the classes were large lecture halls that had hundreds of students attending. I figured it would not be feasible to carry on a Socratic dialogue similar to what I was doing in the graduate-level courses, where I had maybe 15-20 students per class. I had even been told by some of the senior faculty that trying to engage undergrads in discussion was a waste of time anyway since those newbie students were neophytes and it would be ineffective to allow any kind of Q&A with them.

Well, an undergrad student came to see me and asked if I was ever going to allow Q&A during my lectures. When I started to discuss this with the student, I inquired as to what kinds of questions he was thinking of asking. Turns out that we had a very vigorous back-and-forth on some meaty aspects of AI and it made me realize that there were perhaps students in the lecture hall that could indeed engage in a hearty dialogue during class. At my next lecture, I opted to stop every twenty minutes and gauge the reaction from the students and see if I could get a brief and useful interaction going with them. It worked, and I noticed that many of the students became much more interested in the lectures by this added feature of allowing for Q&A (even for so-called “lowly” undergraduate students, which was how my fellow faculty seemed to think of them).

Why do I tell you this story about my initial days of being a professor?

I found out pretty quickly that using only one method or approach to learning is not necessarily very wise. My initial impetus to do fast paced all-spoken lectures was perhaps sufficient for some students, but not for all. Furthermore, even the students that were OK with that narrow singular approach were likely to tap into other means of learning if I was able to provide it. By augmenting my lectures with videos, with textbooks, and by allowing for in-classroom discussion, I was providing a multitude of means to learn.

You’ll be happy to know that I learned that learning is best done via offering multiple ways to learn. Allow the learner to select which approach best fits to them. When I say this, also keep in mind that the situation might determine which mode is best at that time. In other words, don’t assume that someone that prefers learning via in-person lecture is always going to find that to be the best learning method for them. They might switch to a preference for say video or textbook, depending upon the circumstance.

And, don’t assume that each learner will learn via only one method. Student A might find that using lectures and the textbook is their best fit. Student B might find lectures to be unsuitable for learning and prefer dialogue and videos. Each learner will have their own one-or-more learning approaches that work best for them, and this varies by the nature of the topic being learned.

I kept all of this in mind for the rest of my professorial days and always tried to provide multiple learning methods to the students, so they could choose the best fit for them.

Ensemble Learning Employs Multiple Methods, Approaches

A phrase sometimes used to refer to this notion of multiple learning methods is ensemble learning. When you consider the word “ensemble” you tend to think of multiples of something, such as multiple musicians in an orchestra or multiple actors in a play. They each have their own role, and yet they also combine together to create a whole.

Ensemble machine learning is the same kind of concept. Rather than using only one method or approach to “teach” a computer to do something, we might use multiple methods or approaches. These multiple methods or approaches are intended to ultimately work together so as to form a group. In other words, we don’t want the learning methods to be so disparate that they don’t end up working together. It’s like musicians that are supposed to play the same song together. The hope is that multiple learning methods will lead to a greater chance of having the learner learn, which in this case is the computer system as the learner.

At the Cybernetic AI Self-Driving Car Institute, we are using ensemble machine learning as part of our approach to developing AI for self-driving cars.

Allow me to further elaborate.

Suppose I was trying to get a computer system to learn some aspect of how to drive a car. One approach might be to use artificial neural networks (ANN). This is very popular and a relatively standardized way to “teach” the computer about certain driving task aspects. That’s just one approach though. I might also try to use genetic algorithms (GA). I might also use support vector machines (SVM). And so on. These could be done in an ensemble manner, meaning that I’m trying to “teach” the same thing but using multiple learning techniques to do so.
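In code, "teaching the same thing with multiple techniques" often amounts to giving every learner a common fit/predict interface and training them all on the same data. A minimal pure-Python sketch follows; the two toy learners and the one-dimensional sign data are illustrative stand-ins for the ANN, GA, and SVM techniques named above, which would normally come from an ML library:

```python
# Two toy learners sharing a fit/predict interface, trained on the same
# task — stand-ins for heavier techniques such as ANNs, GAs, or SVMs.

class NearestCentroid:
    """Classifies a point by the closest class centroid."""
    def fit(self, X, y):
        self.centroids = {}
        for label in set(y):
            pts = [x for x, lbl in zip(X, y) if lbl == label]
            self.centroids[label] = sum(pts) / len(pts)
        return self
    def predict(self, x):
        return min(self.centroids, key=lambda lbl: abs(x - self.centroids[lbl]))

class OneNearestNeighbor:
    """Classifies a point by its single closest training example."""
    def fit(self, X, y):
        self.X, self.y = X, y
        return self
    def predict(self, x):
        i = min(range(len(self.X)), key=lambda j: abs(x - self.X[j]))
        return self.y[i]

# The same training data is fed to every member of the ensemble.
X = [1.0, 1.5, 2.0, 8.0, 8.5, 9.0]
y = ["stop", "stop", "stop", "speed", "speed", "speed"]

ensemble = [NearestCentroid().fit(X, y), OneNearestNeighbor().fit(X, y)]
print([m.predict(2.2) for m in ensemble])  # → ['stop', 'stop']
```

Because both learners expose the same interface, the surrounding system can later compare them, pick one, or combine their votes without caring which technique is inside each.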

For the use of genetic algorithms in AI self-driving cars see my article:

For my article about support vector machines in AI self-driving cars see:

For my articles about machine learning for AI self-driving cars see:

Benchmarks and machine learning:

Federated machine learning:

Explanation-based machine learning:

Deep reinforcement learning:

Deep compression pruning in machine learning:

Simulations and machine learning:

Training data and machine learning:

Now you don’t normally just toss together an ensemble. When you put together a musical band, you probably would be astute to pick musicians that have particular musical skills and play particular musical instruments. You’d want them to end up being complementary with each other. Sure, some might be duplicative, such as having more than one guitar player, but that could be because one guitarist will be the lead guitar and the other perhaps the bass guitar player.

The same can be said for doing ensemble machine learning. You’ll want to select machine learning approaches or methods that seem to make sense when considered in their totality as a group of such machine learning approaches. What is the strength of each ML chosen for the ensemble? What is the weakness of each ML chosen? By having multiple learning methods, hopefully you’ll be able to either find the “best” one for the given learning circumstance at hand, or you might be able to combine them in a manner that offers a synergistic outcome beyond each of them performing individually.

So, you could select some N number of machine learning approaches, train them on some data, and then see which of them learned the best, based on some kind of metric. After training, you might feed the MLs new data and see which does the best job. For example, suppose I’m trying to train toward being able to discern street signs. So, I feed a bunch of pictures of street signs into each of the MLs in my ensemble. After they’ve each used their own respective learning approach, I then test them. I do so by feeding in new pictures of street signs and seeing which of them most consistently can identify a stop sign versus a speed limit sign.

See my article about street signs and AI self-driving cars:

Out of the N number of machine learning approaches that I selected for this street sign learning task, suppose that the SVM turns out to be the “best” based on my testing after the learning has occurred. I might then decide that for street sign interpretation I’m going to exclusively use SVM in my AI self-driving car system. This aspect of selecting a particular model out of a set of models is sometimes referred to as the “bucket of models” approach, wherein you have a bucket of models in the ensemble and you choose one of them. Your selection is based on a kind of “bake-off” as to which is the better choice.
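A bucket-of-models bake-off can be sketched in a few lines of Python. The "trained" candidate models here are trivial callables standing in for real SVM/GA systems, and the held-out test set is invented for illustration:

```python
# "Bucket of models": score each candidate on held-out data and keep
# whichever wins the bake-off. The candidates are trivial callables
# standing in for fully trained models (e.g., an SVM or a GA-derived rule).

def accuracy(model, X_test, y_test):
    """Fraction of held-out examples the model labels correctly."""
    hits = sum(1 for x, label in zip(X_test, y_test) if model(x) == label)
    return hits / len(y_test)

def pick_best(bucket, X_test, y_test):
    """Return the (name, model) pair with the highest held-out accuracy."""
    return max(bucket.items(), key=lambda kv: accuracy(kv[1], X_test, y_test))

# Hypothetical trained models: each maps a feature value to a sign label.
bucket = {
    "svm": lambda x: "stop" if x < 5 else "speed_limit",
    "ga":  lambda x: "stop" if x < 2 else "speed_limit",
}

X_test = [1.0, 3.0, 4.0, 7.0, 9.0]
y_test = ["stop", "stop", "stop", "speed_limit", "speed_limit"]

name, best = pick_best(bucket, X_test, y_test)
print(name, accuracy(best, X_test, y_test))  # → svm 1.0
```

The key design point is that the bake-off uses data the models never saw during training, so the winner is chosen on generalization rather than memorization.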

But, suppose that I discover that of the N machine learning approaches, sometimes the SVM is the “best” and meanwhile there are other times that the GA is better. I don’t necessarily need to confine myself to choosing only one of the learning methods for the system. What I might do is opt to use both SVM and GA, and be aware beforehand of when each is preferred to come into play. This is akin to having the two guitarists in my musical band; each has their own strengths and weaknesses, so if I’m thoughtful about how to arrange my band when they play a concert, I’ll put each of them into a part of the music that seems best for their capabilities. Maybe one of them starts the song, and the other ends the song. Or however arranging them seems most suitable to their capabilities.

Thus, we might choose N number of machine learning approaches for our ensemble, train them, and then decide that some subset Q are chosen to become part of the actual system we are putting together. Q might be 1, in that maybe there’s only one of the machine learning approaches that seemed appropriate to move forward with, or Q might be 2, or 3, and so on up to the number N. If we do select more than just one, the question then arises as to when and how to use the Q number of chosen machine learning approaches.

In some cases, you might use each separately, such as maybe machine learning approach Q1 is good at detecting stop signs, while Q2 is good at detecting speed limit signs. Therefore, you put Q1 and Q2 into the real system and when it is working you are going to rely upon Q1 for stop sign detection and Q2 for speed limit sign detection.

In other cases, you might decide to combine together the machine learning approaches that have been successful enough to get into the set Q. I might decide that whenever a street sign is being analyzed, I’ll see what Q1 has to indicate about it, and what Q2 has to indicate about it. If they both agree that it is a stop sign, I’ll be satisfied that it’s likely a stop sign, and especially if Q1 is very sure of it. If they both agree that it is a speed limit sign, and especially if Q2 is very sure of it, I’ll then be comfortable assuming that it is a speed limit sign.

Various Ways to Combine the Q Sets

There are various ways you might combine together the Q’s. You could consider them all equal in terms of their voting power, a simple majority vote; when each learner is also trained on its own bootstrap sample of the data, this is generally called “bagging,” or bootstrap aggregation. Or, you could consider them to be unequal in their voting power. In this case, we’re going with the idea that Q1 is better at stop sign detection, so I’ll add a weighting to its results: if its interpretation is a stop sign, I’ll give it a lot of weight, while if Q2 detects a stop sign I’ll give it a lower weighting because I already know beforehand that it’s not so good at stop sign detection.
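
A sketch of the equal-vote versus weighted-vote idea, using scikit-learn’s `VotingClassifier` on synthetic data (the learners chosen and the weights assigned here are illustrative assumptions):

```python
# Equal voting power versus unequal (weighted) voting power among
# a small set of learners; the data is a synthetic stand-in.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=1)

learners = [
    ("q1", SVC(probability=True, random_state=1)),  # assume Q1 is the stronger sign detector
    ("q2", DecisionTreeClassifier(random_state=1)),
    ("q3", LogisticRegression(max_iter=1000)),
]

# Equal voting power: a simple majority vote among the learners.
equal_vote = VotingClassifier(estimators=learners, voting="hard").fit(X, y)

# Unequal voting power: give Q1 twice the say of the others.
weighted_vote = VotingClassifier(
    estimators=learners, voting="soft", weights=[2, 1, 1]
).fit(X, y)

print(equal_vote.score(X, y), weighted_vote.score(X, y))
```

In practice you would set the weights based on how each learner performed during the bake-off, rather than picking them by hand as done here.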

These machine learning approaches that are chosen for the ensemble are often referred to as individual learners. You can have any N number of these individual learners and it all depends on what you are trying to achieve and how many machine learning approaches you want to consider for the matter at-hand. Some also refer to these individual learners as base learners. A base or individual learner can be whatever machine learning approach you know and are comfortable with, and that matches to the learning task at hand, and as mentioned earlier can be ANN, SVM, GA, decision trees, etc.

Some believe that to make the learning task fair, you should provide essentially the same training data to the machine learning approaches that you’ve chosen for the matter at-hand. Thus, I might select one sample of training data that I feed into each of the N machine learning approaches. I then see how each of those machine learning approaches did based on the sample data. For example, I select a thousand street sign images and feed them into my N machine learning approaches, of which in this case I’ve chosen three: ANN, SVM, and GA.

Or, instead, I might take a series of samples of the training data. Let’s refer to one such sample as S1, consisting of a thousand images randomly chosen from a population of 50,000 images, and feed the sample S1 into machine learning approach Q1. I might then select another sample of training data, let’s call it S2, consisting of another randomly selected set of a thousand images, and feed it into machine learning approach Q2. And so on for each of the N machine learning approaches that I’ve selected.
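
The per-learner sampling described above can be sketched with scikit-learn’s `resample` utility; the population size and sample sizes mirror the example in the text, though the features themselves are random stand-ins:

```python
# Drawing per-learner bootstrap samples S1, S2, S3 of a thousand items
# each from a pool standing in for 50,000 images.
import numpy as np
from sklearn.utils import resample

rng = np.random.RandomState(42)
X_pool = rng.rand(50000, 8)            # stand-in feature rows
y_pool = rng.randint(0, 2, size=50000)  # stand-in labels

samples = []
for i in range(3):  # one sample per learner: Q1, Q2, Q3
    X_s, y_s = resample(X_pool, y_pool, n_samples=1000, random_state=i)
    samples.append((X_s, y_s))

print(len(samples), samples[0][0].shape)
```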

I could then see how each of the machine learning approaches did on their respective sample data. I might then opt to keep all of the machine learning approaches for my actual system, or I might selectively choose which ones will go into my actual system. And, as mentioned earlier, if I have selected multiple machine learning approaches for the actual system then I’ll want to figure out how to possibly combine together their results.

You can further advance the ensemble learning technique by adding learning upon learning. Suppose I have a base set of individual learners. I might feed their results into a second level of machine learning approaches that act as meta-learners. In a sense, you can use the first level to do some initial screening and scanning, and then potentially have a second level that aims at further refining what the first level found. For example, suppose my first level identified that a street sign is a speed limit sign, but the first level isn’t capable of then determining what the speed limit numbers are. I might feed the results into a second level that is adept at ascertaining the numbers on the speed limit sign and can detect the actual speed limit as posted on the sign.
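
One common realization of this learning-upon-learning idea is stacking, available in scikit-learn as `StackingClassifier`; the sketch below uses synthetic data and an assumed choice of base learners and meta-learner:

```python
# Two-level ensemble sketch: base learners feed a meta-learner.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=12, random_state=2)

base_learners = [
    ("svm", SVC(random_state=2)),
    ("tree", DecisionTreeClassifier(random_state=2)),
]

# The meta-learner is trained on the base learners' cross-validated outputs.
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
stack.fit(X, y)
print(stack.score(X, y))
```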

The ensemble approach to machine learning allows for a lot of flexibility in how you undertake it. There’s no particular standardized way in which you are supposed to do ensemble machine learning. It’s an area still evolving as to what works best and how to most effectively and efficiently use it.

Some might be tempted to throw every machine learning approach into an ensemble under the blind hope that it will then showcase which is the best for your matter at-hand. This is not as easy as it seems. You need to know what each machine learning approach does, and there’s an effort involved in setting it up and giving it a fair chance. In essence, there are costs to undertaking this and you shouldn’t be using a scattergun-style way of doing so.

For any particular matter, there are going to be so-called weak learners and strong learners. Some of the machine learning approaches are very good in some situations and quite poor in others. You also need to be thinking about the generalizability of the machine learning approaches. You could be fooled when feeding sample data into the machine learning approaches that say one of them looks really good, but it turns out maybe it has overfitted to the sample data. This might not then do you much good once you start feeding new data into the mix.

Another aspect is the value of diversity. If you have no diversity, such as only one machine learning approach that you are using, there are likely to be situations wherein it isn’t as good as some other machine learning approach would be, and so you should consider having diversity. By having more than one machine learning approach in your mix, you are gaining diversity, which will hopefully pay off for varying circumstances. As with anything else, though, if you have too many machine learning approaches it can lead to muddled results and you might not be able to know which one to believe for a given result provided.

Keep in mind that any ensemble that you put together will require computational effort, in essence computing power, not only to do the training but, more importantly, when receiving new data and responding accordingly. Thus, if you opt to have a slew of machine learning approaches that are going to become part of your Q final set, and if you are expecting them to run in real-time on-board an AI self-driving car, this is going to be something you need to carefully assess. The amount of memory consumed and the processing power consumed might be prohibitive. There’s a big difference between using an ensemble for a research-oriented task, wherein you might not have any particular time constraints, versus using one in an AI self-driving car that has severe time constraints and also limits on the computational processing available.

For those of you familiar with Python, you might consider using the Python-oriented scikit-learn machine learning library to try out various ensemble machine learning techniques and get an understanding of how to use an ensemble learning approach.
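
For instance, a quick way to try bagging in scikit-learn is `BaggingClassifier`, which trains each member of the ensemble on its own bootstrap sample; the data and ensemble size below are arbitrary choices for illustration:

```python
# Bagging (bootstrap aggregation) with scikit-learn: ten base
# learners, each trained on its own bootstrap sample of the data.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=3)

# The default base learner is a decision tree; predictions are
# combined by aggregating the individual learners' votes.
bag = BaggingClassifier(n_estimators=10, random_state=3)
bag.fit(X, y)
print(bag.score(X, y))
```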

If we’re going to have true AI systems, and especially AI self-driving cars, the odds are that we’ll need to deploy multiple machine learning models. Trying to only program directly our way to full AI is unlikely to be feasible. As Benjamin Franklin famously said: “Tell me and I forget. Teach me and I remember. Involve me and I learn.” Using an ensemble learning approach is to-date a vital technique to get us toward that involve-me-and-learn goal. We might still need even better machine learning models, but the chances are that no matter what we discover for better MLs, we’ll end-up needing to combine them into an ensemble. That’s how the music will come out sounding robust and fulfilling for achieving ultimate AI.

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.


Code Obfuscation for AI Self-Driving Cars

By Lance Eliot, the AI Trends Insider

Earlier in my career, I was hired to reverse engineer a million lines of code for a system that the original developer had long since disappeared. He had left behind no documentation. The firm had at least gotten him to provide a copy of the source code. Nobody at the firm knew anything about how the code itself worked. The firm was dependent upon the compiled code executing right and they simply hoped and prayed that they would not need to make any changes to the system.

Not a very good spot to be in.

I was told that the project was a hush-hush one and that I should not tell anyone else what I was doing. They would only let me see the source code while physically at their office, and otherwise I wasn’t to make a copy of it or take it off the premises. They even gave me a private room to work in, rather than sitting in a cubicle or other area where fellow staffers were. I became my own miniature skunk works, of sorts.

There was a mixture of excitement and trepidation for me about this project. I had done other reverse engineering efforts before and knew how tough it could be to figure out someone else’s code. Any morsels of “documentation” were always welcomed, even if the former developer(s) had only written things onto napkins or the backs of recycled sheets of paper. Also, I usually had someone that kind of knew something about the structure of the code or at least had heard rumors through water cooler chats with the tech team. In this case, the only resources I had available were the end-users that used the system. I was able to converse with them and find out what the system was supposed to do, how they interacted with it, the outputs it produced, etc.

For a million lines of code, and with supposedly just one developer, he presumably was churning out a lot of lines of code for being just one person. I was told that he was a “coding genius” and that he was always able to “magically” make the system do whatever they needed. He was a great resource, they said. He was willing to make changes on the fly. He would come in during weekends to make changes. They felt like they had been given the “hacker from heaven” (with the word hacker in this case meaning a proficient programmer, and not the nowadays more common use as a criminal or cyber hacker).

I gently pointed out that if he was such a great developer, dare I say software engineer, how come he hadn’t documented his work? How come no one else was ever able to lay eyes on his work? How come he was the only one that knew what it did? I pointed out that they had painted themselves into a corner. If this heavenly hacker got hit by a bus (and floated upstairs, if you know what I mean), what then?

Well, they sheepishly admitted that I must be some kind of mind reader because he had one day just gotten up and left the company. There were stories that his girlfriend had gotten kidnapped in some foreign country and that he had arranged for mercenaries to rescue her, and that he personally was going there to be part of the rescue team. My mouth gaped open at this story. Sure, I suppose it could be true. I kind of doubted it. Seemed bogus.

The whole thing smelled like the classic case of someone that was protective of their work, and also maybe wanted a bit of job security. It’s pretty common that some developers will purposely aim to not document their code and make it as obscure as they can, in hopes of staving off losing their job. The idea is that if you are the only one that knows the secret sauce, the firm won’t dare get rid of you. You will have them trapped. Many companies have gotten themselves into that same predicament. And, though it seems like an obvious ploy to you and me, these firms often are clueless about what is taking place and fall into the trap without any awareness. When the person suddenly departs, the firm wakes up “shockingly” to what they’ve allowed to happen.

Some developers that get themselves into this posture will also at times try to push their luck. They demand that the firm pay them more money. They demand that the firm let them have some special perks. They keep upping the ante figuring that they’ll see how far they can push their leverage. This will at times trigger a firm to realize that things aren’t so kosher. At that point, they often aren’t sure of what to do. I’ve been hired as a “code mercenary” to parachute into such situations and try to help bail out the firm. As you might guess, the original developer, if still around, becomes nearly impossible to deal with and will refuse to lift a finger to help share or explain the secret sauce.

When I’ve discussed these situations with the programmer that had led things in that direction, they usually justified it. They would tell me that the firm at first paid them less than what a McDonald’s hamburger slinger would get. They got no respect for having finely honed programming skills. If the firm was stupid enough to then allow things to get into a posture whereby the programmer now had the upper hand, it seems like fair play. The company was willing to “cheat” him, so why shouldn’t he do likewise back to the company. The world’s a tough place and we each need to make our own choices, is what I was usually told.

Besides, it often played out over months and sometimes years, and the firm could have at any time opted to do something to prevent the continuing and deepening dependency. One such programmer told me that he had “saved” the company a lot of money. Writing documentation would have required more hours and more billable time. Showing the code to others and teaching them how it worked would have been, once again, more billable time. Furthermore, just like the case that I began to describe herein, he had worked evenings and weekends, being at the beck and call of the firm. They had gotten a great deal and had no right to complain.

Anyway, I’ll put to the side for the moment the ethics involved in all of this.

For those of you interested in the ethical aspects of programmers, please see my article:

When I took a look at the code of the “man that went to save his girlfriend in a strange land,” here’s what I found: Ludwig van Beethoven, Wolfgang Amadeus Mozart, Johann Sebastian Bach, Richard Wagner, Joseph Haydn, Johannes Brahms, Franz Schubert, Peter Ilyich Tchaikovsky, etc.


Allow me to elaborate. The entire source code consisted of variables with names of famous musical composers, and likewise all of the structure and objects and subroutines were named after such composers or were based on titles of their songs. Instead of seeing something like LoopCounter = LoopCounter + 1, it would say Mozart = Mozart + 1. Imagine a financial banking application that instead of referring to Account Name, Account Balance, Account Type, it instead said Bach, Wagner, and Brahms, respectively.
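
A toy illustration of this renaming trick: both functions below compute the same balance update, but the second uses composer names that reveal nothing about the banking domain (the function bodies here are invented for illustration, not taken from the actual system):

```python
# Readable version: the names explain the banking logic.
def update_balance(account_balance, deposit_amount):
    account_balance = account_balance + deposit_amount
    return account_balance

# Obfuscated version: identical logic, opaque composer names.
def wagner(brahms, schubert):
    brahms = brahms + schubert
    return brahms

print(update_balance(100, 25), wagner(100, 25))  # same result either way
```

The computation is untouched; only the reader's ability to grasp what it means has been degraded.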

So, when trying to figure out the code, you’d need to tease out of the code that whenever you see the use of “Bach” it really means the Account Name field. When you see the use of Wagner it really means the Account Balance. And so on.

I was kind of curious about this seeming fascination with musical composers. When I asked if the developer was known for perhaps having a passion for classical music, I was told that maybe so, but not that anyone noticed.

I’d guess that it wasn’t so much his personal tastes in composers, and instead it was more likely his interest in code obfuscation.

You might not be aware that some programmers will purposely write their code in a manner to obfuscate it. They will do exactly what this developer had done. Instead of using naming that would be logically befitting the circumstance, they make up other names. The idea is that this makes it much harder for anyone else to figure out the code. This ties back to my earlier point about the potential desire to become the only person that can do the maintenance and upkeep on the code. By making things as obfuscated as you can, you cause anyone else to either be baffled or have to climb up a steep learning curve to divine your secret sauce code.

If the person’s hand was forced by the company insisting that they share the code with Joe or Samantha, the programmer could say, sure, I’ll do so, and then hand them something that seems like utter mush. Here you go, have fun, the developer would say. If Joe and Samantha had not seen this kind of trickery before, they would likely roll their eyes and report back to management that it was going to be a long time to ferret out how the thing works.

The CEO of a software company, when this very thing happened and it was me that told him the programmer had made the code obfuscated, nearly blew his top. We’ll sue him for every dime we ever paid him, the CEO exclaimed. We’ll hang him out to dry and tell any future prospective employer that he’s poison and to never hire him. And so on. Of course, trying to go after the programmer for this is going to be somewhat problematic. Did the code work? Yes. Did it do what the firm wanted? Yes. Did the firm ever say anything about the code having to be more transparently written? No.

Motivations for Code Obfuscation Vary

I realize that some of you have dealt with code that appears to be the product of obfuscation, and yet you might say that it wasn’t done intentionally. Yes, I agree that sometimes the code obfuscation can occur by happenstance. A programmer that doesn’t consider the ramifications of their coding practices might indeed write such code. They maybe didn’t intend to write something obfuscated, it just turned out that way. Suppose this programmer loved the classics and the composers, and when he started the coding he opted to use their names. That was well and good for say the first thousand lines of code.

He then kept building upon the initial base of code. Might as well continue the theme of using composer names. After a while, the whole darned thing is shaped in that way. It can happen, bit by bit. At each point in time, you think it doesn’t make sense to redo what you’ve already done, and so you just keep going. It might be like constructing a building that you first laid down some wood beams for, and even if maybe you should be using steel instead because that building is actually ultimately going to be a skyscraper, you started with wood, you kept adding into it with wood, and so wood it is.

For those of you that have pride as a software engineer, these stories likely make you sick to your stomach. It’s those seat-of-the-pants programmers that give software development and software developers a bad name. Code obfuscation for a true software engineer is the antithesis of what they try to achieve. It’s like seeing a bridge with rivets and struts made of paper and knowing the whole thing was done in a jury-rigged manner. That’s not how you believe good and proper software is written.

In any case, I think we can say this: code obfuscation can happen for a number of reasons, including possibly:

  •         Unintentionally and without awareness of it as a concern
  •         Unintentionally and by step at a time falling into it
  •         Intentionally and with some loathsome intent to obfuscate
  •         Intentionally but with an innocent or good meaning intent

So far, the intent to obfuscate has been suggested as something being done for job security or other personal reasons that have seemed somewhat untoward. There’s another reason to want to obfuscate the code, namely for code security or privacy, and rightfully so.

Suppose you are worried that someone else might find the code. This someone is not supposed to have it. You want the code to remain relatively private and you are hopeful of securing it so that no one else can rip it off or otherwise see what’s in it. This could rightfully be the case, since you’ve written the code and the Intellectual Property (IP) rights to it belong to you. Companies often invest millions of dollars into developing proprietary code and they obviously would like to prevent others from readily taking it or stealing it.

You might opt to encrypt the file that contains the source code. Thus, if someone gets the file, they need to find a means to decrypt it to see the contents. You can use some really strong form of encryption and hopefully the person wanting to inappropriately decrypt the file will have a hard time doing so and might be unable to do so or give up trying.

Using encryption is pretty much an on-or-off kind of thing. In the encrypted state, no sense can be made of the contents, presumably. Suppose, though, that you realize that one way or another someone has a chance of actually getting to the source code and being able to read what it says. Either they decrypt the file, or they happen to come along when it is otherwise in a decrypted state and grab a copy of it, perhaps by wandering over to the programmer’s desktop, plugging in a USB stick, and quickly getting a copy while it is in plaintext format.

So, another layer of protection would be to obfuscate the code. You render the code less understandable. This can be done by altering the semantics of the code. The example of the musical composer names showcases how you might do this obfuscation. The musical composer names are written in English and readily read. But, from a logical perspective, in the context of this code, it wouldn’t have any meaning to someone else. The programmer(s) working on the code might have agreed that they all accept the idea that Bach means Account Name and Wagner means Account Balance.

Anyone else that somehow gets their hands on the code will be perplexed. What does Bach mean here? What does Wagner refer to? It puts those interlopers at a disadvantage. Rather than just picking up the code and immediately comprehending it, now they need to carefully study it and try to “reverse engineer” what it seems to be doing and how it is working.

This might require a laborious line-by-line inspection. It might take lots of time to figure out. Maybe it is so well obfuscated that there’s no reasonable way to figure it out at all.

The code obfuscation can also act like a watermark. Suppose that someone else grabs your code, and they opt to reuse it in their own system. They go around telling everyone that it is their own code, written from scratch, and no one else’s. Meanwhile, you come along and are able to take a look at their code. Imagine that you look at their code and observe that the code has musical composer names for all of the key objects in the code. Coincidence? Maybe, maybe not. It could be a means to try and argue that the code was ripped off from your code.

There are ways to programmatically make code obfuscated. Thus, you don’t necessarily need to do so by hand. You can use a tool to do the code obfuscation. Likewise, there are tools to help you crack a code obfuscation. Thus, you don’t necessarily need to do so entirely by hand.

In the case of the musical composer names, I might simply substitute the word “Bach” with the words “Account Name” and so on, which might make the code more comprehensible. The reality is that it isn’t quite that easy, and there are lots of clever ways to make the code so obfuscated that it is very hard to render it fully un-obfuscated. There is still often a lot of by-hand effort required.
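
A naive version of that substitution can be sketched as a whole-word find-and-replace over the source text; the alias map and sample line below are invented for illustration, and, as just noted, real obfuscation is rarely undone this easily:

```python
# Naive de-obfuscation pass: map known composer aliases back to
# readable names using whole-word regex replaces.
import re

alias_map = {
    "Bach": "account_name",
    "Wagner": "account_balance",
    "Brahms": "account_type",
}

source = "Wagner = Wagner + deposit  # update Bach's Brahms record"

for alias, real_name in alias_map.items():
    source = re.sub(r"\b%s\b" % alias, real_name, source)

print(source)
```

This works only because the aliases happen to be known and appear as standalone words; a stronger obfuscation would also mangle control flow and structure, defeating any simple substitution.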

In this sense, the use of code obfuscation can be by purposeful design. You are trying to achieve the so-called “security by obscurity” kind of trickery. If you can make something obscure, it tends to make it harder to figure out and break into. At my house, I might put a key outside in my backyard so that I can get in whenever I want, but of course a burglar can now do the same. I might put the key under the doormat, but that’s pretty minimal obscurity. If I instead put the key inside a fake rock and I put it amongst a whole dirt area of rocks, the obfuscation is a lot stronger.

One thing about the source code obfuscation that needs to be kept in mind is that you don’t want to alter the code such that it computationally does something different than what it otherwise was going to do. That’s not usually considered in the realm of obfuscation. In other words, you can change the appearance of the code, you can possibly change around the code so that it doesn’t seem as recognizable, but if you’ve now made it that the code can no longer calculate the person’s banking balance, or if you’ve changed it such that the banking balance now gets calculated in a different way, you aren’t doing just code obfuscation.

In quick recap, here’s some aspects about code obfuscation:

  •         You are changing up the semantics and the look, but not the computational effect
  •         Code obfuscation can be done by-hand and/or by the use of tools
  •         Trying to reverse engineer the obfuscation can be done by-hand and/or by the use of tools
  •         There is weak obfuscation that doesn’t do an extensive code obfuscation
  •         There is strong obfuscation that makes the code obfuscation deep and arcane to unwind
  •         Code obfuscation can serve an additional purpose of trying to act like a watermark

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. And, like many of the auto makers and tech firms, we consider the source code to be proprietary and worthy of protecting.

One means for the auto makers and tech firms to try and achieve some “security via obscurity” is to go ahead and apply code obfuscation to their precious and highly costly source code.

This will help too for circumstances where someone somehow gets a copy of the source code. It could be an insider that opts to leak it to another firm or sell it to a competitor. Or, it could be that a breach took place into the systems holding the source code and a determined attacker managed to grab it. At some later point in time, if the matter gets exposed and there is a legal dispute, it’s possible that the code obfuscation aspects could come into play as a type of watermark of the original code.

For my article about the stealing of secrets and AI self-driving cars, see:

For my article about the egocentric designs of AI self-driving cars, see:

If you are considering using code obfuscation for this kind of purpose, you’ll obviously want to make sure that the rest of the team involved in the code development is on-board with the notion too. Some developers will like the idea, some will not. Some firms will say that when you check-out the code from a versioning system, they will have it automatically undo the code obfuscation, and only when it is resting in the code management system will it be in the code obfuscation form. Anyway, there are lots of issues to be considered before jumping into this.

For my article about AI developers and groupthink, see:

For the dangers of making an AI system into a Frankenstein, see my article:

Let’s also remember that there are other ways that one can end-up with code obfuscation. For some of the auto makers and tech firms, and with some of the open source code that has been posted for AI self-driving cars, I’ve right away noticed a certain amount of code obfuscation that has crept into the code when I’ve gotten an opportunity to inspect it.

As mentioned earlier, it could be that the natural inclination of the programmers or AI developers involves writing code that has code obfuscation in it. This can be especially true for some of the AI developers that were working in university research labs and now they have taken a job at an auto maker or tech firm that is creating AI software for self-driving cars. In the academic environment, often any kind of code you want to sling is fine, no need to “pretty it up” since it usually is done as a one-off to do an experiment or provide some kind of proof about an algorithm.

Self-Driving Car Software Needs to be Well-Built

The software intended to run a self-driving car ought to be better made than that – lives are at stake.

In some cases, the AI developers are under such immense pressure to churn out code for a self-driving car, due to the auto maker or tech firm having unimaginable or unattainable deadlines, that they write code without regard to whether it is clear or not. As has often been said, there is no style in a knife fight. There can also be AI developers that aren’t given guidance to write clearer code, or not given the time to do so, or not rewarded for doing so, and all of those reasons can come into play in code obfuscation too.

See my article about AI developer burnout:

See my article about API’s and AI self-driving cars:

Per my framework about AI self-driving cars, these are the major tasks involved in the AI driving the car:

  •         Sensor data collection and interpretation
  •         Sensor fusion
  •         Virtual world model updating
  •         AI action plan formulation
  •         Car controls command issuance

See my framework at:

There is a lot of code involved in each of those tasks. This is a real-time system that must be able to act and react quickly. The code needs to be tightly done so that it can run in optimal time. Meanwhile, the code needs to be understandable since the humans that wrote the code will need to find bugs in it, when they appear (which they will), and the humans need to update the code (such as when new sensors are added), and so on.

Some of the elements are based on “non-code” such as a machine learning model. Let’s agree to carve that out of the code obfuscation topic for the moment, though there are certainly ways to craft a machine learning model that can be more transparent or less transparent. In any case, taking out those pre-canned portions, I assure you that there’s a lot of code still leftover.

See my article about machine learning models and AI self-driving cars:

The auto makers and tech firms are in a mixed bag right now, with some of them developing AI software for self-driving cars that is well written, robust, and ready for being maintained and updated. Others are rushing to write the code, or are unaware of the ramifications of writing obfuscated code, and might not realize the error of their ways until further along in the life cycle of advancing their self-driving cars. There are even some AI developers that are like the music man that wrote his code with musical composers in mind, for which it could be an unintentional act or an intentional act. In any case, it might be “good” for them right now, but it will most likely turn out later to be “bad” for them and others too.

Here, then, are the final rules for today’s discussion on code obfuscation for AI self-driving cars:

  •         If it is happening and you don’t realize it, please wake up and decide overtly what to do about it
  •         If you are using it as a rightful technique for security by obscurity, please make sure you do so aptly
  •         If you are using it for nefarious purposes, just be aware that what goes around comes around
  •         If you aren’t using it, decide explicitly whether to consider it or not, making a calculated decision about the value and ROI of using code obfuscation

For those of you reading this article, please be aware that in thirty seconds this text will self-obfuscate into English language obfuscation and the article will no longer appear to be about code obfuscation and instead will be about underwater basket weaving. The secrets of code obfuscation herein will no longer be visible. Voila!

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.


Here are 9 AI Use Cases Happening in Business Today
Artificial intelligence (AI) is increasingly getting attention from enterprise decision makers. Given that, it’s no surprise that AI use cases are growing. According to research conducted by Gartner, smart machines will achieve mainstream adoption by 2021, with 30 percent of large companies using AI.

These technologies, which can take the form of cognitive computing, machine learning and deep learning, are now tapping advanced capabilities such as image recognition, speech recognition, the use of smart agents, and predictive analytics to reinvent the way organizations do business. Combined with other digital technologies, including the Internet of Things (IoT), a new era of AI promises to transform business.

Here’s a look at nine leading AI use cases and how organizations can use them to gain a competitive advantage:

Marketing: AI for Real-Time Data

The use of real-time data, Web data, historical purchase data, app use data, unstructured data and geolocation information has introduced the ability to deliver information, product recommendations, coupons and incentives at the right time and place. AI allows companies to engage in personalized marketing and slide the dial closer to one-to-one relationships.

In addition, businesses gain competitive advantage by using machine learning and deep learning for sentiment analysis by analyzing e-mail and social media streams. More advanced systems can detect a person’s mood from photos and videos. This helps systems respond contextually and create more targeted marketing and interactions.
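As a minimal sketch of the sentiment analysis idea mentioned above, here is a toy lexicon-based scorer. The word lists are invented for illustration; production systems would use trained machine learning or deep learning models rather than fixed word lists.

```python
# Toy lexicon-based sentiment scorer: counts positive words minus
# negative words in a piece of customer text.
POSITIVE = {"great", "love", "fast", "helpful"}
NEGATIVE = {"slow", "broken", "hate", "refund"}

def sentiment_score(text):
    # Positive score suggests a happy customer; negative suggests a
    # complaint that marketing or support might want to prioritize.
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("love the fast shipping"))      # positive text
print(sentiment_score("item arrived broken, refund"))  # negative text
```

The business value comes from running a scorer like this (or its far more capable learned equivalents) across e-mail and social media streams at scale, so responses can be targeted by detected mood.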

Retail Sales: AI for Voice and Image Search

Artificial intelligence in retail is transforming the way people shop and buy items ranging from clothes to cars. Voice search and image search are now widespread. Amazon and many other retailers now incorporate these tools in their apps. Next generation AI is also taking shape. For example, augmented reality (AR) lets shoppers view a sofa or paint color superimposed in their house or office. Virtual reality (VR) allows consumers to sit inside a vehicle and even test drive it without leaving home. Audi, BMW and others have developed VR systems for shoppers.

But the AI use cases don’t stop there. AI in retail extends to bots and virtual assistants that recommend products and provide information; algorithms that help sales teams focus on high-value customers and high-probability transactions; and predictive analytics that factor in weather, the price of raw goods and components, or inventory levels to adjust pricing and promotions dynamically. Clothing retailer The North Face, for instance, asks customers a series of questions related to a purchase on its website. Not only does this lead customers to the right product, it also taps machine learning to gain insights that potentially lead to higher cart values and additional sales.

Customer Support: AI for Natural Language

AI in retail is emerging as a powerful force, but customer support is also harnessing the technology for competitive advantage. Bots and digital assistants are transforming the way support functions take place. These technologies increasingly rely on natural language processing to identify problems and engage in automated conversations. AI algorithms determine how to direct the conversation or route the call to the right human agent, who has the required information on hand. This helps shorten calls and produces higher customer satisfaction rates. A Forrester study found that 73 percent of customers said that valuing their time is the most important thing a company can do to provide them with good online customer service.

Manufacturing: AI Powers Smart Robots

Robotics has already changed the face of manufacturing. However, robots are becoming far more intelligent and autonomous, thanks to AI. What is machine learning used for in factories? Many companies are building so-called “smart manufacturing” facilities that use AI to optimize labor, speed production and improve product quality. Companies are also turning to predictive analytics to understand when a piece of equipment is likely to require maintenance, repair or replacement.

For example, Siemens is now equipping gas turbine systems with more than 500 sensors that continuously monitor devices and machines. All this data is helping create the manufacturing facility of the future, sometimes referred to as Industry 4.0. Smart manufacturing, which merges the industrial IoT and AI, is projected to grow from $200 billion in 2018 to $320 billion by 2020, according to a study conducted by market research firm TrendForce.
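As a minimal sketch of the predictive-maintenance idea described above, here is a toy check that flags equipment whose recent sensor readings drift beyond a band around a historical baseline. The function name, readings, and thresholds are all invented for illustration; real systems fit statistical or machine learning models to far richer sensor histories.

```python
def needs_maintenance(readings, baseline, tolerance):
    # Flag the equipment if the average of recent readings leaves the
    # acceptable band around the historical baseline.
    recent_avg = sum(readings) / len(readings)
    return abs(recent_avg - baseline) > tolerance

# Hypothetical recent vibration readings from one turbine sensor (mm/s).
vibration_mm_s = [2.1, 2.3, 2.2, 3.9, 4.1]
print(needs_maintenance(vibration_mm_s, baseline=2.2, tolerance=0.5))
```

The point of the technique is timing: catching the drift in the last two readings lets a plant schedule a repair before the part fails, instead of after.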

Read the source article in Datamation.