Four Suggestions for Using a Kaggle Competition to Test AI in Business

According to a McKinsey report, only 20% of companies consider themselves adopters of AI technology while 41% remain uncertain about the benefits that AI provides. Considering the cost of implementing AI and the organizational challenges that come with it, it’s no surprise that smart companies seek ways to test the solutions before implementing them and get a sneak peek into the AI world without making a leap of faith.

That’s why more and more organizations are turning to data science competition platforms like Kaggle, CrowdAI and DrivenData. Making a data science-related challenge public and inviting the community to tackle it comes with many benefits:

  • Low initial cost – the company needs only to provide data scientists with data, pay the entrance fee and fund the award. There are no further costs.
  • Validating results – participants provide the company with verifiable, working solutions.
  • Establishing contacts – A lot of companies and professionals take part in Kaggle competitions. The ones who tackled the challenge may be potential vendors for your company.
  • Brainstorming the solution – data science is a creative field, and there’s often more than one way to solve a problem. Sponsoring a competition means you’re sponsoring a brainstorming session with thousands of professional and passionate data scientists, including the best of the best.
  • No further investment or involvement – the company gets immediate feedback. If the AI solution proves effective, the company can move forward with it; if not, its involvement ends with funding the award and no further costs are incurred.

While numerous organizations – big e-commerce websites and state administrations among them – sponsor competitions and leverage the power of the data science community, running a competition is not at all simple. An excellent example is the competition the US National Oceanic and Atmospheric Administration sponsored when it needed a solution that would recognize and differentiate individual right whales from the herd. Ultimately, what proved the most efficacious was the principle of facial recognition, but applied to the topsides of the whales, which were obscured by weather, water and the distance between the photographer above and the whales far below. To check if this was even possible, and how accurate a solution might be, the organization ran a Kaggle competition, which deepsense.ai won.

Having won several such competitions, we have encountered both brilliant and not-so-brilliant ones. That’s why we decided to prepare a guide for every organization interested in testing potential AI solutions in Kaggle, CrowdAI or DrivenData competitions.

Recommendation 1. Deliver participants high-quality data

The quality of your data is crucial to attaining a meaningful outcome. Without data, even the best machine learning model is useless. This also applies to data science competitions: without quality training data, the participants will not be able to build a working model. This is a great challenge when it comes to medical data, where obtaining enough information is problematic for both legal and practical reasons.

  • Scenario: A farming company wants to build a model to identify soil type from photos and probing results. Although there are six classes of farming soil, the company is able to deliver sample data for only four. Considering that, running the competition would make no sense – the machine learning model wouldn’t be able to recognize all the soil types.

Advice: Ensure your data is complete, clear and representative before launching the competition.

Recommendation 2. Build clear and descriptive rules

Competitions are put together to achieve goals, so the model has to produce a useful outcome. And “useful” is the point here. Because those participating in the competition are not professionals in the field they’re building a solution for, the rules need to be grounded in the business case and the model’s intended use. Including even basic guidelines will help participants address the challenge properly. Lacking these foundations, the outcome may be technically correct but totally useless.

  • Scenario: Mapping the distribution of children below the age of 7 in the city will be used to optimize social, educational and healthcare policies. To make the mapping work, it is crucial to include additional guidelines in the rules: the areas mapped need to be bordered by streets, rivers, rail lines, district boundaries and other topographical features of the city. Lacking these, many models may map the distribution by cutting the city into 10-meter-wide, kilometer-long stripes – a valid segmentation, but one whose outcome is totally useless due to the lack of proper guidelines in the competition rules.

Advice: Think about usage and include the respective guidelines within the rules of the competition to make it highly goal-oriented and common sense driven.

Read the source article at deepsense.ai.

Here are the Top 5 Languages for Machine Learning, Data Science

Careers in data science, artificial intelligence, machine learning, and related technologies are considered among the best choices to pursue in an uncertain future economy where many jobs may end up automated and performed by robots and AI.

Yet in spite of the likely strong and secure future of these careers, the job marketplace remains fundamentally unbalanced. There are still many more jobs open and available than there are qualified applicants to fill those jobs. Just do a search on Monster for the keyword machine learning and you will find thousands of job openings across the country.

Whether you are just starting out in your IT career or you are watching high-profile IT layoffs and considering the best new skills to learn, chances are you are wondering what the best skills are to emphasize on your LinkedIn profile and the best skills to focus on in the next online course you take. What programming language is the most likely to secure your future?

Through our regular discussions with executives, recruiters, and practitioners in the field, we’ve come up with a short list for you. You may already have one or more of these skills. Maybe you are wondering about the best one to learn next. Here’s our list. If you see one that you think we missed, please let us know in the comments section.

R

R remains one of the top languages for data science. First developed in the 1990s, this open source language has its roots in statistics, data analysis, and data visualization. In recent years it’s become the choice of a new generation of analysts who have appreciated the active open source community, the fact that they can download the software for free, and the downloadable packages available to customize the tool. Tech giant Microsoft has also embraced the platform, acquiring Revolution Analytics, a commercially supported enterprise platform for R, in 2015.

Java

Java has also been around since the early 1990s, and back then was famous for its “write once, run anywhere” design, originating inside Sun Microsystems. Sun may no longer exist, having been acquired by Oracle, but Java seems here to stay, and it’s one of the languages you will likely encounter in your career as a machine learning specialist. Many of the machine learning job description ads out there specify Java as one of the languages they’d like for you to know. Chances are if you’ve been in development at all over the last 20 years, you’ve acquired a little bit of experience with Java. And if you feel like you need a little more hands-on experience, it’s pretty easy to find an online course.

Scala

Scala is another language that has been popular with data scientists and machine learning specialists. You’ll see this one mentioned most often in job ads where real-time data analysis is important to the role. It is an implementation language of technologies that enable streaming data, such as Spark and Kafka. Scala combines functional and object-oriented programming and works with both Java and JavaScript.

C and C++

These languages have also been around for decades, and you may see them mentioned in machine learning job ads in the same sentence as some other more popular languages for machine learning. Organizations may be looking to add machine learning to existing projects that were built in these languages and so they may be looking for this kind of expertise. But if you are looking for a first language to learn for use with machine learning, it’s probably not one of these.

Python

Right now, Python is probably the top language to learn if you are looking to skill up in areas around machine learning. Just check out online machine learning courses that are available today. Chances are the one you pick will be using Python as the language of choice.

You’ll also find that Python is probably the top named language skill in job ads for machine learning specialists, and certainly also mentioned in many ads for data scientists and analysts, too. If you have to choose one skill to learn this year, Python is a great choice.

Read the source article in InformationWeek.

Solving Sparse-Reward Tasks with Curiosity

By Arthur Juliani, Senior Software Engineer, Machine Learning, Unity Technologies

Now there is an easy way to encourage agents to explore the environment more effectively when the rewards are infrequent and sparsely distributed. These agents can do this using a reward they give themselves based on how surprised they are about the outcome of their actions. In this post, I will explain how this new system works, and then show how we can use it to help our agent solve a task that would otherwise be much more difficult for a vanilla Reinforcement Learning (RL) algorithm to solve.

Curiosity-driven exploration

When it comes to Reinforcement Learning, the primary learning signal comes in the form of the reward: a scalar value provided to the agent after every decision it makes. This reward is typically provided by the environment itself and specified by the creator of the environment. These rewards often correspond to things like +1.0 for reaching the goal, -1.0 for dying, etc. We can think of this kind of rewards as being extrinsic because they come from outside the agent. If there are extrinsic rewards, then that means there must be intrinsic ones too. Rather than being provided by the environment, intrinsic rewards are generated by the agent itself based on some criteria. Of course, not any intrinsic reward would do. We want intrinsic rewards which ultimately serve some purpose, such as changing the agent’s behavior such that it will get even greater extrinsic rewards in the future, or that the agent will explore the world more than it might have otherwise. In humans and other mammals, the pursuit of these intrinsic rewards is often referred to as intrinsic motivation and tied closely to our feelings of agency.

Researchers in the field of Reinforcement Learning have put a lot of thought into developing good systems for providing intrinsic rewards to agents which endow them with similar motivation as we find in nature’s agents. One popular approach is to endow the agent with a sense of curiosity and to reward it based on how surprised it is by the world around it. If you think about how a young baby learns about the world, it isn’t pursuing any specific goal, but rather playing and exploring for the novelty of the experience. You can say that the child is curious. The idea behind curiosity-driven exploration is to instill this kind of motivation into our agents. If the agent is rewarded for reaching states which are surprising to it, then it will learn strategies to explore the environment to find more and more surprising states. Along the way, the agent will hopefully also discover the extrinsic reward as well, such as a distant goal position in a maze, or sparse resource on a landscape.

We chose to implement one specific such approach from a paper released last year by Deepak Pathak and his colleagues at Berkeley. It is called Curiosity-driven Exploration by Self-supervised Prediction, and you can read the paper here if you are interested in the full details. In the paper, the authors formulate the idea of curiosity in a clever and generalizable way. They propose to train two separate neural networks: a forward and an inverse model. The inverse model is trained to take the current and next observation received by the agent, encode them both using a single encoder, and use the result to predict the action that was taken between the occurrence of the two observations. The forward model is then trained to take the encoded current observation and action and predict the encoded next observation. The difference between the predicted and real encodings is then used as the intrinsic reward and fed to the agent. A bigger difference means a bigger surprise, which in turn means a bigger intrinsic reward.
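The forward-model half of this scheme can be sketched in a few lines of numpy. This is only an illustration, not the authors' implementation: the "encoder" and "forward model" here are fixed random linear maps standing in for trained neural networks, and the dimensions are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
obs_dim, enc_dim, n_actions = 8, 4, 3

# Stand-ins for the learned encoder and forward model (fixed random
# linear maps here; in the paper these are trained networks).
W_enc = rng.normal(size=(obs_dim, enc_dim))
W_fwd = rng.normal(size=(enc_dim + n_actions, enc_dim))

def encode(obs):
    # Map a raw observation into the shared encoding space.
    return obs @ W_enc

def forward_model(phi, action_onehot):
    # Predict the encoding of the next observation from the current
    # encoding and the action taken.
    return np.concatenate([phi, action_onehot]) @ W_fwd

def intrinsic_reward(obs, action_onehot, next_obs):
    phi_next = encode(next_obs)
    phi_pred = forward_model(encode(obs), action_onehot)
    # Bigger prediction error -> bigger surprise -> bigger reward.
    return 0.5 * np.sum((phi_pred - phi_next) ** 2)

obs = rng.normal(size=obs_dim)
next_obs = rng.normal(size=obs_dim)
action = np.eye(n_actions)[1]   # one-hot encoding of action 1
r_i = intrinsic_reward(obs, action, next_obs)
print(r_i >= 0.0)               # surprise is never negative
```

In the full system this intrinsic reward is simply added to whatever extrinsic reward the environment provides before the RL update.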

Read the source post on the Unity blog.

Here Are 10 Open-Source Tools/Frameworks for Artificial Intelligence

Here are 10 open-source tools/frameworks for today’s hot topic, AI.

TensorFlow

An open-source software library for Machine Intelligence.

TensorFlow™ is an open-source software library for numerical computation using data flow graphs, originally developed by researchers and engineers working on the Google Brain Team. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API.

TensorFlow provides multiple APIs. The lowest level API — TensorFlow Core — provides you with complete programming control. The higher level APIs are built on top of TensorFlow Core. These higher level APIs are typically easier to learn and use than TensorFlow Core. In addition, the higher level APIs make repetitive tasks easier and more consistent between different users. A high-level API like tf.estimator helps you manage data sets, estimators, training, and inference.

The central unit of data in TensorFlow is the tensor. A tensor consists of a set of primitive values shaped into an array of any number of dimensions. A tensor’s rank is its number of dimensions.
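To make "rank" concrete, here is a small illustration using numpy arrays as stand-ins for tensors; the rank of a tensor is simply its number of dimensions.

```python
import numpy as np

scalar = np.array(3.0)                # rank 0: a single value
vector = np.array([1.0, 2.0, 3.0])    # rank 1: a 1-D array
matrix = np.array([[1.0, 2.0],
                   [3.0, 4.0]])       # rank 2: a 2-D array

# .ndim is numpy's name for the number of dimensions (the rank).
print(scalar.ndim, vector.ndim, matrix.ndim)  # 0 1 2
```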

A few Google applications using TensorFlow are:

RankBrain: A large-scale deployment of deep neural nets for search ranking on www.google.com

Inception Image Classification Model: Baseline model and follow-on research into highly accurate computer vision models, starting with the model that won the 2014 Imagenet image classification challenge

SmartReply: Deep LSTM model to automatically generate email responses

Massively Multitask Networks for Drug Discovery: A deep neural network model for identifying promising drug candidates by Google in association with Stanford University.

On-Device Computer Vision for OCR: On-device computer vision model to do optical character recognition to enable real-time translation

Apache SystemML

An optimal workplace for machine learning using big data.

SystemML, the machine learning technology created at IBM, has reached top-level project status at the Apache Software Foundation. It is a flexible, scalable machine learning system whose important characteristics are:

Algorithm customizability via R-like and Python-like languages.

Multiple execution modes, including Spark MLContext, Spark Batch, Hadoop Batch, Standalone, and JMLC (Java Machine Learning Connector).

Automatic optimization based on data and cluster characteristics to ensure both efficiency and scalability.

SystemML can be considered the SQL of machine learning. The latest version (1.0.0) of SystemML supports Java 8+, Scala 2.11+, Python 2.7/3.5+, Hadoop 2.6+, and Spark 2.1+.

It can be run on top of Apache Spark, where it automatically scales your data line by line, determining whether your code should be run on the driver or an Apache Spark cluster. Future SystemML developments include additional deep learning with GPU capabilities such as importing and running neural network architectures and pre-trained models for training.

Java Machine Learning Connector (JMLC) for SystemML

The Java Machine Learning Connector (JMLC) API is a programmatic interface for interacting with SystemML in an embedded fashion. The primary purpose of JMLC is as a scoring API, where your scoring function is expressed using SystemML’s DML (Declarative Machine Learning) language. In addition to scoring, embedded SystemML can be used for tasks such as unsupervised learning (for example, clustering) in the context of a larger application running on a single machine.

Caffe

A deep learning framework made with expression, speed, and modularity in mind.

The Caffe project was initiated by Yangqing Jia during his Ph.D. at UC Berkeley and then later developed by Berkeley AI Research (BAIR) and by community contributors. It mostly focuses on convolutional networks for computer vision applications. Caffe is a solid and popular choice for computer vision-related tasks, and you can download many successful models made by Caffe users from the Caffe Model Zoo (link below) for out-of-the-box use.

Caffe Advantages

Expressive architecture encourages application and innovation. Models and optimization are defined by configuration without hard-coding. Switch between CPU and GPU by setting a single flag to train on a GPU machine then deploy to commodity clusters or mobile devices.

Extensible code fosters active development. In its first year, Caffe was forked by over 1,000 developers and had many significant changes contributed back.

Speed makes Caffe perfect for research experiments and industry deployment. Caffe can process over 60M images per day with a single NVIDIA K40 GPU.

Community: Caffe already powers academic research projects, startup prototypes, and even large-scale industrial applications in vision, speech, and multimedia.

Read the source article at DZone.com.

Tit-for-Tat and AI Self-Driving Cars

By Lance Eliot, the AI Trends Insider

When I get to work each morning, I often see a duel between car drivers when they reach the gate that goes into the workplace parking garage. Let me describe the situation for you.

For those drivers such as me that tend to come into the parking garage from a major street that runs in front of the parking facility, we turn into a narrow alley that then leads to a gate arm. You need to then take out your pass card, wave it at the panel that will open the gate, and the gate arm then rises up (assuming you have a valid card). At this juncture, you are able to drive into the parking structure. This works just fine most of the time in the sense that once the gate arm is up, you zoom ahead into the parking structure.

But, it turns out that there is a second gate that is inside the parking structure and it allows traffic that is already in the structure to get onto the same parking floor as the gate that connects to the major street. This other gate arm is directly in front of the gate that I go through. So, what sometimes happens is that the gate opens from the street side, and there is a car inside the structure that at that same moment opens the gate that is just a few feet in front of the street side gate. Now, you have two cars facing each other, both wanting to go into the parking structure, but only one can do so at a time (because of the thin neck of the parking structure where the two gates are setup).

Imagine a duel in the old west days. Two gunslingers, both with loaded guns, eyeing each other. One waits for the other since either one might suddenly spring forth with their gun. Who will pull their gun out first? The same thing happens when both gates open at the same time. One car driver eyes the other car driver. Is the other car driver going to go first, or should I go first, that’s what’s in the mind of each driver.

Now, there are some car drivers that seem not to care that the other car driver might suddenly go forward, and as a result, these careless or heartless drivers just move forward and take what they seem to think is their birthright to always go first. This often seems to work out, admittedly. I’ve never seen two cars crash into each other. That being said, there are situations where both of the car drivers at each of the gates seem to believe they each have a birthright to make the first move. And, in those instances, the two cars have gotten pretty close to hitting each other. Typically, they get within inches, one opts to stop, and the other driver zips ahead.

There are also the nervous nellies, or perhaps more properly the courteous drivers, who, when the gates open, wait for the other driver to make the first move. These car drivers are willing to give the other driver the right of way. This is pretty nice, except that sometimes it means both cars are sitting there, each waiting for the other, apparently willing to wait forever, while the car drivers behind them get really ticked off. You see, the gate arms are up, and yet nobody is moving, which can anger other drivers. If you are someone that is nearly late for work, and you see that the gate arms are up yet nobody is moving, you would be quite upset at being held back.

I’ve spoken with many of these drivers of each kind. The birthright believers say it isn’t necessarily that they think they have the right of way; instead, they insist that their method is the most efficient. By not waiting around, they are moving the lines forward. Their belief is that all drivers should always be moving ahead as soon as the gate arm opens. If this means you might also make contact with another car, that’s fine, since it is a modest cost for keeping the line moving efficiently. These car drivers also think that the ones who wait once the gate opens are idiots, because they are holding up the line and making things inefficient.

The ones that are willing to wait for the other driver believe that this is the right way to do things. They believe it minimizes the potential for crashing into other cars. It is polite. It is civilized. The other car drivers that insist on the right of way are piggish. Those piggish drivers don’t care about other people and are ego-centric. They are the ones that ruin the world for everyone else.

Which is right and which is wrong?

You tell me. I’m not sure we can definitively call either one always right or always wrong. Certainly, the ones that increase the risk of cars hitting each other are endangering the lives of others, and so we could argue they are wrong for making that first move. I suppose, though, that those drivers would say that drivers joining the line might hit the cars ahead of them, since once the gate arm opens it suggests everyone should be moving forward, and so maybe that risk needs to be compared to the risk of two cars hitting each other once the gates are open and the two cars are facing off.

It’s a conundrum, for sure.

The Prisoner’s Dilemma

Some of you might recognize this problem as one that has been affectionately called the Prisoner’s Dilemma.

In the case of the prisoner’s dilemma, you are to pretend that there are two people that have been arrested and are being held by the police in separate rooms. The two have no means to communicate with each other.  They were each potentially involved in the same crime. A prosecutor is willing to offer them each a deal.

The deal is that if one of them betrays the other and says that the other one did the crime, the one that does the betraying will be set free if the other one has remained silent (and in that case the other one will get 3 years in jail). But, if the other one has also tried to betray the one that is doing the betraying, they will both get 2 years in jail. If neither of them betrays the other, they will each get 1 year in jail.

So, if you were one of those prisoners, what would you do?

Would you betray the other one, doing so in hopes that the other one remains silent and so you can then go free? Of course, if the other one also betrays you, you both are going to jail for 2 years. Even worse, if you have remained silent and the other one betrays you, you’ll go to jail for 3 years. You could pin your hopes on the other one remaining silent and you opt to remain silent, figuring in that case you both only get 1 year each.

Another conundrum, for sure!

If you were to create a so-called payoff matrix, you would put on one axis the choices for you, and on the other axis the choices for the other person. You would have a 2×2 matrix, consisting of you remaining silent, you betraying the other, on one axis, and the other axis would have the other person remaining silent, or betraying you.

There is the viewpoint that by remaining silent you are trying to cooperate with the other person, while if you betray the other person you are essentially defecting from cooperating. If you defect and the other person tries to cooperate, you get the temptation payoff, labeled T. If you cooperate and the other person cooperates, you get the reward payoff, labeled R. If you defect and the other person defects, you get the punishment payoff, labeled P. If you cooperate and the other person defects, you get the sucker’s payoff, labeled S.

In the game as I’ve described it:  T > R > P > S

I mention this because there are numerous variations of the dilemma in terms of the payoff for each of the four choices, and it makes a difference: unless the above ordering holds, namely T > R, R > P, P > S, the logic about which choice you should rationally make changes.
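Using the jail terms from the description above (going free = 0 years, so a higher payoff means fewer years), the payoff matrix and the T > R > P > S ordering can be sketched as:

```python
# Payoff to "you" (negative years in jail) for each pair of choices.
C, D = "cooperate", "defect"  # cooperate = stay silent, defect = betray

# (your_choice, their_choice) -> your payoff
payoff = {
    (D, C): 0,    # T, temptation: you betray, they stay silent -> you go free
    (C, C): -1,   # R, reward: both stay silent -> 1 year each
    (D, D): -2,   # P, punishment: both betray -> 2 years each
    (C, D): -3,   # S, sucker's payoff: you stay silent, they betray -> 3 years
}

T, R, P, S = payoff[(D, C)], payoff[(C, C)], payoff[(D, D)], payoff[(C, D)]
print(T > R > P > S)  # True

# Defection dominates: whatever the other prisoner does, betraying
# leaves you strictly better off than staying silent.
print(payoff[(D, C)] > payoff[(C, C)] and payoff[(D, D)] > payoff[(C, D)])  # True
```

The second check is why betrayal is the "rational" single-shot choice discussed below, even though mutual cooperation would leave both prisoners better off.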

Anyway, what would you do? In theory, you are better off betraying the other prisoner. This is the rational, self-interested choice, since whatever the other prisoner does, betraying leaves you with a lighter sentence than remaining silent.

Now, you might try to fight the problem by saying that it depends on the nature of the other person. You might think that certainly you would already know the other fellow prisoner and so you would already have an inkling of how the other person is going to choose. If you knew that the other person was the type to remain silent, then you would indeed want to remain silent. If you knew that the other person was the type to fink on others, you’d presumably want to betray them.

But, we are going to say that you don’t know the other person and do not know what choice they are likely or unlikely to make.  Going back to the car drivers at the open gates, the car driver looking at the other car driver does not know that other person and does not know if they are the type of person that will zoom ahead or will be willing to wait. It’s the same kind of situation. Complete strangers that do not know what the other one will do.

Humans Tend Toward Cooperative Behavior

You might feel better about the world if I were to tell you that humans in these kinds of games have tended toward cooperative behavior and would, more often than not, be willing to assume that the other person will act cooperatively too. I hope that boosts your feelings about humanity. Well, keep in mind that those who aren’t the cooperative types are now thinking that you are sheep, and they like the idea that there are lots of sheep in the world. Sorry.

You might somewhat object to this prisoner’s dilemma since it is only a one-time deal. You might wonder what someone would do if the same dilemma happened over and over. There is indeed the iterated prisoner’s dilemma, in which you play the game once, then after the outcome is known, you play it again, and so on. This makes for a quite different situation. You can now see what the other prisoner is doing over time, and opt to respond based on what the other person does.

When the number of times that the play is iterated is known to the players, the proper rational thing to do is for each to betray. In that sense, it is just like the single-shot game play. On the other hand, if the number of iterations is unknown, it is a toss-up as to whether to cooperate or to defect.

For the multiple plays, there are various strategies you can use. One is the “nice person” strategy of starting as a cooperative person and only switching if the other does a betray. The extreme of the nice person strategy is to always cooperate no matter what, but usually the other person will realize this and will then switch to betray for the rest of the game.

You might find it interesting that these prisoner’s dilemma games have been played in numerous tournaments. One of the most successful strategies was written in just four lines of programming code and became known as tit-for-tat: whatever the other person did, the program did the same thing on the next move. The only problem is that the game then often goes into a death spiral of both players always defecting. As such, there is a variant, tit-for-tat with forgiveness, which detects when a certain number of continuous betrayals has occurred and then switches to cooperate in hopes that the other side will do so too.
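A tit-for-tat player with forgiveness can be sketched in a few lines. This is an illustration, not any tournament entry: the forgiveness threshold of three mutual defections is an arbitrary choice here.

```python
def tit_for_tat(my_history, their_history, forgive_after=3):
    """Cooperate first, then mirror the opponent's last move; after a run
    of mutual defections, offer cooperation to escape the death spiral."""
    if not their_history:
        return "C"                               # start nice
    mutual_rut = (
        len(my_history) >= forgive_after
        and all(m == "D" for m in my_history[-forgive_after:])
        and all(m == "D" for m in their_history[-forgive_after:])
    )
    if mutual_rut:
        return "C"                               # forgiveness
    return their_history[-1]                     # plain tit-for-tat

# Against an always-defecting opponent, plain mirroring would defect
# forever; the forgiving variant periodically offers cooperation.
mine, theirs = [], []
for _ in range(6):
    mine.append(tit_for_tat(mine, theirs))
    theirs.append("D")

print(mine)  # ['C', 'D', 'D', 'D', 'C', 'D']
```

Note how the player retaliates, forgives once the rut is three moves deep, then retaliates again when the olive branch is refused.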

What does this have to do with AI self-driving cars?

At the Cybernetic Self-Driving Car Institute, we are developing AI for self-driving cars and there will be instances when a self-driving car will need to make these kinds of prisoner dilemma decisions, such as the case of the open gates and deciding who goes first.

You’ve likely heard about the famous example of the self-driving car that came to a four-way stop sign and waited for the other cars to go ahead, which it did because the other cars were driven by humans and those humans realized that if they rolled through the stop sign they could intimidate the self-driving car. The AI of the self-driving car had been developed to avoid potentially leading to a crash and so it sat at the stop sign waiting for the other cars to give it a turn to go.

If we had only self-driving cars on the roadways, presumably they would have each neatly stopped at the stop sign, and then would have abided by the usual rule of the first one there goes, or that the one to the right goes, or something like that. There would not have been a stand-off.  They might even be able to communicate with each other via a local V2V (vehicle to vehicle communication system).

But, we are going to have a mixture of both human driven cars and AI driven self-driving cars for many years, if not even forever, and so the AI of the self-driving car cannot assume that the other cars are being driven by AI. The AI needs to know how to deal with human drivers.

Suppose your self-driving car is on the freeway and a car in the next lane signals that it wants into the lane of the self-driving car. Should it let the other car in? If the AI does this, it could be that pushy human drivers realize that the AI is a sucker, and the next thing you know all the other cars around the self-driving car are trying to also cut into the lane. At some point, the AI needs to know when to allow someone else in and when not to do so.

If the AI plays the always-cooperate mode, it will be just like the prisoner’s dilemma in that others will always betray the AI because they know that the AI will cave in. Don’t think we want that.

In fact, there might be some owners of self-driving cars that will insist they want their self-driving car AI to be the betraying type. Just as they themselves are perhaps that egocentric sort of person, they might want their self-driving car to have the same kind of dominant approach to driving. You might be thinking that we don’t need to let such car owners have their way, and that maybe the automakers should make all self-driving cars be the cooperating type. Or, maybe we should have federal and state regulations that say an AI cannot be the betraying type, which would force all AI to be the same way. Again, this is highly questionable and raises the same points made earlier about the mix of human drivers and AI drivers.

I realize you might find it shocking to think that the AI could potentially be a pushy driver and insist on getting its way. Imagine a human driver that encounters such a self-centered self-driving car. Will the human driver have road rage against the AI self-driving car? “No darned machine is going to take cuts in front of me,” you can just hear the human driver screaming in anger. Fortunately, we are unlikely to get any road rage from the AI when the human cuts it off while driving, though if we train the AI by the way that humans drive, it could very well have a hidden and embedded road rage within its deep learning neural network.

The ability to discern what to do in prisoner’s dilemma circumstances is a needed skill for the AI of any self-driving car that is seeking to be a Level 5 (a Level 5 is a true self-driving car that can do whatever a human driver can do). Besides providing that type of AI skill, the other aspect is whether to allow the self-driving car owner or human occupant to modify that behavior. For example, the voice command system of the AI in the self-driving car could interact with the owner or occupant and find out which dominant strategy to use, allowing the human owner or occupant to select whatever they prefer in the situation. If you are late for work, maybe you go with betray, while if you are on a leisurely drive and in no rush then maybe you choose cooperate. Or, maybe the AI needs to ascertain the type of person you are and take on your personality. It’s a tough driving world out there, and tit-for-tat is just one of many ways for your AI to make its way through the world.
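As a toy illustration of how an occupant-selected preference might map onto lane behavior, here is a minimal sketch. The mode names, the exploitation threshold, and the function itself are assumptions made for illustration only; no real self-driving car exposes an API like this:

```python
def choose_merge_action(mode: str, recent_yields_exploited: int) -> str:
    """Decide whether to yield to a car signaling to merge into our lane.

    mode: "cooperate" (always yield), "betray" (never yield), or
          "tit_for_tat" (yield until surrounding drivers exploit it).
    recent_yields_exploited: how many recent yields were followed by
          additional cars pushing in (a hypothetical measure of being
          treated as a "sucker").
    """
    if mode == "cooperate":
        return "yield"
    if mode == "betray":
        return "hold_lane"
    # Tit-for-tat style: stay cooperative until a pattern of exploitation
    # emerges, then stop yielding for a while.
    return "hold_lane" if recent_yields_exploited >= 3 else "yield"
```

The point of the sketch is only that the strategy choice can be a parameter of the driving policy rather than something hard-coded by the automaker.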

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.

Adobe’s CTO Leads Company’s AI Business Strategy

By Ron Miller, TechCrunch Reporter There isn’t a software company out there worth its salt that doesn’t have some kind of artificial intelligence initiative in progress right now. These organizations understand that AI is going to be a game-changer, even if they might not have a full understanding of how that’s going to work just […]

By Ron Miller, TechCrunch Reporter

There isn’t a software company out there worth its salt that doesn’t have some kind of artificial intelligence initiative in progress right now. These organizations understand that AI is going to be a game-changer, even if they might not have a full understanding of how that’s going to work just yet.

In March at the Adobe Summit, I sat down with Adobe executive vice president and CTO Abhay Parasnis, and talked about a range of subjects with him including the company’s goal to build a cloud platform for the next decade — and how AI is a big part of that.

Parasnis told me that he has a broad set of responsibilities starting with the typical CTO role of setting the tone for the company’s technology strategy, but it doesn’t stop there by any means. He also is in charge of operational execution for the core cloud platform and all the engineering building out the platform — including AI and Sensei. That includes managing a multi-thousand person engineering team. Finally, he’s in charge of all the digital infrastructure and the IT organization — just a bit on his plate.

Ten years down the road

The company’s transition from selling boxed software to a subscription-based cloud company began in 2013, long before Parasnis came on board. It has been a highly successful one, but Adobe knew it would take more than simply shedding boxed software to survive long-term. When Parasnis arrived, the next step was to rearchitect the base platform in a way that was flexible enough to last for at least a decade — yes, a decade.

“When we first started thinking about the next generation platform, we had to think about what do we want to build for. It’s a massive lift and we have to architect to last a decade,” he said. There’s a huge challenge because so much can change over time, especially right now when technology is shifting so rapidly.

That meant that they had to build in flexibility to allow for these kinds of changes over time, maybe even ones they can’t anticipate just yet. The company certainly sees immersive technology like AR and VR, as well as voice as something they need to start thinking about as a future bet — and their base platform had to be adaptable enough to support that.

Making Sensei of it all

But Adobe also needed to get its ducks in a row around AI. That’s why around 18 months ago, the company made another strategic decision to develop AI as a core part of the new platform. They saw a lot of companies looking at a more general AI for developers, but they had a different vision, one tightly focused on Adobe’s core functionality. Parasnis sees this as the key part of the company’s cloud platform strategy. “AI will be the single most transformational force in technology,” he said, adding that Sensei is by far the thing he is spending the most time on.

The company began thinking about the new cloud platform with the larger artificial intelligence goal in mind, building AI-fueled algorithms to handle core platform functionality. Once they refined them for use in-house, the next step was to open up these algorithms to third-party developers to build their own applications using Adobe’s AI tools.

It’s actually a classic software platform play, whether the service involves AI or not. Every cloud company from Box to Salesforce has been exposing their services for years, letting developers take advantage of their expertise so they can concentrate on their core knowledge. They don’t have to worry about building something like storage or security from scratch because they can grab those features from a platform that has built-in expertise and provides a way to easily incorporate it into applications.

The difference here is that it involves Adobe’s core functions, so it may be intelligent auto cropping and smart tagging in Adobe Experience Manager or AI-fueled visual stock search in Creative Cloud. These are features that are essential to the Adobe software experience, which the company is packaging as an API and delivering to developers to use in their own software.

Whether or not Sensei can be the technology that drives the Adobe cloud platform for the next 10 years, Parasnis and the company at large are very much committed to that vision. We should see more announcements from Adobe in the coming months and years as they build more AI-powered algorithms into the platform and expose them to developers for use in their own software.

Parasnis certainly recognizes this as an ongoing process. “We still have a lot of work to do, but we are off in an extremely good architectural direction, and AI will be a crucial part,” he said.

Read the source article at TechCrunch.

IBM Attracting Developers With AI and Open Source ML Projects

Already a leader in the advancement of artificial intelligence, IBM has brought AI technology to developers with open arms. IBM recently launched a series of projects for developers to access open source code and services to build AI and machine learning applications. The vendor wants to democratize these technologies, so they can be easily accessed […]

Already a leader in the advancement of artificial intelligence, IBM has brought AI technology to developers with open arms.

IBM recently launched a series of projects for developers to access open source code and services to build AI and machine learning applications. The vendor wants to democratize these technologies, so they can be easily accessed and consumed by developers in open source communities and within the enterprises, said Angel Diaz, IBM’s vice president of developer advocacy and technology, who oversees the vendor’s developer outreach.

IBM has expanded the focus of its Center for Open-Source Data and AI Technologies in San Francisco — formerly the Spark Technology Center — to cover the enterprise AI lifecycle, which examines the gamut of AI and machine learning technologies with an initial focus on deep learning, Diaz said at the IBM Think 2018 conference last month.

The idea is to lower the barrier to entry and make it easier for developers to apply AI and machine learning techniques to business processes.

As part of this expansion, IBM added more data scientists and AI engineers, which has resulted in new projects such as the Model Asset eXchange (MAX) and the Fabric for Deep Learning (FfDL), which is pronounced “fiddle.”

MAX is an open source ecosystem for data scientists and AI developers to share and consume models that use machine learning engines, such as TensorFlow, PyTorch and Caffe2, Diaz said. It also provides a standard approach to classify, annotate, and deploy these models for prediction and inferencing. Developers can customize the models in IBM’s new Watson Studio AI application development platform. Additionally, developers can train and deploy MAX models for production workloads that use Watson Studio, such as internet-of-things applications, said Guido Jouret, chief digital officer at ABB.

IBM’s MAX not only avoids the cost and time for developers to create these models themselves, but they also get access to the open source community to continually add and improve on these models, said Kathleen Walch, senior analyst at Cognilytica, based in Washington, D.C.

“It helps level the playing field for smaller companies [that] don’t have as much data or resources,” she said.

Meanwhile, FfDL presents a cloud-native service for popular open source frameworks TensorFlow, Caffe and PyTorch. It uses Kubernetes to provide a scalable, fault-tolerant deep learning framework. IBM Watson Studio’s Deep Learning as a Service capability is based on FfDL.

Read the source article at TechTarget.

Apple, IBM Couple Watson with Core ML for Expanded Machine Learning

Apple  and IBM  may seem like an odd couple, but the two companies have been working closely together for several years now. That has involved IBM sharing its enterprise expertise with Apple and Apple sharing its design sense with IBM. The companies have actually built hundreds of enterprise apps running on iOS devices. Today, they took that friendship a step further when they […]

Apple and IBM may seem like an odd couple, but the two companies have been working closely together for several years now. That has involved IBM sharing its enterprise expertise with Apple and Apple sharing its design sense with IBM. The companies have actually built hundreds of enterprise apps running on iOS devices. Today, they took that friendship a step further when they announced they were providing a way to combine IBM Watson machine learning with Apple Core ML to make the business apps running on Apple devices all the more intelligent.

The way it works is that a customer builds a machine learning model using Watson,  taking advantage of data in an enterprise repository to train the model. For instance, a company may want to help field service techs point their iPhone camera at a machine and identify the make and model to order the correct parts. You could potentially train a model to recognize all the different machines using Watson’s image recognition capability.

The next step is to convert that model into Core ML and include it in your custom app. Apple introduced Core ML at the Worldwide Developers Conference last June as a way to make it easy for developers to move machine learning models from popular model building tools like TensorFlow, Caffe or IBM Watson to apps running on iOS devices.

After creating the model, you run it through the Core ML converter tools and insert it in your Apple app. The agreement with IBM makes it easier to do this using IBM Watson as the model building part of the equation. This allows the two partners to make the apps created under the partnership even smarter with machine learning.

“Apple developers need a way to quickly and easily build these apps and leverage the cloud where it’s delivered. [The partnership] lets developers take advantage of the Core ML integration,” Mahmoud Naghshineh, general manager for IBM Partnerships and Alliances explained.

To make it even easier, IBM also announced a cloud console to simplify the connection between the Watson model building process and inserting that model in the application running on the Apple device.

Over time, the app can share data back with Watson and improve the machine learning algorithm running on the edge device in a classic device-cloud partnership. “That’s the beauty of this combination. As you run the application, it’s real time and you don’t need to be connected to Watson, but as you classify different parts [on the device], that data gets collected and when you’re connected to Watson on a lower [bandwidth] interaction basis, you can feed it back to train your machine learning model and make it even better,” Naghshineh said.

Read the source article at TechCrunch.

How Three AI Software Companies Position to Deliver on Projects Incorporating AI

AI software companies today are trying to figure out how to make a buck, what configuration of software and services is going to work. It’s difficult to do in such a rapidly-changing environment. Here we look at three companies – Veritone, Sapient and Cognitive Scale  – to see how they are going about it. Veritone […]

AI software companies today are trying to figure out how to make a buck, what configuration of software and services is going to work. It’s difficult to do in such a rapidly-changing environment. Here we look at three companies – Veritone, Sapient and Cognitive Scale  – to see how they are going about it.

Veritone in December announced the general availability of its AI Developer application. Developers of cognitive engines, simulation models usually employing machine learning algorithms, can plug them into Veritone’s aiWARE platform.

“AI needs an operating system,” said Chad Steelberg, chairman and CEO of Veritone, in a keynote talk at the recent AI World in Boston. “Veritone has built an AI operating system.”

He referred to his company’s approach as an “ensemble learning process” featuring a set of cognitive engines working together to perform computing tasks, much as operating systems normalize device drivers across hardware. Veritone has cognitive engines for speech recognition and for computer vision, including face, logo, vehicle and license plate recognition.

Sentiment analysis, action classification, and text and visual content moderation engines are in the near-term pipeline, Veritone announced in a recent press release.

The funnel of cognitive engines numbers over 7,000, being developed by 5,000 companies, Steelberg said at AI World. The company expects this number to increase with the release of Veritone Developer. “We have engines in 14 classes of cognition, and we have developed none of them,” Steelberg said. “We left it up to the markets.”

Engines are being added at the rate of about two per week. The engine developer gets a royalty when the engine is used.

The product was available in a beta release to a select group of partners last year. Veritone Developer supports RESTful and GraphQL API integrations. It supports engine development in major categories of cognition including transcription, translation, face and object recognition, audio/video fingerprinting, optical character recognition, geolocation, and logo recognition. Veritone Developer uses a Conductor orchestration layer to optimize results.

Two companies working with Veritone Developer were quoted in a press release issued at AI World. “We are delighted to have deployed our first engine with Veritone,” stated Dr. Arlo Faria, founder of Remeeting, a developer of transcription and speaker diarization (partitioning an audio input stream into segments) technology. “Veritone Developer offers us the ability to manage training data sets and test our engines with curated libraries of representative client data,” Dr. Faria added.

“With Veritone Developer, we can more easily deliver our state-of-the-art cognitive engines to new clients across a variety of industries to generate actionable information from data that was previously inaccessible,” stated Aaron Edell, co-founder of Machine Box, a company focused on machine learning technology. “This is an ecosystem built by developers for developers,” Edell stated.

In an example of how the system has been used, Steelberg said a UK client in law enforcement wanted a “person of interest” application. They wanted to look through video to find people with both weapons and tattoos. Veritone identified four companies that could do the identification, and within 48 hours, their engines were working with the system.

Later in December, Veritone announced the acquisition of Atigeo Corp., a provider of advanced data analytics software. The acquisition includes a cooperative distributed inference system, based on proprietary algorithms and Hamiltonian models, which describe an operator corresponding to the total energy of a system. This enables queries within huge bodies of unstructured data, where traditional approaches are impractical or impossible. The software features a self-adapting and self-learning design.

“This strategic acquisition will build on our data science foundation,” Steelberg stated in a press release.

Veritone has hired several former Atigeo data scientists and software engineers, including Wolf Kohn, Ph.D, an authority on optimal hybrid control and quantum control, who has worked at Lockheed Corp. and as a professor at Rice University and the University of Washington.

Sapient Brings Emerging Tech Expertise to Services

A different approach to making it in the AI software business is being taken by Sapient, which is pursuing a services model approach, assisted by its expertise in building AI systems.

Sapient is the technology arm of a large media agency. “We help the CMOs and CEOs understand the value proposition of incorporating AI into their business,” said Brian Martin, director of technology and Cognitive Architecture lead for Sapient, in a meeting at AI World. “The sooner they get past the hype, the better.”

Sapient aims to help clients find value with emerging technology including AI, blockchain, cloud computing and quantum computing. “As a partner with our clients, we help them find value with emerging technology, and we help with strategy and execution. We have expertise across multiple disciplines,” Martin said. “We target companies that want to transform their business but don’t know how.”

Sapient was bought in late 2014 by Publicis Groupe, a French multinational advertising and public relations firm, for consideration of $3.7 billion. Sapient had been a services company focused on the domains of marketing, omni-channel commerce and consulting. Since then, a “merging and melding” of the two companies has taken place, and Sapient has built on its experience running back-end systems to tie more strongly into the end-user experience, Martin said.

At the time of the acquisition, Sapient had three units: SapientNitro, Sapient Global Markets and Sapient Government Services.

In a recent win, Sapient Global Markets announced a successful going live of a project to implement a software risk platform from Murex at Nationwide Building Society, a UK financial services provider. Murex provides technology solutions for trading in financial markets; they partner with Sapient.

“Nationwide will now be able to reduce the amount of internal reconciliations while improving accuracy and operational efficiency,” stated Philippe Helou, co-founder and managing partner at Murex, in a press release. “These benefits have been delivered thanks to the strong partnership between Murex, Nationwide and Sapient,” he stated.

CognitiveScale Offers the Cortex Platform, Leverages IBM Watson Experience

CognitiveScale positions itself with “augmented intelligence” software to solve complex business problems for the financial services, healthcare and digital commerce markets. The company offers the Cortex platform to help enterprises apply AI and blockchain technology to increase user engagement, improve decision-making and enable self-learning business processes.

CognitiveScale just recently announced Cortex 5, the next generation of its cloud software, aimed at speeding up implementations and return on investment of enterprise-grade AI systems. The company observes an “AI adoption gap” today and cites these reasons: businesses are not sure which problems to solve using machine learning, computer vision and blockchain; businesses lack the tools, skills and methods required to build, deploy and manage enterprise AI systems; and businesses may not have access to high-value data, models and algorithms suitable for their industry.

“CognitiveScale’s mission is to remove the barriers to AI adoption and solve complex business problems at scale in financial services, healthcare, and digital commerce markets,” stated Matt Sanchez, CTO and co-founder, CognitiveScale, in a press release. “Our Cortex 5 platform helps businesses derive rapid benefit from AI powered business processes by bridging the data, skills, and tooling gaps between data science workflows and the software development lifecycle.”

Prior to founding CognitiveScale, Sanchez was the head of IBM Watson Labs and built some of the earliest cognitive systems using IBM Watson for banking, healthcare, and insurance industries.

Cortex 5 is designed to help businesses with limited machine learning expertise start building their own high-quality AI systems through three interrelated cloud-based software offerings:

AI Marketplace: An online AI collaboration system that brings together many industry-specific data, models, algorithms, and digital professional services to help business experts, researchers, data scientists and developers work through problems and move ideas forward;

AI Platform: A lifecycle management platform that bridges the tooling gap between data science workflows and the software development lifecycle and radically reduces the skills and expertise required to design, deploy, and manage complex AI systems.

AI Systems:  A family of trained and proven AI software agents that deliver AI-powered personalization for consumer engagement (engage agents) and AI-powered process automation (amplify agents).  These agents are delivered as a service and built and managed by the Cortex AI Platform for enterprise grade AI performance and assurance.

Cortex 5 is available on both Amazon AWS and Microsoft Azure cloud environments and supports both enterprise and hybrid cloud deployments.

Cognitive Scale was founded in 2013 by a group of entrepreneurs and former executives out of IBM Watson. Clients include Barclays, Exxon Mobil, USAA, MD Anderson and NBC Universal.

Cameron Davies, a senior VP of Data Science at NBC Universal, stated in a press release, “CognitiveScale is a leader in the industry, thinking through how to ease adoption barriers to make AI accessible and enterprise ready.”

The young AI software industry is taking shape. These three companies represent slightly different approaches but all aiming to simplify the process of building and deploying applications that exploit AI. We look forward to tracking the progress of these three and other companies, especially writing about experiences with deployed projects.

  • By John P. Desmond, AI Trends Editor

Gartner at AI World: AI Application Development Is Stuck in 1958

AI World 2017 Boston — If we compare the application of artificial intelligence (AI) in the enterprise to the history of computer innovation, we’re running about 60 years behind. Tom Austin, vice president and fellow at Stamford, Conn.-based Gartner, shared that thought in his keynote address about AI realities recently at the AI World Conference, taking place at the […]

AI World 2017 Boston — If we compare the application of artificial intelligence (AI) in the enterprise to the history of computer innovation, we’re running about 60 years behind.

Tom Austin, vice president and fellow at Stamford, Conn.-based Gartner, shared that thought in his keynote address about AI realities recently at the AI World Conference, taking place at the Boston Marriott Copley Place hotel.

Austin compared the current state of AI application developments to the year 1958 in the evolution of computers.

“We’re in a great period of promise but great turbulence as well,” he said. “You might get me to bump up to [1964] with the emergence of the IBM System/360.”

Austin’s thesis: AI in the enterprise exists, sure. But don’t fall for the AI hype, he warned.

AI Has a Long Way to Go

Why is AI application development stuck 59 years in the past?

In 1958, people using computers had to provide their own sorting capabilities; no system provided sort, Austin said. Further, there was a lack of standards, a dearth of off-the-shelf applications, only bespoke solutions, few engineering cookbooks and a high risk factor with few, if any, safety systems.

“We didn’t have software vendors until the early ’80s,” Austin said. “We do have some application vendors today in AI. The production applications that have rolled out to use AI today are focused on one area: customer-facing applications. And that’s where you’ll find a lot of activity and interest…. So it feels like 1958 all over again in terms of maturity and we have a long way to go…. Look at all the progress we had to make in computers to get to where we are today.”

Gartner AI Numbers Tell the Story

Austin said the science behind AI has been impressive. The actual application of it? Not so much yet.

He backed that with numbers from Gartner. The research firm interviewed 3,806 CIOs around the world about the application of AI in their companies. Austin shared those numbers in advance of the report’s scheduled release in the new year. The company found only 4 percent of respondents had at least one AI-based application in production mode. Another 4 percent claimed they were in the process of putting them in place.

“So we’re not there,” Austin said. “You shouldn’t be disappointed by that. We have a long way to go.”

Both Google and Microsoft (and Amazon, we found) have recently announced the availability of AI consulting services, Austin said. “Why?” he asked. “Because the applications aren’t there. And so we need five generations of applications potentially to get where we think we need to be.”

Read the source article at CMS Wire.