HBR: 3 Types of Real World AI to Support Your Business Today

In 2013, the MD Anderson Cancer Center launched a “moon shot” project: diagnose and recommend treatment plans for certain forms of cancer using IBM’s Watson cognitive system. But in 2017, the project was put on hold after costs topped $62 million—and the system had yet to be used on patients. At the same time, the cancer center’s IT group was experimenting with using cognitive technologies to do much less ambitious jobs, such as making hotel and restaurant recommendations for patients’ families, determining which patients needed help paying bills, and addressing staff IT problems.

The results of these projects have been much more promising: The new systems have contributed to increased patient satisfaction, improved financial performance, and a decline in time spent on tedious data entry by the hospital’s care managers. Despite the setback on the moon shot, MD Anderson remains committed to using cognitive technology—that is, next-generation artificial intelligence—to enhance cancer treatment, and is currently developing a variety of new projects at its center of competency for cognitive computing.

The contrast between the two approaches is relevant to anyone planning AI initiatives. Our survey of 250 executives who are familiar with their companies’ use of cognitive technology shows that three-quarters of them believe that AI will substantially transform their companies within three years. However, our study of 152 projects in almost as many companies also reveals that highly ambitious moon shots are less likely to be successful than “low-hanging fruit” projects that enhance business processes. This shouldn’t be surprising—such has been the case with the great majority of new technologies that companies have adopted in the past. But the hype surrounding artificial intelligence has been especially powerful, and some organizations have been seduced by it.

In this article, we’ll look at the various categories of AI being employed and provide a framework for how companies should begin to build up their cognitive capabilities in the next several years to achieve their business objectives.

Three Types of AI

It is useful for companies to look at AI through the lens of business capabilities rather than technologies. Broadly speaking, AI can support three important business needs: automating business processes, gaining insight through data analysis, and engaging with customers and employees.

Process automation.

Of the 152 projects we studied, the most common type was the automation of digital and physical tasks—typically back-office administrative and financial activities—using robotic process automation technologies. RPA is more advanced than earlier business-process automation tools, because the “robots” (that is, code on a server) act like a human inputting and consuming information from multiple IT systems. Tasks include:

  • transferring data from e-mail and call center systems into systems of record—for example, updating customer files with address changes or service additions;
  • replacing lost credit or ATM cards, reaching into multiple systems to update records and handle customer communications;
  • reconciling failures to charge for services across billing systems by extracting information from multiple document types; and
  • “reading” legal and contractual documents to extract provisions using natural language processing.

RPA is the least expensive and easiest to implement of the cognitive technologies we’ll discuss here, and typically brings a quick and high return on investment. (It’s also the least “smart” in the sense that these applications aren’t programmed to learn and improve, though developers are slowly adding more intelligence and learning capability.) It is particularly well suited to working across multiple back-end systems; a minimal sketch of this pattern appears after the NASA example below.

At NASA, cost pressures led the agency to launch four RPA pilots in accounts payable and receivable, IT spending, and human resources—all managed by a shared services center. The four projects worked well—in the HR application, for example, 86% of transactions were completed without human intervention—and are being rolled out across the organization. NASA is now implementing more RPA bots, some with higher levels of intelligence. As Jim Walker, project leader for the shared services organization, notes, “So far it’s not rocket science.”
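
As a purely illustrative sketch of the pattern described above, the following Python snippet plays the role of a “robot” that reads address-change requests exported from a call-center system and applies them to a customer system of record. The file name, field names, and database schema are invented for the example; they do not correspond to any vendor’s RPA product.

```python
# Hypothetical sketch of an RPA-style bot: read address-change requests
# exported from a call-center system and apply them to a customer
# "system of record". All file names, fields, and rules are assumptions.
import csv
import sqlite3

def load_system_of_record() -> sqlite3.Connection:
    """Create an in-memory stand-in for the customer master database."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (id TEXT PRIMARY KEY, address TEXT)")
    conn.executemany(
        "INSERT INTO customers VALUES (?, ?)",
        [("C001", "1 Old Street"), ("C002", "9 Elm Avenue")],
    )
    return conn

def apply_address_changes(conn: sqlite3.Connection, export_path: str) -> int:
    """Apply each pending address change found in the call-center export."""
    applied = 0
    with open(export_path, newline="") as f:
        for row in csv.DictReader(f):
            # Only act on rows the upstream system marked as address changes.
            if row.get("request_type") != "address_change":
                continue
            cur = conn.execute(
                "UPDATE customers SET address = ? WHERE id = ?",
                (row["new_address"], row["customer_id"]),
            )
            applied += cur.rowcount  # rows actually updated
    conn.commit()
    return applied

if __name__ == "__main__":
    conn = load_system_of_record()
    # In practice the export would come from the call-center system; here we
    # write a tiny sample file so the sketch runs end to end.
    with open("call_center_export.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["customer_id", "request_type", "new_address"])
        writer.writerow(["C001", "address_change", "42 New Road"])
    print("updated rows:", apply_address_changes(conn, "call_center_export.csv"))
```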

One might imagine that robotic process automation would quickly put people out of work. But across the 71 RPA projects we reviewed (47% of the total), replacing administrative employees was neither the primary objective nor a common outcome. Only a few projects led to reductions in head count, and in most cases, the tasks in question had already been shifted to outsourced workers. As technology improves, robotic automation projects are likely to lead to some job losses in the future, particularly in the offshore business-process outsourcing industry. If you can outsource a task, you can probably automate it.

Read the source article at Harvard Business Review.

5 Myths About Cognitive Computing

Artificial intelligence (AI) is one of the most frequently discussed topics in business today, but even more than most new technologies, its promise is sometimes obscured by a set of lingering myths—particularly among those whose exposure to the technology has been limited.

Professionals with first-hand experience have a different perspective, according to the 2017 Deloitte State of Cognitive Survey, which is based on interviews with 250 business executives who have already begun adopting and using AI and cognitive technologies. The responses of these early adopters shed considerable light on the current state of cognitive technology in organizations. Along the way, they help dispel five of the most persistent myths.

Myth 1: Cognitive is all about automation

It is rare to find a media report about AI that doesn’t speculate about job losses. Much of the reason for that is the commonly held belief that the technology’s primary purpose is automating human work. But that’s hardly the full story—in fact, there are significant uses for AI that do not involve substituting machine labor for human labor.

A Deloitte analysis of hundreds of AI applications in every industry reveals that these applications tend to fall into three categories: product, process, and insight. Product applications embed cognitive technologies into products or services to help provide a better experience for the end user, whether by enabling “intelligent” behavior or a more natural interface or by automating some of the steps a user normally performs. Process applications use cognitive technology to enhance, scale, or automate business processes, while insight applications use AI such as machine learning and computer vision to analyze data to reveal patterns, make predictions, and guide better decisions. In some cases, these technologies can be used to automate human work, but often they are used to do work that no human could have done otherwise.

Survey respondents clearly believe AI is important for more than just automation. While 92 percent say it is important or very important in their internal business processes, 87 percent rank it comparably for the products and services they sell. Cutting jobs through automation falls at the bottom of respondents’ list of potential benefits.

Myth 2: Cognitive kills jobs

Hand in hand with the belief that AI is all about automation is the expectation that it will destroy countless jobs. While it’s impossible to know what will happen in the distant future, both the objectives and the predictions of survey respondents suggest that job loss won’t be a major outcome. Only 7 percent of respondents selected “reduce headcount through automation” as their first choice among nine potential benefits of the technology; just 22 percent chose it among their top three.

When asked about the likelihood of job loss in the near future, respondents were similarly upbeat. Just over half expect that augmentation—smart machines and humans working side by side—will be the most likely scenario three years from now. Only 11 percent expect substantial job displacement; a larger percentage expect job gains or no substantial impact on jobs.

Respondents are more likely to be concerned about substantial job loss in the more distant future—22 percent expect it to happen in 10 years—but even then, a larger proportion (28 percent) expect augmentation to be the most likely outcome. The same percentage anticipate brand-new jobs.

Myth 3: The financial benefits are still remote

Many people view AI as a futuristic technology dominated by a handful of tech giants making headlines with high-profile applications. They believe most companies will not be able to achieve real financial benefits anytime soon. There is some truth to this view: The tech giants are indeed at the forefront of AI R&D and have capabilities not available to everyone. On the other hand, there are ordinary companies in every industry that have deployed AI and reaped financial benefits.

Read the source article at the Wall Street Journal.

How Blockchain Technology and Cognitive Computing Work Together

When it comes to revolutionary technology, the blockchain and cognitive computing are two at the top of the list in 2018. With these technologies finally being put to use in practical applications, we’re learning more and more about what they can do on their own—and together. Let’s take a look at how some industries can take advantage of this powerful combination.

Before we can discuss what these two technologies can accomplish together, it’s important to understand them separately.

Cognitive computing essentially uses advanced artificial intelligence systems to create a “thinking” computer. Deep learning allows cognitive computers to learn and adapt as they receive new data, rather than simply executing logic-based commands as computers have traditionally done. Because the technology is evolving and encompasses many different AI systems, there is no standardized definition; the term is best used to describe computer systems that mimic the human brain.
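
To illustrate the difference between executing fixed logic and learning from data, here is a small Python sketch of an online perceptron that adjusts its weights every time it sees a new labeled example. It is a toy linear model, not a deep learning system, and the data and learning rate are invented for the example.

```python
# Toy illustration of learning from data: an online perceptron adjusts its
# weights with every new labeled example it sees, instead of following a
# hand-written rule. The data and hyperparameters are made up.
import random

def train_online(samples, lr=0.1, epochs=20):
    """samples: list of ((x1, x2), label) pairs with label in {0, 1}."""
    w1, w2, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        random.shuffle(samples)
        for (x1, x2), label in samples:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            error = label - pred        # 0 if correct, otherwise +1 or -1
            w1 += lr * error * x1       # nudge the boundary toward the data
            w2 += lr * error * x2
            b += lr * error
    return w1, w2, b

if __name__ == "__main__":
    # The hidden rule is "label 1 when x1 + x2 > 1". The program is never
    # told this rule; it has to recover it from labeled examples.
    data = []
    for _ in range(200):
        x1, x2 = random.random(), random.random()
        data.append(((x1, x2), int(x1 + x2 > 1)))
    w1, w2, b = train_online(data)
    x = (0.9, 0.8)
    print("learned parameters:", round(w1, 2), round(w2, 2), round(b, 2))
    print("prediction for", x, "->", int(w1 * x[0] + w2 * x[1] + b > 0))
```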

The blockchain, a new system for storing information and processing transactions, was originally created to support bitcoin, the world’s leading cryptocurrency. It’s different from most databases because it uses a distributed ledger rather than a centralized database. In basic terms, that means the information is distributed across thousands of computers on the network instead of being stored in one location. The ledger is updated regularly, and everyone on the network can view it.

This makes the blockchain more secure than a traditional database since a hacker cannot compromise the whole system by breaching one computer. Today, the blockchain is becoming popular in some industries for its superior security. Cybersecurity is a growing concern, and the blockchain could be one way some industries can reduce the number of breaches.
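
To make the distributed-ledger idea more concrete, here is a minimal Python sketch of a hash-linked chain of records, showing why altering one entry invalidates everything after it. It is a teaching toy under simplifying assumptions (no network, no consensus, invented example data), not bitcoin’s actual protocol.

```python
# Minimal hash-linked ledger: each block stores the hash of the previous
# block, so tampering with any record breaks every later link.
# Teaching toy only: no network, no consensus, no mining.
import hashlib
import json

def block_hash(block: dict) -> str:
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain: list, data: str) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

def is_valid(chain: list) -> bool:
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

if __name__ == "__main__":
    chain = []
    for record in ["pay 5 to A", "pay 2 to B", "pay 7 to C"]:
        add_block(chain, record)
    print("valid before tampering:", is_valid(chain))   # True
    chain[1]["data"] = "pay 200 to B"                    # attacker edits one block
    print("valid after tampering:", is_valid(chain))     # False
```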

Cognitive Computing and Blockchain – the “IoT Dream”

So how do these two technologies work together? Since we’ve only just scratched the surface on the capabilities of both the blockchain and cognitive computing, there’s still a lot of opportunity for bringing these technologies together. One of the largest areas for potential expansion is in tandem with IoT (Internet of Things) growth.

Many industries are beginning to see how using interconnected devices can help them automate and improve their processes, but there are currently limitations on scaling and security with centralized systems.

IBM, a leader in artificial intelligence, has already integrated its Watson supercomputer into a platform for IoT, allowing businesses to make better use of the data they collect using these devices. IoT devices collect the data, but the majority of this data is “dark”, meaning that it just sits in storage and isn’t used for anything. Cognitive computing has the ability to process this data in ways humans can’t—while gaining valuable insights that can be used in strategic planning and performance measurement.

So how does the blockchain fit into this equation? Mainly as a way to scale IoT usage and to secure it. IoT data can be extremely sensitive and valuable to businesses—the last thing a company wants is a data breach. Blockchain ledgers also create contextual logs, providing detailed information about anomalies and problems and pinpointing exactly where and when they occurred.

By Sarah Daren, consultant

Read the source article at RTInsights.com.

Deloitte Survey Dispels Five Myths about Cognitive Technology

In the 2017 State of Cognitive Survey, Deloitte surveyed 250 “cognitive-aware” US executives from large companies. These managers were knowledgeable about AI/cognitive technologies and informed about what their companies were doing with the technology. Their responses about their companies are subjective, but they shed considerable light on the current state of cognitive technology within organizations. Five myths that the respondents dispel are discussed below.

Myth No. 1

The main use of cognitive technologies is automating work that people do.

It is rare to find a story in the media about AI that doesn’t speculate about how the technology is destined to put lots of people out of work. (See Myth No. 2.) This is because it is widely assumed that AI is all about automating the work that people do. But this is hardly the full story. As our prior research has shown, and the survey has validated, there are significant uses for AI that do not involve substituting machine labor for human labor.

Our analysis of hundreds of AI applications in every industry has revealed that these applications tend to fall into three categories: product, process, and insight. And these applications don’t necessarily involve automating work that people do. Product applications, for instance, embed cognitive technologies into products or services to help provide a better experience for the end user, whether by enabling “intelligent” behavior, providing a more natural interface (such as natural language text or voice), or automating some of the steps a user normally performs. Process applications use cognitive technology to enhance, scale, or automate business processes. This might entail automating work that people were doing, but it also might involve doing new work that wasn’t practical to do without AI. And insight applications use AI technology such as machine learning and computer vision to analyze data in order to reveal patterns, make predictions, and guide more effective actions. Again, in some cases this can be used to automate human work. But it is also used to do work that no human could have done previously because the analysis was impractical without the use of AI.

Myth No. 2

Cognitive technologies lead to substantial loss of jobs.

It’s widely argued that cognitive technologies bring about automation-related job losses. Entire books have been written about this notion. While it’s impossible to know what will happen in the distant future with regard to this issue, both the objectives and the predictions of the survey respondents suggest that job loss won’t be a major implication of cognitive technologies.

Read the source report at Deloitte Insights.

AI Beats Humans in Reading Comprehension for First Time

Artificial intelligence programs built by Alibaba and Microsoft have beaten humans on a Stanford University reading comprehension test.

“This is the first time that a machine has outperformed humans on such a test,” Alibaba said in a statement Monday.

The test was devised by artificial intelligence experts at Stanford to measure computers’ growing reading abilities. Alibaba’s software was the first to beat the human score.

Luo Si, the chief scientist of natural language processing at the Chinese company’s AI research group, called the milestone “a great honor,” but also acknowledged that it will likely lead to a significant number of workers losing their jobs to machines.

The technology “can be gradually applied to numerous applications such as customer service, museum tutorials and online responses to medical inquiries from patients, decreasing the need for human input in an unprecedented way,” Si said in a statement.

Alibaba has already put the technology to work on Singles Day, the world’s biggest shopping bonanza, by using computers to answer a large number of customer service questions.

In a tweet, Pranav Rajpurkar, one of the Stanford researchers who developed the reading test, called Alibaba’s feat “a great start to 2018” for artificial intelligence.

The Stanford test generates questions about a set of Wikipedia articles.

For example, a human or AI program reads a passage about the history of British TV show Doctor Who and then answers questions like, “What is Doctor Who’s space ship called?” (Spoiler alert: It’s the TARDIS, for non-Doctor Who fans out there.)

Alibaba’s deep neural network model scored 82.44 on the test on January 11, narrowly beating the 82.304 scored by the human participants. A day later, Microsoft’s AI software also beat the human score, with a result of 82.650.
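
For context on how such scores are produced: SQuAD-style benchmarks compare a system’s predicted answer span against human-written answers, typically with exact-match and token-overlap (F1) metrics. The Python sketch below shows a simplified version of that scoring idea; the example answers are invented and the text normalization is reduced to the essentials.

```python
# Simplified sketch of SQuAD-style scoring: exact match and token-level F1
# between a predicted answer and a gold answer. Normalization here is just
# lowercasing and stripping punctuation; the real benchmark does more.
import re
from collections import Counter

def normalize(text: str) -> list:
    text = re.sub(r"[^a-z0-9 ]", " ", text.lower())
    return text.split()

def exact_match(pred: str, gold: str) -> bool:
    return normalize(pred) == normalize(gold)

def f1(pred: str, gold: str) -> float:
    p, g = normalize(pred), normalize(gold)
    common = sum((Counter(p) & Counter(g)).values())
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(g)
    return 2 * precision * recall / (precision + recall)

if __name__ == "__main__":
    # Invented example in the spirit of the Doctor Who question above.
    gold_answer = "the TARDIS"
    predictions = ["The TARDIS", "a police box called the TARDIS"]
    for pred in predictions:
        print(pred, "| EM:", exact_match(pred, gold_answer),
              "| F1:", round(f1(pred, gold_answer), 2))
```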

“These kinds of tests are certainly useful benchmarks for how far along the AI journey we may be,” said Andrew Pickup, a spokesman for Microsoft. “However, the real benefit of AI is when it is used in harmony with humans,” he added.

Facebook, Tencent and Samsung have also previously submitted AI models to the Stanford project.

Read the source article at CNNtech.

Artificial Intelligence Investing Heading For Prime Time

Artificial intelligence is a branch of computer science that aims to create intelligent machines that teach themselves. Much of AI’s growth has occurred in the last decade. The upcoming decade, according to billionaire investor Mark Cuban, will be the greatest technological revolution in man’s history.

More progress has been achieved on artificial intelligence in the past five years than in the past five decades. Rapid machine-learning improvements have allowed computers to surpass humans at certain feats of ingenuity, doing things that at one time would have been unfathomable. IBM calls the autonomous machine learning field ‘cognitive computing.’ The cognitive computing space is bursting with innovations, a result of billions of research and investment dollars spent by large companies such as Microsoft, Google, and Facebook. IBM alone has spent $15 billion on Watson, its cognitive system, as well as on related data analytics technology.

Arthur Samuel’s checkers-playing program appeared in the 1950s; it took another 38 years for a computer to master checkers. In 1997, IBM’s Deep Blue program defeated world chess champion Garry Kasparov. Around the time Deep Blue first started learning chess, Kasparov declared, “No computer will ever beat me.” That historic accomplishment took IBM 12 years.

Artificial intelligence first hit the mainstream headlines in 2011, when IBM’s Watson beat two human contestants on TV’s Jeopardy. This was the landmark milestone of its time, especially considering one of the players was Ken Jennings, who holds the record for consecutive wins (74) on the quiz show. Getting to that moment took five years: Watson spent four of them learning the English language, and another year reading—and retaining—every single word of Wikipedia (plus a few thousand books).

No computer was ever supposed to master the game Go, but it did. Go was invented in China in 548 B.C. It is a game of ‘capture the intersection’ played on a 19×19 grid with each player deploying a combined cache of 300-plus black and white pebbles. The possible board permutations in Go vastly outnumber the board permutations of chess.

Designed by a team of researchers at DeepMind, an AI lab now owned by Google, AlphaGo was an AI system built with one specific objective: learning to play the game of Go very well. AlphaGo’s minders never gave it a playbook of strategies; they fed it tens of millions of Go moves from expert players, and the computer had to figure the game out. The concept of reinforcement learning was put to the test by way of millions of matches that the system played against versions of itself, neural network versus neural network. The results and key lessons were fed back to AlphaGo, which constantly learned and improved its game. The operative word is learned. AlphaGo not only knew how to play Go as a human would, but it moved past the human approach into a completely new way of playing.
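
As a rough illustration of the self-play idea (and emphatically not DeepMind’s actual method), the sketch below has a tabular learner improve at the much simpler game of Nim by playing against itself and feeding each game’s outcome back into its action-value table. The game, parameters, and update rule are all invented for the example.

```python
# Toy self-play learning: a tabular action-value learner plays Nim
# (21 sticks, take 1-3 per turn, whoever takes the last stick wins)
# against itself and feeds each game's outcome back into its table.
import random
from collections import defaultdict

Q = defaultdict(float)            # (sticks_left, take) -> estimated value
ACTIONS = (1, 2, 3)
ALPHA, GAMMA, EPS = 0.3, 0.9, 0.2

def choose(sticks, greedy=False):
    """Pick how many sticks to take from the current position."""
    legal = [a for a in ACTIONS if a <= sticks]
    if not greedy and random.random() < EPS:
        return random.choice(legal)                  # occasional exploration
    return max(legal, key=lambda a: Q[(sticks, a)])  # otherwise best known move

def self_play_episode():
    """One game played against itself, then learn from the final outcome."""
    sticks, history = 21, []
    while sticks > 0:
        action = choose(sticks)
        history.append((sticks, action))
        sticks -= action
    # Walk the game backwards: the player who took the last stick gets +1,
    # the opponent's preceding move gets a discounted negative value, etc.
    outcome = 1.0
    for state, action in reversed(history):
        Q[(state, action)] += ALPHA * (outcome - Q[(state, action)])
        outcome = -GAMMA * outcome

if __name__ == "__main__":
    for _ in range(50_000):
        self_play_episode()
    # Optimal play leaves a multiple of 4 sticks for the opponent, so with
    # 6 sticks left the learned policy should usually take 2.
    print("learned move with 6 sticks left:", choose(6, greedy=True))
```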

Read the source article at Forbes.

Veritone’s AI Developer Application is Generally Available

Veritone, Inc., a supplier of artificial intelligence (AI) insights and cognitive solutions, has announced the general availability of its Veritone Developer application. The application empowers developers of cognitive engines, applications and application programming interfaces (APIs) to bring new AI ideas to life through simple integration with the Veritone aiWARE platform.

Veritone Developer is a self-service development environment that empowers developers to create, submit and deploy public and private applications and cognitive engines directly into the aiWARE architecture. After a successful limited beta release to a select group of partners, Veritone Developer is now publicly available as a unique resource for machine learning experts, application development firms, and system integrators. Veritone Developer supports RESTful and GraphQL API integrations as well as engine development in major categories of cognition, including transcription, translation, face and object recognition, audio/video fingerprinting, optical character recognition (OCR), geolocation, transcoding, and logo recognition.
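
Since GraphQL integration is mentioned, here is a generic sketch of what issuing a GraphQL query from Python looks like. The endpoint URL, query fields, and token are placeholders invented for illustration; they are not Veritone’s actual schema or API.

```python
# Generic sketch of a GraphQL call over HTTP from Python. The endpoint,
# query fields, and token below are placeholders for illustration only;
# they do NOT reflect Veritone's actual schema or API.
import json
import urllib.request

GRAPHQL_ENDPOINT = "https://api.example.com/v3/graphql"   # placeholder URL
API_TOKEN = "YOUR_API_TOKEN"                               # placeholder token

# Hypothetical query: list available cognitive engines by category.
QUERY = """
query ListEngines($category: String!) {
  engines(category: $category) {
    id
    name
  }
}
"""

def run_query(query: str, variables: dict) -> dict:
    payload = json.dumps({"query": query, "variables": variables}).encode()
    req = urllib.request.Request(
        GRAPHQL_ENDPOINT,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_TOKEN}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    try:
        # Would list transcription engines if the placeholder endpoint were real.
        print(run_query(QUERY, {"category": "transcription"}))
    except Exception as exc:
        print("request failed (placeholder endpoint):", exc)
```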

“We are delighted to have deployed our first engine with Veritone,” commented Dr. Arlo Faria, founder of Remeeting, a developer of cutting-edge transcription and speaker diarization technology, in a press release. “Veritone Developer offers us the ability to manage training data sets and test our engines with curated libraries of representative client data. This is a great benefit to us, as is the overall partnership with their ecosystem team.”

“It is clear from the overwhelming interest in Veritone Developer that the AI development community values our ecosystem platform and its ability to create revenue opportunities for their cognitive engines and applications,” said Tyler Schulze, vice president and general manager of the partner ecosystem at Veritone. “Exposure to Veritone’s growing portfolio of blue-chip clients via a single interface that makes it easy to train, test, and deploy AI offerings is highly attractive to development partners while supporting our overall mission of delivering purpose-driven AI solutions across any type of organization.”

Several new speech recognition and computer vision engines, including face, logo, vehicle, and license plate recognition, have been qualified and are in the final deployment stages within Veritone Developer. Sentiment analysis, action classification, and sophisticated text and visual content moderation engines are also in the near-term pipeline. One of the world’s most valuable consumer brands has already developed a unique custom application incorporating Veritone’s best-of-breed cognitive processing, and over ten new engines have been accepted and fully deployed in the Veritone aiWARE platform during the beta period. Veritone has identified and is curating a global funnel of over 7,000 cognitive engines across 7 major classes and 60 defined categories of cognition, and expects the volume of engines deployed in aiWARE to increase significantly with the full release of Veritone Developer.

Read the full press release about Veritone Developer.