The Banking Industry Has a $1 Trillion Opportunity with AI

There are about 7.5 billion people on the planet, give or take a few. But that number pales in comparison to the number of connected devices worldwide. According to Autonomous, a financial research firm, people are outnumbered three-to-one by their smart computing devices — an estimated 22 billion in total. And the number of smart devices will continue to explode, with venture capital firms pouring $10 billion annually into AI-powered companies focusing on digitally-connected devices.

For financial institutions, their slice of this massive AI pie represents upwards of $1 trillion in projected cost savings. By 2030, traditional financial institutions can shave 22% in costs, says Autonomous in an 84-page report on AI in the financial industry. Here’s how they break down those cost savings:

  • Front Office – $490 billion in savings. Almost half of this ($199 billion) will come from reductions in the scale of retail branch networks, security, tellers, cashiers and other distribution staff.
  • Middle Office – $350 billion in savings. Simply applying AI to compliance, KYC/AML, authentication and other forms of data processing will save banks and credit unions a staggering $217 billion.
  • Back Office – $200 billion in savings. $31 billion of this will be attributed to underwriting and collections systems.

These numbers align with what other analysts and research firms have forecast. Bain & Company has pegged the savings at around $1.1 trillion, while Accenture estimates that AI will add $1.2 trillion in value to the financial industry by 2035.

In the U.S. banking sector, 1.2 million employees have already been exposed to AI in the front, middle and back offices, with almost three-quarters of workers in the front office using AI (even if they don’t know it). If you include the investment and insurance industry, there are 2.5 million U.S. financial services workers whose jobs are already being directly impacted by AI.

Use Cases for AI

Autonomous sees three primary ways in which artificial intelligence will transform the banking industry:

  1. AI technology companies such as Google and Amazon will add financial services skills to their smart home assistants, then leverage this data and interface through relationships with traditional banking providers.
  2. Technology and finance firms merge/collaborate to build full psychographic profiles of consumers across social, commercial, personal and financial data (e.g., like Tencent coupling with Ant Financial in China).
  3. The crypto community builds decentralized, autonomous organizations using open source components with the goal of shifting power back to consumers.

AI-enabled devices are already using vision and sound to gather information even more accurately than humans, and the software continues to get more human-like.

“Not only can software understand the contents of inputs and categorize it at scale,” Autonomous explains, “it has exhibited the ability to generate new examples of those inputs. Artists are now as endangered as lawyers and bankers.”

But AI still has a way to go before a computer will become the next van Gogh or Pollock. Today’s AI is “narrow,” meaning that the machines are built to react to specific events and lack general reasoning capability. That said, there are plenty of practical applications for AI that banks and credit unions are taking advantage of today.

The most mature use cases are in chatbots in the front office, antifraud and risk and KYC/AML in the middle office, and credit underwriting in the back office.

Financial institutions can use AI to power conversational interfaces that integrate financial data and account actions with algorithm-powered automatic “agents” that can hold life-like conversations with consumers.

Bank of America has announced that it is aggressively rolling out Erica, its virtual assistant, to all of its 25 million mobile banking consumers. Using voice commands, texts or touch, BofA customers can instruct Erica to give account balances, transfer money between accounts, send money with Zelle, and schedule meetings with real representatives at financial centers.
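Public details of how Erica works are scarce, so the following is only a minimal sketch of the general intent-routing pattern behind such assistants; the intent names, handlers and account data are hypothetical, not Bank of America's implementation.

```python
# Minimal sketch of intent routing for a banking assistant.
# All intents, handlers and account data here are hypothetical.

def get_balance(accounts, **slots):
    return {name: acct["balance"] for name, acct in accounts.items()}

def transfer(accounts, source=None, target=None, amount=0, **slots):
    if accounts[source]["balance"] < amount:
        return "Insufficient funds"
    accounts[source]["balance"] -= amount
    accounts[target]["balance"] += amount
    return f"Moved ${amount:.2f} from {source} to {target}"

HANDLERS = {"check_balance": get_balance, "transfer_funds": transfer}

def handle_request(intent, accounts, **slots):
    """Dispatch an intent recognized from voice, text or touch to its handler."""
    handler = HANDLERS.get(intent)
    if handler is None:
        return "Sorry, I can't help with that yet."
    return handler(accounts, **slots)

if __name__ == "__main__":
    accounts = {"checking": {"balance": 1200.0}, "savings": {"balance": 5000.0}}
    print(handle_request("check_balance", accounts))
    print(handle_request("transfer_funds", accounts,
                         source="checking", target="savings", amount=250))
```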

Biometrics and workflow and compliance automation are other strong use cases for AI. To improve the consumer experience, AI can allow a bank or credit union to authenticate a mobile payment using a fingerprint or replace a numerical passcode with voice recognition.
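As an illustration of the voice-recognition idea, here is a hedged sketch of a speaker-verification check: a fresh voice embedding is compared against an enrolled voiceprint and accepted only above a similarity threshold. The embeddings would come from a separately trained speaker-recognition model; the threshold and vector sizes here are made up.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def authenticate(enrolled_embedding, sample_embedding, threshold=0.8):
    """Accept the login only if the new voice sample matches the enrolled voiceprint."""
    return cosine_similarity(enrolled_embedding, sample_embedding) >= threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    enrolled = rng.normal(size=256)                             # stored at enrollment
    same_speaker = enrolled + rng.normal(scale=0.1, size=256)   # small session-to-session drift
    impostor = rng.normal(size=256)
    print(authenticate(enrolled, same_speaker))   # True
    print(authenticate(enrolled, impostor))       # False (with near certainty)
```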

In the middle office, AI can perform real-time regulatory checks for KYC/AML on all transactions rather than rely on more traditional methods of using batch processing to analyze only samples of consumers.
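The following is a deliberately simplified sketch of what screening every transaction in-flight could look like, as opposed to sampling in nightly batches; the watchlist, threshold and structuring rule are illustrative stand-ins for the far richer signals a production KYC/AML engine would use.

```python
# Hypothetical rules; a production engine would combine many more signals and models.
WATCHLIST = {"ACME Shell Co"}          # illustrative sanctioned counterparty
STRUCTURING_THRESHOLD = 10_000         # illustrative reporting threshold

def screen_transaction(txn, recent_txns):
    """Return a list of alert reasons for a single in-flight transaction."""
    alerts = []
    if txn["counterparty"] in WATCHLIST:
        alerts.append("counterparty on watchlist")
    if txn["amount"] >= STRUCTURING_THRESHOLD:
        alerts.append("amount at or above reporting threshold")
    # Several just-below-threshold transfers in a short window can indicate structuring.
    window = [t for t in recent_txns if t["account"] == txn["account"]]
    window_total = sum(t["amount"] for t in window) + txn["amount"]
    if txn["amount"] < STRUCTURING_THRESHOLD and window_total >= STRUCTURING_THRESHOLD:
        alerts.append("possible structuring across recent transactions")
    return alerts

if __name__ == "__main__":
    history = [{"account": "A1", "amount": 4_000, "counterparty": "Utility Co"}]
    txn = {"account": "A1", "amount": 7_000, "counterparty": "Utility Co"}
    print(screen_transaction(txn, history))   # flags possible structuring
```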

Perhaps the most promising application, says Autonomous, is using AI to incorporate social media, free text fields and even machine vision into the development of lending, investment and insurance products.

Read the source article at The Financial Brand.

HBR: 3 Types of Real World AI to Support Your Business Today

In 2013, the MD Anderson Cancer Center launched a “moon shot” project: diagnose and recommend treatment plans for certain forms of cancer using IBM’s Watson cognitive system. But in 2017, the project was put on hold after costs topped $62 million—and the system had yet to be used on patients. At the same time, the cancer center’s IT group was experimenting with using cognitive technologies to do much less ambitious jobs, such as making hotel and restaurant recommendations for patients’ families, determining which patients needed help paying bills, and addressing staff IT problems.

The results of these projects have been much more promising: The new systems have contributed to increased patient satisfaction, improved financial performance, and a decline in time spent on tedious data entry by the hospital’s care managers. Despite the setback on the moon shot, MD Anderson remains committed to using cognitive technology—that is, next-generation artificial intelligence—to enhance cancer treatment, and is currently developing a variety of new projects at its center of competency for cognitive computing.

The contrast between the two approaches is relevant to anyone planning AI initiatives. Our survey of 250 executives who are familiar with their companies’ use of cognitive technology shows that three-quarters of them believe that AI will substantially transform their companies within three years. However, our study of 152 projects in almost as many companies also reveals that highly ambitious moon shots are less likely to be successful than “low-hanging fruit” projects that enhance business processes. This shouldn’t be surprising—such has been the case with the great majority of new technologies that companies have adopted in the past. But the hype surrounding artificial intelligence has been especially powerful, and some organizations have been seduced by it.

In this article, we’ll look at the various categories of AI being employed and provide a framework for how companies should begin to build up their cognitive capabilities in the next several years to achieve their business objectives.

Three Types of AI

It is useful for companies to look at AI through the lens of business capabilities rather than technologies. Broadly speaking, AI can support three important business needs: automating business processes, gaining insight through data analysis, and engaging with customers and employees.

Process automation.

Of the 152 projects we studied, the most common type was the automation of digital and physical tasks—typically back-office administrative and financial activities—using robotic process automation technologies. RPA is more advanced than earlier business-process automation tools, because the “robots” (that is, code on a server) act like a human inputting and consuming information from multiple IT systems. Tasks include:

  • transferring data from e-mail and call center systems into systems of record—for example, updating customer files with address changes or service additions;
  • replacing lost credit or ATM cards, reaching into multiple systems to update records and handle customer communications;
  • reconciling failures to charge for services across billing systems by extracting information from multiple document types; and
  • “reading” legal and contractual documents to extract provisions using natural language processing.

RPA is the least expensive and easiest to implement of the cognitive technologies we’ll discuss here, and typically brings a quick and high return on investment. (It’s also the least “smart” in the sense that these applications aren’t programmed to learn and improve, though developers are slowly adding more intelligence and learning capability.) It is particularly well suited to working across multiple back-end systems.

At NASA, cost pressures led the agency to launch four RPA pilots in accounts payable and receivable, IT spending, and human resources—all managed by a shared services center. The four projects worked well—in the HR application, for example, 86% of transactions were completed without human intervention—and are being rolled out across the organization. NASA is now implementing more RPA bots, some with higher levels of intelligence. As Jim Walker, project leader for the shared services organization notes, “So far it’s not rocket science.”
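To make the flavor of these tasks concrete, here is a minimal, hypothetical sketch of the first one in the list above: lifting an address change out of an e-mail and writing it into a system of record. The message format, regular expression and in-memory "database" stand in for whatever connectors a real RPA tool would provide.

```python
import re

# A tiny in-memory stand-in for the system of record.
CUSTOMER_DB = {"C-1001": {"name": "Jane Doe", "address": "12 Old Road"}}

# Hypothetical message format; a real bot would use the RPA tool's e-mail connector.
ADDRESS_PATTERN = re.compile(
    r"customer id:\s*(?P<cid>C-\d+).*?new address:\s*(?P<addr>.+)",
    re.IGNORECASE | re.DOTALL,
)

def process_email(body):
    """Extract a customer id and new address from an e-mail, then update the record."""
    match = ADDRESS_PATTERN.search(body)
    if not match:
        return "routed to a human: could not parse message"
    cid, addr = match.group("cid"), match.group("addr").strip()
    if cid not in CUSTOMER_DB:
        return f"routed to a human: unknown customer {cid}"
    CUSTOMER_DB[cid]["address"] = addr
    return f"updated {cid} -> {addr}"

if __name__ == "__main__":
    email = "Hello,\nCustomer ID: C-1001\nNew address: 99 River Street, Columbus"
    print(process_email(email))
    print(CUSTOMER_DB["C-1001"])
```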

One might imagine that robotic process automation would quickly put people out of work. But across the 71 RPA projects we reviewed (47% of the total), replacing administrative employees was neither the primary objective nor a common outcome. Only a few projects led to reductions in head count, and in most cases, the tasks in question had already been shifted to outsourced workers. As technology improves, robotic automation projects are likely to lead to some job losses in the future, particularly in the offshore business-process outsourcing industry. If you can outsource a task, you can probably automate it.

Read the source article at Harvard Business Review.

Here Are the 3 Key Components of Artificial Intelligence Readiness

Artificial intelligence is the technology story of the hour, and everyone wants to dive in. However, three recent studies suggest there’s more work to be done before AI starts delivering business value.

A report from McKinsey suggests many organizations require a solid infrastructure underneath it all — it takes digital to go more digital. Data is also a vital piece of the puzzle, a survey of 2,300 executives from MIT Technology Review and PureStorage adds.

At the same time, KPMG reports that investment in AI technologies is still relatively low, though many companies have plans for future spending on the technology.

All three surveys find a high degree of optimism about AI. The majority of executives in the MIT survey, 81%, believe AI will have a positive impact on their industry in the future, and 64% are likely to consider investing in AI solutions. In addition, 83% agree AI will significantly enhance processes across industries — such as self-driving safety and improved healthcare.

These survey results identify three critical components to AI transformation:

Data. Data is a vital piece of the puzzle, but the MIT survey also sees the need to better govern and grasp it. At least 84% of respondents are concerned about the speed at which data can be received, interpreted and analyzed for AI systems, and another 83% agree that it is essential that data is analyzed for meaning and context.

Infrastructure. The underlying technology is also a concern surfaced in the surveys. The McKinsey survey finds those with a strong digital base are most likely to succeed with their AI efforts. “It appears that AI adopters can’t flourish without a solid base of core and advanced digital technologies. Companies that can assemble this bundle of capabilities are starting to pull away from the pack and will probably be AI’s ultimate winners.”

Culture. The KPMG survey finds many organizations have digital and AI efforts that are too narrowly focused, rather than elevating those to a more strategic approach. “They have not positioned themselves to transform their business and operating models so they can become and remain competitive with digital-first companies,” the report notes.

“Organizations that can power up IA [intelligent automation] efforts can radically improve operations, transform their business models and become long-term winners,” the KPMG report adds. “But piecemeal efforts that focus mainly on cutting the cost of legacy processes and reducing headcount – with, for example, siloed efforts to automate payroll, invoice processing, and customer service inquiries – will not move the needle in this new world.” The report observes that taking a strategic approach to AI and related technologies can yield business returns of 5X to 10X.

The MIT researchers agree that it is still early in the story of AI, and it’s not too late to prepare organizations and their infrastructures for the AI wave. “Organizations that work to address AI challenges and educate workers at all levels on both the promise and the reality of AI, as well as the value of data, will derive the maximum value from their data stores—value that will drive better business performance and an optimal customer experience.”

Read the source article at RTInsights.

AI Tiptoes Into the Workplace, in the Beginning of a Wave

There is no shortage of predictions about how artificial intelligence is going to reshape where, how and if people work in the future.

But the grand work-changing projects of A.I., like self-driving cars and humanoid robots, are not yet commercial products.

A more humble version of the technology, instead, is making its presence felt in a less glamorous place: the back office.

New software is automating mundane office tasks in operations like accounting, billing, payments and customer service.

The programs can scan documents, enter numbers into spreadsheets, check the accuracy of customer records and make payments with a few automated computer keystrokes.

The technology is still in its infancy, but it will get better, learning as it goes. So far, often in pilot projects focused on menial tasks, artificial intelligence is freeing workers from drudgery far more often than it is eliminating jobs.

The bots are mainly observing, following simple rules and making yes-or-no decisions, not making higher-level choices that require judgment and experience. “This is the least intelligent form of A.I.,” said Thomas Davenport, a professor of information technology and management at Babson College.

But all the signs point to much more to come. Big tech companies like IBM, Oracle and Microsoft are starting to enter the business, often in partnership with robotic automation start-ups. Two of the leading start-ups, UiPath and Automation Anywhere, are already valued at more than $1 billion. The market for the robotlike software will nearly triple by 2021, by one forecast.

“This is the beginning of a wave of A.I. technologies that will proliferate across the economy in the next decade,” said Rich Wong, a general partner at Accel, a Silicon Valley venture capital firm, and an investor in UiPath.

The emerging field has a klutzy name, “robotic process automation.” The programs — often called bots — fit into the broad definition of artificial intelligence because they use ingredients of A.I. technology, like computer vision, to do simple chores.

For many businesses, that is plenty. Nearly 60 percent of the companies with more than $1 billion in revenue have at least pilot programs underway using robotic automation, according to research from McKinsey & Company, the consulting firm.

The companies and government agencies that have begun enlisting the automation software run the gamut. They include General Motors, BMW, General Electric, Unilever, Mastercard, Manpower, FedEx, Cisco, Google, the Defense Department and NASA.

State Auto Insurance in Ohio Ahead of the Curve

State Auto Insurance Companies in Columbus, Ohio, started its first automation pilot project two years ago. Today, it has 30 software programs handling back-office tasks, with an estimated savings of 25,000 hours of human work — or the equivalent of about a dozen full-time workers — on an annualized basis, assuming a standard 2,000-hour work year.

Holly Uhl, a technology manager who leads the automation program, estimated that within two years the company’s bot population would double to 60 and its hours saved would perhaps triple to 75,000, nearly all in year-after-year savings rather than one-time projects.

Cutting jobs, Ms. Uhl said, is not the plan. The goal for the company, whose insurance offerings include auto, commercial and workers’ compensation, is to increase productivity and State Auto’s revenue with limited additions to its head count, she said.

Ms. Uhl said her message to workers is: “We’re here to partner with you to find those tasks that drive you crazy.”

Rebekah Moore, a premium auditor at the company, had one in mind. Premium auditors scrutinize insurance policies and make recommendations for changing rates. They audit less than half of the policies, Ms. Moore said.

The policies that will not be audited then have to be set aside and documented. That step, she explained, is a routine data entry task that involves fiddling with two computer programs, plugging in codes and navigating drop-down menus. It takes a minute or two. But because auditors handle many thousands of policies, the time adds up, to about an hour a day, she estimated.

Starting in May, a bot took over that chore. “No one misses that work,” Ms. Moore said.

Is she worried about the bots climbing up the task ladder to someday replace her? Not at all, she said. “We’ll find things to do with our time, higher-value work,” said Ms. Moore, 37.

On State Auto’s current path, her confidence seems justified. If the company hits its target of 75,000 hours in savings by 2020, that would be the equivalent of fewer than 40 full-time workers, compared with State Auto’s work force of 1,900. The company plans to grow in the next two years. If so, State Auto would most likely be hiring a few dozen people fewer than it would otherwise.

UiPath Envisions One Bot Per Employee

Automation companies are eager to promote the bots as helpful assistants instead of job killers. The technology, they say, will get smarter and more useful, liberating workers rather than replacing them.

“The long-term vision is to have one bot for every employee,” said Bobby Patrick, chief marketing officer for UiPath. The company, which is based in New York, recently reported that its revenue more than tripled in the first half of 2018, to a yearly rate of more than $100 million.

Mihir Shukla, chief executive of Automation Anywhere, refers to his company’s bots as “digital colleagues.” In July, the company announced it had raised a $250 million round of venture funding, valuing the company at $1.8 billion.

The market for A.I.-enhanced software automation is poised for rapid growth, but that expansion, analysts say, will ultimately bring job losses.

Forrester Research estimated that revenue would nearly triple to $2.9 billion over the next three years. And by 2021, robotic automation technology will be doing the equivalent work of nearly 4.3 million humans worldwide, Forrester predicted.

In a dynamic global labor market, that is not a clear-cut forecast of 4.3 million layoffs. The bots may do work not previously done by humans, and people may move onto new jobs.

“But these initial bots will get better, and the task harvesting will accelerate,” said Craig Le Clair, an analyst for Forrester. “For workers, there will be a mix of automation dividends and pain.”

The recent research has examined jobs as bundles of tasks, some of which seem ripe for replacement and others not. So the technology’s immediate impact will resemble the experience to date with robotic software, changing work more than destroying jobs.

For Ms. Uhl of State Auto, the most persistent pushback has come not at the company but at home, from her two young sons, Christian, 9, and Elijah, 7, who are eager to glimpse the future.

Hearing their mother talk about robots at work, they keep asking her to bring one home. “It’s not the kind of robot you can see,” Ms. Uhl said she has told her disappointed sons.

Read the source article in The New York Times.

Fluid Data Strategy Needed to Keep Tech Mapped to Business Plan

By Mahesh Lalwani, Vice President, Head of Data & Cognitive Analytics at Mphasis

In today’s world, it should no longer be acceptable to have merely adaptive data. To win customers and market share, an organization must do far more and predict which strategy will unlock the potential its data has to offer. A company must envision how it will compete against today’s known players and future disruptors. Additionally, it needs to anticipate how government rules and regulations will affect its playing field, and it must protect its brand in hostile environments.

Ask any CIO or CDO and they will tell you that it’s fairly complex.

To move an organization onto a more advanced plan of action, CIOs and other executives can think of data strategy in the simple terms of business drivers and technology enablers and how to constantly evolve it. Automation is a business driver that commonly prompts companies to consider new data strategies. As the imperative to run leaner operations grows, enterprises find it valuable to automate business processes to help expedite work that ordinarily takes up long periods of time. A fluid data strategy allows a business to mine the information on how a certain manual function was done in order to automate it. A common tech enabler that actualizes this transformation is Artificial Intelligence (AI). Mimicking the way the human mind works, tools enabled by AI can gather the needed data and build a prototype of the tasks that are to be automated.

Figure-1 illustrates some of the drivers that can shape your data strategy. On the vertical axis, it shows innovation versus risks and regulations, and on the horizontal, centralized IT versus business users, because they represent opposing priorities in most cases. The business drivers in the upper half help you increase top and bottom lines, whereas the lower half keeps you from paying hefty fines for non-compliance. The right half represents the priority of your business users and lines of business, and the left half is what keeps your centralized IT occupied.

Based on its situation, budget, resources, and predicted future needs, the recommendation would be for an organization to focus on just a few interconnected drivers for the next six months. As part of the data strategy, the organization would establish the selected drivers as business goals, allocate specific budgets, bring together teams that understand the impacted systems and processes, and define how to measure success and monitor progress.

In another example, suppose an organization wants to reduce costs through lean IT and create new products based on data insights. At the same time, they also want to identify technologies to enable this new data strategy for the next six months. Figure-2 indicates that the organization may want to focus on the creation of dashboards to show how its products and revenues stack up, along with the building of data lakes and automating of data ingestion from upstream sources. One will help identify strengths and gaps in offerings, and the other will create a platform for the future.

Redefining Data Strategy: The Holy Grail of Marketing

Once it has successfully achieved these goals, an organization may want to redefine its data strategy to take up more challenging goals such as the holy grail of marketing: a “cradle-to-grave” lifecycle journey. That will require allocating new budgets, adding experienced marketing analysts and data scientists to the team, and ingesting new datasets into the data lake from Web analytics, marketing automation, and CRM systems, among others.

With time, an organization can learn to (a) strike a balance between competing priorities, (b) keep all teams in sync to achieve new goals every few months as part of a fluid data strategy, and (c) monitor progress frequently. It can become a champion at predicting and defining the right drivers and selecting suitable technology enablers from the likes of Figure-1 and Figure-2 to create custom, fluid shapes outlining an organization’s Agile data strategy.

The new trends observed in the data landscape that will guide organizations in refining their data strategy are indicated in Figure-3.

Business Intelligence

Most business intelligence today is backward-looking and obsolete. Data science and AI give you the tools to mine your data and build models that accurately predict the future. Data science uncovers insights that are otherwise extremely difficult, if not impossible, to achieve. AI helps automate decision-making based on learning.

Data Warehousing

The role of data warehousing has been extended to include data lakes, saving cost and offering the flexibility of the cloud. Data lakes can help reduce computing and storage requirements and costs by ingesting raw data from the data warehouse, performing ETL, and returning aggregates to the data warehouse allowing existing downstream applications to work without any change.
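A hedged sketch of that pattern follows: raw events are aggregated in the lake and only a compact daily summary is handed back for the warehouse and its downstream reports. The table and column names are invented, and pandas stands in for whatever engine the lake actually runs.

```python
import pandas as pd

# Raw events as they would land in the lake (invented data).
raw_events = pd.DataFrame({
    "customer_id": ["A", "A", "B", "B", "B"],
    "event_date": pd.to_datetime(
        ["2018-07-01", "2018-07-02", "2018-07-01", "2018-07-03", "2018-07-03"]),
    "amount": [20.0, 35.0, 12.5, 40.0, 7.5],
})

# Do the heavy lifting in the lake: one row per customer per day.
daily_summary = (
    raw_events
    .groupby(["customer_id", pd.Grouper(key="event_date", freq="D")])
    .agg(total_amount=("amount", "sum"), event_count=("amount", "size"))
    .reset_index()
)

# Hand only the compact aggregate back for the warehouse (a CSV stands in here),
# so existing downstream applications keep working against summarized data.
daily_summary.to_csv("daily_summary_for_warehouse.csv", index=False)
print(daily_summary)
```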

Traditional Master Data Management (MDM)

Most traditional MDM initiatives are starting to be seen as never-ending and as providing little, if any, value. Instead, Agile MDM has emerged as far more productive and useful, with use-case specific minimum viable data, automated data quality improvements, and reference data updates with AI in the data pipeline.

Single Version of Truth

Most organizations have considered creating a single version of truth for some of their enterprise datasets. A few resourceful companies have even used semantic modeling to bring different versions closer. A better approach, though, involves having a single source of truth but allowing many versions of truth. For instance, how many customers paid for a particular movie stream will likely differ from how many customers watched it in a given month. The first number is of interest to the accounts department and the second one to marketing, so while they both represent different versions of the truth, they originate from a single source of data.
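The movie example can be made concrete with a small sketch: one source table of viewing events yields two different, equally valid numbers depending on the question asked. The column names and data are illustrative only.

```python
import pandas as pd

# One source of truth: a single table of viewing events (invented data).
events = pd.DataFrame({
    "customer_id": [1, 2, 3, 3, 4],
    "movie_id":    ["M9"] * 5,
    "paid":        [True, True, False, False, False],   # customer 2 paid but never watched
    "watched":     [True, False, True, True, True],     # customers 3 and 4 watched via subscription
})

# Accounting's version of the truth: distinct customers who paid.
customers_paid = events.loc[events["paid"], "customer_id"].nunique()

# Marketing's version of the truth: distinct customers who watched.
customers_watched = events.loc[events["watched"], "customer_id"].nunique()

print(f"paid: {customers_paid}, watched: {customers_watched}")   # paid: 2, watched: 3
```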

Batch and Files

One more trend we all have seen is the use of real-time streams instead of batches and files. Data’s value decreases quickly with time, so it is best to analyze it in-flight before storing it. Also, the more we store, the more data debt (what we need to analyze) we collect. Most of the time, it makes sense to reduce or throw away the unimportant raw data and store only compact summarized or aggregate data, which should be made available as a service to other systems, harnessing more value from your data.
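As a minimal sketch of analyzing data in flight, the snippet below keeps only a small running summary per key and lets the raw readings go, rather than landing every record in a batch file first; the event shape is hypothetical.

```python
from collections import defaultdict

def summarize_stream(events):
    """Consume an event stream, keeping only compact per-sensor aggregates."""
    summary = defaultdict(lambda: {"count": 0, "total": 0.0, "max": float("-inf")})
    for event in events:                      # events can be an endless generator
        stats = summary[event["sensor_id"]]
        stats["count"] += 1
        stats["total"] += event["value"]
        stats["max"] = max(stats["max"], event["value"])
    return {k: {**v, "mean": v["total"] / v["count"]} for k, v in summary.items()}

if __name__ == "__main__":
    readings = ({"sensor_id": f"s{i % 2}", "value": float(i)} for i in range(10))
    print(summarize_stream(readings))   # raw readings are never stored
```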

In summation, all businesses clearly stand to gain from adopting what can be called a fluid data strategy. Such an approach gives enterprises the flexibility to pick only those business drivers and tech enablers that are relevant to their business plan. It also provides companies with the room to come back and review their choices every couple of months to tweak and rethread their strategy according to new trends and goals.

Read the source article at ITProPortal.

Overcome the Inertia That Keeps Businesses From Deploying AI – Here is How

By Harry Kabadaian, CEO of Fancy Lab, a digital marketing agency

Artificial intelligence (AI) isn’t merely “important” to innovation and basic processes at the organization of the future, it’s indispensable.

To thrive in that future, businesses already are in early-stage explorations to transform into AI-driven workplaces. But despite the high interest level in leveraging AI in business, implementation remains quite low. According to Gartner’s 2018 CIO Agenda Survey, only four percent of Chief Information Officers (CIOs) have implemented AI. The survey report is careful to note we’re about to see more growth in “meaningful” deployments: a further 46 percent of CIOs had made plans for AI implementation as of February, when the report was published.

It won’t happen instantly. First, you must understand your business in terms of goals, technology needs and the impact its adoption will have on employees and customers. Plenty can go wrong as you address any of those points. Here are a few tips to help achieve minimum resistance.

1. Treat AI as a business initiative, not a technical specialty.

Many organizations view AI’s implementation as a task for the IT department. That mistake alone could give rise to most of your future challenges.

AI is a business initiative in the sense that successful adoption calls for active participation throughout the process — not simply when it’s deployed. The same people currently responsible for running daily business processes must have real roles to help build and maintain the AI-driven model.

Here’s how it looks in real life:

  • The organization requires collaboration and support from data scientists and the IT team.
  • IT is responsible for deploying machine-learning models that are trained on historical information, demanding a prediction-data pipeline. (Creating that pipeline is a process unto itself, with specific requirements for each of the multiple tasks.)

The odds of finding success with AI implementation increase when the whole team is on board to acquire data, analyze it and develop complex systems to work with the information.
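For readers who want a picture of what such a prediction-data pipeline looks like at its smallest, here is a hedged sketch: a model is fit once on historical records, and the same feature preparation is reused when scoring new ones. The features, data and use of scikit-learn are assumptions for illustration, not a prescription.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Historical records: [monthly_spend, support_tickets] -> churned (1) or stayed (0).
X_hist = np.array([[20, 5], [90, 0], [35, 4], [80, 1], [15, 6], [70, 1]])
y_hist = np.array([1, 0, 1, 0, 1, 0])

# One object holds both data preparation and the model, so the exact same
# transformation is applied at training time and at prediction time.
pipeline = make_pipeline(StandardScaler(), LogisticRegression())
pipeline.fit(X_hist, y_hist)

def predict_churn(record):
    """Score a single new customer record through the trained pipeline."""
    return float(pipeline.predict_proba(np.array([record]))[0, 1])

print(round(predict_churn([25, 5]), 2))   # resembles the churners above: high risk
print(round(predict_churn([85, 0]), 2))   # resembles the loyal customers: low risk
```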

2. Teach staff to identify problems that AI can solve.

AI-driven enterprises often search out data scientists with deep knowledge of their business. A better approach would be teaching employees to identify problems that AI can solve and then guiding workers to create their own models. Your team members already understand how your business operates. In fact, they even know the factors that trigger specific responses from partners, customers and prospects.

IT can help businesses analyze and understand the context of each model. It also can plan its deployment using supported systems. Specifically, IT should be able to obtain answers on topics such as:

  • The usage pattern required by a particular business process.
  • The optimal latency period between a prediction request and its service.
  • Models that need to be monitored for update, latency and accuracy.
  • The tolerance of a business process to predictions delayed or not made.
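The monitoring question in that list can be illustrated with a small, hypothetical sketch that times each prediction call and tracks rolling accuracy, raising an alert when either drifts past a threshold; the thresholds and toy model are invented.

```python
import time
from collections import deque

class ModelMonitor:
    """Track prediction latency and rolling accuracy for a deployed model."""

    def __init__(self, max_latency_s=0.2, min_accuracy=0.8, window=100):
        self.max_latency_s = max_latency_s
        self.min_accuracy = min_accuracy
        self.outcomes = deque(maxlen=window)   # rolling record of correct/incorrect
        self.alerts = []

    def predict(self, model_fn, features, actual=None):
        start = time.perf_counter()
        prediction = model_fn(features)
        latency = time.perf_counter() - start
        if latency > self.max_latency_s:
            self.alerts.append(f"slow prediction: {latency:.3f}s")
        if actual is not None:                 # ground truth often arrives later in practice
            self.outcomes.append(prediction == actual)
            accuracy = sum(self.outcomes) / len(self.outcomes)
            if len(self.outcomes) >= 10 and accuracy < self.min_accuracy:
                self.alerts.append(f"accuracy dropped to {accuracy:.2f}")
        return prediction

def toy_model(value):
    """Stand-in for a real trained model."""
    return value > 50

if __name__ == "__main__":
    monitor = ModelMonitor()
    for value, label in [(60, True), (40, False), (55, False)] * 5:
        monitor.predict(toy_model, value, actual=label)
    print(monitor.alerts)
```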

Employees who tackle problems with an AI mindset can monitor business processes and learn to ask the right questions when it matters.

3. Allow business professionals to build machine-learning models.

A company trying to transform its complete scope of operations with AI might view the timeline as a bit slow. The current approach hinges on manually building machine-learning models. When asked, business managers ranked time to value among the biggest challenges. Respondents in the Gartner survey revealed their teams took an average of 52 days to build a predictive model and even longer to deploy it into production. Management teams often have little means to determine the model’s quality, even after months of development by data scientists.

An automated platform could transform AI’s economics, producing machine-learning models in hours or even minutes — not months. Such a platform also should allow business leaders to compare multiple models for accuracy, latency and analysis so they can select the most suitable model for any given task.
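On a very small scale, "compare multiple models for accuracy and latency" might look like the sketch below; a commercial automated platform would search far more candidates and tuning options. The dataset is synthetic and scikit-learn is assumed to be available.

```python
import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a business dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

results = []
for name, model in candidates.items():
    accuracy = cross_val_score(model, X, y, cv=5).mean()   # 5-fold cross-validated accuracy
    model.fit(X, y)
    start = time.perf_counter()
    model.predict(X[:100])                                  # rough prediction-latency probe
    latency_ms = (time.perf_counter() - start) * 1000
    results.append((name, accuracy, latency_ms))

# Rank candidates so a non-specialist can pick a suitable trade-off.
for name, accuracy, latency_ms in sorted(results, key=lambda r: -r[1]):
    print(f"{name:>20}: accuracy={accuracy:.3f}, latency={latency_ms:.2f} ms")
```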

Equipping your staff with the right tools and skills empowers them to contribute to a system that’s optimized for your business. What’s more, automated platforms can help them create the models they need to transform processes.

Considering the many challenges businesses face when deploying AI, it’s understandable so many still lag behind. Organizations that have overcome these barriers can attest to AI’s power to revolutionize business through process improvement and increased employee productivity.

End-use technologies require human participation as an input. Without human creators, technology can’t successfully morph into human roles.

Read the source article in Entrepreneur.

Opinion: Data is Holding Back AI

By Sultan Meghji, Founder & Managing Director at Virtova 

I remember grumbling, “Good lord this is a waste of time,” in 1992 while I was working on an AI application for lip-reading. The grumble escaped my lips because I felt like I was spending half my time inputting data cleanly into the video processing neural network. Bouncing from a video capture device to a DEC workstation to a Convex Supercomputer to a Cray, I felt like I had been thrown into a caldron of Chinese water torture.

Sitting over my head was a joke happy birthday poster from Arthur C. Clarke’s Space Odyssey series featuring HAL 9000. I found it ironic that I was essentially acting like a highly-trained monkey, while a fictional AI stared down at me, laughing. Over the two years of that AI project, I easily spent 60% of my time just getting the data captured, cleaned, imported and in a place where it could be used by the training system. AI, as practitioners know, is the purest example of garbage in, garbage out. The worst part is that sometimes you don’t realize it until your AI answers “anvil” when you ask it what someone’s favorite food is.

Last month, I was having a conversation with the CEO of a well-respected AI startup when I was struck by deja-vu. He said, “I swear, we have spent at least half of our funding on data management.” I wondered if this could actually be the case, so I pushed him, probing him with questions on automation, data quality and scaling. His answers all sounded remarkably familiar. Over the next two weeks, I contacted a few other AI startup executives — my criteria was that they had raised at least $10 million in funding and had a product in the market — and their answers were all strikingly similar.

To be sure, there are significant improvements being made to decrease the amount of information needed to train AI systems and to build effective learning-transference mechanisms. This week, in fact, Google revealed solid progress with the news that its AlphaGo is now learning automatically from itself. The advancement trends will continue, but such innovations are still very much in their early stages. In the meantime, AI hype is very likely to outstrip real results.

So what are some things that can be done to raise the quality of AI development? Here are my suggestions for building a best-in-class AI system today:

Rely on peer-reviewed innovation. Companies using AI backed by thoughtful study, preferably peer reviewed by academics, are showing the most progress. However, that scrutiny should not stop with the algorithm. That same critical analysis should be true of the data. To that point, I recently suggested to a venture capital firm that if the due diligence process for a contemplated investment revealed a great disparity between the quality of the algorithms and the quality of the data utilized by the start-up, it should pass on the investment. Why? Because that disparity is a major red flag.

Organize data properly. There is an incredible amount of data being produced each day. But it should be kept in mind that learning vs. production data is different, and data must be stabilized as you move from a training environment to a production one. As such, utilizing a cohesive internal data model is critical, especially if the AI is built according to a recent ‘data-driven’ architecture vs. a ‘model-driven’ system. Without a cohesive system, you have a recipe for disaster. As one CEO recently told me, a year of development had to be discarded because his company hadn’t configured its training data properly.
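One modest way to keep training and production data on the same footing is a single declared schema that both pipelines must pass before records reach the model. The sketch below is only illustrative; the field names and rules are invented.

```python
# A single declared schema used by both the training and the production pipeline.
SCHEMA = {
    "age":            {"type": (int, float), "min": 0, "max": 120},
    "monthly_income": {"type": (int, float), "min": 0, "max": 1e7},
    "region_code":    {"type": str,          "allowed": {"N", "S", "E", "W"}},
}

def validate(record, schema=SCHEMA):
    """Return a list of schema violations for one record (empty means clean)."""
    problems = []
    for field, rules in schema.items():
        value = record.get(field)
        if value is None:
            problems.append(f"missing field: {field}")
            continue
        if not isinstance(value, rules["type"]):
            problems.append(f"{field}: wrong type {type(value).__name__}")
            continue
        if "min" in rules and value < rules["min"]:
            problems.append(f"{field}: below minimum")
        if "max" in rules and value > rules["max"]:
            problems.append(f"{field}: above maximum")
        if "allowed" in rules and value not in rules["allowed"]:
            problems.append(f"{field}: unexpected value {value!r}")
    return problems

# The same check guards the training set and the live production feed.
print(validate({"age": 34, "monthly_income": 4200.0, "region_code": "N"}))   # []
print(validate({"age": -3, "monthly_income": "high", "region_code": "Q"}))   # three problems
```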

Automate everything in the production environment. This goes hand in hand with being organized, but it needs to be called out separately. Transitioning from the research lab to the production environment, no matter what system you are building, requires a fully automated solution. One of the benefits of the maturation of Big Data and IOT systems is that building such a solution is a relatively straightforward part of developing an AI system. However, without full automation, errors in learning and production, along with the strain on human resources, compound flaws and make their repair exceedingly difficult.

Choose quality over quantity. Today, data scientists find themselves in a situation where a large amount of the data they collect is of terrible quality. An example is clinical genetics, where the data sources used to analyze gene sequence variation are so inconsistent that ‘database of databases’ systems have been built to make sense of the datasets. In the case of genetic analysis systems, for example, over 200 separate databases are often utilized. Banks, too, often must extract data from at least 15 external systems. Without a systemic basis for picking and choosing the data, any variances in data can work against the efficacies of an AI system.

Scale your data (and that’s hard to do). Given my previous comments about Big Data and IOT, you might think that scaled data management is easily available. But you would be wrong. That’s because once you clear the previous four steps, you may end up with very small relevant sample sets. In some applications, a small dataset may represent a good start; however, that doesn’t fly in AI systems. Indeed, would you want to release an AI program such as autonomous cars or individualized cancer drugs into the wild after being trained on a small database?

In aggregate, the considerations described above represent some fundamental starting points for ensuring that you are holding your data to the same standards to which you hold your AI. Ahead of coming technical advancements, especially around data management and optimization in algorithm construction, these tenets are a good starting point for those trying to avoid the common garbage in, garbage out issues that are (unfortunately) typifying many AI systems today.

The author is an experienced executive in high tech, life sciences and financial services. Starting his career as a technology researcher over 25 years ago, he has served in a number of senior management roles in financial services firms, as well as starting and exiting a number of startups.

Read the source article in The Financial Revolutionist.

AI Ecosystem in Toronto a Model: Region’s AI Talent Attracting Support From Investors, Major Players

The thriving AI ecosystem in Toronto can serve as a model for other tech hubs. The system is underpinned by the region’s AI talent and expertise, by increasing support from venture capitalists for startups, and by leadership from the top. Canadian Prime Minister Justin Trudeau is into coding, welcomes entrepreneurs to Canada, has tuned the immigration system to help attract experts and has backed the effort with government grants.

Toronto is home to world-class academic institutions including the nearby University of Waterloo and the University of Toronto. “These institutions are world leaders in scientific research, creating an ecosystem ripe with opportunities for novel applications for AI, particularly in the fields of health and life sciences,” stated Naheed Kurji, president and CEO of Cyclica, writing in VentureBeat. Cyclica offers an AI platform for use in the pharmaceuticals industry.

Toronto is welcoming, with a diverse, international and well-educated population. A strong local network of investors, incubators, technologists and support staff is sustaining and growing AI companies focused on transforming specific industries, Kurji stated.

AI pioneers are leading and advising startups in the Toronto region. They include: Geoffrey Hinton, called by some the “Godfather of Deep Learning,” who splits his time between Google and teaching at the University of Toronto; Sanja Fidler, assistant professor of computer science at the University of Toronto and a director of AI at NVIDIA; and Raquel Urtasun of the University of Toronto, head of the Uber Advanced Technology Group in Toronto.

Urtasun started at Uber in May 2017 to pursue her work on machine perception for self-driving cars. The work entails machine learning, computer vision, robotics and remote sensing. Before coming to the university, Urtasun worked at the Toyota Technological Institute at Chicago. Uber committed to hiring dozens of researchers and made a multi-year, multi-million dollar commitment to Toronto’s Vector Institute, which Urtasun co-founded. She still works one day per week at the University of Toronto.

Urtasun has argued that self-driving vehicles need to wean themselves off LIDAR (Light Detection and Ranging), a remote sensing method that uses a pulsed laser to measure variable distances. Her research has shown that in some cases vehicles can obtain similar 3D data about the world from ordinary cameras, which are much less expensive than LIDAR units costing thousands of dollars.

“If you want to build a reliable self-driving car right now we should be using all possible sensors,” Urtasun told Wired in an interview published in November 2017. “Longer term the question is how can we build a fleet of self-driving cars that are not expensive.”

The Vector Institute is one of a group of business-growth focused institutions that call Toronto home, the others being the MaRS Discovery District and the Creative Destruction Lab. Each shares a commitment to advancing AI innovation in the city. Each organization is connected to academic programs, cultivating local technology talent. All three bring technical and business talent together to optimize innovations and position them for the market.  

The Vector Institute is an independent, non-profit research institution focused on deep learning and machine learning. Its global partners include Google, Shopify, Accenture, Thomson Reuters, NVIDIA, Uber, Air Canada and five major Canadian banks. Its chief scientific advisor is Hinton, who has said, “The Institute will build on Canada’s pool of globally recognized AI expertise by training, attracting and retaining more top researchers who want to lead the world in machine learning and deep learning research.” Those researchers will also have the flexibility to work on commercial applications within companies or in their own startups.

The MaRS Discovery District is a non-profit, public-private partnership founded in 2000, originally to commercialize publicly-funded medical research. The original name stood for “Medical and Related Sciences,” but that narrower association was later dropped as the mandate expanded to include information and communications technology, engineering and social innovation. As of 2016, startup companies emerging from MaRS had created more than 6,000 jobs, and between 2008 and 2016 they raised over $3.5 billion in capital and generated $1.8 billion in revenue.

The Creative Destruction Lab at the University of Toronto’s Rotman School of Management is a seed-stage program focusing on the transition from pre-seed to seed-stage funding. One of the lab’s co-founders is Dennis Bennie, an entrepreneur who co-founded the software company Delrina Corp., sold to Symantec in 1995 for shares valued at $760 million. He has since founded the XDL Venture Fund, focused on early stage opportunities.

Dr. Ajay Agrawal, founder of the Creative Destruction Lab, in April published a book co-authored with Joshua Gans and Avi Goldfarb: “Prediction Machines: The Simple Economics of Artificial Intelligence.” Hal Varian, the chief economist at Google, commented, “What does AI mean for your business? Read this book to find out.”

The book is a guide for companies for how to set strategies, for governments to design policies and for people to plan their lives for a different world that AI will bring. The three prominent economists recast the rise of AI as a drop in the cost of prediction. The authors show how basic tools from economics provide clarity about the AI revolution and a basis for action by CEOs, managers, policy makers, investors and entrepreneurs.

Tech Investors Bullish on Canada

Toronto continues to attract significant interest from the investment community. Toronto saw a 165% increase in funding, with $321 million invested across 38 deals in Q1 2018, according to a report from PwC Canada and CB Insights.

Toronto accounted for 34% of $1.7 billion in VC funding invested in Canada in 2016, according to Fintech Finance. Active technology startups in the Toronto region number between 2,500 and 4,100, according to Tech Toronto, an organization that supports and monitors the community.

As more Canadian companies have proven to be successful, interest from VCs has increased, according to Chris Erickson, general partner of Pangaea Ventures of Vancouver. “More VCs have been getting good returns, raising more money and investing it in Canadian companies. We’d like to see that improve and grow,” he stated in a recent account in Crunchbase News.

The Canadian Venture Capital and Private Equity Association (CVCA) reports that in the first half of 2017, Canada saw 21 exits compared with 32 in all of 2016. This included two venture-backed initial public offerings: Ontario’s Real Matters, Inc., in financial services; and Zymeworks, Inc., a biotech firm.

Toronto is also attractive to foreign tech companies. Paytm, a leading fintech startup from India, last year chose to develop its lab in Toronto for the close links between the financial and technology sectors, and the available talent pool. PayCommerce, a fintech firm from New Jersey specializing in cross-border payments, chose to launch an R&D operation in Mississauga. WeWork, a coworking space provider based in New York City, last year rented 60,000 square feet of office space in downtown Toronto to capitalize on the entrepreneurial energy.

Major technology firms that have chosen to locate their Canadian headquarters in the Toronto region include: IBM Canada, Markham; Alphabet (Google), Toronto; HP Canada, Mississauga; Cisco Systems Canada, Toronto; and Microsoft Canada, Mississauga.

In addition, GM Canada recently announced plans to open a 15,000 square foot technical center in Markham to conduct R&D on autonomous cars, with the potential to create some 700 high-quality jobs. The government of Ontario, as part of a national “super cluster” initiative, will invest $80 million in the Autonomous Vehicle Innovation Network, in partnership with the Ontario Centers for Excellence, a public-private accelerator.

The Canadian government continues to invest in its AI initiative. The Canadian Institute for Advanced Research (CIFAR) will fund a $125 million Pan-Canada strategy to “promote national collaboration, develop a robust AI talent pipeline, attract companies seeking to invest in AI, and build a Canadian AI brand.”

The Ontario government plans to expand its Business Growth Initiative to $650 million over five years, targeted towards helping innovation-driven small and medium enterprises grow and compete internationally.

More recent announcements about setting up AI labs in Toronto have come from Adobe, Samsung, LG Electronics and Etsy. Vivek Goel, VP of research and innovation with the University of Toronto, was quoted in the Globe and Mail as saying companies are moving core development operations to the region to take advantage of local talent. “From my perspective, it’s very positive. It’s creating opportunities for our folks to develop careers in Canada, where traditionally these individuals maybe have gone abroad,” he stated.

A Few Notable Toronto Startups

Deep Genomics is using AI to focus on early-stage development of drugs for inherited diseases that result from a single genetic mutation, diseases estimated to affect 350 million people worldwide. The company has raised $16.7 million to date and has hired a team of geneticists, molecular biologists and chemists working on treating disease using biologically-accurate AI technology.

Deep Genomics was founded by Brendan Frey, a professor at the University of Toronto who specializes in machine learning and genomic medicine.  The current work is driven by cost-effective new ways of sequencing whole genomes, the entire readout of a person’s DNA. “There’s an opening of a new era of data-rich, information-based medicine,” stated Frey in an article in MIT Technology Review published in May 2017. “There’s a lot of different kinds of data you can obtain. And the best technology we have for dealing with large amounts of data is machine learning and artificial intelligence,” he stated.

Frey trained as a computer scientist and studied at the University of Toronto under Geoffrey Hinton, a noted figure in the development of deep learning. Deep Genomics will seek to partner with a pharma company on drug development, Frey told MIT Tech Review. “There’s going to be a really massive shakeup of pharmaceuticals,” he stated. “In five years or so, the pharmaceutical companies that are going to be successful are going to have a culture of using these AI tools.”

Layer 6 AI was a startup until it was acquired by Toronto-Dominion Bank in January for more than $100 million. Launched in 2016, the company uses AI in its platform to analyze data to learn and anticipate an individual customer’s needs. Layer 6 founders Jordan Jacobs and Tomi Poutanen are among the founders of the Vector Institute. Jacobs told the Globe and Mail that as the company was looking to raise more money from venture capital last fall, it began getting acquisition offers from a variety of companies “both inside and outside of Canada.”

The startup, which employed 17 people, was guided by a desire to help “build a global AI ecosystem in Canada,” Jacobs stated. In addition to heading Layer 6 AI, Jacobs is now the Chief AI Officer, Business and Strategy, for TD Bank Group. Trained as a lawyer, Jacobs spent 15 years advising tech entrepreneurs, Grammy and Oscar winners and sports teams in complex transactions and financings.

Winterlight Labs monitors cognitive health by speech sample analysis. The company has raised $1.5 million total from investors. The firm’s AI technology is said to quickly and accurately quantify speech and language patterns to help detect and monitor cognitive and mental diseases.

Its diagnostic system aims to analyze natural speech to detect and monitor dementia, Alzheimer’s, aphasia and other cognitive conditions. The scalable platform uses short recorded speech samples to analyze hundreds of linguistic cues, detecting dementia and other conditions with a high level of accuracy. The platform has applications in drug trials, long-term and primary care, and speech language pathology.

In a recent interview with Wired, co-founder Frank Rudzicz called these verbal clues “jitters” and “shimmers” – high-frequency wavelets only computers can hear. Winterlight positions its tool as more sensitive than the pencil-and-paper tests doctors currently use to assess Alzheimer’s. Winterlight’s tool can also be used multiple times per week, instead of once every six months like the paper method. That lets it track good days and bad days in measuring a patient’s cognitive function. The product is still in its beta test stage, and is being piloted by medical professionals in Canada, the US and France.
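In speech analysis, jitter and shimmer usually refer to cycle-to-cycle variation in pitch period and in amplitude, respectively. The sketch below computes a simple local perturbation measure from already-estimated toy values; a real system such as Winterlight's would extract these from raw audio with far more care.

```python
def local_perturbation(values):
    """Mean absolute difference between consecutive cycles, relative to the mean value."""
    diffs = [abs(a - b) for a, b in zip(values[1:], values[:-1])]
    return (sum(diffs) / len(diffs)) / (sum(values) / len(values))

# Toy, already-estimated values; a real system would derive these from raw audio.
pitch_periods_s = [0.0080, 0.0081, 0.0079, 0.0082, 0.0080]   # one value per glottal cycle
peak_amplitudes = [0.61, 0.59, 0.62, 0.58, 0.60]

print(f"jitter:  {local_perturbation(pitch_periods_s):.4f}")
print(f"shimmer: {local_perturbation(peak_amplitudes):.4f}")
```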

Watch this space for more on AI innovation in Canada.

— By John P. Desmond, AI Trends Editor

Human Challenges Face Today’s AI Business Strategies

The hype surrounding artificial intelligence (AI) is intense despite the fact that, for most enterprises, AI is still at an early or planning stage. While a lot has been done, there is a lot more to do before it becomes commonplace. However, that hasn’t stopped speculation about the impact on employment and what it might mean for workers, especially those whose jobs are repetitive and considered low skilled.

In October last year, a survey carried out by Cary, N.C.-based analytics giant SAS showed that the vast majority of organizations have begun to talk about artificial intelligence, and a few have even begun to implement suitable projects. There is much optimism about the potential of AI, although fewer were confident that their organization was ready to exploit that potential.

AI Human Challenges 

The reason for this is not because there is a lack of technologies on the market. What the research uncovered was that the challenges come from a shortage of data science skills to maximize value from emerging AI technology, and deeper organizational and societal obstacles to AI adoption. Some of the figures contained in the report show that:

  • 55 percent of survey respondents felt that the biggest challenge related to AI was the changing scope of human jobs in light of AI’s automation and autonomy.
  • 41 percent of respondents raised questions about whether robots and AI systems should have to work “for the good of humanity” rather than simply for a single company, and how to look after those who lost jobs to AI systems.

It also showed that several organizations had a senior-level sponsor for AI and advanced analytics. In some cases, this was a member of the C-suite, and in a few, the CEO. In others, it was a more junior director, usually one with an interest in the area. One respondent mentioned that the organization planned to appoint a Chief Data Officer within the next six months, who would take on responsibility for this area.

And it’s not the only research that has raised the issue of the impact AI will have on jobs. Recently, we were able to identify seven jobs that might be overtaken by the growth in the use of AI in the enterprise. That said, there are ways that enterprises – and individuals – can meet the challenge.

Building a Talent Pipeline

AI is generating a demand for new skill sets in the workplace. However, there is a widespread shortage of talent with the knowledge and capabilities to properly build, fuel and maintain these technologies within organizations, according to Mohit Joshi, president and head of banking, financial services and insurance, as well as healthcare and life sciences, at Bengaluru, India-based Infosys. The simple answer is upskilling. “The lack of well-trained professionals who can build and direct a company’s AI and digital transformation journeys noticeably hinders progress and continues to be a major hurdle for businesses. But there is also opportunity here too and a way to redeploy workers who face redundancy because of AI,” he says.

To mitigate this, businesses should look inward and create on-the-job training to build these skills internally. With the proper staff powering AI, employees are able to focus on other critical activities, boosting productivity and creating a larger ROI. If an enterprise’s digital transformation goal is for AI to become a business accelerator, it needs to be an amplifier of its people. “It’s going to take work to give everyone access to the fundamental knowledge and skills in problem-finding and remove the elitism around advanced technology, but the boost to productivity and ROI will be worth it in the end,” says Joshi. Businesses that haven’t yet allocated budget for AI should start small by manually auditing the organization to streamline processes and free up employees’ bandwidth. This allows decision makers to clearly see which systems aren’t utilized effectively and which areas could benefit from technology down the road.

Shifting Roles

Anthony Macciola is Chief Innovation Officer, responsible for AI initiatives, at Moscow, Russia-based global giant ABBYY, a company that uses machine learning, robotic process automation and text analytics to improve business outcomes. He says that the introduction of AI into the general workplace will result in more tasks being addressed by system-of-record applications, shifting knowledge workers’ roles from a control standpoint to an expertise standpoint.

He cites an example of how this will work in the mortgage lending market. The dependency on a loan origination officer to drive the loan process will diminish over time as the loan origination system becomes able to make intelligent decisions based on past funding behavior, leaving only rules-based exceptions to require a loan processor’s attention. This will lighten the overall workload for loan officers, allowing them to be more responsive when an exception arises, and should allow mortgage lenders to increase the productivity of their operations.
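
A rough sketch of that division of labor might look like the following. The fields, rules, weights and thresholds are hypothetical, not anything ABBYY has published: a model scored on past funding behavior handles routine cases, while rules-based exceptions are queued for a loan processor.

```python
# Hypothetical sketch of model-driven loan decisions with rules-based
# exception routing. All fields, rules, weights and thresholds are
# illustrative only, not ABBYY's or any lender's actual implementation.
from dataclasses import dataclass

@dataclass
class Application:
    applicant_id: str
    loan_to_value: float    # loan amount / property value
    debt_to_income: float   # monthly debt payments / monthly income
    docs_complete: bool

def model_score(app: Application) -> float:
    """Stand-in for a model trained on historical funding outcomes."""
    return 1.0 - 0.5 * app.loan_to_value - 0.5 * app.debt_to_income

def exception_flags(app: Application) -> list[str]:
    """Rules that force human review regardless of the model score."""
    flags = []
    if not app.docs_complete:
        flags.append("missing documentation")
    if app.loan_to_value > 0.95:
        flags.append("LTV above policy limit")
    return flags

def route(app: Application) -> str:
    flags = exception_flags(app)
    if flags:
        return f"route to loan processor: {', '.join(flags)}"
    return "auto-approve" if model_score(app) > 0.4 else "auto-decline"

print(route(Application("A-1001", loan_to_value=0.80, debt_to_income=0.30, docs_complete=True)))
print(route(Application("A-1002", loan_to_value=0.97, debt_to_income=0.20, docs_complete=True)))
```

The point of the sketch is the shift Macciola describes: the system decides the routine cases, and the human sees only the flagged exceptions.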

“As software gets smarter, dependency on the workforce shrinks and knowledge workers who have typically conducted manual input tasks or controlled processes in fintech, healthcare, transportation and logistics, and government customer/constituent engagement scenarios will become more narrowly focused from a role and responsibility standpoint,” he says.

Read the source article at CMSWire.com.

Data Science on a Budget: Audubon’s Advanced Analytics

On Memorial Day weekend 2038, when your grandchildren visit the California coast, will they be able to spot a black bird with a long orange beak called the Black Oystercatcher? Or will that bird be long gone? Will your grandchildren only be able to see that bird in a picture in a book or on a website?

A couple of data scientists at the National Audubon Society have been examining the question of how climate change will impact where birds live in the future, and the Black Oystercatcher has been identified as a “priority” bird — one whose range is likely to be impacted by climate change.

How did Audubon determine this? It’s a classic data science problem.

First, consider birdwatching itself, which is pretty much good old-fashioned data collection. Hobbyists go out into the field, identify birds by species and gender and sometimes age, and record their observations on their bird lists or bird books, and more recently on their smartphone apps.

Audubon itself has sponsored an annual crowdsourced data collection event for more than a century — the Audubon Christmas Bird Count — providing the organization with an enormous dataset of bird species and their populations in geographies across the country at specific points in time. The event is 118 years old, making it one of the longest-running bird data sets in the world.

That’s one of the data sets that Audubon used in its project examining the impact of climate change on bird species’ geographical ranges, according to Chad Wilsey, director of conservation science at Audubon, who spoke with InformationWeek in an interview. Wilsey is an ecologist, not a trained data scientist, but like many scientists he uses data science as part of his work. In this case, as part of a team of two ecologists, he applied statistical modeling in tools such as R to multiple data sets to create predictive models of future geographical ranges for specific bird species. The results are published in the 2014 report, Audubon’s Birds and Climate Change. Audubon also published interactive ArcGIS maps of species and ranges to its website.

The initial report used Audubon’s Christmas Bird Count data set and the North American Breeding Bird Survey from the US government. The report assessed geographic range shifts through the end of the century for 588 North American bird species during both the summer and winter seasons, under a range of future climate change scenarios. Wilsey’s team built models based on climatic variables such as historical monthly temperature and precipitation averages and totals, using boosted regression trees, a machine learning technique. The models were built with bird observations and climate data from 2000 to 2009 and then evaluated with data from 1980 to 1999.

“We write all our own scripts,” Wilsey told me. “We work in R. It is all machine learning algorithms to build these statistical models. We were using very traditional data science models.”
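
Audubon’s scripts are written in R, but the shape of the workflow can be sketched in Python with scikit-learn’s gradient-boosted trees, assuming an illustrative observations file with presence/absence records and monthly climate covariates:

```python
# Python sketch of the kind of model described above: gradient-boosted trees
# predicting species presence from climate covariates, trained on 2000-2009
# observations and evaluated on 1980-1999 data. The file name and column
# names are illustrative; Audubon's actual work was done in R.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# One row per survey location and year: climate covariates plus presence (0/1).
obs = pd.read_csv("oystercatcher_observations.csv")
features = ["mean_jan_temp", "mean_jul_temp", "annual_precip", "winter_precip"]

train = obs[(obs.year >= 2000) & (obs.year <= 2009)]
evaln = obs[(obs.year >= 1980) & (obs.year <= 1999)]

model = GradientBoostingClassifier(n_estimators=500, learning_rate=0.01, max_depth=3)
model.fit(train[features], train["present"])

# Score the earlier period as a check; the same fitted model can then be
# projected onto future climate scenarios to map how a range may shift.
pred = model.predict_proba(evaln[features])[:, 1]
print("AUC on 1980-1999 evaluation data:", roc_auc_score(evaln["present"], pred))
```
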

Audubon did all this work on an on-premises server with 16 CPUs and 128 gigabytes of RAM.

Read the source article in InformationWeek.com.