Whether it’s a navigation app such as Waze, a music recommendation service such as Pandora or a digital assistant such as Siri, odds are you’ve used artificial intelligence in your everyday life.
“Today 85 percent of Americans use AI every day,” says Tess Posner, CEO of AI4ALL.
AI has also been touted as the new must-have for business, for everything from customer service to marketing to IT. However, for all its usefulness, AI also has a dark side. In many cases, the algorithms are biased.
Some of the examples of bias are blatant, such as Google’s facial recognition tool tagging black faces as gorillas or an algorithm used by law enforcement to predict recidivism disproportionately flagging people of color. Others are more subtle. When Beauty.AI held an online contest judged by an algorithm, the vast majority of “winners” were light-skinned. Search Google for images of “unprofessional hair” and the results you see will mostly be pictures of black women (even searching for “man” or “woman” brings back images of mostly white individuals).
While more light has been shined on the problem recently, some feel it’s not an issue addressed enough in the broader tech community, let alone in research at universities or the government and law enforcement agencies that implement AI.
“Fundamentally, bias, if not addressed, becomes the Achilles’ heel that eventually kills artificial intelligence,” says Chad Steelberg, CEO of Veritone. “You can’t have machines where their perception and recommendation of the world is skewed in a way that makes its decision process a non-sequitur from action. From just a basic economic perspective and a belief that you want AI to be a powerful component to the future, you have to solve this problem.”
As artificial intelligence becomes ever more pervasive in our everyday lives, there is now a small but growing community of entrepreneurs, data scientists and researchers working to tackle the issue of bias in AI. I spoke to a few of them to learn more about the ongoing challenges and possible solutions.
Cathy O’Neil, founder of O’Neil Risk Consulting & Algorithmic Auditing
Solution: Algorithm auditing
Back in the early 2010s, Cathy O’Neil was working as a data scientist in advertising technology, building algorithms that determined what ads users saw as they surfed the web. The inputs for the algorithms included innocuous-seeming information like what search terms someone used or what kind of computer they owned.
However, O’Neil came to realize that she was actually creating demographic profiles of users. Although gender and race were not explicit inputs, O’Neil’s algorithms were discriminating against users of certain backgrounds, based on the other cues.
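O'Neil's point, that a model can discriminate without ever seeing gender or race, is easy to demonstrate. The sketch below uses invented data: a seemingly neutral district feature that happens to correlate with group membership ends up reproducing the group split in who sees an ad. All names and numbers are hypothetical.

```python
import random

random.seed(0)

# Hypothetical illustration: no input names a protected attribute,
# yet a correlated proxy (here, a postal district) recreates the bias.
def make_user():
    group = random.choice(["A", "B"])
    # District membership correlates strongly with group membership.
    district = (random.random() < 0.9) == (group == "A")
    return group, district

def shows_premium_ad(district):
    # The targeting rule only ever looks at the "neutral" district feature.
    return district

users = [make_user() for _ in range(10_000)]
rate = {}
for g in ("A", "B"):
    shown = [shows_premium_ad(d) for grp, d in users if grp == g]
    rate[g] = sum(shown) / len(shown)

print(rate)  # group A sees the ad roughly 90% of the time, group B roughly 10%
```

Even though `shows_premium_ad` never receives the group label, the outcome is almost perfectly split along group lines, which is exactly the pattern O'Neil observed in production systems.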
As O’Neil began talking to colleagues in other industries, she found this to be fairly standard practice. These biased algorithms weren’t just deciding what ads a user saw, but arguably more consequential decisions, such as who got hired or whether someone would be approved for a credit card. (These observations have since been studied and confirmed by O’Neil and others.)
What’s more, in some industries — for example, housing — if a human were to make decisions based on the specific set of criteria, it likely would be illegal due to anti-discrimination laws. But, because an algorithm was deciding, and gender and race were not explicitly the factors, it was assumed the decision was impartial.
“I had left the finance [world] because I wanted to do better than take advantage of a system just because I could,” O’Neil says. “I’d entered data science thinking that it was less like that. I realized it was just taking advantage in a similar way to the way finance had been doing it. Yet, people were still thinking that everything was great back in 2012. That they were making the world a better place.”
O’Neil walked away from her adtech job. She wrote a book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, about the perils of letting algorithms run the world, and started consulting.
Eventually, she settled on a niche: auditing algorithms.
“I have to admit that it wasn’t until maybe 2014 or 2015 that I realized this is also a business opportunity,” O’Neil says.
Right before the election in 2016, that realization led her to found O’Neil Risk Consulting & Algorithmic Auditing (ORCAA).
“I started it because I realized that even if people wanted to stop their unfair or discriminatory practices, they wouldn’t actually know how to do it,” O’Neil says. “I didn’t actually know. I didn’t have good advice to give them.” But, she wanted to figure it out.
So, what does it mean to audit an algorithm?
“The most high-level answer to that is it means to broaden our definition of what it means for an algorithm to work,” O’Neil says.
Often, companies will say an algorithm is working if it’s accurate, effective or increasing profits, but for O’Neil, that shouldn’t be enough.
“So, when I say I want to audit your algorithm, it means I want to delve into what it is doing to all the stakeholders in the system in which you work, in the context in which you work,” O’Neil says. “And the stakeholders aren’t just the company building it, aren’t just for the company deploying it. It includes the target for the algorithm, so the people that are being assessed. It might even include their children. I want to think bigger. I want to think more about externalities, unforeseen consequences. I want to think more about the future.”
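One concrete check an audit of this kind can run is a disparate-impact test, such as the "four-fifths rule" from US employment guidelines, which flags a selection process when any group's selection rate falls below 80 percent of the most-favored group's rate. A minimal sketch, with invented hiring data:

```python
# Minimal sketch of one audit check: the "four-fifths rule" from US
# employment guidelines. A process is flagged when any group's selection
# rate is below 80% of the highest group's rate. Data here is invented.
def selection_rates(decisions):
    """decisions: list of (group, was_selected) pairs."""
    totals, selected = {}, {}
    for group, picked in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(picked)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(decisions):
    rates = selection_rates(decisions)
    best = max(rates.values())
    # Ratio of each group's rate to the most-favored group's rate.
    return {g: r / best for g, r in rates.items()}

decisions = ([("men", True)] * 60 + [("men", False)] * 40 +
             [("women", True)] * 30 + [("women", False)] * 70)
ratios = disparate_impact(decisions)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios, flagged)  # women selected at 0.5 of men's rate -> flagged
```

A real audit in O'Neil's sense goes well beyond a single ratio, but a check like this shows how "is the algorithm working?" can be made a measurable question about stakeholders rather than only about accuracy or profit.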
For example, Facebook’s News Feed algorithm is very good at encouraging engagement and keeping users on its site. However, there’s also evidence it reinforces users’ beliefs, rather than promoting dialog, and has contributed to ethnic cleansing. While that may not be evidence of bias, it’s certainly not a net positive.
Right now, ORCAA’s clients are companies that ask for their algorithms to be audited because they want a third party — such as an investor, client or the general public — to trust it. For example, O’Neil has audited an internal Siemens project and New York-based Rentlogic’s landlord rating system algorithm. These types of clients are generally already on the right track and simply want a third-party stamp of approval.
Edmonton, Home to Reinforcement Learning, now a Foundation of AI, is Retaining AI Talent, Attracting Investment Including DeepMind
Edmonton, the capital of the Canadian province of Alberta, like its counterparts Toronto and Montreal, has a number of strengths in AI research that are attracting engineering talent and private investors. These include:
— The University of Alberta, considered a bedrock of Reinforcement Learning (RL) thanks to pioneering work done by Prof. Richard Sutton. The Royal Bank of Canada’s RBC Research arm announced in early 2017 it would hire Prof. Sutton to advise on a new research lab opening in Alberta to research the application of AI in banking.
RBC CEO Dave McKay stated at the time, “There is a lot of investment discussion about AI creating new capabilities. And it is a tool we are very excited about harnessing within our own organization.”
— Amii (Alberta Machine Intelligence Institute), a research group set up by Prof. Sutton, has continued to attract top students from around the world.
— Borealis AI is a research center funded by RBC and aligned with U Alberta and Amii, aimed at technology transfer from AI research to commercial business opportunities. Prof. Mathew Taylor, an RL expert from Washington State University, leads research at Borealis and currently has 15 researchers focused on solving RL problems.
— ACAMP (Alberta Center for Advanced Micro Nano Technology) is an industry-led product development center, founded in 2007, used by advanced technology entrepreneurs to move their innovations from proof-of-concept to manufactured product. The center provides entrepreneurs access to multidisciplinary engineers, technology experts, unique specialized equipment, and industry expertise.
Located in Edmonton’s Research Park, ACAMP has a focus on electronics hardware, firmware, sensors, and embedded systems. The center’s product development group provides a range of support at each stage of the product development process.
The firm cites client testimonials from Xtel International, Ltd., Symroc, Nanolog Audio, the University of Dayton, Medella Health and Hifi Engineering.
Prof. Sutton Recognized for Reinforcement Learning Research
Dr. Sutton is recognized for his work in reinforcement learning, an area of machine learning in which an agent learns ideal behaviors through trial and error, guided by rewards rather than by labeled examples. Reinforcement learning techniques have proven powerful in complex environments: they secured the first-ever victory over a human world champion in the game of Go, and they have seen recent applications in robotics and self-driving cars.
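The trial-and-error idea can be shown in a few lines. This sketch is a toy epsilon-greedy bandit, the simplest reinforcement-learning setting: the agent is never told which action is better, only the rewards it happens to receive, yet its running value estimates converge on the better choice. The payoff probabilities are invented for the example.

```python
import random

random.seed(42)

# Toy reinforcement learning: an epsilon-greedy agent learns which of
# two actions pays off better purely from reward feedback, with no
# labeled examples. The payoff probabilities are invented and hidden
# from the agent.
true_payoff = {"left": 0.3, "right": 0.7}
value = {"left": 0.0, "right": 0.0}   # the agent's running estimates
counts = {"left": 0, "right": 0}
epsilon = 0.1                          # fraction of steps spent exploring

for _ in range(5000):
    if random.random() < epsilon:
        action = random.choice(["left", "right"])   # explore
    else:
        action = max(value, key=value.get)          # exploit best estimate
    reward = 1.0 if random.random() < true_payoff[action] else 0.0
    counts[action] += 1
    # Incremental average: V <- V + (r - V) / n
    value[action] += (reward - value[action]) / counts[action]

print(max(value, key=value.get))  # the agent settles on "right"
```

Sutton's work covers the far harder sequential case, where actions change the state of the world, but the core loop of act, observe reward, update estimates is the same.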
“The collaboration between RBC Research and Amii will help support the development of an AI ecosystem in Canada that will push the boundaries of academic knowledge,” stated Dr. Sutton in a press release. “With RBC’s continued support, we will cultivate the next generation of computer scientists who will develop innovative solutions to the toughest challenges facing Canada and beyond. We’ve only scratched the surface of what reinforcement learning can do in finance.”
“We are thrilled to be opening a lab in Edmonton and to collaborate with world-class scientists like Dr. Sutton and the other researchers at Amii,” stated Dr. Foteini Agrafioti, head of RBC Research. “RBC Research has built strong capabilities in deep-learning, and with this expansion, we are well poised to play a major role in advancing research in AI and impact the future of banking.”
Gabriel Woo, VP of innovation at RBC Research in Toronto, stated in the Financial Post that while Toronto’s and Montreal’s AI ecosystems are further along, “you have a comparable academic lab at AMII, and it is home to Sutton, who literally wrote the textbook on reinforcement learning that is being read around the world. Because of that, we are partnering with them to create and fuel opportunities to help that talent stay in Edmonton.”
Woo believes the community can expect to see more investors and startups in the near future. “If we are able to provide opportunities for them to apply their research, it will attract more attention from VCs and others and increase the opportunities for commercialization.”
Edmonton Startups Have Access to Capital, Work Space
That notion was seconded by Shawn Abbott, a general partner at iNovia Capital, which backs early stage companies. “The rising tide in AI has been due to the avalanche of large-scale cloud computing capacity, which has made the techniques of scientific AI development practical,” he said in an interview with AI Trends. “AI helps make a commodity of prediction; the ability to forecast what will happen next is now available in many industries. It’s a new way to build software and to provide cognitive augmentation, the ability to support intellectual or human endeavors with software.”
Dr. Sutton’s advances have been pivotal to AI generally and to Edmonton in particular. “Dr. Sutton’s group has turned out more PhDs in AI than any other group in Canada,” Abbott said.
Keeping that talent in Canada has been the focus of Startup Edmonton, funded by the Edmonton Economic Development Corp., since its founding in 2009. The group supports entrepreneurs with mentorship programs, coworking space and community events, bringing together developers, students, founders and investors. The effort has helped to some degree to stem the brain drain of AI talent from Canada. “I don’t think it’s completely stopped but it has slowed down,” said Tiffany Linke-Boyko, CEO of Startup Edmonton, in an interview with AI Trends. A more favorable cost of living in Edmonton also helps. “The expense of living in some of the US high tech cities is insane,” she said.
She described the effort to raise awareness of Edmonton as a good location to build new AI companies as early stage but off to a good start. “We still need more companies; it’s a young ecosystem with interesting momentum,” she said.
DeepMind Commitment a Boost
Edmonton got a boost with the announcement in July 2017 that DeepMind would open its first international AI research lab in downtown Edmonton. The 10-person lab, to operate in partnership with the University of Alberta, will be headed by three University of Alberta PhDs: Richard Sutton, Michael Bowling and Patrick Pilarski.
“This is a huge reputational win for the University of Alberta,” stated U of A’s dean of science, Jonathan Schaeffer, himself an AI pioneer, in an account in the Edmonton Journal. “We’ve been one of the best AI research centres in the world for more than 10 years. The academic world knows this, but the business community doesn’t. The DeepMind announcement puts us on the map in a big way. It’s going to wake up a lot of people.”
Bowling is a leading expert on AI and games. He and his team created computer programs that beat champion human poker players. Pilarski, an engineer, specializes in adapting AI to medical uses, from helping to create intelligent prosthetic limbs to reading and screening medical tests. DeepMind of London wanted them, but the three didn’t want to leave Edmonton to move to London. So DeepMind decided to come to them.
“We’ve reached a critical mass here. There’s a kind of stickiness,” stated Pilarski. “This is the right place at the right time. It’s like nowhere else in the world.”
Now the three are in a good position to attract some of their best students back to Edmonton and to recruit more top students. “A lot of our graduates are dying for a chance to use their education in Edmonton,” stated Bowling. “We’re hoping this is a catalyst for more of a tech build-up in Edmonton.”
Over the last 15 years, the Alberta government has invested $40 million in AI and machine learning research, mostly at the U of Alberta. That steady funding lured Sutton and Bowling to Edmonton initially.
DeepMind in January announced funding for an endowed chair at the University of Alberta’s department of computer science. The person who fills the position will be given academic freedom to explore any interest that could advance the field of AI.
“The DeepMind endowed chair, together with additional funding to support AI research at the department of computing science, is a sign of our continued commitment to this cause, and we look forward to the research breakthroughs this deep collaboration will bring,” stated Demis Hassabis, founder and CEO of DeepMind, in a press release.
Interesting AI Startups in and Around Edmonton
Here is a look at selected Edmonton-area startups that incorporate AI in their products or services.
Testfire Labs: Machine Learning Underlies the Hendrix AI Assistant
Testfire Labs, founded in 2017, is a startup that uses machine learning and artificial intelligence to build productivity solutions that modernize the way people work. Testfire’s flagship product, Hendrix.ai, is an AI assistant that captures meeting notes, action items and data points by listening via a microphone.
Currently in its beta test phase, Hendrix is said to produce meeting summaries that leave out “chit chat” for clarity.
“The demands to do more with less in modern business keep increasing,” stated Dave Damer, founder and CEO, in an account on Testfire recently published in AI Trends. “AI gives us an opportunity to legitimately take things off people’s hands that are generally mundane tasks so they can focus on higher-value work.”
Testfire has had three rounds of funding, with the amount raised undisclosed, according to Crunchbase.
Stream Technologies, Inc.
Stream combines the power of spectroscopy and machine learning to make detection quick and easy. Test results that would normally come from a lab, or require a certain level of expertise, are now available in near real time.
Within the agriculture sector, customers may want to identify anything from an invasive species to a disease, a nutrient deficiency, or oil levels in plants, seeds and fertilizers.
Stream delivers its services in three stages: capture, analyze and visualize. A multispectral camera or spectrometer captures the data, which is fed into the Stream Analytics Engine, where an application analyzes the spectral data. In the visualization stage, results are ready in minutes, as either colored images or levels of the detected element.
The Analytics Engine combines machine learning techniques and neural network designs built specifically to present test results from spectral images and spectrometer scans.
One example is the ability to detect the difference between organic and polyethylene leaves. After the analysis, the polyethylene leaves are colored red and the organic leaves are colored blue.
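The organic-versus-polyethylene example suggests how the capture, analyze and visualize stages might fit together. The sketch below is not Stream's actual system; the reference spectra, band counts and nearest-signature classifier are all assumptions for illustration.

```python
import math

# Illustrative-only sketch of a capture -> analyze -> visualize flow:
# classify each captured spectrum as organic or polyethylene by
# comparing it to reference signatures, then map the label to a
# display color. All spectral values here are invented.
REFERENCE = {
    "organic":      [0.12, 0.55, 0.80, 0.30],   # reflectance per band
    "polyethylene": [0.60, 0.58, 0.20, 0.75],
}
COLOR = {"organic": "blue", "polyethylene": "red"}

def classify(spectrum):
    # "Analyze" stage: the nearest reference signature wins.
    return min(REFERENCE,
               key=lambda label: math.dist(spectrum, REFERENCE[label]))

def visualize(pixels):
    # "Visualize" stage: map each pixel's label to a display color.
    return [COLOR[classify(s)] for s in pixels]

captured = [[0.15, 0.50, 0.78, 0.33],   # close to the organic signature
            [0.58, 0.60, 0.25, 0.70]]   # close to polyethylene
print(visualize(captured))  # ['blue', 'red']
```

A production system would learn the signatures from labeled scans rather than hard-coding them, but the stage boundaries map directly onto Stream's description.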
DrugBank is a curated pharmaceutical knowledge base for precision medicine, electronic health records and drug development.
“Our mission is to enable advances in precision medicine and drug discovery,” said co-founder and CEO Mike Wilson of OMx Personal Health Analytics, Inc., which operates DrugBank, in comments to AI Trends.
DrugBank provides structured drug information that covers drugs from discovery stage to approval stage. It includes comprehensive molecular information about drugs, their mechanisms, their interactions and their targets as well as detailed regulatory information including indications, marketing status and clinical trials. DrugBank has become one of the world’s most widely used reference drug resources. It is routinely used by the general public, educators, pharmacists, pharmacologists, the pharmaceutical industry and regulatory agencies.
The first version of DrugBank was released in 2006; version 5.1.1 was released in July 2018. The online database started as a project of computer science professor Dr. David Wishart of the U of Alberta. Craig Knox and Mike Wilson helped develop the tool as undergraduates, and the two later made a deal with the university to commercialize the database, setting up shop at Startup Edmonton, which provides workspace and support for entrepreneurs.
“The first weekend we released it, the servers crashed because there was so much traffic coming in,” stated co-founder Craig Knox in an account in Startup Edmonton. “It was quite popular and grew in its popularity over the years.” Over the next decade, DrugBank became ubiquitous in the pharma world, with millions of global users.
“We sell subscriptions for our datasets and software for precision medicine, electronic health records, and drug development. We also provide datasets for academic researchers for free,” Wilson told AI Trends.
Now DrugBank’s commercial clients include some of the largest pharmaceutical companies in the world, as well as mid-sized companies, a growing number of pharma startups, and companies providing scientific reference software. “The value for the users is saving time by finding the information in one place,” stated Wilson.
Each month, a million users visit the site, making DrugBank the most popular drug database in the world. It has information on more than 20,000 individual drugs, including approved drugs, drugs in clinical trials and drug formulas that show potential.
With pharma research advancing rapidly, the database must be continually updated with new information. To do this, the company uses a team of nine ‘bio curators’ — representing pharmacy, medicine, biochemistry, and other fields — who comb the academic literature for new information to add to the resource daily.
New offerings use AI to provide insights for precision medicine and pharmaceutical analytics. “Our latest offering analyzes an individual’s medical history and medications and provides important insights based on an analysis of various factors including side effects, interactions and comparisons to similar medications,” Wilson said. “The offering leverages our extremely detailed structured knowledge base and a proprietary AI algorithm to provide the analysis.”
The founders spoke highly of the support they get from Startup Edmonton, which has helped them lay a foundation for a global, scalable technology product. They enjoy being located in the downtown facility with its network of entrepreneurs. “You learn from each other which is a really cool benefit,” stated Wilson.
Includes Healthcare, Biomed, Text Analysis, Legal Research, Image Analysis, Drug Discovery, Education
Canada has made a commitment for many years to the study of AI at universities across the country, and today robust business incubation programs supported by Canada’s provincial and regional governments work to transform research into viable businesses. This AI ecosystem has produced breakthrough research and is attracting top talent and investment by venture capital. Here is a look at a selection of Montreal- and Toronto-based AI startups.
TandemLaunch, Technology Transfer Acceleration
TandemLaunch is a Montreal-based technology transfer acceleration company, founded in 2010, that works with academic researchers to commercialize their technological developments. Founder Helge Seetzen, CEO and General Partner, directs the company’s strategy and operations. TandemLaunch has raised $29.5 million since its founding, according to CrunchBase. The firm has spun out more than 20 companies and has been recognized for supporting women founders.
Seetzen was a successful entrepreneur who co-founded Sunnybrook Technologies and later BrightSide Technologies to commercialize display research developed at the University of British Columbia. BrightSide was sold to Dolby Laboratories for $28 million in 2007.
TandemLaunch provides startups with office space, access to IT infrastructure, shared labs for electronics, mechanical or chemical prototyping, mentoring, hands-on operational support and financing.
Asked by AI Trends to comment, CEO Seetzen said, “TandemLaunch has a long history of building leading AI companies based on technologies from international universities. Example successes include LandR – the world’s largest music production platform – and SportlogiQ which offers AI-driven game analytics for sports. Many younger TandemLaunch companies are at the brink of launching game-changing products onto the market such as Aerial’s AI for motion sensing from Wi-Fi signals which will be released in several countries as a home security solution later this year. With hundreds of AI developers across our portfolio of 20+ companies, TandemLaunch is well positioned to capitalize on AI opportunities of all stripes.”
Other companies in the TandemLaunch portfolio include: Kalepso, focused on blockchain and machine learning; Ora, offering nanotechnology for high-fidelity audio; Wavelite, aiming to increase the lifetime of wireless sensors used in IoT operations; Deeplite, providing an AI-driven optimizer to make deep neural networks faster; Soundskrit, changing how sound is measured using a bio-inspired design; and C2RO, offering a robotic SaaS platform to augment perception and collaboration capabilities of robots.
BenchSci offers an AI-powered search engine for biomedical researchers. Founded in 2015 in Toronto, the company recently raised $8 million in a series A round of funding led by iNovia Capital, with participation including Google’s recently-announced Gradient Ventures.
BenchSci uses machine learning to translate both closed-and open-access data into recommendations for specific experiments planned by researchers. The offering aims to speed up studies to help biomedical professionals find reliable antibodies and reduce resource waste.
“Without the use of AI, basic biomedical research is not only challenging, but drug discovery takes much longer and is more expensive,” BenchSci cofounder and CEO Liran Belenzon stated in an account in VentureBeat. “We are applying and developing a number of advanced data science, bioinformatics and machine learning algorithms to solve this problem and accelerate scientific discovery by ending reagent failure.” (A reagent is a substance used to detect or measure a component based on its chemical or biological activity.)
In July 2017, Google announced its new venture fund aimed at early-stage AI startups. In the year since, Gradient Ventures has invested in nine startups including BenchSci, the fund’s first known health tech investment and first outside the US.
“Machine learning is transforming biomedical research,” stated Gradient Ventures founding partner Ankit Jain. “BenchSci’s technology provides a unique value proposition for this market, enabling academic researchers to spend less time searching for antibodies and more time working on their experiments.”
BenchSci told VentureBeat it tripled its headcount last year and plans to add 16 new hires throughout 2018.
Imagia is an AI healthcare company that fosters collaborative research to accelerate accessible, personalized healthcare.
Founded in 2015 in Montreal, the company in November 2017 acquired Cadens Medical Imaging for an undisclosed amount, to accelerate development of its biomarker discovery processes. Founded in 2008, Cadens develops and markets medical imaging software products designed for oncology, the study of tumors.
“This strategic transaction will significantly accelerate Imagia’s mission of delivering AI-driven accessible personalized healthcare solutions. Augmenting Imagia’s deep learning expertise with Cadens’ capabilities in clinical AI and imaging was extremely compelling, to ensure our path from validation to commercialization,” stated Imagia CEO Frederic Francis in a press release. “This is particularly true for our initial focus on developing oncology biomarkers that can improve cancer care by predicting a patient’s disease progression and treatment response.”
Imagia co-founder and CTO Florent Chandelier said, “Our combined team will build upon the long-term outlook of clinical research together with healthcare partnerships, and the energy and focus of a technology startup with privileged access to deep learning expertise and academic research from Yoshua Bengio’s MILA lab. We are now uniquely positioned to deliver AI-driven solutions across the healthcare ecosystem.”
In prepared remarks, Imagia board chair Jean-Francois Pariseau stated, “Imaging evolved considerably in the past decade in terms of sequence acquisition as well as image quality. We believe AI enables the creation of next generation diagnostics that will also allow personalization of care. The acquisition of Cadens is an important step in building the Imagia platform and supports our strategy of investing in ground breaking companies with the potential to become world leaders in their field.”
Ross Intelligence is where AI meets legal research. The firm was founded in 2015 by Andrew Arruda, Jimoh Ovbiagele and Pargles Dall’Oglio, machine learning researchers from the University of Toronto. Ross, headquartered in San Francisco, in October 2017 announced an $8.7 million Series A investment round led by iNovia Capital, which sees an opportunity to compete with the legal research firms LexisNexis and Thomson Reuters.
The platform helps legal teams sort through case law to find details relevant to new cases. Using standard keyword search, the process takes days or weeks. With machine learning, Ross aims to augment the keyword search, speed up the process and improve the relevancy of terms found.
“Bluehill [Research] benchmarks Lexis’s tech and they are finding 30 percent more relevant info with Ross in less time,” stated Andrew Arruda, co-founder and CEO of Ross, in an interview with TechCrunch.
Ross uses a combination of off-the-shelf and proprietary deep learning algorithms for its AI stack. The firm also uses IBM Watson for some of its natural language processing. To build training data, Ross is working with 20 law firms to simulate workflow examples and test results.
Ross has raised a total of $13.1 million in four rounds of financing, according to Crunchbase.
The firm recently hired Scott Sperling, former head of sales at WeWork, as VP of sales. In January, Ross announced its new EVA product, a brief analyzer with some of the power of the commercial version. Ross is giving it away for free to seed the market. The tool can check the recent history related to cited cases and determine if they are still good law, in a manner similar to that of LexisNexis Shepard’s and Thomson Reuters KeyCite, according to an account in LawSites.
EVA’s coverage of cases includes all US federal and state courts, across all practice areas. “With EVA, we want to provide a small taste of Ross in a practical application, which is why we are releasing it completely free,” Arruda told LawSites. “We’re deploying a completely new way of doing research with AI at its core. And because it is based on machine learning, it gets smarter every day.”
Phenomic AI Uses Deep Learning to Assist Drug Discovery
Phenomic AI is developing deep learning solutions to accelerate drug discovery. The company was founded in Toronto in June 2017 by Oren Kraus, from the University of Toronto, and Sam Cooper, a graduate of the Institute of Cancer Research in London. The aim is to use machine learning algorithms to help scientists studying image screenings to learn which cells are resistant to chemotherapy, thus fighting the recurrence of cancer in many patients. The AI enables the software to comb through thousands of cell culture images to identify those responsible for being chemo-resistant.
“My PhD at U of T was looking at developing deep-learning techniques to automate the process of analyzing images of cells, so I wanted to create a company looking at this issue,” stated Kraus in an account in StartUp Here Toronto. “There are key underlying mechanisms that allow cancer cells to survive in the first place. If we can target those underlying mechanisms that prevent cancer coming back in entire groups of patients, that’s what we’re going for.”
Cooper is working towards his PhD with the department of Computational Medicine at Imperial College, London, and also with the Dynamical Cell Systems team at the Institute of Cancer Research. His research focuses on developing deep and reinforcement learning solutions for pharmaceutical research.
An early research partner of Phenomic AI is the Toronto Hospital for Sick Children, in a project to study a hereditary childhood disease.
The company has raised $1.5 million in two funding rounds, according to Crunchbase.
Erudite.ai is marketing ERI, a product that aims to connect a student who needs help on a subject with a peer who has shown expertise in the same subject. The company was founded in 2016 in Montreal and has raised $1.1 million to date, according to Crunchbase. The firm uses an AI system to analyze the content of conversations and specific issues the student faces. From that, it generates personalized responses for the peer-tutor. ERI is offered free to students and schools.
Erudite.ai is competing for the IBM Watson XPrize for Artificial Intelligence, having been named in December one of the top 10 teams from 150 entrants competing for $5 million in prize money. President and founder Patrick Poirier was quoted in The Financial Post on the market opportunity: “Tutoring is very efficient at helping people improve their grades. It’s a US$56 billion market. But at $40 an hour, it’s very expensive.” Erudite.ai is giving away its product, for now. The plan is to go live in September and host 200,000 students by year-end. By mid-2019, the company plans to sell a version of the platform to commercial tutoring firms, to help them speed teaching time and reduce costs.
The company hopes to extend beyond algebra to geometry, then the sciences, in two years. “The AI will continue to improve,” states Poirier. “In five years, I hope we will be helping 50 million people.”
Keatext’s AI platform interprets customers’ written feedback across various channels to highlight recommendations aimed at improving the customer experience. The firm’s product is said to enable organizations to audit customer satisfaction, identify new trends, and keep track of the impact of actions or events affecting the clients. Keatext’s technology aims to mimic human comprehension of text to deliver reports to help managers make decisions.
The company was founded in 2010 in Montreal by Narjes Boufaden, first as a professional services company. From working with clients, the founder identified a gap in the text analytics industry she felt the firm could address. In 2014, Keatext began offering a SaaS product.
Boufaden holds an engineering degree in computer science and a PhD in natural language processing, earned under the supervision of Yoshua Bengio and Guy Lapalme. Her expertise is in developing algorithms to analyze human conversations. She has published many articles on NLP, machine learning, and text mining from conversational texts.
Keatext in April announced a new round of funding, adding CA$1.72 million to support commercial expansion, bringing the company’s funding total to CA$3.32 million since launching its platform two years ago. “This funding will help us gain visibility on a wider scale as well as to consolidate our technological edge,” stated Boufaden in a press release. “Internet and intranet communication allows organizations to hold ongoing conversations with the people they serve. This gives them access to an enormous amount of potentially valuable information. Natural language understanding and deep learning are the keys to tapping into this information and revealing how to better serve their audiences.”
Founded in 2013 in Montreal, Dataperformers is an applied research company that works on advanced AI technologies. The company has attracted top AI researchers and engineers to work on Deep Learning models to enable E-commerce and FinTech business uses.
Calling Dataperformers “science-as-a-service,” co-founder and CEO Mehdi Merai stated, “We are a company that solves problems through applied research work in artificial intelligence,” in an article in the Montreal Gazette. Among the first clients is Desjardins Group, an association of credit unions using the service to analyze large data volumes, hoping to discover hidden patterns and trends.
Dataperformers is also working on SpecterNet, a search engine for video that combines AI and computer vision to find specific content. Companies could use the search engine to identify videos where their products appear, then market the product to the video’s audience. The company is using reinforcement learning to help the video-search AI learn on its own.
Botler.ai was founded in January 2018 by Ritika Dutt, COO, and Amir Moraveg, CEO, as a service to help victims of sexual harassment determine whether they have been violated. The bot was created after cofounder Dutt’s own experience of harassment.
She was unsure how to react after the experience, but once she researched the legal code, she gained confidence. “It wasn’t just me making things up in my head. There was a legal basis for the things I was feeling, and I was justified in feeling uncomfortable,” she stated in an account in VentureBeat.
The bot uses natural language processing to determine whether an incident could be classified as sexual harassment. It learned from 300,000 court cases in Canada and the US, drawing on testimony from court filings, since testimony aligns most closely with conversational tone. The bot can generate an incident report.
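The pipeline described above, classifying free-text accounts against labeled examples, can be illustrated with a toy bag-of-words Naive Bayes classifier. This is not Botler.ai’s actual model (which is not public); the tiny corpus, labels, and scoring below are invented purely for illustration.

```python
# Toy sketch of the idea behind a harassment-classification bot: a bag-of-words
# Naive Bayes classifier trained on labeled text snippets. A real system would
# train on hundreds of thousands of court documents, not four sentences.
import math
from collections import Counter

def train(examples):
    """examples: list of (text, label). Returns per-label word counts and example totals."""
    counts = {}          # label -> Counter of words
    totals = Counter()   # label -> number of training examples
    for text, label in examples:
        counts.setdefault(label, Counter()).update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Pick the label with the highest log-posterior, using add-one smoothing."""
    vocab = {w for c in counts.values() for w in c}
    best_label, best_score = None, float("-inf")
    for label, wordcounts in counts.items():
        score = math.log(totals[label] / sum(totals.values()))  # prior
        denom = sum(wordcounts.values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((wordcounts[word] + 1) / denom)   # smoothed likelihood
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented training snippets for illustration only.
examples = [
    ("he made repeated unwanted advances at work", "harassment"),
    ("supervisor demanded favors and threatened my job", "harassment"),
    ("we disagreed about the project deadline", "other"),
    ("the meeting ran long and was poorly organized", "other"),
]
counts, totals = train(examples)
print(classify("unwanted advances from my supervisor", counts, totals))
```

A production system would use far richer features and vastly more data, but the core idea, scoring an account against word statistics learned from labeled cases, is the same.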
This is Botler.ai’s second product, following a bot made last year to help people navigate the Canadian immigration system.
Yoshua Bengio of MILA is an advisor to the startup.
With its confluence of academics, international accessibility, culture of collaboration, many startups and access to capital, Montreal may be poised to become the next Silicon Valley. This might be especially true given the current American political climate, hostile to the international cooperation on which research institutions and technology companies thrive.
Montreal is benefitting today from a long-term commitment by the Canadian government to fund AI research.
“Canada has supported the fundamental basics of AI by financing Bengio (Yoshua Bengio, University of Montreal and MILA), LeCun (Yann LeCun, VP and Chief AI Scientist, Facebook) and Geoff Hinton (University of Toronto and Google) over 25 years, back to when AI was not as strong a bet,” said Chris Arsenault, General Partner, iNovia Capital, Montreal, in an interview with AI Trends. “That’s why Canada is in such a great position right now.”
These scientists are a big pull for Canada in attracting students and the many big technology companies that have opened research labs in the country, especially in Montreal and Toronto. These include: the IBM AI Lab; the Facebook AI Research (FAIR) center; the Google AI Lab; Microsoft (which acquired Maluuba in January 2017); Tencent, via an investment in Element.ai; Intel, also via Element.ai; the Google DeepMind center; the Samsung AI Center; the Thales Centre of Research & Tech in AI; the RBC (Royal Bank of Canada) Borealis AI center; Uber AI; an ADM AI lab (opening soon); NVIDIA; SunLife; Adobe; LG; Fujitsu; and TD (Toronto-Dominion Bank)/Layer 6.
“We are just starting to see the fruits of the results of all this research in the form of companies with business models and platforms incorporating AI,” Arsenault said. Advances in chip design and availability of compute power via the cloud are also enabling the rush. “This was not possible five or 10 years ago,” Arsenault added.
Companies Finding AI Talent in Montreal
A chief attraction for companies pursuing AI research and commercialization is access to top talent centered around the universities, in particular McGill University and the University of Montreal, home of the Montreal Institute for Learning Algorithms (MILA), said to be one of the largest deep learning labs in the world. Partly this is due to the accomplishments of Dr. Bengio, one of the world’s leading deep learning researchers. (See Executive Interview with Dr. Bengio in AI Trends.)
“Montreal has the largest concentration of deep learning academics in the world. This attracts some of the best students, postdocs, professors, researchers, engineers and entrepreneurs interested in contributing to the ongoing AI revolution,” Dr. Bengio stated.
The Canadian government’s commitment to AI is exemplified in its support for MILA. The government of Quebec recently allocated $80 million over the next five years to support its growth, and the federal government’s Pan-Canadian AI Strategy unit has granted MILA $44 million to support its activities.
The MILA mission is to attract and retain talent in the machine learning field; to propel advanced research in deep learning and reinforcement learning; to transfer technology by supporting private AI startups and established businesses; and to contribute to the social dialogue and the development of applications that benefit society.
The new Facebook Artificial Intelligence Research (FAIR) lab in Montreal will be led by McGill University professor Joelle Pineau, a member of MILA. The plan is to employ research scientists and engineers engaged in a wide range of projects, with a focus on reinforcement learning and dialog systems.
“Montreal already has an existing fantastic academic AI community, an exciting ecosystem of startups, and promising government policies to encourage AI research,” stated LeCun in a press release about the investment. “We are excited to become part of this larger community, and we look forward to engaging with the entire ecosystem and helping it continue to thrive.”
“For many years, I have seen a steady stream of talented AI researchers with Masters and PhDs from our universities move to the US to find the best research jobs,” Prof. Pineau stated in a release from McGill University. “They will now have an opportunity to do this right here in Montreal. The Montreal FAIR Lab will initially launch with ten researchers, with the aim of scaling up to more than 30 researchers in the coming year.”
Technical talent in Montreal is attracted to companies that offer a chance to publish papers and “do something good for humanity,” in the words of Patrick Poirier, chief technology officer of startup Erudite AI. “Trying to fight for talent with pure cash is a losing bet for startups in Montreal,” he told Daniel Faggella, the founder of Tech Emergence, a market research company focused on AI and machine learning, who spent 12 days visiting AI-related ventures and executives in Montreal last year and wrote an account of his conclusions.
Montreal Cost of Living, Diversity Are Strengths
Montreal’s culture, lifestyle and relatively low cost of living compared with other urban tech centers such as San Francisco and Boston are also attractive.
One technologist who made the move from Silicon Valley to Montreal is Maxime Chevalier-Boisvert, who returned to Montreal in mid-2017 after working at Apple for 13 months, according to an account in the New York Times. She had an opportunity to work with Yoshua Bengio at MILA and could not pass it up. Her title at MILA is Architect of Imaginary Machines. While her salary was about one-third of what she made at Apple, her rent for a two-bedroom apartment in Montreal was less than a third of the monthly rent she paid for a one-bedroom apartment in Sunnyvale. “Living in Montreal is pretty good,” she stated.
The Montreal AI culture has also attracted investments from those concerned with the social impact and risks of AI. The Open Philanthropy Project in July 2017 awarded $2.4 million to MILA to support “technical research on potential risks from advanced AI,” stated the announcement from OPP, which has a focus area on Global Catastrophic Risks that includes advanced AI. The OPP’s two primary aims are to increase high-quality research on the safety of AI, and the number of people knowledgeable about both machine learning and the potential risks of AI.
Montreal’s diversity of culture is also helping to attract talent. Dr. Alexandre Le Bouthillier, founder of machine vision healthcare company Imagia, observed that most talent in Montreal’s AI community is foreign-born, with his own team coming from all over the globe. “Smart people know that talent attracts talent,” he has stated.
Montreal and Toronto are benefitting from a Canadian immigration strategy consistent with the country’s AI initiative. Canada launched a fast-track visa program for high-skilled workers in the summer of 2017. Today, foreign students make up 20 percent of all students at Canadian universities, compared with less than five percent in the US, according to a recent account in Politico written by two University of Toronto professors, Richard Florida and Joshua Gans. Canadian immigration law also makes it easier for foreign students to remain in Canada after they graduate.
Since the election of Donald Trump as US president in November 2016, applications to Canadian universities have spiked upward. International student applications jumped 70 percent in the fall of 2017 compared to the previous year; applications to McGill University in Montreal jumped 30 percent; and those to the University of British Columbia in Vancouver increased by 25 percent, according to the authors.
Canadian Prime Minister Justin Trudeau views immigrants as contributing to the growth of the Canadian economy, particularly in areas of technical innovation. “People choosing to move to a new place are self-selected to be ambitious, forward-thinking, brave and builders of a better future,” he stated in a recent account in TechCrunch. “For someone who chooses to do this to ensure their kids have a good life is a big step.” The Canadian perspective on innovation is helping to attract talent not only for the opportunity to conduct technical research but also to study “the consequences of AI, the consequences of automation,” Trudeau stated.
French culture has a big impact on Montreal, extending beyond the delis and coffee shops into business life. French is the primary language in many of the larger businesses and at several of the top universities, including the University of Montreal.
Montreal Attracting Investment Capital
The ability of Montreal’s universities and startups to attract capital from tech giants and investors has helped to cement its position. The ability of Montreal-based platform and incubator Element AI to raise $102 million in a Series A round of investment in June 2017 was a tipping point. The firm’s mission is to lower the barrier to entry for commercial applications in AI by offering AI talent and resources to companies that need to supplement their own staffs.
The round was led by Data Collective, which backs entrepreneurs applying deep learning technologies to transform giant industries, and included as partners Microsoft Ventures and NVIDIA. The Series A round came six months after Element AI announced a seed round from Microsoft Ventures (for an undisclosed amount) and eight months after the company launched.
The firm’s approach is to build an “incubator” or “safe space” where companies that might sometimes compete sit alongside each other and collaborate to build new products. Some believe this may be an industry first. Data Collective sees an opportunity to close the gap between the AI haves and have-nots.
“There is not a lot left in the middle,” Data Collective managing partner Matt Ocko told TechCrunch. “The issue with corporations, governments and others trapped in that no man’s land of AI ‘have-nots’ is that their rivals with superior AI-powered decision making and signal processing will dominate global markets.”
Element AI foresees initial product pickup in areas of: predictive modeling, forecasting models for small data sets, conversational AI and natural language processing, aggregation techniques based on machine learning, reinforcement learning for physics-based motion control, statistical machine learning algorithms, voice recognition, fluid simulation and consumer engagement optimization.
Element AI is not yet discussing customer engagements in depth, a spokesman told AI Trends, but they have signed up as customers the Port of Montreal, Radio-Canada (Canadian media company) and the Canadian Space Agency. According to a recent article in Fortune, the company sees an opportunity to embed itself in large organizations that may use Google for email and Amazon for web services, but are reluctant to give those companies access to internal databases with company-sensitive information. Element AI sees an opportunity to position as a more ethical AI company than those involved with military contracts and election influencers.
The future looks good for AI innovation out of Montreal. Karam Thomas, founder and CEO of CognitiveChem, a company leveraging AI to help chemists develop safer chemicals, stated, “Montreal’s unique advantage lies in its collaborative research between academia, startups and corporations.” Montreal’s AI boosters are hoping that collaboration will spur more entrepreneurs to build sizable new companies.
The most recent issue of MIT Technology Review presents its annual list of 35 Innovators Under 35. Of these, 15 (43%) are AI-based. Another three are in computational synthetic biology, which depends on deep learning.
Similarly, the website Angel.co, which tracks the formation of and investment in startups, shows about 6,800 companies specifically related to AI. That’s probably understated. I’d round up to an even 10,000.
So it’s no surprise that AI is the siren song that launched 10,000 ships. The real question is how many will survive for even the next three years?
We’re not talking about how existing companies should capitalize on AI to enhance their business. We’re talking about how to become the next Google, Facebook, or Amazon with a lead so dominant that no one can catch up.
The Single Key Strategy that Defines AI Success: Data Dominance
Start to look at individual companies and you’ll see that they are focused on their technology, the user experience, and their product or platform. This perspective will take them no further than being just another product, or perhaps only a feature. It will not make them a long-term viable company that returns their investors’ capital, much less the desired multiple.
To create a successful AI company you must create such a wide moat that no one can catch up unless they pay your price. That moat is not about technology. There are essentially no monopolies on deep learning technologies, only leaders that can quickly be copied.
The secret to a wide moat in AI is to have a virtual monopoly on the data you are using to train. In this case monopoly also means such a large lead in users and data volume that no one can reasonably catch up.
How to Create a Data Monopoly
All AI companies face the same barrier when starting out: how to obtain enough data to train their product.
Everyone recognizes the virtuous feedback cycle: users generate data, data improves the product, and a better product attracts more users. But without users you can’t generate sufficient data to begin with, and so it continues.
The question they should be asking, even before taking investment, is how the data can be acquired in a way that is strategically defensible. The answer to this question will simply eliminate many markets and applications where data is not defensible or where competitors already have substantial leads.
For example, there’s no wide moat available in advertising. Google dominates search-based advertising and Facebook dominates social media based advertising. General e-commerce? Can’t beat the lead that Amazon has in learning about our personal shopping desires. These three industry giants clearly have defensible positions by virtue of their dominant data.
So How Then to Identify and Collect Defensible Data
A defensible data strategy is not something you can sprinkle on any AI startup. It starts by carefully selecting the industry and the problem to be solved. These are not easy to find, but here are some examples to get your thought processes started.
You’ll find here a unique blend of identifying markets and market needs where the addition of AI creates opportunity. You’ll also see examples of creating new types of data in existing markets that competitors can’t duplicate.
Here are a few selected examples that exemplify good data strategies:
Blue River Technology: This company offers agricultural optimization by evaluating each plant individually at each stage of growth. There are plenty of competitors that use drones or stationary sensors to divide a field into smaller segments to be optimized, but no competitor does this on a plant-by-plant basis.
Their technology platform consists of 30-foot-wide arms on the front of a tractor that literally take an image of each plant (think lettuce, for example) as the arm passes over. Based on its AI model, the platform makes an instantaneous decision to provide water or fertilizer, or to apply an herbicide. No sense putting energy into a plant that’s not going to make it, or into a weed. Blue River calls this ‘see and spray’.
The process of getting the training data wasn’t simple; it involved a significant investment in running their prototype platform over farm fields to acquire images of individual plants, which were then coded for health, sickness, and optimum use of fertilizer and water. They now have the world’s largest database of plant images, which continues to grow with each pass of their equipment over a field. Their lead in plant-level AI image training data is unassailable.
[Editor’s Note: Deere & Company acquired Blue River Technology in September 2017 for $305 million.]
By Sultan Meghji, Founder & Managing Director at Virtova
I remember grumbling, “Good lord this is a waste of time,” in 1992 while I was working on an AI application for lip-reading. The grumble escaped my lips because I felt like I was spending half my time inputting data cleanly into the video processing neural network. Bouncing from a video capture device to a DEC workstation to a Convex Supercomputer to a Cray, I felt like I had been thrown into a caldron of Chinese water torture.
Sitting over my head was a joke happy birthday poster from Arthur C. Clarke’s Space Odyssey series featuring HAL 9000. I found it ironic that I was essentially acting like a highly-trained monkey, while a fictional AI stared down at me, laughing. Over the two years of that AI project, I easily spent 60% of my time just getting the data captured, cleaned, imported and in a place where it could be used by the training system. AI, as practitioners know, is the purest example of garbage in, garbage out. The worst part is that sometimes you don’t realize it until your AI answers “anvil” when you ask it what someone’s favorite food is.
Last month, I was having a conversation with the CEO of a well-respected AI startup when I was struck by deja-vu. He said, “I swear, we have spent at least half of our funding on data management.” I wondered if this could actually be the case, so I pushed him, probing him with questions on automation, data quality and scaling. His answers all sounded remarkably familiar. Over the next two weeks, I contacted a few other AI startup executives — my criteria was that they had raised at least $10 million in funding and had a product in the market — and their answers were all strikingly similar.
To be sure, significant improvements are being made in decreasing the amount of information needed to train AI systems and in building effective learning-transference mechanisms. This week, in fact, Google revealed solid progress with the news that its AlphaGo is now learning automatically from itself. These advancement trends will continue, but such innovations are still very much in their early stages. In the meantime, AI hype is very likely to outstrip real results.
So what are some things that can be done to raise the quality of AI development? Here are my suggestions for building a best-in-class AI system today:
Rely on peer-reviewed innovation. Companies using AI backed by thoughtful study, preferably peer reviewed by academics, are showing the most progress. However, that scrutiny should not stop with the algorithm. That same critical analysis should be true of the data. To that point, I recently suggested to a venture capital firm that if the due diligence process for a contemplated investment revealed a great disparity between the quality of the algorithms and the quality of the data utilized by the start-up, it should pass on the investment. Why? Because that disparity is a major red flag.
Organize data properly. There is an incredible amount of data being produced each day. But keep in mind that training data and production data are different, and data must be stabilized as you move from a training environment to a production one. As such, utilizing a cohesive internal data model is critical, especially if the AI is built according to a ‘data-driven’ architecture vs. a ‘model-driven’ system. Without a cohesive system, you have a recipe for disaster. As one CEO recently told me, a year of development had to be discarded because his company hadn’t configured its training data properly.
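One lightweight way to enforce that cohesion, offered as a sketch rather than a prescription, is to route every record, training or production, through a single shared validator. The schema, field names, and label set below are invented for illustration.

```python
# Sketch of a single internal data model shared by training and production
# pipelines. The point is that one validator guards both environments, so a
# model never trains on records that production would reject (or vice versa).
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Record:
    user_id: str
    text: str
    label: Optional[str] = None  # present in training data, absent in production

# One schema, expressed as per-field checks (all hypothetical).
SCHEMA = {
    "user_id": lambda v: isinstance(v, str) and len(v) > 0,
    "text": lambda v: isinstance(v, str) and 0 < len(v) <= 10_000,
    "label": lambda v: v is None or v in {"positive", "negative", "neutral"},
}

def validate(raw: dict) -> Record:
    """Apply the same checks regardless of whether the record is destined
    for the training set or the production inference path."""
    for field, check in SCHEMA.items():
        if not check(raw.get(field)):
            raise ValueError(f"bad field {field!r}: {raw.get(field)!r}")
    return Record(raw["user_id"], raw["text"], raw.get("label"))

# A labeled training record and an unlabeled production record pass the
# identical validator:
train_rec = validate({"user_id": "u1", "text": "great service", "label": "positive"})
prod_rec = validate({"user_id": "u2", "text": "slow response"})
```

Because both paths share SCHEMA, a change to the data model is made once and applies everywhere, rather than drifting between the lab and production.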
Automate everything in the production environment. This goes hand in hand with being organized, but it needs to be called out separately. Transitioning from the research lab to the production environment, no matter what system you are building, requires a fully automated solution. One of the benefits of the maturation of Big Data and IOT systems is that building such a solution is a relatively straightforward part of developing an AI system. However, without full automation, errors in learning, production and a strain on human resources compound flaws and make their repair exceedingly difficult.
Choose quality over quantity. Today, data scientists find themselves in a situation where a large amount of the data they collect is of terrible quality. An example is clinical genetics, where the data sources used to analyze gene sequence variation are so inconsistent that ‘database of databases’ systems have been built to make sense of the datasets. In the case of genetic analysis systems, for example, over 200 separate databases are often utilized. Banks, too, often must extract data from at least 15 external systems. Without a systematic basis for picking and choosing the data, any variances in data can work against the efficacy of an AI system.
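A “systematic basis” for choosing among inconsistent sources can be as simple as a deterministic consensus-plus-precedence rule. The source names and values below are hypothetical; the sketch only illustrates resolving disagreements by an explicit policy rather than ad hoc.

```python
# Sketch: reconcile one field reported by several source databases.
# Strategy (an assumption, not any specific vendor's method): take the
# majority value when a clear majority exists; otherwise fall back to a
# fixed trust ranking of the sources.
from collections import Counter

TRUST_ORDER = ["curated_db", "partner_feed", "public_scrape"]  # most -> least trusted

def reconcile(reports):
    """reports: dict mapping source name -> reported value. Returns one value."""
    votes = Counter(reports.values())
    value, count = votes.most_common(1)[0]
    if count > len(reports) / 2:      # clear majority wins
        return value
    for source in TRUST_ORDER:        # otherwise defer to the trust ranking
        if source in reports:
            return reports[source]
    raise ValueError("no recognized source")

# Three sources, two agree: majority wins.
print(reconcile({"curated_db": "BRCA1", "partner_feed": "BRCA1", "public_scrape": "BRCA-1"}))
# Two sources disagree one-to-one: the more trusted source wins.
print(reconcile({"partner_feed": "variant-A", "public_scrape": "variant-B"}))
```

The specific policy matters less than having one: once the rule is written down, every variance across the 200-odd databases is resolved the same way every time.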
Scale your data (and that’s hard to do). Given my previous comments about Big Data and IOT, you might think that scaled data management is easily available. But you would be wrong. That’s because once you clear the previous four steps, you may end up with very small relevant sample sets. In some applications, a small dataset may represent a good start; however, that doesn’t fly in AI systems. Indeed, would you want to release an AI program such as autonomous cars or individualized cancer drugs into the wild after being trained on a small database?
In aggregate, the considerations described above represent some fundamental starting points for ensuring that you are holding your data to the same standards to which you hold your AI. Ahead of coming technical advancements, especially around data management and optimization in algorithm construction, these tenets are a good starting point for those trying to avoid the common garbage in, garbage out issues that are (unfortunately) typifying many AI systems today.
The author is an experienced executive in high tech, life sciences and financial services. Starting his career as a technology researcher over 25 years ago, he has served in a number of senior management roles in financial services firms, as well as starting and exiting a number of startups.
Meetings cost time and money to run, and many of them are unnecessary, says Testfire Labs CEO Dave Damer. His solution: the company’s AI assistant, Hendrix.ai.
Currently in its beta test phase, it takes a meeting’s minutes, noting questions, answers and action items by listening via microphone. Its meeting summaries leave out “chit chat” for clarity. Exact transcripts aren’t kept for reasons of confidentiality, said Damer, who founded the company in 2017.
“The demands to do more with less in modern business keep increasing,” Damer said. “AI gives us an opportunity to legitimately take things off peoples’ hands that are generally mundane tasks so they can focus on higher-value work.”
Hendrix.ai also tracks attendance rates, numbers of last-minute meetings and meeting lengths.
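Hendrix.ai’s internals are not public, but the kind of first-pass parsing involved in separating questions and action items from chit chat can be sketched with simple heuristics. The cue patterns below are assumptions for illustration only; a real product would layer statistical NLP on top of cues like these.

```python
# Toy heuristics for tagging transcript lines as questions, action items,
# or chit chat. The cue words and phrases are illustrative assumptions.
import re

ACTION_CUES = re.compile(r"\b(will|need to|should|let's|by (monday|friday|next week))\b", re.I)

def tag_line(line):
    """Classify one transcript line with surface cues only."""
    line = line.strip()
    if line.endswith("?"):
        return "question"
    if ACTION_CUES.search(line):
        return "action_item"
    return "chit_chat"

transcript = [
    "How did the pilot go?",
    "Sam will send the updated report by Friday.",
    "Nice weather today.",
]
for line in transcript:
    print(tag_line(line), "-", line)
```

A summary that keeps only the `question` and `action_item` lines drops the small talk, which is the behavior Damer describes.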
On May 25, Testfire Labs won a Startup Canada regional innovation award for its work on Hendrix.ai. Startup Canada CEO Victoria Lennox said the adjudicators liked how Testfire Labs integrated AI into audio-to-text technology with Hendrix.ai.
“There’s a lot of audio-to-text tools and they’re growing more and more,” Lennox said. What made Hendrix.ai different was its focus on meetings.
Damer’s goal is for Hendrix.ai to reach companies with more than 1,000 staff. It’s being tested by 100 organizations in its beta phase, including the City of Victoria and the Northern Alberta Institute of Technology. Torsten Prues of NAIT’s information technology department said he’s been using the system with a team of six since January.
“What made us interested (in using Hendrix.ai) was that NAIT is very meeting-heavy, and people don’t like taking minutes,” he said.
Edmonton: ‘On the cusp’ of tech
Damer graduated from the University of Alberta in 1991 as a computer engineer and has 25 years of experience in the technology industry. He calls Edmonton a good home for a tech startup, with an industry that’s “on the cusp.”
Before Testfire Labs, Damer founded ThinkTel Communications Ltd. in 2003, and spent 14 years there. ThinkTel is now the business services division of Distributel, an independent communications company.
Damer started Testfire Labs as a more creative project. The company is currently valued at $5 million and has 10 employees; Damer hopes to grow the business to $20 million next year. He sees Hendrix.ai becoming an asset to workplaces as more tasks, like note taking, become automated.
Randy Goebel, a professor at the University of Alberta and expert in natural language processing, said applying the science of natural language understanding to everyday use is “extremely difficult in practice.” The science is there, he said, but businesses like Hendrix are challenged with translating that science into something people will pay for.
“They provide a line of sight to scientists to add value to their work,” said Goebel, who is also a researcher at the Alberta Machine Intelligence Institute.
While the system is being honed for summarizing meetings, Damer plans to include more features that track whether certain speakers dominate meetings, gauge what tones discussions take, and find possible areas where different teams can collaborate.
“You can do so much more than notes,” he said. “We can do tone analysis on whether it was a positive, negative or neutral conversation. Was there joy in the words, or was there fear? What are the emotions that are being expressed?”
When you think of artificial intelligence (AI), you might not immediately think of the healthcare sector.
However, that would be a mistake. AI has the potential to do everything from predicting readmissions, cutting human error and managing epidemics to assisting surgeons to carry out complex operations.
Here we take a closer look at three intriguing stocks using AI to forge new advances in treating and tackling disease. To pinpoint these three stocks, we used TipRanks’ data to scan for ‘Strong Buy’ stocks in the healthcare sector. These are stocks with substantial Street support, based on ratings from the last three months. We then singled out stocks making important headway in AI and machine learning.
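The screening step described above is essentially a filter over analyst-rating data. The sketch below shows the shape of that logic with made-up numbers; the tickers, counts, and the 80% buy-ratio threshold are illustrative assumptions, not real TipRanks output or methodology.

```python
# Hypothetical stock screen: keep healthcare names whose recent analyst
# ratings amount to a 'Strong Buy' consensus. All data here is invented.
stocks = [
    {"ticker": "BTAI", "sector": "healthcare", "buys": 5,  "holds": 0, "sells": 0},
    {"ticker": "XYZ",  "sector": "healthcare", "buys": 2,  "holds": 3, "sells": 1},
    {"ticker": "MSFT", "sector": "technology", "buys": 20, "holds": 2, "sells": 0},
]

def is_strong_buy(s, min_ratings=3, buy_ratio=0.8):
    """Consensus test: enough recent ratings, and most of them buys."""
    total = s["buys"] + s["holds"] + s["sells"]
    return total >= min_ratings and s["buys"] / total >= buy_ratio

screened = [s["ticker"] for s in stocks
            if s["sector"] == "healthcare" and is_strong_buy(s)]
print(screened)  # ['BTAI']
```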
BioXcel Therapeutics Inc.
This exciting clinical stage biopharma is certainly unique. BioXcel (BTAI) applies AI and big data technologies to identify the next wave of neuroscience and immuno-oncology medicines. According to BTAI this approach uses “existing approved drugs and/or clinically validated product candidates together with big data and proprietary machine learning algorithms to identify new therapeutic indices.”
The advantage is twofold: “The potential to reduce the cost and time of drug development in diseases with substantial unmet medical need,” says BioXcel. Indeed, we are talking $50–100 million, a fraction of the more than $2 billion typically associated with the development of novel drugs. Right now, BioXcel has several therapies in its pipeline, including BXCL501 for prostate and pancreatic cancer. And it seems like the Street approves. The stock has received five buy ratings in the last three months with an average price target of $20.40 (115% upside potential).
“Unlocking efficiency in drug development” is how H.C. Wainwright analyst Ram Selvaraju describes BioXcel’s drug repurposing and repositioning. “The approach BioXcel Therapeutics is taking has been validated in recent years by the advent of several repurposed products that have gone on to become blockbuster franchises (>$1 billion in annual sales).” However, he adds that “we are not currently aware of many other firms that are utilizing a systematic AI-based approach to drug development, and certainly none with the benefit of the prior track record that BioXcel Therapeutics’ parent company, BioXcel Corp., possesses.”
Software giant Microsoft (MSFT) believes that we will soon live in a world infused with artificial intelligence. This includes healthcare.
According to Eric Horvitz, head of Microsoft Research’s Global Labs, “AI-based applications could improve health outcomes and the quality of life for millions of people in the coming years.” So it’s not surprising that Microsoft is seeking to stay ahead of the curve with its own Healthcare NExT initiative, launched in 2017. The goal of Healthcare NExT is to accelerate healthcare innovation through artificial intelligence and cloud computing. This already encompasses a number of promising solutions, projects and AI accelerators.
Take Project EmpowerMD, a research collaboration with UPMC. The purpose here is to use AI to create a system that listens and learns from what doctors say and do, dramatically reducing the burden of note-taking for physicians. According to Microsoft, “The goal is to allow physicians to spend more face-to-face time with patients, by bringing together many services from Microsoft’s Intelligent Cloud including Custom Speech Services (CSS) and Language Understanding Intelligent Services (LUIS), customized for the medical domain.”
On the other end of the scale, Microsoft is also employing AI for genome mapping (alongside St. Jude Children’s Research Hospital) and disease diagnostics. Most notably, Microsoft recently partnered with one of the largest health systems in India, Apollo Hospitals, to create the AI Network for Healthcare. Microsoft explains: “Together, we will be developing and deploying new machine learning models to gauge patient risk for heart disease in hopes of preventing or reversing these life-threatening conditions.”
Every once in a while, you meet an entrepreneur who is both fully present, but also has a head full of dreams. That was my experience meeting and hosting Alex Zhavoronkov, the founder and CEO of Insilico Medicine, a few weeks ago in Vienna at the Pioneers conference. There, he gave a presentation on how he is going to defeat aging using a set of deep learning AI tools, and also told me that I am going to live forever because I am young enough to benefit from the tech he is developing.
I am a huge skeptic, to be frank (particularly anytime deep learning gets bandied about), but after chatting with him both before and after getting on stage, I can’t preclude the possibility that aging is something that might be within humanity’s (or at least Zhavoronkov’s) grasp to control.
That belief in the company’s mission is reflected in a recent set of twin announcements. The company announced that it has received a strategic round of financing led by WuXi AppTec, a Chinese integrated R&D services platform, along with Peter Diamandis’ BOLD Capital and Pavilion Capital, a subsidiary of Singapore-based Temasek. In addition, the company announced a strategic partnership with WuXi, in which Insilico’s inventions will be tested by WuXi. The terms of the round were not disclosed, but Insilico has raised $14 million previously from investors according to Crunchbase.
In order to understand the company’s technology, we need to understand a bit more about how therapeutics are developed. In the classical model used by pharmaceutical companies, scientists in an R&D lab investigate naturally occurring molecules while searching for potential therapeutic properties. When they find a molecule that could be a candidate, they begin a series of tests to determine the treatment efficacy of the molecules (and also to receive FDA approval).
Rather than going forward through the process, Insilico works backwards. The company starts with an end objective — say stopping aging — and then uses a toolbox of deep learning algorithms to devise ideal molecules de novo. Those molecules may not exist anywhere in the world, but can be “manufactured” in the lab.
The key underlying technique for the company is what are known as GANs, or generative adversarial networks with reinforcement learning. At a high-level, GANs include a neural net “generator” that creates new products (in this case, molecules), and a discriminator that classifies the new product. Those neural nets then adapt over time in order to compete against each other more effectively.
GANs have been used to create fake photos that look almost photorealistic, but that no camera has ever taken. Zhavoronkov suggested to me that clinical patient data may one day be manufactured — providing far more data while protecting patient privacy.
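The generator/discriminator loop described above can be shown concretely on a toy problem. In the numpy sketch below, the "real data" is samples from a 1-D Gaussian, the generator is an affine map over noise, and the discriminator is a logistic classifier; the two are updated adversarially. This is purely an illustration of the GAN training dynamic, not Insilico's system, and every hyperparameter here is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data the generator must learn to mimic: samples from N(4, 1).
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

a, b = 1.0, 0.0   # generator g(z) = a*z + b, with noise z ~ N(0, 1)
w, c = 0.1, 0.0   # discriminator D(x) = sigmoid(w*x + c)
lr = 0.01

for step in range(3000):
    z = rng.normal(0.0, 1.0, 32)
    x_real = real_batch(32)
    x_fake = a * z + b

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w -= lr * np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    c -= lr * np.mean(-(1 - d_real) + d_fake)

    # Generator update (non-saturating loss): push D(fake) toward 1.
    d_fake = sigmoid(w * x_fake + c)
    a -= lr * np.mean(-(1 - d_fake) * w * z)
    b -= lr * np.mean(-(1 - d_fake) * w)

print(f"generated mean is roughly {b:.2f}; real mean is 4.0")
```

As training proceeds, the generator's offset `b` drifts toward the real-data mean because fooling the discriminator requires producing samples that look drawn from N(4, 1) — the same pressure that, at far larger scale, yields photorealistic fake images.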
While Zhavoronkov has bold dreams about conquering aging, today the company is focused more broadly on creating an inventory of new molecules that could provide new therapeutics, albeit particularly focused on longevity. Under the company’s new strategic partnership, WuXi will then take those new molecules and test them for efficacy in actual clinical settings.
R&D Mission Spawned Siri, Now Pursuing AI Innovations to Address Challenges from an Aging Population to Picking Apples
Dr. William Mark is president of SRI International’s Information and Computing Sciences Division, which conducts leading-edge research with a strong focus on intellectual property creation and commercialization. He formerly worked with National Semiconductor, and Lockheed Martin’s Palo Alto Research Labs. He holds a Ph.D. in computer science from MIT and has held positions at the University of Southern California Information Sciences Institute and the General Motors Research Laboratories. He recently spoke with AI Trends Editor John P. Desmond.
Q. Could you tell our readers a brief background of SRI and its mission?
A. SRI International is a nonprofit R&D company. Lots of companies do R&D, fewer than used to, but still quite a few. But very few companies have R&D as their business, and R&D is our business.
SRI started out as the Stanford Research Institute, but we’ve been independent of Stanford for decades. So we are an independent R&D company. We have clients all over the world. Our main client is the U.S. government, but we also have lots of commercial clients. And what we do for them is contract research and development. We are usually creating systems that have never been created before. And finally, one of the things we do is create spin off companies, and the most famous recent one is Siri.
Q. Can you talk about how Siri came about and the evolution of voice interaction that we’re seeing today?
A. Sure. SRI had worked on the core technology for Siri for decades. And again, this is part of the model I was talking about before. We do research for the United States government. We are allowed to keep the commercial rights to intellectual property that we create. The government gets government rights, we keep the commercial rights. And in some cases, we decide to exploit those through creating spin off companies. The underlying technology for Siri was based on decades of research.
And then right before Siri, we were engaged in a very large program with DARPA to create a personalized assistant that learned. And that really inspired the idea of creating a commercial personal assistant at the same time the iPhone was coming out. We decided to form a company that would create a personalized assistant on the iPhone so that you could talk to it. The primary form of interaction that we thought would be the most useful or convenient was voice interaction. And the rest is history. That turned out to be a very compelling idea for consumers, and it really has created this category of voice-based interactive systems that we now see in many parts of the world.
Q. Where would you say voice interaction is going now?
A. I think we are going to be seeing more voice interaction. And what gets to be interesting, as we go on to the future, is that we will be more and more in this Internet of Things environment, as it’s being called now. When you get into your car for example you’re getting into an environment that has many, many computers in it, 50 to 200 depending on the kind of car. And we’re seeing more and more sensors in the car. And there is a move to have the car environment know a lot more about the drivers and the passengers for safety reasons among others.
We’re going to see more sensors at home and at work. The implication is that instead of the way we think about human-computer interaction now, which is one person interacting with one computer, in this new world, it’s going to be one person interacting with many computers. And it’s going to be perhaps multiple people interacting in the presence of computers. That’s where human-computer interaction is going, and it goes beyond voice. You asked about where voice interaction is going, but in an environment like that, it’s going to be not only voice, but also things that visual sensors can pick up like the position of the person, where they’re looking. I think all of that will be part of interaction going into the future.
Q. What areas of AI innovation are you working on now at SRI?
A. We’re working hard on how to make systems that are better at interacting with humans who are trying to do what I will call sophisticated tasks. Right now, Siri, Amazon Alexa, systems like that have become extremely good at handling tasks like search: so, you know, finding pizza near here, where is an ATM? What’s the capital of Kansas? They’re fantastic for doing things like that.
They’re also good at performing simple tasks, play this music, turn on the lights, set up an appointment for me. But if somebody is going to do something a little bit more complicated, different technology is required. We always use the example of banking because we have a spinoff company called Kasisto in that space. Imagine a system that allows you to do all of your banking on a mobile phone. That’s where we’re trying to go with these systems that have the conversational capability and the background knowledge to help people do complicated tasks. The complicated tasks include banking, lots of different kinds of shopping, healthcare, things like that.
Everyone in AI is working on machine learning and we are too. We’re looking at not just using deep-learning technology, but also combining deep-learning with other kinds of machine learning to produce even better results. Those are just a few examples.
Q. How is SRI using AI to address the problem of an aging population?
A. One of our SRI-wide initiatives is working on dealing with this global problem of a larger percentage of the population being in a non-working age group, and also in a group that tends to need more health care. The AI applications of that include trying to help deal with problems like loneliness. A lot of people in the world, a lot of older people in the world are isolated. They’re not near their families. They don’t have the opportunity to interact as much as they should or would like to. These conversational systems can help to encourage people to talk.
And many geriatricians believe that alone is beneficial, just to get people to interact about things they want to talk about. It also provides an opportunity for us to use that interaction to help assess the person’s state of health in a way that is not intrusive and doesn’t invade privacy. You can tell a lot from the way people are talking about how they’re feeling; for example, whether they’re coughing or have other respiratory problems. Those are some of the uses of AI.
Q. How far along is this work?
A. This is work that’s in the research phase. And usually, what we do at SRI is build systems. We publish a lot of papers, but we also try to build things at least in the prototype stage. SRI rarely builds products. We either license our technology or we spin off companies to do that. This work is in the stage of building prototypes; there are now existing prototypes that can do some of this interaction.
Q. What are your thoughts on the issue of bias in AI algorithms? Can it be addressed?
A. Bias has many meanings. All AI systems are naturally biased by the information that they have to deal with. In the case of systems in which explicit knowledge is put in, like in the form of rules, the systems are biased by the rules that have been given to them. In the case of machine-learning systems that are data-driven, they’re biased by the data that they’re being shown.
All systems are biased to the extent that the data or the rules do not fully reflect the real world. They’re not going to necessarily be biased in a bad way. They’re still biased in the sense that we all are where if there’s data that we’re unaware of, we’re not going to be able to think about it and deal with it. The classic example in the world is the black swan. Until Australia was colonized by Europeans, the Europeans thought that there wasn’t such a thing as a black swan; they would use that as an example of something that didn’t exist. Well, then they discovered that in Australia and in other places, there are black swans. All systems have that bias based on the data that they know.
Q. What is intentional bias?
A. Well, that’s when somebody is trying to get the system to behave in a certain way through data or rules given to it. If I want the system to behave in a certain way, I can give it explicit rules that will make it behave that way. If I have a machine-learning system, I can show it data that will make it behave in a particular way.
And when you say intentional bias, that usually means that somebody is making a deliberate attempt to make the system behave in a way that’s not in accordance with the real world but is to their advantage.
Q. Can we trust AI?
A. That’s a big question. That’s something that we’re working on as are other people. And I guess the way I would answer that is that we need to create trustable AI systems. That raises the question of, how do you trust? How can you trust these systems? One way is through familiarity. One of my favorite examples is to ask people what’s their favorite man-machine interface? They almost never say the brake pedal in their car, but that is indeed a computer interface. And most people just completely trust that the computer system that’s making the brakes work is going to do the right thing. They have no idea how it works, usually, but they trust it because as far as they know, it’s worked in the past and other people seem to think it works; there is a level of trust.
Things that we’re working on here in a technical sense, are building AI systems including learning systems that work within certain constraints, so you can guarantee or prove that the system will work within certain constraints. Another approach is usually called explainable AI. That is an AI system that can explain to users in terms that they understand why it is making a decision, why it is about to take some action. Through these mechanisms, I think we can make AI systems that are more trustable.
Q. Could you describe the model for how SRI works with startups?
A. We’re working on many projects all the time. My group alone works on more than a hundred projects per year. And over time, we build platforms, technology platforms. We see the flow of ideas, and we begin to think that an idea might be something that could create a venture or some set of ideas that can create a venture.
The process usually starts out internally with us looking at the way the world is going. We have a great advantage being in Silicon Valley and just seeing the activity. Everybody knows people who are in various startups, and who are venture capitalists. So we have some idea of the pulse of where things are going and we know where the technology is going. When we see what we think is a convergence, then we will start formulating a venture concept. We will then get feedback on that idea. We’ll talk to venture capitalists that we know to see what they think of it, see whether, you know, they have seen that idea 17 times already in the last two weeks or, “Wow this looks really interesting,” or “No, that’s crazy.”
If we see that it’s interesting and that other people think that it’s interesting, we will then invest a little of our own seed money in it. And that’s mostly used to bring in an entrepreneur from the outside; it is quite rare that somebody in SRI is the right person to be the CEO of one of these venture-backed companies. It happens, but is rare. We bring in an entrepreneur and work with them to develop the value proposition, get funding, and then launch the company. We get an equity position in return for helping to found the company and for the intellectual property that we put in.
Q. Outside of Siri, what are some other notable spinoffs from SRI?
A. Siri is currently most famous. Another SRI spinoff is Nuance, which went on to become the number one speech company in the world. Intuitive Surgical may not be familiar but they make robotic surgery machines. It turns out that some operations require so much precision that it’s better to do it robotically, so that the surgeon is making the robot move but the actual surgery is done by robotic manipulators, not human hands. And that’s become very important for surgeries worldwide.
I mentioned Kasisto, the one that’s in the world of banking. Another one that’s interesting is Abundant Robotics which is building apple picking robots, which are very cool.