Executive Interview: Dr. Russell Greiner, Professor of Computing Science and founding Scientific Director of the Alberta Machine Intelligence Institute

After earning a PhD from Stanford, Russ Greiner worked in both academic and industrial research before settling at the University of Alberta, where he is now a Professor in Computing Science and the founding Scientific Director of the Alberta Innovates Centre for Machine Learning (now the Alberta Machine Intelligence Institute), which won the ASTech Award for “Outstanding Leadership in Technology” in 2006. He was Program Chair for the 2004 “Int’l Conf. on Machine Learning”, Conference Chair for the 2006 “Int’l Conf. on Machine Learning”, and Editor-in-Chief for “Computational Intelligence”, and serves on the editorial boards of a number of other journals. He was elected a Fellow of the AAAI (Association for the Advancement of Artificial Intelligence) in 2007, and was awarded a McCalla Professorship in 2005-06 and a Killam Annual Professorship in 2007. He has published over 200 refereed papers and patents, most in the areas of machine learning and knowledge representation, including 4 that have been awarded Best Paper prizes. The main foci of his current work are (1) bioinformatics and medical informatics; (2) learning and using effective probabilistic models; and (3) formal foundations of learnability. He recently spoke with AI Trends.

Dr. Russell Greiner, Professor in Computing Science and founding Scientific Director of the Alberta Machine Intelligence Institute

Q: Who do you collaborate with in your work?
I work with many very talented medical researchers and clinicians, on projects that range from psychiatric disorders, to stroke diagnosis, to diabetes management, to transplantation, to oncology — everything from breast cancer to brain tumors. And others — I get many cold calls from still other researchers who have heard about this “Artificial Intelligence” field and want to explore whether the technology can help them with their tasks.

Q: How do you see AI playing a role in the fields of oncology, metabolic disease, and neuroscience?

There’s a lot of excitement right now for machine learning (a subfield of Artificial Intelligence) in general, and especially in medicine, largely due to its many recent successes.  These wins are partly because we now have large data sets, including lots of patients — in some cases, thousands, or even millions of individuals, each described using clinical features, and perhaps genomics and metabolomics data, or even neurological information and imaging data. As these are historical patients, we know which of these patients did well with a specific treatment and which ones did not.  

I’m very interested in applying supervised machine learning techniques to find patterns in such datasets, to produce models that can make accurate predictions about future patients. This is very general — this approach can produce models that can be used to diagnose, or screen novel subjects, or to identify the best treatment — across a wide range of diseases.
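
To make this concrete, here is a minimal sketch of that supervised-learning setup. This is not Dr. Greiner's actual code; the features, data, and outcome labels below are all invented for illustration:

```python
# Hypothetical sketch: train on historical patients with known outcomes,
# then predict the outcome for a future patient. All data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# One row per historical patient: e.g. [age, BMI, biomarker_1, biomarker_2]
X = rng.normal(size=(n, 4))
# Known historical outcome: 1 = did well on the treatment, 0 = did not
y = (X[:, 2] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
# For a new patient, the model estimates the probability of a good outcome
print(model.predict_proba(rng.normal(size=(1, 4)))[0, 1])
```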

It’s important to contrast this approach with other ways to analyze such data sets. The field of biostatistics includes many interesting techniques to find “biomarkers” — single features that are correlated with the outcomes — as a way to try to understand the etiology, trying to find the causes of the disease. This is very interesting, very relevant, very useful. But it does not directly lead to models that can decide how to treat Mr. Smith when he comes in with his particular symptoms.  

At a high level: I’m exploring ways to find personalized treatments — identifying the treatment that is best for each individual. These treatment decisions are based on evidence-based models, as they are learned from historical cases — that is, where there is evidence that the model will work effectively.

In more detail, our team has found patterns in neurological imaging, such as functional MRI scans, that determine who has a psychiatric disorder — here, ADHD, autism, schizophrenia, depression, or Alzheimer’s disease.

Another body of work predicts how brain tumors will grow, using standard structural MRI scans of patients. Other projects learn screening models that determine which people have adenoma (from urine metabolites), models that predict which liver patients will most benefit from a liver transplant (from clinical features), or which cancer patients will have cachexia, etc.

Q: How can machine learning be useful in the field of Metabolomics?

Machine learning can be very useful here. Metabolomics has relied on technologies like mass spec and NMR spectroscopy to identify and quantify small molecules in a biofluid (like blood or urine); this previously was done in a very labor-intensive way, by skilled spectroscopists.

My collaborator, Dr. Dave Wishart (here at the University of Alberta), and some of our students have designed tools to automate this process — tools that can effectively identify the molecules present in, say, blood. This means metabolic profiling is now high-throughput and automated, making it relatively easy to produce datasets that include the metabolic profiles from a set of patients, along with their outcomes. Machine learning tools can then use this labeled dataset to produce models for predicting who has a disease, for screening or for diagnosis. This has led to models that can detect cachexia (muscle wasting) and adenoma (with a local company, MTI).

Q: Can you go in to some detail on the work you have done designing algorithms to predict patient-specific survival times?

This is my current passion; I’m very excited about it.

The challenge is building models that can predict the time until an event will happen — for example, given a description of a patient with some specific disease, predict the time until his death (that is, how long he will live). This seems very similar to the task of regression, which also tries to predict a real value for each instance — for example, predicting the price of a house based on its location, the number of rooms, and their sizes. Or, given a description of a kidney patient (age, height, BMI, urine metabolic profile, etc.), predict that patient’s glomerular filtration rate a day later.

Survival prediction looks very similar because both try to predict a number for each instance. For example, I describe a patient by his age, gender, height, and weight, and his genetic information, and metabolic information, and now I want to predict how long until his death — which is a real number.  

The survival analysis task is more challenging due to “censoring”. To explain, consider a 5-year study that began in 1990. Over these five years, many patients passed away, including some who lived for three years, others for 2.7 years, or 4.9 years. But many patients didn’t pass away during these 5 years — which is a good thing… I’m delighted these people haven’t died! But this makes the analysis much harder: for the many patients alive at the end of the study, we know only that they lived at least 5 years, but we don’t know if they lived 5 years and a day or lived 30 years — we don’t know and never will know.

This makes the problem completely different from the standard regression tasks. The tools that work for predicting glomerular filtration rate or for predicting the price of a house just don’t apply here. You have to find other techniques. Fortunately, the field of survival analysis provides many relevant tools. Some tools predict something called “risk”, which gives a number to each patient, with the understanding that the tool is predicting that patients with higher risks will die before those with lower risks. So if Mr A’s risk for cancer is 7.2 and Mr B’s is 6.3 — that is, Mr A has a higher risk — this model predicts that Mr A will die of cancer before Mr B will. But does this mean Mr A will die 3 days before Mr B, or 10 years before? The risk score doesn’t say.
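
As a concrete illustration of censoring, here is a minimal sketch using the lifelines library (assumed installed); the numbers are invented to mirror the 5-year-study example above:

```python
# Toy censored dataset: durations are years observed in the 5-year study;
# event_observed is 1 if the death was observed, 0 if the patient was
# still alive when the study ended (right-censored).
from lifelines import KaplanMeierFitter

durations      = [3.0, 2.7, 4.9, 5.0, 5.0, 1.8, 5.0, 4.2]
event_observed = [1,   1,   1,   0,   0,   1,   0,   1]

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed)

# Censored patients are used correctly: they contribute "lived at least
# this long" instead of being dropped or miscounted as deaths.
print(kmf.survival_function_)   # estimated P(survival > t) over time
print(kmf.predict(4.0))         # estimated probability of being alive at 4 years
```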

Let me give a slightly different way to use this. Recall that Mr A’s risk of dying of cancer is 7.2.  There are many websites that can do “what if” analysis: perhaps if he stops smoking, his risk reduces to 5.1.  This is better, but by how much? Will this add 2 more months to his life, or 20 years? Is this change worth the challenge of not smoking?

Other survival analysis tools predict probabilities — perhaps Ms C’s chance of 5-year disease-free survival is currently 65%, but if she changes her diet in a certain way, this chance goes up to 78%. Of course, she wants to increase her five-year survival. But again, this is not as tangible as learning, “If I continue my current lifestyle then this tool predicts I will develop cancer in 12 years, but if I stop smoking, it goes from 12 to 30 years”. I think this is much more tangible, and hence will be more effective in motivating people to change their lifestyle, versus changing their risk, or their 5-year survival probability.

So my team and I have built a tool that does exactly that, by giving each person his or her individualized survival curve, which shows that person’s expected time to event. I think that will help motivate people to change their lifestyle. In addition, my colleagues and I also applied this to a liver transplant dataset, to produce a model that can determine which patient with end-stage liver failure will benefit the most from a new liver, and so should be added to the waitlist.
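
One standard way to produce such individualized curves is a Cox proportional hazards model. The sketch below (again with lifelines, and with invented covariates and data — not the team's actual tool) shows how a fitted model yields a survival curve, and a “what if” comparison, for one person:

```python
# Hypothetical sketch: fit a Cox model on censored data, then predict an
# individualized survival curve for one patient under two scenarios.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "age":    [50, 63, 41, 70, 58, 45, 66, 52],
    "smoker": [1, 1, 0, 1, 0, 0, 1, 0],
    "T":      [4.1, 1.9, 5.0, 0.8, 3.3, 5.0, 2.5, 4.7],  # years observed
    "E":      [1, 1, 0, 1, 1, 0, 1, 0],                  # 1 = death observed
})

cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")

mr_a         = pd.DataFrame({"age": [55], "smoker": [1]})
mr_a_if_quit = pd.DataFrame({"age": [55], "smoker": [0]})

print(cph.predict_survival_function(mr_a))   # his curve if he keeps smoking
print(cph.predict_median(mr_a_if_quit))      # predicted median survival if he quits
```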

Those examples all deal with time to death, but in general, survival analysis can deal with time to any event. So it can be used to model a patient’s expected time to readmission. Here, we can seek a model that, given a description of a patient being discharged from a hospital, can predict when that patient will be readmitted — e.g., whether she will return to the hospital, for the same problem, soon or not.

Imagine this tool predicted that, given Ms Jones’ current status, if she leaves the hospital today, she will return within a week. But if we keep her one more day and give some specific medications, we then predict her readmission time is 3 years. Here, it’s probably better to keep her that one more day and give one more medication. It will help the patient, and will also reduce costs.

Q: What do you see are the challenges ahead for the healthcare space in adopting machine learning and AI?

There are two questions: what machine learning can do effectively, and what it should do.

The second involves a wide range of topics, including social, political, and legal issues. Can any diagnostician — human or machine — be perfect? If not, what are the tradeoffs? How do we verify the quality of a computer’s predictions? If it makes a mistake, who is accountable? The learning system? Its designer? The data on which it was trained? Under what conditions should a learned system be accepted? … and eventually incorporated into the standard of care? Does the program need to be “convincing”, in the sense of being able to explain its reasoning — that is, explain why it asked for some specific bit of information? … or why it reached a particular conclusion? While I do think about these topics, I am not an expert here.

My interest is more in figuring out what these systems can do — how accurate and comprehensive can they be? This requires getting bigger data sets — which is happening as we speak. And defining the tasks precisely — is the goal to produce a treatment policy that works in Alberta, or that works for any patient, anywhere in the world? This helps determine the diversity of training data that is required, as well as the number of instances. (Hint: building an Alberta-only model is much easier than a universal one.) A related issue is defining exactly what the learned tool should do: In general, the learned performance system will return a “label” for each patient — which might be a diagnosis (e.g., does the patient have ADHD), or a specific treatment (e.g., give an SSRI [that is, a selective serotonin reuptake inhibitor]). Many clinicians assume the goal is a tool that does what they do. That would be great if there were an objective answer, and the doctor were perfect, but this is rarely the case. First, in many situations, there is significant disagreement between clinicians (e.g., some doctors may think that a specific patient has ADHD, while others may disagree) — if so, which clinician should the tool attempt to emulate? It would be better if the label instead were some objective outcome — such as “3-year disease-free survival”, or “progression within 1 year” (where there is an objective measure for “progression”).

This can get more complicated when the label is the best treatment — for example, given a description of the patient, determine whether that patient should get drug-A or drug-B. (That is, the task is prognostic, not diagnostic.)  While it is relatively easy to ask the clinician what she would do, for each patient, recall that clinicians may have different treatment preferences… and those preferences might not lead to the best outcome. This is why we advocate, instead, first defining what “best” means, by having a well-defined objective score for evaluating a patient’s status, post treatment.  We then define the goal of the learned performance system as finding the treatment, for each patient, that optimizes that score.
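
The distinction Dr. Greiner draws — optimize a well-defined outcome score rather than imitate any one clinician — can be sketched in a few lines. A simple (and deliberately naive) version fits one outcome model per treatment on historical data, then recommends whichever treatment has the higher predicted score. Everything below is hypothetical illustration, not the team's method:

```python
# Naive sketch of score-optimizing treatment selection on synthetic data.
# (A real system would also have to handle confounding in who historically
# received which drug; that is ignored here.)
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))              # patient features
treatment = rng.integers(0, 2, size=300)   # historical choice: 0 = drug-A, 1 = drug-B
# Post-treatment objective score (e.g., efficacy minus side-effects)
score = np.where(treatment == 0, X[:, 0], X[:, 1]) + rng.normal(scale=0.3, size=300)

model_a = Ridge().fit(X[treatment == 0], score[treatment == 0])
model_b = Ridge().fit(X[treatment == 1], score[treatment == 1])

def best_treatment(x):
    """Recommend the treatment whose predicted objective score is higher."""
    a = model_a.predict(x[None, :])[0]
    b = model_b.predict(x[None, :])[0]
    return "drug-A" if a >= b else "drug-B"

print(best_treatment(rng.normal(size=3)))
```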

One issue here is articulating this difference, between “doing what I do” versus optimizing an objective function. A follow-up challenge is determining this objective scoring function, as it may involve trading off, say, treatment efficacy with side-effects. Fortunately, clinicians are very smart, and typically get it! We are making inroads.

Of course, after understanding and defining this objective scoring function, there are other challenges — including collecting data from a sufficient number of patients and possibly controls, from the appropriate distributions, then building a model from that data, and validating it, perhaps on another dataset. Fortunately, there are an increasing number of available datasets, covering a wide variety of diseases, with subjects (cases and controls) described with many different types of features (clinical, omics, imaging, etc.). Finally comes the standard machine learning challenge of producing a model from that labeled data. Here, too, the future is bright: there are faster machines, and more importantly, I have many brilliant colleagues developing ingenious new algorithms to deal with many different types of information.

All told, this is a great time to be in this important field!  I’m excited to be a part of it.

Thank you Dr. Greiner!

Learn more at the Alberta Machine Intelligence Institute.

Catalia Health Tries Free Interactive Robots for In-Home Patient Care

A little more than three-and-a-half years ago, Cory Kidd founded Catalia Health based on the work he did at the MIT Media Lab and Boston University Medical Center.

Headquartered in San Francisco, the company’s overarching goal is to improve patient engagement and launch behavior change. But the way it goes about meeting that mission is unique.

Through Catalia Health’s model, each patient is equipped with an interactive robot to put in their home. Named Mabu, the robot learns about each patient and their needs, including medications and treatment circumstances.

Mabu can then have tailored conversations with a patient about their routine and how they’re feeling. The information from those talks securely goes back to the patient’s pharmacist or healthcare provider, giving them an update on the individual’s progress and alerting them if something goes wrong.

Right now, the company is focused on bringing Mabu to patients with congestive heart failure. It is currently working with Kaiser Permanente on that front. But Catalia Health is also doing work on other disease states, such as rheumatoid arthritis and late-stage kidney cancer.

“We’re not replacing a person,” Kidd, the startup’s CEO, said in a recent phone interview. “[Providers have] the ability now to have a lot more insight on all their patients on a much more frequent basis.”

Why use a robot as a means to gather such insight?

Kidd explained: “We get intuitively that face-to-face [interaction] makes a difference. Psychologically, we know what that difference is: We create a stronger relationship and we find the person to be more credible. The robot can literally look someone in the eyes, and we get the psychological effects of face-to-face interaction.”

The robot — and face-to-face interaction — helps keep patients engaged over a long period of time, Kidd added.

As for its business model, Catalia Health works directly with pharma companies and health systems. These organizations pay the startup on a per patient, per month basis. The patient using Mabu doesn’t have to pay.

The company is also currently offering interested heart failure patients a free trial of Mabu. The patient simply has to give Catalia feedback on their experience.

“That’s ongoing and very active right now,” Kidd said of the free trial effort.

In late 2017, the company closed a $4 million seed round, following two previous funding rounds amounting to more than $7.7 million. Ion Pacific led the $4 million round. Khosla Ventures, NewGen Ventures, Abstract Ventures and Tony Ling also participated.

Read the source article at MedCityNews.

Montreal-Toronto AI Startups Have Wide Range of Focus

Includes Healthcare, Biomed, Text Analysis, Legal Research, Image Analysis, Drug Discovery, Education

Canada has made a commitment for many years to the study of AI at universities across the country, and today robust business incubation programs supported by Canada’s provincial and regional governments work to transform research into viable businesses. This AI ecosystem has produced breakthrough research and is attracting top talent and investment by venture capital. Here is a look at a selection of Montreal- and Toronto-based AI startups.

TandemLaunch, Technology Transfer Acceleration

TandemLaunch is a Montreal-based technology transfer acceleration company, founded in 2010, that works with academic researchers to commercialize their technological developments. Founder Helge Seetzen, CEO and General Partner, directs the company’s strategy and operations. TandemLaunch has raised $29.5 million since its founding, according to Crunchbase. The firm has spun out more than 20 companies and has been recognized for supporting women founders.

Seetzen was a successful entrepreneur who co-founded Sunnybrook Technologies and later BrightSide Technologies to commercialize display research developed at the University of British Columbia. BrightSide was sold to Dolby Laboratories for $28 million in 2007.

TandemLaunch provides startups with office space, access to IT infrastructure, shared labs for electronics, mechanical or chemical prototyping, mentoring, hands-on operational support and financing.

Asked by AI Trends to comment, CEO Seetzen said, “TandemLaunch has a long history of building leading AI companies based on technologies from international universities. Example successes include LandR – the world’s largest music production platform – and SportlogiQ which offers AI-driven game analytics for sports. Many younger TandemLaunch companies are at the brink of launching game-changing products onto the market such as Aerial’s AI for motion sensing from Wi-Fi signals which will be released in several countries as a home security solution later this year. With hundreds of AI developers across our portfolio of 20+ companies, TandemLaunch is well positioned to capitalize on AI opportunities of all stripes.”

Other companies in the TandemLaunch portfolio include: Kalepso, focused on blockchain and machine learning; Ora, offering nanotechnology for high-fidelity audio; Wavelite, aiming to increase the lifetime of wireless sensors used in IoT operations; Deeplite, providing an AI-driven optimizer to make deep neural networks faster; Soundskrit, changing how sound is measured using a bio-inspired design; and C2RO, offering a robotic SaaS platform to augment perception and collaboration capabilities of robots.

Learn more at TandemLaunch.

BenchSci for Biomedical Researchers

BenchSci offers an AI-powered search engine for biomedical researchers. Founded in 2015 in Toronto, the company recently raised $8 million in a series A round of funding led by iNovia Capital, with participation including Google’s recently announced Gradient Ventures.

BenchSci uses machine learning to translate both closed- and open-access data into recommendations for specific experiments planned by researchers. The offering aims to speed up studies to help biomedical professionals find reliable antibodies and reduce resource waste.

“Without the use of AI, basic biomedical research is not only challenging, but drug discovery takes much longer and is more expensive,” BenchSci cofounder and CEO Liran Belenzon stated in an account in VentureBeat. “We are applying and developing a number of advanced data science, bioinformatics and machine learning algorithms to solve this problem and accelerate scientific discovery by ending reagent failure.” (A reagent is a substance used to detect or measure a component based on its chemical or biological activity.)

In July 2017, Google announced its new venture fund aimed at early-stage AI startups. In the year since, Gradient Ventures has invested in nine startups including BenchSci, the fund’s first known health tech investment and first outside the US.

“Machine learning is transforming biomedical research,” stated Gradient Ventures founding partner Ankit Jain. “BenchSci’s technology provides a unique value proposition for this market, enabling academic researchers to spend less time searching for antibodies and more time working on their experiments.”

BenchSci told VentureBeat it tripled its headcount last year and plans to add 16 new hires throughout 2018.

Learn more at BenchSci.

Imagia to Personalize Healthcare Solutions

Imagia is an AI healthcare company that fosters collaborative research to accelerate accessible, personalized healthcare.

Founded in 2015 in Montreal, the company in November 2017 acquired Cadens Medical Imaging for an undisclosed amount, to accelerate development of its biomarker discovery processes. Founded in 2008, Cadens develops and markets medical imaging software products designed for oncology, the study of tumors.

“This strategic transaction will significantly accelerate Imagia’s mission of delivering AI-driven accessible personalized healthcare solutions. Augmenting Imagia’s deep learning expertise with Cadens’ capabilities in clinical AI and imaging was extremely compelling, to ensure our path from validation to commercialization,” stated Imagia CEO Frederic Francis in a press release. “This is particularly true for our initial focus on developing oncology biomarkers that can improve cancer care by predicting a patient’s disease progression and treatment response.”

Imagia co-founder and CTO Florent Chandelier said, “Our combined team will build upon the long-term outlook of clinical research together with healthcare partnerships, and the energy and focus of a technology startup with privileged access to deep learning expertise and academic research from Yoshua Bengio’s MILA lab. We are now uniquely positioned to deliver AI-driven solutions across the healthcare ecosystem.”

In prepared remarks, Imagia board chair Jean-Francois Pariseau stated, “Imaging evolved considerably in the past decade in terms of sequence acquisition as well as image quality. We believe AI enables the creation of next generation diagnostics that will also allow personalization of care. The acquisition of Cadens is an important step in building the Imagia platform and supports our strategy of investing in ground breaking companies with the potential to become world leaders in their field.”

Learn more at Imagia.

Ross Intelligence: Where AI Meets Legal Research

Ross Intelligence is where AI meets legal research. The firm was founded in 2015 by Andrew Arruda, Jimoh Ovbiagele and Pargies Dall’Oglio, machine learning researchers from the University of Toronto. Ross, headquartered in San Francisco, in October 2017 announced an $8.7 million Series A investment round led by iNovia Capital, seeing an opportunity to compete with the legal research firms LexisNexis and Thomson Reuters.

The platform helps legal teams sort through case law to find details relevant to new cases. Using standard keyword search, the process takes days or weeks. With machine learning, Ross aims to augment the keyword search, speed up the process and improve the relevancy of terms found.
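
Ross’s actual stack is proprietary deep learning plus IBM Watson NLP, but the core retrieval idea — rank documents by similarity to a natural-language query instead of exact keyword hits — can be illustrated with a toy TF-IDF sketch. All case texts below are invented:

```python
# Toy relevance ranking: score invented case summaries against a query.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

cases = [
    "Tenant sued landlord for breach of the implied warranty of habitability.",
    "Employer terminated employee in violation of public policy.",
    "Court found landlord liable for failure to repair a heating system.",
]
query = ["Is a landlord responsible for a broken furnace?"]

vec = TfidfVectorizer(stop_words="english")
case_vectors = vec.fit_transform(cases)
scores = cosine_similarity(vec.transform(query), case_vectors)[0]

for score, text in sorted(zip(scores, cases), reverse=True):
    print(f"{score:.2f}  {text}")
# TF-IDF only matches shared terms ("landlord" here); Ross's learned models
# aim further, e.g., relating "furnace" to "heating system" semantically.
```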

“Bluehill [Research] benchmarks Lexis’s tech and they are finding 30 percent more relevant info with Ross in less time,” stated Andrew Arruda, co-founder and CEO of Ross, in an interview with TechCrunch.

Ross uses a combination of off-the-shelf and proprietary deep learning algorithms for its AI stack. The firm is using IBM Watson for some of its natural language processing as well. To build training data, Ross is working with 20 law firms to simulate workflow examples and test results.

Ross has raised a total of $13.1 million in four rounds of financing, according to Crunchbase.

The firm recently hired Scott Sperling, former head of sales at WeWork, as VP of sales. In January, Ross announced its new EVA product, a brief analyzer with some of the power of the commercial version. Ross is giving it away for free to seed the market. The tool can check the recent history related to cited cases and determine if they are still good law, in a manner similar to that of LexisNexis Shepard’s and Thomson Reuters KeyCite, according to an account in LawSites.

EVA’s coverage of cases includes all US federal and state courts, across all practice areas. “With EVA, we want to provide a small taste of Ross in a practical application, which is why we are releasing it completely free,” Arruda told LawSites. “We’re deploying a completely new way to doing research with AI at its core. And because it is based on machine learning, it gets smarter every day.”

For more information, go to Ross Intelligence.

Phenomic AI Uses Deep Learning to Assist Drug Discovery

Phenomic AI is developing deep learning solutions to accelerate drug discovery. The company was founded in Toronto in June 2017 by Oren Kraus, from the University of Toronto, and Sam Cooper, a graduate of the Institute of Cancer Research in London. The aim is to use machine learning algorithms to help scientists studying image screenings learn which cells are resistant to chemotherapy, thus fighting the recurrence of cancer in many patients. The AI enables the software to comb through thousands of cell-culture images to identify the cells that are chemo-resistant.

Phenomic AI founders Oren Kraus, left, and Sam Cooper.

“My PhD at U of T was looking at developing deep-learning techniques to automate the process of analyzing images of cells, so I wanted to create a company looking at this issue,” stated Kraus in an account in StartUp Here Toronto.  “There are key underlying mechanisms that allow cancer cells to survive in the first place. If we can target those underlying mechanisms that prevent cancer coming back in entire groups of patients, that’s what we’re going for.”

Cooper is working towards his PhD with the department of Computational Medicine at Imperial College, London, and also with the Dynamical Cell Systems team at the Institute of Cancer Research. His research focuses on developing deep and reinforcement learning solutions for pharmaceutical research.

An early research partner of Phenomic AI is the Toronto Hospital for Sick Children, in a project to study a hereditary childhood disease.

The company has raised $1.5 million in two funding rounds, according to Crunchbase.

Learn more at Phenomic AI.

Erudite.ai Aims at Peer Tutoring

Erudite.ai is marketing ERI, a product that aims to connect a student who needs help on a subject with a peer who has shown expertise in the same subject. The company was founded in 2016 in Montreal and has raised $1.1 million to date, according to Crunchbase. The firm uses an AI system to analyze the content of conversations and specific issues the student faces. From that, it generates personalized responses for the peer-tutor. ERI is offered free to students and schools.

Erudite.ai is competing for the IBM Watson AI XPrize, having been named among the top 10 teams announced in December, from 150 entrants competing for $5 million in prize money. President and founder Patrick Poirier was quoted in The Financial Post on the market opportunity: “Tutoring is very efficient at helping people improve their grades. It’s a US$56 billion market. But at $40 an hour, it’s very expensive.” Erudite.ai is giving away its product, for now. The plan is to go live in September and host 200,000 students by year-end. By mid-2019, the company plans to sell a version of the platform to commercial tutoring firms, to help them speed teaching time and reduce costs.

The company hopes to extend beyond algebra to geometry, then the sciences, in two years. “The AI will continue to improve,” states Poirier. “In five years, I hope we will be helping 50 million people.”

Learn more at Erudite.ai.

Keatext Comprehends Customer Communication Text

Keatext’s AI platform interprets customers’ written feedback across various channels to highlight recommendations aimed at improving the customer experience. The firm’s product is said to enable organizations to audit customer satisfaction, identify new trends, and keep track of the impact of actions or events affecting the clients. Keatext’s technology aims to mimic human comprehension of text to deliver reports to help managers make decisions.

Keatext Team, founder Narjes Boufaden in foreground.

The company was founded in 2010 in Montreal by Narjes Boufaden, first as a professional services company. From working with clients, the founder identified a gap in the text analytics industry she felt the firm could address. In 2014, Keatext began offering a SaaS product.

Boufaden holds an engineering degree in computer science and a PhD in natural language processing, earned under the supervision of Yoshua Bengio and Guy Lapalme. Her expertise is in developing algorithms to analyze human conversations. She has published many articles on NLP, machine learning, and text mining from conversational texts.

Keatext in April announced a new round of funding, adding CA$1.72 million to support commercial expansion, bringing the company’s funding total to CA$3.32 million since launching its platform two years ago. “This funding will help us gain visibility on a wider scale as well as to consolidate our technological edge,” stated Boufaden in a press release. “Internet and intranet communication allows organizations to hold ongoing conversations with the people they serve. This gives them access to an enormous amount of potentially valuable information. Natural language understanding and deep learning are the keys to tapping into this information and revealing how to better serve their audiences.”

Learn more at Keatext.

Dataperformers in Applied AI Research

Founded in 2013 in Montreal, Dataperformers is an applied research company that works on advanced AI technologies. The company has attracted top AI researchers and engineers to work on Deep Learning models to enable E-commerce and FinTech business uses.

Calling Dataperformers “science-as-a-service,” co-founder and CEO Mehdi Merai stated, “We are a company that solves problems through applied research work in artificial intelligence,” in an article in the Montreal Gazette. Among the first clients is Desjardins Group, an association of credit unions using the service to analyze large data volumes, hoping to discover hidden patterns and trends.

Dataperformers is also working on a search engine for video called SpecterNet, which combines AI and computer vision to find specific content. Companies could use the search engine to identify videos where their products appear, then market the product to the video’s audience. The company is using reinforcement learning to help the video-search AI learn on its own.

Learn more at Dataperformers.

Botler.ai Bot Helps Determine Sexual Harassment

Botler.ai was founded in January 2018 by Ritika Dutt, COO, and Amir Moravej, CEO, as a service to help victims of sexual harassment determine whether they have been violated. The bot was created following harassment experienced by cofounder Dutt.

Left to right: Cofounders Amir Moravej and Ritika Dutt with advisor Yoshua Bengio. Photo by Eva Blue

She was unsure how to react after the experience, but once she researched the legal code, she gained confidence. “It wasn’t just me making things up in my head. There was a legal basis for the things I was feeling, and I was justified in feeling uncomfortable,” she stated in an account in VentureBeat.

The bot uses natural language processing to determine whether an incident could be classified as sexual harassment. The bot learned from 300,000 court cases in Canada and the US, drawing on testimony from court filings, since testimony aligns most closely with conversational tone. The bot can generate an incident report.
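
The underlying task — classify a free-text incident description — can be pictured with a toy text classifier. The snippet below is a hypothetical stand-in, with a few invented sentences instead of the 300,000 court cases and a deliberately simple model:

```python
# Toy incident-description classifier; all data and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "He repeatedly made unwanted sexual comments at work",
    "My manager asked me to finish the report by Friday",
    "She touched me without consent despite my objections",
    "The meeting was rescheduled to next Tuesday",
]
labels = [1, 0, 1, 0]  # 1 = plausibly sexual harassment, 0 = not

clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)
# Probability that a new description falls in the harassment class
print(clf.predict_proba(["He kept sending me explicit messages"])[0, 1])
```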

This is Botler.ai’s second product, following a bot made last year to help people navigate the Canadian immigration system.

Yoshua Bengio of MILA is an advisor to the startup.

Next in AI in Canada series: AI in Edmonton

 

By John P. Desmond, AI Trends Editor

 

MIT Researchers Pushing Machine Learning to Speed Drug Development

Designing new molecules for pharmaceuticals is primarily a manual, time-consuming process that’s prone to error. But MIT researchers have now taken a step toward fully automating the design process, which could drastically speed things up — and produce better results.

Drug discovery relies on lead optimization. In this process, chemists select a target (“lead”) molecule with known potential to interact with a specific biological target, then tweak its chemical properties for higher potency and other factors.

Chemists use expert knowledge and conduct manual tweaking of the structure of molecules, adding and subtracting functional groups — groups of atoms and bonds with specific properties. Even when they use systems that predict optimal desired properties, chemists still need to do each modification step themselves. This can take a significant amount of time at each step and still not produce molecules with desired properties.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Department of Electrical Engineering and Computer Science (EECS) have developed a model that better selects lead molecule candidates based on desired properties. It also modifies the molecular structure needed to achieve a higher potency, while ensuring the molecule is still chemically valid.

The model basically takes as input molecular structure data and directly creates molecular graphs — detailed representations of a molecular structure, with nodes representing atoms and edges representing bonds. It breaks those graphs down into smaller clusters of valid functional groups that it uses as “building blocks” that help it more accurately reconstruct and better modify molecules.
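
For readers unfamiliar with the representation, the sketch below shows what “nodes representing atoms and edges representing bonds” looks like in code, using RDKit (assumed installed). The junction-tree decomposition into valid substructures that the MIT model performs goes well beyond this toy example:

```python
# Build the molecular graph of aspirin: atoms as nodes, bonds as edges.
from rdkit import Chem

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin

nodes = [(atom.GetIdx(), atom.GetSymbol()) for atom in mol.GetAtoms()]
edges = [(bond.GetBeginAtomIdx(), bond.GetEndAtomIdx(), str(bond.GetBondType()))
         for bond in mol.GetBonds()]

print(nodes)  # [(0, 'C'), (1, 'C'), (2, 'O'), ...]
print(edges)  # [(0, 1, 'SINGLE'), (1, 2, 'DOUBLE'), ...]
```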

“The motivation behind this was to replace the inefficient human modification process of designing molecules with automated iteration and assure the validity of the molecules we generate,” says Wengong Jin, a PhD student in CSAIL and lead author of a paper describing the model that’s being presented at the 2018 International Conference on Machine Learning in July.

Joining Jin on the paper are Regina Barzilay, the Delta Electronics Professor at CSAIL and EECS, and Tommi S. Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science in CSAIL, EECS, and at the Institute for Data, Systems, and Society.

The research was conducted as part of the Machine Learning for Pharmaceutical Discovery and Synthesis Consortium between MIT and eight pharmaceutical companies, announced in May. The consortium identified lead optimization as one key challenge in drug discovery.

“Today, it’s really a craft, which requires a lot of skilled chemists to succeed, and that’s what we want to improve,” Barzilay says. “The next step is to take this technology from academia to use on real pharmaceutical design cases, and demonstrate that it can assist human chemists in doing their work, which can be challenging.”

“Automating the process also presents new machine-learning challenges,” Jaakkola says. “Learning to relate, modify, and generate molecular graphs drives new technical ideas and methods.”

Read the source article at MIT News.

VA, IBM Extend Watson For Genomics Partnership

The US Department of Veterans Affairs (VA) and IBM Watson Health have announced the extension of a public-private partnership to apply artificial intelligence (AI) to help interpret cancer data in the treatment of Veteran patients.

First announced two years ago as part of the National Cancer Moonshot Initiative, VA oncologists have now used IBM Watson for Genomics technology to support precision oncology care for more than 2,700 Veterans with cancer.

VA’s precision oncology program primarily supports stage 4 cancer patients who have exhausted other treatment options. The partnership extension with IBM will enable VA oncologists to continue using Watson for Genomics through at least June 2019.

“Our mission with VA’s precision oncology program is to bring the most advanced treatment opportunities to Veterans, in hopes of giving our nation’s heroes better treatments through these breakthroughs,” said Acting VA Secretary Peter O’Rourke in a press release. “We look forward to continuing this strategic partnership to assist VA in providing the best care for our Veterans.”

VA treats 3.5% of the nation’s cancer patients — the largest group of cancer patients within any one health-care group. In order to bring precision oncology advances to this large group of patients, with equal access available anywhere in the country, VA established a central “hub” in Durham, North Carolina.

In this facility, a small group of oncologists and pathologists receive tumor samples from patients nationwide and sequence the tumor DNA. They then use AI to help interpret the genomic data, identifying relevant mutations and potential therapeutic options that target those mutations.

More than one-third of the patients who have benefited from VA’s precision oncology program are Veterans from rural areas where it has traditionally been difficult to deliver cutting-edge medical breakthroughs.

“VA is leading the nation to scale and spread the delivery of high quality precision oncology care, one Veteran at a time,” said Dr. Kyu Rhee, chief health officer for IBM Watson Health, in the same statement. “It is incredibly challenging to read, understand and stay up-to-date with the breadth and depth of medical literature and link them to relevant mutations for personalized cancer treatments. This is where AI can play an important role in helping to scale precision oncology, as demonstrated in our work with VA, the largest integrated health system in the US.”

To learn more, go to IBM.

3 Companies Using AI to Forge New Advances in Healthcare

When you think of artificial intelligence (AI), you might not immediately think of the healthcare sector.

However, that would be a mistake. AI has the potential to do everything from predicting readmissions, cutting human error and managing epidemics to assisting surgeons to carry out complex operations.

Here we take a closer look at three intriguing stocks using AI to forge new advances in treating and tackling disease. To pinpoint these three stocks, we used TipRanks’ data to scan for ‘Strong Buy’ stocks in the healthcare sector. These are stocks with substantial Street support, based on ratings from the last three months. We then singled out stocks making important headways in AI and machine learning.

BioXcel Therapeutics Inc.

This exciting clinical stage biopharma is certainly unique. BioXcel (BTAI) applies AI and big data technologies to identify the next wave of neuroscience and immuno-oncology medicines. According to BTAI this approach uses “existing approved drugs and/or clinically validated product candidates together with big data and proprietary machine learning algorithms to identify new therapeutic indices.”

The advantage is twofold: “The potential to reduce the cost and time of drug development in diseases with substantial unmet medical need,” says BioXcel. Indeed, we are talking about shaving $50–100 million off the cost (over $2 billion) typically associated with the development of novel drugs. Right now, BioXcel has several therapies in its pipeline, including BXCL501 for prostate and pancreatic cancer. And it seems the Street approves. The stock has received five buy ratings in the last three months with an average price target of $20.40 (115% upside potential).

“Unlocking efficiency in drug development” is how H.C. Wainwright analyst Ram Selvaraju describes BioXcel’s drug repurposing and repositioning. “The approach BioXcel Therapeutics is taking has been validated in recent years by the advent of several repurposed products that have gone on to become blockbuster franchises (>$1 billion in annual sales).” However, he adds that “we are not currently aware of many other firms that are utilizing a systematic AI-based approach to drug development, and certainly none with the benefit of the prior track record that BioXcel Therapeutics’ parent company, BioXcel Corp., possesses.”

Microsoft Corp.

Software giant Microsoft believes that we will soon live in a world infused with artificial intelligence. This includes healthcare.

According to Eric Horvitz, head of Microsoft Research’s Global Labs, “AI-based applications could improve health outcomes and the quality of life for millions of people in the coming years.” So it’s not surprising that Microsoft is seeking to stay ahead of the curve with its own Healthcare NExT initiative, launched in 2017. The goal of Healthcare NExT is to accelerate healthcare innovation through artificial intelligence and cloud computing. This already encompasses a number of promising solutions, projects and AI accelerators.

Take Project EmpowerMD, a research collaboration with UPMC. The purpose here is to use AI to create a system that listens and learns from what doctors say and do, dramatically reducing the burden of note-taking for physicians. According to Microsoft, “The goal is to allow physicians to spend more face-to-face time with patients, by bringing together many services from Microsoft’s Intelligent Cloud including Custom Speech Services (CSS) and Language Understanding Intelligent Services (LUIS), customized for the medical domain.”

On the other end of the scale, Microsoft is also employing AI for genome mapping (alongside St Jude Children’s Research Hospital) and disease diagnostics. Most notably, Microsoft recently partnered with one of the largest health systems in India, Apollo Hospitals, to create the AI Network for Healthcare. Microsoft explains: “Together, we will be developing and deploying new machine learning models to gauge patient risk for heart disease in hopes of preventing or reversing these life-threatening conditions.”

Globus Medical Inc.

This medical device company is pioneering minimally invasive surgery, including with the assistance of the ExcelsiusGPS robot. Globus Medical describes how the Excelsius combines the benefits of navigation, imagery and robotics into a single technology. And the future possibilities are even more exciting.

According to top Canaccord Genuity analyst Kyle Rose, there are multiple growth opportunities for GMED. He explains: “Currently, ExcelsiusGPS supports the placement of nails and screws in both trauma and spine cases, and we expect Globus to leverage the platform for broader orthopedic indications in future years.” Encouragingly, Rose notes that management has already received positive early feedback and robust demand for the medical robot.

Indeed, in the first quarter Globus reported placing 13 robots vs. Rose’s estimate of just 5. This extra success translated to ~$7.8 million in upside relative to his estimates. On the earnings call, Globus reiterated its long-term vision for ExcelsiusGPS as a robotic platform with far more advanced capabilities. This could even include using augmented reality to construct a 3D view of the patient’s external and internal anatomy.

Read the source article in TheStreet.

Accenture: Most Health Organizations Can’t Ensure Responsible AI Use

Despite a growing interest in artificial intelligence, most healthcare organizations still lack the tools necessary to ensure responsible use of such technologies, finds a report from Accenture Health.

According to the report, Digital Health Technology Vision 2018, 81% of healthcare executives said they are not yet prepared to face the societal and liability issues needed to explain their AI systems’ decisions. Additionally, while 86% of respondents said that their organizations are using data to drive automated decision-making, the same proportion (86%) report they have not invested in the capabilities needed to verify data sources across their most critical systems.

Kaveh Safavi, head of Accenture’s health practice, observed that the current lack of AI data verification investment activity is exposing healthcare organizations to inaccurate, manipulated and biased data that can lead to corrupted insights and skewed results. “The 86% figure is critical,” he stated, “given that 24% of executives also said that they have been the target of adversarial AI behaviors, such as falsified location data or bot fraud on more than one occasion.” On a positive note, the study found that 73% of respondents plan to develop internal ethical standards for AI to ensure that their systems act responsibly.

Kaveh Safavi of Accenture

As a growing number of AI-powered healthcare tools enter the market, hospitals, clinics and other healthcare organizations are using intelligent technologies in various ways to become more agile, productive and collaborative. “Until recently, AI was mainly used as a back-end tool but is increasingly becoming part of the everyday consumer and clinician experience,” Safavi noted.

AI’s ability to sense, understand, act and learn enables it to augment human activity by supporting, or even taking over, tasks that bridge administrative and clinical healthcare functions — from risk analysis to medical imaging to supporting human judgment. “In terms of value and cost-savings, there are many ways in which AI can improve and change healthcare,” Safavi observed. Accenture estimates that key clinical health AI applications can create $150 billion in annual savings for the U.S. healthcare economy by 2026.

Moving toward greater compliance

Healthcare organizations recognize the value of AI not only for its potential cost savings, but also for its ability to tackle entrenched issues related to sustainability and access. Adopters are also hoping that IT will help them address the growing healthcare workforce shortage and the increasing dissatisfaction of healthcare consumers. “AI can help increase productivity and personalization in healthcare in ways that few other technologies can,” Safavi explained.

Ultimately, healthcare organizations will need to turn to AI-powered automation to improve a wide range of services. “Because of this, the market will help drive compliance over time as trust both from consumers and clinicians is the only way to truly foster adoption,” Safavi predicted.

Read the source article in Information Week.

Using Deep Learning to Defeat Aging is Mission of Insilico Medicine

Every once in a while, you meet an entrepreneur who is both fully present, but also has a head full of dreams. That was my experience meeting and hosting Alex Zhavoronkov, the founder and CEO of Insilico Medicine, a few weeks ago in Vienna at the Pioneers conference. There, he gave a presentation on how he is going to defeat aging using a set of deep learning AI tools, and also told me that I am going to live forever because I am young enough to benefit from the tech he is developing.

Insilico Medicine founder and CEO Alex Zhavoronkov

I am a huge skeptic to be frank (particularly anytime deep learning gets bandied about), but after chatting with him both before and after getting on stage, I can’t preclude the possibility that aging is something that might be within humanity’s (or at least Zhavoronkov’s) grasp to control.

That belief in the company’s mission is reflected in a recent set of twin announcements. The company announced that it has received a strategic round of financing led by WuXi AppTec, a Chinese integrated R&D services platform, along with Peter Diamandis’ BOLD Capital and Pavilion Capital, a subsidiary of Singapore-based Temasek. In addition, the company announced a strategic partnership with WuXi, in which Insilico’s inventions will be tested by WuXi. The terms of the round were not disclosed, but Insilico has raised $14 million previously from investors according to Crunchbase.

In order to understand the company’s technology, we need to understand a bit more about how therapeutics are developed. In the classical model used by pharmaceutical companies, scientists in an R&D lab investigate naturally occurring molecules while searching for potential therapeutic properties. When they find a molecule that could be a candidate, they begin a series of tests to determine the treatment efficacy of the molecules (and also to receive FDA approval).

Rather than going forward through the process, Insilico works backwards. The company starts with an end objective — say, stopping aging — and then uses a toolbox of deep learning algorithms to devise ideal molecules de novo. Those molecules may not exist anywhere in the world, but can be “manufactured” in the lab.

The key underlying technique for the company is the GAN — a generative adversarial network — combined with reinforcement learning. At a high level, a GAN includes a neural net “generator” that creates new products (in this case, molecules), and a “discriminator” that classifies each new product. Those neural nets then adapt over time in order to compete against each other more effectively.
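
To make the generator/discriminator dynamic concrete, here is a minimal toy GAN in PyTorch — fitting a simple 1-D distribution, not molecules, and not Insilico's actual system:

```python
# Minimal GAN: G learns to produce samples resembling N(3, 0.5);
# D learns to distinguish real samples from generated ones.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data
    fake = G(torch.randn(64, 8))            # generated data

    # Discriminator step: push real -> 1, fake -> 0
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make D label fresh fakes as real
    g_loss = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(5, 8)).detach().squeeze())  # samples should center near 3
```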

GANs have been used to create fake photos that look almost photorealistic, but that no camera has ever taken. Zhavoronkov suggested to me that clinical patient data may one day be manufactured — providing far more data while protecting patient privacy.

While Zhavoronkov has bold dreams about conquering aging, today the company is focused more broadly on creating an inventory of new molecules that could provide new therapeutics, albeit particularly focused on longevity. Under the company’s new strategic partnership, WuXi will then take those new molecules and test them for efficacy in actual clinical settings.

Read the source article at TechCrunch.

Voice Assistant for Doctors Coming from Suki, with $20M in Funding

When trying to figure out what to do after an extensive career at Google, Motorola, and Flipkart, Punit Soni decided to spend a lot of time sitting in doctors’ offices to figure out what to do next.

It was there that Soni said he figured out one of the most annoying pain points for doctors in any office: writing down notes and documentation. That’s why he decided to start Suki — previously Robin AI — to create a way for doctors to simply talk aloud to take notes when working with patients, rather than having to put everything into a medical record system, or even write those notes down by hand. That seemed like the lowest-hanging fruit, offering an opportunity to make life significantly easier for doctors who see dozens of patients, he said.

“We decided we had found a powerful constituency who were burning out because of just documentation,” Soni said. “They have underlying EMR systems that are much older in design. The solution aligns with the commoditization of voice and machine learning. If you put it all together, if we can build a system for doctors and allow doctors to use it in a relatively easy way, they’ll use it to document all the interactions they do with patients. If you have access to all data right from a horse’s mouth, you can use that to solve all the other problems on the health stack.”

The company said it has raised a $15 million funding round led by Venrock, with First Round, Social+Capital, Nat Turner of Flatiron Health, Marc Benioff, and other individual Googlers and angels. Venrock also previously led a $5 million seed financing round, bringing the company’s total funding to around $20 million. It’s also changing its name from Robin AI to Suki, though the reason is actually a pretty simple one: “Suki” is a better wake word for a voice assistant than “Robin” because odds are there’s someone named Robin in the office.

The challenge for a company like Suki is not actually the voice recognition part. Indeed, that’s why Soni said they are actually starting a company like this today: voice recognition is commoditized. Trying to start a company like Suki four years ago would have meant having to build that kind of technology from scratch, but thanks to incredible advances in machine learning over just the past few years, startups can quickly move on to the core business problems they hope to solve rather than focusing on early technical challenges.

Instead, Suki’s problem is one of understanding language. It has to ingest everything that a doctor is saying, parse it, and figure out what goes where in a patient’s documentation. That problem is even more complex because each doctor has a different way of documenting their work with a patient, meaning it has to take extra care in building a system that can scale to any number of doctors. As with any company, the more data it collects over time, the better those results get — and the more defensible the business becomes, because it can be the best product.
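
One toy way to picture the “what goes where” problem is a rule-based router that assigns each dictated sentence to a section of the note. Suki’s real system uses learned language understanding rather than keyword rules; everything below, including the section names and cue words, is hypothetical illustration:

```python
# Deliberately simple, rule-based stand-in for routing dictation into a note.
import re

SECTION_CUES = {
    "symptoms":    r"\b(complain\w*|pain|fever|cough|nausea)\b",
    "medications": r"\b(prescrib\w*|mg|refill|dose)\b",
    "plan":        r"\b(follow.?up|schedule|order|refer\w*)\b",
}

def route_to_note(dictation: str) -> dict:
    """Assign each sentence to the note sections whose cue words it contains."""
    note = {section: [] for section in SECTION_CUES}
    for sentence in re.split(r"(?<=[.!?])\s+", dictation.strip()):
        for section, pattern in SECTION_CUES.items():
            if re.search(pattern, sentence, flags=re.IGNORECASE):
                note[section].append(sentence)
    return note

print(route_to_note(
    "Patient complains of chest pain. Prescribed aspirin 81 mg daily. "
    "Schedule a follow-up in two weeks."
))
```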

Read the source article at TechCrunch.