AI and the Org Chart: As Business Deploys AI, “Work Architecture” Needs a Redesign

As editor of AI Trends, I am researching the impact of AI on how companies are organized to do work. I am interested in new job descriptions around data science, big data, machine learning, digital knowledge, AI interaction, natural language processing and others not mentioned here but that you might be involved in. I would like to describe your experience, what brought it about, what your organization is trying to achieve with AI. Please email me. I will respond and start the discussion, respecting all requirements of your organization for outside communication, and always with your permission for what gets published. My hope is to create a guide for AI and business professionals navigating this new and evolving field.

Some 65% of children entering primary school today will have jobs that do not now exist, according to one estimate. To understand which jobs are up and coming, and what skills are needed to succeed in them, LinkedIn studied five years of data to spot trends.

Among the key findings:

  • Machine learning engineers, data scientists and big data engineers were among the top emerging jobs, with companies in a wide range of industries seeking those skills.
  • Talent is scarce. Data Scientist roles have increased 650 percent since 2012, yet only about 35,000 people in the US are currently said to have data science skills. The supply of candidates for these roles cannot keep up with demand from the companies hiring.
  • Many of the emerging needed skills did not exist five years ago; many professionals are not confident that their current skill set will still be relevant in one to two years.
  • Software engineers are feeding into all the technology-related professions.

Here are some strong-growth titles from the LinkedIn study, each followed by the roles professionals most commonly held before moving into it:

Machine Learning Engineer

  1. Software Engineer
  2. Research Assistant
  3. Teaching Assistant
  4. Data Scientist
  5. System Engineer

Data Scientist

  1. Research Assistant
  2. Teaching Assistant
  3. Software Engineer
  4. Data Scientist
  5. Business Analyst

Big Data Developer

  1. Software Engineer
  2. Hadoop Developer
  3. System Engineer
  4. Java Engineer
  5. ETL Developer

AI is Seen Adding More Jobs Than Lost

The emergence of AI in the organization is seen as adding more jobs than it eliminates, according to attendees at the EmTech Digital Conference, held by Ernst & Young and MIT Technology Review in the spring of 2018.

While many companies are striving to implement AI on projects, few have tied AI into the overall business strategy.  A basic notion is that AI will free people to do more interesting work.

Jeff Wong, EY Global Chief Innovation Officer, said in an article in MIT Technology Review,  “As businesses deploy AI strategies,  they’re increasingly aware of how the roles, responsibilities and skills of their talent is changing.  With AI taking a leading role on tackling organizations’ simple and repetitive tasks, the human workforce can focus more on complex work that ultimately provides a greater level of professional fulfilment to employees and a more efficient use of critical thinking power.”

Asked if AI is being used currently in their organizations, most respondents said AI is being piloted in one or more areas but there was no overall enterprise AI strategy. The next cluster reported that AI is currently not a strategic priority.

Chris Mazzei, EY Chief Data & Analytics Officer and Global Innovation Technologies Leader, stated, “While we’re seeing momentum in businesses deploying AI more strategically across the enterprise, its application is often fragmented across business functions, leaving much of the potential untapped.”

When asked for the top three desired business outcomes from the application of AI, respondents answered: to improve and/or develop new products and services; to achieve cost efficiencies and streamline business operations; and to accelerate decision-making.

Chris Mazzei added: “AI technologies have been proven to streamline operations and speed-up internal processes. However, businesses should think more holistically about the competitive advantages that can be reaped from thoughtful applications of AI in product and service development, sales enablement, enhancing customer experience, or capturing business intelligence that helps impact the bottom line.”

The talent shortage is holding things back. “Despite AI’s potential to drive transformational change, adoption continues to be hampered by a shortage of talent,” stated Nigel Duffy, EY Global Innovation Artificial Intelligence Leader. “Businesses must invest in and create a culture of continuous learning that comprises skills programs, training sessions, and research partnerships to attract and retain leading AI practitioners.”

Businesses are aware they need to diversify their AI talent pools to try to prevent bias in results.

Jeff Wong stated, “There is a correlation between the continued lack of diverse AI talent and the distortions being found in some machine-learning outcomes. To mitigate this, businesses need to look for a wide variety of talent to ensure a diversity of experience, and social and professional perspectives are integrated at the coding stage.”

AI on the March, with Humans in the Loop

The 2018 Global Human Capital Trends report from Deloitte Insights found that the influx of AI, robotics, and automation into the workplace has dramatically accelerated in the last year, and “uniquely human” skills and roles were found to be critically important. Skills seen to be in high demand in the future included complex problem-solving (63 percent), cognitive abilities (55 percent), and social skills (52 percent).

Reinforcing this view, a recent World Economic Forum study found that the top 10 skills for the next decade include essential human skills such as critical thinking, creativity, and people management.

To maximize the potential value of these new technologies today and minimize the potential adverse impacts on the workforce, organizations must put “humans in the loop” —reconstructing work, retraining people, and rearranging the organization. The greatest opportunity is not just to redesign jobs or automate routine work, but to fundamentally rethink “work architecture” to benefit organizations, teams, and individuals.

The Deloitte study found a “readiness gap”: 72 percent of respondents see AI as important, but only 31 percent report being ready to address it.

Leading companies are recognizing that the technologies are more effective when used to complement and not replace humans. Manufacturers including Airbus and Nissan are finding ways to use collaborative robots, or “co-bots,” that work side by side with workers in factories.

An algorithm is only as effective as “the quantity and quality of the training data to get [it] going,” stated Lukas Biewald, CEO of CrowdFlower, a startup that provides algorithm trainers. This realization has given rise to new jobs with titles such as “bot trainer,” “bot farmer,” and “bot curator.”

Tell the Humans They are Not Fired

As AI technology is introduced and deployed, the workforce needs new skills to exploit the new technologies. “Work architecture” needs to be redesigned. Work needs to be decomposed into its fundamental components – for example, production, problem-solving, communication and supervision – and new ways of combining humans and technology need to be defined.

Despite this recognition, the Deloitte study found companies are slow to develop the needed human skills of the future. Some 49 percent of respondents said they do not have a plan to cultivate them. “We see this as an urgent human capital challenge requiring top executive support to transform organizational structures, cultures, career options and performance management practices,” the report stated.

Further, “Absent a thoughtful approach, organizations may not only risk failing to identify the skills they need to take effective advantage of technology, but also suffer damage to their employee and corporate brand due to perceptions around (real or supposed) workforce reductions.”

The integration of early AI tools is also causing organizations to become more collaborative and team-oriented, to move away from traditional top-down hierarchical structures, according to an account in Fast Company.

“To integrate AI, you have to have an internal team of expert product people and engineers that know its application and are working very closely with the frontline teams that are actually delivering services,” stated Ian Crosby, co-founder and CEO of Bench, a digital bookkeeping provider. “When we are working AI into our frontline service, we don’t go away to a dark room and come back after a year with our masterpiece. We work with our frontline bookkeepers day in, day out.”

Org Charts Moving Away from Top-Down, Towards Teams

The Deloitte survey also found organizations are moving away from a top-down structure and toward multidisciplinary teams. Some 32% of respondents said they are redesigning their organizations to be more team-centric, optimizing them for adaptability and learning in preparation for technological disruption.

Finding a balanced team structure, however, doesn’t happen overnight, Crosby suggested. In large organizations, “It’s better to start with a small team first, and let them evolve and scale up, rather than try to introduce the whole company all at once.”

Crosby adds that Bench’s eagerness to integrate new technologies also determines the skills the company seeks in recruiting and hiring. Beyond checking the boxes of the job’s technical requirements, he says the company looks for candidates that are ready to adapt to the changes that are coming.

“When you’re working with AI, you’re building things that nobody has ever built before, and nobody knows how that will look yet,” he said. “If they’re not open to being completely wrong, and having the humility to say they were wrong, we need to reevaluate.”

Where to Start

When building something never built before, where does one start? “This is one of those instances where getting started is more important than where to start,” suggests Trent Weier, a senior director with SAP who works with customers on projects, writing in Digitalist Magazine from SAP. “Building AI capabilities like machine learning is an evolutionary process and lends itself to short, focused discovery, design, prototyping, and delivery cycles.”

SAP has found that early use cases for AI and machine learning show benefits in process optimization, demand planning and forecasting applications. The forecasting algorithm, for example, evaluates errors for each cycle and recommends or automatically adapts the forecasting method to produce the best result.
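A minimal sketch of that idea, not SAP’s actual implementation: backtest each candidate forecasting method on recent cycles and switch to whichever produced the lowest error. The method names and demand series below are invented for illustration.

```python
import numpy as np

def moving_average(history, window=3):
    """Forecast the next value as the mean of the last `window` observations."""
    return float(np.mean(history[-window:]))

def exp_smoothing(history, alpha=0.5):
    """Forecast the next value with simple exponential smoothing."""
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return float(level)

def pick_best_method(history, methods, holdout=4):
    """Backtest each candidate method on the last `holdout` points and
    return the name of the one with the lowest mean absolute error."""
    errors = {}
    for name, forecast in methods.items():
        errs = [abs(forecast(history[:i]) - history[i])
                for i in range(len(history) - holdout, len(history))]
        errors[name] = float(np.mean(errs))
    return min(errors, key=errors.get), errors

demand = [100, 104, 98, 110, 107, 112, 115, 118, 121, 125]
best, errors = pick_best_method(
    demand, {"moving_average": moving_average, "exp_smoothing": exp_smoothing})
print(best, errors)
```

Rerunning the selection each planning cycle is what makes the adaptation automatic: as the demand pattern shifts, a different method can win the backtest.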

For inventory applications, machine learning can automatically adjust optimal safety stock values and inventory parameters at each echelon of the supply chain. Multi-echelon inventory optimization (MEIO) strives to maintain the optimal balance of components, work in process, and finished goods inventory.
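Multi-echelon optimization itself is beyond a short example, but the single-echelon textbook rule it generalizes is easy to show. The daily demand figures below are invented; real systems would also learn the service-level factor rather than fix it.

```python
import math
from statistics import stdev

def safety_stock(daily_demand, lead_time_days, z=1.65):
    """Textbook safety-stock rule: z * sigma_daily * sqrt(lead time),
    where z = 1.65 targets roughly a 95% service level."""
    return z * stdev(daily_demand) * math.sqrt(lead_time_days)

# invented daily demand history for one component at one echelon
demand = [20, 23, 19, 25, 22, 18, 24, 21]
print(round(safety_stock(demand, lead_time_days=9), 1))  # prints 12.1
```

A machine learning layer, as described above, would adjust the inputs of this formula (demand variability, lead times, service targets) continuously at each echelon instead of leaving them as static planning parameters.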

AI Impact on Daily Work Environment

AI stands to change the daily work environment, suggests a recent article in MIT Sloan Management Review.  “What people don’t talk about is the integration problem. Even if you can develop the system to do very focused, individual tasks for what people are doing today, as long as you can’t entirely remove the person from the process, you have a new problem that arises — which is coordinating the work of, even communication between, people and these AI systems,” stated Julie Shah, an associate professor of aeronautics at MIT. “And that interaction problem is still a very difficult problem for us, and it’s currently unsolved.”

The article is based on findings from the 2017 AI Global Executive Study and Research project conducted at MIT in partnership with Boston Consulting Group. The partners surveyed 3,000 business executives in the spring of 2017 from 112 countries and 21 industries, from organizations of various sizes, two-thirds of them outside the US.

As enterprises organize broadly for AI, they will place a premium on soft skills and new forms of collaboration, including project teams composed of humans and machines.

Companies deploying AI are exploring many organizational models, with the report’s “Pioneers” evenly split among centralized, distributed and hybrid models. The report suggests a hybrid model may make the most sense for large organizations, because companies need AI resources both centrally and locally. TIAA, for example, has an analytics center of excellence and a number of decentralized groups.

“The center of excellence is not intended to be the group that will provide all analytics for the entire organization. It provides expertise, guidance and direction to other internal teams that are working to deploy AI and analytics,” said J.D. Elliott, director of enterprise data management for TIAA, a Fortune 100 financial services organization with nearly $1 trillion of assets under management.

The message: not having all the answers is no reason to hold back from exploring where AI will take your organization.

— By John Desmond, AI Trends Editor, jd@aiworld.com

AI Customer Targeting Levels – From Customer Segmentation to Recommendation Systems

This is the first of a four part series on machine learning and deep learning written for AI Trends by Piotr Migdal, Ph.D., deepsense.ai.

Every time you watch a film Netflix has suggested or buy a “similar product” on Amazon, it is a personalized recommendation. Can you make such recommendations work for your business as well?

Currently there are four levels of advancement in customer targeting, from no segmentation at all to advanced recommendation systems.

  1. No segmentation at all – targeting all potential customers the same way
  2. Manual segmentation – the most intuitive technique, the segmentation being done by human analysts
  3. Automated segmentation – using machine learning to segment datasets and look for hidden patterns
  4. Recommendation systems – instead of building a limited number of segments, these systems build an individual representation of each customer and product

Each of the four approaches has unique benefits.

1. No segmentation

In the age of the Internet, treating all customers as a homogeneous group will hurt your appeal. This is why 83% of companies use at least a basic form of segmentation in their daily business. On the other hand, 43% of marketers don’t send targeted emails.

Sometimes it is just not necessary to do so. If the business is a highly specialized or niche one or involves companies with few customers, further segmentation would not provide a significant return.

  • When an online bookstore sells only legal publications, there is usually no need to segment customers, as only lawyers or professionals may need such books.

The need for segmentation grows alongside the scale of the business; even within the narrowest segment, customers are not a homogeneous group and their needs may differ.

No segmentation at all

Benefits:

  • Simple and cheap
  • Effective at the beginning, when there are few customers
  • No costs to maintain or implement

Drawbacks:

  • Ineffective at larger scale
  • Inflexible
  • Lost opportunities resulting in low effectiveness

Who can benefit:

  • Small companies with few customers or companies with a narrow target group and high specialization

2. Manual customer segmentation

Human analysts can handle segmentation manually, using tools of varying complexity: from Excel sheets to Tableau and advanced Business Intelligence tools. The analysts usually look at intuitive segments, for example demographic divisions (age, gender) and other criteria including geography, income and total purchase value.

  • An online bookstore selling popular literature could segment readers into three groups: youth, women and men, each of which obviously has its own preferences.
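In code, manual segmentation is nothing more than a set of hand-written rules. A sketch using the bookstore’s three hypothetical groups (the field names and customers are invented, not from any real schema):

```python
def segment(customer):
    """Hand-written rules an analyst might maintain; updating them when
    the market shifts means editing this function by hand."""
    if customer["age"] < 18:
        return "youth"
    return "women" if customer["gender"] == "F" else "men"

customers = [
    {"name": "Ann", "age": 34, "gender": "F"},
    {"name": "Tom", "age": 15, "gender": "M"},
    {"name": "Bob", "age": 41, "gender": "M"},
]
segments = {c["name"]: segment(c) for c in customers}
print(segments)  # {'Ann': 'women', 'Tom': 'youth', 'Bob': 'men'}
```

The transparency is the appeal: anyone can read the rules. The drawbacks listed below follow directly from the same property.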

Even with all the benefits that attend building groups of customers, doing it manually presents significant challenges:

  • Analysts processing the data may be biased. Teenage boys are stereotyped as computer gamers, but mature women play computer games more than young boys.
  • With the dynamic nature of the market, every manual segmentation quickly becomes outdated. Then the work needs to be redone – again manually.
  • The number of groups and segments researchers can create, validate and maintain is limited.
  • Manual segmentation is not scalable

Even considering all the challenges that manual customer segmentation comes with, it is still a powerful efficiency-boosting tool. For many companies, including most small businesses, manual segmentation is just enough.

Manual customer segmentation

Benefits:

  • Intuitive and simple
  • Greater efficiency than no segmentation at all
  • Transparent and easy to understand

Drawbacks:

  • Not scalable
  • Fairly inflexible
  • Requires constant maintenance, updates and supervision by human analysts

Who can benefit:

  • Small, middle and sometimes large companies with easily segmented customer groups, including companies selling products tailored for demographics or using other straightforward criteria to target customers

3. Automatic segmentation done with machine learning

Machine learning can be used to predict behaviour such as affinity for a given product or churn probability. However, this approach becomes slightly more challenging if you want to cluster similar customers, when there is no “ground truth”.

K-means and hierarchical aggregation (agglomerative clustering) are currently the most widely used algorithms for clustering datasets without human supervision. Each point (a customer in the dataset) is assigned to a cluster. Free of the limited perception of a human researcher, including hidden biases and presuppositions, these algorithms can spot the most obscure and least obvious clusters within the dataset.

  • In k-means, the number of clusters is fixed and the algorithm finds neighbouring points accordingly, as explained in this visualization.
  • Hierarchical aggregation is a family of techniques that connect neighbouring points step-by-step, forming a dendrogram. While this approach provides more flexibility (the number of clusters can be chosen post factum), it requires stricter supervision and the clustering may be less stable.
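To make the k-means loop concrete, here is a bare-bones NumPy version run on a tiny synthetic customer dataset; real implementations (for example scikit-learn’s) add smarter initialization such as k-means++ and multiple random restarts.

```python
import numpy as np

def kmeans(X, k, n_iter=50):
    """Bare-bones Lloyd's algorithm: alternately assign each point to its
    nearest centroid, then move each centroid to the mean of its points.
    Deterministic init from the first k points, for illustration only."""
    centroids = X[:k].copy()
    for _ in range(n_iter):
        # distance of every point to every centroid, shape (n_points, k)
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# synthetic customers: two features (say, visits and basket value), two groups
X = np.array([[1.0, 2.0], [8.0, 9.0], [1.2, 1.8],
              [8.2, 8.8], [0.8, 2.1], [7.9, 9.1]])
labels, centroids = kmeans(X, k=2)
print(labels)  # [0 1 0 1 0 1] – two clean clusters
```

Each customer ends up with a cluster label and nothing else; the analyst’s remaining job is interpreting what each cluster means.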

One double take-inducing example comes from a man who used clustering to find the perfect woman for him from OkCupid’s database. Such inspiring applications aside, using algorithms to segment datasets will present a number of challenges:

  • Clustering is done on all data, both the useful and the irrelevant (e.g. hair color and complexion may be useful for shampoos but useless for taste in films)
  • There is a fixed number of clusters. Although there are heuristics designed to tackle the problem, it remains an arbitrary choice. Clustering puts everyone in a distinct group, but there are surely more than 50 shades of grey between black and white.
    • Establishing a sharp line between readers of fiction and non-fiction books is every bit as hard as distinguishing between pure high-fantasy lovers and hard science fiction readers.
  • Usually there is an actionable interpretation for only some of the groups.
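One of the heuristics alluded to above for choosing the number of clusters is the “elbow” method: run the clustering for several values of k and look for the point where the within-cluster error stops dropping sharply. A sketch, reusing a bare-bones k-means on invented data:

```python
import numpy as np

def inertia(X, k, n_iter=25):
    """Within-cluster sum of squared distances after a basic k-means run
    (deterministic init from the first k points, for illustration)."""
    c = X[:k].copy()
    for _ in range(n_iter):
        labels = np.linalg.norm(X[:, None, :] - c[None, :, :], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                c[j] = X[labels == j].mean(axis=0)
    return float(((X - c[labels]) ** 2).sum())

X = np.array([[1.0, 2.0], [8.0, 9.0], [1.2, 1.8],
              [8.2, 8.8], [0.8, 2.1], [7.9, 9.1]])
for k in (1, 2, 3):
    print(k, round(inertia(X, k), 2))
# the error collapses from k=1 to k=2 and barely moves at k=3:
# the "elbow" lands on k=2 – but the choice is still ultimately arbitrary
```

Even with the heuristic, the analyst decides where the elbow is; on messy real-world data the curve rarely bends as cleanly as here.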

It is therefore sometimes better to treat every customer individually instead of building groups and trying to find which group is the best fit. That’s the bedrock of recommendation systems.

Automatic segmentation done with machine learning

Benefits:

  • Finds hidden clusters within the dataset
  • Automated, and therefore free of human presuppositions
  • Easier to scale and maintain productivity

Drawbacks:

  • Requires human supervision and further interpretation of the segments; may produce segments that make no sense
  • Requires maintenance and updates

Who can benefit:

  • Larger organizations with too much data to handle manually

4. Recommendation systems

Instead of seeking groups within a dataset, recommendation engines provide the customer with a representation in the form of a multidimensional vector (much like word2vec for discovering word similarities and analogies). It shows how the customer is perceived in terms of inferred (not chosen!) factors:

  • How much do our book readers like fiction or non-fiction books?
  • What’s their attitude to fantasy and science fiction novels?
  • How do they feel about romantic, dramatic and action-packed plots?
  • What is the writer’s political affiliation?

Modern recommendation systems, such as Factorization Machines, are able to leverage both official data (the author, genre, date of publication) and less obvious information inferred from buying patterns (are there fictional monsters in the plot? are there supernatural horror elements?).

  • If a reader likes political fiction, is slightly interested in science fiction and loves dramatic stories, they would probably be keen to read Margaret Atwood’s “The Handmaid’s Tale”.
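Factorization Machines generalize plain matrix factorization; the simpler version below is enough to show the core idea of assigning a latent vector to each reader and each book, with missing entries filled in by the dot product. The ratings matrix and all numbers are invented; this is a sketch, not a production recommender.

```python
import numpy as np

# invented ratings: rows = readers, columns = books, 0 = not yet read
R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [0, 1, 5, 4],
              [1, 0, 4, 5]], dtype=float)

def factorize(R, n_factors=2, lr=0.01, reg=0.02, epochs=2000, seed=0):
    """Learn one latent vector per reader (rows of U) and per book (rows
    of V) so that U @ V.T approximates the observed ratings; zeros are
    treated as missing and never trained on."""
    rng = np.random.default_rng(seed)
    U = rng.normal(0, 0.1, (R.shape[0], n_factors))
    V = rng.normal(0, 0.1, (R.shape[1], n_factors))
    rows, cols = np.nonzero(R)
    for _ in range(epochs):
        for i, j in zip(rows, cols):
            err = R[i, j] - U[i] @ V[j]
            U[i] += lr * (err * V[j] - reg * U[i])
            V[j] += lr * (err * U[i] - reg * V[j])
    return U, V

U, V = factorize(R)
pred = U @ V.T
# reader 0 never rated book 2; readers with similar vectors rated it low,
# so the reconstructed entry should come out low as well
print(np.round(pred, 1))
```

The factors are inferred, not chosen: nothing in the code says what the two latent dimensions mean, yet they end up separating the two taste groups in the data.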

Assigning vectors to both customers and products allows the company to build narrow segments within segments and launch precisely targeted marketing campaigns.

  • Instead of a generic “fiction weekend”, the online bookstore can target narrow segments with the books individuals in a given group prefer. The group of fantasy novel readers gets information about a “high fantasy weekend” and a selection of the 10 books likely to interest them most.

We can take things even a step further, showcasing not only similar products, but analogous ones too – for example, a maternity version of a dress a client likes.

Recommendation systems also work with datasets that are not fully covered. No one reads every book that is published, but a dozen books may be enough to predict what a person will like.

Using vector representations makes it easier to validate whether the system is working properly – every purchase is feedback and can be used to readjust the model.

Recommendation systems

Benefits:

  • Every customer is treated individually
  • Constantly updated and evaluated
  • Scalable

Drawbacks:

  • Requires vast amounts of data to work properly
  • The technology is complicated and requires skilled data scientists

Who can benefit:

  • Large organizations with data-oriented culture, which process great amounts of data and are able to leverage the system.

Summary

While recommendation systems are more flexible and sophisticated than segmentation, the tools a company uses must be suited to the type of business it does and the challenges it needs to solve. Machine learning is not a magic problem solver providing out-of-the-box solutions that work everywhere. But where the scale, variety and complexity of datasets are overwhelming for human data scientists, ML is the best tool available.

For more information, go to deepsense.ai

AI Researchers Have a Plan to Save Coral Reefs

Climate change has been bleaching coral reefs, decimating the local marine species that call them home, since at least the first major observations were recorded in the Caribbean in 1980. Thankfully, researchers hope that new A.I. cataloguing, designed to identify the geographic regions where coral is still thriving, can reverse the trend, saving some of the world’s most dense and varied aquatic ecosystems from all-but-certain extinction.

There are numerous reasons why we need to care about saving coral reefs, from the ethical to the economic. In addition to housing about a quarter of marine species, these reefs provide $375 billion USD in revenue to the world economy, according to the Guardian, and food security to half a billion people. Without them, researchers say countless species and the entire ocean fishing industry that depends on them would simply evaporate.

The problem is that there’s only so much money and so much time to devote to mitigating the damage already in progress, while the 172 nations who ratified the United Nations Framework Convention on Climate Change “Paris Agreement” race to cut back on their carbon emissions. But an international consortium of researchers say they hope that artificial intelligence can fill in the gaps, and help the reefs get the attention and resources they need to survive.

The solution involved a team of researchers deploying underwater scooters with 360-degree cameras to photograph 1,487 square miles of reef off the coast of Sulawesi Island in Indonesia. (Sulawesi, nestled in the middle of the Coral Triangle, is surrounded by the highest concentration of marine biodiversity on the planet.)

Those images were then fed into a deep learning A.I. that had been trained on 400 to 600 images to identify types of coral and other reef invertebrates, in order to assess the region’s ecological health.

“The use of A.I. to rapidly analyze photographs of coral has vastly improved the efficiency of what we do,” Emma Kennedy, PhD., a benthic marine ecologist at the University of Queensland, said in a statement. “What would take a coral reef scientist 10 to 15 minutes now takes the machine a few seconds.”

“The machine learns in a similar way to a human brain, weighing up lots of minute decisions about what it’s looking at until it builds up a picture and is confident about making an identification.”

Kennedy and other researchers have also been using a custom, iterative clustering algorithm to identify coral reefs across the world that seem most likely to benefit from conservation resources. Their formula is based on 30 metrics known to impact coral reef ecology, broadly divided into categories like historical activity, thermal conditions, cyclone wave damage, and coral larvae behavior. A map of these prime sites for future coral conservation was published in Conservation Letters, a journal of the Society for Conservation Biology, late this July.

The research was made possible by generous donations from the Australian government, the Nature Conservancy, Bloomberg Philanthropies, the Tiffany & Co. Foundation, and the Paul G. Allen Family Foundation, whose namesake’s pleasure barge has a notable record in the field of coral reef depletion.

Kennedy and her team hope that these A.I. techniques will be further refined to help manage coral reefs on the more local level as well as several ecologically significant sites, including the Meso‐American Barrier Reef and the corals in Hawaii, both of which had to be excluded from their study.

Local versions of their global study, they believe, would benefit from data that is not uniformly available for reefs internationally: information about ocean chemistry, the ‘adaptive capacity’ of local reefs to withstand climate change or other stress on their systems, or the particulars of the local economic dependence on these coral reefs.

Read the source article at Inverse.com.

AI in Edmonton: Home to Reinforcement Learning

Edmonton, Home to Reinforcement Learning, now a Foundation of AI,
is Retaining AI Talent, Attracting Investment Including DeepMind

Edmonton, the capital of the Canadian province of Alberta, like its counterparts Toronto and Montreal, has a number of strengths in AI research that are attracting engineering talent and private investors. These include:

— The University of Alberta, considered a bedrock of Reinforcement Learning (RL) thanks to pioneering work done by Prof. Richard Sutton. The Royal Bank of Canada’s RBC Research arm announced in early 2017 it would hire Prof. Sutton to advise a new research lab opening in Alberta to explore the application of AI in banking.

RBC CEO Dave McKay stated at the time, “There is a lot of investment discussion about AI creating new capabilities. And it is a tool we are very excited about harnessing it within our own organization.”

Amii (Alberta Machine Intelligence Institute), a research group set up by Prof. Sutton, has continued to attract top students from around the world.

Borealis AI is a research center funded by RBC and aligned with U Alberta and Amii, aimed at technology transfer from AI research to commercial business opportunities. Prof. Mathew Taylor, an RL expert from Washington State University, leads research at Borealis, which currently has 15 researchers focused on solving RL problems.

ACAMP (Alberta Center for Advanced Micro Nano Technology), is an industry-led product development center founded in 2007 and used by advanced technology entrepreneurs to move their innovation from proof-of-concept to manufactured product. The center provides entrepreneurs access to multidisciplinary engineers, technology experts, unique specialized equipment, and industry expertise.

Located in Edmonton’s Research Park, ACAMP has a focus on electronics hardware, firmware, sensors, and embedded systems. The center’s product development group provides a range of support at each stage of the product development process.

The firm cites client testimonials from Xtel International, Ltd., Symroc, Nanolog Audio, the University of Dayton, Medella Health and Hifi Engineering.

Prof. Sutton Recognized for Reinforcement Learning Research

Dr. Sutton is recognized for his work in reinforcement learning, an area of machine learning in which systems learn by trial and error, without historical data or explicit examples. Reinforcement learning techniques have been shown to be powerful in determining ideal behaviors in complex environments. For example, the techniques were used to secure a first-ever victory over a human world champion in the game of Go, and they have seen recent applications in robotics and self-driving cars.

“The collaboration between RBC Research and Amii will help support the development of an AI ecosystem in Canada that will push the boundaries of academic knowledge,” stated Dr. Sutton in a press release. “With RBC’s continued support, we will cultivate the next generation of computer scientists who will develop innovative solutions to the toughest challenges facing Canada and beyond. We’ve only scratched the surface of what reinforcement learning can do in finance.”

“We are thrilled to be opening a lab in Edmonton and to collaborate with world-class scientists like Dr. Sutton and the other researchers at Amii,” stated Dr. Foteini Agrafioti, head of RBC Research. “RBC Research has built strong capabilities in deep-learning, and with this expansion, we are well poised to play a major role in advancing research in AI and impact the future of banking.”

Gabriel Woo, VP of innovation at RBC Research in Toronto, stated in the Financial Post that while Toronto’s and Montreal’s AI ecosystems are further along, “you have a comparable academic lab at AMII, and it is home to Sutton, who literally wrote the textbook on reinforcement learning that is being read around the world. Because of that, we are partnering with them to create and fuel opportunities to help that talent stay in Edmonton.”

Woo believes the community can expect to see more investors and startups in the near future. “If we are able to provide opportunities for them to apply their research, it will attract more attention from VCs and others and increase the opportunities for commercialization.”

Edmonton Startups Have Access to Capital, Work Space

That notion was seconded by Shawn Abbott, a general partner at iNovia Capital, which backs early stage companies. “The rising tide in AI has been due to the avalanche of large-scale cloud computing capacity, which has made the techniques of scientific AI development practical,” he said in an interview with AI Trends. “AI helps make a commodity of prediction; the ability to forecast what will happen next is now available in many industries. It’s a new way to build software and to provide cognitive augmentation, the ability to support intellectual or human endeavors with software.”

The advances of Dr. Sutton have been pivotal to the progress of AI generally, and in Edmonton in particular. “Dr. Sutton’s group has turned out more PhDs in AI than any other group in Canada,” Abbott said.

Keeping that talent in Canada has been the focus of Startup Edmonton, funded by the Edmonton Economic Development Corp., since its founding in 2009. The group supports entrepreneurs with mentorship programs, coworking space and community events, bringing together developers, students, founders and investors. The effort has helped to some degree to stem the brain drain of AI talent from Canada. “I don’t think it’s completely stopped but it has slowed down,” said Tiffany Linke-Boyko, CEO of Startup Edmonton, in an interview with AI Trends. A more favorable cost of living in Edmonton also helps. “The expense of living in some of the US high tech cities is insane,” she said.

She described the effort to raise awareness of Edmonton as a good location to build new AI companies as off to a good start, though still at an early stage. “We still need more companies; it’s a young ecosystem with interesting momentum,” she said.

DeepMind Commitment a Boost

Edmonton got a boost with the announcement in July 2017 that DeepMind would open its first international AI research lab in downtown Edmonton. The 10-person lab, to operate in partnership with the University of Alberta, will be headed by three University of Alberta PhDs: Richard Sutton, Michael Bowling and Patrick Pilarski.

From left, Dr. Rich Sutton, Dr. Michael Bowling and Dr. Patrick Pilarski, all professors of AI at the University of Alberta, will run the Edmonton research lab of DeepMind, an AI research division of Google.

“This is a huge reputational win for the University of Alberta,” stated U of A’s dean of science, Jonathan Schaeffer, himself an AI pioneer, in an account in the Edmonton Journal.  “We’ve been one of the best AI research centres in the world for more than 10 years. The academic world knows this, but the business community doesn’t. The DeepMind announcement puts us on the map in a big way. It’s going to wake up a lot of people.”

Bowling is a leading expert on AI and games. He and his team created computer programs that beat champion human poker players. Pilarski, an engineer, specializes in adapting AI to medical uses, from helping to create intelligent prosthetic limbs to reading and screening medical tests. DeepMind of London wanted them, but the three didn’t want to leave Edmonton to move to London. So DeepMind decided to come to them.

“We’ve reached a critical mass here. There’s a kind of stickiness,” stated Pilarski. “This is the right place at the right time. It’s like nowhere else in the world.”

Now the three are in a good position to attract some of their best students back to Edmonton and to recruit more top students. “A lot of our graduates are dying for a chance to use their education in Edmonton,” stated Bowling. “We’re hoping this is a catalyst for more of a tech build-up in Edmonton.”

Over the last 15 years, the Alberta government has invested $40 million in AI and machine learning research, mostly at the U of Alberta.  That steady funding lured Sutton and Bowling to Edmonton initially.

DeepMind in January announced funding for an endowed chair at the University of Alberta’s department of computer science. The person who fills the position will be given academic freedom to explore any interest that could advance the field of AI.

“The DeepMind endowed chair, together with additional funding to support AI research at the department of computing science, is a sign of our continued commitment to this cause, and we look forward to the research breakthroughs this deep collaboration will bring,” stated Demis Hassabis, founder and CEO of DeepMind, in a press release.

Interesting AI Startups in and Around Edmonton

Here is a look at selected Edmonton-area startups that incorporate AI in their products or services.

Testfire Labs: Machine Learning Underlies the Hendrix AI Assistant

Testfire Labs, founded in 2017, is a startup that uses machine learning and artificial intelligence to build productivity solutions that modernize the way people work. Testfire’s flagship product, Hendrix.ai, is an AI assistant that captures meeting notes, action items and data points by listening via a microphone.

Currently in its beta test phase, Hendrix is said to produce meeting summaries that leave out “chit chat” for clarity.

“The demands to do more with less in modern business keep increasing,” stated Dave Damer, founder and CEO, in an account on Testfire recently published in AI Trends. “AI gives us an opportunity to legitimately take things off people’s hands that are generally mundane tasks so they can focus on higher-value work.”

Testfire has had three rounds of funding, with the amount raised undisclosed, according to Crunchbase.

Stream Technologies, Inc.

Stream combines spectroscopy with AI and machine learning to make detection quick and easy. Test results that normally come from a lab, or from those with a certain level of expertise, can now be produced in near-real time.

Within the agriculture sector, customers may want to identify anything from an invasive species, to a disease, to a nutrient deficiency, to levels of oil in plants, seeds and fertilizers.

Stream delivers its services through a three-stage system: capture, analyze and visualize. Capture is performed by a multispectral camera or spectrometer; that data is fed into the Stream Analytics Engine, which creates an application to analyze the spectral data; in the visualization stage, results are ready in minutes, as either colored images or levels of the detected element.

The Analytics Engine combines machine learning techniques with neural network designs built specifically to present test results from spectral images and spectrometer scans.

One example is the ability to detect the difference between organic and polyethylene leaves. After the analysis, the polyethylene leaves are colored red and the organic leaves are colored blue.  
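As a rough illustration of the analyze and visualize stages, the sketch below classifies each pixel's spectrum with a nearest-centroid model and maps the result to a display color. The band values and class signatures are invented for illustration; they are not Stream's actual model or data.

```python
# Hypothetical sketch of an "analyze" + "visualize" pipeline for
# spectral data. All numbers below are made up for illustration.

# Mean reflectance signatures per class across 4 spectral bands (invented):
SIGNATURES = {
    "organic":      [0.10, 0.15, 0.60, 0.55],   # strong near-infrared response
    "polyethylene": [0.40, 0.42, 0.45, 0.20],   # flatter visible, weak NIR
}
COLORS = {"organic": "blue", "polyethylene": "red"}

def classify(spectrum):
    """Nearest-centroid classification: pick the class whose signature
    is closest (squared Euclidean distance) to the observed spectrum."""
    def dist(sig):
        return sum((s - x) ** 2 for s, x in zip(sig, spectrum))
    return min(SIGNATURES, key=lambda c: dist(SIGNATURES[c]))

def visualize(pixels):
    """Map each classified pixel to its display color."""
    return [COLORS[classify(p)] for p in pixels]

# Two sample pixels: one leaf-like spectrum, one plastic-like spectrum.
pixels = [[0.12, 0.16, 0.58, 0.50], [0.38, 0.44, 0.46, 0.22]]
print(visualize(pixels))   # → ['blue', 'red']
```

Stream's production engine uses learned models and neural networks rather than fixed centroids, but the overall flow, per-pixel spectral classification feeding a colored output image, follows this shape.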

Learn more at Stream Technologies.

DrugBank is a Leading Online Drug Database

DrugBank is a curated pharmaceutical knowledge base for precision medicine, electronic health records and drug development.

“Our mission is to enable advances in precision medicine and drug discovery,” said co-founder and CEO Mike Wilson of OMx Personal Health Analytics, Inc., which operates DrugBank, in comments to AI Trends.

DrugBank founders Craig Knox, left, and Mike Wilson.

DrugBank provides structured drug information that covers drugs from discovery stage to approval stage. It includes comprehensive molecular information about drugs, their mechanisms, their interactions and their targets as well as detailed regulatory information including indications, marketing status and clinical trials. DrugBank has become one of the world’s most widely used reference drug resources. It is routinely used by the general public, educators, pharmacists, pharmacologists, the pharmaceutical industry and regulatory agencies.

The first version of DrugBank was released in 2006. Version 5.1.1 was released in July 2018. The online database started as a project of computer science professor Dr. David Wishart of the U of Alberta. Craig Knox and Mike Wilson helped develop the tool as undergraduates. The two later made a deal with the university to commercialize the database and set up shop at Startup Edmonton, which provides workspace and support for entrepreneurs.

“The first weekend we released it, the servers crashed because there was so much traffic coming in,” stated co-founder Craig Knox in an account in Startup Edmonton. “It was quite popular and grew in its popularity over the years.” Over the next decade, DrugBank became ubiquitous in the pharma world, with millions of global users.

“We sell subscriptions for our datasets and software for precision medicine, electronic health records, and drug development. We also provide datasets for academic researchers for free,” Wilson told AI Trends.

Now DrugBank’s commercial clients include some of the largest pharmaceutical companies in the world, as well as mid-sized companies, a growing number of pharma startups, and companies providing scientific reference software. “The value for the users is saving time by finding the information in one place,” stated Wilson.

Each month, a million users visit the site, making DrugBank the most popular drug database in the world. It has information on more than 20,000 individual drugs, including approved drugs, drugs in clinical trials and drug formulas that show potential.

With pharma research advancing rapidly, the database must be continually updated with new information. To do this, the company uses a team of nine ‘bio curators’ — representing pharmacy, medicine, biochemistry, and other fields — who comb the academic literature for new information to add to the resource daily.

New offerings use AI to provide insights for precision medicine and pharmaceutical analytics. “Our latest offering analyzes an individual’s medical history and medications and provides important insights based on an analysis of various factors including side effects, interactions and comparisons to similar medications,” Wilson said. “The offering leverages our extremely detailed structured knowledge base and a proprietary AI algorithm to provide the analysis.”

The founders spoke highly of the support they get from Startup Edmonton, which has helped them lay a foundation for a global, scalable technology product. They enjoy being located in the downtown facility with its network of entrepreneurs. “You learn from each other which is a really cool benefit,” stated Wilson.

— By John P. Desmond, AI Trends Editor

Next: AI in Vancouver

 

Montreal-Toronto AI Startups Have Wide Range of Focus

Includes Healthcare, Biomed, Text Analysis, Legal Research, Image Analysis, Drug Discovery, Education

Canada has made a commitment for many years to the study of AI at universities across the country, and today robust business incubation programs supported by Canada’s provincial and regional governments work to transform research into viable businesses. This AI ecosystem has produced breakthrough research and is attracting top talent and investment by venture capital. Here is a look at a selection of Montreal- and Toronto-based AI startups.

TandemLaunch, Technology Transfer Acceleration

TandemLaunch is a Montreal-based technology transfer acceleration company, founded in 2010, that works with academic researchers to commercialize their technological developments. Founder Helge Seetzen, CEO and General Partner, directs the company’s strategy and operations. TandemLaunch has raised $29.5 million since its founding, according to Crunchbase. The firm has spun out more than 20 companies and has been recognized for supporting women founders.

Seetzen was a successful entrepreneur who co-founded Sunnybrook Technologies and later BrightSide Technologies to commercialize display research developed at the University of British Columbia. BrightSide was sold to Dolby Laboratories for $28 million in 2007.

TandemLaunch provides startups with office space, access to IT infrastructure, shared labs for electronics, mechanical or chemical prototyping, mentoring, hands-on operational support and financing.

Asked by AI Trends to comment, CEO Seetzen said, “TandemLaunch has a long history of building leading AI companies based on technologies from international universities. Example successes include LandR – the world’s largest music production platform – and SportlogiQ which offers AI-driven game analytics for sports. Many younger TandemLaunch companies are at the brink of launching game-changing products onto the market such as Aerial’s AI for motion sensing from Wi-Fi signals which will be released in several countries as a home security solution later this year. With hundreds of AI developers across our portfolio of 20+ companies, TandemLaunch is well positioned to capitalize on AI opportunities of all stripes.”

Other companies in the TandemLaunch portfolio include: Kalepso, focused on blockchain and machine learning; Ora, offering nanotechnology for high-fidelity audio; Wavelite, aiming to increase the lifetime of wireless sensors used in IoT operations; Deeplite, providing an AI-driven optimizer to make deep neural networks faster; Soundskrit, changing how sound is measured using a bio-inspired design; and C2RO, offering a robotic SaaS platform to augment perception and collaboration capabilities of robots.

Learn more at TandemLaunch.

BenchSci for Biomedical Researchers

BenchSci offers an AI-powered search engine for biomedical researchers. Founded in 2015 in Toronto, the company recently raised $8 million in a series A round of funding led by iNovia Capital, with participation including Google’s recently-announced Gradient Ventures.

BenchSci uses machine learning to translate both closed- and open-access data into recommendations for specific experiments planned by researchers. The offering aims to speed up studies by helping biomedical professionals find reliable antibodies and reduce resource waste.

“Without the use of AI, basic biomedical research is not only challenging, but drug discovery takes much longer and is more expensive,” BenchSci cofounder and CEO Liran Belenzon stated in an account in VentureBeat. “We are applying and developing a number of advanced data science, bioinformatics and machine learning algorithms to solve this problem and accelerate scientific discovery by ending reagent failure.” (A reagent is a substance used to detect or measure a component based on its chemical or biological activity.)

In July 2017, Google announced its new venture fund aimed at early-stage AI startups. In the year since, Gradient Ventures has invested in nine startups including BenchSci, the fund’s first known health tech investment and first outside the US.

“Machine learning is transforming biomedical research,” stated Gradient Ventures founding partner Ankit Jain. “BenchSci’s technology provides a unique value proposition for this market, enabling academic researchers to spend less time searching for antibodies and more time working on their experiments.”

BenchSci told VentureBeat it tripled its headcount last year and plans to add 16 new hires throughout 2018.

Learn more at BenchSci.

Imagia to Personalize Healthcare Solutions

Imagia is an AI healthcare company that fosters collaborative research to accelerate accessible, personalized healthcare.

Founded in 2015 in Montreal, the company in November 2017 acquired Cadens Medical Imaging for an undisclosed amount, to accelerate development of its biomarker discovery processes. Founded in 2008, Cadens develops and markets medical imaging software products designed for oncology, the study of tumors.

Venture-backed Imagia acquired Cadens Medical Imaging.

“This strategic transaction will significantly accelerate Imagia’s mission of delivering AI-driven accessible personalized healthcare solutions. Augmenting Imagia’s deep learning expertise with Cadens’ capabilities in clinical AI and imaging was extremely compelling, to ensure our path from validation to commercialization,” stated Imagia CEO Frederic Francis in a press release. “This is particularly true for our initial focus on developing oncology biomarkers that can improve cancer care by predicting a patient’s disease progression and treatment response.”

Imagia co-founder and CTO Florent Chandelier said, “Our combined team will build upon the long-term outlook of clinical research together with healthcare partnerships, and the energy and focus of a technology startup with privileged access to deep learning expertise and academic research from Yoshua Bengio’s MILA lab. We are now uniquely positioned to deliver AI-driven solutions across the healthcare ecosystem.”

In prepared remarks, Imagia board chair Jean-Francois Pariseau stated, “Imaging evolved considerably in the past decade in terms of sequence acquisition as well as image quality. We believe AI enables the creation of next generation diagnostics that will also allow personalization of care. The acquisition of Cadens is an important step in building the Imagia platform and supports our strategy of investing in ground breaking companies with the potential to become world leaders in their field.”

Learn more at Imagia.

Ross Intelligence: Where AI Meets Legal Research

Ross Intelligence is where AI meets legal research. The firm was founded in 2015 by Andrew Arruda, Jimoh Ovbiagele and Pargles Dall’Oglio, machine learning researchers from the University of Toronto. Ross, headquartered in San Francisco, in October 2017 announced an $8.7 million Series A investment round led by iNovia Capital, which saw an opportunity to compete with the legal research firms LexisNexis and Thomson Reuters.

The platform helps legal teams sort through case law to find details relevant to new cases. Using standard keyword search, the process takes days or weeks. With machine learning, Ross aims to augment the keyword search, speed up the process and improve the relevancy of terms found.
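One way to picture the augmentation the article describes is a two-stage retrieval: a keyword filter followed by a learned relevance ranking. The sketch below is purely illustrative, with invented documents and a term-frequency cosine score standing in for Ross's actual deep learning stack.

```python
import math
from collections import Counter

# Hypothetical sketch of ML-augmented keyword search. The case texts
# and the scoring function are invented for illustration; Ross's real
# system uses proprietary deep learning and IBM Watson NLP.

DOCS = {
    "case_a": "tenant eviction notice requires thirty days written warning",
    "case_b": "landlord may not evict tenant without proper written notice",
    "case_c": "contract dispute over delivery of goods and payment terms",
}

def tf_vector(text):
    """Bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query):
    """Stage 1: keyword filter keeps docs sharing any query term.
    Stage 2: rank survivors by similarity (a stand-in for a learned
    relevance model)."""
    qv = tf_vector(query)
    hits = [d for d, text in DOCS.items() if set(qv) & set(tf_vector(text))]
    return sorted(hits, key=lambda d: cosine(qv, tf_vector(DOCS[d])),
                  reverse=True)

print(search("tenant eviction notice"))   # → ['case_a', 'case_b']
```

The point of the second stage is the one the article makes: plain keyword matching alone surfaces too much or too little, while a learned ranking pushes the most relevant cases to the top and cuts research time.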

“Bluehill [Research] benchmarks Lexis’s tech and they are finding 30 percent more relevant info with Ross in less time,” stated Andrew Arruda, co-founder and CEO of Ross, in an interview with TechCrunch.

Ross uses a combination of off-the-shelf and proprietary deep learning algorithms for its AI stack. The firm is using IBM Watson for some of its natural language processing as well. To build training data, Ross is working with 20 law firms to simulate workflow examples and test results.

Ross has raised a total of $13.1 million in four rounds of financing, according to Crunchbase.

The firm recently hired Scott Sperling, former head of sales at WeWork, as VP of sales. In January, Ross announced its new EVA product, a brief analyzer with some of the power of the commercial version. Ross is giving it away for free to seed the market. The tool can check the recent history related to cited cases and determine if they are still good law, in a manner similar to that of LexisNexis Shepard’s and Thomson Reuters KeyCite, according to an account in LawSites.

EVA’s coverage of cases includes all US federal and state courts, across all practice areas. “With EVA, we want to provide a small taste of Ross in a practical application, which is why we are releasing it completely free,” Arruda told LawSites. “We’re deploying a completely new way of doing research with AI at its core. And because it is based on machine learning, it gets smarter every day.”

For more information, go to Ross Intelligence.

Phenomic AI Uses Deep Learning to Assist Drug Discovery

Phenomic AI is developing deep learning solutions to accelerate drug discovery. The company was founded in Toronto in June 2017 by Oren Kraus, from the University of Toronto, and Sam Cooper, a graduate of the Institute of Cancer Research in London. The aim is to use machine learning algorithms to help scientists studying image screenings to learn which cells are resistant to chemotherapy, thus fighting the recurrence of cancer in many patients. The AI enables the software to comb through thousands of cell culture images to identify those responsible for being chemo-resistant.

Phenomic AI founders Oren Kraus, left, and Sam Cooper.

“My PhD at U of T was looking at developing deep-learning techniques to automate the process of analyzing images of cells, so I wanted to create a company looking at this issue,” stated Kraus in an account in StartUp Here Toronto.  “There are key underlying mechanisms that allow cancer cells to survive in the first place. If we can target those underlying mechanisms that prevent cancer coming back in entire groups of patients, that’s what we’re going for.”

Cooper is working towards his PhD with the department of Computational Medicine at Imperial College, London, and also with the Dynamical Cell Systems team at the Institute of Cancer Research. His research focuses on developing deep and reinforcement learning solutions for pharmaceutical research.

An early research partner of Phenomic AI is the Toronto Hospital for Sick Children, in a project to study a hereditary childhood disease.

The company has raised $1.5 million in two funding rounds, according to Crunchbase.

Learn more at Phenomic AI.

Erudite.ai Aims at Peer Tutoring

Erudite.ai is marketing ERI, a product that aims to connect a student who needs help on a subject with a peer who has shown expertise in the same subject. The company was founded in 2016 in Montreal and has raised $1.1 million to date, according to Crunchbase. The firm uses an AI system to analyze the content of conversations and specific issues the student faces. From that, it generates personalized responses for the peer-tutor. ERI is offered free to students and schools.

Erudite.ai is competing for the IBM Watson AI XPrize, being among the top 10 teams announced in December from 150 entrants competing for $5 million in prize money. President and founder Patrick Poirier was quoted in The Financial Post on the market opportunity: “Tutoring is very efficient at helping people improve their grades. It’s a US $56 billion market. But at $40 an hour, it’s very expensive.” Erudite.ai is giving away its product, for now. The plan is to go live in September and host 200,000 students by year-end. By mid-2019, the company plans to sell a version of the platform to commercial tutoring firms, to help them speed teaching time and reduce costs.

The company hopes to extend beyond algebra to geometry, then the sciences, in two years. “The AI will continue to improve,” stated Poirier. “In five years, I hope we will be helping 50 million people.”

Learn more at Erudite.ai.

Keatext Comprehends Customer Communication Text

Keatext’s AI platform interprets customers’ written feedback across various channels to highlight recommendations aimed at improving the customer experience. The firm’s product is said to enable organizations to audit customer satisfaction, identify new trends, and keep track of the impact of actions or events affecting the clients. Keatext’s technology aims to mimic human comprehension of text to deliver reports to help managers make decisions.

Keatext Team, founder Narjes Boufaden in foreground.

The company was founded in 2010 in Montreal by Narjes Boufaden, first as a professional services company. From working with clients, Boufaden identified a gap in the text analytics industry she felt the firm could address. In 2014, Keatext began offering a SaaS product.

Boufaden holds an engineering degree in computer science and a PhD in natural language processing, earned under the supervision of Yoshua Bengio and Guy Lapalme. Her expertise is in developing algorithms to analyze human conversations. She has published many articles on NLP, machine learning, and text mining from conversational texts.

Keatext in April announced a new round of funding, adding CA$1.72 million to support commercial expansion, bringing the company’s funding total to CA$3.32 million since launching its platform two years ago. “This funding will help us gain visibility on a wider scale as well as to consolidate our technological edge,” stated Boufaden in a press release. “Internet and intranet communication allows organizations to hold ongoing conversations with the people they serve. This gives them access to an enormous amount of potentially valuable information. Natural language understanding and deep learning are the keys to tapping into this information and revealing how to better serve their audiences.”

Learn more at Keatext.

Dataperformers in Applied AI Research

Founded in 2013 in Montreal, Dataperformers is an applied research company that works on advanced AI technologies. The company has attracted top AI researchers and engineers to work on Deep Learning models to enable E-commerce and FinTech business uses.

Calling Dataperformers “science-as-a-service,” co-founder and CEO Mehdi Merai stated, “We are a company that solves problems through applied research work in artificial intelligence,” in an article in the Montreal Gazette. Among the first clients is Desjardins Group, an association of credit unions using the service to analyze large data volumes, hoping to discover hidden patterns and trends.

Dataperformers is also working on a search engine for video called SpecterNet, which combines AI and computer vision to find specific content. Companies could use the search engine to identify videos where their products appear, then market the product to the video’s audience. The company is using reinforcement learning to help the video search AI learn on its own.

Learn more at Dataperformers.

Botler.ai Bot Helps Determine Sexual Harassment

Botler.ai was founded in January 2018 by Ritika Dutt, COO, and Amir Moravej, CEO, as a service to help victims of sexual harassment determine whether the law has been violated. The bot was created after cofounder Dutt experienced harassment herself.

Left to right: Cofounders Amir Moravej and Ritika Dutt with advisor Yoshua Bengio. Photo by Eva Blue

She was unsure how to react after the experience, but once she researched the legal code, she gained confidence. “It wasn’t just me making things up in my head. There was a legal basis for the things I was feeling, and I was justified in feeling uncomfortable,” she stated in an account in VentureBeat.

The bot uses natural language processing to determine whether an incident could be classified as sexual harassment. The bot learned from 300,000 court cases in Canada and the US, drawing on testimony from court filings, since testimony aligns most closely with conversational tone. The bot can generate an incident report.
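Botler.ai has not published its model, but the general shape of the task, deciding whether a described incident matches patterns learned from labeled court testimony, can be sketched with a tiny Naive Bayes text classifier. The training snippets below are invented for illustration; they are not from the 300,000 court cases.

```python
import math
from collections import Counter

# Illustrative sketch only: a Laplace-smoothed multinomial Naive Bayes
# bag-of-words classifier over invented example snippets. A production
# system would train on real court testimony at far larger scale.

TRAIN = [
    ("he made repeated unwelcome sexual comments at work", "harassment"),
    ("supervisor demanded a date and threatened my job", "harassment"),
    ("unwanted touching despite my objections", "harassment"),
    ("we disagreed about the project deadline", "other"),
    ("my manager criticized my report in a meeting", "other"),
    ("a colleague forgot to reply to my email", "other"),
]

def tokenize(text):
    return text.lower().split()

# Tally word frequencies per class and documents per class.
word_counts = {"harassment": Counter(), "other": Counter()}
doc_counts = Counter()
vocab = set()
for text, label in TRAIN:
    doc_counts[label] += 1
    for w in tokenize(text):
        word_counts[label][w] += 1
        vocab.add(w)

def classify(text):
    """Pick the class with the highest log-probability under the
    smoothed Naive Bayes model."""
    scores = {}
    for label in word_counts:
        total = sum(word_counts[label].values())
        score = math.log(doc_counts[label] / len(TRAIN))
        for w in tokenize(text):
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("unwelcome sexual comments from my supervisor"))
```

A classifier like this only flags statistical similarity to past cases; as with Botler.ai's bot, the output would feed an incident report for a human to review, not a legal conclusion.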

This is Botler.ai’s second product, following a bot made last year to help people navigate the Canadian immigration system.

Yoshua Bengio of MILA is an advisor to the startup.

Next in AI in Canada series: AI in Edmonton

 

  • By John P. Desmond, AI Trends Editor

 

With its Academics, Culture of Collaboration, Access to Capital, Concern with Social Impact, Montreal Poised to be AI Startup Hotbed

With its confluence of academics, international accessibility, culture of collaboration, many startups and access to capital, Montreal may be poised to become the next Silicon Valley. This might be especially true given the current American political climate hostile to the international cooperation on which research institutions and technology companies thrive.

Montreal is benefitting today from a long-term commitment by the Canadian government to fund AI research.

“Canada has supported the fundamental basics of AI by financing Bengio (Yoshua Bengio, University of Montreal and MILA), LeCun (Yann LeCun, VP and Chief AI Scientist, Facebook) and Geoff Hinton (University of Toronto and Google) over 25 years, back to when AI was not as strong a bet,” said Chris Arsenault, General Partner, iNovia Capital, Montreal, in an interview with AI Trends. “That’s why Canada is in such a great position right now.”

These scientists are a big draw for Canada, attracting students as well as the many big technology companies that have opened research labs in the country, especially in Montreal and Toronto. These include: the IBM AI Lab; the Facebook AI Research center (FAIR); the Google AI Lab; Microsoft (which acquired Maluuba in January 2017); Tencent, via an investment in Element.ai; Intel, also via Element.ai; the Google DeepMind center; the Samsung AI Center; the Thales Centre of Research & Tech in AI; the RBC (Royal Bank of Canada) Borealis AI center; Uber AI; the ADM AI lab (opening soon); NVIDIA; Sun Life; Adobe; LG; Fujitsu; and TD (Toronto-Dominion Bank)/Layer 6.

(See coverage in AI Trends of AI Innovation in Canada:  Canada’s AI Initiative Brings Together Government, Academia, Industry In Quest to Expand National Economy; and AI Ecosystem in Toronto a Model: Region’s AI Talent Attracting Support From Investors, Major Players.)

“We are just starting to see the fruits of the results of all this research in the form of companies with business models and platforms incorporating AI,” Arsenault said. Advances in chip design and availability of compute power via the cloud are also enabling the rush. “This was not possible five or 10 years ago,” Arsenault added.

Companies Finding AI Talent in Montreal

A chief attraction for companies pursuing AI research and commercialization is access to top talent centered around the universities, in particular McGill University and the University of Montreal, home of the Montreal Institute for Learning Algorithms (MILA), said to be one of the largest deep learning labs in the world. This is due in part to the accomplishments of Dr. Bengio, one of the world’s leading deep learning researchers. (See Executive Interview with Dr. Bengio in AI Trends.)

“Montreal has the largest concentration of deep learning academics in the world. This attracts some of the best students, postdocs, professors, researchers, engineers and entrepreneurs interested in contributing to the ongoing AI revolution,” Dr. Bengio stated.

The Canadian government’s commitment to AI is exemplified in its support for MILA. The government of Quebec recently allocated $80 million over the next five years to support its growth, and the federal government’s Pan-Canadian AI Strategy has granted MILA $44 million to support its activities.

The MILA mission is to attract and retain talent in the machine learning field; to propel advanced research in deep learning and reinforcement learning; to transfer technology by supporting private AI startups and established businesses; and to contribute to the social dialogue and the development of applications that benefit society.

The new Facebook Artificial Intelligence Research (FAIR) in Montreal will be led by McGill University professor Joelle Pineau, a member of MILA. The plan is to employ research scientists and engineers engaged in a wide range of projects, with a focus on reinforcement learning and dialog systems.

“Montreal already has an existing fantastic academic AI community, an exciting ecosystem of startups, and promising government policies to encourage AI research,” stated LeCun in a press release about the investment. “We are excited to become part of this larger community, and we look forward to engaging with the entire ecosystem and helping it continue to thrive.”

Joelle Pineau, head of the Facebook FAIR lab in Montreal and professor at McGill University

“For many years, I have seen a steady stream of talented AI researchers with Masters and PhDs from our universities move to the US to find the best research jobs,” Prof. Pineau stated in a release from McGill University. “They will now have an opportunity to do this right here in Montreal. The Montreal FAIR Lab will initially launch with ten researchers, with the aim of scaling up to more than 30 researchers in the coming year.”

Technical talent in Montreal is attracted to companies that offer a chance to publish papers and “do something good for humanity,” in the words of Patrick Poirier, chief technology officer of startup Erudite AI. “Trying to fight for talent with pure cash is a losing bet for startups in Montreal,” he told Daniel Faggella, the founder of Tech Emergence, a market research company focused on AI and machine learning, who spent 12 days visiting AI-related ventures and executives in Montreal last year and wrote an account of his conclusions.

Montreal Cost of Living, Diversity Are Strengths

The Montreal culture, lifestyle and relatively low cost of living, compared with other urban tech centers such as San Francisco and Boston, are also attractive.

One technologist who made the move from Silicon Valley to Montreal is Maxime Chevalier-Boisvert, who returned to Montreal in mid-2017 after working at Apple for 13 months, according to an account in the New York Times. She had an opportunity to work with Yoshua Bengio at MILA and could not pass it up. Her title at MILA is Architect of Imaginary Machines. While her salary was about one-third of what she made at Apple, her rent for a two-bedroom apartment in Montreal was less than a third of the monthly rent she paid for a one-bedroom apartment in Sunnyvale. “Living in Montreal is pretty good,” she stated.

The Montreal AI culture has also attracted investments from those concerned with the social impact and risks of AI. The Open Philanthropy Project in July 2017 awarded $2.4 million to MILA to support “technical research on potential risks from advanced AI,” stated the announcement from OPP, which has a focus area on Global Catastrophic Risks that includes advanced AI. The OPP’s two primary aims are to increase high-quality research on the safety of AI, and the number of people knowledgeable about both machine learning and the potential risks of AI.

Montreal’s diversity of culture is also helping to attract talent. Dr. Alexandre Le Bouthillier, founder of machine vision healthcare company Imagia, observed that most talent in Montreal’s AI community is foreign-born, with his own team coming from all over the globe. “Smart people know that talent attracts talent,” he has stated.

Montreal and Toronto benefit from a Canadian immigration strategy consistent with the country’s AI initiative. Canada launched a fast-track visa program for high-skilled workers in the summer of 2017. Today, foreign students make up 20 percent of all students at Canadian universities compared with less than five percent in the US, according to a recent account in Politico written by two University of Toronto professors, Richard Florida and Joshua Gans. Canadian immigration law also makes it easier for foreign students to remain in Canada after they graduate.

Since the election of Donald Trump as US president in November 2016, applications to Canadian universities have spiked upward. International student applications jumped 70 percent in the fall of 2017 compared to the previous year; applications to McGill University in Montreal jumped 30 percent; and those to the University of British Columbia in Vancouver increased by 25 percent, according to the authors.

Canadian Prime Minister Justin Trudeau views immigrants as contributing to the growth of the Canadian economy, particularly in areas of technical innovation. “People choosing to move to a new place are self-selected to be ambitious, forward-thinking, brave and builders of a better future,” he stated in a recent account in TechCrunch. “For someone who chooses to do this to ensure their kids have a good life is a big step.” The Canadian perspective on innovation is helping to attract talent not only for the opportunity to conduct technical research but also to study “the consequences of AI, the consequences of automation,” Trudeau stated.

French culture has a big impact on Montreal, extending beyond the delis and coffee shops and into business life. Many of the larger businesses operate primarily in French, as do many of the top universities, including the University of Montreal.

Montreal Attracting Investment Capital

The ability of Montreal’s universities and startups to attract capital from tech giants and investors has helped to cement its position. The $102 million Series A round raised by Montreal-based platform and incubator Element AI in June 2017 was a tipping point. The firm’s mission is to lower the barrier to entry for commercial applications in AI by offering AI talent and resources to companies that need to supplement their own staffs.

The round was led by Data Collective, which backs entrepreneurs applying deep learning technologies to transform giant industries, and included as partners Microsoft Ventures and NVIDIA. The Series A round came six months after Element AI announced a seed round from Microsoft Ventures (for an undisclosed amount) and eight months after the company launched.

The firm’s approach is to build an “incubator” or “safe space” where companies that might sometimes compete sit alongside each other and collaborate to build new products. Some believe this may be an industry first. Data Collective sees an opportunity to close the gap between the AI haves and have-nots.

“There is not a lot left in the middle,” Data Collective managing partner Matt Ocko told TechCrunch. “The issue with corporations, governments and others trapped in that no man’s land of AI ‘have-nots’ is that their rivals with superior AI-powered decision making and signal processing will dominate global markets.”

Element AI foresees initial product pickup in areas including predictive modeling, forecasting models for small data sets, conversational AI and natural language processing, aggregation techniques based on machine learning, reinforcement learning for physics-based motion control, statistical machine learning algorithms, voice recognition, fluid simulation and consumer engagement optimization.

Element AI is not yet discussing customer engagements in depth, a spokesman told AI Trends, but it has signed up as customers the Port of Montreal, Radio-Canada (the Canadian media company) and the Canadian Space Agency. According to a recent article in Fortune, the company sees an opportunity to embed itself in large organizations that may use Google for email and Amazon for web services, but are reluctant to give those companies access to internal databases with company-sensitive information. Element AI sees an opportunity to position itself as a more ethical AI company than those involved with military contracts and election influencers.

The future looks good for AI innovation out of Montreal. Karam Thomas, founder and CEO of CognitiveChem, a company leveraging AI to help chemists develop safer chemicals, stated, “Montreal’s unique advantage lies in its collaborative research between academia, startups and corporations.” Montreal’s AI boosters are hoping that collaboration will spur more entrepreneurs to build sizable new companies.

Next: A look at AI startups in Montreal.

 

  • By John P. Desmond, AI Trends Editor

Ultrasonic Harm and AI Self-Driving Cars


By Lance Eliot, the AI Trends Insider

The law of unintended consequences is going to impact AI self-driving cars. You can bet on it. Actually, as a society, we’re likely mainly interested in the “adverse” unintended consequences side of that natural law, since there are bound to be lots of otherwise “favorable” unintended consequences – the favorable benefits we can all readily live with. It’s the adverse ones that pose potential concern and could lead to harm.

You might recall that in the 1990s came the advent of passenger-side airbags on cars, which everyone at first thought would be a great safety add-on. Few cars had them initially, and only the more expensive new models were outfitted with them. Gradually, the cost dropped and most of the auto makers included passenger-side airbags in their basic models. So far, so good.

But what began to emerge were reports of small children being harmed when the passenger-side airbags deployed. This was because small children, and in particular babies in their special car seats, were not the occupants the airbag was designed to save. Those airbags were intended to protect someone larger and older, such as teenagers and adults. Unfortunately, they were at times harmful to the youngest occupants. It became recommended to place a baby’s car seat in the backseat of the car, thus avoiding harm from a passenger-side airbag that might deploy in an accident. This, though, led to parents forgetting that their baby was in the backseat of the car and produced hot-car deaths, another adverse unintended consequence.

The sad but telling point to the story is that something that was supposed to be good turned out to have unintended consequences. In this case, I’ve focused on the adverse unintended consequences. As a society, we need to determine whether the adverse unintended consequences are so bad that they cause us to rethink whether the innovation should be continued. Before an innovation is unleashed onto the world, presumably someone is calculating the risks versus rewards to ascertain that the ROI or rewards exceed the risks, but this is usually done only with respect to the intended consequences. Often, the unintended consequences are unforeseen. Once those unintended consequences are encountered, we need to rebalance the equation to include both the adverse unintended consequences and the favorable unintended consequences.

Thus, this:

  •         Initial ROI calculation: Risk versus rewards of Favorable intended consequences + Adverse intended consequences
  •         Emergent ROI calculation: Risk versus rewards of Favorable unintended consequences + Adverse unintended consequences
  •         Full ROI calculation: Risk versus rewards of Favorable intended consequences + Adverse intended consequences + Risk versus rewards of Favorable unintended consequences + Adverse unintended consequences
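The rebalancing described above can be sketched in a few lines of code. This is purely an illustration: the `roi` helper and all the numeric values are invented placeholders, not an established formula or real data.

```python
# Hypothetical sketch of the ROI rebalancing described above.
# Favorable consequences map to "rewards", adverse consequences to
# "risks"; all numbers are illustrative placeholders (arbitrary units).

def roi(rewards, risks):
    """Net value: rewards minus risks."""
    return rewards - risks

# Initial ROI calculation: intended consequences only.
intended = roi(rewards=100, risks=40)    # favorable vs. adverse intended

# Emergent ROI calculation: unintended consequences, discovered later.
unintended = roi(rewards=10, risks=25)   # favorable vs. adverse unintended

# Full ROI calculation: both sides of the ledger combined.
full = intended + unintended

print(intended, unintended, full)        # prints: 60 -15 45
```

The point of the sketch is simply that an innovation which looks positive on intended consequences alone (60) can shrink considerably once the unintended side (-15) is folded in.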

Let’s take another example and see how it played out. In Australia, when they first mandated that bicycle riders must wear bike helmets, it was done to save lives. Research had shown that bike riders without helmets would often land on the ground at high speeds and their skulls would get damaged or cracked. Wearing a helmet seemed like a good idea. Rather than making it a voluntary act, the viewpoint was to make it mandatory. Of course, throughout the United States there are many jurisdictions that have done the same.

Pass a Helmet Law, Get Fewer Bike Riders

In the case of Australia, a follow-up study undertaken after the helmet law was first enacted discovered that many young people, such as teenagers, were no longer riding bikes at all due to the helmet law. These young people perceived that wearing a helmet made them look bad, and culturally it was considered out-of-touch to wear the helmets. But they also faced strict enforcement of the helmet law, and so they knew that if they rode their bikes without a helmet they would likely get caught and punished. So, they opted to do less bike riding. The study suggested that this led to those young people doing much less exercise and tending toward becoming physically unfit or even overweight. This is another example of an adverse unintended consequence.

The rise of electronic devices in our lives has offered many great benefits, but they have also raised some adverse consequences. Remember the argument that holding a cell phone to your ear could possibly cause cancer? This is still being debated today. Another adverse aspect involves playing games on your smartphone to the extent that you become anti-social and no longer communicate human-to-human with those around you. These adverse aspects are considered unintended consequences. In theory, nobody that designed and is selling these phones is doing so to purposely make people anti-social, nor so that they will get cancer.

Another example of the potential dangers of electronic devices might be the underlying explanation for the sicknesses that have befallen United States diplomats stationed in Cuba and in China. You might have seen in the news that U.S. diplomats in both countries began to report an unusually large number of headaches and dizziness. At times the symptoms were mild. For some of the diplomats they became debilitating. The symptoms seem to come and go for some, while others complaining about the health concerns appear to have more enduring and deeper complications, such as ongoing nausea and other incapacitating problems.

No one really knows what is causing the health issues. Could it be mass hysteria that has overtaken them? This seems highly unlikely. Could it be something they ingested, like water or food? This also has been generally ruled out. Could it be some kind of deliberate attack against them? This certainly seems like a strong possibility because of who they are and what they represent: a carved-out slice of the population whose members share a common work mission. But Cuba and China have indicated that this is nothing they have caused and that they do not know what is producing these results.

One of the latest theories is that it might be something electronically based. Maybe these diplomats are being targeted with some kind of special ray gun. The ray gun beams electro-magnetic waves at them. The intensity and prolonged nature of exposure to the rays then causes the symptoms that have been reported. It could be some new sneaky approach to “invisible” attacks against our diplomats. The U.S. State Department is investigating these matters and has not yet stated whether these are deliberate attacks, nor whether there is any kind of electronic connection to the matter.

Another similar theory is that the symptoms are indeed electronically based but perhaps accidental in their consequences. Perhaps the diplomats have been working or living in a building that happens to have an abundance of electronic sensory devices, and the ultrasonic signals that emanate from those devices are the culprit. The motion detectors surrounding them, the air-quality sensors, the automatic light switches, and so on: perhaps those in combination are producing a bombardment of signals that fall outside our hearing range and yet can impact our brains.

The prolonged exposure to these ultrasonic signals might be scrambling the neurons of the human brain and thus leading to the dizziness and headaches. Consequent symptoms like the nausea and the rest might all be attributed to the distortions to the brain. If the distortions are long lasting, it could lead to a long lasting physical manifestation of the symptoms in the rest of the human body. High frequency noise has been shown to have adverse consequences that can produce these kinds of health issues.

If you are the suspicious type of person, you might even suspect that the governments in those locations have maybe opted to purposely cause this. Perhaps they are using special ray guns that produce ultrasonic signals and they are beaming them at our diplomats. Why? Maybe to see whether it works to harm and disrupt them, and maybe as an experiment to ascertain whether it might be handy for other situations and against other potential “enemies” when needed.

If you are a less suspicious person, you might go with the explanation that maybe there was some kind of experiment, and maybe it wasn’t quite so lethal, but that it combined with other ultrasonic “exhaust” already in that location. Thus, let’s suppose there is a normal amount of ultrasonic exhaust, and you add to it with a bit more for the “experiment” and then the combined total goes over a threshold. In that sense, the experiment wasn’t purposely trying to harm, and maybe it was a listening device that was supposed to be able to listen-in on our diplomats. This is seemingly less evil in that they were indeed doing something untoward, but not in a means that was quite so dastardly.

Nobody knows right now for sure. Well, at least nobody is openly telling what it is. Could be a secretive cold war kind of fight taking place and maybe the public will never know what happened. The main “solution” right now has been to remove the diplomats from where they are working and living, and hopefully wait and see that the symptoms subside. Let’s hope that however it has occurred that there isn’t any permanent damage to them.

With the advent of ultrasonic tones throughout our daily lives, maybe all of us are gradually getting similar exposure. There are devices such as automatic door openers and smart street lights that tend to give off some amount of ultrasonic exhaust. We might all be daily exposed to these same kinds of signals. You might not be getting sufficient exposure to yet react to it. Or, you might react to it and shrug it off as some other aspect, like maybe you aren’t getting enough sleep or maybe that you bumped your head on a low doorway frame the other day.

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are examining whether the use of numerous sensory devices on a self-driving car might have unintended adverse consequences due to ultrasonic exhaust.

Let’s consider this aspect for a moment. The good news about AI self-driving cars is that they potentially can save lives and make our world into a better place. Those are some of the stated intended consequences. True self-driving cars, at Level 5, will drive entirely without human intervention; indeed the thinking is that a human driver won’t be allowed – no driving controls for a human, and instead the car is entirely and exclusively driven by the AI. Some say this means that ultimately humans won’t be allowed to drive at all, and for those that like driving a car, this seems like an adverse intended consequence.

For more about the levels of self-driving cars, see my article:  https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

What about potential unintended consequences?

Potential Unintended Consequence of Ultrasonic Exhaust

One such potential unintended consequence of self-driving cars might be that we would become exposed to ultrasonic exhaust.

I am sure we would all agree that we can classify this as an adverse unintended consequence, rather than being considered a favorable one (unless you have an evil plot to destroy mankind and figure this is a means maybe to do so; or maybe try to beam some kind of mind control at humans as they go around in their AI self-driving cars!). There really is not much dispute that there will be some amount of ultrasonic exhaust, which comes with the territory of the sensory devices on an AI self-driving car. The question arises as to how much is too much?

There’s an added twist too. One key idea is that the harm happens through a confluence of ultrasonic rays: if you have a lot of devices producing ultrasonic exhaust, in accumulation they lead to excessive amounts that are then harmful. No single everyday electronic device is likely to provide enough exposure. Well, we know that an AI self-driving car is a smorgasbord of electronic devices. You’ve got your radar devices, back and front of the self-driving car. You’ve got your sonar devices all around the self-driving car. You’ve got a LIDAR device on the self-driving car, depending upon the type of self-driving car. And so on.

For more about LIDAR, see my article: https://aitrends.com/selfdrivingcars/lidar-secret-sauce-self-driving-cars/

Therefore, by design, a self-driving car is chock full of electronic devices. This means that the opportunity for them to create a confluence is pretty high. When the self-driving car is in motion, you can bet that nearly all of those electronic sensory devices will be active. Indeed, at higher speeds they are even more active in order to detect what’s going on around the self-driving car. As it were, as a human occupant in a self-driving car, you will be in a virtual shower of ultrasonic exhaust. It will be all around you, and you won’t see it.

Sometimes we can hear the ultrasonic exhaust. It depends on the nature of your hearing and the nature of the frequencies of the ultrasonic sounds. It has been reported that some of the diplomats claim they believe they did hear tones in their ears, or sometimes a tingling sensation in their ears. Was this real? Or, is it something in hindsight that they believe because they are told that it might be an ultrasonic bombardment? Even if they did hear something, perhaps it has nothing to do with the situation at hand at all.  Again, still a mystery.

In addition to the sensory devices on the self-driving car, you are likely to have something like Siri or Alexa on-board too. The odds are that you’ll talk to and with your AI self-driving car. Take me to the ballgame, you tell your AI self-driving car. Stop at the market so I can get some beer on the way, you command. A few years ago this idea of talking to your car might have seemed space age and science fiction like. Given the popularity nowadays of speech interaction systems via our smartphones and specialized devices, I think we can all agree that it is highly likely that these speech interacting systems will be used in AI self-driving cars, and there’s nothing odd or peculiar or unusual about it.

See my article about natural language processing and in-car commands for self-driving cars: https://aitrends.com/selfdrivingcars/car-voice-commands-nlp-self-driving-cars/

This, though, means more ultrasonic sounds will be involved, yet again adding to the confluence. Furthermore, there is a potential added “adverse unintended consequence” to the use of in-car command capabilities, namely that someone nefarious can try to send sound signals to your in-car command system and take over control of your self-driving car. Experiments have shown that via high-frequency ultrasonic sounds that aren’t heard by humans, you can send commands to Siri and Alexa, and those systems will act on those commands as though they were spoken directly to those devices. This is a loophole that hopefully will ultimately be closed off.

For ways in which cyber hacking will occur to AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/catastrophic-cyber-hacking-self-driving-cars/

What else have we got inside the electronic bazaar of an AI self-driving car? Well, you’ve got your full-on entertainment system. Since we will be in our AI self-driving cars a lot, maybe around the clock, you might have big-screen TVs inside your self-driving car. You might have some kind of LED external displays on the outside of your self-driving car, doing advertising and generating you some cash from the advertisers eager to use your self-driving car to push their wares. More and more ultrasonic signals can be added to the confluence.

See my article about the predicted non-stop use of AI self-driving cars: https://aitrends.com/selfdrivingcars/non-stop-ai-self-driving-cars-truths-and-consequences/

See my article about the framework for AI self-driving cars: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

Suppose you opt to essentially live in your AI self-driving car. You go to work in it. You sleep at night in it, perhaps while it is driving you to your next destination or wherever. You use it for trips to the store. You use it for going to a vacation spot. All the time, maybe getting exposed to ultrasonic exhaust. Today, most people are only in their cars for short bursts of time. In the future, it is likely you’ll be spending extended periods of time in your AI self-driving cars.

What about your children, who will be extensively using your family AI self-driving car? They’ll be exposed to the ultrasonic exhaust too. What about other human occupants, such as if you opt to turn your AI self-driving car into a ridesharing vehicle? You rent it out for use, hoping to make some money and cover the cost and expenses of the AI self-driving car. Perhaps each of those occupants also becomes exposed to the ultrasonic exhaust.

Who will be responsible if we later discover that the amount of ultrasonic exhaust was harmful? You, the owner of the AI self-driving car? Or the auto maker that made the car? Or the tech firm that did the electronics and the AI of the self-driving car? It could be a messy legal matter to sort out. Worse still, the health harm could arise before we even knew it could happen, ending up harming a lot of people.

That’s a potential adverse unintended consequence, for sure.

Should we just wait and see how this plays out?

Hopefully, instead, we’ll all be working toward figuring it out beforehand. It really should be in the category of potential “intended” adverse consequences, rather than the unintended bucket. We are already aware of the possibility, so let’s get to it now. How much ultrasonic exhaust are we as a society willing to allow, given that the AI self-driving car has so many other societal benefits? There’s that risk versus reward equation to be dealt with.

See my article about the product liability aspects of AI self-driving cars: https://aitrends.com/selfdrivingcars/product-liability-self-driving-cars-looming-cloud-ahead/

I’d even wager that there is an additional exposure that goes beyond just your own individual AI self-driving car. Once we have lots of AI self-driving cars on our roadways, will this then allow for even greater levels of confluence? There you are, heading along on the freeway in your AI self-driving car, without a care in the world. Meanwhile, next to your car there is another AI self-driving car – in fact, you have other AI self-driving cars to your left, to your right, in front of you, and behind you. Suppose the ultrasonic exhaust spills over into your AI self-driving car?

It could be that maybe a solo AI self-driving car only produces some amount N of ultrasonic exhaust, not enough to directly harm you, but when the surrounding AI self-driving cars are producing some amount Y, the combined N plus Y is sufficient to harm those nearby. Thus, even if you study the impact of one AI self-driving car, you might be missing the bigger picture that someday they will be all around us. The confluence might only be triggered once there are enough of them on the roadways and driving relatively near to each other.
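The N-plus-Y threshold idea can be illustrated with a small sketch. The harm threshold and the exhaust figures below are entirely hypothetical placeholders, invented for illustration; no one yet knows what, if any, actual exposure limit would apply.

```python
# Illustrative sketch of the confluence effect described above: each
# car's ultrasonic exhaust alone stays below an assumed harm threshold,
# but the combined exposure from surrounding cars may exceed it.
# All values are hypothetical placeholders (arbitrary units).

HARM_THRESHOLD = 100.0  # assumed exposure limit, purely illustrative

def combined_exposure(own_exhaust, nearby_exhausts):
    """Total exposure N + Y: one's own car plus surrounding cars."""
    return own_exhaust + sum(nearby_exhausts)

solo = combined_exposure(40.0, [])                        # N alone
in_traffic = combined_exposure(40.0, [30.0, 25.0, 20.0])  # N + Y

print(solo <= HARM_THRESHOLD)        # True: a lone car stays under the limit
print(in_traffic <= HARM_THRESHOLD)  # False: the confluence crosses it
```

The sketch makes the study-design point in the text concrete: testing one car in isolation (the `solo` case) would show no problem, while the roadway full of nearby cars (the `in_traffic` case) crosses the threshold.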

One possibility for resolving this consists of lowering the amount of ultrasonic exhaust being emitted. This likely requires redesigning the electronic devices used on AI self-driving cars. No one is going to worry about a costly redesign until or unless someone shows there are dangers from the ultrasonic exhaust.

Another possibility consists of some form of shielding within the AI self-driving car, or something that surrounds the sensory devices to dampen the ultrasonic exhaust. This, though, again requires a belief that there is a potential harm and so worth the cost to devise. It also could increase the weight and size of the devices, all of which will impact the weight and size of the AI self-driving car. It might raise the cost of the AI self-driving car, making it less affordable. It might turn it into a heavy tank, impacting gas mileage or EV consumption, and so on.

We need to keep at top-of-mind that for each such solution there is a likely intended and unintended consequence.

Allow me to offer the added thought that we don’t yet know that this ultrasonic exhaust is even an issue at all. Some might contend that we don’t have any proof as yet that the ultrasonic exhaust was the culprit in the case of the Cuba and China incidents. Nor do we have any proof that an AI self-driving car might have this kind of unintended adverse consequence. I agree that it’s speculation and conjecture at this time.

It’s timely for the AI self-driving car industry to consider doing experiments and research to try to ascertain whether there is any validity to these potential concerns. A colleague the other day said to me that he was going to put a bunch of white mice into a self-driving car and have it drive them around for a week to see what happens. I realize the idea of ultrasonic exhaust might elicit these kinds of comments (he wasn’t serious; it was his way of making a joke about it). I’m not so sure that we should just laugh off the matter. I don’t want to be accused of falsely saying that the sky is falling, so please don’t mistake my remarks in that manner. I just figured I’d bring up something worthy of consideration, and try to get us to consider it beforehand, rather than after-the-fact when the damage has already been done.

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.

 

High Quality Data Key to Eliminating Bias in AI


Biases are an incurable symptom of the human decision-making process. We make assumptions, judgments and decisions on imperfect information, as our brains are wired to take the path of least resistance and draw quick conclusions, which affect us socially as well as financially.

The inherent human “negativity bias” is a byproduct of our evolution. For our survival, it was of primal importance to be able to quickly assess the danger posed by a situation, an animal or another human. However, our discerning inclinations have evolved into more pernicious biases over the years, as cultures have become enmeshed and our discrimination is exacerbated by religion, caste, social status and skin color.

Human bias and machine learning

In traditional computer programming, people hand-code a solution to a problem. With machine learning (a subset of AI), computers learn to find the solution by finding patterns in the data they are fed, ultimately, by humans. It is impossible to separate ourselves from our own human biases, and they naturally feed into the technology we create.

Examples of AI gone awry proliferate across technology products. In an unfortunate example, Google had to apologise for tagging a photo of black people as gorillas in its Photos app, which is supposed to auto-categorise photos by image recognition of their subjects (cars, planes, etc). This was caused by the heuristic known as “selection bias”. Nikon had a similar incident: when its cameras were pointed at Asian subjects and focused on their faces, they prompted the question “Is someone blinking?”

Potential biases in machine learning:
  • Interaction bias: If we are teaching a computer to learn to recognize what an object looks like, say a shoe, what we teach it to recognize is skewed by our interpretation of a shoe (men’s/women’s or sports/casual), and the algorithm will only learn and build upon that basis.

  • Latent bias: If you’re training your programme to recognize a doctor and your data sample consists of previous famous physicists, the programme will be highly skewed towards males.

  • Similarity bias: Just what it sounds like. When choosing a team, for example, we favor those most similar to us over those we view as “different”.

  • Selection bias: The data used to train the algorithm over-represents one population, making the model work better for that population at the expense of others.

Algorithms and artificial intelligence (AI) are intended to minimize the human emotion and involvement in data processing that can be skewed by human error, and many assume this sanitizes the data completely. However, any human bias or error in collecting the data fed into the algorithm will actually be amplified in the AI output.
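Selection bias in particular can be illustrated with a minimal, entirely invented sketch: a one-dimensional threshold classifier trained on data where group A supplies 90% of the examples. The learned threshold fits group A and systematically misclassifies group B, even though nothing in the algorithm itself is “prejudiced” (all numbers below are illustrative, not from any real dataset).

```python
def learn_threshold(samples):
    """Learn a 1-D threshold classifier: the midpoint of the class means."""
    pos = [x for x, label, _ in samples if label == 1]
    neg = [x for x, label, _ in samples if label == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(samples, threshold, group):
    """Fraction of one group's samples the threshold classifies correctly."""
    hits = [(x > threshold) == (label == 1)
            for x, label, g in samples if g == group]
    return sum(hits) / len(hits)

# 90% of the training data comes from group A, 10% from group B,
# and the two groups express the same label at different feature values.
train = ([(2.0, 1, "A")] * 45 + [(0.0, 0, "A")] * 45 +
         [(0.6, 1, "B")] * 5 + [(-0.6, 0, "B")] * 5)

t = learn_threshold(train)        # ~0.9, dominated by group A's values
print(accuracy(train, t, "A"))    # 1.0: perfect on the majority group
print(accuracy(train, t, "B"))    # 0.5: group B's positives fall below t
```

The bias enters through the data collection (who was sampled), not the learning rule, which is exactly why a “neutral” algorithm still produces skewed output.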

Gender bias in Fintech

Every industry has its own gender and race skews, and the technology industry, like the financial industry, is dominated by white males. Silicon Valley has earned the reputation of a “Brotopia” due to its boys’ club culture.

Cambridge Innovation Institute Announces the Acquisition of the AI World Conference & Expo and AI Trends

CAMBRIDGE INNOVATION INSTITUTE ANNOUNCES THE ACQUISITION OF
THE AI WORLD CONFERENCE & EXPO AND AI TRENDS

The Leading Artificial Intelligence Conference and Publication to Join
Cambridge Innovation Institute’s Extensive Portfolio

NEEDHAM, MA— (April 18, 2018) – Cambridge Innovation Institute (CII), announces the acquisition of Westborough, Massachusetts based Trends Equity, Inc., which includes the properties of the annual AI World Conference & Expo and AI Trends media offerings.

Phillips Kuhl, President, Cambridge Innovation Institute, says, “CII currently has an extensive product portfolio that spans the life science and rechargeable battery industries, featuring conferences, training seminars and publications. The addition of both the AI World Conference & Expo and AI Trends publication aligns with our current strategy, but also allows us to expand our reach to additional verticals such as financial, manufacturing and robotics.  I am very excited about the opportunities in front of us, and the fast-moving world of AI.”

The AI World Conference & Expo, Accelerating Innovation in the Enterprise, is taking place December 3-5 at the World Trade Center in Boston, MA and will feature over 70 sessions, 150+ speakers and 2,700+ attendees. Event sponsors include: Veritone, DELL EMC, KOGENTiX, Nuance, WorkFusion, DataRobot, PureStorage, ALEGION, Interactions, IDC, Splunk, UIPath, Fortune and MIT Sloan Management Review.

AI Trends, launched in January 2016, is the leading industry media channel focused on the business and technology of enterprise AI, with more than 8,700 subscribers.

Eliot Weinman, Founder of AI World and AI Trends, says, “We are extremely pleased to join CII.  After only two years, AI World has become the largest independently produced business AI event in the U.S.  The world class event and publishing team at CII will enable AI World to continue to accelerate the growth of both AI World and AI Trends, and enable us to rapidly expand into key vertical markets such as Healthcare and Pharma.”

All Trends Equity, Inc. employees have been retained, and the Westborough, Massachusetts office will remain open for the interim.

During the acquisition, Cambridge Innovation Institute was represented in the transaction by John McGovern of Grimes, McGovern & Associates of New York.

About Cambridge Innovation Institute (www.CambridgeInnovationInstitute.com)

A vision since 1992: Cambridge Innovation Institute (CII) delivers cutting edge information through events, publishing, and training to leading commercial, academic, government and research institutes across the life science and energy industries. Cambridge Innovation Institute consists of two business areas: our coverage of advances in life sciences under the well-established Cambridge Healthtech Institute (CHI) brand, and coverage of rechargeable batteries under the newly established Cambridge EnerTech (CET) brand. We focus on high technology fields where research and development are essential for the advancement of innovation.

About AI World (https://aiworld.com)

AI World Conference & Expo is focused on the state of the practice of artificial intelligence in the enterprise. The 3-day conference and expo is designed for business and technology executives who want to learn about innovative implementations of AI in the enterprise.

AI Trends (https://aitrends.com)

AI Trends is the leading industry media channel focused on the business and technology of enterprise AI. It is designed for business executives wishing to keep track of the major industry business trends, technologies and solutions that can help them keep in front of the fast-moving world of AI and gain competitive advantage.

Contact:
Lisa Scimemi
Corporate MARCOM Director
lscimemi@cambridgeinnovationinstitute.com

Here Are 14 Amazing Facts About Alibaba’s Co-Founder Jack Ma

Jack Ma is one of the richest people in China, and his way to the top has been a long and tough journey.

A business magnate and philanthropist, Jack Ma is the cofounder of Alibaba, a conglomerate that’s focused on technology, artificial intelligence, retail, e-commerce, and the internet. If that leaves a lot to the imagination, think something along the lines of a combination of eBay and Amazon.

Something as impressive as starting a company and turning it into one as big as Alibaba naturally sparks the interest of many. A closer look at Ma’s history, and a few tidbits about him, could be just the thing for understanding how it all came to be.

As of this writing, he’s behind only Tencent Holdings’ CEO and chairman Ma Huateng, making him the second-richest person in China, according to Forbes. On the international stage, he holds the 20th spot.

Beyond his home country of China, Ma may not yet be a household name like Steve Jobs, Bill Gates, Warren Buffett, Mark Zuckerberg, or Jeff Bezos. However, that has been changing as his accomplishments have made their way onto the world stage.

For starters, Alibaba’s shares opened at $92.70 apiece in what was the biggest initial public offering, or IPO, in the history of the United States.

Now to whet the appetite: his real name is Ma Yun. That’s just one of the many amazing facts about Ma, and there’s a lot more to find out.

He Started Out As An English Teacher, Earning $12 To $15 Per Month

After he graduated from Hangzhou Teachers University – now known as Hangzhou Normal University – with a bachelor’s degree in English in 1988, Ma was the only one chosen out of 500 students to be a university teacher.

It was a stroke of good luck, and probably an honor. During his stint as an English teacher, he was earning between 100 and 120 renminbi a month, which at the time was equivalent to roughly $12 to $15.

He spent a total of five years teaching before he moved on to other things, including but not limited to starting his own businesses.

He Learned English By Giving Visitors Tours Free Of Charge For 8 Years

When he was 12 years old, Ma had a strong desire to learn the English language.

To do that, he would give foreigners tours for free, riding his bicycle during the early hours of each morning to a hotel in Hangzhou that’s at least 40 minutes away.

He would then improve his English by conversing with the visitors as they went through the tours. Not only that, but he also learned “Western people’s system, ways, methods and techniques.” In the process, he developed a globalized view, which was in conflict with what his teachers and studies had taught him.

This went on for eight years.

He Flunked His University Application To Hangzhou Teachers’ University Not Once, But Twice

With billionaires such as Gates and Zuckerberg dropping out of Harvard University, it’s easy to mistake Ma as following in their footsteps.

The thing is, his story didn’t go like that at all. He flunked his university admission exam at Hangzhou Teachers University two times. In an interview with Inc., he even said that the university could be considered the worst in the city.

What’s more, he wasn’t really a good student to begin with.

“I failed a key primary school test two times, I failed the middle school test three times, I failed the college entrance exam two times and when I graduated, I was rejected for most jobs I applied for out of college,” he said.

Needless to say, Ma didn’t let his failures stop him or slow him down on his way to success.

He Was Rejected By Harvard 10 Times

During the World Economic Forum 2015, Ma revealed that he was rejected by Harvard University.

For most people, one rejection is enough to stop them, but that wasn’t the case for Ma. He applied to the university over and over again, and was rejected a total of 10 times.

“I applied for Harvard ten times, got rejected ten times and I told myself that ‘Someday I should go teach there,’” he said.

In 2002, he gave a speech at Harvard, where the CEO of a foreign company called him a “mad man” for his way of managing Alibaba; the CEO changed his mind after Ma invited him for a three-day stay at his business.

Ma earned his Master of Business Administration degree or MBA from Cheung Kong Graduate School of Business.

He Was Rejected For A Job Application At KFC

Right after leaving behind his five-year career as an English teacher in 1995, Ma started his search for other opportunities.

One of the 30 jobs he set his eyes on was at a local KFC branch in Hangzhou, but the fast-food restaurant turned him down. To add insult to injury, he was the only one of 24 candidates who was rejected; the other 23 applicants all got in.

After that, he went on to pursue his own business, which was a small-time translation and interpretation company.

Read the source article in TechTimes.