Entrepreneurs Taking on Bias in Artificial Intelligence

Whether it’s a navigation app such as Waze, a music recommendation service such as Pandora or a digital assistant such as Siri, odds are you’ve used artificial intelligence in your everyday life.

“Today 85 percent of Americans use AI every day,” says Tess Posner, CEO of AI4ALL.

AI has also been touted as the new must-have for business, for everything from customer service to marketing to IT. However, for all its usefulness, AI also has a dark side. In many cases, the algorithms are biased.

Some of the examples of bias are blatant, such as Google’s facial recognition tool tagging black faces as gorillas or an algorithm used by law enforcement to predict recidivism disproportionately flagging people of color. Others are more subtle. When Beauty.AI held an online contest judged by an algorithm, the vast majority of “winners” were light-skinned. Search Google for images of “unprofessional hair” and the results you see will mostly be pictures of black women (even searching for “man” or “woman” brings back images of mostly white individuals).

While more light has been shined on the problem recently, some feel it’s not an issue addressed enough in the broader tech community, let alone in research at universities or the government and law enforcement agencies that implement AI.

“Fundamentally, bias, if not addressed, becomes the Achilles’ heel that eventually kills artificial intelligence,” says Chad Steelberg, CEO of Veritone. “You can’t have machines where their perception and recommendation of the world is skewed in a way that makes its decision process a non-sequitur from action. From just a basic economic perspective and a belief that you want AI to be a powerful component to the future, you have to solve this problem.”

As artificial intelligence becomes ever more pervasive in our everyday lives, there is now a small but growing community of entrepreneurs, data scientists and researchers working to tackle the issue of bias in AI. I spoke to a few of them to learn more about the ongoing challenges and possible solutions.

Cathy O’Neil, founder of O’Neil Risk Consulting & Algorithmic Auditing

Solution: Algorithm auditing

Back in the early 2010s, Cathy O’Neil was working as a data scientist in advertising technology, building algorithms that determined what ads users saw as they surfed the web. The inputs for the algorithms included innocuous-seeming information like what search terms someone used or what kind of computer they owned.

However, O’Neil came to realize that she was actually creating demographic profiles of users. Although gender and race were not explicit inputs, O’Neil’s algorithms were discriminating against users of certain backgrounds, based on the other cues.

As O’Neil began talking to colleagues in other industries, she found this to be fairly standard practice. These biased algorithms weren’t just deciding what ads a user saw, but arguably more consequential decisions, such as who got hired or whether someone would be approved for a credit card. (These observations have since been studied and confirmed by O’Neil and others.)

What’s more, in some industries — for example, housing — if a human were to make decisions based on the specific set of criteria, it likely would be illegal due to anti-discrimination laws. But, because an algorithm was deciding, and gender and race were not explicitly the factors, it was assumed the decision was impartial.
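To make the proxy problem concrete, here is a minimal illustrative sketch of the kind of disparate-impact check an algorithm audit might run. The column names, toy data and the four-fifths threshold are assumptions for illustration only, not ORCAA's actual methodology.

```python
# Illustrative sketch only: NOT ORCAA's methodology. A toy disparate-impact
# check of the kind an algorithm audit might run on a model's decisions.
import pandas as pd


def approval_rates_by_group(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of favorable outcomes (1 = approved) within each group."""
    return df.groupby(group_col)[outcome_col].mean()


def disparate_impact_ratio(rates: pd.Series) -> float:
    """Lowest group rate divided by highest; below 0.8 (the 'four-fifths rule') is a red flag."""
    return rates.min() / rates.max()


if __name__ == "__main__":
    # Toy data: the model never saw 'group' directly, yet outcomes differ
    # because correlated proxies (search terms, device type, zip code) stood in for it.
    decisions = pd.DataFrame({
        "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
        "approved": [1,   1,   1,   0,   1,   0,   0,   0],
    })
    rates = approval_rates_by_group(decisions, "group", "approved")
    print(rates)                                                      # A: 0.75, B: 0.25
    print("Disparate impact ratio:", disparate_impact_ratio(rates))  # ~0.33
```

A real audit would go much further, examining which inputs act as proxies, how error rates differ across groups, and what consequences the decisions have downstream.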

“I had left the finance [world] because I wanted to do better than take advantage of a system just because I could,” O’Neil says. “I’d entered data science thinking that it was less like that. I realized it was just taking advantage in a similar way to the way finance had been doing it. Yet, people were still thinking that everything was great back in 2012. That they were making the world a better place.”

O’Neil walked away from her adtech job. She wrote a book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, about the perils of letting algorithms run the world, and started consulting.

Eventually, she settled on a niche: auditing algorithms.

“I have to admit that it wasn’t until maybe 2014 or 2015 that I realized this is also a business opportunity,” O’Neil says.

Right before the election in 2016, that realization led her to found O’Neil Risk Consulting & Algorithmic Auditing (ORCAA).

“I started it because I realized that even if people wanted to stop that unfair or discriminatory practices then they wouldn’t actually know how to do it,” O’Neil says. “I didn’t actually know. I didn’t have good advice to give them.” But, she wanted to figure it out.

So, what does it mean to audit an algorithm?

Here are 8 Myths About AI in the Workplace Debunked – With Infographic

By Jeff Desjardins, The Visual Capitalist

The interplay between technology and work has always been a hot topic.

While technology has typically created more jobs than it has destroyed on a historical basis, this context rarely stops people from believing that things are “different” this time around.

In this case, it’s the potential impact of artificial intelligence (AI) that is being hotly debated by the media and expert commentators. Although there is no doubt that AI will be a transformative force in business, the recent attention on the subject has also led to many common misconceptions about the technology and its anticipated effects.

DISPROVING COMMON MYTHS ABOUT AI

Today’s infographic comes to us from Raconteur and it helps paint a clearer picture about the nature of AI, while attempting to debunk various myths about AI in the workplace.

AI is going to be a seismic shift in business – and it’s expected to create a $15.7 trillion economic impact globally by 2030.

But understandably, monumental shifts like this tend to make people nervous, resulting in many unanswered questions and misconceptions about the technology and what it will do in the workplace.

DEMYSTIFYING MYTHS

Here are the eight myths about AI, debunked:

1. Automation will completely displace employees
Truth: 70% of employers see AI as supporting humans in completing business processes. Meanwhile, only 11% of employers believe that automation will take over the work found in jobs and business processes to a “great extent”.

2. Companies are primarily interested in cutting costs with AI
Truth: 84% of employers see AI as a way to obtain or sustain a competitive advantage, and 75% see AI as a way to enter new business areas. 63% cite pressure to reduce costs as a reason to use AI.

3. AI, machine learning, and deep learning are the same thing 
Truth: AI is a broader term, while machine learning is a subset of AI that enables “intelligence” by using training algorithms and data. Deep learning is an even narrower subset of machine learning inspired by the interconnected neurons of the brain.

4. Automation will eradicate more jobs than it creates 
Truth: At least according to one recent study by Gartner, there will be 1.8 million jobs lost to AI by 2020 and 2.3 million jobs created. How this shakes out in the longer term is much more debatable.

5. Robots and AI are the same thing
Truth: Even though there is a tendency to link AI and robots, most AI actually works in the background and is unseen (think Amazon product recommendations). Robots, meanwhile, can be “dumb” and just automate simple physical processes.

6. AI won’t affect my industry 
Truth: AI is expected to have a significant impact on almost every industry in the next five years.

7. Companies implementing AI don’t care about workers
Truth: 65% of companies pursuing AI are also investing in the reskilling of current employees.

8. High productivity equals higher profits and less employment
Truth: AI and automation will increase productivity, but this could also translate to lower prices, higher wages, higher demand, and employment growth.

Read the source article at The Visual Capitalist.

Catalia Health Tries Free Interactive Robots for In-Home Patient Care

A little more than three-and-a-half years ago, Cory Kidd founded Catalia Health based on the work he did at the MIT Media Lab and Boston University Medical Center.

Headquartered in San Francisco, the company’s overarching goal is to improve patient engagement and drive behavior change. But the way it goes about meeting that mission is unique.

Through Catalia Health’s model, each patient is equipped with an interactive robot to put in their home. Named Mabu, the robot learns about each patient and their needs, including medications and treatment circumstances.

Mabu can then have tailored conversations with a patient about their routine and how they’re feeling. The information from those talks securely goes back to the patient’s pharmacist or healthcare provider, giving them an update on the individual’s progress and alerting them if something goes wrong.
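To illustrate how such check-in data might translate into an alert, here is a minimal hypothetical sketch. The fields, threshold and escalation logic are assumptions for illustration, not Catalia Health's actual system.

```python
# Illustrative sketch only: NOT Catalia Health's actual logic. A toy rule for
# deciding when a robot check-in should alert the care team.
from dataclasses import dataclass


@dataclass
class CheckIn:
    patient_id: str
    took_medication: bool
    reported_symptom_score: int  # hypothetical 0 (fine) to 10 (severe) self-report


def needs_clinician_alert(checkin: CheckIn, symptom_threshold: int = 7) -> bool:
    """Escalate if the patient skipped medication or reports severe symptoms."""
    return (not checkin.took_medication) or (checkin.reported_symptom_score >= symptom_threshold)


today = CheckIn(patient_id="demo-001", took_medication=False, reported_symptom_score=3)
if needs_clinician_alert(today):
    print(f"Alert the care team about patient {today.patient_id}")
```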

Right now, the company is focused on bringing Mabu to patients with congestive heart failure. It is currently working with Kaiser Permanente on that front. But Catalia Health is also doing work on other disease states, such as rheumatoid arthritis and late-stage kidney cancer.

“We’re not replacing a person,” Kidd, the startup’s CEO, said in a recent phone interview. “[Providers have] the ability now to have a lot more insight on all their patients on a much more frequent basis.”

Why use a robot as a means to gather such insight?

Kidd explained: “We get intuitively that face-to-face [interaction] makes a difference. Psychologically, we know what that difference is: We create a stronger relationship and we find the person to be more credible. The robot can literally look someone in the eyes, and we get the psychological effects of face-to-face interaction.”

The robot — and face-to-face interaction — helps keep patients engaged over a long period of time, Kidd added.

As for its business model, Catalia Health works directly with pharma companies and health systems. These organizations pay the startup on a per patient, per month basis. The patient using Mabu doesn’t have to pay.

The company is also currently offering interested heart failure patients a free trial of Mabu. The patient simply has to give Catalia feedback on their experience.

“That’s ongoing and very active right now,” Kidd said of the free trial effort.

In late 2017, the company closed a $4 million seed round, following two previous funding rounds amounting to more than $7.7 million. Ion Pacific led the $4 million round. Khosla Ventures, NewGen Ventures, Abstract Ventures and Tony Ling also participated.

Read the source article at MedCityNews.

Addressing AI and the Emerging Crisis of Trust

By Doug Bordonaro, Chief Data Evangelist at ThoughtSpot

Earlier this month, a newspaper in Ohio invited its Facebook followers to read the Declaration of Independence, which it posted in 12 bite-sized chunks in the days leading up to July 4. The first nine snippets posted fine, but the 10th was held up after Facebook flagged the post as “hate speech.” Apparently, the company’s algorithms didn’t appreciate Thomas Jefferson’s use of the term “Indian Savages.”

It’s a small incident, but it highlights a larger point about the use of artificial intelligence and machine learning. Besides being used to filter content, these technologies are making their way into all aspects of life, from self-driving cars to medical diagnoses and even prison sentencing. It doesn’t matter how well the technology works on paper: if people don’t have confidence that AI is trustworthy and effective, it will not be able to flourish.

The issue boils down to trust, and it goes beyond just the technology itself. If people are to accept AI in their homes, their jobs and other areas of life, they also need to trust the businesses that develop and commercialize artificial intelligence to do the right thing, and here too there are challenges. Last month, Google faced down a minor revolt by thousands of employees over a contract with the U.S. military that provides its AI for use in drone warfare. Microsoft and Amazon faced a similar worker uprising, over use of their facial recognition technologies by law enforcement.

We humans are generally skeptical of things we don’t understand, and AI is no exception. Presented with a list of popular AI services, 41.5 percent of respondents in a recent survey could not cite a single example of AI that they trust. Self-driving cars also incite wariness: Just 20 percent of people said they would feel safe in a self-driving car, even though computers are less likely to make errors than people.

The industry needs to address these challenges if we’re all to enjoy the value and benefits that AI can bring. To do so, it helps to start by looking at the ways trust intersects with AI and then consider ways to address each.

Trust in businesses. Consumers need confidence that early adopters of AI, notably technology giants like Google and Facebook, will apply AI in ways that benefit the greater good. People don’t inherently trust corporations — a recent Salesforce study found that 54 percent of customers don’t believe companies have their best interests at heart. Businesses need to earn that trust by applying AI wisely and judiciously. That means not making clumsy mistakes like telling a family their teen daughter is pregnant before she’s broken the news herself.

Trust in third parties. Consumers also need confidence that a company’s partners will use AI appropriately. AI and machine learning require massive amounts of data to function. The more data, and the greater the variety of data, available to these systems, the more nuanced and novel the use cases they can enable. While many businesses share personal data with third parties for marketing and other purposes, incidents like the Cambridge Analytica fiasco create a backlash that makes people less willing to entrust their data to businesses. Failing to build trust between data collectors and those that eventually use that data will dramatically hinder AI’s long-term potential.

Trust in people. For all its potential to automate tasks and make smarter decisions, AI is programmed and controlled by humans. People build the models and write the algorithms that allow AI to do its work. Consumers must feel confident these decisions are being made by professionals who have their users’ interests at heart. AI can also be a powerful tool for criminals, and developers of the technology need to be accountable for how it is used.

Trust in the technology. AI’s “black box problem” makes people skeptical of results because, very often, no one really knows how they were arrived at. This opens the technology to charges of bias in important areas like criminal sentencing. The black box problem can also inhibit adoption of AI tools in business. Employees are asked to devote time and resources to recommendations made by machines, and they won’t do so unless they have confidence in the recommendations being made.

These challenges aren’t stifling AI’s development significantly today, but they will if they are not addressed. Issues of public trust may also determine which businesses succeed with AI and which do not. We need to nip this issue in the bud. It does no good to blame consumers and employees for not understanding AI or being skeptical of its applications. The industry has a duty to itself and the public to build confidence in AI if it’s to fulfil its promise. Here are some ways it can achieve this:

Standards and Principles. To address its employee uprising, Google published a list of AI principles that included a pledge to use the technology only in ways that are “socially beneficial.” Rather than every business doing the same, the industry should agree to a set of standards and principles that guide its use of artificial intelligence. The nonprofit OpenAI consortium is addressing concerns about AI safety; it should expand its mandate to encompass public trust in AI more broadly.

Transparent usage. GDPR disclosures may help build trust among consumers about how their data will be used, and companies should be equally transparent about how they use AI and machine learning. Consumers should be able to answer questions like: What data is being captured to use in an AI or ML system, what kind of applications are they running using my data, and when am I interacting with an AI or ML system?  If a system fails, we need to be candid not only about what caused the problem, but how it will be addressed in future.
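As a purely illustrative example of what such transparency could look like in practice, here is a minimal machine-readable disclosure record answering those questions. The field names and values are assumptions, not a format prescribed by GDPR or used by any particular company.

```python
# Illustrative sketch only: a hypothetical machine-readable disclosure record.
# Field names and values are made up; GDPR does not prescribe this format.
import json

disclosure = {
    "data_captured": ["page views", "search queries", "device type"],
    "used_to_train_ai": True,
    "purposes": ["content recommendation", "ad targeting"],
    "user_is_interacting_with_ai": True,
    "incident_contact": "privacy@example.com",  # hypothetical address
}
print(json.dumps(disclosure, indent=2))
```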

Read the source article at insideBIGDATA.

AI Robot, Immune to Moral Factors, Helping to Make China’s Foreign Policy

Attention, foreign-policy makers. You will soon be working with, or competing against, a new type of robot with the potential to change the game of international politics forever.

Diplomacy is similar to a strategic board game. A country makes a move, the other(s) respond. All want to win.

Artificial intelligence is good at board games. To prepare, the system analyses previous play, learns lessons from defeats, or even repeatedly plays against itself to devise strategies that humans have never thought of before.

It has defeated world champions in chess and Go. More recently, it has won at no-limit Texas Hold’em poker, an “imperfect information game” in which a player does not have access to all information at all times, a situation familiar in the world of diplomatic affairs.
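To illustrate the self-play idea in miniature, here is a toy sketch in which two copies of the same agent repeatedly play rock-paper-scissors, each best-responding to the other's observed move frequencies. Everything in it is an illustrative assumption; real game-playing and decision-support systems are vastly more sophisticated.

```python
# Illustrative sketch only: a toy self-play loop. Two copies of the same agent
# play rock-paper-scissors, each best-responding to the opponent's most
# frequent move so far. Real game-playing systems are vastly more sophisticated.
import random
from collections import Counter

ACTIONS = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}  # key beats value


def best_response(opponent_counts: Counter) -> str:
    """Play the move that beats the opponent's most frequent move so far."""
    if not opponent_counts:
        return random.choice(ACTIONS)
    most_common = opponent_counts.most_common(1)[0][0]
    return next(a for a in ACTIONS if BEATS[a] == most_common)


def self_play(rounds: int = 1000) -> Counter:
    history_a, history_b = Counter(), Counter()
    for _ in range(rounds):
        move_a = best_response(history_b)
        move_b = best_response(history_a)
        history_a[move_a] += 1
        history_b[move_b] += 1
    return history_a


if __name__ == "__main__":
    # Over many rounds no single move stays dominant; play spreads across all
    # three options as each side adapts to the other.
    print(self_play())
```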

Several prototypes of a diplomatic system using artificial intelligence are under development in China, according to researchers involved or familiar with the projects. One early-stage machine, built by the Chinese Academy of Sciences, is already being used by the Ministry of Foreign Affairs.

The ministry confirmed to the South China Morning Post that there was indeed a plan to use AI in diplomacy.

“Cutting-edge technology, including big data and artificial intelligence, is causing profound changes to the way people work and live. The applications in many industries and sectors are increasing on a daily basis,” a ministry spokesman said last month.

The ministry “will actively adapt to the trend and explore the use of emerging technology for work enhancement and improvement”.

China’s ambition to become a world leader has significantly increased the burden and challenge to its diplomats. The “Belt and Road Initiative”, for instance, involves nearly 70 countries with 65 per cent of the world’s population.

The unprecedented development strategy requires up to a US$900 billion investment each year for infrastructure construction, some in areas with high political, economic or environmental risks.

The researchers said the AI “policymaker” was a strategic decision support system, with experts stressing that it will be humans who will make any final decision.

The system studies the strategy of international politics by drawing on a large amount of data, which can contain information varying from cocktail-party gossip to images taken by spy satellites.

When a policymaker needs to make a quick, accurate decision to achieve a specific goal in a complex, urgent situation, the system can provide a range of options with recommendations for the best move, sometimes in the blink of an eye.

Dr. Feng Shuai, senior fellow with the Shanghai Institutes for International Studies, whose research focuses on AI applications, said the technology of the AI policymaking system was already attracting attention despite being in its early stages.

Several research teams were developing these systems, Feng said. A conference discussing the impact of AI on diplomacy was hosted by the University of International Business and Economics last month in Beijing, in which researchers shared some recent progress.

“Artificial intelligence systems can use scientific and technological power to read and analyse data in a way that humans can’t match,” Feng said.

“Human beings can never get rid of the interference of hormones or glucose.”

The AI policymaker, however, would be immune to passion, honour, fear or other subjective factors. “It would not even consider the moral factors that conflict with strategic goals,” Feng added.

Other nations are believed to be conducting similar research into AI uses in policymaking fields, though details are not available publicly.

But AI does have its own problems, researchers say. It requires a large amount of data, some of which may not be immediately available in certain countries or regions. It requires a clear set of goals, which are sometimes absent at the start of diplomatic interaction. A system operator could also tamper with the results by altering some parameters.

Read the source article in the South China Morning Post.

Unboxing Google’s 7 New Principles of Artificial Intelligence

By Ivan Rodriguez, founder, Geek on Record, and a software engineering manager at Microsoft

How many times have you heard that Artificial Intelligence (AI) is humanity’s biggest threat? Some people think that Google brought us a step closer to a dark future when it announced Duplex last month, a new capability of Google’s digital Assistant that enables it to make phone calls on your behalf to book appointments with small businesses.

The root of the controversy lay in the fact that the Assistant successfully pretended to be a real human, never disclosing its true identity to the other side of the call. Many tech experts wondered if this is an ethical practice or if it’s necessary to hide the digital nature of the voice.

Google was also criticized last month over another sensitive topic: the company’s involvement in a Pentagon program that uses AI to interpret video imagery and could be used to improve the targeting of drone strikes. Thousands of employees signed a letter protesting the program and asking for change:

“We believe that Google should not be in the business of war. Therefore we ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.”

A “clear policy” around AI is a bold ask because none of the big players have ever done it before, and for good reasons. It is such a new and powerful technology that it’s still unclear how many areas of our lives we will dare to infuse with it, and it’s difficult to set rules around the unknown. Google Duplex is a good example of this: a technological development that we would have considered “magical” 10 years ago, yet one that today scares many people.

Regardless, Sundar Pichai not only complied with the request, but took it a step further by creating 7 principles that the company will promote and enforce as one of the industry drivers of AI. Here are some remarks on each of them:

1. Be socially beneficial

For years, we have dealt with comfortable boundaries, creating increasingly intelligent entities in very focused areas. AI is now getting the ability to switch between different domain areas in a transparent way for the user. For example, having an AI that knows your habits at home is very convenient, especially when your home appliances are connected to the same network. When that same AI also knows your habits outside home, like your favorite restaurants, your friends, your calendar, etc., its influence in your life can become scary. It’s precisely this convenience that is pushing us out of our comfort zone.

This principle is the most important one, since it commits the technology to “respect cultural, social, and legal norms”. It’s a broad principle, but it’s intended to ease that uncomfortable feeling by adapting AI to our times and letting it evolve at the same pace as our social conventions do.

2. Avoid creating or reinforcing unfair bias

AI can become racist if we allow it. A good example of this happened in March 2016, when Microsoft unveiled Tay, an AI chatbot with a Twitter interface, and in less than a day people taught it the worst aspects of our humanity. AI learns by example, so ensuring that safeguards are in place to avoid this type of situation is critical. Our kids are going to grow up in a world increasingly assisted by AI, so we need to educate the system before it’s exposed to internet trolls and other bad actors.
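One simple illustration of such a safeguard is a gate between public input and whatever data an online-learning system is allowed to learn from. The blocklist below is a toy stand-in and an assumption of this sketch; production systems would combine trained toxicity classifiers, rate limiting and human review.

```python
# Illustrative sketch only: a toy safety gate between public input and the data
# an online-learning system may learn from. A production system would combine
# trained toxicity classifiers, rate limiting and human review, not a word list.
BLOCKLIST = {"slur1", "slur2"}  # placeholder tokens, not a real list

training_buffer: list[str] = []


def looks_abusive(message: str) -> bool:
    words = set(message.lower().split())
    return bool(words & BLOCKLIST)


def maybe_learn_from(message: str) -> None:
    """Queue a message for learning only if it passes the safety gate."""
    if looks_abusive(message):
        return  # drop it; a real system might also log it for human review
    training_buffer.append(message)


maybe_learn_from("hello bot, nice weather today")
maybe_learn_from("slur1 slur1 slur1")  # filtered out before any learning happens
print(training_buffer)
```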

3. Be built and tested for safety

This point goes hand in hand with the previous one. In fact, Microsoft’s response to the Tay fiasco was to take it down and admit an oversight in the type of scenarios the AI was tested against. Safety should always be one of the first considerations when designing an AI.

4. Be accountable to people

The biggest criticism Google Duplex received was whether or not it was ethical to mimic a real human without letting other humans know. I’m glad that this principle just states that “technologies will be subject to appropriate human direction and control”, since it doesn’t discount the possibility of building human-like AIs in the future.

An AI that makes a phone call on our behalf must sound as human as possible, since it’s the best way of ensuring a smooth interaction with the person on the other side. Human-like AIs shall be designed with respect, patience and empathy in mind, but also with human monitoring and control capabilities.

5. Incorporate privacy design principles

When the convenience created by AI intersects with our personal feelings or private data, a new concern is revealed: our personal data can be used against us. The Cambridge Analytica incident, in which personal data was shared with unauthorized third parties, magnified the problem by jeopardizing users’ trust in technology.

Google didn’t use many words on this principle, probably because it’s the most difficult one to clarify without directly impacting their business model. However, it represents the biggest tech challenge of the decade: finding the balance between giving up your privacy and getting a reasonable benefit in return. Providing “appropriate transparency and control over the use of data” is the right mitigation, but it won’t make us less uncomfortable when an AI knows the most intimate details about our lives.

Read the source post at Geek on Record.

Executive Interview: Yoshua Bengio of MILA, University of Montreal

Combining AI Research, Business Collaboration, Thoughts on Impact of AI on Society

Yoshua Bengio is among the most cited Canadian computer scientists. He is the author of two books and more than 200 publications, the most cited being in the areas of deep learning, recurrent neural networks, probabilistic learning algorithms, natural language processing and manifold learning.

He earned a PhD in Computer Science from McGill University in 1991 and worked at the Canadian Institute for Advanced Research (CIFAR) alongside Yann LeCun (now at Facebook) and Geoffrey Hinton (now at Google). He has collaborated with IBM in work on the Watson supercomputer.

His current interests are centered around a quest for AI through machine learning, and include fundamental questions on deep learning and representation learning, the geometry of generalization in high-dimensional spaces, manifold learning, biologically inspired learning algorithms, and challenging applications of statistical machine learning. He recently participated in an interview with journalists learning about Canada’s AI initiative that included Eliot Weinman, Executive Editor of AI Trends.

Q. Why is it important for the Canadian government to engage in this AI initiative?

A. AI is not just another technology. It will have a big impact on our societies, and there are many ethical and social questions associated with how AI is being deployed and how it will be deployed. If we don’t think about these considerations, the public will eventually reject advanced technologies that they see as threatening and against their well-being. So governments have to really care about these questions, whether for moral reasons or for practical reasons.

Q. What would be your AI horror scenario?  

A. I am most concerned about the use of AI in the military and security arenas. I’m sure you’ve heard about killer robots, and you may have also heard of how the technology can be used to recognize people from their facial images. So there are Big Brother scenarios that could be upon us if we’re not careful. I also have concerns related to privacy issues when we are dealing with private data. Then we have economic issues. Automation will be accelerated with AI; that may create more inequality than we already suffer. And that is at the level of people, companies and countries. To have more countries involved will create a healthier playing field.  

Q. What is the role of universities in the evolution of AI?

A. I am a professor at the University of Montreal; we have created MILA (the Montreal Institute for Learning Algorithms), which is similar to the Vector Institute (a collaboration of government and business in partnership with the University of Toronto): the two have similar goals and were both funded by the federal and provincial governments. These institutes – there’s another in Alberta (the Alberta Machine Intelligence Institute) – have been set up so they will have more agility than universities have, but they’re still academic research organizations. They also have a mandate to help the ecosystem through the startups and companies that are creating value with AI.

These institutes are in a better position to be neutral about how AI will be used and keep in mind the well-being of people, and to orient research in directions that will be good for people, and engage in the public dialogue in a credible way. I think it’s good that companies like Facebook and Google participate in that dialogue, but I’m not sure if they are neutral agents in those discussions. Universities, which care, first and foremost, about the public good, are really important agents in the discussions and in the kind of research that can be done.  

Q. What steps can government take to foster this dialogue?

A. Here in Montreal, we are creating an organization that will be focused on the social, economic and ethical questions around AI. It will sponsor research in the social sciences and humanities around AI, but also will participate in the public debate. I think we don’t have all the answers to how to do this right. Scholars and scientists need to really think through this and engage the public. We did something like this in the last six months in Montreal and in Quebec, and also in Ontario. After a forum of experts, we brought in ordinary people. We went to public libraries and places where people could comment and discuss the questions. We’re coming up with something that will be initiated by scholars and experts, and also have feedback and contributions from ordinary people. I think we have to continue in that direction.  

This observatory on AI will be in a good position to make recommendations to governments, which will be part of the mission both locally and in different countries. The questions are pretty much the same in most countries. I think there should be a global coordination about these questions. There are issues like military use which will obviously need to be international, and even questions about regulating companies, which are multinationals. It would be much better if we can agree on principles globally.  

Q. What do you see as the next evolution of the core technology that enables what we know of as AI today?

A. I’m a scientist. I don’t have a crystal ball, and I can make educated guesses like many people. But one thing is for sure: there are obstacles on our way towards smarter machines, and it’s always been like this when we make progress. We’ve achieved something important, and now we see that there are other challenges. We’ve made huge progress in industry using supervised learning, where humans have to really teach machines by telling them what to do. A lot of the current emphasis in basic research is on unsupervised learning and reinforcement learning, where the machines have to learn in a more autonomous way. And we haven’t solved that in a satisfactory way yet. It will probably take years, or even decades, to really make big breakthroughs there. But given the exponential growth of research in these areas, I’m very optimistic that things will move very swiftly.

Q. Are you concerned that the massive investments in AI today are too risky?

A. One reason why companies are investing so much, and are so optimistic, is that a lot of future wealth growth from AI doesn’t depend on new discoveries. In other words, we take what we have already scientifically, and we just make a lot of progress in the hardware. That’s going to happen. It’s moving. We will make progress in bringing together the right data. Take medical data, for example: we don’t do a good job with it yet. In lots of industries and sectors, the ingredients for applying that science are not there yet, but they will be there soon.

We have at least a decade to just reap the benefits of the science we already have. On top of that, there’s so much money being poured into research, both in industry and in academia, that it would be surprising if the science doesn’t move forward over the next decade. So it’s almost a sure gain. Now, of course, you know, commercial enterprises can fail for all kinds of reason. But at a high level, I think it’s a very safe bet.  

Q. Is China ahead in the race to be the leading AI country?

A. I don’t like to make these kinds of comparisons. Silicon Valley is a very small place. Progress can come from anywhere in the world. China does have huge advantages in this race. One of the most important ones is that it’s the biggest market in the world, and has the volumes of data that go with that. So from the point of view of investing, this is a very appealing place to do AI. In addition, there’s a huge enthusiasm for AI in China from all quarters. And lots and lots of students are jumping into this. It’s a worldwide phenomenon, but I think with all the enthusiasm behind it, China probably wins the race for now.

Q. Do you envision big companies and startups and small companies collaborating to advance AI?

A. There is room for many kinds of business models in this new world. Large companies have leadership strong enough to make the fast turns that are needed, and companies like Element AI can help with that. And big companies will be in competition with up-and-running small companies, building new products and new services which may not even exist now. New markets will be created. I’m also a big believer in the collaboration between startups and large companies. They have complementary advantages. This is important from the point of view of a country with a national strategy because the startups are more agile. They can more easily recruit people who are excited about the fast pace of development. They can recruit talent more easily.

But the large companies have the larger market where they can deploy. They have lots of cash to invest, and they have lots of data. Ideally companies, a little bit like researchers, learn to cooperate better with their strengths and weaknesses to build something stronger.  

Q. Are you concerned about the risk of jobs lost to AI automation?

A. Absolutely. The potential impact on the job market is very serious. It’s not going to happen in one day, but it will happen way too fast for our ability to handle those changes. Many people are likely to lose their jobs in the middle of their careers.

We have to rethink our social safety net. Most developed countries have a social safety net, but it’s been designed for a particular kind of economy. We will need to look into things like a universal basic income, and do more pilots. We may have to forget about our traditional values around work, such as the idea that if you don’t work, you don’t get money. And that’s only one aspect of it. We need to rethink the education system so people can be reskilled in the middle of their careers, while they are at a job.

The education system will need to train people in a way that is more appropriate for a fast-changing world, where human skills are going to be more important than they were in the past. Of course, we want to train more scientists and engineers; that’s a no-brainer. But we have to train people not for one job that’s very, very specialized, but rather how to think for themselves about how to be good citizens, and to rapidly learn the skills they need.

And we have to ask what is going to be the impact on society. Will AI be beneficial for the whole society or just a few people? I don’t have the answers, but I think it’s important to ask the questions now and not let the market figure out the answers by itself. Those answers might not be in favor of ordinary people. Governments need to think about this and, if necessary, find the right regulations.

Q. How is MILA progressing and can you describe your typical day?

A. MILA is the Montreal Institute for Learning Algorithms, a machine learning research lab with business collaboration as part of the mission. It’s growing very fast. It already has the highest concentration of deep learning researchers in academia in the world. We’re going to be doubling the number of professors over the next few years, thanks to the Canadian government.  

MILA is mostly academic in nature, a non-profit, but with the mandate to help companies, to guide them in their development of AI.

I love working at the university. It allows me to be a more neutral agent in the changes that are coming, and gives me a voice that can have an impact as we adapt to this changing world of AI. Also, I’m in a position to steer research in directions that I think are important, and to contribute to the training of the next generation. I think this is something really, really important. I just enjoy the research with all of my students, which I would lose if I went to private industry.

Learn more at MILA.

Custom DHS Facial Recognition AI to be Deployed at US/Mexico Border

The Department of Homeland Security will forge ahead with plans to implement its problematic Vehicle Face System (VFS), an AI-powered facial recognition system, at the US/Mexico border.

After years of development, the Federal government will install the VFS system in Texas at the Anzalduas border crossing. Every person driving across the border will, at that time, have a photograph taken of their face in order to cross-reference their identity with various government databases, according to documents obtained by The Verge.

The VFS system was developed to capture images of vehicle occupants through windshields. It can disregard reflections and, using depth sensors and various other sophisticated hardware and AI components, identify the occupants of a vehicle.

The government intends to roll out VFS to process images of every passenger and driver in each vehicle crossing the border, in all lanes, in both directions.

We spoke with Brian Brackeen, CEO and Founder of Kairos, a facial recognition technology company, to see what he thought of the rollout. “The recent reports of face recognition surveillance at the US-Mexico border are troubling. And highlights again the human rights implications of selling facial recognition software to governments,” he said.

Despite the fact his company makes and sells facial recognition technology, Brackeen believes that large scale surveillance is unethical and has urged companies such as Amazon to cease providing the government with technology it can use to watch us. He continues, “The DHS’s mandate is clear. However, this goes beyond protecting our borders. This is a step closer to omniscient, ‘always on’ surveillance of society. The US government has deliberately designed camera technology for the purpose of peering into vehicles, through windows, to gather facial profiles of drivers and passengers. All without their permission or knowledge. This is HIGHLY intrusive and wrong.”

It appears as though the US government is absolutely determined to deploy AI solutions that will provide it with a ubiquitous surveillance state wherein citizens have no right to privacy. Any camera that can see through a windshield can see through a window (and facial recognition through walls is a reality).

We currently live in a world where billions of people walk around with a camera in their pocket, and we’re subject to having our pictures taken whenever we’re in public. Most people are fine with this, and most of us accept that we’re being recorded in stores, on the streets, and in our places of employment. The difference here is that we know we get recorded at a gas station in case someone tries to rob it – they (the business, law enforcement, courtrooms, etc.) can go back and check the tape.

It’s the same at the border. If law enforcement needs to check the footage from a certain time and date, they have that option now – most border crossings have CCTV cameras. But adding AI and facial recognition to the mix means we have to trust the government. We have to have faith that the Federal government isn’t using biased data or imperfect algorithms, and isn’t using the data gained from such surveillance for unethical purposes.

Brackeen argues that’s a leap too far. “Now, introduce the very real shortcomings of facial recognition technology, its history of poor match rates in these scenarios, and the misidentification of individuals based on their appearance. Magnify all that by the prejudices and biases that exists in law enforcement and our security agencies—it’s a recipe for disaster,” he said.

DHS didn’t ask permission when it developed the training data for VFS — it took thousands of pictures of people for the purpose of developing an AI for surveillance without informing the general public it was doing so. And it won’t ask your permission in August when it deploys VFS in Texas. It follows, then, that we won’t be in the loop as these technologies continue to pop up all over our country.

In 2014 Edward Snowden, speaking to The Guardian, said “No system of mass surveillance has existed in any society, that we know of to this point, that has not been abused.”

Read the source article in The NextWeb.

UK Report Urges Action to Combat AI Bias, Ensure Diversity in Data Sets

The need for diverse development teams and truly representational data-sets to avoid biases being baked into AI algorithms is one of the core recommendations in a lengthy Lords committee report looking into the economic, ethical and social implications of artificial intelligence, and published today by the upper House of the UK parliament.

“The main ways to address these kinds of biases are to ensure that developers are drawn from diverse gender, ethnic and socio-economic backgrounds, and are aware of, and adhere to, ethical codes of conduct,” the committee writes, chiming with plenty of extant commentary around algorithmic accountability.

“It is essential that ethics take centre stage in AI’s development and use,” adds committee chairman, Lord Clement-Jones, in a statement. “The UK has a unique opportunity to shape AI positively for the public’s benefit and to lead the international community in AI’s ethical development, rather than passively accept its consequences.”

The report also calls for the government to take urgent steps to help foster “the creation of authoritative tools and systems for auditing and testing training datasets to ensure they are representative of diverse populations, and to ensure that when used to train AI systems they are unlikely to lead to prejudicial decisions” — recommending a publicly funded challenge to incentivize the development of technologies that can audit and interrogate AIs.
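As a rough illustration of what such an auditing tool might check, here is a minimal sketch that compares the demographic mix of a training set against reference population shares. The group labels, shares and tolerance are assumptions for the sketch, not anything specified in the Lords report.

```python
# Illustrative sketch only: a toy representativeness check of the kind such an
# auditing tool might run. Group labels, reference shares and the tolerance are
# assumptions, not anything specified in the Lords report.
from collections import Counter


def dataset_shares(labels: list[str]) -> dict[str, float]:
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}


def flag_underrepresented(labels: list[str],
                          population_shares: dict[str, float],
                          tolerance: float = 0.2) -> dict[str, tuple[float, float]]:
    """Flag groups whose dataset share falls more than `tolerance` (relative) below their population share."""
    shares = dataset_shares(labels)
    flags = {}
    for group, expected in population_shares.items():
        actual = shares.get(group, 0.0)
        if actual < expected * (1 - tolerance):
            flags[group] = (actual, expected)
    return flags


if __name__ == "__main__":
    training_labels = ["group_a"] * 900 + ["group_b"] * 100
    population = {"group_a": 0.6, "group_b": 0.4}
    print(flag_underrepresented(training_labels, population))
    # {'group_b': (0.1, 0.4)} -> group_b is badly underrepresented in the training set
```

A production tool would, of course, need far richer checks, including intersectional breakdowns and label-quality audits, but the basic comparison of dataset composition against a reference population is the starting point the report points toward.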

“The Centre for Data Ethics and Innovation, in consultation with the Alan Turing Institute, the Institute of Electrical and Electronics Engineers, the British Standards Institute and other expert bodies, should produce guidance on the requirement for AI systems to be intelligible,” the committee adds. “The AI development sector should seek to adopt such guidance and to agree upon standards relevant to the sectors within which they work, under the auspices of the AI Council” — the latter being a proposed industry body it wants established to help ensure “transparency in AI”.

The committee is also recommending a cross-sector AI Code to try to steer developments in a positive, societally beneficial direction — though not for this to be codified in law (the suggestion is it could “provide the basis for statutory regulation, if and when this is determined to be necessary”).

Read the source article at TechCrunch.

5 Truths About Artificial Intelligence Everyone Should Know

By Rana el Kaliouby, Co-founder and CEO, Affectiva
Last week, I was in LA for the premiere of a new AI documentary, “Do you trust this computer?” (See video link below.) It was a full house with a few hundred audience members. I was one of the AI scientists featured in the documentary, along with big wigs like Elon Musk, Stuart Russell, Andrew Ng and writers Jonathan Nolan and John Markoff. Elon Musk kicked off the evening with director Chris Paine, emphasizing how AI was an important topic that could very well determine the future of humanity. The excitement in the air was palpable. I was one of seven “AI experts” who were to be invited on stage after the screening for a Q&A session with the audience. Shivon Zilis, Project Director of OpenAI, and I were the only women.

The documentary did an excellent job surveying the research and applications of AI, from automation and robots, to medicine, automated weapons, social media and data, as well as the future of the relationship between humans and machines. The work my team and I are doing provided a clear example of the good that can come out of AI.

As I watched in my seat, I could hear the audience gasp at times, and I couldn’t help but notice a couple of things: for one, there was this foregone assumption that AI is out to get us, and two, this field is still so incredibly dominated by men – white men specifically. Other than myself, there were two other women featured – compared to about a dozen males. But it wasn’t just the numbers – it was the total air time. The majority of the time, the voice on screen was male. I vowed that on stage that night, I would make my voice heard.

Here are some of my key thoughts coming out of the premiere and dialogue around it:

1. AI is in dire need of diversity.

The first question asked from the audience was, “Do you see an alternative narrative here–one that is more optimistic?” YES, I chimed in quoting Yann LeCun, head of AI research at Facebook and professor at NYU: “Intelligence is not correlated with the desire to dominate. Testosterone is!” I added that we need diversity in technology–gender diversity, ethnic diversity, and diversity of backgrounds and experiences. Perhaps if we did that, the rhetoric around AI would be more about compassion and collaboration, and less about taking over the world. The audience applauded.

2. Technology is neutral–we, as a society, decide whether we use it for good or bad.

That has been true throughout history. AI has so much potential for good. As thought leaders in the AI space, we need to advocate for these use cases and educate the world about the potentials for abuse so that the public is involved in a transparent discussion about these use cases. In a sense that’s what is so powerful about this documentary. It will not only educate the public but will spark a conversation with the public that is so desperately needed.

My company, Affectiva, joined leading technology companies in the Partnership on AI – a consortium including the likes of Amazon, Google, Apple and many more that is working to set a standard for ethical uses of AI. Yes, regulation and legislation are important, but too often they lag, so it’s up to leaders in the industry to spearhead these discussions and act on them accordingly. To that end, ethics also needs to become a mandatory component of AI education.

3. We need to ensure that AI is equitable, accountable, transparent and inclusive.

The real problem is not the existential threat of AI. Instead, it is in the development of ethical AI systems. Unfortunately, many today are accidentally building bias into AI systems that perpetuate the racial, gender, and ethnic biases existing in society. In addition, it is not clear who is accountable for AI’s behavior as it is applied across industries. Take the recent tragic accident where a self-driving Uber vehicle killed a pedestrian. It so happens that in that case, there was a safety driver in the car. But who is responsible: the vehicle? The driver? The company? These are incredibly difficult questions, but we need to set standards around accountability for AI to ensure proper use.

4. It’s a partnership, not a war.

I don’t agree with the view that it’s humans vs. machines. With so much potential for AI to be harnessed for good (assuming we take the necessary steps outlined above), we need to shift the dialogue to see the relationship as a partnership between humans and machines. There are several areas where this is the case:

  • Medicine. For example, take mental health conditions such as autism or depression. It is estimated that we have a need for 15,000 mental health professionals in the United States alone. That number is huge, and it doesn’t even factor in countries around the world where the need is even greater. Virtual therapists and social robots can augment human clinicians using AI to build rapport with patients at home, being preemptive, and getting patients just-in-time help. AI alone is not enough, and will not take doctors’ place. But there’s potential for the technology, together with human professionals, to expand what’s possible with healthcare today.
  • Autonomous vehicles. While we are developing these systems, they will sometimes fail, even as they keep getting better. The role of the human co-pilot or safety driver is critical. For example, there are already cameras facing the driver in many vehicles that monitor whether the human driver is paying attention or distracted. This is key in ensuring that, in a case where a semi-autonomous vehicle must pass control back to a human driver, the person is actually ready and able to take over safely (a minimal sketch of such a check follows this list). This collaboration between AI and humans will be critical to ensure safety as autonomous vehicles continue to take to the streets around us.
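As a rough illustration of the handover-readiness idea described above, here is a toy sketch. The signal names, thresholds and fallback behavior are assumptions for illustration, not any vendor's actual driver-monitoring logic.

```python
# Illustrative sketch only: a toy version of a handover-readiness check. Signal
# names, thresholds and fallback behavior are assumptions, not any vendor's
# actual driver-monitoring system.
from dataclasses import dataclass


@dataclass
class DriverState:
    eyes_on_road: bool
    hands_on_wheel: bool
    seconds_since_last_glance: float  # time since the driver last looked ahead


def ready_for_handover(state: DriverState, max_glance_gap: float = 2.0) -> bool:
    """Hand control back only if the driver appears attentive right now."""
    return (state.eyes_on_road
            and state.hands_on_wheel
            and state.seconds_since_last_glance <= max_glance_gap)


def request_handover(state: DriverState) -> str:
    if ready_for_handover(state):
        return "hand control to driver"
    # Otherwise escalate alerts and fall back to a minimal-risk manoeuvre.
    return "alert driver and prepare a safe stop"


print(request_handover(DriverState(True, True, 0.5)))   # hand control to driver
print(request_handover(DriverState(False, True, 5.0)))  # alert driver and prepare a safe stop
```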

5. AI needs emotional intelligence.

AI today has a high IQ but a low EQ, or emotional intelligence. But I do believe that the merger of EQ and IQ in technology is inevitable, as so many of our decisions, both personal and professional, are driven by emotions and relationships. That’s why we’re seeing a rise in relational and conversational technologies like Amazon Alexa and chatbots. Still, they’re lacking emotion. It’s inevitable that we will continue to spend more and more time with technology and devices, and while many (rightly) believe that this is degrading our humanity and ability to connect with one another, I see an opportunity. With Emotion AI, we can inject humanity back into our connections, enabling not only our devices to better understand us, but fostering a stronger connection between us as individuals.

While I am an optimist, I am not naive.

Following the panel, I received an incredible amount of positive feedback. The audience appreciated the optimistic point of view. But that doesn’t mean I am naive or disillusioned. I am part of the World Economic Forum Global Council on Robotics and AI, and we spend a fair amount of our time together as a group discussing ethics, best practices, and the like. I realize that not everyone is taking ethics into consideration. That is definitely a concern. I do worry that organizations and even governments who own AI and data will have a competitive advantage and power, and those who don’t will be left behind.

The good news is: we, as a society, are designing those systems. We get to define the rules of the game.

AI is not an existential threat. It’s potentially an existential benefit – if we make it that way. At the screening, there were so many young people in the audience watching. I am hopeful that the documentary renews our commitment to AI ethics and inspires us to apply AI for good.

Link to video, Do you Trust this Computer?

Learn more about Affectiva.