Executive Interview: Yoshua Bengio of MILA, University of Montreal

Combining AI Research, Business Collaboration, and Thoughts on the Impact of AI on Society

Yoshua Bengio is among the most cited Canadian computer scientists. He is the author of two books and more than 200 publications, the most cited being in the areas of deep learning, recurrent neural networks, probabilistic learning algorithms, natural language processing and manifold learning.

Yoshua Bengio of MILA, University of Montreal

He earned a PhD in Computer Science from McGill University in 1991 and worked at the Canadian Institute for Advanced Research (CIFAR) alongside Yann LeCun (now at Facebook) and Geoffrey Hinton (now at Google). He has collaborated with IBM on work on the Watson supercomputer.

His current interests are centered around a quest for AI through machine learning, and include fundamental questions on deep learning and representation learning, the geometry of generalization in high-dimensional spaces, manifold learning, biologically inspired learning algorithms, and challenging applications of statistical machine learning. He recently participated in an interview with journalists learning about Canada’s AI initiative that included Eliot Weinman, Executive Editor of AI Trends.

Q. Why is it important for the Canadian government to engage in this AI initiative?

A. AI is not just another technology. It will have a big impact on our societies, and there are many ethical and social questions associated with how AI is being deployed and how it will be deployed. If we don’t think about these considerations, the public will eventually reject advanced technologies that they see as threatening and against their well-being. So governments have to really care about these questions, whether for moral reasons or for practical reasons.

Q. What would be your AI horror scenario?  

A. I am most concerned about the use of AI in the military and security arenas. I’m sure you’ve heard about killer robots, and you may have also heard of how the technology can be used to recognize people from their facial images. So there are Big Brother scenarios that could be upon us if we’re not careful. I also have concerns related to privacy issues when we are dealing with private data. Then we have economic issues. Automation will be accelerated with AI; that may create more inequality than we already suffer. And that is at the level of people, companies and countries. To have more countries involved will create a healthier playing field.  

Q. What is the role of universities in the evolution of AI?

A. I am a professor at the University of Montreal; we have created MILA (the Montreal Institute for Learning Algorithms), which is comparable to the Vector Institute (a collaboration of government and business in partnership with the University of Toronto); the two have similar goals and were both funded by the federal and provincial governments. These institutes – there's another in Alberta (the Alberta Machine Intelligence Institute) – have been set up so they will have more agility than universities have, but they're still academic research organizations. They also have a mandate to help the ecosystem through the startups and companies that are creating value with AI.

These institutes are in a better position to be neutral about how AI will be used and keep in mind the well-being of people, and to orient research in directions that will be good for people, and engage in the public dialogue in a credible way. I think it’s good that companies like Facebook and Google participate in that dialogue, but I’m not sure if they are neutral agents in those discussions. Universities, which care, first and foremost, about the public good, are really important agents in the discussions and in the kind of research that can be done.  

Q. What steps can government take to foster this dialogue?

A. Here in Montreal, we are creating an organization that will be focused on the social, economic and ethical questions around AI. It will sponsor research in the social sciences and humanities around AI, but also will participate in the public debate. I think we don’t have all the answers to how to do this right. Scholars and scientists need to really think through this and engage the public. We did something like this in the last six months in Montreal and in Quebec, and also in Ontario. After a forum of experts, we brought in ordinary people. We went to public libraries and places where people could comment and discuss the questions. We’re coming up with something that will be initiated by scholars and experts, and also have feedback and contributions from ordinary people. I think we have to continue in that direction.  

This observatory on AI will be in a good position to make recommendations to governments, which will be part of the mission both locally and in different countries. The questions are pretty much the same in most countries. I think there should be a global coordination about these questions. There are issues like military use which will obviously need to be international, and even questions about regulating companies, which are multinationals. It would be much better if we can agree on principles globally.  

Q. What do you see as the next evolution of the core technology that enables what we know of as AI today?

A. I’m a scientist. I don’t have a crystal ball; I can make educated guesses like many people. But one thing is for sure: there are obstacles on our way towards smarter machines, and it’s always been like this when we make progress. We’ve achieved something important, and now we see that there are other challenges. We’ve made huge progress in industry using supervised learning, where humans have to really teach machines by telling them what to do. A lot of the current emphasis in basic research is on unsupervised learning and reinforcement learning, where the machines have to learn in a more autonomous way. And we haven’t solved that in a satisfactory way yet. It will probably take years, or decades, to really make big breakthroughs there. But given the exponential growth of research in these areas, I’m very optimistic that things will move very swiftly.
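To make the distinction concrete, the sketch below contrasts supervised learning, where human-provided labels tell the model what to predict, with unsupervised learning, where the algorithm must find structure in unlabeled data on its own. It is a minimal illustration only, assuming Python with scikit-learn and synthetic data; it is not code from MILA or from Bengio’s own work.

```python
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Synthetic data: 300 points drawn from 3 well-separated clusters.
X, y = make_blobs(n_samples=300, centers=3, random_state=0)

# Supervised learning: the human-provided labels y "teach" the model what to predict.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised training accuracy:", clf.score(X, y))

# Unsupervised learning: the same data with no labels; the algorithm has to
# discover the grouping on its own, closer to the autonomous learning described above.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster assignments (first 10 points):", km.labels_[:10])
```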

Q. Are you concerned that the massive investments in AI today are too risky?

A. One reason why companies are investing so much, and are so optimistic, is that a lot of the future wealth growth from AI doesn’t depend on new discoveries. In other words, we take what we already have scientifically, and we just make a lot of progress in the hardware. That’s going to happen; it’s moving. We will make progress in bringing together the right data. With medical data, for example, we don’t do a good job yet. In lots of industries and sectors, the ingredients for applying that science are not there yet, but they will be there soon.

We have at least a decade to just reap the benefits of the science we already have. On top of that, there’s so much money being poured into research, both in industry and in academia, that it would be surprising if the science doesn’t move forward over the next decade. So it’s almost a sure gain. Now, of course, you know, commercial enterprises can fail for all kinds of reasons. But at a high level, I think it’s a very safe bet.

Q. Is China ahead in the race to be the leading AI country?

A. I don’t like to make these kinds of comparisons. Silicon Valley is a very small place. Progress can come from anywhere in the world. China does have huge advantages in this race. One of the most important ones is that it’s the biggest market in the world, and it has the volumes of data that go with that. So from the point of view of investing, this is a very appealing place to do AI. And in addition, there’s a huge enthusiasm for AI in China from all quarters, and lots and lots of students are jumping into this. It’s a worldwide phenomenon, but I think with all the enthusiasm behind it, China probably wins the race for now.

Q. Do you envision big companies and startups and small companies collaborating to advance AI?

A. There is room for many kinds of business models in this new world. Large companies have leadership strong enough to make the fast turns that are needed, and companies like Element AI can help with that. And big companies will be in competition with up-and-running small companies, building new products and new services which may not even exist now. New markets will be created. I’m also a big believer in the collaboration between startups and large companies. They have complementary advantages. This is important from the point of view of a country with a national strategy, because the startups are more agile. They can more easily recruit people who are excited about the fast pace of development; they can recruit talent more easily.

But the large companies have the larger market where they can deploy. They have lots of cash to invest, and they have lots of data. Ideally, companies, a little bit like researchers, learn to cooperate better, complementing each other’s strengths and weaknesses to build something stronger.

Q. Are you concerned about the risk of jobs lost to AI automation?

A. Absolutely. The potential impact on the job market is very serious. It’s not going to happen in one day, but it will happen way too fast for our ability to handle those changes. Many people are likely to lose their jobs in the middle of their careers.

We have to rethink our social safety net. Most developed countries have a social safety net, but it’s been designed for a particular kind of economy. We will need to look into things like a universal basic income, and do more pilots. We may have to forget about some of our traditional values around work, such as the idea that if you don’t work, you don’t get money. And that’s only one aspect of it. We need to rethink the education system so people can be reskilled in the middle of their careers, while they are at a job.

The education system will need to train people in a way that is more appropriate for a fast-changing world, where human skills are going to be more important than they were in the past. Of course, we want to train more scientists and engineers; that’s a no-brainer. But we have to train people not for one job that’s very, very specialized, but rather how to think for themselves about how to be good citizens, and to rapidly learn the skills they need.

And we have to ask: what is going to be the impact on society? Will AI be beneficial for the whole society or just a few people? I don’t have the answers, but I think it’s important to ask the questions and not let the market figure out the answers by itself. Those answers might not be in favor of ordinary people. Governments need to think about this and, if necessary, find the right regulations.

Q. How is MILA progressing and can you describe your typical day?

A. MILA is the Montreal Institute for Learning Algorithms, a machine learning research lab with business collaboration as part of the mission. It’s growing very fast. It already has the highest concentration of deep learning researchers in academia in the world. We’re going to be doubling the number of professors over the next few years, thanks to the Canadian government.  

MILA is mostly academic in nature, a non-profit, but with the mandate to help companies, to guide them in their development of AI.

I love working at the university. It allows me to be a more neutral agent in the changes that are coming, and gives me a voice that can have an impact as we adapt to this changing world of AI. Also, I’m in a position to steer research in directions that I think are important, and to contribute to the training of the next generation. I think this is something really, really important. I just enjoy the research with all of my students, which I would lose if I went to private industry.

Learn more at MILA.