Entrepreneurs Taking on Bias in Artificial Intelligence

Whether it’s a navigation app such as Waze, a music recommendation service such as Pandora or a digital assistant such as Siri, odds are you’ve used artificial intelligence in your everyday life.

“Today 85 percent of Americans use AI every day,” says Tess Posner, CEO of AI4ALL.

AI has also been touted as the new must-have for business, for everything from customer service to marketing to IT. However, for all its usefulness, AI also has a dark side. In many cases, the algorithms are biased.

Some examples of bias are blatant, such as Google Photos' image-labeling algorithm tagging photos of black people as gorillas, or an algorithm used in the criminal justice system to predict recidivism disproportionately flagging people of color. Others are more subtle. When Beauty.AI held an online contest judged by an algorithm, the vast majority of "winners" were light-skinned. Search Google for images of "unprofessional hair" and the results you see will mostly be pictures of black women (even searching for "man" or "woman" brings back images of mostly white individuals).

While the problem has drawn more attention recently, some feel it is still not adequately addressed in the broader tech community, let alone at the universities researching AI or the government and law enforcement agencies deploying it.

“Fundamentally, bias, if not addressed, becomes the Achilles’ heel that eventually kills artificial intelligence,” says Chad Steelberg, CEO of Veritone. “You can’t have machines where their perception and recommendation of the world is skewed in a way that makes its decision process a non-sequitur from action. From just a basic economic perspective and a belief that you want AI to be a powerful component to the future, you have to solve this problem.”

As artificial intelligence becomes ever more pervasive in our everyday lives, there is now a small but growing community of entrepreneurs, data scientists and researchers working to tackle the issue of bias in AI. I spoke to a few of them to learn more about the ongoing challenges and possible solutions.

Cathy O’Neil, founder of O’Neil Risk Consulting & Algorithmic Auditing

Solution: Algorithm auditing

Back in the early 2010s, Cathy O’Neil was working as a data scientist in advertising technology, building algorithms that determined what ads users saw as they surfed the web. The inputs for the algorithms included innocuous-seeming information like what search terms someone used or what kind of computer they owned.

However, O'Neil came to realize that she was actually creating demographic profiles of users. Although gender and race were never explicit inputs, her algorithms were effectively discriminating against users of certain backgrounds based on those other, correlated cues.

As O’Neil began talking to colleagues in other industries, she found this to be fairly standard practice. These biased algorithms weren’t just deciding what ads a user saw, but arguably more consequential decisions, such as who got hired or whether someone would be approved for a credit card. (These observations have since been studied and confirmed by O’Neil and others.)

What's more, in some industries, such as housing, a human making decisions based on the same criteria would likely be breaking anti-discrimination laws. But because an algorithm was making the decision, and gender and race were not explicit factors, the outcome was assumed to be impartial.
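To make the mechanics concrete, here is a minimal, hypothetical sketch in Python. The scenario, the "zip code" proxy feature and all the numbers are invented for illustration; they are not drawn from O'Neil's work. It shows how a decision rule that never sees a protected attribute can still produce skewed outcomes through a correlated feature, and how a simple audit statistic can surface the skew:

```python
# Hypothetical sketch of the proxy problem: the decision rule never
# sees the protected attribute, yet a correlated feature lets it
# discriminate anyway. We then compute the disparate impact ratio,
# one common audit statistic.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (never given to the decision rule).
group = rng.integers(0, 2, size=n)  # 0 or 1

# An innocuous-seeming proxy feature that happens to be strongly
# correlated with group membership, e.g. a coarse geographic code.
zip_code = group + rng.normal(0, 0.3, size=n)

# A "neutral" decision rule that only looks at the proxy.
approved = zip_code < 0.5

# Audit: compare approval rates across the two groups.
rate_0 = approved[group == 0].mean()
rate_1 = approved[group == 1].mean()
ratio = min(rate_0, rate_1) / max(rate_0, rate_1)

print(f"approval rate, group 0: {rate_0:.2%}")
print(f"approval rate, group 1: {rate_1:.2%}")
print(f"disparate impact ratio: {ratio:.2f}")
```

A ratio below roughly 0.8, the "four-fifths rule" threshold used in US employment-discrimination guidance, is a common red flag, though a real audit examines far more than a single number.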

“I had left the finance [world] because I wanted to do better than take advantage of a system just because I could,” O’Neil says. “I’d entered data science thinking that it was less like that. I realized it was just taking advantage in a similar way to the way finance had been doing it. Yet, people were still thinking that everything was great back in 2012. That they were making the world a better place.”

O'Neil walked away from her adtech job. She wrote a book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, about the perils of letting algorithms run the world, and started consulting.

Eventually, she settled on a niche: auditing algorithms.

“I have to admit that it wasn’t until maybe 2014 or 2015 that I realized this is also a business opportunity,” O’Neil says.

Right before the election in 2016, that realization led her to found O’Neil Risk Consulting & Algorithmic Auditing (ORCAA).

"I started it because I realized that even if people wanted to stop unfair or discriminatory practices, they wouldn't actually know how to do it," O'Neil says. "I didn't actually know. I didn't have good advice to give them." But she wanted to figure it out.

So, what does it mean to audit an algorithm?