Whether it’s a navigation app such as Waze, a music recommendation service such as Pandora or a digital assistant such as Siri, odds are you’ve used artificial intelligence in your everyday life.
“Today 85 percent of Americans use AI every day,” says Tess Posner, CEO of AI4ALL.
AI has also been touted as the new must-have for business, for everything from customer service to marketing to IT. However, for all its usefulness, AI also has a dark side. In many cases, the algorithms are biased.
Some examples of bias are blatant, such as Google Photos tagging images of black people as gorillas, or an algorithm used by law enforcement to predict recidivism disproportionately flagging people of color. Others are more subtle. When Beauty.AI held an online contest judged by an algorithm, the vast majority of “winners” were light-skinned. Search Google for images of “unprofessional hair” and the results will mostly be pictures of black women (searching simply for “man” or “woman,” meanwhile, returns mostly images of white people).
While the problem has attracted more attention recently, some feel it is still not addressed enough in the broader tech community, let alone in university research or in the government and law enforcement agencies that deploy AI.
“Fundamentally, bias, if not addressed, becomes the Achilles’ heel that eventually kills artificial intelligence,” says Chad Steelberg, CEO of Veritone. “You can’t have machines where their perception and recommendation of the world is skewed in a way that makes its decision process a non-sequitur from action. From just a basic economic perspective and a belief that you want AI to be a powerful component to the future, you have to solve this problem.”
As artificial intelligence becomes ever more pervasive in our everyday lives, there is now a small but growing community of entrepreneurs, data scientists and researchers working to tackle the issue of bias in AI. I spoke to a few of them to learn more about the ongoing challenges and possible solutions.
Cathy O’Neil, founder of O’Neil Risk Consulting & Algorithmic Auditing
Solution: Algorithm auditing
Back in the early 2010s, Cathy O’Neil was working as a data scientist in advertising technology, building algorithms that determined what ads users saw as they surfed the web. The inputs for the algorithms included innocuous-seeming information like what search terms someone used or what kind of computer they owned.
However, O’Neil came to realize that she was actually creating demographic profiles of users. Although gender and race were not explicit inputs, her algorithms were effectively discriminating against users from certain backgrounds based on these other cues.
As O’Neil began talking to colleagues in other industries, she found this to be fairly standard practice. These biased algorithms weren’t just deciding which ads a user saw; they were making arguably more consequential decisions, such as who got hired or whether someone was approved for a credit card. (These observations have since been studied and confirmed by O’Neil and others.)
What’s more, in some industries, such as housing, a human making decisions based on the same criteria would likely be violating anti-discrimination laws. But, because an algorithm was making the decision, and gender and race were not explicit factors, the outcome was assumed to be impartial.
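To make the mechanism concrete, here is a minimal, purely illustrative sketch of the proxy problem O’Neil describes. Nothing in it comes from her work; the data, feature names and numbers are invented. A model is trained on historically biased outcomes using only “innocuous” inputs, yet one of those inputs correlates with group membership, so the disparity survives.

```python
# A minimal, hypothetical sketch of proxy discrimination; all data and
# variable names are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (never shown to the model).
group = rng.integers(0, 2, size=n)

# A proxy feature that correlates strongly with group membership,
# e.g. a neighborhood-derived score.
zip_code_score = group + rng.normal(0, 0.3, size=n)

# A legitimate feature unrelated to group.
browsing_hours = rng.normal(5, 2, size=n)

# Historical outcomes that were themselves biased against group 1.
past_approval = (rng.random(n) < np.where(group == 1, 0.3, 0.7)).astype(int)

# Train only on the "innocuous" features: no race or gender column in sight.
X = np.column_stack([zip_code_score, browsing_hours])
model = LogisticRegression().fit(X, past_approval)
predicted = model.predict(X)

# Check the outcome by group: the disparity survives even though the
# protected attribute was never an input.
for g in (0, 1):
    print(f"group {g}: predicted approval rate = {predicted[group == g].mean():.2f}")
```

The point of the toy example is that removing the protected attribute from the inputs does nothing to remove the pattern it left in the training data.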
“I had left the finance [world] because I wanted to do better than take advantage of a system just because I could,” O’Neil says. “I’d entered data science thinking that it was less like that. I realized it was just taking advantage in a similar way to the way finance had been doing it. Yet, people were still thinking that everything was great back in 2012. That they were making the world a better place.”
O’Neil walked away from her adtech job. She wrote a book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, about the perils of letting algorithms run the world, and started consulting.
Eventually, she settled on a niche: auditing algorithms.
“I have to admit that it wasn’t until maybe 2014 or 2015 that I realized this is also a business opportunity,” O’Neil says.
Right before the election in 2016, that realization led her to found O’Neil Risk Consulting & Algorithmic Auditing (ORCAA).
“I started it because I realized that even if people wanted to stop unfair or discriminatory practices, they wouldn’t actually know how to do it,” O’Neil says. “I didn’t actually know. I didn’t have good advice to give them.” But, she wanted to figure it out.
So, what does it mean to audit an algorithm?
Often, companies will say an algorithm is working if it’s accurate, effective or increasing profits, but for O’Neil, that shouldn’t be enough.
“So, when I say I want to audit your algorithm, it means I want to delve into what it is doing to all the stakeholders in the system in which you work, in the context in which you work,” O’Neil says. “And the stakeholders aren’t just the company building it, aren’t just the company deploying it. It includes the target for the algorithm, so the people that are being assessed. It might even include their children. I want to think bigger. I want to think more about externalities, unforeseen consequences. I want to think more about the future.”
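One concrete check an audit in this spirit might include, sketched below with invented data and not representing ORCAA’s actual methodology, is simply comparing how a model’s decisions land on the different groups of people being assessed.

```python
# A sketch of one basic audit check: outcome disparity across the groups
# an algorithm assesses. The data and threshold are illustrative only.
import numpy as np

def selection_rates(decisions: np.ndarray, groups: np.ndarray) -> dict:
    """Fraction of positive decisions for each group."""
    return {g: float(decisions[groups == g].mean()) for g in np.unique(groups)}

def disparate_impact_ratio(rates: dict) -> float:
    """Ratio of the lowest group's selection rate to the highest's.
    The EEOC's informal four-fifths rule treats ratios below 0.8 as a
    red flag for adverse impact."""
    values = list(rates.values())
    return min(values) / max(values)

# Example: decisions produced by some model, plus group labels for the
# people being assessed (the algorithm's "targets").
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0])
groups = np.array(["a"] * 6 + ["b"] * 6)

rates = selection_rates(decisions, groups)
print("selection rates:", rates)
print("disparate impact ratio:", round(disparate_impact_ratio(rates), 2))
```

A low ratio is not proof of wrongdoing, but it is the kind of stakeholder-level question that, as O’Neil argues, an accuracy or profit metric alone will never surface.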
For example, Facebook’s News Feed algorithm is very good at encouraging engagement and keeping users on its site. However, there’s also evidence it reinforces users’ beliefs, rather than promoting dialog, and has contributed to ethnic cleansing. While that may not be evidence of bias, it’s certainly not a net positive.
Right now, ORCAA’s clients are companies that ask for their algorithms to be audited because they want a third party — such as an investor, client or the general public — to trust it. For example, O’Neil has audited an internal Siemens project and New York-based Rentlogic’s landlord rating system algorithm. These types of clients are generally already on the right track and simply want a third-party stamp of approval.
Read the source article in Entrepreneur.