Entrepreneurs Taking on Bias in Artificial Intelligence


Whether it’s a navigation app such as Waze, a music recommendation service such as Pandora or a digital assistant such as Siri, odds are you’ve used artificial intelligence in your everyday life.

“Today 85 percent of Americans use AI every day,” says Tess Posner, CEO of AI4ALL.

AI has also been touted as the new must-have for business, for everything from customer service to marketing to IT. However, for all its usefulness, AI also has a dark side. In many cases, the algorithms are biased.

Some examples of bias are blatant, such as Google Photos tagging photos of black people as gorillas, or an algorithm used by law enforcement to predict recidivism disproportionately flagging people of color. Others are more subtle. When Beauty.AI held an online contest judged by an algorithm, the vast majority of “winners” were light-skinned. Search Google for images of “unprofessional hair” and the results will mostly be pictures of black women (even searching for “man” or “woman” brings back images of mostly white individuals).

While more light has been shined on the problem recently, some feel it’s not an issue addressed enough in the broader tech community, let alone in research at universities or the government and law enforcement agencies that implement AI.

“Fundamentally, bias, if not addressed, becomes the Achilles’ heel that eventually kills artificial intelligence,” says Chad Steelberg, CEO of Veritone. “You can’t have machines where their perception and recommendation of the world is skewed in a way that makes its decision process a non-sequitur from action. From just a basic economic perspective and a belief that you want AI to be a powerful component to the future, you have to solve this problem.”

As artificial intelligence becomes ever more pervasive in our everyday lives, there is now a small but growing community of entrepreneurs, data scientists and researchers working to tackle the issue of bias in AI. I spoke to a few of them to learn more about the ongoing challenges and possible solutions.

Cathy O’Neil, founder of O’Neil Risk Consulting & Algorithmic Auditing

Solution: Algorithm auditing

Back in the early 2010s, Cathy O’Neil was working as a data scientist in advertising technology, building algorithms that determined what ads users saw as they surfed the web. The inputs for the algorithms included innocuous-seeming information like what search terms someone used or what kind of computer they owned.


However, O’Neil came to realize that she was actually creating demographic profiles of users. Although gender and race were not explicit inputs, O’Neil’s algorithms were discriminating against users of certain backgrounds, based on the other cues.

As O’Neil began talking to colleagues in other industries, she found this to be fairly standard practice. These biased algorithms weren’t just deciding what ads a user saw, but arguably more consequential decisions, such as who got hired or whether someone would be approved for a credit card. (These observations have since been studied and confirmed by O’Neil and others.)

What’s more, in some industries — housing, for example — a human making decisions based on the same criteria would likely be breaking anti-discrimination laws. But because an algorithm was deciding, and gender and race were not explicit factors, the decision was assumed to be impartial.
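To see how this kind of proxy discrimination can be detected, consider a minimal sketch with synthetic data and hypothetical feature names (not O’Neil’s actual ad models): even when a protected attribute is never given to a model, an auditor can test whether the remaining features effectively encode it.

```python
# Illustrative sketch with synthetic data: a protected attribute is excluded
# from the inputs, yet other "innocuous" features act as proxies for it.
# One quick test is to check how well those features predict the attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                       # hypothetical protected attribute (never an input)
zip_income = rng.normal(50 + 20 * group, 10, n)     # proxy feature correlated with group
device_age = rng.normal(3 - 1.5 * group, 1, n)      # another correlated proxy
search_terms = rng.normal(5, 2, n)                  # unrelated feature

X = np.column_stack([zip_income, device_age, search_terms])
X_tr, X_te, g_tr, g_te = train_test_split(X, group, random_state=0)

proxy_check = LogisticRegression().fit(X_tr, g_tr)
auc = roc_auc_score(g_te, proxy_check.predict_proba(X_te)[:, 1])
print(f"Protected attribute recoverable from other features: AUC = {auc:.2f}")
# An AUC well above 0.5 means decisions driven by these features can still
# differ systematically by group, even though group was never an input.
```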

“I had left the finance [world] because I wanted to do better than take advantage of a system just because I could,” O’Neil says. “I’d entered data science thinking that it was less like that. I realized it was just taking advantage in a similar way to the way finance had been doing it. Yet, people were still thinking that everything was great back in 2012. That they were making the world a better place.”

O’Neil walked away from her adtech job. She wrote a book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, about the perils of letting algorithms run the world, and started consulting.

Eventually, she settled on a niche: auditing algorithms.

“I have to admit that it wasn’t until maybe 2014 or 2015 that I realized this is also a business opportunity,” O’Neil says.

Right before the election in 2016, that realization led her to found O’Neil Risk Consulting & Algorithmic Auditing (ORCAA).

“I started it because I realized that even if people wanted to stop that unfair or discriminatory practices then they wouldn’t actually know how to do it,” O’Neil says. “I didn’t actually know. I didn’t have good advice to give them.” But, she wanted to figure it out.

So, what does it mean to audit an algorithm?
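One concrete check an auditor can run is a disparate impact test: compare how often the algorithm selects people from different groups. Here is a minimal sketch using hypothetical decision data; it illustrates the idea rather than ORCAA’s actual methodology.

```python
# Minimal sketch of a disparate impact check on hypothetical decision data.
# 'group' is a demographic label, 'selected' is 1 if the algorithm approved.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [ 1,   1,   1,   0,   1,   0,   0,   0 ],
})

rates = decisions.groupby("group")["selected"].mean()
impact_ratio = rates.min() / rates.max()
print(rates)
print(f"Disparate impact ratio: {impact_ratio:.2f}")
# A ratio below 0.8 -- the informal "four-fifths" rule used in US employment
# law -- is a common red flag that the algorithm deserves closer scrutiny.
```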

UK Report Urges Action to Combat AI Bias, Ensure Diversity in Data Sets


The need for diverse development teams and truly representative datasets to avoid biases being baked into AI algorithms is one of the core recommendations in a lengthy Lords committee report on the economic, ethical and social implications of artificial intelligence, published today by the upper house of the UK parliament.

“The main ways to address these kinds of biases are to ensure that developers are drawn from diverse gender, ethnic and socio-economic backgrounds, and are aware of, and adhere to, ethical codes of conduct,” the committee writes, chiming with plenty of extant commentary around algorithmic accountability.

“It is essential that ethics take centre stage in AI’s development and use,” adds committee chairman, Lord Clement-Jones, in a statement. “The UK has a unique opportunity to shape AI positively for the public’s benefit and to lead the international community in AI’s ethical development, rather than passively accept its consequences.”

The report also calls for the government to take urgent steps to help foster “the creation of authoritative tools and systems for auditing and testing training datasets to ensure they are representative of diverse populations, and to ensure that when used to train AI systems they are unlikely to lead to prejudicial decisions” — recommending a publicly funded challenge to incentivize the development of technologies that can audit and interrogate AIs.

“The Centre for Data Ethics and Innovation, in consultation with the Alan Turing Institute, the Institute of Electrical and Electronics Engineers, the British Standards Institute and other expert bodies, should produce guidance on the requirement for AI systems to be intelligible,” the committee adds. “The AI development sector should seek to adopt such guidance and to agree upon standards relevant to the sectors within which they work, under the auspices of the AI Council” — the latter being a proposed industry body it wants established to help ensure “transparency in AI”.

The committee is also recommending a cross-sector AI Code to try to steer developments in a positive, societally beneficial direction — though not for this to be codified in law (the suggestion is it could “provide the basis for statutory regulation, if and when this is determined to be necessary”).

Read the source article at TechCrunch.

Bias in AI Increasingly Recognized; Progress Being Made


Bias in AI decision-making and in the algorithms of machine learning has been outed as a real issue in the march of AI progress. Here is an update on where we are and efforts being made to recognize bias and counteract it, including a discussion of selected AI startups.

AI reflects the bias of its creators, notes Will Byrne, CEO of Groundswell, in a recent article in Fast Company. Societal bias – attributing distinct traits to individuals or groups without any data to back it up – is a stubborn problem. AI has the potential to make it worse.

“The footprint of machine intelligence on critical decisions is often invisible, humming quietly beneath the surface,” he writes. AI is driving decision-making on loan-worthiness, medical diagnosis, job candidates, parole determination, criminal punishment and educator performance.

How will AI be fair and inclusive? How will it engage and support the marginalized and most vulnerable in society?

Courts across the US are using a software tool suspected of bias against African-Americans: according to a recent report by ProPublica, the non-profit investigative journalism outfit, it flagged black defendants as future criminals at roughly twice the rate of white defendants, while underestimating the likelihood of future crimes among white defendants. The tool, developed by Northpointe, uses 137 questions, including “Was one of your parents ever sent to prison?” It is in widespread use; Northpointe has refused to make the algorithm transparent, citing its proprietary business value.

AI is only as effective as the data it is trained on, Byrne wrote in Fast Company. When Microsoft introduced Tay.ai to the world in 2016, the conversational chatbot was designed to use live interactions on Twitter to get “smarter” in real time. But Tay turned horribly racist and misogynistic and was shut down after 16 hours.

Trend Toward More Openness

The trend now is toward opening up the black box of AI decision-making. The nonprofit AI Now Institute advocates for fair algorithms; it has proposed that if an algorithm providing services to people cannot explain its decisions, it should not be used. Such transparency is likely to be required by regulation in the near future. The European Union’s General Data Protection Regulation, which goes into effect on May 25, 2018, pushes in this direction as well.

Within the data science community, OpenAI is a nonprofit developing open source code in the emerging field of explainable AI, which focuses on systems that can explain the reasoning behind their decisions to human users.

Some point to the importance of having teams with diverse backgrounds across race, gender, culture and socioeconomic status designing and building AI systems. The ranks of Ph.D. technologists and mathematicians who have advanced the AI field need to expand: sociologists, ethicists, psychologists and humanities experts need to join them.

It may be that separate algorithms are needed for different groups. In job-candidate software, the predictors of success for women engineers and for men are not the same. Digital affirmative action may be able to correct for structural bias that would otherwise be invisible.
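One way such an intervention can look in code, sketched here with synthetic scores and hypothetical groups (one possible technique among several, not a prescription from the article): set selection thresholds per group rather than applying a single global cutoff.

```python
# Illustrative sketch: per-group selection thresholds as a simple form of
# "digital affirmative action". Scores and groups are synthetic.
import numpy as np

rng = np.random.default_rng(1)
scores_a = rng.normal(0.60, 0.10, 1000)   # model scores for group A
scores_b = rng.normal(0.52, 0.10, 1000)   # group B scores skewed lower by structural bias

target_rate = 0.30                         # aim to select the top 30% within each group
thr_a = np.quantile(scores_a, 1 - target_rate)
thr_b = np.quantile(scores_b, 1 - target_rate)

print(f"Group A threshold {thr_a:.3f}, selection rate {np.mean(scores_a >= thr_a):.2f}")
print(f"Group B threshold {thr_b:.3f}, selection rate {np.mean(scores_b >= thr_b):.2f}")
# A single global cutoff at the overall top 30% would select far fewer people
# from group B; separate thresholds equalize selection rates across groups.
```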

Efforts Underway to Address Bias in AI Include Startups

AI Now was launched at a conference at MIT in July 2017. The founders were Kate Crawford, a researcher at Microsoft, and Meredith Whittaker, a researcher at Google. In an email to MIT Technology Review, Crawford said, “It’s still early days for understanding algorithmic bias. Just this year we’ve seen more systems that have issues, and these are just the ones that have been investigated.”

Cathy O’Neil is a mathematician and author of the book, “Weapons of Math Destruction,” which highlights the risk of algorithmic bias. “Algorithms replace human processes, but they are not held to the same standards,” she has said. “People trust them too much.”

O’Neil is now head of O’Neil Risk Consulting & Algorithmic Auditing, a startup set up to help businesses identify and correct bias in the algorithms they use. The firm’s clients include Rentlogic, a company that grades apartments in New York City. The firm is also engaged in several projects in industries such as manufacturing, banking and education.

Asked in an email interview with AI Trends about the outlook for addressing bias in AI algorithms, O’Neil said, “It’s an emerging field. I’m not sure how or exactly when but within the next two decades we will either have solved the problem of algorithmic accountability or we will have submitted our free will to stupid and flawed machines. I know which future I’d prefer.”

She added, “There’s increasing academic work on the topic (see FAT* conference discussion below) but of course the IP laws and licenses tilt the playing field towards the tech giants. Not to mention that they are the ones who own all our data. So there’s a limited amount that outside researchers can accomplish without regulations or subpoenas.”

O’Neil continued, “But again I think the current state of affairs will end. I just don’t know exactly how much damage will take place before it does.”

FAT* Conference Gaining Steam

The conference on Fairness, Accountability, and Transparency (FAT*), which held its fifth annual event in February 2018, brings together researchers and practitioners interested in fairness, accountability and transparency in socio-technical systems.

This community sees progress being made to address bias in AI technologies and automated decision-making. The group has a multidisciplinary and computer science-focused perspective, said Joshua Kroll, program chair, in an email interview with AI Trends. “We’ve seen truly exponential growth in the interest in this area,” said Kroll, a computer scientist who is a Postdoctoral Research Scholar at the UC Berkeley School of Information.

“From our early workshops on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) starting in 2014 with a few dozen people, we’ve had yearly doubling in both the amount of contributed work and the number of event attendees. At this year’s conference, for example, we had over 500 people registered with a waiting list of over 400 people. And we’ve reached the selectivity of top-tier research venues in computer science to select the 17 research papers chosen for presentation as well as the six tutorial sessions,” Kroll said.

He added, “One important improvement is the way scholars and practitioners alike are starting to view these problems as cutting across different concerns and requiring solutions from many disciplines. The community, by and large, realizes that there will be no single “most fair” algorithm, but rather that fairness (or the elimination of bias) will be a process combining measurements and mitigations at the technical level with improvements in human-level processes for understanding what technology is doing.”

This year’s FAT* featured an interdisciplinary group of speakers on a range of topics, including how to deploy responsible models in life-critical situations. One session focused on the use of machine learning to support screening of referrals to a child protection agency in Pennsylvania.

Presentations on face recognition systems showed that while they had very good performance overall (88-93% accuracy), they performed much worse on darker-skinned faces (77-87% accuracy) and on women (79-89% accuracy). Performance was worse still for people at the intersection of those two subgroups, i.e., darker-skinned women (65-79% accuracy), Kroll said.
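The disaggregated evaluation behind those numbers is straightforward to reproduce: instead of a single aggregate accuracy figure, compute accuracy for each subgroup and each intersection of subgroups. A minimal sketch with hypothetical per-image results follows.

```python
# Sketch of disaggregated evaluation: accuracy by subgroup and intersection.
# Column names and values are hypothetical stand-ins for per-image results.
import pandas as pd

results = pd.DataFrame({
    "skin":    ["lighter", "lighter", "darker", "darker", "darker", "lighter"],
    "gender":  ["male",    "female",  "male",   "female", "female", "male"],
    "correct": [1,          1,         1,        0,        1,        1],
})

print(f"Overall accuracy: {results['correct'].mean():.2f}")
print(results.groupby("skin")["correct"].mean())               # by skin type
print(results.groupby(["skin", "gender"])["correct"].mean())   # intersections
# A strong overall number can hide a large gap for darker-skinned women,
# which is exactly what the face recognition studies at FAT* reported.
```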

“Nearly all of the work at FAT* is meant to change the way people design and build these systems to help them understand and avoid problems of bias or other unintended consequences,” he said. “The work on face recognition accuracy, for example, caused one of the companies whose systems were examined to replicate the study internally and make changes to their algorithms to reduce or eliminate the problem.” The effects of those changes had not yet been validated at the time of the conference.

“I think the most important takeaway from FAT* and the growth of this community has been the idea that we won’t make algorithms fair, accountable, or transparent if we only think about how to intervene purely at the technical level,” Kroll said. “That is, while it’s important and useful to develop technologies that explicitly mitigate bias, we still need to understand which biases need to be corrected or which parts of a population need extra protection. And even when we know that, such as when the law forbids discrimination on the basis of a protected attribute like race or gender, we still need to take a wide view to understand the ways in which a system causes negative impacts to those protected groups.”

Finally, he said, “It’s exciting to me that we’re starting to see ideas from this research community make the jump from the academic world into real practice. I’m excited to see companies thinking hard about these issues and sending top engineering leadership to engage with and learn from the research community on these problems.”

(For more information, go to FAT*.)

Google Sensitized to Bias

Google’s cloud-based machine learning systems aim to make AI more accessible; with that comes the risk that bias will creep in.

John Giannandrea, AI chief at Google, was quoted in an October 2017 article in MIT Technology Review as being seriously concerned about bias in AI algorithms. “If we give these systems biased data, they will be biased,” he stated. “It’s important that we be transparent about the training data that we are using, and are looking for hidden biases in it; otherwise, we are building biased systems. If someone is trying to sell you a black box system for medical decision support, and you don’t know how it works or what data was used to train it, then I wouldn’t trust it,” he stated.

Google recently organized its own conference on the relationship between humans and AI systems, which included speakers on the subject of bias. Google researcher Maya Gupta described her efforts to make algorithms more transparent as part of a project known internally as “GlassBox.” Karrie Karahalios, a professor of computer science at the University of Illinois, presented on the difficulty of detecting bias in how Facebook selects articles for its News Feed.

Recruiting Software Firms Aim to Cut Down Bias

Recruiting software firms have a keen interest in reducing or eliminating bias in their approaches. Mya Systems of San Francisco, founded in 2012, does this through reliance on a chatbot named Mya. Co-founder Eyal Grayevsky told Wired in a recent interview that Mya is programmed to interview and evaluate job candidates by asking objective, performance-based questions, avoiding the subjective judgments that a human interviewer may unconsciously make. “We’re taking out bias from the process,” he stated.

Startup HireVue seeks to eliminate bias from recruiting through the use of video- and text-based software. The program extracts up to 25,000 data points from video interviews. Customers include Intel, Vodafone, Unilever and Nike. The assessments are based on factors including facial expressions, vocabulary and abstract qualities such as candidate empathy. HireVue CTO Loren Larsen was quoted as saying that candidates are “getting the same shot regardless of gender, ethnicity, age, employment gaps or college attended.”

The startup recruiting software suppliers are not blind to the possibility that bias can still occur in an AI system. Laura Mather, founder and CEO of AI recruitment platform Talent Sonar, was quoted in Wired as seeing “a huge risk that using AI in the recruiting process is going to increase bias and not reduce it.” This is because AI depends on a training set generated by a human team that may not be diverse enough.

This risk is echoed by Y-Vonne Hutchinson, the executive director of ReadySet, a diversity consultancy based in Oakland. “We try not to see AI as a panacea,” she told Wired. “AI is a tool and AI has makers and sometimes AI can amplify the biases of its makers and the blind spots of its makers.” Diversity training helps human recruiters spot bias in themselves and others, she argues.

By John P. Desmond, AI Trends Editor

Algorithmic Bias is Real and Pervasive. Here’s How You Solve It.


By Robin Bordoli, CEO, CrowdFlower

A few months back, I found myself in one of those big electronics stores that are rapidly becoming extinct. I don’t remember exactly why I was there, but I remember a very specific moment when a woman with a Chinese accent walked up to the cashier, plopped her home voice assistant down on the counter and proclaimed: “I need to return this…thing.” The clerk nodded and asked why. “Because this damn thing doesn’t understand anything I say!”

You’d be surprised how often voice recognition systems have this problem. And why is that, exactly? It’s because they’re trained on the same kinds of voices, namely those of engineers in Silicon Valley. That means that when an assistant hears a request in a thick Southern drawl, a wicked hard Boston accent or a Cajun N’awlins dialect, you name it, it simply won’t know what to make of those commands.

Now, while a voice assistant failing to understand your every request isn’t exactly a life-or-death problem, it is evidence of something called algorithmic bias. And whether algorithmic bias is real is no longer up for debate. There are myriad examples, from ad networks showing high-paying jobs to men far more often than to women, to models that trumpet 1950s gender bias, to bogus sentencing based on classifiers, to AI-judged beauty pageants that demonstrate a preference for white contestants. Hardly a month goes by without another high-profile instance plastered across tech news.

Again, the question isn’t whether the problem exists; it’s how we solve it. Because it’s one we absolutely have to solve. Algorithms already control much more of our lives than most people realize. They’re responsible for whether you can get a mortgage, how much your insurance costs, what news you see: essentially everything you do and see online, and increasingly offline as well. None of us should want to live in a society where these biases are codified and amplified.

So how do we solve this problem? First, the good news: almost none of the prominent examples of algorithmic bias are due to malicious intent. In other words, it isn’t as if there’s a room full of sexist, racist programmers foisting these models on the public. The bias is accidental, not purposeful. It comes, in fact, from the data itself. And that means we can often solve it with different, or more, data.

Take the example of a facial classifier that didn’t recognize black people as people. This is a glaring example of bias, but it stems primarily from the original dataset. If an algorithm is trained on a set of images of white college students, it may have significant problems recognizing people with darker complexions, older people or babies; they will simply be ignored. Fixing that means training the algorithm on an additional corpus of facial data, like the project the folks at Kiva are undertaking.

Kiva is a microlending platform focused predominantly on the developing world. As part of the application process, prospective borrowers are asked to include a photo of themselves, along with the other pertinent details they share with the community of lenders. In doing so, Kiva has accrued a dataset of hundreds of thousands of highly diverse images, importantly captured in real-world rather than laboratory settings. If you take that original, biased facial classifier and retrain it with additional labeled images from a dataset that is more representative of the full spectrum of human faces, suddenly you have a model that recognizes a much wider population.
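As a rough illustration of that retraining step (synthetic stand-in data, not Kiva’s actual images or pipeline): a classifier fit on a skewed sample has a blind spot for the under-represented group, and adding representative labeled examples before refitting removes it.

```python
# Illustrative sketch with synthetic data: retraining with representative
# examples fixes a blind spot created by a skewed training set.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def synth_group(n, center, rule):
    """Synthetic two-feature examples for one group, labeled by `rule`."""
    X = rng.normal(loc=center, scale=1.0, size=(n, 2))
    return X, rule(X)

X_a, y_a = synth_group(2000, [0.0, 0.0], lambda X: (X[:, 0] > 0).astype(int))   # well-represented group
X_b, y_b = synth_group(2000, [5.0, 5.0], lambda X: (X[:, 1] > 5).astype(int))   # under-represented group

skewed = DecisionTreeClassifier(random_state=0).fit(X_a, y_a)     # trained on group A only
print("Group B accuracy, skewed model:   ", round(skewed.score(X_b, y_b), 2))

X_all = np.vstack([X_a, X_b[:500]])                               # add representative examples
y_all = np.concatenate([y_a, y_b[:500]])
retrained = DecisionTreeClassifier(random_state=0).fit(X_all, y_all)
print("Group B accuracy, retrained model:", round(retrained.score(X_b[500:], y_b[500:]), 2))
```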

Most instances of algorithmic bias can be solved the same way: by retraining a classifier with tailored data. Those voice assistants that don’t understand accents? Once they hear enough of those accents, they will. The same is true of essentially every example I cited above. But this raises a different question: if we know how to fix algorithmic bias, why are there so many instances of it?

This is where companies need to step up, because the instances of bias mentioned above really should have been caught. Think about it: why didn’t anyone consider making sure their facial recognizer could handle non-white faces? Odds are, they didn’t consider it at all. Or if they did, maybe they tested only in sterile laboratory conditions that don’t mirror the real world.

Here, companies need to consider two things. First, hiring. Diverse engineering teams ask the right questions, and by most measures diverse teams perform better because they bring different experiences to their work. Second, companies aren’t thinking enough about their users. Or, to put it more directly, they aren’t thinking enough about their universe of potential users. Diverse teams will inherently help here, but even then you can run into trouble. Take a moment before you release a machine learning project and stress-test it in ways your team didn’t think of straight off the bat. Use empathy. Realize that different users will act in different ways and that, although you can’t reasonably hope to foresee them all, by making a concerted effort you can catch a great deal of these problems before your project goes live.

At this point, most of us are aware that artificial intelligence is going to transform business and society. That much is certain, though experts can quibble over the extent. We also know that AI can both amplify existing bias and even exhibit bias where none was intended. But it’s solvable. It is. It’s just a matter of being conscientious. It means hiring smartly. It means testing smartly. And it means, above all, using the same data that makes AI work to make AI work more fairly. Algorithmic bias is pervasive, but it’s not intractable. We just need to admit it exists and take the smart steps to fix it.

For more information, go to CrowdFlower.com.