Bias in AI Increasingly Recognized; Progress Being Made

Bias in AI decision-making and in machine learning algorithms has emerged as a real issue in the march of AI progress. Here is an update on where things stand and on efforts to recognize and counteract bias, including a discussion of selected AI startups.

AI reflects the bias of its creators, notes Will Byrne, CEO of Groundswell, in a recent article in Fast Company. Societal bias – attributing distinct traits to individuals or groups without any data to back it up – is a stubborn problem. AI has the potential to make it worse.

“The footprint of machine intelligence on critical decisions is often invisible, humming quietly beneath the surface,” he writes. AI is driving decision-making on loan-worthiness, medical diagnosis, job candidates, parole determination, criminal punishment and educator performance.

How will AI be fair and inclusive? How will it engage and support the marginalized and most vulnerable in society?

Courts across the US are using a software tool suspected to be biased against African-Americans, flagging them as likely to commit future crimes at twice the rate of white defendants while underestimating the likelihood of future crimes among white defendants, according to a recent report by ProPublica, the nonprofit investigative journalism outfit. The software tool, developed by Northpointe, uses 137 questions, including “Was one of your parents ever sent to prison?” The tool is in widespread use; Northpointe has refused to make the algorithm transparent, citing its proprietary business value.

AI is only as effective as the data it is trained on, Byrne wrote in Fast Company. When Microsoft introduced Tay.ai to the world in 2016, the conversational chatbot was designed to use live interactions on Twitter to get “smarter” in real time. But Tay quickly turned horribly racist and misogynistic and was shut down after 16 hours.

Trend Toward More Openness

The trend now is toward opening up the black box of AI decision-making algorithms. The AI Now Institute, a nonprofit advocating for fair algorithms, has proposed that if an algorithm providing services to people cannot explain its decisions, it should not be used. Regulations requiring such transparency from AI systems are likely in the near future. The General Data Protection Regulation standards of the European Union, set to go into effect on May 25, 2018, push in this direction as well.

Within the data science community, OpenAI is a nonprofit developing open source code in the new field of explainable AI, focusing on systems that can explain the reasoning behind their decisions to human users.
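As a loose illustration of what “explaining a decision” can look like in code (this is not OpenAI’s work or any particular explainable-AI library, and the feature names and data are hypothetical), the Python sketch below reports per-feature contributions for a single prediction from a linear model trained on synthetic data:

```python
# Illustrative sketch only: per-feature contributions for one prediction from
# a linear model, trained on synthetic data. Feature names are hypothetical
# and this does not represent any specific explainable-AI system.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "years_employed"]  # hypothetical

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain(x: np.ndarray) -> None:
    """Print each feature's contribution (coefficient * value) to the decision score."""
    contributions = model.coef_[0] * x
    for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
        print(f"{name}: {c:+.2f}")
    prob = model.predict_proba(x.reshape(1, -1))[0, 1]
    print(f"predicted probability of approval: {prob:.2f}")

explain(X[0])  # explains a single applicant's prediction
```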

Some point to the importance of having teams with diverse backgrounds across race, gender, culture and socioeconomic status designing and building AI systems. The ranks of Ph.D. technologists and mathematicians who have advanced the AI field need to expand; sociologists, ethicists, psychologists and humanities experts need to join them.

It may be that separate algorithms are needed for different groups. In job-candidate software, the predictors of success for women engineers and for male engineers are not the same. Digital affirmative action may be able to correct for structural bias that might otherwise be invisible; a hedged sketch of the idea follows.
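As an illustration of that idea only, and not a description of any vendor’s product, here is a minimal Python sketch that trains a separate model per group and applies group-specific decision thresholds; the data frame, column names and threshold values are all hypothetical:

```python
# Hypothetical sketch: one model per group, with group-specific thresholds.
# Assumes a pandas DataFrame with illustrative columns: the FEATURES columns,
# a binary "successful" label, and a "group" column. Not a real product.
import pandas as pd
from sklearn.linear_model import LogisticRegression

FEATURES = ["years_experience", "skills_score", "interview_score"]  # hypothetical
THRESHOLDS = {"women": 0.45, "men": 0.55}  # hypothetical, tuned per group

def fit_per_group_models(df: pd.DataFrame) -> dict:
    """Learn each group's own predictors of success from that group's data."""
    models = {}
    for group, subset in df.groupby("group"):
        models[group] = LogisticRegression(max_iter=1000).fit(
            subset[FEATURES], subset["successful"]
        )
    return models

def decide(models: dict, candidate_row: pd.DataFrame, group: str) -> bool:
    """Score a candidate with their group's model and apply that group's threshold."""
    score = models[group].predict_proba(candidate_row[FEATURES])[0, 1]
    return score >= THRESHOLDS[group]
```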

Efforts Underway to Address Bias in AI Include Startups

AI Now was launched at a conference at MIT in July 2017. The founders were Kate Crawford, a researcher at Microsoft, and Meredith Whittaker, a researcher at Google. In an email to MIT Technology Review, Crawford said, “It’s still early days for understanding algorithmic bias. Just this year we’ve seen more systems that have issues, and these are just the ones that have been investigated.”

Cathy O’Neil is a mathematician and author of the book, “Weapons of Math Destruction,” which highlights the risk of algorithmic bias. “Algorithms replace human processes, but they are not held to the same standards,” she has said. “People trust them too much.”

O’Neil is now head of Online Risk Consulting & Algorithmic Auditing, a startup set up to help businesses identify and correct bias in the algorithms they use. The firm’s clients include Rentlogic, a company that grades apartments in New York City. The company is also engaged in several projects in industries such as manufacturing, banking and education.

Asked in an email interview with AI Trends about the outlook for addressing bias in AI algorithms, O’Neil said, “It’s an emerging field. I’m not sure how or exactly when but within the next two decades we will either have solved the problem of algorithmic accountability or we will have submitted our free will to stupid and flawed machines. I know which future I’d prefer.”

Also, “There’s increasing academic work on the topic (see FAT* conference discussion below) but of course the IP laws and licenses tilt the playing field towards the tech giants. Not to mention that they are the ones who own all our data. So there’s a limited amount that outside researchers can accomplish without regulations or subpoenas.”

O’Neil continued, “But again I think the current state of affairs will end. I just don’t know exactly how much damage will take place before it does.”

FAT* Conference Gaining Steam

The conference on Fairness, Accountability, and Transparency (FAT*), which held its fifth annual event in February 2018, brings together researchers and practitioners interested in fairness, accountability and transparency in socio-technical systems.

This community sees progress being made to address bias in AI technologies and automated decision-making. The group has a multidisciplinary and computer science-focused perspective, said Joshua Kroll, program chair, in an email interview with AI Trends. “We’ve seen truly exponential growth in the interest in this area,” said Kroll, a computer scientist who is a Postdoctoral Research Scholar at the UC Berkeley School of Information.

“From our early workshops on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) starting in 2014 with a few dozen people, we’ve had yearly doubling in both the amount of contributed work and the number of event attendees. At this year’s conference, for example, we had over 500 people registered with a waiting list of over 400 people. And we’ve reached the selectivity of top-tier research venues in computer science to select the 17 research papers chosen for presentation as well as the six tutorial sessions,” Kroll said.

He added, “One important improvement is the way scholars and practitioners alike are starting to view these problems as cutting across different concerns and requiring solutions from many disciplines. The community, by and large, realizes that there will be no single “most fair” algorithm, but rather that fairness (or the elimination of bias) will be a process combining measurements and mitigations at the technical level with improvements in human-level processes for understanding what technology is doing.”

This year’s FAT* featured an interdisciplinary group of speakers on a range of topics, including how to deploy responsible models in life-critical situations. One session focused on the use of machine learning to support screening of referrals to a child protection agency in Pennsylvania.

Presentations on face recognition systems showed that while they had very good performance overall (88-93% accuracy), they performed much worse for darker-skinned faces (77-87% accuracy) and for women (79-89% accuracy). Performance was even worse for people at the intersection of those two subgroups, i.e., darker-skinned women (65-79% accuracy), Kroll said.
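The kind of disaggregated reporting behind those numbers can be sketched in a few lines of Python; the column names and toy data below are hypothetical, not drawn from the studies presented at FAT*:

```python
# Minimal sketch of disaggregated accuracy reporting; the DataFrame and its
# column names are hypothetical, not from the FAT* face recognition studies.
import pandas as pd

def subgroup_accuracy(results: pd.DataFrame, by: list) -> pd.Series:
    """Accuracy within each subgroup defined by the `by` columns."""
    correct = results["y_pred"] == results["y_true"]
    return correct.groupby([results[c] for c in by]).mean()

results = pd.DataFrame({
    "y_true":    [1, 1, 0, 1, 0, 1],
    "y_pred":    [1, 0, 0, 1, 1, 1],
    "skin_type": ["darker", "darker", "darker", "lighter", "lighter", "lighter"],
    "gender":    ["female", "male", "female", "female", "male", "male"],
})

print(subgroup_accuracy(results, ["skin_type"]))            # per skin type
print(subgroup_accuracy(results, ["gender"]))               # per gender
print(subgroup_accuracy(results, ["skin_type", "gender"]))  # intersectional view
```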

“Nearly all of the work at FAT* is meant to change the way people design and build these systems to help them understand and avoid problems of bias or other unintended consequences,” he said. “The work on face recognition accuracy, for example, caused one of the companies whose systems were examined to replicate the study internally and make changes to their algorithms to reduce or eliminate the problem.” The effect of those changes had not yet been validated at the time of the conference.

“I think the most important takeaway from FAT* and the growth of this community has been the idea that we won’t make algorithms fair, accountable, or transparent if we only think about how to intervene purely at the technical level,” Kroll said. “That is, while it’s important and useful to develop technologies that explicitly mitigate bias, we still need to understand which biases need to be corrected or which parts of a population need extra protection. And even when we know that, such as when the law forbids discrimination on the basis of a protected attribute like race or gender, we still need to take a wide view to understand the ways in which a system causes negative impacts to those protected groups.”

Finally, he said, “It’s exciting to me that we’re starting to see ideas from this research community make the jump from the academic world into real practice. I’m excited to see companies thinking hard about these issues and sending top engineering leadership to engage with and learn from the research community on these problems.”

(For more information, go to FAT*.)

Google Sensitized to Bias

Google’s cloud-based machine learning systems aim to make AI more accessible; with that comes risk that bias will creep in.

John Giannandrea, AI chief at Google, was quoted in an October 2017 article in MIT Technology Review as being seriously concerned about bias in AI algorithms. “If we give these systems biased data, they will be biased,” he stated. “It’s important that we be transparent about the training data that we are using, and are looking for hidden biases in it; otherwise, we are building biased systems. If someone is trying to sell you a black box system for medical decision support, and you don’t know how it works or what data was used to train it, then I wouldn’t trust it.”

Google recently organized its own conference on the relationship between humans and AI systems, which included speakers on the subject of bias. Google researcher Maya Gupta described her efforts to make more transparent algorithms as part of a project known internally as “GlassBox.” Karrie Karahalios, a professor of computer science at the University of Illinois, presented on the difficulty of detecting bias in how Facebook selects articles for its News Feed.

Recruiting Software Firms Aim to Cut Down Bias

Recruiting software firms have a keen interest in reducing or eliminating bias in their approaches. Mya Systems of San Francisco, founded in 2012, does this through reliance on a chatbot named Mya. Co-founder Eyal Grayevsky told Wired in a recent interview that Mya is programmed to interview and evaluate job candidates by asking objective, performance-based questions, avoiding the subconscious judgments a human interviewer may make. “We’re taking out bias from the process,” he stated.

Startup HireVue seeks to eliminate bias from recruiting through the use of video- and text-based software. The program extracts up to 25,000 data points from video interviews. Customers include Intel, Vodafone, Unilever and Nike. The assessments are based on factors including facial expressions, vocabulary and abstract qualities such as candidate empathy. HireVue CTO Loren Larsen was quoted as saying that candidates are “getting the same shot regardless of gender, ethnicity, age, employment gaps or college attended.”

The startup recruiting software suppliers are not blind to the possibility that bias can still occur in an AI system. Laura Mather, founder and CEO of AI recruitment platform Talent Sonar, was quoted in Wired as seeing “a huge risk that using AI in the recruiting process is going to increase bias and not reduce it.” This is because AI depends on a training set generated by a human team that may not be diverse enough.

This risk is echoed by Y-Vonne Hutchinson, the executive director of ReadySet, a diversity consultancy based in Oakland. “We try not to see AI as a panacea,” she told Wired. “AI is a tool and AI has makers and sometimes AI can amplify the biases of its makers and the blind spots of its makers.” Diversity training helps human recruiters spot bias in themselves and others, she argues.

By John P. Desmond, AI Trends Editor