Entrepreneurs Taking on Bias in Artificial Intelligence


Whether it’s a navigation app such as Waze, a music recommendation service such as Pandora or a digital assistant such as Siri, odds are you’ve used artificial intelligence in your everyday life.

“Today 85 percent of Americans use AI every day,” says Tess Posner, CEO of AI4ALL.

AI has also been touted as the new must-have for business, for everything from customer service to marketing to IT. However, for all its usefulness, AI also has a dark side. In many cases, the algorithms are biased.

Some of the examples of bias are blatant, such as Google’s facial recognition tool tagging black faces as gorillas or an algorithm used by law enforcement to predict recidivism disproportionately flagging people of color. Others are more subtle. When Beauty.AI held an online contest judged by an algorithm, the vast majority of “winners” were light-skinned. Search Google for images of “unprofessional hair” and the results you see will mostly be pictures of black women (even searching for “man” or “woman” brings back images of mostly white individuals).

While more light has been shined on the problem recently, some feel it’s not an issue addressed enough in the broader tech community, let alone in research at universities or the government and law enforcement agencies that implement AI.

“Fundamentally, bias, if not addressed, becomes the Achilles’ heel that eventually kills artificial intelligence,” says Chad Steelberg, CEO of Veritone. “You can’t have machines where their perception and recommendation of the world is skewed in a way that makes its decision process a non-sequitur from action. From just a basic economic perspective and a belief that you want AI to be a powerful component to the future, you have to solve this problem.”

As artificial intelligence becomes ever more pervasive in our everyday lives, there is now a small but growing community of entrepreneurs, data scientists and researchers working to tackle the issue of bias in AI. I spoke to a few of them to learn more about the ongoing challenges and possible solutions.

Cathy O’Neil, founder of O’Neil Risk Consulting & Algorithmic Auditing

Solution: Algorithm auditing

Back in the early 2010s, Cathy O’Neil was working as a data scientist in advertising technology, building algorithms that determined what ads users saw as they surfed the web. The inputs for the algorithms included innocuous-seeming information like what search terms someone used or what kind of computer they owned.


However, O’Neil came to realize that she was actually creating demographic profiles of users. Although gender and race were not explicit inputs, O’Neil’s algorithms were discriminating against users of certain backgrounds, based on the other cues.

As O’Neil began talking to colleagues in other industries, she found this to be fairly standard practice. These biased algorithms weren’t just deciding what ads a user saw, but arguably more consequential decisions, such as who got hired or whether someone would be approved for a credit card. (These observations have since been studied and confirmed by O’Neil and others.)

What’s more, in some industries — for example, housing — if a human were to make decisions based on the specific set of criteria, it likely would be illegal due to anti-discrimination laws. But, because an algorithm was deciding, and gender and race were not explicitly the factors, it was assumed the decision was impartial.
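To make the mechanism concrete, here is a minimal sketch in Python using entirely synthetic data (my illustration, not O’Neil’s actual models): a classifier that is never shown the protected attribute still produces different approval rates by group, because a seemingly neutral feature acts as a proxy and the historical labels already encode the disparity.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                               # protected attribute, never given to the model
zip_code = np.where(rng.random(n) < 0.8, group, 1 - group)  # "neutral" feature that tracks group 80% of the time
income = rng.normal(50 + 10 * group, 8, n)                  # historical inequity reflected in the data
approved = (income + 5 * zip_code + rng.normal(0, 5, n) > 60).astype(int)  # past decisions used as labels

X = np.column_stack([zip_code, income])                     # the model sees only the "neutral" features
pred = LogisticRegression(max_iter=1000).fit(X, approved).predict(X)

for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[group == g].mean():.2f}")

Even though the protected attribute never appears as an input, the predicted approval rates diverge sharply between the two groups.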

“I had left the finance [world] because I wanted to do better than take advantage of a system just because I could,” O’Neil says. “I’d entered data science thinking that it was less like that. I realized it was just taking advantage in a similar way to the way finance had been doing it. Yet, people were still thinking that everything was great back in 2012. That they were making the world a better place.”

O’Neil walked away from her adtech job. She wrote a book, “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy,” about the perils of letting algorithms run the world, and started consulting.

Eventually, she settled on a niche: auditing algorithms.

“I have to admit that it wasn’t until maybe 2014 or 2015 that I realized this is also a business opportunity,” O’Neil says.

Right before the election in 2016, that realization led her to found O’Neil Risk Consulting & Algorithmic Auditing (ORCAA).

“I started it because I realized that even if people wanted to stop unfair or discriminatory practices, they wouldn’t actually know how to do it,” O’Neil says. “I didn’t actually know. I didn’t have good advice to give them.” But she wanted to figure it out.

So, what does it mean to audit an algorithm?
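One basic check an audit can include is a disparate-impact comparison of outcomes across groups. The sketch below is only an illustration of that idea under assumed data, not ORCAA’s actual methodology; the four-fifths threshold is a common rule of thumb, not a legal determination.

import pandas as pd

# hypothetical log of an algorithm's decisions, broken out by a protected group
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   0,   1,   1,   0,   0,   1,   0],
})

rates = decisions.groupby("group")["selected"].mean()   # selection rate per group
ratio = rates.min() / rates.max()                       # disparate-impact ratio
print(rates)
print(f"disparate-impact ratio: {ratio:.2f} (values below 0.80 often trigger further review)")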

Unboxing Google’s 7 New Principles of Artificial Intelligence


By Ivan Rodriguez, founder, Geek on Record, and a software engineering manager at Microsoft

How many times have you heard that Artificial Intelligence (AI) is humanity’s biggest threat? Some people think that Google brought us a step closer to a dark future when it announced Duplex last month: a new capability of Google’s digital Assistant that enables it to make phone calls on your behalf to book appointments with small businesses.

The root of the controversy lay in the fact that the Assistant successfully pretended to be a real human, never disclosing its true identity to the person on the other side of the call. Many tech experts wondered whether this is an ethical practice or whether it’s necessary to hide the digital nature of the voice.

Google was also criticized last month over another sensitive topic: the company’s involvement in a Pentagon program that uses AI to interpret video imagery and could be used to improve the targeting of drone strikes. Thousands of employees signed a letter protesting the program and asking for change:

“We believe that Google should not be in the business of war. Therefore we ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.”

A “clear policy” around AI is a bold ask because none of the big players have ever done it before, and for good reason. The technology is so new and powerful that it’s still unclear how many areas of our lives we will dare to infuse with it, and it’s difficult to set rules around the unknown. Google Duplex is a good example of this: it’s a technological development we would have considered “magical” 10 years ago, yet today it scares many people.

Regardless, Sundar Pichai not only complied with the request, but took it a step further by creating 7 principles that the company will promote and enforce as one of the industry drivers of AI. Here are some remarks on each of them:

1. Be socially beneficial

For years, we have dealt with comfortable boundaries, creating increasingly intelligent entities in very focused areas. AI is now getting the ability to switch between different domain areas in a way that is transparent to the user. For example, having an AI that knows your habits at home is very convenient, especially when your home appliances are connected to the same network. When that same AI also knows your habits outside the home, like your favorite restaurants, your friends, your calendar, etc., its influence on your life can become scary. It’s precisely this convenience that is pushing us out of our comfort zone.

This principle is the most important one, since it commits to “respect cultural, social, and legal norms”. It’s a broad principle, but it’s intended to ease that uncomfortable feeling by adapting AI to our times and letting it evolve at the same pace as our social conventions do.

2. Avoid creating or reinforcing unfair bias

AI can become racist if we allow it. A good example of this happened in March 2016, when Microsoft unveiled an AI with a Twitter interface and, in less than a day, people taught it the worst aspects of our humanity. AI learns by example, so ensuring that safeguards are in place to avoid this type of situation is critical. Our kids are going to grow up in a world increasingly assisted by AI, so we need to educate the system before it’s exposed to internet trolls and other bad actors.

3. Be built and tested for safety

This point goes hand in hand with the previous one. In fact, Microsoft’s response to the Tay fiasco was to take it down and admit an oversight in the type of scenarios the AI was tested against. Safety should always be one of the first considerations when designing an AI.

4. Be accountable to people

The biggest criticism Google Duplex received was whether or not it was ethical to mimic a real human without letting other humans know. I’m glad that this principle just states that “technologies will be subject to appropriate human direction and control”, since it doesn’t discount the possibility of building human-like AIs in the future.

An AI that makes a phone call on our behalf must sound as human as possible, since it’s the best way of ensuring a smooth interaction with the person on the other side. Human-like AIs shall be designed with respect, patience and empathy in mind, but also with human monitoring and control capabilities.

5. Incorporate privacy design principles

When the convenience created by AI intersects with our personal feelings or private data, a new concern is revealed: our personal data can be used against us. The Cambridge Analytica incident, in which personal data was shared with unauthorized third parties, magnified the problem by jeopardizing users’ trust in technology.

Google didn’t use many words on this principle, probably because it’s the most difficult one to clarify without directly impacting their business model. However, it represents the biggest tech challenge of the decade: finding the balance between giving up your privacy and getting a reasonable benefit in return. Providing “appropriate transparency and control over the use of data” is the right mitigation, but it won’t make us less uncomfortable when an AI knows the most intimate details about our lives.

Read the source post at Geek on Record.

Here are 3 Tips to Reduce Bias in AI-Powered Chatbots


AI-powered chatbots that use natural language processing are on the rise across all industries. A practical application is providing dynamic customer support that allows users to ask questions and receive highly relevant responses. In health care, for example, one customer may ask “What’s my copay for an annual check-up?” and another may ask “How much does seeing the doctor cost?” A smartly trained chatbot will understand that both questions have the same intent and provide a contextually relevant answer based on available data.
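As a rough illustration of that intent-matching idea (a toy sketch, not how any particular vendor builds its chatbot; the intent labels and training utterances are invented), a simple text classifier can map both phrasings to the same intent:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# tiny invented training set: several phrasings per intent
utterances = [
    "What's my copay for an annual check-up?",
    "How much does seeing the doctor cost?",
    "What will I owe for an office visit?",
    "Where is the nearest in-network clinic?",
    "Find a doctor near me",
]
intents = ["cost_of_visit", "cost_of_visit", "cost_of_visit", "find_provider", "find_provider"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(utterances, intents)

# a new phrasing with the same underlying intent; expected to map to cost_of_visit
print(clf.predict(["How much is a check-up going to cost me?"]))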

What many people don’t realize is that AI-powered chatbots are like children: They learn by example. Just like a child’s brain in early development, AI systems are designed to process huge amounts of data in order to form predictions about the world and act accordingly. AI solutions are trained by humans and synthesize patterns from experience. However, there are many patterns inherent in human societies that we don’t want to reinforce — for example, social biases. How do we design machine learning systems that are not only intelligent but also egalitarian?

Social bias is an increasingly important conversation in the AI community, and we still have a lot of work to do. Researchers from the University of Massachusetts recently found that the accuracy of several common NLP tools was dramatically lower for speakers of “non-standard” varieties of English, such as African American Vernacular English (AAVE). Another research group, from MIT and Stanford, reported that three commercial face-recognition programs demonstrated both skin-type and gender biases, with significantly higher error rates for females and for individuals with darker skin. In both of these cases, we see the negative impact of training a system on a non-representational data set. AI can learn only as much as the examples it is exposed to — if the data is biased, the machine will be as well.

Bots and other AI solutions now assist humans with thousands of tasks across every industry, and bias can limit a consumer’s access to critical information and resources. In the field of health care, eradicating bias is critical. We must ensure that all people, including those in minority and underrepresented populations, can take advantage of tools that we’ve created to save them money, keep them healthy, and help them find care when they need it most.

So, what’s the solution? Based on our experience training with IBM Watson for more than four years, you can minimize bias in AI applications by considering the following suggestions:

  • Be thoughtful about your data strategy;
  • Encourage a representational set of users; and
  • Create a diverse development team.
1. Be thoughtful about your data strategy

When it comes to training, AI architects have choices to make. The decisions are not only technical, but ethical. If our training examples aren’t representative of our users, we’re going to have low system accuracy when our application makes it to the real world.

It may sound simple to create a training set that includes a diverse set of examples, but it’s easy to overlook if you aren’t careful. You may need to go out of your way to find or create datasets with examples from a variety of demographics. At some point, we will also want to train our bot on data examples from real usage, rather than relying on scraped or manufactured datasets. But what do we do if even our real users don’t represent all the populations we’d like to include?

We can take a laissez-faire approach, allowing natural trends to guide development without editing the data at all. The benefit of this approach is that you can optimize performance for your general population of users. However, that may come at the expense of an underrepresented population that we don’t want to ignore. For example, if the majority of users interacting with a chatbot are under the age of 65, the bot will see very few questions about medical services that apply only to an over-65 population, such as osteoporosis screenings and fall prevention counseling. If the bot is only trained on real interactions, with no additional guidance, it may not perform as well on questions about those services, which disadvantages older adults who need that information.

To combat this at my company, we create synthetic training questions or seek out another data source for questions about osteoporosis screenings and fall prevention counseling. By strategically enforcing a broader, more representative distribution in our training data, we allow our bot to learn a wider range of topics without unfair preference for the interests of the majority user demographic.
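A minimal sketch of that rebalancing step, with invented questions, intent labels and a hypothetical 20 percent floor for the under-represented topic: synthetic examples are appended until the topic reaches the chosen share of the training set.

import random
from collections import Counter

# real traffic skews heavily toward the majority demographic's questions (invented numbers)
real_traffic = (
    [("What's my copay for a check-up?", "copay")] * 90
    + [("Do you cover osteoporosis screening?", "senior_services")] * 5
)
synthetic_senior_questions = [
    ("Is a bone density test covered?", "senior_services"),
    ("Do you offer fall prevention counseling?", "senior_services"),
]

training_set = list(real_traffic)
target_share = 0.20   # assumed floor for the under-represented topic
while Counter(intent for _, intent in training_set)["senior_services"] / len(training_set) < target_share:
    training_set.append(random.choice(synthetic_senior_questions))

print(Counter(intent for _, intent in training_set))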

Read the source article in VentureBeat.

High Quality Data Key to Eliminating Bias in AI


Biases are an incurable symptom of the human decision-making process. We make assumptions, judgements and decisions based on imperfect information, as our brains are wired to take the path of least resistance and draw quick conclusions that affect us socially as well as financially.

The inherent human “negative bias” is a byproduct of our evolution. For our survival it was of primal importance to be able to quickly assess the danger posed by a situation, an animal or another human. However, our discerning inclinations have evolved into more pernicious biases over the years as cultures have become enmeshed and our discrimination has been exacerbated by religion, caste, social status and skin color.

Human bias and machine learning

In traditional computer programming, people hand-code a solution to a problem. With machine learning (a subset of AI), computers learn to find the solution by finding patterns in the data they are fed, ultimately, by humans. Because it is impossible to separate ourselves from our own human biases, those biases naturally feed into the technology we create.

Examples of AI gone awry abound in technology products. In one unfortunate example, Google had to apologise for tagging a photo of black people as gorillas in its Photos app, which is supposed to auto-categorise photos by recognising their subjects (cars, planes, etc). This was caused by the heuristic known as “selection bias”. Nikon had a similar incident with its cameras: when pointed at Asian subjects and focused on their faces, they prompted the question “Is someone blinking?”

Potential biases in machine learning:
  • Interaction bias: If we are teaching a computer to learn to recognize what an object looks like, say a shoe, what we teach it to recognize is skewed by our interpretation of a shoe (men’s/women’s or sports/casual), and the algorithm will only learn and build upon that basis.

  • Latent bias: If you’re training your programme to recognize a doctor and your data sample consists of past famous physicians, the programme will be highly skewed towards males.

  • Similarity bias: Just what it sounds like. When choosing a team, for example, we would favor those most similar to us over those we view as “different”.

  • Selection bias: The data used to train the algorithm over-represents one population, making it operate better for them at the expense of others.

Algorithms and artificial intelligence (AI) are intended to minimize the human emotion and involvement in data processing that can be skewed by human error, and many would think this sanitizes the data completely. However, any human bias or error in collecting the data that goes into the algorithm will actually be exaggerated in the AI output.
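As a concrete illustration of the selection-bias item above (the counts and population shares here are invented), a quick check of a training set’s demographic mix against a reference population can flag the problem before any model is trained:

# hypothetical training-set counts versus assumed population shares
training_counts = {"lighter_skin": 9200, "darker_skin": 800}
reference_share = {"lighter_skin": 0.70, "darker_skin": 0.30}

total = sum(training_counts.values())
for group, count in training_counts.items():
    observed = count / total
    expected = reference_share[group]
    print(f"{group}: {observed:.0%} of training data vs {expected:.0%} of population "
          f"({observed / expected:.2f}x representation)")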

Gender bias in Fintech

Every industry has its own gender and race skews, and the technology industry, like the financial industry, is dominated by white males. Silicon Valley has earned a reputation as a “Brotopia” due to its boys’ club culture.

UK Report Urges Action to Combat AI Bias, Ensure Diversity in Data Sets


The need for diverse development teams and truly representational data-sets to avoid biases being baked into AI algorithms is one of the core recommendations in a lengthy Lords committee report looking into the economic, ethical and social implications of artificial intelligence, and published today by the upper House of the UK parliament.

“The main ways to address these kinds of biases are to ensure that developers are drawn from diverse gender, ethnic and socio-economic backgrounds, and are aware of, and adhere to, ethical codes of conduct,” the committee writes, chiming with plenty of extant commentary around algorithmic accountability.

“It is essential that ethics take centre stage in AI’s development and use,” adds committee chairman, Lord Clement-Jones, in a statement. “The UK has a unique opportunity to shape AI positively for the public’s benefit and to lead the international community in AI’s ethical development, rather than passively accept its consequences.”

The report also calls for the government to take urgent steps to help foster “the creation of authoritative tools and systems for auditing and testing training datasets to ensure they are representative of diverse populations, and to ensure that when used to train AI systems they are unlikely to lead to prejudicial decisions” — recommending a publicly funded challenge to incentivize the development of technologies that can audit and interrogate AIs.

“The Centre for Data Ethics and Innovation, in consultation with the Alan Turing Institute, the Institute of Electrical and Electronics Engineers, the British Standards Institute and other expert bodies, should produce guidance on the requirement for AI systems to be intelligible,” the committee adds. “The AI development sector should seek to adopt such guidance and to agree upon standards relevant to the sectors within which they work, under the auspices of the AI Council” — the latter being a proposed industry body it wants established to help ensure “transparency in AI”.

The committee is also recommending a cross-sector AI Code to try to steer developments in a positive, societally beneficial direction — though not for this to be codified in law (the suggestion is it could “provide the basis for statutory regulation, if and when this is determined to be necessary”).

Read the source article at TechCrunch.

5 Truths About Artificial Intelligence Everyone Should Know


By Rana el Kaliouby, Co-founder and CEO, Affectiva
Last week, I was in LA for the premiere of a new AI documentary, “Do you trust this computer?” (See video link below.) It was a full house with a few hundred audience members. I was one of the AI scientists featured in the documentary, along with big wigs like Elon Musk, Stuart Russell, Andrew Ng and writers Jonathan Nolan and John Markoff. Elon Musk kicked off the evening with director Chris Paine, emphasizing how AI was an important topic that could very well determine the future of humanity. The excitement in the air was palpable. I was one of seven “AI experts” who were to be invited on stage after the screening for a Q&A session with the audience. Shivon Zilis, Project Director at OpenAI, and I were the only women.

The documentary did an excellent job surveying the research and applications of AI, from automation and robots to medicine, automated weapons, social media and data, as well as the future of the relationship between humans and machines. The work my team and I are doing provided a clear example of the good that can come out of AI.

As I watched in my seat, I could hear the audience gasp at times, and I couldn’t help but notice a couple of things: one, there was this foregone conclusion that AI is out to get us, and two, this field is still so incredibly dominated by men – white men specifically. Besides me, there were two other women featured–compared to about a dozen men. But it wasn’t just the numbers–it was the total air time. The majority of the time, the voice on screen was male. I vowed that on stage that night, I would make my voice heard.

Here are some of my key thoughts coming out of the premiere and dialogue around it:
1. AI is in dire need of diversity.

The first question asked from the audience was, “Do you see an alternative narrative here–one that is more optimistic?” YES, I chimed in, quoting Yann LeCun, head of AI research at Facebook and a professor at NYU: “Intelligence is not correlated with the desire to dominate. Testosterone is!” I added that we need diversity in technology–gender diversity, ethnic diversity, and diversity of backgrounds and experiences. Perhaps if we did that, the rhetoric around AI would be more about compassion and collaboration, and less about taking over the world. The audience applauded.

2. Technology is neutral–we, as a society, decide whether we use it for good or bad.

That has been true throughout history. AI has so much potential for good. As thought leaders in the AI space, we need to advocate for these use cases and educate the world about the potentials for abuse so that the public is involved in a transparent discussion about these use cases. In a sense that’s what is so powerful about this documentary. It will not only educate the public but will spark a conversation with the public that is so desperately needed.

My company, Affectiva, joined leading technology companies in the Partnership on AI–a consortium of companies including Amazon, Google, Apple and many more that is working to set a standard for ethical uses of AI. Yes, regulation and legislation are important, but too often they lag, so it’s up to leaders in the industry to spearhead these discussions and act on them accordingly. To that end, ethics also needs to become a mandatory component of AI education.

3. We need to ensure that AI is equitable, accountable, transparent and inclusive.

The real problem is not the existential threat of AI. Instead, it is in the development of ethical AI systems. Unfortunately today, many are accidentally building bias into AI systems that perpetuate the racial, gender, and ethnic biases existing in society today. In addition, it is not clear who is accountable for AI’s behavior as it is applied across industries. Take the recent tragic accident where a self-driving Uber vehicle killed a pedestrian. It so happens that in that case, there was a safety driver in the car. But who is responsible: the vehicle? The driver? The company? These are incredibly difficult questions, but we need to set standards around accountability for AI to ensure proper use.

4. It’s a partnership, not a war.

I don’t agree with the view that it’s humans vs. machines. With so much potential for AI to be harnessed for good (assuming we take the necessary steps outlined above), we need to shift the dialogue to see the relationship as a partnership between humans and machines. There are several areas where this is the case:

  • Medicine. For example, take mental health conditions such as autism or depression. It is estimated that we have a need for 15,000 mental health professionals in the United States alone. That number is huge, and it doesn’t even factor in countries around the world where the need is even greater. Virtual therapists and social robots can augment human clinicians using AI to build rapport with patients at home, being preemptive, and getting patients just-in-time help. AI alone is not enough, and will not take doctors’ place. But there’s potential for the technology, together with human professionals, to expand what’s possible with healthcare today.
  • Autonomous driving vehicles. While these systems are still being developed, they will sometimes fail even as they keep getting better. The role of the human co-pilot or safety driver is critical. For example, there are already cameras facing the driver in many vehicles that monitor whether the human driver is paying attention or distracted. This is key in ensuring that, in a case where a semi-autonomous vehicle must pass control back to a human driver, the person is actually ready and able to take over safely. This collaboration between AI and humans will be critical to ensuring safety as autonomous vehicles continue to take to the streets around us.
5. AI needs emotional intelligence.

AI today has a high IQ but a low EQ, or emotional intelligence. But I do believe that the merger of EQ and IQ in technology is inevitable, as so many of our decisions, both personal and professional, are driven by emotions and relationships. That’s why we’re seeing a rise in relational and conversational technologies like Amazon Alexa and chatbots. Still, they’re lacking emotion. It’s inevitable that we will continue to spend more and more time with technology and devices, and while many (rightly) believe that this is degrading our humanity and ability to connect with one another, I see an opportunity. With Emotion AI, we can inject humanity back into our connections, enabling not only our devices to better understand us, but fostering a stronger connection between us as individuals.

While I am an optimist, I am not naive.

Following the panel, I received an incredible amount of positive feedback. The audience appreciated the optimistic point of view. But that doesn’t mean I am naive or disillusioned. I am part of the World Economic Forum Global Council on Robotics and AI, and we spend a fair amount of our time together as a group discussing ethics, best practices, and the like. I realize that not everyone is taking ethics into consideration. That is definitely a concern. I do worry that organizations and even governments who own AI and data will have a competitive advantage and power, and those who don’t will be left behind.

The good news is: we, as a society, are designing those systems. We get to define the rules of the game.

AI is not an existential threat. It’s potentially an existential benefit–if we make it that way. At the screening, there were so many young people in the audience watching. I am hopeful that the documentary renews our commitment to AI ethics and inspires us to apply AI for good.

Link to video, Do you Trust this Computer?

Learn more about Affectiva.

 

Bias in AI Increasingly Recognized; Progress Being Made


Bias in AI decision-making and in the algorithms of machine learning has been outed as a real issue in the march of AI progress. Here is an update on where we are and efforts being made to recognize bias and counteract it, including a discussion of selected AI startups.

AI reflects the bias of its creators, notes Will Byrne, CEO of Groundswell, in a recent article in Fast Company. Societal bias – attributing distinct traits to individuals or groups without any data to back it up – is a stubborn problem. AI has the potential to make it worse.

“The footprint of machine intelligence on critical decisions is often invisible, humming quietly beneath the surface,” he writes. AI is driving decision-making on loan-worthiness, medical diagnosis, job candidates, parole determination, criminal punishment and educator performance.

How will AI be fair and inclusive? How will it engage and support the marginalized and most vulnerable in society?

Courts across the US are using a software tool suspected of being biased against African-Americans, predicting future crimes among them at twice the rate it does for white people, and underestimating future crimes among white people, according to a recent report by ProPublica, the non-profit investigative journalism outfit. The software tool, developed by Northpointe, uses 137 questions, including “Was one of your parents ever sent to prison?” The tool is in widespread use; Northpointe has refused to make the algorithm transparent, citing its proprietary business value.

AI is only as effective as the data it is trained on, Byrne wrote in Fast Company. When Microsoft introduced Tay.ai to the world in 2016, the conversational chatbot was to use live interactions on Twitter to get “smarter” in real time. But Tay became horribly racist and misogynist and was shut down after 16 hours.

Trend Toward More Openness

The trend now is toward opening up the black box of AI decision-making algorithms. The AI Now Institute, a nonprofit, is advocating for fair algorithms; it has proposed that if an algorithm providing services to people cannot explain its decisions, it should not be used. Regulations demanding such transparency from AI systems are likely to arrive in the near future. The General Data Protection Regulation standards of the European Union, set to go into effect on May 25, 2018, push in this direction as well.

Within the data science community, OpenAI is a nonprofit developing open source code in the new field of explainable AI, focusing on systems that can explain the reasoning of their decisions to human users.

Some point to the importance of having teams with diverse backgrounds across race, gender, culture and socioeconomic status designing and building AI systems. The ranks of Ph.D. technologists and mathematicians who have advanced the AI field need to expand: sociologists, ethicists, psychologists and humanities experts need to join them.

It may be that separate algorithms are needed for different groups. In job candidate software, the predictors of success for female engineers and male engineers are not the same. Digital affirmative action may be able to correct for structural bias that might otherwise be invisible.

Efforts Underway to Address Bias in AI Include Startups

AI Now was launched at a conference at MIT in July 2017. The founders were Kate Crawford, a researcher at Microsoft, and Meredith Whittaker, a researcher at Google. In an email to MIT Technology Review, Crawford said, “It’s still early days for understanding algorithmic bias. Just this year we’ve seen more systems that have issues, and these are just the ones that have been investigated.”

Cathy O’Neil is a mathematician and author of the book, “Weapons of Math Destruction,” which highlights the risk of algorithmic bias. “Algorithms replace human processes, but they are not held to the same standards,” she has said. “People trust them too much.”

O’Neil is now head of O’Neil Risk Consulting & Algorithmic Auditing (ORCAA), a startup set up to help businesses identify and correct bias in the algorithms they use. The firm’s clients include Rentlogic, a company that grades apartments in New York City. The company is also engaged in several projects in industries such as manufacturing, banking and education.

Asked in an email interview with AI Trends about the outlook for addressing bias in AI algorithms, O’Neil said, “It’s an emerging field. I’m not sure how or exactly when but within the next two decades we will either have solved the problem of algorithmic accountability or we will have submitted our free will to stupid and flawed machines. I know which future I’d prefer.”

Also, “There’s increasing academic work on the topic (see FAT* conference discussion below) but of course the IP laws and licenses tilt the playing field towards the tech giants. Not to mention that they are the ones who own all our data. So there’s a limited amount that outside researchers can accomplish without regulations or subpoenas.”

O’Neil continued, “But again I think the current state of affairs will end. I just don’t know exactly how much damage will take place before it does.”

FAT* Conference Gaining Steam

The conference on Fairness, Accountability, and Transparency (FAT*), which held its fifth annual event in February 2018, brings together researchers and practitioners interested in fairness, accountability and transparency in socio-technical systems.

This community sees progress being made to address bias in AI technologies and automated decision-making. The group has a multidisciplinary and computer science-focused perspective, said Joshua Kroll, program chair, in an email interview with AI Trends. “We’ve seen truly exponential growth in the interest in this area,” said Kroll, a computer scientist who is a Postdoctoral Research Scholar at the UC Berkeley School of Information.

“From our early workshops on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) starting in 2014 with a few dozen people, we’ve had yearly doubling in both the amount of contributed work and the number of event attendees. At this year’s conference, for example, we had over 500 people registered with a waiting list of over 400 people. And we’ve reached the selectivity of top-tier research venues in computer science to select the 17 research papers chosen for presentation as well as the six tutorial sessions,” Kroll said.

He added, “One important improvement is the way scholars and practitioners alike are starting to view these problems as cutting across different concerns and requiring solutions from many disciplines. The community, by and large, realizes that there will be no single “most fair” algorithm, but rather that fairness (or the elimination of bias) will be a process combining measurements and mitigations at the technical level with improvements in human-level processes for understanding what technology is doing.”

This year’s FAT* featured an interdisciplinary group of speakers on a range of topics, including how to deploy responsible models in life-critical situations. One session focused on the use of machine learning to support screening of referrals to a child protection agency in Pennsylvania.

Presentations on face recognition systems showed that while the systems have very good performance overall (88-93% accuracy), they perform much worse for darker-skinned faces (77-87% accuracy) and for women (79-89% accuracy). Performance is even worse for people at the intersection of those two subgroups, i.e., darker-skinned women (65-79% accuracy), Kroll said.
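The kind of breakdown behind those numbers can be reproduced with a simple per-subgroup evaluation. The sketch below uses invented predictions rather than the studies’ data, to show how a respectable overall accuracy can hide much lower accuracy for an intersectional subgroup:

import pandas as pd

# invented evaluation results: 1 means the system classified the face correctly
results = pd.DataFrame({
    "skin":    ["lighter", "lighter", "lighter", "lighter", "darker", "darker", "darker", "darker"],
    "gender":  ["male", "male", "female", "female", "male", "male", "female", "female"],
    "correct": [1, 1, 1, 1, 1, 1, 0, 1],
})

print("overall accuracy:", results["correct"].mean())
print(results.groupby(["skin", "gender"])["correct"].agg(["mean", "count"]))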

“Nearly all of the work at FAT* is meant to change the way people design and build these systems to help them understand and avoid problems of bias or other unintended consequences,” he said. “The work on face recognition accuracy, for example, caused one of the companies whose systems were examined to replicate the study internally and make changes to their algorithms to reduce or eliminate the problem.” The effect of those changes had not yet been validated at the time of the conference.

“I think the most important takeaway from FAT* and the growth of this community has been the idea that we won’t make algorithms fair, accountable, or transparent if we only think about how to intervene purely at the technical level,” Kroll said. “That is, while it’s important and useful to develop technologies that explicitly mitigate bias, we still need to understand which biases need to be corrected or which parts of a population need extra protection. And even when we know that, such as when the law forbids discrimination on the basis of a protected attribute like race or gender, we still need to take a wide view to understand the ways in which a system causes negative impacts to those protected groups.”

Finally, he said, “It’s exciting to me that we’re starting to see ideas from this research community make the jump from the academic world into real practice. I’m excited to see companies thinking hard about these issues and sending top engineering leadership to engage with and learn from the research community on these problems.”

(For more information, go to FAT*.)

Google Sensitized to Bias

Google’s cloud-based machine learning systems aim to make AI more accessible; with that comes risk that bias will creep in.

John Giannandrea, AI chief at Google, was quoted in an October 2017 article in MIT Technology Review as being seriously concerned about bias in AI algorithms. “If we give these systems biased data, they will be biased,” he stated. “It’s important that we be transparent about the training data that we are using, and are looking for hidden biases in it; otherwise, we are building biased systems. If someone is trying to sell you a black box system for medical decision support, and you don’t know how it works or what data was used to train it, then I wouldn’t trust it,” he stated.

Google recently organized its own conference on the relationship between humans and AI systems, which included speakers on the subject of bias. Google researcher Maya Gupta described her efforts to make more transparent algorithms as part of a project known internally as “GlassBox.” A presentation on the difficulty of detecting bias in how Facebook selects articles for its News Feed was made by Karrie Karahalios, a professor of computer science at the University of Illinois.

Recruiting Software Firms Aim to Cut Down Bias

Recruiting software firms have a keen interest in reducing or eliminating bias in their approaches. Mya Systems of San Francisco, founded in 2012, does this through reliance on a chatbot named Mya. Co-founder Eyal Grayevsky told Wired in a recent interview that Mya is programmed to interview and evaluate job candidates by asking objective, performance-based questions, avoiding the subconscious judgments that a human interviewer may make. “We’re taking out bias from the process,” he stated.

Startup HireVue seeks to eliminate bias from recruiting through the use of video- and text-based software. The program extracts up to 25,000 data points from video interviews. Customers include Intel, Vodafone, Unilever and Nike. The assessments are based on factors including facial expressions, vocabulary and abstract qualities such as candidate empathy. HireVue CTO Loren Larsen was quoted as saying that candidates are “getting the same shot regardless of gender, ethnicity, age, employment gaps or college attended.”

The startup recruiting software suppliers are not blind to the possibility that bias can still occur in an AI system. Laura Mather, founder and CEO of AI recruitment platform Talent Sonar, was quoted in Wired as seeing “a huge risk that using AI in the recruiting process is going to increase bias and not reduce it.” This is because AI depends on a training set generated by a human team, which may not be diverse enough.

This risk is echoed by Y-Vonne Hutchinson, the executive director of ReadySet, a diversity consultancy based in Oakland. “We try not to see AI as a panacea,” she told Wired. “AI is a tool and AI has makers and sometimes AI can amplify the biases of its makers and the blind spots of its makers.” Diversity training helps the human recruiters to spot the bias in themselves and others, she argues.  

By John P. Desmond, AI Trends Editor

Algorithmic Bias is Real and Pervasive. Here’s How You Solve It.


By Robin Bordoli, CEO, CrowdFlower

A few months back, I found myself in one of those big electronics stores that are rapidly becoming extinct. I don’t remember exactly why I was there, but I remember a very specific moment where a woman with a Chinese accent paced up to the cashier, plopped her home voice assistant down on the counter and proclaimed: “I need to return this…thing.” The clerk nodded and asked why. “Because this damn thing doesn’t understand anything I say!”

You’d be surprised how often voice recognition systems have this problem. And why is that, exactly? It’s because they’re trained on the same kinds of voices, namely those of engineers in the Valley. That means that when an assistant hears a request in a thick Southern drawl, a wicked hard Boston accent, or a Cajun N’awlins dialect, you name it, it simply won’t know what to make of those commands.

Now, while a voice assistant not understanding your every request isn’t exactly a life-or-death problem, it’s evidence of something called algorithmic bias. And whether or not algorithmic bias is real is no longer up for debate. There are myriad examples, from ad networks showing high-paying jobs to men far more often than to women, to models that trumpet 1950s gender bias, to bogus sentencing based on classifiers, to AI-judged beauty pageants that demonstrate a preference for white contestants. Hardly a month goes by where another high-profile instance isn’t plastered across tech news.

Again, the question isn’t whether the problem exists; it’s how we solve the problem. Because it’s one we absolutely have to solve. Algorithms already control much more of our lives than most people realize. They’re responsible for whether you can get a mortgage, how much your insurance costs, what news you see, essentially everything you do and see online–and increasingly offline as well. None of us should want to live in a society where these biases are codified and amplified.

So how do we solve this problem? First, the good news: almost none of the prominent examples of algorithmic bias are due to malicious intent. In other words, it isn’t as if there’s a room full of sexist, racist programmers foisting these models on the public. It’s accidental, not purposeful. The bias, in fact, comes from the data itself. And that means we can often solve it with different–or more–data.

Take the example of a facial classifier that didn’t recognize black people as people. This is a glaring example of bias but it stems primarily from the original dataset. Namely, if an algorithm is trained on a set of white college students, it may have significant problems recognizing people with darker complexions, older people, or babies. They will be ignored. Fixing that means training that algorithm on an additional corpus of facial data, like the project the folks at Kiva are undertaking.

Kiva is a microlending platform focused predominantly on the developing world. As part of their application process, they ask prospective borrowers to include a photo of themselves, along with the other pertinent details to share with the community of lenders. In doing so, Kiva has accrued a dataset of hundreds of thousands of highly diverse images, importantly captured in non-laboratory, real-world settings. If you take that original, biased facial classifier and retrain it with additional, labeled images from a dataset that is more representative of the full spectrum of human faces, suddenly you have a model that recognizes a much wider population.
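Here is a minimal sketch of that retraining step using synthetic stand-in data rather than Kiva’s images: a classifier fit only on a narrow group does poorly on a group whose data looks different, and refitting on the combined, more representative set closes much of the gap.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# group A: the narrow original dataset; group B: an under-represented group whose
# feature distribution and decision boundary are shifted (a stand-in for real diversity)
X_a = rng.normal(0.0, 1.0, size=(800, 1))
y_a = (X_a[:, 0] > 0.0).astype(int)
X_b = rng.normal(1.0, 1.0, size=(800, 1))
y_b = (X_b[:, 0] > 1.0).astype(int)

narrow = LogisticRegression().fit(X_a, y_a)
print("trained on group A only  -> accuracy on group B:", round(narrow.score(X_b, y_b), 2))

combined = LogisticRegression().fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))
print("retrained on both groups -> accuracy on group B:", round(combined.score(X_b, y_b), 2))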

Most instances of algorithmic bias can be solved in the same way: retraining a classifier with tailored data. Those voice assistants that don’t understand accents? Once they hear enough of those accents, they will. The same is true with essentially every example I cited above. But this raises a different question: if we know how to fix algorithmic bias, why are there so many instances of it?

This is where companies need to step up. Because the instances of bias we mentioned above really should have been caught. Think about it: why didn’t anyone make sure their facial recognizer could handle non-white faces? Odds are, they didn’t consider it at all. Or if they did, maybe they checked under stale laboratory conditions that don’t mirror the real world.

Here, companies need to consider two things. First, hiring. Diverse engineering teams ask the right questions. And by most measures, diverse teams perform better because they bring different experiences to their work. Second, companies aren’t thinking enough about their users. Or, to put it more directly, they aren’t thinking enough about their universe of potential users. Diverse teams, inherently, will help with this problem, but even then you can run into problems. Take a moment before you release a machine learning project and stress-test it in ways your team didn’t think of straight off the bat. Use empathy. Realize that different users will act in different ways and that, although you can’t reasonably hope to foresee them all, by making a concerted effort you can catch a great deal of these problems before your project goes live.

At this point, most of us are aware that artificial intelligence is going to transform business and society. That much is definite, though experts can quibble over the extent. We also know that AI can both amplify existing bias and even exhibit bias where none was intended. But it’s solvable. It is. It’s just a matter of being conscientious. It means hiring smartly. It means testing smartly. And it means, above all, using the same data that makes AI work to make AI work more fairly. Algorithmic bias is pervasive, but it’s not intractable. We just need to admit it exists and take the smart steps to fix it.

For more information, go to CrowdFlower.com.

 

Researchers Combat Gender and Racial Bias in AI with Teams


When Timnit Gebru was a student at Stanford University’s prestigious Artificial Intelligence Lab, she ran a project that used Google Street View images of cars to determine the demographic makeup of towns and cities across the U.S.  While the AI algorithms did a credible job of predicting income levels and political leanings in a given area, Gebru says her work was susceptible to bias—racial, gender, socio-economic. She was also horrified by a ProPublica report that found a computer program widely used to predict whether a criminal will re-offend discriminated against people of color.

So earlier this year, Gebru, 34, joined a Microsoft Corp. team called FATE—for Fairness, Accountability, Transparency and Ethics in AI. The program was set up three years ago to ferret out biases that creep into AI data and can skew results.

“I started to realize that I have to start thinking about things like bias,” says Gebru, who co-founded Black in AI, a group set up to encourage people of color to join the artificial intelligence field. “Even my own Ph.D. work suffers from whatever issues you’d have with dataset bias.”

In the popular imagination, the threat from AI tends toward the alarmist: self-aware computers turning on their creators and taking over the planet. The reality (at least for now) turns out to be a lot more insidious but no less concerning to the people working in AI labs around the world. Companies, government agencies and hospitals are increasingly turning to machine learning, image recognition and other AI tools to help predict everything from the creditworthiness of a loan applicant to the preferred treatment for a person suffering from cancer. The tools have big blind spots that particularly affect women and minorities.

“The worry is if we don’t get this right, we could be making wrong decisions that have critical consequences to someone’s life, health or financial stability,” says Jeannette Wing, director of Columbia University’s Data Sciences Institute.

Researchers at Microsoft, International Business Machines Corp. and the University of Toronto identified the need for fairness in AI systems back in 2011. Now in the wake of several high-profile incidents—including an AI beauty contest that chose predominantly white faces as winners—some of the best minds in the business are working on the bias problem. The issue will be a key topic at the Conference on Neural Information Processing Systems, an annual confab that starts today in Long Beach, California, and brings together AI scientists from around the world.

Read the source article at Bloomberg Technology.