Artificial Intelligence Tackles Intersectional Bias

Building intelligent natural language processing (NLP) systems to understand text and speech requires teaching computers notions of what individual words mean. Machine learning approaches capture these meanings through word embeddings learned from large text data sets. But these approaches can learn unhealthy biases from training on biased text sources, potentially leading to artificial intelligence systems that discriminate against specific groups.
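To see how such biases surface concretely, here is a minimal Python sketch, not part of WordBias itself, assuming the gensim library and its downloadable "glove-wiki-gigaword-50" GloVe model (the occupation words are illustrative choices):

```python
# Minimal sketch: biased associations in a public pretrained embedding.
import gensim.downloader as api

# Load a small pretrained GloVe model via gensim's downloader.
model = api.load("glove-wiki-gigaword-50")

# Compare how strongly occupation words associate with gendered pronouns.
for occupation in ["doctor", "nurse", "engineer"]:
    sim_he = model.similarity(occupation, "he")
    sim_she = model.similarity(occupation, "she")
    print(f"{occupation}: he={sim_he:.3f}, she={sim_she:.3f}")
```

Systematic gaps between the two similarities for occupation words are one simple symptom of the gender bias such embeddings absorb from their training text.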

To tackle this issue, Professor Klaus Mueller and his PhD students Bhavya Ghai and Md Naimul Hoque have developed WordBias, a tool to identify intersectional bias in word embeddings.

Intersectional bias is a form of discrimination that targets people at the intersection of multiple sensitive identities, such as gender, race, age, sexuality, religion, and disability. This type of bias pervades a variety of media platforms. WordBias can measure, investigate, and unveil such harmful biases, and it can assess large volumes of data in a coherent manner. To make the results interpretable, WordBias generates per-word bias scores, displayed in its visual interface, that quantify how strongly each word is associated with a given subgroup or intersectional group.
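As a rough illustration of what such a score can look like, the following Python sketch computes a simplified, WEAT-style association measure, the difference between a word's mean cosine similarity to two sets of group-defining words. This is not necessarily WordBias's exact scoring function, and the model name and word lists are illustrative assumptions:

```python
# Simplified, WEAT-style per-word bias score (illustrative, not the
# tool's exact formula): difference in mean cosine similarity between
# a word and two group-defining word sets.
import gensim.downloader as api
import numpy as np

model = api.load("glove-wiki-gigaword-50")

# Illustrative group-defining word sets, not the tool's defaults.
male_words = ["he", "him", "his", "man"]
female_words = ["she", "her", "hers", "woman"]

def bias_score(word):
    """Positive => closer to the male set; negative => closer to the female set."""
    sim_male = np.mean([model.similarity(word, w) for w in male_words])
    sim_female = np.mean([model.similarity(word, w) for w in female_words])
    return sim_male - sim_female

for word in ["programmer", "homemaker", "boss"]:
    print(f"{word}: {bias_score(word):+.3f}")
```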

The inspiration for this application is rooted in the goal of achieving fairness in machine learning, where intersectional biases have so far received comparatively little attention. “The existing literature has mostly focused on exploring individual social biases based on race, gender, ethnicity, etc. in word embeddings. Through WordBias, we have taken a small step towards filling that gap by helping explore biases against individuals as well as intersectional groups,” says PhD candidate Bhavya Ghai.

No programming knowledge is required to run WordBias or interpret its findings. To make it even more user-friendly, eventually “we’d like to extend support for word embeddings trained over different languages,” says the team.

The accessibility and accuracy of WordBias speak to its usefulness, and a variety of communities can benefit from the web application. Expert evaluators deemed the interface intuitive and easy to use, and concluded that WordBias can be especially useful for natural language processing practitioners and linguists, as well as sociologists and psychologists. For researchers, the application eliminates the need to write code while quickly surfacing bias. Overall, WordBias can be employed as an eye-opening, educational tool.

In addition to its research-driven benefits, WordBias can serve as an auditing tool, which makes it abundantly useful for those working in data science: it can evaluate trained static word embeddings for the presence of both positive and negative intersectional bias. Before this form of discrimination can be dismantled, it must first be identified. Once problems are discovered, users and companies can pursue ways to remove the biases from their own word embeddings.
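As a hedged sketch of what such an audit might look like in practice, and not WordBias itself, a team could screen their own trained embeddings with gensim; the file path, word lists, and threshold below are hypothetical:

```python
# Hypothetical audit of a team's own trained static embeddings:
# flag words that skew strongly toward one age group for manual review.
from gensim.models import KeyedVectors

# Load embeddings saved in the standard word2vec text format
# (the path is a placeholder).
vectors = KeyedVectors.load_word2vec_format("my_embeddings.txt", binary=False)

# Illustrative group-defining word sets.
groups = {"young": ["young", "youthful"], "old": ["old", "elderly"]}

for word in ["energetic", "stubborn", "reliable"]:
    scores = {g: sum(vectors.similarity(word, w) for w in ws) / len(ws)
              for g, ws in groups.items()}
    skew = scores["young"] - scores["old"]
    if abs(skew) > 0.05:  # illustrative threshold
        print(f"flag {word!r}: young-old skew = {skew:+.3f}")
```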

“Our hope is that this tool will raise awareness among the general audience about the fact that AI is not free from biases that are prevalent in our society,” says PhD candidate Md Naimul Hoque. WordBias addresses the real, global issue of intersectional bias. By presenting the intersectional biases encoded in word embeddings plainly, the visual tool raises awareness of how intersecting identities are treated. Ultimately, WordBias helps advance fairness and justice.

“In a future where AI technologies will play an ever-increasing role in our lives, it is important to keep a check on such technologies to ensure a fair inclusive world for everyone,” says Bhavya Ghai.

The paper, “WordBias: An Interactive Visual Tool for Discovering Intersectional Biases Encoded in Word Embeddings,” authored by Bhavya Ghai, Md Naimul Hoque, and Klaus Mueller, can be found here: https://arxiv.org/pdf/2103.03598.pdf