AI algorithms can reproduce the biases of their developers

AI Reflects The Biases Of Its Creators


The way in which AI could reduce the need for specific segments of the human workforce is indeed a massive concern, but it falls within a larger issue related to how artificial intelligence will interact with various human groups and categories: bias. As Theodoros Evgeniou explains, one of the most urgent AI issues to address today is fairness: algorithms can reproduce the biases of their developers, or even develop biases of their own when trained on flawed datasets.

For example, a study conducted by the University of Virginia in 2016 showed that two large image collections used in machine learning, one of which is backed by Microsoft and Facebook, demonstrated severe gender biases: images of shopping and cooking were linked to women, while visuals of coaching and shooting were associated with men.

Even worse, machine learning doesn’t just mirror biases; it amplifies them, as Harvard PhD and data scientist Cathy O’Neil explains in her book Weapons of Math Destruction. O’Neil examined how biases can shape mathematical models and ultimately reinforce discrimination. “Models are opinions embedded in mathematics,” she writes, warning that algorithms can affect our lives in ways we wouldn’t even imagine, from finance to health, education, justice and recruitment.
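The amplification mechanism can be illustrated with a toy sketch: a model that simply predicts the majority gender for each activity turns a statistical skew in the training data into an absolute rule. The data and the 66%/70% splits below are hypothetical numbers chosen for illustration, not figures from the studies cited here.

```python
from collections import Counter

# Hypothetical toy training data: (activity, gender) pairs with a skew.
training_pairs = (
    [("cooking", "woman")] * 66 + [("cooking", "man")] * 34 +
    [("coaching", "man")] * 70 + [("coaching", "woman")] * 30
)

# Count how often each gender co-occurs with each activity.
counts = {}
for activity, gender in training_pairs:
    counts.setdefault(activity, Counter())[gender] += 1

# A naive "model" that always outputs the majority gender per activity.
predict = {activity: c.most_common(1)[0][0] for activity, c in counts.items()}

for activity, c in counts.items():
    skew = c[predict[activity]] / sum(c.values())
    print(f"{activity}: training skew {skew:.0%} -> "
          f"model predicts '{predict[activity]}' 100% of the time")
```

A 66% skew in the data becomes a 100% skew in the model's predictions: this is the gap between mirroring a bias and amplifying it.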

Developing and testing standards for AI to identify biases early on is yet another strategy scientists are exploring, even though “the question is: what will be included in the algorithm to adjust for such biases?”, as Abeer El Tantawy, an educational specialist working for a chemoinformatics company, asks.

The thesis of Joy Buolamwini, an MIT researcher and founder of the Algorithmic Justice League (a collective aiming to fight bias in algorithms), uncovered major racial and gender bias in AI services from multinationals such as Microsoft, IBM and Amazon. In her view, who codes, how we code and why we code all matter. By answering these three questions, organizations can identify biases and curate training sets inclusively, taking into consideration the social impact of technology on people.

The search for solutions continues as AI, along with the challenges it brings, increasingly becomes a reality.

This article is part of Communicate’s June print edition.

Image Source: World Economic Forum