Algorithmic gender discrimination: where does it come from, what is the impact and how can we tackle it?

Where does AI gender bias come from?

To the vast majority of us, artificial intelligence is a complex, obscure and inscrutable world, and we place blind trust in the mathematicians and data scientists who develop and understand it. Science fiction is probably the first thing that springs to mind, but the essence of AI is simply developing computer systems that can perform tasks previously thought to require human intelligence. We need to separate the hype from the reality.


AI and machine learning have historically faced criticism over inherent bias. Machine learning algorithms process vast quantities of data and rely on accurate information just as human intelligence does. It is humans who generate, collect, and label the data that goes into datasets, and it is humans who determine which datasets, variables, and rules the algorithms learn from to make predictions. Therein lies the problem. According to an article in the Stanford Social Innovation Review, “both of these stages can introduce biases that become embedded in AI systems.”
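
To make those two stages concrete, here is a minimal, self-contained sketch using entirely synthetic data (none of it from the article or the cited review). Historical decisions encode a bias against women, and a standard classifier trained on those labels reproduces the gap.

```python
# Illustrative sketch with synthetic data: how the two stages described
# above (data labelling, then feature/algorithm choice) embed bias.
import random

from sklearn.linear_model import LogisticRegression

random.seed(0)

def historical_decision(qualified, is_woman):
    # Stage 1: humans generated these labels. Equally qualified people
    # were approved at different rates depending on gender.
    if not qualified:
        return 0
    return int(random.random() < (0.4 if is_woman else 0.8))

X, y = [], []
for _ in range(5000):
    qualified = random.random() < 0.5   # qualification is gender-independent here
    is_woman = random.random() < 0.5
    X.append([int(qualified), int(is_woman)])
    y.append(historical_decision(qualified, is_woman))

# Stage 2: humans chose the features. Leaving gender in the feature set
# lets the model rediscover and automate the historical pattern.
model = LogisticRegression().fit(X, y)

print("P(approve | qualified man):  ", model.predict_proba([[1, 0]])[0][1])
print("P(approve | qualified woman):", model.predict_proba([[1, 1]])[0][1])
```

The point is not the library or the model: any learner fed these labels would make the same discriminatory predictions, because the bias lives in the data rather than in the mathematics.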


The digital gender divide

Algorithmic gender bias is a reflection of the world we live in. The GSMA Mobile Gender Gap Report 2021 highlights that women are 7% less likely than men to own a mobile phone and 15% less likely to use mobile internet; there are also 234 million fewer women than men accessing mobile internet. Because women have less access to these technologies, they aren’t generating the same amount of data as male users, which inherently skews datasets. There is also a chronic lack of women opting for careers in data science, which has multiple adverse effects. According to the European Institute for Gender Equality, “in the EU and the UK, just 16 % of those working on AI are women and women with 10+ years’ experience make up just 12 % of AI professionals.”


“Women are missing out on the enormous opportunities for well-paid work in an exciting and fast growing field; tech is being developed without women’s input, which means it all too often fails to meet women’s needs or inadvertently exacerbates existing gender divides; and women’s absence from the tech sector is resulting in a lack of qualified tech professionals.” – Doreen Bogdan-Martin, ITU Director, on the Digital Gender Gap


What are the potential impacts of algorithmic gender bias?


AI systems built on incomplete or biased data can produce inaccurate outcomes that infringe on people’s fundamental rights, including the right to non-discrimination. This article from the Stanford Social Innovation Review lists a number of real-life examples that highlight the pervasiveness of gender bias in AI and the negative societal impacts it can produce:


  • Healthcare. Male bodies have been the standard for medical testing for centuries. With female bodies deemed too complex and variable, women are often missing from medical trials, which can lead to health risks.
  • Infrastructure. Few urban datasets track and analyse data on gender, so buildings are often not designed to factor in women’s needs.
  • Consumer credit. Early processes used marital status and gender to determine creditworthiness. When AI systems learn from this historical data, they pick up on patterns of women receiving lower credit limits than men and reproduce the same inequalities.
  • Demographic data. Most demographic data are labelled on the basis of binary female-male categories. When gender is simplified in this way, it reduces the potential for AI to reflect gender fluidity and nuanced gender identities and can lead to offensive treatment or erasure of already marginalised groups.
  • Facial recognition. Commercial facial-recognition systems often use image datasets that lack diverse and representative samples, and using the gender binary in gender classification builds in an inaccurate, simplistic view of gender.
  • Recruitment. Algorithms trained with biased data can perpetuate discriminatory hiring practices, with women’s CVs automatically discarded because their profile doesn’t fit that of previous employees, a real problem in a male-dominated industry (a sketch of this failure mode follows this list).
  • Welfare. Gender-biased systems used in welfare can harm the physical and mental well-being of women and non-binary individuals, automatically and unfairly discriminating against them and excluding them from essential support mechanisms.
  • Voice recognition. Voice-recognition systems, increasingly used in the automotive industry, for example, often perform worse for women and non-binary individuals.
  • Translation. Translation software that learns from online text has historically taken gender-neutral terms (such as “the doctor” or “the nurse” in English) and returned gendered translations (such as “el doctor” and “la enfermera,” respectively, in Spanish). This reinforces existing, harmful stereotypes and prejudices.
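
The recruitment example has a subtle twist worth spelling out: simply deleting the gender column does not fix the model, because other features can act as proxies. The sketch below is hypothetical and uses synthetic data; `career_gap` stands in for any attribute that correlates with gender in real CVs.

```python
# Hypothetical CV-screening sketch (synthetic data): gender is excluded
# from the features, yet a correlated proxy carries the bias anyway.
import random

from sklearn.linear_model import LogisticRegression

random.seed(1)

X, y = [], []
for _ in range(5000):
    is_woman = random.random() < 0.5
    # Proxy: a career gap appears far more often on women's CVs.
    career_gap = int(random.random() < (0.6 if is_woman else 0.1))
    qualified = random.random() < 0.5
    # Historical screeners rejected most women's CVs regardless of merit.
    hired = int(qualified and random.random() < (0.3 if is_woman else 0.8))
    X.append([int(qualified), career_gap])   # note: gender itself is excluded
    y.append(hired)

model = LogisticRegression().fit(X, y)

# Two equally qualified candidates; only the career gap differs.
print("score, no career gap:  ", model.predict_proba([[1, 0]])[0][1])
print("score, with career gap:", model.predict_proba([[1, 1]])[0][1])
```

Because `career_gap` correlates with gender and the historical labels penalised women, the model learns to penalise the gap, and thereby mostly women, without ever seeing a gender field.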


How can we tackle gender bias in AI?

Tackling such a ubiquitous issue requires a comprehensive approach from both the public and private sectors. Below are four basic principles, compiled from our report Gender bias in data: Towards gender equality in digital welfare.

Gender-relevant datasets and statistics

  • Use feminist data practices and gender-relevant datasets to challenge patriarchal power structures and binary gender classification.
  • Value multiple forms of knowledge and multiple perspectives.
  • Contextualise data within broader cultural and socio-economic realities.

Gender mainstreaming

  • Ensure diversity is a priority when developing AI ethics governance structures.
  • Incorporate gender perspectives into policies, practices, and structures.
  • Advocate for AI literacy training among gender experts and vice versa.
  • Prioritise gender equity as a primary goal for AI systems.

Co-design, oversight, and feedback

  • Embed gender diversity, equity, and inclusion among teams developing AI systems.
  • Include women and marginalised groups in design and feedback.
  • Assess datasets for under-representation of different gender identities and inequalities.
  • Practise gender analysis to capture the status quo and gender impact assessment to evaluate possible impacts (a minimal sketch of both checks follows this list).
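
As one concrete starting point for the two assessment bullets above, here is a minimal sketch on a tiny hypothetical dataset: a representation audit, followed by a simple impact measure (the gap in positive-outcome rates between groups, often called the demographic parity difference). The field names and records are illustrative assumptions, not prescriptions from the report.

```python
# Minimal audit sketch on hypothetical records of (gender_label, received_benefit).
from collections import Counter

records = [
    ("woman", 1), ("woman", 0), ("woman", 0), ("woman", 0),
    ("man", 1), ("man", 1), ("man", 0),
    ("non-binary", 0),
]

# 1. Representation: are some gender identities under-represented?
counts = Counter(gender for gender, _ in records)
total = len(records)
for gender, n in counts.items():
    print(f"{gender}: {n / total:.0%} of the dataset")

# 2. Impact: compare positive-outcome rates across groups.
def outcome_rate(group):
    outcomes = [outcome for gender, outcome in records if gender == group]
    return sum(outcomes) / len(outcomes) if outcomes else float("nan")

gap = outcome_rate("man") - outcome_rate("woman")
print(f"demographic parity gap (man - woman): {gap:+.2f}")
```

A large gap or a tiny group count is a signal to investigate before deployment; checks like this complement, rather than replace, qualitative gender analysis with the affected groups themselves.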

Equality by default

  • Resolve digital gender barriers, such as the gaps in mobile ownership and internet access described above.


You can also check out the following key resources for more information:

https://www.gsma.com/betterfuture/resources/ethicsplaybook

https://en.unesco.org/system/files/artificial_intelligence_and_gender_equality.pdf