The COVID-19 pandemic highlighted the ubiquitous use of AI in our everyday lives. Sweeping across every sector, AI has radically transformed the world of work, bringing companies and governments new opportunities as well as challenges. One of the major risks we now face is AI systems creating or reinforcing patterns of gender inequality that lead to gender-specific impacts. This is a particular concern when it comes to the rise of platform work.
“AI and gig work are parts of the economy of the future, and we need to make sure they’re designed and regulated in a way that protects people,” says EIGE Director Carlien Scheele.
Men and women face different risks
According to a recent report from the European Institute for Gender Equality (EIGE), Artificial intelligence, platform work and gender equality, women and men face different risks from the economy of the future. One such example is job loss due to automation. While this is likely to affect both men and women, women face a slightly higher risk of job loss, as they “are more likely to work in occupations that involve a high degree of routine and repetitive tasks.”
Data collected from almost 5,000 platform workers showed that some of the main attractions of gig work, such as flexibility, are often disadvantageous to women. The report states that “a higher share of women do platform work as they can combine it with household chores and family commitments (women: 36 %, men: 28 %),” and yet many end up working nights, at the weekend, and at hours they cannot choose.
AI systems trained with biased datasets often reinforce gender stereotypes by reflecting societal norms. The adoption of AI systems for workforce management processes has surged, transforming routine tasks like hiring, task assignment and performance evaluation. This brings major risks of gender discrimination, from automated hiring technologies that prioritise male applicants, through to time-tracking software that deducts ‘low productivity time’ from pay, which poses a particular threat to those caring for small children – a responsibility that predominantly falls on women. According to the report, these “gender-specific risks … remain largely unaddressed.”
Where are all the women?
Only 16% of AI professionals in the EU and UK are women, a percentage which decreases with career progression. The EIGE report states that there are several key factors involved: gender stereotypes, the gender divide in digital skills and education, and gendered career barriers, including male-dominated work environments, sexual harassment and lack of access to funding. AI systems tend to reflect the opinions of those who designed them, so this can have a damaging knock-on effect.
“AI systems and platform work models have exacerbated inequities in the labour market, including relating to gender discrimination but also by systematically surveilling, monitoring and taking decisions impacting working-class and racialised people,” says Sarah Chander, senior policy advisor at European Digital Rights (EDRi).
Often defined as self-employed by default, many platform workers miss out on the benefits of full-time employment such as minimum wage, sick leave and social protection. This includes protection against discrimination and unfair treatment by algorithms. In addition, the essential contribution of so-called “ghost workers” – the low-paid, outsourced and invisible workforce central to the gig economy – continues to be “profoundly undervalued in proportion to the knowledge they help to create.”
A multi-level approach
AI has the potential to improve equality in the labour economy, but there is also a risk it could reinforce sexism and discrimination if the necessary safeguards are not in place. The EIGE report concludes that in order to address these issues, action is needed on multiple levels, including de-biasing datasets, strengthening regulatory safeguards and increasing diversity within the teams designing, developing and running AI systems.
The European Commission has launched a number of initiatives to address the risks posed by AI. The proposed Artificial Intelligence Act is the world’s first comprehensive attempt to regulate AI and minimise the risk of bias and discrimination. The EU is also committed to training more specialists in AI in order to attract more women and people from diverse backgrounds into the sector, and to improving the working conditions of platform workers, bringing more of them into formal employment.
“This is our opportunity to edit out the age-old stereotypes, sexism and discrimination of the labour market, and to create a modern reality that serves the needs of both women and men,” says EIGE Director Carlien Scheele.
Beyond the platform economy, there are many organisations and initiatives targeting key problem areas in order to challenge the risks of algorithmic gender discrimination. Below are two key examples:
Digital gender divide
Girls are disadvantaged when it comes to digital adoption. They have lower levels of access to and use of digital technologies than boys, and as a result they do not benefit from them in the same way. Digital products and services need to be designed with and for girls to meet their needs and reflect their realities.
The UNICEF EAPRO Gender and Innovation team is developing a toolkit of best practices, designed to help innovators, designers and implementers integrate gender into digital products and services and close the digital gender divide.
Gender and privacy
Surveillance and data exploitation are key issues in understanding the mechanisms of oppression over women and genderqueer people. Patriarchy relies on the rigid categorisation of ID systems to impose a binary perspective of gender, welfare programmes participate in the control and constant monitoring of populations in vulnerable situations, data exploitation perpetuates traditional gender roles in society, and social surveillance limits the opportunities of women, trans and gender diverse people. It is essential that those fighting for gender equality reclaim the right to privacy.
Privacy International exposes abuses, campaigns for solutions, and demands change. Its objective is for governments to reform intrusive surveillance and for companies to change harmful business models that rely on personal data.