
DFS Voices “We need a feminist and intersectional perspective to improve inequalities”, with Àtia Cortés

Tags: #digitalfuturesociety, Inclusion


Reading Time: 3 minutes
Interview with Àtia Cortés, postdoctoral researcher at the Barcelona Supercomputing Center, about developing inclusive research for a better world.
“We need a feminist and intersectional perspective to improve inequalities”

Àtia Cortés, PhD, is a member of the Social Link Analytics Group in the Life Sciences Department at the Barcelona Supercomputing Center (BSC), and co-director of the AI4EU Observatory of Society and AI, which aims to bring together the AI community while promoting European values and acting as a knowledge catalyst. Her interests lie in the fields of AI ethics, gender equality and gastronomy, and she approaches her research on the ethical, legal, social and cultural aspects of AI from a feminist and cross-sectorial perspective, with a deep belief in reliable technology and a fairer world. Àtia holds a Master’s Degree in AI and a BS in ICT Systems Engineering from the Polytechnic University of Catalonia.

You are a recognised researcher of the Social Link Analytics Group at the Barcelona Supercomputing Center. What is your primary goal as a research group?

Our unit aims to create a hub between the scientific community and society. We are aligned with the United Nations Sustainable Development Goals, contributing to key research areas such as healthcare and wellbeing, gender equality, and the reduction of inequalities. We are particularly focused on two main topics: the study of disinformation patterns, especially in social media around healthcare, and the inclusion of a sex and gender perspective in biomedical research.

Why is it key for science to ensure a better understanding of the social impact of certain projects?

Like any new technology, personalized medicine and artificial intelligence are still largely unknown to society. It is essential to involve citizens through participatory methods and to create multidisciplinary teams that bring together the scientific community and a more humanistic one, in order to achieve social inclusiveness, fairness and trustworthiness.

Your interests include AI Ethics. How has AI evolved, from a social perspective, in the last decade, especially after the pandemic?

In the case of AI, and in particular machine learning, there has been a turn towards an ethical perspective. Before, we assessed systems by focusing on performance metrics such as accuracy, precision, or error measures. Over the last few years we have seen that this is not always enough. Especially when AI has a direct impact on humans, we must understand the consequences of wrong predictions or classifications, even if the percentage is very low.

Do you see hope in the years to come for AI Ethics?

I do not know. What I do know is that there has been a wave of publications and releases on AI and ethics: documents, guidelines, national strategies… Everyone is now trying to contribute, from the public and private sectors to governments and the European Commission. It is an important step to start having this kind of reflection, and to try to translate it to society, so people understand the capabilities but also the limitations of artificial intelligence, and the consequences of interacting with it.

Are we on track for a more trustworthy and inclusive technology in Europe?

Yes, but we need to take into account that most of the big technology companies are not in Europe. We consume a lot of this technology and produce millions of data points for them every day. So we need to see how this European human-centric, trustworthy vision of AI will be aligned and combined with technologies from the rest of the world. To achieve trustworthiness and, in general, responsible development of technology, education and awareness among decision makers are key. They need to adopt all this knowledge and put it into practice from the very early stages of an AI system’s development. We as a society must also start taking responsibility for how we use and interact with technology.

Your research approach is feminist and cross-sectorial. Why is this crucial?

Women were excluded from clinical trials for decades. When we were creating new drugs and learning about any kind of disease, this knowledge was incomplete because we were leaving out half of the population. This has proved to have a severe impact on women's health: we have poorer knowledge of disease symptoms and treatment effects in women than in men. The gap is even wider for women of color and other social or ethnic minorities. Nowadays, with the use of artificial intelligence, this could be amplified further if we think of machine learning or autonomous decision-making systems, since they will be learning from training data sets that are already biased. It is important to make people aware of that: first the consumers and society at large, but also the people making decisions and creating policies, so they understand that this technology has traditionally been built by and for white, cis, middle-aged, rich men. We need a feminist and intersectional perspective to improve inequalities.