
Is technology shaping your behaviour?

Interview with Lorena Jaume-Palasí, Founder and Executive Director of the Ethical Tech Society

Tags: Accountability, Algorithms, Artificial intelligence, Bias, Big data, Ethics, Privacy


Lorena Jaume-Palasí is Executive Director of the Ethical Tech Society. She was co-founder and executive director of AlgorithmWatch, a Berlin-based organization that analyzes the ethics of automation and digitalization in socially relevant algorithmic systems. A specialist in the philosophy of law, she is a member of the Spanish Government's group of advisers on artificial intelligence and big data. In addition, she heads the secretariat of the Internet Governance Forum (IGF) in Germany and is part of Cotec Foundation's "100 of COTEC" network of experts. In short, she is one of the people working to ensure that machines and algorithms contribute to the common good through a fair use of their capabilities.

What is The Ethical Tech Society and what are your immediate goals and challenges?

The Ethical Tech Society is a new initiative that I am founding based on my experience at the NGOs I previously worked for. I felt the need to create something focused on the social impact, but also on the social bias, of technology, because when we talk about technology and discrimination, for example, we seem to concentrate only on the inherent statistical and mathematical bias of technology.

The social usage of technology tends to differ from the initial idea because society evolves. It's not static.

So technology will change user behavior, but over time user behavior will also co-shape the technology, and there will be an organic evolution of both human behavior and the technology itself.

If you only have a team of developers and no social scientists, the ethical dimension is not at the center of the discussion. Sometimes people might hide behind algorithmic decisions so that they don't have to explain their own biases, which is complicated. We wanted to have a more utopian conversation, one that treats people as the subjects of technology and does not objectify them.

The world of big data and AI is pretty opaque to the majority of the global population. Could you explain the relationship between the two?

When we talk about AI, it's a cultural projection. What we do with AI technologies is try to formalize human processes: we compartmentalize a process into different steps and then reformulate those steps mathematically. Nowadays, the technologies that we use are heavily data dependent.

When we talk about big data or about AI, we project a lot of expectations onto them, but it's not actually that magical. We're talking about computing in an inductive way.

There are other ways of programming AI, but that is not the technology we are using right now. We are still talking about automation, about technologies that depend on inductive statistics rather than deductive approaches. Deduction is what we human beings are good at. Deduction is what we use to contextualize. Deduction is what we use to understand, to put ourselves logically in the position of someone else. And deduction is, above all, the strategy we need as human beings to call ourselves intelligent and to judge whether someone has agency or not.

Algorithms are increasingly present in all areas of our lives. Who is responsible for making sure they are fair and that people are held accountable?

There will not be a single person or a single entity responsible for everything that happens with that technology; rather, there will be different layers of responsibility depending on the dimension and the specific risks or challenges.

And that leads us to the challenge of creating new models of governance that involve the developers and data scientists, but also the managers who decide to automate a specific part of their work process. It's about understanding that it's not just about how customers interact with the technology or about its benefits; it's also about the different layers of co-shaping and co-creation, because all those actors will have an influence and an impact on the way these technologies are deployed.

Data scientists cannot provide knowledge about this social dimension because they have not been trained for it. This is not their job. This is the job of a sociologist or a social anthropologist, and we need them to create fair systems.

How can we find a balance between privacy and the open nature of the internet?

Controlling data is a specific matter that concerns security and the governance of these technologies. Data, in principle, is something that arises because we are social beings. It is created as part of our language and as part of our social interactions. There is an inherent social nature to the concept of data, and this is why you cannot assign data to a single entity. We think we can attribute specific data to someone and then capitalize on it. I think that's a dangerous thing to do, because it means rejecting the idea that data protection is a fundamental right; fundamental rights cannot be capitalized on or commercialized.

Good AI requires a lot of manual work, and that manual work cannot be performed by another algorithm. It is performed by human beings doing very tedious, monotonous work, and it can be quite dehumanizing.

I think we have to be honest about this new market evolving in Europe, because it's throwing us back to the times of industrialization, when we objectified specific parts of society just to create work that others could use to make themselves better off.

What effect do privacy issues have on trust?

I think it’s very much on us to decide that we want to be subjects using that technology and not objects of it.

It will depend on politicians and the media and the way they communicate how these technologies are being used. I don't mean that it's not important to address the social inequalities that are being magnified by this type of technology, but we also have to be constructive about it, making the effort to define the values we want this technology to have as guiding principles.

I think this technology can help us address many of the big questions of the years to come. What type of society do we want to live in? What is the society of the future? What does social cohesion mean? What does it mean to use technologies that have an environmental impact? How can we create markets that are sustainable, based not only on innovation but also on wellbeing? We can decide to use technology to help us implement those ideas, or we can use technology to magnify existing problems. But I don't think that's a question of technology; it's a social question.

In what specific situations should we allow algorithms to act independently of human control?

When we automate something, we always need to think of it as a system that requires additional oversight. Humans and machines make different types of mistakes. Machines are consistent, and machines are good at induction, at probabilistic reasoning. Human beings are bad at probabilistic reasoning and we're not consistent at all, but we are very good at contextualizing, at understanding whether a correlation exists for one reason or another.


Read our previous interview with Lorena Jaume-Palasí.