Q&A

Understanding the ethical dimensions of technology with Lorena Jaume-Palasí, Ethical Tech Society

“Having less data is good for privacy but it’s not good for discriminatory issues”

Tags: 'Blockchain' 'e-government' 'Public innovation' 'Artificial intelligence'


Reading Time: 11 minutes

Lorena Jaume-Palasí is Executive Director of the Ethical Tech Society. She was co-founder and Executive Director of the Berlin-based organization AlgorithmWatch, which analyses the ethics of automation and digitalization in the use of socially relevant algorithms. Specialized in philosophy of law, she is part of the group of advisers on artificial intelligence and big data for the Government. In addition, she leads the secretariat of the Internet Governance Forum (IGF) in Germany and is part of the 100 of COTEC. In short, she is one of the people who ensure that machines and algorithms contribute to the common well-being through a fair use of their capabilities.

Can you explain a little bit about the Ethical Tech Society? What are your goals/challenges?

Ethical Tech Society is a non-profit organisation. The main aim is to understand the social dimensions of technology. We create studies to generate more knowledge about the real social consequences and benefits that result from the usage of technology. We also try to create theory. It's a fact- and theory-based non-profit. We are currently working in Berlin with a very diverse group of scientists, artists, sociologists, and private and public sector figures, creating utopias by ideating specific AI technologies, and we'll be launching our first campaign soon. We're critical about specific usages of technology that are non-beneficial for humanity, and we are creating evaluation systems to understand where the ethical gaps in technology lie. The community has been very proficient in creating systems to test software for security and integrity gaps, but trying to understand where the ethical gaps are is still in development, and this is one of the things we're focusing on: understanding the technology from the outside.

Is there enough being done in terms of studying the ethics of algorithms, particularly in countries with less experience in internet governance?

One of the things that strikes me the most is that there is no real interaction with trained ethicists. I mean, ethics is a discipline. Like law, it's not only about identifying principles, it's also very much about how to apply those principles. Even though ethics is something that is developed in a dialogical way within society, this doesn't mean it shouldn't be applied in the same way that the law is applied. We have a lot of people that have an ethical sensibility and sensitivity, but they don't have the proper training, so the conversations that we're having right now rest on a very rudimentary understanding of ethics; people don't really understand what the mechanism of ethics looks like.

I hope that we get more ethicists into the conversation so that we can start a more productive and systematic way of trying to understand and also trying to create a position towards the usage of technology.

Right now there is a lot of misunderstanding; people think that you can program ethics in code. There are also people who think that ethics is simply about identifying principles but don't understand what an ethical tool is. For example, transparency can be both a tool and a goal in itself.

On the one side, people think that you can code ethics, which is wrong because ethics requires a process and a way of thinking that machines cannot perform. On the other side, there are expectations from social scientists when they interact with machines and software; being laymen, they expect more from technology because it looks like magic to them.

There is a lot of expectation placed on the machine, but it is only computing and predicting from probabilities. Out of probabilities you won't get certainties; you won't get logical thinking, contextualisation, causality, all the things that are necessary to understand a situation. What does someone else know, what do I know, how can I get into the mind of the other person, how can I lead the other person, teach them, lie to them? These are all things that belong to the complex world of intelligence that machines cannot perform, but many people expect AI to be able to perform this way. They just see the outputs. What we're doing is trying to turn on the lights, showing them how the supermarket doors open, what is behind the mechanism.
We have a lot of technicians with an ethical sensitivity but they’re not really aware of what ethics really means.

Ethics is like law. You need people that have been educated and trained to channel ethical sensitivities into a system that can be seen as a form of ethical code.

Right now we see a lot of scientists referring to things as ethical principles that an ethicist would never call values. It could be transparency, explainability, human rights; these are just duplications of what is already enshrined in law. By simply duplicating into the ethical realm what is already enshrined in law and has a different functionality to ethics, we are causing a lot of confusion. It will take a while, I think, until we get diverse expertise being heard.

Who creates an algorithm?

I don't like to talk about algorithms. We are talking about automation systems, about automating a specific process. Automating means that you substitute with technology what was previously done manually. The whole automation system already starts with a manager deciding whether they want a specific step in the process to be automated. That starts to shape the type of technology you are going to have. So if you decide that you want to automate the incoming calls from unsatisfied customers in your system, instead of automating how you proceed with complaints, which could be partially automated, that already plays a role in what type of software you need to use. Is it going to be focused on audio, biometrics or data categorization?

People tend to assume that the algorithm is the software, but software is simply a translation of an algorithm into a formulation.

The algorithm is only a very thin mathematical formula, just a few lines. It could be developed or formulated into a recipe, a move in a card game, a chess move, or into software. The algorithm is a skeletal format of the system. It is so underdetermined that you could use the very same algorithm for film curation and breast cancer research; it just depends on the training and the data. First, you don't have specific data, so you start training it with training data to shape it for a specific context. Next, you need training algorithms; then you implement it in real life; then you have people in the sector who have to interpret the output; and then you have the so-called 'feedback loops', because when you implement AI it's always a self-learning process. It reviews the outputs and tweaks the inputs to better itself and optimise efficiency. The feedback loop is part of the system, but it is a different piece of programming, based on reflection about what is going in and what is coming out. There are a lot of people involved in this training. It's top-down to a certain extent when it comes to formulation. The formula decides which part of the process is being automated, but there is so much dependence on the data.
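To make the "underdetermined algorithm" point concrete, here is a minimal sketch in Python: the very same off-the-shelf learning algorithm is pointed at a bundled medical dataset and at invented film-curation data, and only the training data decides what it becomes. The film features and labels below are synthetic assumptions added for illustration, not anything described in the interview.

```python
# A minimal sketch: one generic algorithm (logistic regression), two unrelated
# domains. Only the data ties the model to a context.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def train_and_score(X, y, label):
    # The very same algorithm is fitted to whatever data it receives.
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
    print(f"{label}: test accuracy {model.score(X_test, y_test):.2f}")

# Context 1: medical screening (example dataset bundled with scikit-learn).
cancer = load_breast_cancer()
train_and_score(cancer.data, cancer.target, "breast cancer screening")

# Context 2: "film curation" with invented, synthetic features
# (e.g. runtime, genre score, past rating) -- purely illustrative.
rng = np.random.default_rng(0)
films = rng.normal(size=(500, 3))
liked = (films @ np.array([0.8, -0.5, 1.2]) > 0).astype(int)
train_and_score(films, liked, "film curation")
```

The skeleton is generic; the data, the training and the later feedback loops are what give it a social meaning.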

Everyone that is producing data for this system has an opportunity to shape the system.

For instance, in chatbots or small speech robots, when people start interacting with them and speaking to them, the robots learn the language used by those people. So you can turn a chatbot into a racist or a sexist. And you cannot foresee that as a programmer. It's a very simple example of how people can also co-shape these types of technologies. There's a lot of technology where you don't know what's going to happen.
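As a toy illustration of that kind of co-shaping (invented here for the sake of the example, not a description of how any real chatbot is built), the sketch below simply stores the phrases users send it and samples its replies from them, so whatever language dominates the input ends up dominating the output.

```python
# A toy sketch of a bot that "learns" only from what users say to it.
import random
from collections import Counter

class EchoLearner:
    def __init__(self):
        self.phrases = Counter()  # phrase -> how often users said it

    def observe(self, user_message: str) -> None:
        # Every interaction becomes training material.
        self.phrases[user_message] += 1

    def reply(self) -> str:
        # Replies are sampled from users' own phrases, weighted by frequency,
        # so a coordinated group of users can steer the bot's vocabulary.
        phrases = list(self.phrases)
        weights = [self.phrases[p] for p in phrases]
        return random.choices(phrases, weights=weights, k=1)[0]

bot = EchoLearner()
for msg in ["hello there", "nice weather today", "hello there"]:
    bot.observe(msg)
print(bot.reply())  # most likely "hello there" -- whatever users said most
```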

How can we ensure algorithm designers are held accountable? Is any public administration controlling what they do or decide?

People have been researching the social impact of automation, but nobody has been paying much attention. In the health sector we've seen some progress being made. People are starting to pay attention to these things: reflecting on the social impact, understanding who is behind the system, why we are excluding people from the system and creating specific biases, and the asymmetric presence of data in the data banks that the algorithms are running on.

One of the difficulties we have is that people are trying to understand where the problems happen, and you can never identify one specific place. It's such a complex process with many steps.

Sometimes the problem is not with the algorithm or the data, but with the conceptualization. Just deciding that you want to automate a specific part of the process in a specific way might already be conceptually wrong, so everything that comes out of it is irrelevant or wrong. It could be the data. Some HR automation is being trained with Asian data because we lack European data, and then the software is run across Europe; we already know that these training sets show a different social reality because the work ethics and social dynamics in Asia are totally different from those in Europe. It depends on so many assumptions. For example, we assume that the past is a good approximation of the future when it comes to services for human beings, whereas when we create models for climate change we don't assume that, because the past is not a good approximation of the future.

We assume, for instance, that people like similar things in the commerce world. We make these assumptions, particularly in customer services and e-commerce, that we don't make for agriculture or climate change, where there is less human interaction. And this is interesting because it shows the complexity of our assumptions. This is something that, as an engineer, you have not been trained to look out for, and it implies that you need a lot of people in the system who are not data scientists or engineers but anthropologists or sociologists, who can look for different questions, different aspects that also play a role.

Liability is a big discussion. There are already mechanisms in play with these types of systems; we can log every change that has been made, by whom and at what time, better than ever before. But right now we are working at a very individualistic level: is this algorithm going to harm someone's privacy or human rights?

But this technology doesn't work like that. We're trying to understand the forest by analysing every single tree. If you want to understand the forest you have to look at it like an ecosystem; by analysing every single tree you won't get a grasp of the forest dynamics.

The problem with algorithms is that most people don’t know what they are. How can we increase transparency? Could they be made more open to the public?

The thing with transparency is that it transfers responsibility to the party things are being made transparent to, at least in our legal culture. Take food, for example. If we had to be responsible for the quality of the food whenever we went to a supermarket, we would have to carry a rucksack full of biochemistry tests to analyse all the food we want to buy. This is clearly way too much to demand from a consumer, and the same applies to this kind of technology. What we need is transparency about the governance of the system. It's about feedback mechanisms: how can I say, no, this is wrong, it is putting me in the wrong drawer, I actually need something else? Who has oversight? Who can I address if something is going wrong? Many of the systems that we have are really bad on feedback design.

Instead of transparency, we need to talk about what information citizens need in order to interact with technology in a dignified way. It’s not about understanding how the data or the algorithm works.

In a similar way to buying products, if they are not functioning or if there is something that I don't like, there needs to be a system which allows the customer to complain.

Public debate around Cambridge Analytica and reports of government surveillance in China mean the public has become more aware of the risks of AI. What will be the effect of this in the long term?

People are more aware than they realise. Look at how young people are playing with the algorithms behind their timelines on social media: people are creating their own intuitions about how to navigate an automatized world. You can also see it happening with Uber drivers or Deliveroo workers. In Philadelphia there is a specific hour when all the Uber drivers log out of the app together so that the algorithm thinks there's no one in the area surrounding the airport, which lets them push for higher prices. People are trying to fool or manipulate algorithms, even though they may not be able to give you a definition of an algorithm. You don't even think about the camera in your phone or traffic lights as automatized gadgets. Once people start interacting with them they perhaps don't reflect on automation, but they create their own world around it and develop their own strategies.

 

With automation comes the risk of manipulation. How are these risks being mitigated?

Mechanisms designed to influence people's decisions are way more complicated than simply computing something, and this is something that we still don't understand. It's also part of this whole Cambridge Analytica debate; the fears and projections that we're putting onto Cambridge Analytica are a clear example of how societies work. Your perceptions and fears are actually shaped by much more than a simple interaction with the screen of your smartphone. They depend on the opinions of your family, what you hear in the workplace, walking down the street, interacting with people you don't know, overhearing conversations. All of these help you decide how to see the world and what you think is right or wrong, what you want to believe is plausible or not.

Thinking that just by using social media you can manipulate a whole society is quite a simplistic idea. If you think about how propaganda has been created in the last century, it’s a very complex process to permeate the whole of society with a specific message.

Big tech companies decide what we see. Filter bubbles of personalised content can cause uniformity of thought, polarization, etc. How might this evolve?

First of all, the whole theory of filter bubbles is a non-scientific theory. People naturally look for confirmation, for validation, for people with similar opinions. That is a normal reaction. When historians and sociologists look at propaganda they don’t look at people trying to find validation, because it’s normal. The Nazis would never read a leftist or communist newspaper.
What counts when scholars try to understand how propaganda works is to what extent you can perceive diverse opinions, hear dissident opinions and analyse different views. That is what makes the difference between, for example, Spain and China. You can hear a lot of very different positions within Spain, but you won't hear that much in China. That is what makes a difference when it comes to propaganda and to what extent it permeates society. What you see in society, rather than filter bubbles, is polarization. We are seeing many more opinions than ever before. Ultra right-wing parties are consuming even more media than those in the more ideologically centred positions. It has turned politics from a conversation into more of an identity debate. When we talk about politics it's not always about a specific political programme; it's a question of identity. I think that is one of the key points that hasn't had much attention.

The whole design of social media is programmed to give the impression that everything is very personal. We always navigate within one identity, within one account, rather than having a professional identity, a public one, a private one. When you comment or react to content it's always in a very personal way, and these types of radical conversations – or personal debates – are based more on aggressiveness and less on coming together and exchanging arguments. It's a contradictory situation, because on the one side we don't deem social media companies to be ethical actors, but on the other we are demanding that they make ethical decisions: taking down hate speech content, or demanding that they take decisions and action over the right to be forgotten. These are ethical decisions.

With social media, ID registration and voice recognition, people are being data-profiled from an early age. Google has received thousands of requests for 'the right to be forgotten', which challenges the free flow of information. How can we find a balance between data privacy and the open nature of the internet?

We need to understand that there’s a fundamental tension there. There is no single technology principle that applies to all ethical contexts.

Having less data is good for privacy but it’s not good for discriminatory issues. If you have less data you are more likely to discriminate.

That's the first tension. So what do we want to do? Do we want to create a private system that is used by private people for private services, or do we want to create a public system for public services? Our understanding of public services is eroding, because public services are not just something provided by the state; private individuals also have a public function and a public dimension to the way they live. Many of the conversations that happen in private are what constitutes public opinion.

Public opinion is created through the exchange of beliefs, moral ideas and political expectations, so this cannot be seen as something private. We're sort of losing the nuance.

Public opinion is really important in order to evaluate, question and protest public power, and we seem to privatize it just because people think private people shouldn't be public. So we have less data. We lose focus on what it is that we want to protest by focusing so much on data privacy rather than human values. There is legitimate data, there is nothing wrong with that data, but because we fear discrimination based on that data people feel that it should not be made public. The result is inherently ethically wrong, because we're not trying to address the reason for discrimination; we suppress information to stop people behaving badly instead of going to the core of the mechanisms of why someone would behave badly. Simply assuming that ignorance will prevent people from doing wrong is fundamentally wrong.

Creating a veil of ignorance or obscuring information is not a way to address the issues or the wrongs that people might do to each other. It is not the right process, in either an ethical or a digital sense, for a democratic digital society. The way we police information to avoid other harms is a very twisted way of coping with potential risk.