Q&A

Gry Hasselbalch

“We don’t have to give up our privacy to have a digital life”

Tags: 'Data ethics' 'e-government'


Gry Hasselbalch is the co-founder of DataEthics.eu. For over a decade, she has worked with industry, policymakers, NGOs, and end-users at the intersection of tech, ethics, human rights, and society. She is an independent advisor and has been engaged by international institutions and governments on multiple occasions as an independent expert. Most recently, she was asked to join the Danish Expert Group on Data Ethics, which has the objective of providing the Danish government with recommendations on issues related to data ethics by the end of 2018. She is also an appointed member of the European Commission’s high-level expert group on AI, developing ethics and policy guidelines for AI in Europe, and co-chair of the P7006 standard within the IEEE’s Global Initiative for Ethical Considerations in AI and Autonomous Systems. Gry is the author of several studies, reports, and articles, and is writing a PhD at the University of Copenhagen on the interests at stake in data ethics.

How do you see the digital revolution developing over the next decade?

The answer to this question depends on what we do today. Predictions about technological developments are normally framed as dichotomies between the good and the bad, as either risks or opportunities. But in fact they hold the potential for both, most of the time at the same time. If we consider digital technologies and privacy, for example, we don’t have to think of them as mutually exclusive.

We don’t have to give up our privacy to have a digital life. We can build privacy-by-design technologies.

We can build personal data stores where we are in control of our data and choose how to share it and for what purposes. It all depends on how we as a society choose to shape this development, considering what type of digital infrastructure we want.

Is the future going to be more or less inclusive?

This depends on the values we embed in technological development.

Inclusion is a core human-centric value that is emphasised in all the ethics principles and guidelines being published worldwide at the moment. However, it is a value that is not always considered in the design phase of technologies, nor in their adoption.

This is so for several reasons: because it is not profitable to design for inclusion, because it doesn’t serve the purpose of efficiency, and, most importantly, because discrimination and bias are already social problems that technologies merely reproduce and reinforce. We’ve already seen over the last couple of years how technologies can amplify existing powers in society, with stories about discriminatory algorithms, big-data voter manipulation, fake news, and so on.

Technologies are like the infrastructures we normally think of as roads and buildings, only now they are data and signals that direct us in specific directions. We don’t think about them until the day they break down. But we need to consider them and their social force much earlier than that.

What is the key to building a more equitable digital society? 

Society and technologies evolve in a continuous process of negotiation, innovation, experimentation and adaptation. And then there are the moments of consensus-making and standardisation.

We are in a moment like that now. Right now, for example, there are ongoing discussions in companies and among governments worldwide on data ethics and AI ethics. It is popular to say that ethics is the new green these days. Over the last year, for example, I have been part of the European discussion about AI ethics as a member of the EU’s high-level expert group on AI, where just a few months ago we published ethics guidelines and policy recommendations on AI. The OECD also just published guidelines, and all over the world nations and governments are presenting their own versions of “technology ethics”.

We are revisiting our value systems, so to speak, to understand how they apply in this data-intensive technological environment.

How do democratic principles fit in? Who or what is accountable? Are there new formations of responsibility? How do we transfer established, standardised values such as our human rights into this age? How do we embed them in the technological infrastructure of intelligent things? Do we need law reforms? Do we need to look into designing alternative, ethically designed technologies, just as we have for a while now been building alternative green tech?

The key to building an equitable digital society is embedded in all of these questions.

Which technology will be the most radical in the near future?

I don’t see any one specific technology as radical. I would rather consider technologies in context.

Generally, any technology forms part of a process of negotiation between different actors in society and their values and interests. Its radical effect in society depends on how these interests are negotiated and which values win out over the others.

For example, right now we are in a negotiation process regarding the role AI should play in society. Whom does it serve, and for what purpose? Is it just a business venture, or a state’s means of controlling its citizens? Or does it serve human beings and society in general?

Radical changes in society caused by technologies will depend on how the interests and conflicts between core actors are negotiated and resolved.