Q&A

Robots, AI and human dignity, John Tasioulas, Director of the Institute for Ethics in Artificial Intelligence, Oxford University

"In a world of anthropomorphic robots powered by AI the danger exists that we begin to lose our already tenuous grip on the all-important idea that great value attaches to every human being - the idea of human dignity".

Tags: 'Artificial Intelligence' 'Digital Human Rights' 'Digital Rights' 'Digital Transformation' 'John Tasioulas'



John Tasioulas is the inaugural Director of the Institute for Ethics in Artificial Intelligence at Oxford University and a moral and legal philosopher. “Trying to make sense of human rights has been a major preoccupation of mine in the last few decades”, he says. It was from this perspective that he came to the ethics of artificial intelligence, a field in which he is one of the most highly respected scholars worldwide.

On February 16, the launch event for Oxford's Institute for Ethics in AI will take place and be live-streamed. Details are available here.

What are, in your opinion, the most worrying potential side-effects of robots and artificial intelligence (RAIs) as far as human ethics is concerned?

In terms of side-effects, I would point to two large-scale dangers. Both arise from the fact that AI is a technology that can replicate functions that typically require the exercise of rational faculties on the part of humans.

The first is the gradual attrition and loss of various competences for rational self-direction if we become unduly dependent on AI in determining such things as what music we listen to, whom we socialise with, or which political party we vote for. The other is the worry that, in a world of anthropomorphic robots powered by AI, we begin to lose our already tenuous grip on the all-important idea that great value attaches to every human being – the idea of human dignity.

To what extent have art and science fiction influenced how artificial intelligence is perceived and discussed today? And how realistic is that perception?

I certainly think that there is a genuine worry that public engagement with AI is excessively influenced by sensationalised ‘apocalyptic’ yet highly speculative possibilities. A cynic might suggest that this only serves to distract from the pervasive ways in which AI shapes our lives in the here and now – for example, its use in sorting job applicants, determining who gets a loan, facial recognition, or even the sentencing of criminals.

Is the fear that artificial intelligences will soon rival the abilities of the human brain also encouraged by the fact that current vendors try to make their products seem as human as possible, because this increases the attachment between customer and product and reduces the vendor's obligations towards a product seen to be taking care of itself?

This is an interesting point that echoes my response to the previous question.

I think there are profound reasons to be cautious about creating anthropomorphic AI.

There is also the underlying concern that AI will change the way our society is built, with work currently being the main source of income and, for many, of self-definition. If work were to disappear, would that be the biggest disruption humanity has ever faced in so short a period of time, and would we be able to adapt?

This is an important question, and one on which there needs to be greater interdisciplinary reflection, and indeed democratic debate. 

Work is not only a source of income and self-definition, as you rightly point out, it has traditionally also been a basis for citizen dignity in a democracy. One’s status as a free and equal citizen is partly a function of one’s willingness to engage in productive economic activity. 

So the idea of a world without work could be an alarming prospect if we have no sense of what might replace work as a source of personal accomplishment and civic dignity. We need to think harder about whether other activities can take over the role formerly played by work – such as play, aesthetic experience, friendship, political participation, and so on. AI forces us to confront the Aristotelian question of the nature of the good life, a question that is all too often set to one side because we take it as given that the shape of our lives will be largely determined by the occupations we hold.

What do we have to do to preserve our ability to choose the future that will benefit us all?

I think the most important factor here is the preservation, and strengthening, of democratic self-government. Democracy is the best institutional mechanism for protecting human dignity and the rights that flow from it. Democracy requires the participation of all citizens in deliberation and decision-making that bears on the public good. 

But democracy is continually under threat – for example, from the concentration of massive power in the hands of transnational corporations, the rise of exclusionary forms of populism, ideological polarization that leads to the demonization of one’s political opponents, the increasing assertion of authority by unaccountable technocrats, and gross socio-economic inequalities that marginalize so many of our fellow citizens. 

AI is often seen as a threat to democracy, e.g. the way in which AI-enabled political micro-targeting in social media is thought to manipulate people, often by spreading misinformation. One urgent question is whether AI technology can be deployed to foster more meaningful and more direct political participation by ordinary citizens. It would be good to see more resources directed to addressing that question.

In your paper First Steps Towards an Ethics of Robots and Artificial Intelligence you argue that “the more that our lives revolve around interactions with machines that are fashioned to service our desires, the more we risk succumbing to the temptation to extend the same instrumentalizing attitude towards our fellow humans”. A recent study showed that this is already happening among male teenagers in Spain where sex is concerned. In your opinion, how important is delaying the age at which children are introduced to the Internet and AI for their development, and thus for the development of society?

It may not be so much a matter of delaying their introduction to the Internet and AI, but rather two other things. First, educating them in a way that they can best use these tools in order to enhance the quality of their lives. Second, regulating these tools so that they are better suited to help us discharge our greatest duty – which is to do what we can to enable our children to become good citizens and decent and happy adults.

Would you agree with AI researcher Rodney Brooks’ conclusion in his “Artificial Intelligence is a tool, not a threat” blog post that “it is a mistake to be worrying about us developing malevolent AI anytime in the next few hundred years”?

I’m broadly sympathetic to that statement. Of course, a tool can itself be a threat if it’s in the hands of a bad actor, or if it poses unintended risks in the hands of a well-intentioned actor. As for the malevolent AI question, I believe that this is a genuine issue worthy of serious consideration, even if it is a rather longer-term question. 

But many other issues also deserve serious consideration, such as the bearing of AI on human rights, access to health care, access to justice, and so on. So, it’s a matter of priorities. 

Different people will focus on different questions in line with their varying interests and abilities. But my own temperament leads me to prioritize issues of the here and now – or near future – in my own work on the ethics of AI.