Q&A

Aimee van Wynsberghe

“Governments must create a context which forces technology companies to take ethics seriously”

Tags: 'Data ethics'

Aimee van Wynsberghe has been working in ICT and robotics since 2004. She began her career as part of a research team working with surgical robots at CSTAR (Canadian Surgical Technologies and Advanced Robotics) in Canada. Co-founder and co-director of the Foundation for Responsible Robotics and a board member of the Institute for Accountability in a Digital Age, Aimee also serves as a member of the European Commission’s High-Level Expert Group on AI and is a founding board member of the Netherlands AI Alliance. She has been named one of the Netherlands’ top 400 influential women under 38 by VIVA and one of the 25 ‘women in robotics you need to know about’.

What is data ethics?

It is the study of ethical issues surrounding the collection, structuring, labeling, use, and potential future use of data. Digital ethics, by contrast, is the study of the impact of digital technologies on society.

What are the main ethical issues when it comes to Big Data?

Collection without meaningful consent; institutionalization of bias and prejudice; asymmetry of power between industry (the data holders) and citizens (the data creators); dual use of the data; and the risk of this data being stolen or used for unethical purposes in the future.

Ethics is a branch of philosophy, developed in a dialogical way, but it is not yet readily applied to computer science. Why is this? How can we seed data ethics into the curriculum more effectively?

Three things must be done to get ethics applied to computer science in a systematic way.

First, computer science projects at university should have an ethics component which stimulates students to think about the ethical risks specific to their project and the design choices they can make to mitigate them.

Second, ethicists must be trained to interact with computer scientists and engineers in a productive way. Computer scientists need to know that their work raises ethical issues, and that there is someone with the expertise to help them think those issues through and arrive at design choices that result in a better project.

Third, governments must create a context which forces technology companies to take ethics seriously. This will allow the computer scientists who had ethics in their curriculum to speak up about ethical issues, and may convince companies to look to ethicists for a critical point of view on new products and features.

Algorithmic bias often leads to discrimination. Sometimes the problem is not with the algorithm or the data, but with the conceptualization. Given that the whole process of automated decision-making (ADM) is so complex, with many different parties involved, how can we hold someone accountable, and what can we do to avoid discrimination?

One thing which can greatly help with this problem of accountability is to require algorithms to come with data hygiene certificates.

The idea is that the data used to train the algorithm is analyzed and evaluated for issues of bias (and checked to ensure it was collected in an ethical manner). This would prevent the use of algorithms that are clearly biased because of their training data. It would also allow those considering an algorithm to see whether the training data covered the groups of people who will be subject to it in the context where it will be used. Training an algorithm on a large, diverse data set as responsibly as you can in the United States probably won’t make for an algorithm that could be responsibly used in a Syrian refugee camp.
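Concretely, one check a certificate audit might include is a representation comparison along the following lines. This is only a hedged sketch of the idea: the function name, group labels, and threshold are illustrative assumptions, not part of any existing certification standard.

```python
# Illustrative sketch of one check a "data hygiene certificate" might
# include: comparing how groups are represented in the training data
# against the population the algorithm will be applied to.
# All names and numbers here are hypothetical.
from collections import Counter

def representation_gaps(training_groups, deployment_groups, threshold=0.5):
    """Flag groups that are common in the deployment population but
    under-represented (or absent) in the training data."""
    train = Counter(training_groups)
    deploy = Counter(deployment_groups)
    n_train, n_deploy = len(training_groups), len(deployment_groups)
    gaps = {}
    for group, count in deploy.items():
        deploy_share = count / n_deploy
        train_share = train.get(group, 0) / n_train
        # Flag the group if its training share falls far below
        # its share of the deployment population.
        if train_share < threshold * deploy_share:
            gaps[group] = {"train_share": round(train_share, 3),
                           "deploy_share": round(deploy_share, 3)}
    return gaps

# Toy example echoing the point above: data collected responsibly in one
# context can still be unrepresentative of another.
training = ["A"] * 900 + ["B"] * 90 + ["C"] * 10
deployment = ["A"] * 50 + ["B"] * 10 + ["C"] * 40
print(representation_gaps(training, deployment))
# Flags group "C": 1% of the training data, 40% of the deployment population.
```

A real certificate would of course cover far more (provenance, consent, labeling practices), but even a check this simple makes the mismatch described above concrete.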

There has been a proliferation of ethics declarations, guidelines, and committees over the past year. Do you see these as mere lip service or a genuine first step toward ethical data governance?

It’s difficult to say unequivocally whether all of these efforts are lip service. In general, when the guidelines come from the AI companies themselves, there is a push to protect IP and/or no clear role or mandate for the ethicists, and this makes them look like lip service. Many civil society groups and multistakeholder groups, however, are creating guidelines which do serve as a necessary first step towards AI governance instruments.

It’s important to remember that the question isn’t about ethics OR regulation; rather, it’s about ethics AND regulation. Ethics tries to help us get to the kinds of regulation (and the scope) that we might need.

How can tech companies operationalize ethical guidelines?

It will be exceptionally difficult for tech companies to do this with their current expertise. Either they need to hire the expertise necessary to guide this process or they will need to get outside help.

Getting outside independent help to operationalize these guidelines, in the current context, will make the operationalization more legitimate – and legitimacy with regard to technology companies is a serious concern right now.

I am doing a project with Deloitte to help companies do exactly this (providing outside expertise on how companies can realize digital ethics in their organization), and the European Commission is working on this now with a pilot study on how to realize the Guidelines for Trustworthy AI in organizations. A final note: this operationalization must be, to some degree, transparent. The public at large must know that these ethics guidelines are having some kind of effect if trust is to be re-established with tech companies.

Is it acceptable to waive ethical considerations in pursuit of the greater good, for example by collecting personal data without giving people the option to opt out?

It is never acceptable to waive ethical considerations. The collection of personal data without giving people the option to opt out can, however, be done in an ethical manner.

Most of us are not allowed to opt out of giving personal financial details to the government, because the government needs taxes to pursue a better society. However, this type of data collection is generally done with strict oversight and transparency, with specific laws guiding the practice. No tech company should get to decide what purpose (and what likelihood that the data achieves the stated purpose) legitimizes such data collection practices.