Q&A

Aviv Ovadya, researcher on information ecosystems and misinformation

“We need massive investment across industry, civil society and government, to understand and mitigate threats to our information ecosystems.”


Aviv Ovadya holds B.S. and M.Eng. degrees in computer science from MIT. His work focuses on ensuring that our information ecosystem has a positive impact on society—that it can facilitate the understanding, trust, and problem solving needed for a well-functioning civilization. He has pursued this in a variety of ways, ranging from fellowships with Columbia, UMSI, and TED, to serving as Chief Technologist at the Center for Social Media Responsibility and consulting for Snopes, among others.

He now leads the Thoughtful Technology Project, is a non-resident fellow at the German Marshall Fund, and consults on projects focused on bolstering our civic discourse and ensuring technology's positive societal impact.

Can you give us an overview of your work?

The goal of my work is to ensure that our information ecosystem has a positive impact on society—that it facilitates the understanding, trust, and problem-solving needed for a well-functioning civilisation. In particular, I focus on ensuring that new technology is supporting our information ecosystem instead of harming it through misinformation, harassment, and polarisation. This means working with and for a wide variety of organisations across civil society, including my Thoughtful Technology Project, raising awareness about these challenges and working to help develop solutions.

How are deepfakes impacting our everyday lives?

“Deepfake” means different things to different people. On a day-to-day basis, the manipulation of images, video, and audio may just mean that you have fancier Snapchat filters and clearer audio on your internet calls. Of course, I know that what you are really asking is what the negative impacts will be going forward.

Right now, deepfakes are already being used to harass and defame people, especially women. Existing deepfake technologies are also being used to cast doubt on the authenticity of real recordings. And there are many other potential impacts of fakes on politics, business, bullying, and more.

How can we protect ourselves from them?

There is no silver bullet here. As I wrote in 2018,

“we need massive investment across industry, civil society and government, to understand and mitigate threats to our information ecosystems.”

This means infrastructure for monitoring, detection, and authenticity tracking, none of which will work perfectly. It means creating better incentives across our media environment overall. And it means ensuring that the people building tools which allow others to create deepfakes do so responsibly.

How will machine learning reshape the way we live?

In essence, machine learning is the ability to teach a computer to do tasks that are difficult to program directly. This will impact many aspects of our lives, but I’ll focus on one impact on our information ecosystem. As computers gain more and more capabilities, they can be used to mimic humans more and more effectively.

This is very useful when you want an automated “magical” personal assistant—but it is disturbing when you think of people creating millions of such “assistants” that are used not just to help you with daily tasks, but to persuade you of whatever their controllers would like. We already see simplistic versions of these “persuasion machines” in the algorithms of sites like YouTube (which just want you to watch more) and in the rare sophisticated bot—but as the technology becomes more capable, profitable, and effective, they may become ever more pervasive.

You have said that language models can write articles that appear "at least superficially realistic". Should journalists start to worry? Is deepfake writing on its way?

Not exactly. There are a few simple cases where automated journalism has made sense, for example around sports games that have structured data. But in general, journalists need to actually report and write truly coherently, and that is not something automated systems can do yet. That said, there are potentially ways to bring down the costs of truly important journalism like fact-checking through automation — and that will likely increase the overall profitability and impact of such journalism rather than decrease it.

Should the machine learning community somehow regulate research so as to reduce the harmful impact of its misuse?

“Regulate” is a strong word.

The machine learning community should definitely better understand the threats around misuse. It’s fascinating to see the transformation that people experience as they engage more directly with the risks, instead of just assuming they are negligible.

The working paper that I co-authored with Jess Whittlestone goes into the threat model in more detail and proposes some next steps to help mitigate harms. For example, we suggest supporting optional expert evaluations of the impacts of research proposals, so that the burden does not fall entirely on individual researchers (who may not have the relevant expertise). I have also written a number of recommendations for minimising the negative impacts of deployed systems, which I expect to publish shortly. One obvious example is ensuring that generated imagery is watermarked in a fairly robust way.
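To make the watermarking idea concrete, here is a minimal, purely illustrative sketch — not Ovadya's proposal, and far less robust than real provenance schemes, which must survive compression, cropping, and re-encoding. It shows, under those stated assumptions, how a generator might embed a short provenance tag in an image's least-significant bits; the tag string and function names are hypothetical.

    import numpy as np

    def embed_watermark(pixels: np.ndarray, message: str) -> np.ndarray:
        """Write the message's bits into the least-significant bit of each pixel value."""
        bits = np.unpackbits(np.frombuffer(message.encode("utf-8"), dtype=np.uint8))
        flat = pixels.flatten()  # flatten() returns a copy, so the input image is untouched
        if bits.size > flat.size:
            raise ValueError("image too small to hold the message")
        # Clear each least-significant bit, then set it to the corresponding message bit.
        flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
        return flat.reshape(pixels.shape)

    def extract_watermark(pixels: np.ndarray, length: int) -> str:
        """Read back `length` bytes of message from the pixel least-significant bits."""
        bits = pixels.flatten()[: length * 8] & 1
        return np.packbits(bits).tobytes().decode("utf-8")

    if __name__ == "__main__":
        # A stand-in for generator output; "generated-by:model-x" is a hypothetical provenance tag.
        image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
        marked = embed_watermark(image, "generated-by:model-x")
        print(extract_watermark(marked, len("generated-by:model-x")))  # -> generated-by:model-x

A scheme like this is trivially removable, which is exactly why the interview stresses that watermarking must be done in a fairly robust way by the people building the generation tools.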

And how about governments?

There are some fairly limited areas where laws make sense, but this mostly comes down to ensuring that existing laws also apply to online spaces and generated imagery. Most importantly, we need more funding to create all of the needed infrastructure and to measure its effectiveness.

Governments should be much more aggressively funding research and deployment of mitigations.

As we adjust our lives to machine learning, what should change in our education models to mitigate the harmful impact it might have?

Beyond just machine learning, as our world changes faster and faster, we need better lifelong education to keep our civic systems functioning. Ongoing web literacy education is a start, but it’s just the beginning.

 

Will machine learning be able to fake friendship one day?

It already has. It just depends on what sort of friend you are looking for!