DFS Voices – The Centre for Research & Technology, Hellas. Winner of Tech Against Disinformation

Symeon Papadopoulos is a researcher at the Centre for Research and Technology-Hellas (CERTH). Founded in 2000, CERTH is one of Greece’s leading research centres and one of the winners of the Digital Future Society call for solutions: “How can technology help fight disinformation by optimising fact-checking processes?”.
Learn more about the Tech Against Disinformation call

Their proposal, the DeepFake Detection Lab (DFD Lab), aims to help fact-checkers tackle the challenge of identifying AI-manipulated faces in images and videos with practical examples and a detection tool.

Symeon, would you explain your proposal to us a bit and how it works? What is the objective of your solution, and how is it supposed to be used?

First of all, thank you very much for having me; I'm honoured to receive this grant to carry out this project. Our goal is to tackle the significant problem of deep fakes. For those unfamiliar with the technology, deep fakes refer to content that is partly or wholly generated by artificial intelligence algorithms. There are already many well-known cases on the web, like when people swap an unknown person’s face with a celebrity’s face to make it look as if the celebrity is speaking or involved in some kind of incident.

Another very harmful application is so-called revenge porn, which can target celebrities as well as ordinary citizens, and it has obvious negative implications for everyone. So our project and our proposal try to help fact-checkers and journalists analyse online videos and images and decide whether a video is fake or not. The problem is complex from both a scientific and a technical point of view. For that reason, we don't just propose a simple detection service, which is the standard approach.

We also want to allow fact-checkers and journalists to keep track of the many existing cases they have already flagged. We do this through a repository, a database of known deep fakes. We also support a kind of reverse video search: they can insert a video as input and then find out whether it is a known case from this database of deep fakes. And of course, we also provide a deep fake detection service to help them in this process. So, in a nutshell, that is about it.
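As an illustration of the reverse video search Symeon describes, here is a minimal sketch of the general idea: hash sampled keyframes of a query video and match them against a repository of hashes for known deepfake cases. The DFD Lab's actual matching method is not public; this sketch uses a simple perceptual (average) hash, and names like `KNOWN_DEEPFAKES` are hypothetical.

```python
import cv2  # pip install opencv-python

def average_hash(frame) -> int:
    """64-bit perceptual hash: threshold an 8x8 grayscale thumbnail at its mean."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    thumb = cv2.resize(gray, (8, 8), interpolation=cv2.INTER_AREA)
    mean = thumb.mean()
    bits = 0
    for px in thumb.flatten():
        bits = (bits << 1) | (1 if px > mean else 0)
    return bits

def video_hashes(path: str, every_n: int = 30) -> list[int]:
    """Sample one frame every `every_n` frames and hash it."""
    cap = cv2.VideoCapture(path)
    hashes, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            hashes.append(average_hash(frame))
        idx += 1
    cap.release()
    return hashes

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical repository: case label -> precomputed keyframe hashes.
KNOWN_DEEPFAKES: dict[str, list[int]] = {}

def lookup(query_path: str, max_distance: int = 10) -> list[str]:
    """Return labels of known cases sharing at least one near-duplicate keyframe."""
    query = video_hashes(query_path)
    return [
        label
        for label, stored in KNOWN_DEEPFAKES.items()
        if any(hamming(q, s) <= max_distance for q in query for s in stored)
    ]
```

In practice, production systems typically use learned video embeddings rather than simple hashes, since perceptual hashing is fragile against re-encoding and cropping; the sketch only conveys the lookup-against-a-repository idea.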

 

Since you've been working on this topic, how worrying do you think misinformation is nowadays?

Well, first of all, online misinformation in the broader sense is much more threatening. In our proposal, we are talking about one specific form of misinformation. If you look at the latest trends, the most prevalent types of disinformation are not deep fakes; there are much simpler ways to spread propaganda and sow confusion among people. But we foresee that deep fakes will become much more critical and risky as the technology matures and becomes more convincing and realistic, because people are by nature programmed to believe images and sound.

Nowadays, due to the pandemic, we spend so much time teleconferencing and watching videos over the internet that we have somewhat lost touch with the real world. So if someone is able to create compelling fake videos, you understand that this gives them the power to manipulate opinions, even on an enormous scale. I think it is a concerning development, and we should be paying attention.

 

Where do you think technology stands in this whole disinformation issue? Why do you find it essential to use technology to tackle these problems?

Well, technology, of course, is only one part of the solution; we cannot rely entirely on it. But because the problem itself is technology-based, and all these phenomena and risks are facilitated by technology, we also need a defence that is similarly technology-powered. It's a kind of arms race, unfortunately. So we cannot hope that manual analysis by journalists and fact-checkers alone will be sufficient to address this issue. We think that the societal, political, and regulatory solutions that are also in motion need to be complemented and assisted by appropriate technological tools.

 

In your experience, how would an optimal fact-checking process work?

Well, that's an excellent question, and not a solved one. At least according to our proposal, the fact-checker and the journalist should always be in control of the process, and they should have a range of powerful tools in their arsenal. They should also be able to give feedback to the system based on their own experience.

They should be able to contribute existing cases, because these cases help build a repository that we can check against, and they are also useful for retraining AI models in the future. It's even beneficial for training young journalists and fact-checkers, because you can show them past cases so they are better prepared. And, of course, we need deepfake detection services that are easy to use and easy to understand, and whose results are transparent.

You need to help journalists with additional explanations and additional resources.
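To make the transparency requirement concrete, here is a hypothetical sketch of what an easy-to-understand detection result might look like: a score paired with a plain-language explanation and supporting resources. The field names are illustrative, not the DFD Lab's actual output format.

```python
from dataclasses import dataclass, field

@dataclass
class DetectionResult:
    video_id: str
    fake_probability: float  # 0.0 = likely authentic, 1.0 = likely fake
    explanation: str         # plain-language rationale for the score
    flagged_segments: list[tuple[float, float]] = field(default_factory=list)  # (start_s, end_s)
    resources: list[str] = field(default_factory=list)  # e.g. links to similar known cases

# Illustrative example of a result a fact-checker might see:
result = DetectionResult(
    video_id="example-001",
    fake_probability=0.87,
    explanation="Face region shows blending artifacts around the jawline "
                "in the flagged segments.",
    flagged_segments=[(12.0, 15.5)],
    resources=["https://example.org/known-case-42"],  # hypothetical URL
)
```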

 

And the last question: who do you think is ultimately responsible for taking action so that we do not become a misinformed society?

Yes, there is not a single actor who can solve the problem; the responsibilities are shared, and multiple stakeholders need to come together. We saw how challenging this complexity became during the pandemic, because it was not simply a matter of putting the facts together and communicating them to citizens. You need a whole set of measures, and a whole range of scientific fields to study the problem. So I believe, of course, you need technological tools.

We need regulation, which the European Commission is trying to put forward with the AI Act; there's also the Digital Services Act, which also tries to address this. And there are, of course, aspects related to education. I've recently been giving presentations at high schools in Greece, and I was surprised at how familiar children are with this problem. They can very easily understand the implications and the solutions that we try to build.

So I think this problem needs to be brought to students’ attention even from a young age, and we also need to educate older people, who are more vulnerable to these types of campaigns and risks.

Of course, I think we also need further research on the problem, because it's a rapidly evolving field; new technologies are presented every day. We also need to study better, for instance, how the human brain processes this type of information, and see how we can use that knowledge to be better prepared. So there are so many things that we need to do, and we are still, I think, at the beginning.