Q&A

New markets of fact-checking, with Peter Stano

“The European Digital Media Observatory [...] will increase the scientific knowledge available on online disinformation but also advance the development of an EU market of fact-checking services”.



Peter Stano is the EU Spokesperson for Foreign Affairs and Security Policy.

Since 2015, the European External Action Service (EEAS) has been at the centre of the EU's efforts to address disinformation and information manipulation threats, in particular those emanating from external actors.

There are now three dedicated strategic communications task forces within the EEAS, focusing on the Eastern Partnership and Russia, the Western Balkans, and the MENA region.

How is disinformation affecting our everyday lives?

In at least two ways.

First, disinformation gradually erodes the secure foundation of facts, and with it, the level of trust needed for society to establish the truth. These are the very foundations on which European democracies function.

Second, with the COVID-19 pandemic, it became evident that accurate and truthful information is, in some instances, also a vital necessity. Disinformation can affect our wellbeing or even our lives.

According to Facebook and Twitter’s public announcements, between 2018 and 2020 the two platforms took down 147 influence or “state-backed disinformation operations”. From the Rohingya genocide to Russian campaigns targeting Libya, through the Capitol attack, how big a role did social media play?

It is clear that coordinated inauthentic behaviour that facilitates the spread of disinformation persists on the platforms. The takedowns confirm this, but they also show the platforms’ capability to take countermeasures.

An important part of the EU’s efforts in this regard has been the self-regulatory Code of Practice on Disinformation, which has been in force since October 2018 and for which the Commission has just issued additional guidance.

Even if the Code of Practice has led to very tangible improvements, it is clear that more has to be done. The flood of misleading and harmful information on social media platforms during the pandemic has posed substantial risks to personal and public health and to governments’ responses. The situation illustrated that, despite the efforts taken to date, the platforms urgently need to step up their fight against disinformation.

Therefore, the EU is reinforcing the Code of Practice to make it more robust and is guiding efforts to develop the co-regulatory framework of obligations and accountability of social media platforms.

The strengthened Code calls on platforms to improve the transparency of political and issue-based advertising; to empower users with tools to better understand and safely navigate the online environment; to improve researchers’ access to platform data; and to reinforce the integrity of platform services by covering the full range of impermissible manipulative behaviour, including evolving techniques such as deepfakes and the purchase of fake engagement, with effective countermeasures.

Does social media as it is now pose a threat for democracy?

Certain large online platforms have acquired a scale and a social and economic importance that cannot be ignored.

This makes them particularly impactful, which needs to be reflected in the level of responsibility required from them.

European norms and values, including human rights and democracy, should be enforced equally offline and online. In order to achieve this, the platforms and other relevant stakeholders should step up their measures to address gaps and shortcomings and create a more transparent, safe and trustworthy online environment.

That’s why the Commission proposed the Digital Services Act in December last year, a regulation that aims to establish one set of rules across the entire EU, setting a high global benchmark by defining clearer responsibilities and accountability for online platforms such as social media and marketplaces.

It will ensure greater accountability for how platforms moderate content, for advertising and for algorithmic processes, including against disinformation. Regular reporting and transparency measures will ensure that the societal risks and impact can be independently evaluated.

The regulator will be able to investigate concrete concerns and can require services to open up their ‘black box’ of data around disinformation.

What are the chances of states being able to combat disinformation as long as emotional disinformation and clickbait content are the basis of social media’s business model?

Our cooperation with social media platforms is widely acknowledged to be good; it ensures greater transparency and accountability and provides a useful framework to monitor and improve platforms’ policies on disinformation.

However, we do call for stronger commitments by the signatories: a more effective response to the spread of disinformation, a more consistent application of the Code across platforms and EU countries, a strengthened monitoring system with clear Key Performance Indicators (KPIs), and an adequate mechanism for the Code’s regular adaptation.

What other digital environments are prone to hosting and promoting disinformation, even if not willingly?

The tactics and techniques of information manipulation and disinformation continue to evolve, exploiting vulnerabilities wherever they might occur.

One might encounter false and misleading information not just on social media, but also on websites, in comment sections, and even in messaging apps.

Is there any specific technology to combat state-backed disinformation campaigns?

Guided by the European Democracy Action Plan, the EU is working to improve the existing toolbox for countering foreign interference, including new instruments that would allow imposing costs on the perpetrators.

But the most important means we have is the good cooperation between government institutions, academia, media, civil society, as well as social media platforms to tackle disinformation.

Only by working together and sharing knowledge can we make sure our learning curve is steeper than that of the perpetrators of disinformation.

A concrete example is the work of the EU-funded European Digital Media Observatory (EDMO) project, which started its activities in June. Led by the European University Institute in Florence, it will support the creation and work of a multidisciplinary community of fact-checkers, academic researchers and other relevant stakeholders with expertise in the field of online disinformation.

It will contribute to a deeper understanding of disinformation: the relevant actors, vectors, tools, methods, dissemination dynamics, prioritised targets and impact on society. This independent collaboration hub will increase the scientific knowledge available on online disinformation but also advance the development of an EU market of fact-checking services, and support public authorities in charge of monitoring digital media and developing new policies.