Q&A

Should big tech be held accountable for their content? By Joan Barata, @Stanford

“There are cases where the connection between violence and other objectionable speech and the business of private data seems to exist, but this situation is not the only possible one.”

Tags: disinformation, fake news


Joan Barata works on freedom of expression, media regulation and intermediary liability issues. He is a Research Fellow at the Program on Platform Regulation at the Cyber Policy Center of Stanford University. The program focuses on current and emerging law governing Internet platforms, with an emphasis on the consequences of such laws for the rights and interests of Internet users and the public.

He teaches at various universities in different parts of the world and has published a large number of articles and books on these subjects, in both the academic and the popular press.

His work has taken him to most regions of the world, and he is regularly involved in projects with international organizations such as UNESCO and the Council of Europe.

Should big technology companies be held accountable for how they spread disinformation or police hate speech?

First of all, we need to take into account that disinformation and hate speech are two completely different categories. The latter is clearly forbidden by international human rights law and punished under criminal law in most States in the world. Disinformation, by contrast, can have negative effects within societies and represent a clearly unethical exercise of journalistic activity, but it is not illegal per se. In fact, most forms of disinformation are protected under freedom of expression clauses (at both the international and the national level).

Apart from this, technology companies, and intermediaries in particular, should not be held responsible for the dissemination of content they did not directly author. In addition, laws that oblige intermediaries to show results in terms of eliminating hate speech or disinformation have the effect of pushing these companies to over-remove speech, thereby affecting the right to freedom of expression.

All this being said, there are current discussions about imposing on intermediaries what can be called systemic obligations when it comes to content moderation (having clear internal rules, flagging mechanisms, transparent procedures, etc.), which in any case appear to be the most balanced option for dealing with the problems mentioned above.

Section 230 is now under scrutiny, with critics saying its liability protection extends to fringe sites known for hosting hate speech, anti-Semitic content and racist tropes, such as 8chan, the internet message board where the suspect in the El Paso shooting massacre last August posted his manifesto. Do you agree with this?

Repealing Section 230 would not make sites like 8chan disappear, nor would it prevent this negative content from being disseminated. Moreover, it would disincentivize platforms like Facebook or Twitter from moderating their users’ content in any way.

Supporters argue that without Section 230 (or with a modified version of it) online communication would be stifled and social media as we know it would cease to exist. To what extent would more content liability affect the business model of social media?

It appears that the absence of intermediary liability protections would have an important impact on the current business model of social media. Comparing the Internet before and after Section 230 offers good evidence of that. In any case, repealing Section 230 would above all have a tremendously negative impact on users’ right to freedom of expression and on the values and principles that platforms currently aim to protect via their moderation policies.

Does the GDPR or any other law limit in any form the power of that section in Europe, or are we de facto under the control of Section 230 too?

The European Union has created, over recent years, its own legal model for platform regulation, including provisions as important as the GDPR, the e-Commerce Directive, the Audiovisual Media Services Directive, and the Copyright Directive. It is also important to note the ongoing discussions about the soon-to-be-approved Regulation on tackling terrorist content online. These laws shape the behavior of platforms in the European market and even beyond it. Inasmuch as European obligations are higher and more restrictive than the ones established in the US, platforms tend to apply the former on a global scale.

Can Europe regulate on its own to prevent hate, violence and other forms of harmful content from spreading through social media if all the big tech companies are based in the US?

See the previous answer. EU law has affirmed jurisdiction over US-based platforms inasmuch as they also have subsidiaries in EU countries, established according to EU law. Moreover, many EU member States have adopted their own platform regulations, which are regularly applied to global platforms operating in their respective territories (see, for example, the German NetzDG).

How are hate, violence and other objectionable speech intertwined with the business of private data?

This depends strictly on the internal functioning of each platform. There are cases where this connection seems to exist, but this situation is not the only possible one.

Do you consider this to be a legal or ethical problem? Can we build a legal framework without an ethical one?

Law and ethics are generally connected. This being said, when we talk about speech, a clear separation must be established in order to restrict the use of the law (and its restrictive nature) to those cases where it is absolutely necessary to protect certain rights and values within a democratic society. Ethical, self-regulatory (or perhaps co-regulatory) schemes need to be applied in the remaining cases.