Q&A

Can algorithms pose a threat to free speech?

“[...] a handful of companies and countries moderate and regulate speech for most of the world.”

Tags: 'Digital Activism' 'Big Data' 'Big Tech' 'Digital Transformation' 'Freedom of Speech' 'State Surveillance' 'Tech Ethics'


Reading Time: 4 minutes

Danny O'Brien is director of strategy at the Electronic Frontier Foundation, one of the longest-serving and leading nonprofit organizations defending civil liberties in the digital world.

EFF has worked at the intersection of law, technology, and human rights since 1990, and so has O’Brien: “If you really want to change things in the right direction, you need to anticipate the technological and societal changes before they happen. EFF was briefing judges on what email was in 1990, how the Web worked in 1994, the importance of encryption in 1998, the challenges that the music and media industries would face in the early 2000s, and the privacy disaster that our current big tech giants like Facebook and Google have brought about since before they were giants.”

Does the internet increase polarization and lead to extremism?

If you look at independent metrics that track polarization in the United States, for instance, there’s no big uptick around the introduction of the Internet — it’s mostly a steady line upwards from the Eighties on. That implies that whatever is increasing polarization here was around before the Net.

It also doesn’t seem to apply to every country. You can definitely see authoritarian or populist leaders taking advantage of the Internet across the world, just as they did with radio and television when they were new, but, again, there are plenty of states that show no such reaction.

My working theory is that the general broadening of potential sources of information has led to a widening of the range of expressed opinions. People can easily get stuck: they’re getting so much information that they think they’re seeing the whole world, when they’re just seeing a distorted part of it.

More tentatively, I do think that social media company policies and business models may have accelerated this tendency: again, it makes sense to me (and to many other critics), but I wish I had better data to go on.

Governments around the world are increasingly turning to private internet platforms as de facto regulators of internet users' speech. Germany’s justice ministry, for instance, plans to force platforms to proactively report serious cases of hate speech to law enforcement. The first draft of France’s Loi Avia created an obligation for platforms to take down ‘explicitly’ hateful content flagged by users within 24 hours, risking fines if they didn’t, but this was later deemed unconstitutional. What is your opinion on this kind of regulation?

Despite the challenges we face, I think it’s a terrible idea. We’re at the point where these large tech companies occupy such a central place online, and in the minds of politicians, that even regulations meant to limit their damage reflect the companies’ own vision of how the internet works.

Mark Zuckerberg consistently emphasizes that Facebook works very well at moderating billions of users all over the world, when all the evidence indicates that they make mistakes constantly, sometimes with terrible consequences.

Germany and France’s approach is to say “do what you’re doing, faster”. All it does is cement these companies into a permanent place within democratic states, where politicians will prop up the social media companies that claim to be doing their will, excluding potential competitors and policies with different, better ideas about how we can deal with unwanted speech online.

In response to the pressure they have been subjected to, platforms have started removing political speech, videos posted by human rights organizations, and users’ discussions of Islamic religious topics in order to combat violent extremism. What can this lead to?

Nothing good. Let’s be clear: when you’re trying to police speech at the scale of billions of people, you have to automate it (or else create a Stasi-like economy where some sizable percentage of the population is somehow moderating the rest).

But algorithms just aren’t capable of determining the subtleties of political speech — the contextual difference between, say, images promoting violent extremism and someone documenting violent extremism, or between artistic, educational, and obscene nudity. Given the tiny percentage of content, online and off, that is produced by extremists, even the smallest chance of a false positive will implicate thousands of innocent speakers.
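The base-rate arithmetic behind that point can be made concrete. The sketch below uses purely illustrative numbers — the post volume, prevalence, and error rates are assumptions for the sake of the example, not figures from the interview — to show how even a very accurate filter mostly flags innocent posts when the content it targets is rare:

```python
# Illustrative base-rate arithmetic (all numbers are hypothetical assumptions):
# even an accurate filter mostly flags innocent posts when the target content is rare.

posts_per_day = 1_000_000_000   # assume a large platform sees ~1 billion posts a day
extremist_share = 1e-6          # assume 1 in a million posts is genuinely extremist
false_positive_rate = 0.001     # assume the filter wrongly flags 0.1% of innocent posts
true_positive_rate = 0.95       # assume it catches 95% of actual extremist posts

extremist_posts = posts_per_day * extremist_share
innocent_posts = posts_per_day - extremist_posts

correct_flags = extremist_posts * true_positive_rate
wrong_flags = innocent_posts * false_positive_rate

print(f"correctly flagged: {correct_flags:,.0f}")   # ~950 per day
print(f"wrongly flagged:   {wrong_flags:,.0f}")     # ~1,000,000 per day
print(f"innocent share of all flags: {wrong_flags / (wrong_flags + correct_flags):.1%}")
```

Under these assumed numbers, roughly a thousand innocent posts are flagged for every genuinely extremist one; even shrinking the false-positive rate tenfold still leaves wrongly flagged speakers outnumbering correct flags by two orders of magnitude.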

The fundamental problem is that big tech’s overestimation of its own ability to moderate speech is pretty much what got us into this mess. Relying on that same ability for the solution is quite backward. At the very least, applying the same filters to the entire world’s speech is going to have a distorting effect, no matter how it is executed.

How does the current approach of content flagging impact online activism?

One of my earliest interactions with Facebook was around 2008 or 2009, when they shut down a Facebook group, with hundreds of thousands of members, supporting the democratic movement in Hong Kong.

I’m sure it was flagged erroneously because of a coordinated series of fake complaints from Chinese nationalists. A few years later, Facebook almost affected the path of the Arab Spring in Egypt when they deleted the account of the leading opposition organizing group just before a major protest: I’m sure that was accidental, which makes it no better.

Activists are particularly vulnerable to errors in these systems, because they have opponents who are willing to game and manipulate Facebook’s flagging system to silence them. And their work often walks the line between what is acceptable and what is too shocking or provocative to comply with social media companies’ terms of service.

Where do we draw the line between online activism and online terrorism?

Different countries and different cultures draw that line very differently. I think it’s an error to assume there’s a clear standard that should be followed by everyone.

How can governments regulate and balance the fight against disinformation, incitement and slander on one side with the right to free speech?

I do think it’s the wrong question to ask how governments might regulate these problems away, when they are issues that society constantly seeks to balance, and which we have dealt with for a very long time.

I think that there’s been this framing of the internet as a lawless place for so long that the natural instinct is to compensate by introducing shiny new, Internet-specific laws.

It often seems like what is needed is rather the fair and effective enforcement of existing law, with all its features of due process, checks and balances. I think if governments spent as much time concentrating on updating the judicial and criminal investigatory systems so that they were fit for the digital age, we might find that our old laws have enough wisdom and vigour in them to tackle these new iterations of age-old problems.

Why is it important to protect online freedom of expression?

I think one of the consequences of the growth of the digital environment is that protecting freedom of expression has become far more important, rather than more problematic. 

Such a huge part of our life is now spent in conversation, in reading and writing, in scrolling, in audio chats, video rooms and message boxes: all of it through our powers of expression.

Free speech, and limits on it, affect far more of our life than they ever have. I think this means that we need to tackle the consequences of such an expansion at the human level: creating and supporting a digital world that is as varied and complex as the conversations we’re having in it.

One size does not fit all: especially when that size seems to be a handful of companies and countries moderating and regulating speech for most of the world.