Q&A

The influence of social media on democracies, with Harvard researcher Laura Manley

“Social media companies have a profoundly negative effect on democracies.”



Laura Manley is the Director of the Technology and Public Purpose (TAPP) Project at Harvard Kennedy School’s Belfer Center for Science and International Affairs. She leads a team of researchers, fellows, and affiliates to ensure emerging technologies are developed in ways that protect the public good. The TAPP Project was founded in 2018 by former Secretary of Defense Ash Carter to uphold public purpose in a world where recent technological innovations pose new and unforeseen risks to society.

TAPP leverages a network of experts from Harvard University, MIT, and the Greater Boston Area, along with leaders in technology, government, business, and civil society.

Facebook has admitted to having played a role in the Rohingya genocide. According to The New York Times, “The campaign […] included hundreds of military personnel who created troll accounts and news and celebrity pages on Facebook”. How are social media platforms impacting democracy around the world?

Social media companies have a profoundly negative effect on democracies.    

Democracies depend on the demos—the people—accessing, understanding, and using information to make decisions about the society they want to live in.

But as countless behavioural science studies show, we simply aren’t wired to act in a perfectly rational way: we respond most strongly to information that scares us, outrages us, and conforms to our existing biases and beliefs.

Social media companies are incentivised and designed to exploit these natural human tendencies for profit.


Social media companies become valuable by having users spend a lot of time on their sites; users spend more time on their sites when they are scared, outraged, or constantly seeing content that reinforces their preexisting beliefs—whether it is true or not. 
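
To make that incentive concrete, here is a deliberately simplified sketch of an engagement-optimised feed ranker, written in Python purely for illustration. The `Post` fields, scoring weights, and function names are invented for this example and do not describe any platform’s actual system; the point is only that an objective built around time-on-site contains no term for truth.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_outrage: float        # hypothetical model score in [0, 1]
    matches_user_beliefs: float     # hypothetical similarity to past engagement, [0, 1]
    predicted_dwell_seconds: float  # hypothetical time-on-site estimate

def engagement_score(post: Post) -> float:
    # A ranker that optimises purely for attention ends up rewarding
    # outrage and belief confirmation, the signals that best predict
    # a user lingering. Nothing here rewards accuracy.
    return (post.predicted_dwell_seconds
            * (1.0 + post.predicted_outrage)
            * (1.0 + post.matches_user_beliefs))

def rank_feed(posts: list[Post]) -> list[Post]:
    # The ruleset that decides what each user sees: an editorial
    # choice, even though no human editor is involved.
    return sorted(posts, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = rank_feed([
        Post("Calm, accurate local report", 0.1, 0.3, 20.0),
        Post("Outrage bait confirming your views", 0.9, 0.9, 60.0),
    ])
    for post in feed:
        print(f"{engagement_score(post):7.1f}  {post.text}")
```

Even in this toy version, the post engineered to outrage and confirm beliefs outranks the accurate one, because truth never enters the objective.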

 

How does this impact trust in institutions?

The results are predictable. When users see only information tailored to their existing views, they come to treat contrary information as lies.

When social media platforms enable politicians to pay them to spread lies—as Facebook does with its political advertisement policy—they sow distrust and discord. Training users to distrust politicians, and by extension the governments they serve in, hollows out faith in governing institutions. 

To be sure, social media networks do allow marginalised, dispersed individuals to connect with one another and to create virtual communities that are difficult, dangerous, or impossible to create in real life.

They have been used to coordinate and spread positive social movements around the world, from democratic movements in Africa and the Middle East to the Black Lives Matter movement in the United States. 

But on balance, it is hard to argue that social media platforms have furthered the cause of democracy, in democracies or in non-democracies.

 

Are social media platforms making enough effort to tackle this issue?

They aren’t—not even close. Because they have grown so big, it isn’t even clear that they could if they were serious about it. 

Facebook also harms democracies by allowing extremist content and conspiracy theories to flourish on its site. Look at “Facebook’s Top 10”, a daily list of the platform’s top-performing posts. Mark Zuckerberg talks about free speech while operating a powerful vector that spreads hateful, dangerous, and false content to a third of the world’s population.

Facebook was a vital vector for Russia’s disinformation campaign to interfere in the 2016 presidential election; something like 100 Russian operatives may have reached 150 million Americans with content designed to inflame, distract, and misinform. Other countries are likely to try the same in the run-up to this year’s election, but Facebook has not done enough to prevent it from happening again. One of its remedies, the Oversight Board, will not begin operating until after the election.

Facebook makes decisions that have deadly consequences; it doesn’t always act like it understands the stakes of bad decisions. When it comes to fixing its many issues, it may be too big to succeed. 

 

Social media platforms have assumed the role of news distributors, but have largely rejected the accompanying gatekeeper role of fact-checking the content they allow on their sites. Should they be held accountable for that content, specifically when it comes to hate speech, harassment, and incitement of violence?

Yes, social media companies should be held accountable for the content they deliver to users. 

Social media networks are conduits of news to their users; according to the Pew Research Center, 62% of adults in the United States report getting news from social media.  

Other conduits of news, such as newspapers, can be held accountable for publishing false information. Social media networks should be held to a similar standard.  

Predictably, social media companies would rather be viewed as platforms that host information than as publishers—a view that protects them from liability. But the algorithms that social media companies use to serve up content are rulesets that ‘publish’ information to users. 

Social media is a relatively new technology, and legislators have not yet developed regulations specific to the sector. Instead, social media platforms have been viewed through the lens of Section 230 of the Communications Decency Act, passed in 1996, which says an “interactive computer service” can’t be treated as the publisher or speaker of third-party content. This protects websites from lawsuits if a user posts something illegal, although there are exceptions for copyright violations, sex work-related material, and violations of federal criminal law.

 

Should platforms be regulated or even governed?

Of course!  

Regulation is why the food you eat and the water you drink are safe; why you can generally trust your local news outlet to give you fact-based information about your community; and why kids don’t see countless advertisements for cigarettes and sugary cereals while they watch Sesame Street.  

Right now, users spend nearly 150 minutes per day on social networking sites. Social networking is expected to reach 3.43 billion monthly active users in 2023, more than two-fifths of the world’s estimated population. Given the unparalleled reach and influence these platforms have on our global society, it is unfathomable that they would, could, or should remain completely untouched by regulation or governance.

 

Are antitrust rulings justified?

In recent years, Facebook has increasingly been viewed as a monopolist taking improper action to maintain its competitive position and economic power—putting it in a position to be potentially broken up. However, judicial antitrust rulings in recent decades have relied on the consumer welfare standard—are consumers being harmed by higher prices?—to adjudicate antitrust claims. For users, Facebook’s price is zero, complicating this approach. 

Big Tech, as it is today, is our generation’s Standard Oil. It will eventually need government intervention to protect the general public.

As we’ve seen throughout history, it is possible to craft new legislation that responds to Big Tech’s distinctive characteristics and can protect consumers, promote competition, and prevent companies from becoming too powerful.

You can read more about our work on Big Tech and Democracy in our recent report, and learn more about the TAPP Project at BelferCenter.org/TAPP.