Q&A

Why do we speak of infodemics, by Whitney Phillips

"Anxiety, depression, panic, trauma: all are amplified by the COVID-19 infodemic".

Tags: 'Coronavirus' 'COVID-19' 'Disinformation' 'Fake news' 'Infodemics' 'Misinformation' 'Pandemics' 'Rumors' 'Syracuse University' 'Whitney Phillips'



Whitney Phillips is an assistant professor of communication and rhetorical studies at Syracuse University. She researches and teaches classes on media literacy, mis- and disinformation, political communication, and digital ethics. She is the author of several books, including "You Are Here: A Field Guide for Navigating Polluted Information", coauthored with Ryan Milner. Phillips’s book "This Is Why We Can’t Have Nice Things" won the Association of Internet Researchers’ 2015 Nancy Baym best book award. She is regularly featured as an expert commentator in US and global news outlets, and her work on the ethics of journalistic amplification has been profiled by the Columbia Journalism Review, Nieman Journalism Lab, and the Knight Commission on Trust, Media, and Democracy.


You recently said that curbing rumors is as important as curbing germs in the fight against coronavirus. Why is that?

There are, of course, some obvious and important differences between viral spread in an epidemiological sense and viral spread in an informational sense. But when false and misleading information cascades unchecked across social media, it can be as harmful as the virus itself–not because the information infects people, but because that information can impact how people protect themselves (or not) from the virus. Those dangers are twofold. First of all, so much confusing, conflicting, unverified information online can send people into an absolute panic, triggering them to behave in harmful ways offline. Toilet paper hoarding is one example. The same deluge of online rumors can make other people shut down entirely, maybe because the stress is too much, or because the volume is too loud. Either way, those people won’t have access to the information they need to keep themselves and their families safe.

That possibility speaks to the second danger of false and misleading COVID-19 information: that true information could be drowned out in a sea of unconfirmed rumor–again risking the possibility that the public won’t know what they need to know, threatening the health of the entire community.

What are rumors and why do we share them?

Rumors can include all kinds of information, from second-hand accounts of hospital conditions to stories about neighbors who have fallen ill to details about state assistance. They’re anything that people have heard from other people, and can map onto or overlap with legends. The trickiest thing about rumors is that sometimes they turn out to be true! So, just because something is a rumor doesn’t necessarily mean it’s false. It means, at the time the information is spread, it’s not confirmed. It could be right. It could be wrong. Not knowing one way or the other, yet spreading the information anyway, can inadvertently trigger the two dangers described above.

As for why people spread them, sometimes people share rumors for malicious or underhanded reasons–but very often, particularly in times of crisis (Kate Starbird breaks this history down here), people spread rumors because they’re desperately trying to help their friends and neighbors make sense of traumatic and confusing circumstances, particularly when official information is lacking or difficult to find (or trust).  

Is the urge to share rumors stronger in the digital world than it was before?

People have always shared rumors. As Kate Starbird (linked above) explains, rumors function as “collective sensemaking” efforts. Digital spaces make it much easier to spread rumors, not just because of tools designed specifically for sharing, but because of tools designed specifically for archiving and searching. Before social media, the rumors you encountered were “your” rumors, typically confined to a specific location or affinity group. Now, we can encounter anybody’s rumor with just a few clicks. So, logistically, there are just more rumors to sift through online.

Beyond that, not knowing where a rumor originated, or what the poster’s intentions were (since by the time we encounter the rumor, it could have passed through ten million hands), makes assessing information increasingly difficult. So the impulse to share rumors isn’t new. But the consequences and complications definitely are.  

What are the consequences of sharing information that is not true?

The consequences of sharing false information can vary greatly depending on the circumstance.

In the case of a public health emergency like the COVID-19 pandemic, those consequences can be dire, even deadly. For instance, the insistence among some groups in the U.S. that the coronavirus is a “hoax”–or at the very least isn’t as bad as everyone on television is saying–undermines physical distancing, which is the only chance we have to flatten the curve.

People who have heard that the virus isn’t that bad, or that only old people with pre-existing conditions will die from it, or any other falsehood about the virus, are the most likely to engage in dangerous behaviors like continuing to visit with friends or not taking proper precautions at the grocery store. That doesn’t just put those people at risk; it puts everyone they come into contact with over the next 14 days at risk. The careful physical distancing of some can be unraveled by the physical proximity of others; any information that encourages people to take those risks poses an immediate, widespread threat to public health.

What are the intentions behind misinformation?

Misinformation is false information shared unintentionally. That’s very different from disinformation, which is false information shared deliberately. In other words, people who spread misinformation aren’t trying to share false information. It’s important to recognize that there’s no ill intent (the people who share misinformation aren’t villains).

At the same time, the distinction between mis- and disinformation doesn’t really matter! The false information still spreads regardless, and in the worst cases, can be intercepted and weaponized by people who do seek to sow chaos and confusion. That’s why I like to use the term “polluted information” rather than trying to parse motivations.

What matters most is the fact that the information spreads, and the impact it has on the information landscape. Why someone spreads the information matters, of course, but is something we often can’t know as a story pings back and forth between audiences. So the better questions to ask are: what does this information do, and why has it been allowed to do it?      

Can misinformation cause health issues?

Certainly. In the case of COVID-19, polluted information can lead to further epidemiological spread. Again, information doesn’t infect people in the same way the actual virus does, but it can create the conditions for the virus to infect more people.

The health effects of polluted information are more widespread than that, however; around the globe, people find themselves gripped with significant trauma and mental health struggles. Much of that stems from the effects of the virus itself, and all its social and economic ramifications–but polluted information about the virus contributes to those mental health concerns as well. Anxiety, depression, panic, trauma: all are amplified by the COVID-19 infodemic, and all can lower immunity, which is even more reason to take polluted information seriously; it can make you emotionally and physically sick.

You've also said that a communitarian-focused approach could help us navigate the current information crisis. Could you give us some details about that?

Communitarianism is an ethical approach that seeks to secure the health, safety, and future for the collective. This ethos is built into public health models; you wash your hands not just to protect yourself from germs, but to avoid spreading your germs to other people. Unfortunately, communitarianism is not built into public discourse models. Particularly in the US, there’s much more emphasis on a person’s right to say whatever they want without censorship.

It’s important to protect individual rights, of course, but it’s equally important to protect the health of the collective–for one thing, when the group is doing well, the individual is much more likely to do well also! We need to apply what’s already common sense within a public health context to how we respond to information online. The healthier our shared information spaces, the healthier we will be as individuals, feeding back into the overall health of the community.  

Can you give us some tips for distinguishing false information from true information?

This can be extremely difficult to do! In times of crisis, stories are still unfolding; details have yet to be confirmed; information is often woefully incomplete. The things that look true at breakfast might be disproven by lunch.

The most important tip is for people to remember that just because they think they’re helping, doesn’t mean they actually are. We can all contribute to the infodemic, regardless of our motivations.

The goal is to cultivate the healthiest online communities possible–and the way we do that is to think about the well-being and safety of all the other people we share our spaces with. We thrive together and we suffer together. Our relationship to information should reflect those connections.