Q&A

This is how advertising tech promotes hate and disinformation, with Nandini Jammi

“Thousands of brands are inadvertently advertising on disinformation efforts across the web.”

Tags: Hate, Nandini Jammi, Sleeping Giants



Nandini Jammi is the co-founder of Check My Ads, a brand safety consultancy that helps advertisers keep their ads away from online hate and disinformation, so their ad dollars stop funding toxic content and extremism.

Jammi considers it her mission to solve the root of the problem she first brought attention to through Sleeping Giants, the campaign that notifies brands on social media when their ads appear on U.S.-based propaganda sites. That campaign has led several companies to pull their ads from different platforms, but thousands of brands are still inadvertently advertising on countless disinformation, hate and fraud operations across the web.

She and her partner Claire Atkin aim to put disinformation out of business and restore direct relationships between advertisers and media, “providing training and education, so marketers can be intentional around what they support through their ad dollars”.

How does the business model of social media work? Is it true that their profits thrive on racist and other objectionable speech?

Social media platforms mine their users’ data so they can sell ads. The more time users spend on their platforms, the more profitable it is for these companies.

Hateful and divisive posts make people angry and drive clicks – they’re effectively the best clickbait around. This kind of content enjoys some of the highest engagement numbers, which is one reason why tech companies are so reluctant to enforce their rules. If they did, they would risk losing those high levels of engagement and therefore, profits.

Is content liability necessary for social media platforms, and would their business model still be viable in that case?

Social media platforms like Twitter and Facebook already have Community Guidelines and Acceptable Use Policies in place. They could enforce them and still have a viable business model. 

This is a core strategy for Pinterest, another major social media platform. According to Pinterest’s former public policy lead Ifeoma Ozoma, Pinterest has used its willingness to enforce its policies as a key marketing tactic.

Yes, it does cost more to maintain a global army of human content moderators than to run an automated process. But even if they were to hire these teams, these social media businesses would still be wildly profitable.

 

The Check My Ads website says it was founded “in response to the most pervasive problem in the advertising industry: marketers are in the dark about where their ads end up online”. Are companies supporting hate speech and racist websites with their digital ad buys without knowing it?

We have found that despite their brand safety claims, nearly all adtech vendors that advertisers rely on are partnering with bad faith publishers. This means that if you’re advertising on the open web, you are likely to be supporting hate speech and disinformation.

Can you give us some examples?

Sure. In one audit, we found that Criteo, a France-based retargeting company, was placing ads for its client Headphones.com on dozens of disinformation and hate sites. It was not only wasting a significant percentage of the client’s budget, but also endangering the client’s brand without its knowledge or consent.

In our newsletter BRANDED, we explored the disconnect between what adtech vendors say and what they actually do.

I focus mostly on ad-funded hate speech here in the U.S., but this is a global problem too. I have found that adtech vendors are failing to identify bad faith publishers in other countries and cultures.  

For example, I flagged an Indian hate site to Rubicon Project (now Magnite) and they quickly dropped it (and thanked me for bringing it to their attention). Until then, many of their clients (advertisers) were likely funding the site.

 

Are ad placements based on algorithmic decisions and how do these work?

Yes, it’s a completely automated system. Brands set their parameters, including the segments or audiences they want to target and how much they’re willing to spend per impression. The algorithms then place their ads across the internet accordingly. 

Many brands employ brand safety technology to ensure their ads don’t appear alongside hate speech. The problem is, the technology doesn’t work.
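To picture the mechanics she describes, here is a minimal, hypothetical sketch of programmatic placement: a brand sets audience targets and a maximum bid per impression, an exchange awards the impression to the highest eligible bidder, and a naive keyword blocklist stands in for the “brand safety technology”. None of this is Criteo’s, Facebook’s, or any real vendor’s system; the brand, page, segments, and blocklist below are invented for illustration.

```python
# Hypothetical, simplified sketch of programmatic ad placement (not any
# vendor's real system). A keyword blocklist acting as the "brand safety"
# layer is easy to sidestep: a disinformation page that avoids the blocked
# words still wins the impression, and the ad dollars land there anyway.

from dataclasses import dataclass, field

@dataclass
class Campaign:
    brand: str
    target_segments: set                  # audiences the brand wants to reach
    max_cpm: float                        # most the brand will pay per 1,000 impressions
    blocked_keywords: set = field(default_factory=set)  # the "brand safety" layer

@dataclass
class Impression:
    url: str
    user_segments: set                    # audience segments attached to this user
    page_text: str

def eligible(campaign: Campaign, imp: Impression) -> bool:
    """A campaign bids only if the audience matches and no blocked keyword appears."""
    audience_match = bool(campaign.target_segments & imp.user_segments)
    text = imp.page_text.lower()
    brand_safe = not any(word in text for word in campaign.blocked_keywords)
    return audience_match and brand_safe

def run_auction(campaigns: list, imp: Impression):
    """Return the winning campaign for this impression: the highest eligible bid."""
    bidders = [c for c in campaigns if eligible(c, imp)]
    return max(bidders, key=lambda c: c.max_cpm, default=None)

if __name__ == "__main__":
    headphones = Campaign(
        brand="AcmeAudio",                                  # invented brand
        target_segments={"audio-enthusiasts", "in-market-electronics"},
        max_cpm=4.50,
        blocked_keywords={"terrorism", "beheading"},        # legacy-style blocklist
    )
    # A disinformation page that never uses the blocked words sails through
    # the keyword filter, so the algorithm places the ad there regardless.
    imp = Impression(
        url="https://example-disinfo-site.test/article",    # invented URL
        user_segments={"audio-enthusiasts"},
        page_text="Shocking claims about a stolen election...",
    )
    winner = run_auction([headphones], imp)
    print(winner.brand if winner else "no bid", "->", imp.url)
```

In this toy setup the placement decision never looks at who publishes the page, only at the audience segments and a keyword list, which is the gap Jammi points to when she says the technology doesn’t work.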

Following the advertising boycott battering Facebook, its policy and communications chief, former British Deputy Prime Minister Nick Clegg, issued an open letter, “Facebook does not benefit from hate,” touting its efforts to police its content. What's your opinion of this stance?

Nick Clegg should look into the enormous body of evidence that points to the fact that Facebook does, in fact, benefit from hateful content on its platform.

Why is Twitter taking a different approach?

I can’t speak for Twitter, but I can guess that Twitter does a better job of listening to users.

Why is brand safety necessary?

Originally, brand safety technology was adopted by marketers who wanted to keep their ads away from ISIS propaganda and beheading videos.

But the idea of brand safety has become more complex, now that the public has become more aware of how ad dollars fund hate speech and disinformation across the web. They are also willing to call attention to it on social media, which presents a major risk to brands.

Campaigns like Sleeping Giants and March for Our Lives have given consumers a voice and a way to speak out when they find that a brand’s marketing dollars are negatively impacting their lives and society at large.

This puts brands in a tough place, because there are so many channels through which to advertise – from programmatic to TikTok – but they still need to ensure they’re appearing alongside brand-safe content.

There is a way out though. Brands can start making intentional decisions around which organizations, outlets and creators they want to support based on their company values. That’s true brand safety.