What is Computational Propaganda and How Do Hate Groups Use It?

With the 2020 election coming up, social media companies are facing increased pressure to tackle election interference on their platforms. Ahead of the November 2018 midterms, Facebook established a “War Room” that could possibly return for the presidential elections. And last month, Twitter introduced a new “misleading about voting” report feature.

However, there’s another problem looming. Eight case studies commissioned by the Institute for the Future’s Digital Intelligence Lab show that extremists co-opt the conversations of vulnerable groups — including Latino, Muslim, and Jewish communities. The studies were first reported by BuzzFeed News.

Researchers focused on computational propaganda, or the “assemblage of social media platforms, autonomous agents, and big data tasked with the manipulation of public opinion” — i.e., digital propaganda.

The groups chosen for the studies were Muslim Americans, Latino Americans, moderate Republicans, immigration activists, Black women gun owners, environmental activists, anti-abortion and abortion rights activists, and Jewish Americans.

Researchers found multiple instances of discussions in vulnerable groups being taken over by extremists. An immigration activist told researchers that a “know your rights” flyer — designed to let people know what to do if stopped by ICE — was photoshopped with false information and spread on social media. Images meant to falsely portray activists as ICE officers were also found online.

Even national organizations were targeted by these efforts to antagonize vulnerable communities. The Council on American-Islamic Relations has had the hashtag #CAIR “taken over by haters” and used to harass Muslims, a member told researchers.

Samuel Woolley, the director of the Digital Intelligence Lab, told BuzzFeed News, “We think that the general goal of this [activity] is to create a spiral of silence to prevent people from participating in politics online, or to prevent them from using these platforms to organize or communicate.”

BuzzFeed News also reported that most complaints to the platforms haven’t been met with any action, leaving vulnerable communities distrustful that big social media companies will help them.

This research helps prove what a lot of people of color across the Internet already know: vulnerable communities are not simply being dramatic when they discuss the effects of online harassment or targeted campaigns. It’s also important to remember that members of vulnerable groups have previously raised these concerns on their own and no one listened.

Black women as a whole are particularly vulnerable online. A December 2018 study by Amnesty International declared the Twitter trolling of women a human rights issue. That same study found Black women were disproportionately targeted, “being 84% more likely than white women to be mentioned in abusive or problematic tweets.”

Long before the 2016 election and this most recent study, Black feminists on Twitter launched a campaign against trolls masquerading as Black women. As reported by Slate’s Rachelle Hampton, Shafiqah Hudson created the hashtag #YourSlipIsShowing to expose those accounts.

“But despite the evidence that harassment campaigns fueled by a noxious mixture of misogyny and racism spelled out a threat to users from vulnerable groups, Hudson and [I’Nasah] Crockett felt that Twitter basically did nothing,” Hampton wrote.

In addition to vulnerable communities feeling that social media companies do nothing to respond to targeted harassment, disinformation, and similar abuse, there’s another problem.

Researchers conducting the study on Muslim Americans wrote, “All our interviewees noted that the more pressing issue for them is when their own content is incorrectly removed and it takes a long time to contest and get it posted again.”

“Their experiences accord with a recent Amnesty International report on online abuse against women that also found serious problems with reporting mechanisms and automated content moderation,” the researchers added.

For now, Woolley and other researchers hypothesize that “social groups, religious groups, and issue voting groups will be the primary target” of this kind of activity in 2020.

“What we’ve come to understand is that it’s oftentimes the most vulnerable social groups and minority communities that are the targets of computational propaganda,” Woolley told BuzzFeed News.

If social media companies claim to be tackling election interference, they need to start taking the concerns and safety of vulnerable groups seriously.