Twitter Has Updated Its Rules To Ban Dehumanizing Speech Around Religion
Photo Credit: David Paul Morris/Bloomberg via Getty Images


On Tuesday, Twitter announced updates to its rules against hate speech, expanding the policy to cover language that dehumanizes others on the basis of religion. The change grew out of a call Twitter put out last year asking for help rewriting its policies around dehumanizing language, in acknowledgment of “how it can lead to real world harm.”

Initially, Twitter proposed a policy in 2018 against “content that dehumanizes others based on their membership in an identifiable group.” However, the company received responses from people across the world who criticized the policy for being too broad.

It’s a fair point, because policies against dehumanization that fail to account for the power structures behind it — such as xenophobia, anti-Blackness, etc. — further contribute to the problem. By speaking only broadly of “identifiable groups,” Twitter failed to consider who is most often attacked, and for whom those attacks carry direct, real-world consequences.

The company gave examples of tweets that would not be tolerated, such as referring to religious groups as “rats,” “viruses,” “filthy animals,” and “maggots.” On the surface, it seems like a step in the right direction, but there are shortcomings with Twitter’s new policy. Namely, that it already existed.

Twitter’s “Hateful conduct policy” states: 

“We prohibit targeting individuals with content intended to incite fear or spread fearful stereotypes about a protected category, including asserting that members of a protected category are more likely to take part in dangerous or illegal activities, e.g., ‘all [religious group] are terrorists.’”

Twitter’s rules against hateful conduct already included a ban on promoting violence against, or directly attacking, people on the basis of religious affiliation, and the company also banned hateful imagery.

Anybody who uses Twitter is well aware of the Islamophobia and antisemitism that run rampant on the platform. According to a recent survey by the Anti-Defamation League, about 35 percent of Muslims and 16 percent of Jews experienced harassment online due to their religion.

Considering Twitter already banned targeting religious groups, it’s hard to imagine what this new policy will actually accomplish. A big part of Twitter’s problem is that it doesn’t actually enforce the policies it claims to have.

This has been clear with the number of white supremacists on the site. Although Twitter has the capability to essentially get rid of them all, the site refuses to do so because it’s afraid that machine learning algorithms would end up banning conservatives, including elected officials.

Besides, it’s 2019: the fact that Twitter is only now developing policies on dehumanizing language (a problem that has existed for centuries) shows how far behind the company is.

Right now, Twitter is in a scramble to catch up to a problem that it allowed to fester. As a result, members of marginalized communities are the ones put at risk.