In mid-March, a shooter massacred 51 Muslims at two mosques in Christchurch, New Zealand during Friday prayers. The horrifying nature of the act became worse when people learned that the shooter had live streamed 17 minutes of the attack through Facebook Live. Tech companies scrambled to delete videos of the shooting, which could still be found on both Facebook and Instagram in early May. The shooting eventually raised deeper questions about the dangers of live streaming.

Live streaming was introduced to the world as a way to share experiences in real time. Despite those original intentions, it has become yet another way for people to use the internet to spread hate. Following the Christchurch shooting, Facebook came under fire for its lack of regulations, which allowed the video to both stream and spread. The company ended up imposing tighter restrictions on live streaming to prevent future abuse.

A key aspect of Facebook’s restrictions is its new “one-strike” policy. If a user posts content that violates Facebook’s community standards anywhere on the site, they will now be barred from using the live streaming service for a set period of time. The restrictions also apply to violations of Facebook’s Dangerous Individuals and Organizations policy.

“In an effort to prevent and disrupt real-world harm, we do not allow any organizations or individuals that proclaim a violent mission or are engaged in violence, from having a presence on Facebook,” the policy states. 

However, the issue with Facebook Live and similar services is that moderation is left up to platforms that have already failed before. It’s important to remember that the Christchurch shooter didn’t appear out of nowhere, but instead echoed white supremacist rhetoric. In addition, the shooter released a manifesto in which he identified some YouTubers as inspiration. At the beginning of the live stream, the shooter even name-dropped PewDiePie, whose real name is Felix Kjellberg.

While Christchurch may be one notable example of live streaming being used to spread hate, it’s not the first time that social media has been used to do so. Even as Facebook attempts to place restrictions around its live streaming service, the platform continues to struggle with white supremacist content and other forms of hate. That includes multiple private groups, such as one composed of law enforcement officers promoting hate speech and a Border Patrol group whose members laughed about migrant deaths.

There is no single, easy solution to this problem short of platforms completely disabling their live streaming features. Currently, Facebook is working on building AI to help it spot violent content, but even that has its problems. As noted by The Verge, AI is fine at removing content that humans have already identified as unwanted. However, AI can’t detect the nuances between videos that people can.

For example, after Philando Castile was shot by police, his girlfriend, Diamond Reynolds, began streaming on Facebook Live. The stream opened up difficult conversations about how Black death is consumed as it became unavoidable across news channels, but it also made it harder for people to lie about what occurred immediately after the shooting.

In that instance, it was also an act of safety, as Reynolds had her four-year-old daughter in the car. If content moderation were left solely up to AI, it might have ended the live stream, putting Reynolds at increased risk. Facebook did remove the live stream from Reynolds’s page at one point, but restored it a few hours later.

Live streaming carries real dangers, especially as people become increasingly comfortable expressing hate online. Social media platforms need to do more to make themselves unwelcoming to that type of behavior. Otherwise, people will use whatever tools a platform offers to spread their message.