Since the Christchurch massacre, social media platforms have scrambled to keep video of the attack off their services. Days later, it’s still not difficult to find clips or still images from it. To many, this raises questions about tech companies’ failure to regulate hate on their platforms, and about who shares responsibility in moments like this.
After the shooting, in which at least fifty Muslims were killed at two New Zealand mosques, archives of the alleged shooter’s page revealed that only 10 people had tuned into his Facebook Live broadcast of the attack, according to The Wall Street Journal.
Although the original video had few live viewers, it exploded across social media in the days following the attack. Facebook, which has faced the brunt of the criticism because its site hosted the livestream, says it removed 1.5 million videos of the shooting in the first 24 hours after the attack was broadcast.
In the first 24 hours we removed 1.5 million videos of the attack globally, of which over 1.2 million were blocked at upload…
— Facebook Newsroom (@fbnewsroom) March 17, 2019
In a thread on Twitter, the company said it blocked over 1.2 million of those videos at the point of upload. That still means roughly 300,000 copies made it onto Facebook before being taken down, a number that is staggering in its own right.
On Sunday, New Zealand’s Prime Minister, Jacinda Ardern, told reporters during a press conference in Wellington that Facebook’s chief operating officer Sheryl Sandberg had reached out, and that the two have plans to discuss the livestream.
“Certainly, I have had some contact with Sheryl Sandberg. I haven’t spoken to her directly but she has reached out, an acknowledgement of what has occurred here in New Zealand,” Ardern said.
She went on to add, “This is an issue I will look to be discussing directly with Facebook. We did as much as we could to remove, or seek to have removed, some of the footage that was being circulated in the aftermath of this terrorist attack. But ultimately, it has been up to those platforms to facilitate their removal.”
In addition to the livestream, the alleged shooter uploaded a 17-minute video to Facebook, Instagram, Twitter, and YouTube.
An Uncontrollable Spread
Part of the issue is that companies have come to over-rely on artificial intelligence software that can’t actually detect violent content as it’s being broadcast, as noted by The Wall Street Journal. Although some platforms, like Facebook, have human content moderation teams, those moderators are often overworked and traumatized, and some end up radicalized themselves.
Once a video goes up, it’s not difficult for people to download it, create copies, and slightly doctor them in order to repost. For example, The Wall Street Journal reported that one version of the video was edited to look like a first-person shooter game and then uploaded to Discord, a messaging app for video gamers.
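This is part of why slightly doctored copies are so hard to catch. A naive takedown pipeline could match the exact file hash of a banned video, but any re-encode or edit changes that hash completely, so platforms instead rely on perceptual fingerprints that survive small alterations. The companies haven’t published their exact pipelines, so the sketch below is only illustrative: the 8×8 “frame,” the helper names, and the average hash are all stand-ins for the far more robust fingerprinting (PhotoDNA-style systems, for instance) used in production.

```python
import hashlib

# Toy 8x8 grayscale "frame" with pixel values 0-255. Real systems
# fingerprint many frames per video; one is enough to show the idea.
frame = [[(x * 31 + y * 17) % 256 for x in range(8)] for y in range(8)]

# A near-duplicate: the same frame, slightly brightened, mimicking the
# kind of minor doctoring reposters use to dodge takedowns.
doctored = [[min(255, px + 3) for px in row] for row in frame]

def exact_hash(f):
    """Cryptographic hash: any alteration yields a totally different digest."""
    return hashlib.sha256(bytes(px for row in f for px in row)).hexdigest()

def average_hash(f):
    """Toy perceptual hash: one bit per pixel, set if above the frame's mean."""
    pixels = [px for row in f for px in row]
    mean = sum(pixels) / len(pixels)
    return [1 if px > mean else 0 for px in pixels]

def hamming(a, b):
    """Count of differing bits between two perceptual hashes."""
    return sum(x != y for x, y in zip(a, b))

print(exact_hash(frame) == exact_hash(doctored))             # False: exact matching misses the copy
print(hamming(average_hash(frame), average_hash(doctored)))  # 0 or near 0: perceptual matching still catches it
```

Even a fingerprint like this can be defeated by heavier edits, which is how the gamified version uploaded to Discord could slip through: alter enough of the frames and the perceptual distance grows past any matching threshold.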
Since the attack, YouTube said it has removed thousands of uploads of the video, but even the Google-owned company couldn’t stop the spread of the footage quickly enough. Elizabeth Dwoskin and Craig Timberg of The Washington Post reported that the tech giant had to take drastic measures:
As its efforts faltered, the team finally took unprecedented steps — including temporarily disabling several search functions and cutting off human review features to speed the removal of videos flagged by automated systems. Many of the new clips were altered in ways that outsmarted the company’s detection systems.
— The Washington Post
But even as tech companies, with all their engineering support, took down the video, people who wanted to see it and distribute it knew exactly where to go.
Back in 2018, Reddit quarantined its infamous subreddit r/watchpeopledie, which allowed people to do exactly what its name said: watch videos of people dying. According to TechCrunch, the subreddit shared extremely graphic videos, like footage of the 2018 murder of two female tourists in Morocco.
Despite the quarantine, people could still access the subreddit directly, and it surged with activity as users sought out videos of the Christchurch shooting. TechCrunch reported that one of the subreddit’s moderators locked a thread about the video and posted the following statement:
“Sorry guys but we’re locking the thread out of necessity here. The video stays up until someone censors us. This video is being scrubbed from major social media platforms but hopefully Reddit believes in letting you decide for yourself whether or not you want to see unfiltered reality. Regardless of what you believe, this is an objective look into a terrible incident like this.
Remember to love each other.”
Late Friday morning, Reddit finally banned the subreddit entirely, along with similar ones such as r/gore and r/wpdtalk (“watch people die talk”). A spokesperson told TechCrunch, “We are very clear in our site terms of service that posting content that incites or glorifies violence will get users and communities banned from Reddit.”
Valve also had to remove more than 100 profiles from its Steam gaming platform that praised the shooter. According to Kotaku, dozens of users on the site were paying tribute to the alleged shooter: one profile even displayed a GIF of the attack, and others called him a “saint” or “hero,” or referred to him as “Kebab Remover.”
Concern over social media’s role in promoting the attack isn’t confined to New Zealand. The leader of Britain’s Labour Party, Jeremy Corbyn, told Sky News on Sunday, “The social media platforms which were actually playing a video made by this person who is accused of murder…all over the world, that surely has got to stop.”
Corbyn went on to explain that although the responsibility rests in the hands of the operators of social media platforms, the incident calls into question how social media companies are regulated.
The spread of hateful messages is one of social media’s biggest and oldest problems.
Before Cambridge Analytica or any other misinformation battle, hate speech and harassment were already rampant on these platforms, and groups were able to use them as megaphones to spread their messages. Facebook is one of the wealthiest companies in the world, and it supposedly employs the smartest people and the best engineers. So why has this problem, one that has festered for so long, not been fixed? That’s something tech companies are going to have to start answering for.