The live-stream of the Christchurch massacre on Facebook — and its subsequent spread across the internet — further illuminated a hate problem on social media that critics have flagged for years. It left countries around the world scrambling to force tech companies to answer for their role in white nationalism’s presence online.

On Wednesday, New Zealand’s Prime Minister Jacinda Ardern announced that she’s planning a summit in Paris alongside French President Emmanuel Macron. The summit’s goal is to have industry and world leaders agree to a pledge called the “Christchurch Call” to eliminate terrorist and violent extremist content online.

In the announcement, Ardern said:

“The March 15 terrorist attacks saw social media used in an unprecedented way as a tool to promote an act of terrorism and hate. We are asking for a show of leadership to ensure social media cannot be used again the way it was in the March 15 terrorist attack… We all need to act, and that includes social media providers taking more responsibility for the content that is on their platforms, and taking action so that violent extremist content cannot be published and shared.”

Since Christchurch, social media companies have found themselves under growing scrutiny. The House Judiciary Committee even questioned Google and Facebook on the rise of white nationalism online. Ironically, YouTube was forced to shut down comments on its live-stream of the hearing due to racist and hateful remarks.

Tackling hate speech and the misinformation spread by white nationalism is going to require more than companies simply drafting new rules or putting up some fact-checking filters.

After all, Facebook did finally ban white nationalism and white separatism on its sites — only to then say that a video promoting white nationalism didn’t fall under the ban. Facebook later removed Canadian white nationalist groups and major far-right UK groups, but not under its white nationalism, white separatism, or white supremacy ban.

Hesitancy to name white supremacy for what it is means it can continue to fester. Facebook’s refusal to accurately label the hate on its platforms shows how hollow bans can be.

Meanwhile, YouTube’s fact-checking tool — designed to tackle misinformation — surfaced articles about 9/11 alongside live-streams of the Notre Dame fire. The misfire highlighted larger concerns about how conspiracy theories develop and spread on the platform.

“Social media platforms can connect people in many very positive ways, and we all want this to continue,” Ardern said. “But for too long, it has also been possible to use these platforms to incite extremist violence, and even to distribute images of that violence, as happened in Christchurch. This is what needs to change.”

The May summit will be held alongside the Tech for Humanity meeting of G7 Digital Ministers and France’s separate Tech for Good summit. It is unclear who plans to attend, but Ardern has reportedly spoken directly with Facebook CEO Mark Zuckerberg.