In March, Facebook’s live-streaming feature was used to broadcast the Christchurch shooting, in which fifty people were killed at two mosques in New Zealand.

Since the shooting, Facebook has understandably come under fire and scrutiny from both the public and government officials. In April, the company was even called to testify before the House Judiciary Committee, alongside Google, at a hearing on the rise of white nationalism online.

Today, Facebook announced it is implementing a new “one-strike” rule for Facebook Live. Users who violate the company’s most serious policies, such as its Dangerous Organizations and Individuals policy, will be prohibited from using the feature for a set period of time.

“Following the horrific terrorist attacks in New Zealand, we’ve been reviewing what more we can do to limit our services from being used to cause harm or spread hate,” Facebook’s VP of Integrity, Guy Rosen, wrote.

The company plans to extend these restrictions to other areas of its platform, starting by preventing the same users from creating ads on Facebook.

“We recognize the tension between people who would prefer unfettered access to our services and the restrictions needed to keep people safe on Facebook. Our goal is to minimize risk of abuse on Live while enabling people to use Live in a positive way every day,” Rosen wrote.

In addition, the company plans to invest in detecting manipulated media. When the Christchurch shooting was first live-streamed, modified versions of the video quickly popped up across social media.

The company’s own algorithms were unable to detect many of those altered videos. That’s part of the reason footage of the Christchurch shooting could still be found on Facebook and Instagram in May.

Ultimately, responsibility for failing to build technology capable of catching those videos rests with Facebook, and the company acknowledged as much today.

“Although we deployed a number of techniques to eventually find these variants, including video and audio matching technology, we realized that this is an area where we need to invest in further research,” Rosen wrote.
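Rosen didn’t detail how that matching technology works, but systems of this kind generally compare compact perceptual fingerprints of content rather than exact file bytes, which is why edits like cropping, re-filming, or adding overlays can defeat them. Below is a minimal, illustrative sketch of one such fingerprint, a standard average hash; it is not Facebook’s actual system, and the synthetic frame, block size, and noise levels are all assumptions made for demonstration.

```python
# Illustrative average-hash (aHash) fingerprinting. This is a generic,
# well-known technique shown only to illustrate the class of "matching
# technology" Rosen describes; it is NOT Facebook's system.
import numpy as np

def average_hash(frame: np.ndarray, size: int = 8) -> int:
    """Downscale a grayscale frame to size x size block means, threshold
    at the overall mean, and pack the bits into one integer fingerprint."""
    h, w = frame.shape
    frame = frame[: h - h % size, : w - w % size]  # trim to block multiples
    blocks = frame.reshape(size, frame.shape[0] // size,
                           size, frame.shape[1] // size).mean(axis=(1, 3))
    bits = (blocks > blocks.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# A synthetic 256x256 grayscale "frame" stands in for a video still.
rng = np.random.default_rng(0)
original = rng.integers(0, 256, (256, 256)).astype(float)

# A lightly re-encoded copy (mild pixel noise) keeps nearly the same hash...
reencoded = original + rng.normal(0, 4, original.shape)
# ...but cropping shifts every block boundary, so the fingerprint drifts.
cropped = original[16:240, 16:240]

h0 = average_hash(original)
print("re-encode distance:", hamming(h0, average_hash(reencoded)))  # small
print("crop distance:     ", hamming(h0, average_hash(cropped)))    # much larger
```

Even this toy example shows the gap: a light re-encode barely moves the fingerprint, while a modest crop can push it past any reasonable match threshold.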

To do so, Facebook plans to partner with the University of Maryland, Cornell University, and the University of California, Berkeley.

The research will focus on new techniques to “detect manipulated media across images, video and audio” and to “distinguish between unwitting posters and adversaries who intentionally manipulate videos and photographs.”

Facebook hopes to partner with more organizations in the future.