Over the past year, Facebook has made it clear that it wants to improve its reputation with the public. Since pivoting to privacy, the company has poured money into global advertising to offset some of the fallout from its security mishaps, such as the infamous 2018 Cambridge Analytica scandal.

Part of Facebook’s push toward privacy includes an increased focus on private groups and messaging. That focus is visible in some of the ads Facebook is producing, like those in its “More Together” campaign. The ads are meant to get people thinking about the meaningful exchanges Facebook enables, but there’s a downside to the company’s new emphasis on groups.

Facebook has always struggled with content moderation. By encouraging users to take their Facebook experience into closed, private forums, such as groups or messaging, the company has essentially skirted much of its responsibility as a platform. Recently, reports found Border Patrol officers in a closed Facebook group joking about migrants’ deaths and sharing sexist memes, many of them targeting Latina members of Congress.

In response to the reports, many argued that Facebook as a platform couldn’t be blamed: if something occurs in a private group and isn’t reported, there’s no way for Facebook to know about it. That viewpoint ultimately benefits Facebook as a company, allowing it to essentially shrug its shoulders at what’s happening.

There have been numerous cases this year alone of people congregating in secret Facebook groups to threaten vulnerable communities. In June, a report from Reveal found hundreds of current and former members of law enforcement in a closed Facebook group promoting hate speech and bigotry. Last week, CNN discovered the secret Facebook group “The Real CBP Nation,” which has about 1,000 members and features posts mocking family separation, memes demeaning Rep. Alexandria Ocasio-Cortez, and other racist imagery.

Hate speech online isn’t limited to Facebook, but the company needs to take responsibility for what happens on its platform. There’s a reason people feel comfortable enough to create groups of that size on Facebook and use them to threaten violence against vulnerable communities.

From policy definitions of white supremacy so narrow that many instances went unchecked, to uneven moderation that banned Black users for discussing their own oppression, Facebook has built a platform that emboldens hate. The company’s algorithms share some of the blame, since they can bring members of hate groups together, but fundamentally Facebook itself is the problem.