Mark Zuckerberg announced that Meta (the company that owns Instagram, WhatsApp, and Facebook, which together serve roughly 3.3 billion daily active users) will be implementing sweeping changes to content moderation. The announcement was made via a Reel on Zuckerberg's Instagram.

Anyone who spends time online knows that, over the last five years, content has often taken precedence over real connection. Considering the incoming presidential administration and the immense influence platforms like Facebook and Instagram have over what content is seen or suppressed, it's unsurprising that Meta is overhauling its approach ahead of this political transition. This is especially relevant given the strained relationship Donald Trump has had with social media companies like Facebook and X (formerly known as Twitter) over the years.

Content moderation has become a highly polarizing issue in the United States, owing to the political nature of our cultural spaces and the increasing prevalence of misinformation, particularly since the COVID-19 pandemic. In this piece, I will discuss the changes Mark and Meta are making and explore their potential implications.

One major shift is that Meta is eliminating fact-checkers and replacing them with a community notes feature. For those unfamiliar with community notes, the feature was initially launched by Twitter's previous leadership team before Elon Musk acquired the platform in 2022. It was originally called Birdwatch, and it aimed to shift more moderation power to users rather than employees and contractors. On X, users apply to become community notes contributors, and once accepted, they can add context to tweets that may need it. Users who aren't contributors can still rate the helpfulness of notes they see on their timeline. However, community notes on X have not been particularly effective. By the time a harmful tweet gets a community note, it has already spread and caused damage. Over-relying on users to proactively engage with misinformation seems like a lazy approach to moderation, and I believe it will create more problems for average users than it solves.
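To make that timing problem concrete, here is a rough back-of-the-envelope sketch in Python. The growth curve, reach numbers, and twelve-hour note delay are all made-up assumptions for illustration, not real platform data; the point is simply that when exposure peaks early, most impressions can be served before a note ever appears.

```python
# Toy illustration with made-up numbers (not real platform data): how much of a
# viral post's reach is served before a community note shows up.
# Assumptions: impressions ramp up for a few hours, then decay at a fixed
# hour-over-hour rate; a note takes ~12 hours to be written and rated helpful.

RAMP_HOURS = 6             # hours of growth before the post peaks (assumed)
PEAK_PER_HOUR = 1_000_000  # impressions per hour at the peak (assumed)
DECAY = 0.85               # hour-over-hour decay after the peak (assumed)
NOTE_DELAY_HOURS = 12      # hours until a note becomes visible (assumed)
TOTAL_HOURS = 72

hourly = []
for hour in range(TOTAL_HOURS):
    if hour < RAMP_HOURS:
        # linear ramp up to the peak
        hourly.append(PEAK_PER_HOUR * (hour + 1) / RAMP_HOURS)
    else:
        # exponential decay after the peak
        hourly.append(PEAK_PER_HOUR * DECAY ** (hour - RAMP_HOURS))

before_note = sum(hourly[:NOTE_DELAY_HOURS])
total = sum(hourly)
print(f"Total impressions: {total:,.0f}")
print(f"Served before the note appeared: {before_note:,.0f} ({before_note / total:.0%})")
```

Under these assumed numbers, roughly three-quarters of the post's impressions land before the note does; change the assumptions and the share changes, but the early-exposure problem remains.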

Meta is also simplifying its content policies around hot-button topics such as immigration and gender, issues that gained prominence during Trump's 2016 and 2020 campaigns. I believe this policy easing will make platforms like Instagram and Facebook more harmful for marginalized groups. If individuals are no longer penalized for bigoted language, what's to stop them from using it? Over the next few years, I anticipate that marginalized communities will seek out smaller, less hostile platforms like Bluesky, where they may feel safer.

In addition to changing its policies, Meta will change how it enforces them. Previously, its automated filters scanned for any policy violation; now they will focus only on illegal or high-severity violations, and with reduced sensitivity. For violations that don't meet that bar, Meta will wait for user reports before taking action. Overall, these changes shift responsibility for safety away from Meta and onto users. While some may welcome that shift as giving them more control over their experience, Mark acknowledged that it could lead to more negative experiences for some users, calling it a tradeoff he is willing to make to avoid unjustly removing posts or accounts.
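To illustrate what that enforcement shift might look like in practice, here is a hypothetical sketch in Python. The category names, confidence thresholds, and review queue are my own assumptions for the sake of the example; Meta has not published implementation details.

```python
# Hypothetical sketch of the enforcement shift described above. The categories,
# thresholds, and actions are assumptions for illustration only; Meta has not
# published implementation details.

from dataclasses import dataclass

# Categories treated as illegal or high-severity (assumed examples)
HIGH_SEVERITY = {"terrorism", "child_safety", "fraud"}

@dataclass
class Classification:
    post_id: int
    category: str        # policy area the classifier flagged
    confidence: float    # classifier's confidence that the post violates policy
    user_reported: bool  # whether a user has reported the post

def old_enforcement(c: Classification) -> str:
    # Before: automated filters acted on any suspected violation
    # above a relatively low confidence bar.
    return "auto_action" if c.confidence >= 0.6 else "leave_up"

def new_enforcement(c: Classification) -> str:
    # Now: automation is reserved for illegal and high-severity categories,
    # with reduced sensitivity (a higher confidence bar).
    if c.category in HIGH_SEVERITY and c.confidence >= 0.9:
        return "auto_action"
    # Lower-severity violations wait for a user report before anyone reviews them.
    if c.user_reported:
        return "queue_for_human_review"
    return "leave_up"

borderline = Classification(post_id=1, category="hate_speech",
                            confidence=0.75, user_reported=False)
print(old_enforcement(borderline))  # auto_action -> would have been actioned before
print(new_enforcement(borderline))  # leave_up    -> now stays up unless reported
```

Under these assumed thresholds, a borderline post that automation would previously have removed now stays up until someone reports it, which is exactly the responsibility shift described above.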

The final notable change is that Meta is moving its trust and safety and content moderation teams from California to Texas, where they believe their teams will be less susceptible to accusations of liberal bias.

Overall, these changes suggest that Meta as a business will continue to adapt its moderation policies to the political leanings of the current administration. The more liberal policies were implemented during the tenure of a liberal president, and now, with a more conservative administration incoming, Meta appears to be easing them. Shifting responsibility for trust and safety onto users will likely degrade the overall experience, even as it appeases the president. As someone who has worked in Trust & Safety from a product perspective at TikTok, I understand how difficult it is to effectively moderate content at scale. However, I don't believe content moderation policies should be shaped by the political affiliation of the president.