On Twitter, “Jack, ban the Nazis” has become an increasingly common refrain. Although members of vulnerable communities have spoken out about white supremacists on Twitter for years, the company has done little in response.

Now, Twitter has finally decided to research whether white supremacists belong on the platform, a Motherboard report revealed. However, it may be too little, too late.

Twitter’s head of trust and safety, legal and public policy, Vijaya Gadde, told Motherboard that Twitter believes “counter-speech and conversation are a force for good, and they can act as a basis for de-radicalization, and we’ve seen that happen on other platforms, anecdotally.”

That belief appears to be the main motivation behind Twitter’s new research. Gadde went on to say:

“We’re working with them specifically on white nationalism and white supremacy and radicalization online and understanding the drivers of those things; what role can a platform like Twitter play in either making that worse or making that better? Is it the right approach to deplatform these individuals? Is the right approach to try and engage with these individuals? How should we be thinking about this? What actually works?”

However, Gadde’s comments seem to blatantly ignore just how deep the problem runs on Twitter — and the role that the platform has played in the spread of hate speech and white supremacy.

It is difficult to see how Twitter could propose to “de-radicalize” anybody when its refusal to crack down on white supremacists has allowed the platform to become a space for them to organize. This is seen clearly in the accounts Twitter allows to remain up (and even verifies), such as that of former KKK leader David Duke.

White supremacists’ organizing on Twitter is also visible in the coordinated attacks launched against vulnerable communities. For example, Black feminists created the hashtag #YourSlipIsShowing after trolls began masquerading as Black women on Twitter.

Slate’s Rachelle Hampton wrote a report covering the hashtag’s creation and how little attention the campaign received.

“But despite the evidence that harassment campaigns fueled by a noxious mixture of misogyny and racism spelled out a threat to users from vulnerable groups, Hudson and Crockett felt that Twitter basically did nothing,” Hampton wrote.

Twitter isn’t actually a great platform for facilitating conversations, especially not the ones necessary for “de-radicalization.” Becca Lewis, a research affiliate at Data & Society, told Motherboard:

“Counter-speech is really appealing and there are moments when it does absolutely work, but platforms have an ulterior motive because it’s a less expensive and more profitable option. When you’re talking about counter speech, what that is assuming is an environment where people are generally interested in engaging in good-faith conversations and having their minds changed. That is often extremely far from the case on Twitter, where networked harassment campaigns are common, and white nationalists often take part in those campaigns.”

Recently, a team of Italian researchers found that “Twitter not only fails to enhance intellectual attainment but substantially undermines it,” as reported by The Washington Post.

The biggest issue with Gadde’s comments is this: people don’t sign up for Twitter to confront white supremacists. By putting the burden on its users, Twitter is once again evading its own responsibility for allowing the problem to fester for so long.

If Twitter is genuinely interested in confronting white supremacy on its platform, the company needs to listen to the communities most impacted. Right now, Twitter is essentially proposing that it investigate itself — and that approach has rarely worked before.