Racial bias remains a prevalent issue in artificial intelligence (AI), and one online company has recently been called out over its AI feature.

In November 2022, Canva, a graphic design platform, announced its image-generating app, Text to Image.

“We’ve invested heavily in safety measures that help the millions of people using our platform ‘be a good human’ and minimize the risk of our platform being used to produce unsafe content,” Canva shared in the announcement. “For Text to Image this includes automated reviews of input prompts for terms that might generate unsafe imagery, and of output images for a range of categories including adult content, hate, and abuse.”

What was initially presented as “safe, responsible and ethical technology” and a feature to “empower” Canva’s community has ruffled some feathers.

In a LinkedIn post, Adriele Parker, a DEI thought partner, shared that when she asked Canva's Text to Image app to generate an image of a Black woman with a particular hairstyle, the results weren't pleasant.

“I was playing around with Canva’s text-to-image app and prompted it to generate a ‘Black woman with bantu knots’ and an error appeared telling me that ‘bantu may result in unsafe or offensive content,'” she wrote. “Tell me your AI team doesn’t have any Black women without telling me your AI team doesn’t have any Black women. My goodness.”

She added, “Canva, if you need a DEI consultant, give me a shout. I’ve been a fan of your platform for some time, but this is not it. Be the change. Please.”

After Parker called out Canva, fellow LinkedIn users chimed in with their own negative experiences using the Text to Image app.

In the post’s thread, one of Canva’s leads responded to the online backlash, claiming the issue had been resolved.

“Yes we’ve fixed this in Text to Image, and have raised the elements concerns with that team too,” shared Joël Kalmanowicz, Trust & Safety Product Lead at Canva. “Thank you for flagging it Adriele Parker. These are actually a great pair of examples of the balancing act we have: on the one hand, if the safeties over-trigger like in Text to Image, it can result in perceptions like you raise here. On the other, if the safeties *don’t* trigger, we can end up showing offensive results like in the element search you’ve also highlighted.”

He continued, “Of course, we do strive for simply not having offensive representations. That isn’t easy to do at scale and feedback like yours is crucial to helping us find things that slip through the gaps.”