From Snapchat filters to FaceApp’s ability to age you, people love apps that transform their appearance. Recently, researchers capitalized on people’s eagerness to see themselves cast in a new light with AI Portraits. On the site, people uploaded a selfie, which was then turned into a classical portrait — all thanks to artificial intelligence.

Although the site is still down after crashing last week under heavy traffic, the researchers had intentions beyond just giving you something fun to do. They also wanted to reveal how bias functions within artificial intelligence.

Researchers at the MIT-IBM Watson AI Lab trained their model on 45,000 paintings. Although the portraits fed to the model spanned multiple eras, the collection skewed heavily toward 15th-century Europe, and that became quite clear in the portraits the AI churned out.

You’ll notice that not only do all the portraits give off serious Renaissance vibes, but nobody is smiling. That’s because even if you submit a selfie in which you’re showing emotion, the AI won’t reproduce it. According to Vox, the researchers wrote:

“Portrait masters rarely paint smiling people because smiles and laughter were commonly associated with a more comic aspect of genre painting, and because the display of such an overt expression as smiling can seem to distort the face of the sitter. This inability of artificial intelligence to reproduce our smiles is teaching us something about the history of art.”

Beyond erasing smiles, some users noticed that the portrait app also rendered people of color as white. Again, this is a reflection of the data used to train the AI system.

What’s important about this project is that it can help people visualize AI bias. The problems with AI trained on limited datasets have been noted before, especially with technologies such as facial recognition.

Often, facial recognition programs fail to recognize people of color, especially if they’re darker-skinned. In a test conducted by the American Civil Liberties Union, Amazon’s Rekognition falsely matched 28 members of Congress with mugshots. The false matches included six members of the Congressional Black Caucus.

As seen with the AI Portraits project, limited datasets mean that AI can be trained to replicate biases. In this case, the AI model leaned toward representing only Western art styles, and essentially rendered people of color invisible.

“This type of portraiture is quite distinctive of the Western artistic tradition,” the researchers wrote on their site, according to Vox. “Training our models on a data set with such strong bias leads us to reflect on the importance of AI fairness.”