If you participated in the Mannequin Challenge, surprise: you may have helped train robots.

In 2016, the Mannequin Challenge seemed to take over everyone’s feeds. It got so big that even celebrities (Destiny’s Child, anybody?) got involved. Recently, MIT Technology Review reported that a team from Google AI scraped videos of people doing the Mannequin Challenge to train neural networks.

Researchers used the videos to train AI to better recognize depth in video. To build their dataset, the team scraped 2,000 videos from YouTube, converted them into 2D frames, estimated the camera pose for each frame, and created depth maps.
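For readers curious what a pipeline like that looks like in practice, here is a rough, hypothetical sketch in Python. It only covers the first step the article describes (pulling still frames out of a video with OpenCV) and outlines the later stages in comments; the file paths and parameters are made up for illustration, and this is not the Google team’s actual code.

```python
import os
import cv2  # OpenCV, used here to pull still frames out of a video file


def extract_frames(video_path, out_dir, every_nth=10):
    """Save every Nth frame of a video as a JPEG.

    In a depth-from-video pipeline like the one described above, these
    frames would then go to a structure-from-motion tool to estimate
    camera poses, and to multi-view stereo to produce depth maps used
    as training targets. Those later stages are only sketched in the
    comments below.
    """
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    index, saved = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break  # end of video
        if index % every_nth == 0:
            cv2.imwrite(f"{out_dir}/frame_{saved:05d}.jpg", frame)
            saved += 1
        index += 1
    cap.release()
    return saved


# Hypothetical usage on a downloaded Mannequin Challenge clip:
# extract_frames("mannequin_clip.mp4", "frames/")
#
# Next steps (not implemented here):
#   1. Estimate camera poses for the frames (structure-from-motion).
#   2. Compute a depth map for each frame (multi-view stereo), which
#      works because the people in the scene are holding still.
#   3. Use the (frame, depth map) pairs to train a depth-prediction network.
```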

As a result, the AI system was able to predict the depth of moving objects in video far better than previous methods. The technology could help develop better self-driving cars or robots that can navigate unfamiliar areas.

However, the researchers’ methods raise a lot of questions about how consent operates in the tech industry. Generally speaking, researchers are legally allowed to take information from sites like YouTube or from public places, but is it ethical?

MIT Technology Review said that the data-scraping practice is “neither obviously good nor bad.”

“As data becomes increasingly commoditized and monetized, technologists should think about whether the way they’re using someone’s data aligns with the spirit of why it was originally generated and shared,” the outlet said.

This isn’t the first time researchers have taken data for their own projects without users’ consent. In March, NBC News reported that IBM took almost a million photos from Flickr to train facial recognition systems without anyone’s consent.

That same month, Slate ran a report that revealed the government uses images of vulnerable people to train facial recognition tech. That includes images of immigrants, abused children, and dead people — all of them used without consent.

The researchers developed the Mannequin Challenge dataset to support future research, according to MIT Technology Review. Because of that, there’s really no way of knowing whose videos were chosen, let alone any way to remove them from the dataset at this point.

People may not care that their videos were used to help AI recognize depth. But with facial recognition and other technologies that carry immediate surveillance risks, many people may not want their faces involved.