At this point, artificial intelligence is a largely unavoidable technology. Although it’s hard to come up with an exact definition of what constitutes AI, it includes everyday objects like your smart devices, and it has crept its way into everyday algorithms too. Unfortunately, while artificial intelligence is often touted as having the capability to solve big problems, it can just as often reinforce social inequalities.

From Google’s algorithms labeling images of Black people as gorillas to predictive policing models used to target historically over-policed communities, the biases that exist within AI cannot be denied. People interact with AI every single day, either as users or as the people behind the data points that researchers rely on to train their systems. There’s no way to truly opt out of being included in the development of AI technology, and that’s why the problems within it need to be addressed.

Often, AI is placed on a pedestal intended to keep it out of human reach. When AI is regarded as a magical product, people are unable to properly critique it. But properly critiquing AI means examining who builds it: not only the developers, who are often white men bringing their own biases into products, but also the individuals behind the data. Right now, companies are racing to develop the next big thing, all driven by capitalism. In order to profit, companies need to get ahead of the competition, and in this race, data has become currency.

As a result, we see companies using people’s information in ways they didn’t consent to. Take the recent news that researchers were using YouTube videos of the mannequin challenge to train AI as an example. Researchers scraped over 2,000 videos from YouTube, and none of the people in them consented to that use. While what the researchers did wasn’t technically illegal, it raises serious ethical questions.

“The emphasis is on ‘data’ as something that is up for grabs, something that uncontestedly belongs to tech-companies, governments, and the industry sector, completely erasing individual people behind each data point,” wrote Abeba Birhane, a Ph.D. candidate in cognitive science at University College Dublin, in an article.

When the person behind the data is erased, Birhane pointed out, it becomes much easier to “manipulate behavior” or otherwise nudge users toward whatever is profitable for a company.

“The rights of the individual, the long-term social impacts of AI systems and the unintended consequences on the most vulnerable are pushed aside, if they ever enter the discussion at all,” Birhane added.

To change AI, the way data is seen and interacted with needs to undergo its own sort of revolution. After all, if the personhood behind a data point were actually seen and respected, then the National Institute of Standards and Technology (part of the U.S. government) wouldn’t have used images of vulnerable people, including pictures of abused children and dead bodies, to train its own AI programs.

The idea of reorienting how we look at data isn’t new. Organizations like Data for Black Lives seek to use data science to create concrete and measurable change in the lives of Black people, noting that data has historically been a tool of oppression. What’s important about the organization is that it accounts for data beyond the digital. Data existed long before computing, in slave ledgers, for example. Redlining was a “data-driven enterprise” as well, according to Data for Black Lives.

Along with re-examining data, Black people have begun to build their own institutions to challenge the biases found within AI. Researcher Joy Buolamwini founded the Algorithmic Justice League, a collective that aims to develop practices for accountability during the design, development, and deployment of coded systems.

Often, people suggest that the solution to AI’s biases is hiring more people of color. While that may help with industry diversity, it only tackles one part of a much larger issue. Tech as an industry pretends it doesn’t need to interact with anybody else, but if tech is to be more equitable for the people it impacts, those building it need to engage with the communities where they plan to deploy their systems, not just with people who have fancy credentials on a résumé. The tech community also needs to be in conversation with other fields, such as the humanities.

Rooting out the biases found in AI is not going to be easy, because doing so will ultimately require a social overhaul. If the biases aren’t cut off at the source, they’ll always find their way back into the technology.