This article was originally published on February 2, 2019.

We are living out the science fiction fantasies of the past. Like Octavia Butler’s 1998 Parable of the Talents predicting President Trump’s Make America Great Again slogan, our biggest tech fears are being realized in real time.

No technological advancement makes this more evident than artificial intelligence. As AI has developed and grown more prominent, so have concerns about the purposes these tools will serve.

Recently, Trump signed an executive order launching the American Artificial Intelligence (AI) Initiative, and critics noted that it made no mention of AI’s social impact. In a 2018 report, AI Now wrote, “This year, we have seen AI amplify large-scale surveillance through techniques that analyze video, audio, images, and social media content across entire populations and identify and target individuals and groups.”

It may seem strange that lawmakers would fail to mention the many social issues that come with the development of AI. But the government shouldn’t be expected to regulate artificial intelligence when it actively seeks to use the technology for surveillance itself.

The question most commonly raised about AI is how people can maintain their privacy, and many assume that asking the federal government to protect privacy will be enough.

However, privacy generally refers to ownership. In a country where the role assigned to Blackness was never that of owner, but of property to be seen, observed, and disrupted, privacy is not the norm. Instead, Blackness has always existed under surveillance.

Surveillance has to be understood as rooted in a desire to constantly see and control. Fittingly, one of the earliest methods of surveillance in America was the “slave pass.” This early form of identification developed out of “Slave Codes,” laws that responded to the fear of rebellion by placing restrictions on enslaved people, including tracking their movements.

Enslaved people were forced to travel carrying a pass signed by their owners, and those caught without one were subject to punishments such as detainment or being labeled a runaway. Early on, the pass system relied on the assumptions that only white people could read and that all enslaved people were illiterate and unable to forge passes. However, as Christian Parenti noted in The Soft Cage: Surveillance in America, From Slavery to the War on Terror, that dichotomy wasn’t based in absolute truth.

In his book, Parenti writes, “Literate African Americans could resist with the very tools of white oppression; they could in effect bend the political technology of literacy back upon itself.”

And so, as Black people continuously resisted surveillance in whatever forms they encountered it, those in power were forced to devise new ways to maintain their hold. In the case of surveillance, that meant a turn to biometrics.

“The demise of the institution of slavery did not stop the advancement of biometric identification technologies largely premised on systems motivated by white supremacy,” Privacy SOS, an organization “shining sunlight on surveillance” tied to the ACLU’s Massachusetts branch, wrote, adding that although the biometrics today differ from their roots, “the primary function of biometric identification systems remains the same: control.”

Because biometrics developed within a particularly anti-Black history, facial recognition, as another biometric system, must be understood within that history. Although people like to imagine technology as neutral, it’s not.

Artificial intelligence can be taught bias later but, beyond that, a creator’s bias is programmed into the creation from the start. In a society where biometrics have anti-Black roots, artificial intelligence can’t be expected to escape that history, especially as government and law enforcement have already demonstrated a commitment to folding new technologies, such as social media, into surveillance.
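To make that concrete, here is a minimal, purely illustrative sketch, a toy scikit-learn model on synthetic data rather than any real facial recognition system. When one group is underrepresented in the data a model is trained on, the finished model can carry a far higher error rate for that group before anyone teaches it anything further:

```python
# Toy demonstration (synthetic data, not any vendor's system): a model
# trained mostly on one group performs far worse on the group that was
# underrepresented when the training data was assembled.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # `shift` stands in for systematic differences between groups that
    # the model would need enough examples to learn.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X[:, 0] + rng.normal(scale=1.0, size=n) > shift).astype(int)
    return X, y

# Group A dominates the training set; group B barely appears.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(250, shift=2.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

# Evaluate on equal-sized held-out samples from each group.
for name, shift in [("group A (well represented)", 0.0),
                    ("group B (underrepresented)", 2.0)]:
    X_test, y_test = make_group(2000, shift)
    print(f"{name}: error rate = {1 - model.score(X_test, y_test):.2f}")
```

The skew here is manufactured, but the mechanism is the one researchers point to in commercial systems: whoever assembles the training data decides whose faces, and whose lives, the model learns well.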

In February of 2018, documents gathered by the ACLU revealed social media monitoring by the Boston Police Department. Privacy SOS wrote, “the Boston Police Department’s Regional Intelligence Center used a social media surveillance system called Geofeedia to conduct online surveillance in 2014, 2015, and 2016.”

Through Geofeedia, the BPD specifically monitored hashtags like #BlackLivesMatter and #MuslimLivesMatter, words associated with political action, and the use of various basic Arabic words. In 2017, the FBI arrested Rakem Balogun over his Facebook posts. The Guardian reported that Balogun was targeted and prosecuted under a surveillance effort to track what the FBI labeled “Black identity extremists”.

Developers like Microsoft have warned that AI could expand widespread surveillance, but the expansion has already begun. In May 2018, the American Civil Liberties Union obtained documents revealing Amazon had sold its facial recognition software, Rekognition, to law enforcement agencies in Orlando, Florida, and Washington County, Oregon. Additional documents revealed Amazon was actively marketing Rekognition to the Department of Homeland Security’s Immigration and Customs Enforcement (ICE).

However, Amazon is not the only company that has aimed to provide the federal government with AI for surveillance purposes. Google worked with the US Department of Defense on Project Maven, an effort to deploy machine learning algorithms to analyze drone footage from war zones. Although Google employees forced the company to drop the contract, the implications are haunting.

Common critiques of AI’s place in surveillance currently focus on the fact that it cannot read Black people correctly. Amazon’s Rekognition, for example, has mistaken darker-skinned women for men. In addition, an ACLU study found the software falsely matched 28 members of Congress with mugshots, and the false matches were disproportionately people of color.

Amazon responded to these studies by claiming researchers were not using the technology properly. However, law enforcement agencies to which Amazon sold Rekognition have said they use the technology the same way the researchers did. Gizmodo reported that when Representative Jimmy Gomez asked whether Amazon performs audits to make sure clients are using Rekognition properly, the company said, “we have to get back to you.”
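Much of that dispute turns on a single client-chosen parameter. As a hedged sketch, not the ACLU’s exact methodology (the study searched lawmakers’ photos against a collection of 25,000 publicly available arrest photos), the boto3 CompareFaces call below shows where that parameter lives; the helper and image names here are hypothetical. Amazon has said the threshold the ACLU used was the service default of 80 percent, while recommending 99 percent for law enforcement:

```python
# Sketch of the parameter at the center of the dispute. Requires AWS
# credentials; `lawmaker_photo` and `mugshot` are placeholder image bytes.
import boto3

rekognition = boto3.client("rekognition")

def count_matches(source_bytes, target_bytes, threshold):
    """Count face matches that survive a given similarity threshold."""
    response = rekognition.compare_faces(
        SourceImage={"Bytes": source_bytes},
        TargetImage={"Bytes": target_bytes},
        SimilarityThreshold=threshold,  # the client picks this number
    )
    return len(response["FaceMatches"])

# The same pair of images can "match" at the default threshold yet be
# filtered out at the stricter one Amazon recommends; nothing in the
# API forces a police department to change the default:
# count_matches(lawmaker_photo, mugshot, threshold=80.0)
# count_matches(lawmaker_photo, mugshot, threshold=99.0)
```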

AI’s problems extend beyond Amazon’s Rekognition. Google’s own photo identifier once mistook Black people for gorillas. But what these failures highlight goes beyond making sure that facial recognition programs, and other AI designed or marketed for surveillance, can read Black people.

The consequences of AI’s process of recognition are severe: if a program is taught to conceptualize you only in order to cause harm to you, or to those who look like you, then being read correctly offers no solace. Instead, the problem poses broader questions about confronting surveillance at its core, and answering them will not happen by turning to the same government that has been invested in surveillance from the start.