San Francisco Banned Facial Recognition Technology Because It’s Too Dangerous, But Black And Brown Communities Across The Country Are Still At Risk
On Tuesday, San Francisco officially made history as the first city in the United States to ban government use of facial recognition technology. In an 8-to-1 vote, the city’s Board of Supervisors passed the Stop Secret Surveillance Ordinance. The new law bars all city departments from using facial recognition technology and requires board approval before any new surveillance devices can be purchased.
The Stop Secret Surveillance Ordinance expressed concerns about facial recognition’s potential to exacerbate pre-existing social issues, such as anti-Blackness and the over-policing of vulnerable communities. The proposal itself noted that the “propensity for facial recognition technology to endanger civil rights and civil liberties substantially outweighs its purported benefits”, going on to specifically cite concerns about continuous government monitoring.
The coalition supporting the ordinance — made up of civil rights, racial justice, LGBTQ rights, homeless, and immigrants’ rights advocates — said in a statement on the ACLU Northern California’s website, “By passing this law, the city gave the community a seat at the table and acted decisively to protect its people from the growing danger of face recognition, a highly invasive technology that would have radically and massively expanded the government’s power to track and control people going about their daily lives.”
There are a multitude of issues with facial recognition. Artificial intelligence, like technology in general, is not autonomous; it does not decide on its own to uphold social biases. However, not only can programs teach themselves bias, as seen with Amazon’s recruitment tool that learned to prioritize male candidates over female ones, but they can also be deployed in efforts to regulate or even redefine Otherness. In addition, the tech industry has been critiqued repeatedly for its lack of diversity. When technology is made and applied almost entirely by white men, it is bound to prioritize their experiences, often through subconscious biases.
San Francisco may have stopped the government from applying facial recognition in its city for now, but the technology is already in use by both federal and local government agencies across the country. Amazon’s Rekognition is perhaps the most infamous example: the ACLU discovered the program was sold to at least two law enforcement agencies and peddled to the Department of Homeland Security’s Immigration and Customs Enforcement. Multiple studies revealed Rekognition’s potential to harm communities. In July 2018, the ACLU found that Rekognition falsely matched 28 members of Congress to mugshots, six of them members of the Congressional Black Caucus. Then, a January 2019 study revealed the program has higher error rates when trying to recognize darker-skinned women.
Alongside facial recognition’s trouble reading the face of anyone who isn’t a white man, there are deeper concerns about pseudosciences embedded within the technology itself. Many facial recognition programs claim to be able to read things like people’s gender, emotion, or race. First, it is impossible to read someone’s gender by looking at their face. That assumption reveals how technology is being built and normalized around a gender binary that harms trans, non-binary, and gender non-conforming people.
In addition, the idea that someone’s emotions can be read by a program (also known as “affect recognition”) is eerily reminiscent of phrenology and physiognomy, as AI Now said in its 2018 report. The organization wrote, “These claims are not backed by robust scientific evidence and are being applied in unethical and irresponsible ways…Linking affect recognition to hiring, access to insurance, education, and policing creates deeply concerning risks, at both an individual and societal level.”
The claim that programs can recognize race becomes particularly contentious and problematic when you remember that race isn’t fixed. As Camilla Hawthorne, Assistant Professor of Sociology at the University of California, Santa Cruz, noted — citing Laura Forlano and Kat Jungnickel’s 2015 essay — race itself can be thought of as a sociotechnical system, an “arrangement of humans, technologies, spaces, and policy regimes” that encompasses “the biological, the sociological, the political, and the technical.” This means that race is not stable, and new technologies can shape the way race is constructed and ultimately understood. In some ways, race itself can be thought of as a technology of oppression. The assumption that facial recognition can “read” somebody’s race lends further weight to the false notion that race is a biological, measurable trait.
The ordinance itself echoes these concerns about facial recognition and race, stating, “surveillance efforts have historically been used to intimidate and oppress certain communities and groups more than others, including those that are defined by a common race, ethnicity, religion, national origin, income level, sexual orientation, or political perspective.” Early methods of surveillance, especially those using biometrics, can be traced back to “enslaved people in transatlantic slavery, slave passes, and runaway notices,” as Simone Browne wrote in Dark Matters: On the Surveillance of Blackness.
Although the New York Times reported that San Francisco’s police department doesn’t currently use facial recognition, the ordinance can be read as an overdue preventative measure. The surveillance of Black and indigenous populations forged the foundation of the United States. Facial recognition is not new in its potential to harm communities through surveillance; it simply exposes how older methods of oppression can be digitized.
San Francisco’s ban may be historic, but it has its faults. The ban only covers government use of facial recognition technology, which means private corporations and businesses can still utilize it. In a Medium article, author David Golumbia posed a question the ban leaves open: “Even if the local police or the FBI is prevented from actively deploying facial recognition by its own employees, would it also be prohibited from purchasing or subcontracting those services to a private company that uses them completely legally?”
Issues around privatized surveillance long predate this ban. Data mining giant Palantir’s software played a key role in ICE deportations. In addition, a 2018 report revealed that the company provided the Los Angeles Police Department with software to develop its predictive policing program. The coalition behind the report described the LAPD’s program as a “racist feedback loop” in which a “disproportionate amount of police resources are allocated to historically hyper-policed communities.” It’s worth noting that while Palantir has offices across the country, its headquarters is located in Palo Alto, California.
In March 2019, a Slate report revealed that the government trains its facial recognition programs using images of vulnerable people without their consent — including immigrants, abused children, and dead people. This opens up a whole new set of questions around consent and facial recognition. Even if the private owners of a facial recognition system give you the option to “opt out”, was your image already used to train it without your permission? This question is becoming increasingly relevant, as a recent report revealed that Ever — a photo sharing app — used millions of people’s photos to train its own facial recognition tools without their consent. Ever AI advertises that its tools can “improve outcomes for surveillance” and sells the technology to law enforcement, private companies, and the military.
When it comes to facial recognition technology, there are a number of academics who push back against the notion that it can be “fixed”. For example, teaching facial recognition to better recognize Black people won’t solve the underlying issues of surveillance. A recent Vox article makes the argument that facial recognition “simply shouldn’t exist”, a position backed by Golumbia.
Many see San Francisco’s ban as a step in the right direction, but facial recognition has to be thought of as a hydra rather than a regular dragon: it has multiple heads and multiple ways of asserting itself to cause harm and destruction. For members of communities that have historically been surveilled in this country, the fight is still far from over.