Can Artificial Intelligence Frame You For Crimes You Didn’t Commit?
It sounds like something out of Orwell or an episode of Black Mirror, but in a world where technology is moving faster than humans ever imagined, and law enforcement agencies are experimenting with artificial intelligence, the possibility of being framed by machines is closer to reality than we think.
Chicago has been at the center of a broad public conversation around gang and gun violence in America. What has received considerably less media attention is that Chicago joined Cleveland, Ohio and Ferguson, Missouri as subjects of state and federal investigations into unconstitutional police practices in Black and Hispanic communities.
In November, Cook County cleared 15 men in its first-ever mass exoneration for victims of police harassment and abuse who had been convicted on falsified and coerced drug charges. Many of the exonerated men detailed how they were framed by police officers for years before a petition was filed.
A month prior, Nvidia, the Silicon Valley company that makes graphics processing units (GPUs) and other computer processing chips, published a research paper announcing that one of its AIs had successfully created images of “unprecedented quality” depicting people who do not exist.
The algorithm behind Nvidia’s AI, called a generative adversarial network (GAN), consists of two artificial neural networks designed to mimic the function of neurons in the human brain. The GAN pits the two networks against one another, one generating candidates and the other judging them, to produce a product: faces of people who do not exist.
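For readers curious what that adversarial tug-of-war looks like in practice, the toy sketch below trains a miniature GAN on one-dimensional data. It is a generic illustration of the GAN idea, not Nvidia’s actual model; all names and numbers here (the target distribution, learning rates, network sizes) are invented for the example.

```python
import numpy as np

# Toy GAN: "real" data is drawn from N(4, 0.5); the generator learns a
# linear map of random noise, while the discriminator learns to tell
# real samples from generated ones. Both are trained adversarially.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0   # generator:      g(z) = a*z + b
w, c = 0.1, 0.0   # discriminator:  D(x) = sigmoid(w*x + c)

lr, batch = 0.01, 64
for step in range(3000):
    # --- Discriminator update: push D(real) toward 1, D(fake) toward 0 ---
    real = rng.normal(4.0, 0.5, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    # gradient ascent on log D(real) + log(1 - D(fake))
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # --- Generator update: push D(fake) toward 1 (non-saturating loss) ---
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    g_signal = (1 - d_fake) * w   # derivative of log D(fake) w.r.t. fake
    a += lr * np.mean(g_signal * z)
    b += lr * np.mean(g_signal)

samples = a * rng.normal(0.0, 1.0, 1000) + b
print(f"generated mean={samples.mean():.2f}, std={samples.std():.2f}")
```

The same two-player dynamic, scaled up to deep convolutional networks and millions of images, is what lets Nvidia’s system produce photorealistic faces: the generator improves precisely because the discriminator keeps getting harder to fool.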
The company also announced that its AI had successfully created fake videos. The example highlighted was a video of a street the AI had observed only in summer, replicated with the same street completely covered in snow.
Nvidia says the technology can be used to teach self-driving cars to drive in different conditions, according to a statement from company representatives in The Verge. But Nvidia’s partnership with local law enforcement agencies through Nvidia Metropolis, a venture to bring AI to cities, suggests the technology can be used to do much more.
In October, COBAN Technologies announced that it was using Nvidia AI in its Focus H1 police dash cams. The technology can find vehicles linked to Amber Alerts, automatically identify car makes and models, read license plates and driver’s licenses for officers, and monitor the health and safety of people taken into custody.
In an interview with an NBC affiliate in Houston, COBAN’s VP of engineering said that the dash cam, which is currently in use in Los Angeles and being piloted in Houston, could within eight months learn to detect whether a suspect is holding a gun.
AI from Nvidia Metropolis is currently being used for mass surveillance around the world, including facial recognition. One can argue that machine learning can reduce the number of errors law enforcement makes in arresting citizens, but in a country with a long history of discrimination and mass incarceration in communities of color, this poses a threat.
Law enforcement’s use of a technology that learns on its own, with the potential to create alternate realities in video and fabricate people who do not exist, should be nothing short of alarming.
The technology pioneer behind Tesla, SpaceX and Hyperloop One, Elon Musk, issued a warning at the National Governors Association meeting this summer that humans should move to regulate AI before “it’s too late.” Let’s make sure the leaders of our country heard it loud and clear.