Police May Start Using AI To Tell If Someone Is Lying. Here's Why That's Concerning
Photo Credit: A Transportation Security Administration official works at the automated screening lanes, funded by American Airlines, at Miami International Airport on October 24, 2017 in Miami, Florida. (Photo by Joe Raedle/Getty Images)

People are obsessed with finding ways to detect when someone is lying. This pursuit to analyze people's bodies and expressions has led agencies like the TSA to spend billions on an unscientific "behavioral detection" program. Now, the United Kingdom-based startup Facesoft is in talks with police in Britain and India about an AI system that claims to read the hidden emotions of suspects.

Facesoft — which describes itself as an industry leader in face analysis technology — used a "database of 300 million images of faces, some of which have been created by an AI system modeled on the human brain," Bloomberg reported.

The program’s purpose is to “identify emotions like anger, fear and surprise based on micro-expressions which are often invisible to the casual observer.”

“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” the company’s co-founder and Chief Executive Officer, Allan Ponniah, explained to The New York Times.

To be clear, this technology is just artificial intelligence taking another stab at legitimizing pseudoscience. The idea of AI that can supposedly read emotions — known as affect recognition — isn't new. In a 2018 report, the AI Now Institute addressed affect recognition, writing:

“Affect recognition is a subclass of facial recognition that claims to detect things such as personality, inner feelings, mental health, and ‘worker engagement’ based on images or video of faces. These claims are not backed by robust scientific evidence, and are being applied in unethical and irresponsible ways that often recall the pseudosciences of phrenology and physiognomy.”

There are immediate concerns with any AI program that claims to read emotions or detect whether someone is lying. AI systems and their datasets don't magically appear: they are designed by people, and they can absolutely pick up the biases of their designers and of the data they are trained on.

Previous studies of affect recognition specifically have found that programs struggle to interpret the emotions of Black faces. In those studies, programs consistently scored Black faces as angrier than white faces showing the same smile, and whenever an expression was ambiguous, Black faces were rated angrier. In effect, even if someone has a half-smile, these programs assume a Black person is angry by default.

This isn’t the first time AI has been caught trying to legitimize bad science. Facial recognition programs that claim to read race and gender are prime examples of that, although their claims are more normalized.

It's impossible to determine someone's gender simply by glancing at their face. That assumption opens up new ways for transphobia to manifest itself through technology. In addition, race is not biological. It is a social construct and a technology of oppression in its own right, as much a human creation as artificial intelligence. Any claim to read race means a computer program is being told to give weight to the false notion that race is a biological, measurable trait.

Although it may be tempting to laugh at such an obvious display of bad science, remember that many pseudosciences — such as race science — were treated as valid at one point in time. Facesoft's program is junk science, but its existence, and police interest in it, are real cause for concern.