People are obsessed with finding ways to detect when someone is lying. This pursuit of analyzing people's bodies and expressions has led agencies like the TSA to spend billions on an unscientific "behavioral detection" program. Now Facesoft, a United Kingdom-based startup, is in talks with police in Britain and India about an AI system that claims to read the hidden emotions of suspects.

Facesoft, which describes itself as an industry leader in face analysis technologies, has a "database of 300 million images of faces, some of which have been created by an AI system modeled on the human brain," Bloomberg reported.

The program’s purpose is to “identify emotions like anger, fear and surprise based on micro-expressions which are often invisible to the casual observer.”

“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” the company’s co-founder and Chief Executive Officer, Allan Ponniah, explained to The New York Times.

To be clear, this technology is just artificial intelligence taking another stab at legitimizing pseudoscience. The idea of AI that can supposedly read emotions, known as affect recognition, isn't new. In a 2018 report, the AI Now Institute addressed affect recognition, writing:

“Affect recognition is a subclass of facial recognition that claims to detect things such as personality, inner feelings, mental health, and ‘worker engagement’ based on images or video of faces. These claims are not backed by robust scientific evidence, and are being applied in unethical and irresponsible ways that often recall the pseudosciences of phrenology and physiognomy.”

There are immediate concerns with any AI program that claims to read emotion or detect whether someone is lying. Remember that AI systems and the datasets they are trained on don't magically appear: they are designed and assembled by people, so they can absolutely absorb the biases of their designers and their data.

Previous studies of affect recognition specifically have found that these programs struggle to interpret the emotions of Black faces. Black faces are consistently scored as angrier than white faces showing the same smile, and whenever an expression is at all ambiguous, Black faces are scored angrier still. In other words, even if someone offers only a half-smile, the programs treat a Black person as essentially angry by default.

This isn't the first time AI has been used to try to legitimize bad science. Facial recognition programs that claim to read race and gender are prime examples, even though their claims have become more normalized.

It's impossible to read someone's gender simply by glancing at their face, and that assumption opens up new ways for transphobia to manifest itself through technology. In addition, race is not biological. It's a technology of oppression in its own right, as much a human invention as artificial intelligence. Any claim to read race means a computer program is being told to give weight to the false notion that race is a biological, measurable trait.

Although it may be tempting to laugh at such an obvious display of bad science, remember that many pseudosciences, such as race science, were treated as valid at one point in time. Facesoft's program is junk science, but its existence is still a cause for concern.