When it comes to artificial intelligence, developers make a lot of bold claims about what their programs can do. AI has quickly become a part of our everyday lives, especially with the rise of the ever-controversial facial recognition technology. There is even software advertised to read your emotions or predict criminality. What these claims show isn’t the endless possibility of AI, but how the technology is used to legitimize pseudosciences, or beliefs that claim to be grounded in science.
Generally, the idea that AI can read your emotions is referred to as “affect recognition.” It builds on the pseudoscience of phrenology, an offshoot of physiognomy, the idea that you can judge someone’s character based on their appearance. As Dr. Richard Firth-Godbehere has noted, physiognomy helped provide scientific justification for many prejudices. For example, U.S. physician James W. Redfield’s 1852 book, Comparative Physiognomy, compares various groups of people to animals, including comparisons “of Negroes to Elephants,” “of Jews to goats,” and more.
The idea that AI can legitimately read emotions has become so entrenched that venture capitalists are funding companies built on these claims. Last year, Realeyes, a company that “reads” people’s emotional responses as they watch videos, raised $16.2 million. TechCrunch reported that Realeyes, whose technology is used for marketing, has added customers like Coca-Cola and Mars to its list. However, in a 2018 report, the NYU research institute AI Now responded to claims of reading emotion, mental health, and more with the following:
“These claims are not backed by robust scientific evidence, and are being applied in unethical and irresponsible ways that often recall the pseudosciences of phrenology and physiognomy. Linking affect recognition to hiring, access to insurance, education, and policing creates deeply concerning risks, at both an individual and societal level.”
Physiognomy was also used to develop race science, whose traces can be seen in AI today. For example, the idea that Black people resemble apes, in both appearance and behavior, draws from physiognomy. The knee-jerk reaction is to insist that these discredited sciences couldn’t make their way into AI, but is it a coincidence that Google Photos once classified Black people as “gorillas”?
Meanwhile, companies like Faception claim to offer “facial personality analytics,” including the ability to supposedly determine whether someone is a terrorist. The company claims that, based on appearance alone, it can determine whether someone is “psychologically unbalanced,” depressed, or anxious. Despite Faception’s claims that its technology is objective because it uses machine learning, every mention of mental illness appears alongside categories of criminal offenders such as white-collar offender, terrorist, and pedophile.
“From Faception claiming they can ‘detect’ if someone is a terrorist from their face to HireVue mass-recording job applicants to predict if they will be a good employee based on their facial ‘micro-expressions,’ the ability to use machine vision and massive data analysis to find correlations is leading to some very suspect claims,” AI Now co-founder Kate Crawford told The Intercept.
Perhaps the biggest issue is untangling the faith that many people put in digital technologies. The myth of AI’s objectivity has been perpetuated again and again. However, AI systems are created by people, which means they can, and will, reflect the biases of their creators, whether conscious or not. For example, Amazon’s AI recruiting tool learned a bias against women. In some ways, the AI taught itself this, as nobody explicitly called for it. But remember that machine learning relies on data: if you feed an AI garbage data, it will give you garbage back, a principle known as “garbage in, garbage out.”
“What you would do is you go back and look at historical data from the past and look at successful candidates and feed the algorithm with that data and try to find patterns or similarities,” Oxford University researcher Dr. Sandra Wachter told Business Insider. “You ask the question who has been the most successful candidates in the past…and the common trait will be somebody that is more likely to be a man and white.”
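To make that mechanism concrete, here is a minimal sketch of the dynamic Dr. Wachter describes. This is not Amazon’s actual system; the feature names, numbers, and data are invented for illustration. A toy classifier is trained on hypothetical historical hiring records in which men were hired far more often regardless of skill, and it duly learns that gender, not skill, is what predicts “success”:

```python
# A minimal sketch of "garbage in, garbage out" in hiring models.
# Hypothetical data and features -- not any company's real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Invented features: a gender flag and a skill score unrelated to gender.
is_male = rng.integers(0, 2, n)   # 0 or 1
skill = rng.normal(0, 1, n)

# Biased historical labels: past "successful" hires were overwhelmingly men,
# largely regardless of skill. This is the garbage going in.
hired = (0.8 * is_male + 0.2 * (skill > 0)) > rng.random(n)

# Train exactly as described above: feed the algorithm past outcomes
# and let it find patterns.
X = np.column_stack([is_male, skill])
model = LogisticRegression().fit(X, hired)

# The learned weight on the gender flag dwarfs the weight on skill:
# the model has "discovered" that being a man predicts success.
print(dict(zip(["is_male", "skill"], model.coef_[0].round(2))))
```

In the real case, Reuters reported that Amazon’s tool went further, penalizing resumes containing the word “women’s,” a proxy it inferred from the same kind of skewed historical data.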
Although we would all like to think we won’t be tricked by pseudoscience, the reality is that these sciences were fostered by people their peers regarded as intelligent and educated. As affect recognition shows, pseudosciences will adapt themselves to new technologies if given the chance.