This article was originally published on 04/17/2019
As artificial intelligence gains popularity, members of the public have started to notice how it seems to perpetuate discrimination. With incidents like Google Photos misclassifying Black people as gorillas and Amazon’s recruiting tool teaching itself that men were the preferred candidates, AI’s problems are clear.
In a report released this year, researchers from AI Now said the industry is facing a “diversity crisis.” They also raised important questions about current efforts to improve diversity and about AI’s entanglement with old pseudosciences.
According to the report, more than 80 percent of professionals in AI are men. At Facebook and Google, women make up only 15 percent and 10 percent of AI research staff, respectively. The numbers are even worse for Black workers: only 2.5 percent of Google’s workforce is Black, while Facebook and Microsoft each sit at 4 percent.
Researchers caution that efforts to “focus on women in tech” are too narrow and likely to benefit only white women. They noted the need to acknowledge how intersections of race, gender, and other identities shape people’s experiences with AI.
“The vast majority of AI studies assume gender is binary, and commonly assign people as ‘male’ or ‘female’ based on physical appearance and stereotypical assumptions, erasing all other forms of gender identity,” the researchers added.
That is further underscored by the fact that there is no public data on trans workers or other gender minorities. The gap matters because AI has the potential to exacerbate transphobia, as seen recently when Googlers Against Transphobia pushed back against a now-canceled AI ethics board that included the president of the Heritage Foundation.
The researchers also take up the use of AI systems to classify, detect, and predict race and gender, calling for urgent re-evaluation. They wrote:
“The histories of ‘race science’ are a grim reminder that race and gender classification based on appearance is scientifically flawed and easily abused. Systems that use physical appearance as a proxy for character or interior states are deeply suspect, including AI tools that claim to detect sexuality from headshots, predict ‘criminality’ based on facial features, or assess worker competence via ‘micro-expressions.’ Such systems are replicating patterns of racial and gender bias in ways that can deepen and justify historical inequality. The commercial deployment of these tools is cause for deep concern.”
This is an important point. More AI tools that claim to detect a person’s race are emerging, as seen in a recent patent from Axon, the top maker of police body cameras.
For the researchers, solutions to AI’s discriminatory systems don’t lie in simply “fixing the pipeline,” because that doesn’t address deeper cultural issues within the workplace.
Instead, researchers are calling for companies to improve transparency — for example, by publishing compensation data broken down by race and gender, along with harassment and discrimination reports.
“The diversity crisis in AI is well-documented and wide-reaching,” the researchers conclude. “It can be seen in unequal workplaces throughout industry and in academia, in the disparities in hiring and promotion, in the AI technologies that reflect and amplify biased stereotypes, and in the resurfacing of biological determinism in automated systems.”