This article was originally published on 05/08/2019
The film Minority Report showed us a future where technology could be used to prevent crimes before they occur. The 2002 thriller starring Tom Cruise and Colin Farrell imagined Washington, D.C. in 2054 with a 0 percent murder rate due to the fictional PreCrime program. However, the system was not without its failings.
It is widely known that artificial intelligence is subject to the implicit biases of the programmers who give it life. Much like a child learns from a parent, machine learning rests on the idea that machines can learn context simply by combing through information. When the machine sees a word associated with a concept over and over, a connection is born. Anyone who has spent enough time on the internet knows that this is a dangerous game.
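To make that mechanism concrete, here is a minimal sketch using an invented four-sentence corpus. The program never sees intent or nuance, only which words appear together, so whatever skew exists in the text becomes a “learned” association.

```python
from collections import Counter
from itertools import combinations

# Invented toy corpus: the model only sees which words co-occur.
corpus = [
    "nurse she hospital caring",
    "engineer he code logic",
    "nurse she shift caring",
    "engineer he startup code",
]

pair_counts = Counter()
for sentence in corpus:
    words = sorted(set(sentence.split()))
    for a, b in combinations(words, 2):
        pair_counts[(a, b)] += 1

# Words that frequently appear together end up strongly associated,
# whether the association is meaningful or merely frequent in the data.
print(pair_counts[("nurse", "she")])    # 2
print(pair_counts[("engineer", "he")])  # 2
```

Real systems use far larger corpora and richer statistics, but the principle is the same: frequency in the data becomes association in the model.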
As evidenced by the Microsoft/TayTweets incident in 2016, a machine can be swayed by deviant minds from the internet’s darkest corners. The bot was created to engage with people on Twitter. Within one day, it went from being “stoked” to meet people to agreeing with Hitler and damning feminists to hell. The bot was shortly thereafter put out to pasture. You would think that Microsoft had learned its lesson, but the company introduced another bot, Zo, at the end of 2016.
Racist rants aside, machines inheriting biases from their creators present a tangible challenge when those biases affect people and their lives.
Corporations have begun to hand the reins of their recruitment over to AI. This can be problematic when those programs are left to sift through resumes and decide job worthiness based solely on one document. Biases can often end a person’s chances before they truly get one.
Just last year, Amazon ended an experiment it hoped would assist in its recruiting efforts. Since there were so many male applicants, programming being the male-dominated field it is, the program valued characteristics shared among those applicants. As a result, women were devalued, especially those whose resumes included the word “women’s.” Graduates of two unnamed women’s colleges were downgraded.
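To illustrate how that can happen, here is a hedged sketch, not Amazon’s actual system, of a toy resume screener trained on invented, historically male-skewed hiring outcomes. Because the word “women’s” is the only token that separates the accepted resumes from the rejected ones, the model learns to penalize it.

```python
# A minimal sketch of a biased resume screener. The data is invented
# for illustration; it is not Amazon's system or its real training data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain chess club python java",          # hired in the historical data
    "lead robotics team python sql",           # hired
    "captain women's chess club python java",  # rejected
    "lead women's robotics team python sql",   # rejected
]
hired = [1, 1, 0, 0]

vectorizer = CountVectorizer(token_pattern=r"[\w']+")
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The only token separating the two outcomes is "women's", so the model
# hangs its decision on that word: bias learned straight from history.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(weights["women's"])  # negative weight: the word lowers the score
```

Nothing in the code is malicious; the bias comes entirely from the historical outcomes the model was asked to imitate.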
Law enforcement’s recent interest in AI and machine learning is even more frightening. Systemic issues already exist within the justice system, and that injustice persists even when technology is brought into the fold. When models are used to calculate the probability that someone will commit a crime, and those models are built on data from a system in which Black people are already overrepresented, Black people are often flagged as a potential risk.
In 2016, ProPublica published a study of software used to weigh a defendant’s recidivism, the likelihood that an offender will re-offend, and found that Black defendants were routinely given higher risk scores than their white counterparts. This was true even when the Black defendant had no criminal history and the white defendant did. Risk assessments from Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) and similar programs are used across the country at multiple steps of the judicial process, from consideration in assigning bond to sentencing. The tech industry has mostly responded to issues with the technology itself, not the vast gray area for potential misuse of that technology.
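One way to see the kind of disparity ProPublica measured is to compare false positive rates, that is, how often people who did not go on to re-offend were nonetheless labeled high risk in each group. The sketch below uses invented numbers purely to show the calculation; it does not reproduce the study’s actual data or findings.

```python
# Invented example of a per-group false positive rate check.
def false_positive_rate(records):
    """records: list of (predicted_high_risk, actually_reoffended) pairs."""
    flagged_wrongly = sum(1 for pred, actual in records if pred and not actual)
    did_not_reoffend = sum(1 for _, actual in records if not actual)
    return flagged_wrongly / did_not_reoffend

group_a = [(True, False), (True, False), (False, False), (True, True), (False, True)]
group_b = [(False, False), (False, False), (True, False), (True, True), (False, True)]

# Both groups re-offend at the same rate, yet one is flagged far more often.
print(round(false_positive_rate(group_a), 2))  # 0.67
print(round(false_positive_rate(group_b), 2))  # 0.33
```

A model can look “accurate” overall while making its mistakes disproportionately against one group, which is exactly the pattern the study described.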
There is a chorus of voices saying that diversifying the programmers who work on AI and machine learning projects would alleviate the issue. However, this doesn’t address instances where programmers of color express concern about projects and are overruled. There have to be more people of color in leadership positions, and that ultimately comes down to providing access and support for students interested in STEM as well as positive environments for them to thrive once they enter the workplace.
Even so, machines learn bias from the data they are fed. There is nothing to be done about this other than the monumental task of bringing attention and change to systemic oppression at every level. No matter how minor the slight may seem, we must all show up and be our authentic selves to prevent racism from going digital.
When MIT’s Joy Buolamwini conducted a study that found gender-recognition software was 34 percent less likely to accurately classify Black women than white men, she brought awareness to the issue, and more data was fed to the software to improve its accuracy.
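The auditing idea behind that work is simple to sketch: instead of reporting one overall accuracy number, report accuracy for each demographic group. The toy results below are invented and stand in for whatever labels a real benchmark would use; they only show how an aggregate score can hide a gap.

```python
from collections import defaultdict

# Invented (group, was_the_prediction_correct) results from a classifier.
results = [
    ("lighter_male", True), ("lighter_male", True), ("lighter_male", True),
    ("darker_female", True), ("darker_female", False), ("darker_female", False),
]

totals, correct = defaultdict(int), defaultdict(int)
for group, ok in results:
    totals[group] += 1
    correct[group] += ok

overall = sum(correct.values()) / sum(totals.values())
print(f"overall accuracy: {overall:.2f}")          # 0.67 looks acceptable
for group, n in totals.items():
    print(group, round(correct[group] / n, 2))     # 1.0 vs 0.33: the gap the audit exposes
```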
Spoiler alert (for a 17-year-old movie, but still): Tom Cruise managed to get the PreCrime program shut down after it was found that its creators believed they were above the law. As we move closer to a more automated future, we should not resign ourselves to thinking machines will be able to discern complicated issues that we humans cannot. Machines will just take the existing dysfunction and carry it out with explicit precision.
This essay was done in partnership with Intuition, an Atlanta-based multicultural marketing firm specializing in brand development.