Facebook has some explaining to do…again.
A video posted by Daily Mail on the platform shows Black men being harassed by a white onlooker who eventually calls the police on the Black men for allegedly trespassing. The situation worsened after several Facebook users reportedly received an automated message asking if they would like to “keep seeing videos about Primates,” reports the New York Times. In response to the Artificial Intelligence (AI) blunder, Facebook has issued an apology.
A former Facebook employee, Darci Groves, sounded the alarm through a Facebook feedback forum after a friend notified her of the prompt. She also took to Twitter to expose the AI failure on Sept. 2.
“This ‘keep seeing’ prompt is unacceptable, @Facebook,” Groves wrote. “And despite the video being more than a year old, a friend got this prompt yesterday. Friends at FB, please escalate. This is egregious.”
In response to the forum post, a Facebook product manager for its video service deemed the incident “unacceptable” and vowed to investigate the “root cause.”
Facebook disabled the automated recommendation prompt as soon as the AI failure was brought to its attention.
“As we have said, while we have made improvements to our A.I., we know it’s not perfect, and we have more progress to make,” Facebook spokesperson Dani Lever shared in a statement. “We apologize to anyone who may have seen these offensive recommendations.”
Unfortunately, this is not the first time Facebook has come under fire for bias in its software. Previous boycotts organized by groups such as the Anti-Defamation League, the NAACP, and Color of Change prompted Disney, Starbucks, and other multinational companies to suspend their advertising campaigns on the platform. The pressure led Facebook to create a team to investigate claims of bias.
Facebook’s recent AI blunder is a reminder that racism and implicit bias can spill into the workplace and its products. Companies have a moral and ethical responsibility to their users to ensure their technologies do not worsen systemic inequalities. Technology is the future, and companies such as Facebook must take responsibility for preventing discrimination, including by proactively hiring diverse candidates in tech. More diverse teams can improve AI and facial recognition systems by surfacing blind spots in algorithms and broadening the training data to better represent everyone.