ChatGPT, an artificial intelligence (AI) chatbot, has become a trending topic since it hit the tech scene. 

However, an investigative report has been released detailing the alleged exploitation of the workers hired to improve it.

According to TIME, OpenAI — the creator of ChatGPT — is said to have outsourced the work to Kenyan laborers earning less than $2 per hour to make the chatbot less toxic.

The workers were employed by Sama, a San Francisco-based firm that hires people in Kenya, Uganda, and India. The report details that OpenAI's goal was to feed an AI "with labeled examples of violence, hate speech, and sexual abuse" so "that tool could learn to detect those forms of toxicity in the wild."

To obtain those labels, OpenAI sent tens of thousands of snippets of text pulled from the dark web to Sama in November 2021.

Per the report, the arrangement consisted of three contracts worth about $200,000 in total, with an agreed rate of $12.50 per hour, well above the sub-$2 hourly wages the Kenyan workers say they actually took home.

“Despite the foundational role played by these data enrichment professionals, a growing body of research reveals the precarious working conditions these workers face,” said Partnership on AI, a coalition of AI organizations that OpenAI belongs to, per the outlet. “This may be the result of efforts to hide AI’s dependence on this large labor force when celebrating the efficiency gains of technology. Out of sight is also out of mind.”

In addition to allegedly being underpaid, the Kenyan workers say they were “mentally scarred” by the text they had to read through.

“That was torture,” one anonymous Sama worker said. “You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.”

Due to the allegedly traumatic nature of the work, Sama canceled all of its work for OpenAI early, in February 2022. In turn, the two parties agreed that the original $200,000 wouldn't be paid in full. OpenAI claims the contracts were worth “about $150,000 over the course of the partnership.”

Regarding the effect of the labeling work on Sama workers’ mental health, OpenAI shared a statement.

“…we take the mental health of our employees and those of our contractors very seriously,” an OpenAI spokesperson stated. “Our previous understanding was that [at Sama] wellness programs and 1:1 counseling were offered, workers could opt out of any work without penalization, exposure to explicit content would have a limit, and sensitive information would be handled by workers who were specifically trained to do so.”