Many tech companies — including Microsoft, Google, and Amazon — have produced object recognition algorithms. This form of artificial intelligence is meant to do exactly what it says: recognize objects.

It sounds like something that can’t be messed up, but a recent study found that object recognition is worse at identifying items from lower-income households and countries.

The study was conducted by researchers — Terrance DeVries, Ishan Misra, Changhan Wang, and Laurens van der Maaten — from Facebook’s AI Lab. The team analyzed five popular object recognition algorithms: Microsoft Azure, Clarifai, Google Cloud Vision, Amazon’s Rekognition, and IBM Watson.
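
To give a rough sense of what these services do, here is a minimal sketch of asking one of them, Google Cloud Vision, to label the objects in a photo. It assumes the google-cloud-vision Python client is installed and credentials are configured; the file name is just a placeholder.

```python
from google.cloud import vision

# Assumes GOOGLE_APPLICATION_CREDENTIALS is set; "soap.jpg" is a placeholder image.
client = vision.ImageAnnotatorClient()

with open("soap.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Ask the service to label the objects it sees in the image.
response = client.label_detection(image=image)

for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```

The other services in the study expose comparable label-detection endpoints, which is what let the researchers feed the same household photos to each system and compare the labels that came back.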

The global dataset included 117 categories of common household items, like shoes and soap, and the researchers made sure it spanned a wide range of household incomes and geographic locations.

Researchers found that the difference in accuracy was striking. The object recognition algorithms had an error rate roughly 10 percent higher when asked to identify items from a household with a $50 monthly income than from households making more than $3,500 per month.

However, the absolute difference in accuracy across geographic locations was even greater. Overall, the algorithms were 15 to 20 percent better at identifying items from the United States than items from Somalia and Burkina Faso.
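
A gap like that can be made concrete by grouping the test images by income bracket (or country) and computing accuracy per group. The sketch below uses made-up records and invented bracket names purely for illustration; it is not the study’s data or protocol.

```python
from collections import defaultdict

# Hypothetical (group, prediction_was_correct) records, invented for illustration.
results = [
    ("income_under_$50/mo", True), ("income_under_$50/mo", False),
    ("income_under_$50/mo", False), ("income_over_$3500/mo", True),
    ("income_over_$3500/mo", True), ("income_over_$3500/mo", False),
]

tallies = defaultdict(lambda: [0, 0])  # group -> [correct, total]
for group, correct in results:
    tallies[group][0] += int(correct)
    tallies[group][1] += 1

# Report accuracy per group; the disparity is the difference between these numbers.
for group, (correct, total) in sorted(tallies.items()):
    print(f"{group}: {correct}/{total} correct ({correct / total:.0%})")
```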

This may not seem like a huge issue, but it speaks to larger problems within AI. As the researchers note, there are a couple of reasons why this stark difference in accuracy across geography and class exists.

First, most datasets use English as their “base language.” This means that when identifying household objects, a team will start by building from nouns available in English. That causes problems from the start, because images tagged online in a different language won’t be included in the dataset.

In addition, there may not be an English equivalent for certain household items or cultural events. Some languages may have more ways of identifying a particular object than English does, distinctions that English-based programs will miss. For example, the researchers pointed out that Inuit languages have over a dozen words for “snow.”

The researchers went on to add:

“Even if a word exists and means exactly the same thing in English and in some other language, the visual appearance of images associated with that word may be very different between English and the other language; for instance, an Indian ‘wedding’ looks very different than an American wedding and Indonesian ‘spices’ are very different from English spices.”

Along with language bias, AI tends to reflect its creators. This is often seen with facial recognition programs that perform far worse on anyone who isn’t a white man. That problem has been seen with Amazon’s Rekognition, which is one of the systems included in the study.

This bias can easily carry over into object recognition algorithms. For example, the furniture people own can look very different across income levels, even within the same country.

As AI continues to become more prominent in people’s daily lives, it’s absolutely necessary for these programs to be evaluated. However, that’s sometimes easier said than done.

“Auditing AI systems is not necessarily easy because there are no standard benchmarks for performing such audits,” van der Maaten told The Verge. “The most important step in combatting this sort of bias is being much more careful about setting up the collection process for the training data that is used to train the system.”

It’s important to understand that the issues found in this study are not unique to object recognition programs. AI of all kinds — from facial recognition to “emotion reading” tech that assigns negative emotions to Black men more often than to white men — is reflective of biases found across the world.

Technology is not created independent of society, after all. Left unchecked, it often ends up giving existing social biases and inequalities a digital upgrade.