AI Is Better at Recognizing Items in Higher-Income Households, Facebook Study Finds

The researchers also found that the computer vision systems performed better on images from Western countries

Facebook researchers tested five publicly available object recognition systems.
Photo Illustration: Trent Joaquin; Sources: Unsplash

Biases built into artificial intelligence may color even its ability to recognize simple household objects, according to a new study from Facebook researchers.

The report found that major computer vision systems tended to identify everyday items commonly found in lower-income communities and emerging-market countries with less accuracy than those from more economically advantaged areas. Researchers attributed the biases to the public image datasets on which much of the technology is trained, which are geographically concentrated, labeled primarily in English and reflect a narrow set of cultural sensibilities.

The results demonstrate yet another way in which the black-box algorithms that are coming to govern more and more of people’s daily lives tend to reflect the biases and blind spots of their disproportionately white, male creators.

“With the great success of [computer vision technology] also comes great responsibility: in particular, the responsibility to ensure that object-recognition systems work equally well for users around the world, irrespective of their cultural background or socio-economic status,” the researchers wrote in their report.

The study focused on five of the biggest object recognition systems available to much of the business world through cloud services: Microsoft Azure, Clarifai, Google Cloud Vision, Amazon Rekognition and IBM Watson. The systems were tested on images of 117 categories of household items, encompassing everything from toothbrushes to refrigerators, drawn from households across a wide range of countries and income levels.
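The article does not reproduce the researchers’ test harness, but querying one of these commercial services for image labels generally looks like the minimal sketch below, here using Amazon Rekognition’s detect_labels call via boto3. The image filename, the "Toothbrush" ground-truth category and the idea of checking it against the returned labels are illustrative assumptions, not details from the study.

```python
# Minimal sketch of querying a commercial label-detection API (Amazon Rekognition
# via boto3). The image file and ground-truth check are illustrative only; the
# study's actual evaluation pipeline is not described in this article.
import boto3


def top_labels(image_path, max_labels=10):
    """Send an image to Rekognition and return its predicted label names."""
    client = boto3.client("rekognition")
    with open(image_path, "rb") as f:
        image_bytes = f.read()
    response = client.detect_labels(
        Image={"Bytes": image_bytes},
        MaxLabels=max_labels,
    )
    return [label["Name"] for label in response["Labels"]]


# Hypothetical usage: does the ground-truth category appear among the labels
# returned for a photo of a household item?
labels = top_labels("household_item.jpg")
print("Toothbrush" in labels, labels)
```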

All five systems made an average of about 10% more errors when identifying objects from households in the bottom income bracket (less than $50 per month) than from households in the top bracket (more than $3,500 per month). Some individual differences were even starker; the systems identified objects from the United States roughly 15 to 20% more accurately than objects from Somalia or Burkina Faso, for instance.
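The comparison behind those figures amounts to averaging per-image correctness within each income bracket and country. A minimal sketch of that aggregation, assuming a hypothetical results table whose correct, income_bracket and country columns (and toy values) are invented for illustration:

```python
# Minimal sketch of the accuracy-by-group comparison described above, using a
# hypothetical results table; column names and data are illustrative only.
import pandas as pd

results = pd.DataFrame({
    "correct":        [1, 0, 1, 1, 0, 1, 1, 1],
    "income_bracket": ["<$50/mo", "<$50/mo", "<$50/mo", ">$3,500/mo",
                       ">$3,500/mo", ">$3,500/mo", ">$3,500/mo", "<$50/mo"],
    "country":        ["Somalia", "Burkina Faso", "Somalia", "United States",
                       "United States", "United States", "United States",
                       "Burkina Faso"],
})

# Mean accuracy per income bracket; the gap between brackets corresponds to the
# roughly 10% difference in error rates the study reports.
print(results.groupby("income_bracket")["correct"].mean())

# The same grouping per country mirrors the United States vs. Somalia or
# Burkina Faso comparison.
print(results.groupby("country")["correct"].mean())
```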

Accounting for biases like these will become increasingly important as tech giants—an overwhelming number of which are based in the U.S.—look to employ nascent computer vision technology in everything from self-driving cars to security cameras. In the advertising and marketing industries, companies are already exploring how object recognition can be used to better price product placement and brand sponsorships, make shopping easier and create and distribute content more efficiently.

The report comes as controversies over bias in facial recognition systems have roiled the AI community. Earlier this year, an MIT Media Lab study uncovered racial and gender disparities in the performance of that technology, bolstering an ACLU report last year in which Amazon’s system falsely matched members of Congress to police mugshots.