Spurred by Bias, Companies Are Trying to Break Open the Black Box of AI Algorithms

High-profile examples propel explainable machine learning

Activists at the American Civil Liberties Union recently used Amazon’s facial recognition software to compare photos of 200 Boston-area professional athletes to a mug shot database. The program matched 27 of the players, roughly one in seven, to faces among the 25,000 criminals in its system.

It was, of course, wrong.

Amazon has countered that such tests don’t follow the program settings it recommends to law enforcement clients who actually use the tool this way. Notably, though, the company can’t explain exactly why its system matched specific athletes, or members of Congress in a similar ACLU demonstration last year, with the particular mug shots it did.





This story first appeared in the Nov. 18, 2019, issue of Adweek magazine.