Spurred by Bias, Companies Are Trying to Break Open the Black Box of AI Algorithms

High-profile examples propel explainable machine learning

Activists at the American Civil Liberties Union recently used Amazon’s facial recognition software to compare photos of 200 Boston-area professional athletes to a mug shot database. The program determined that 27 of the players, roughly one in seven, matched images in its database of 25,000 mug shots.

It was, of course, wrong.

Amazon has claimed that such tests don’t follow the settings it recommends to law enforcement clients actually using the tool this way. But it notably can’t explain exactly why its system matched specific sports stars (or members of Congress, in a nearly identical ACLU demonstration last year) with the particular mug shots that it did. Nor can it explain why an MIT study earlier this year found that its facial analysis tool performed worse on darker-skinned and female faces, a finding Amazon has also disputed.

Like most of the deep-learning algorithms that govern a growing portion of people’s everyday lives, the software is what’s commonly called a black box: a neural network whose internal decision-making is opaque even to its own developers. Because a neural network spreads what it learns across a collective mass of nodes, it is often hard to pinpoint the role any single piece of the system plays in a given output.
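To make the problem concrete, here is a toy sketch (in no way Amazon’s system) using scikit-learn: a small neural network learns its task well, yet its learned parameters are just matrices of numbers in which no single weight has a human-readable role.

```python
# A toy illustration of why trained neural networks are opaque: the model
# performs the task, but its "knowledge" is spread across thousands of
# weights, none of which individually explains a prediction.
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
clf.fit(X, y)

print(f"accuracy on the data it was trained on: {clf.score(X, y):.2f}")

# Inspecting the learned parameters yields only shapes and raw numbers;
# nothing here says which inputs drove a given decision, or why.
for i, w in enumerate(clf.coefs_):
    print(f"layer {i} weight matrix shape: {w.shape}")
```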

Some researchers say these systems need to be this way in order to perform complex tasks well. But high-profile examples like Amazon’s highlight the potential pitfalls. With a growing movement pushing to broaden the study of AI to include more of its social impact, researchers and companies are looking for ways to introduce some level of explainability.

“Most companies adopting AI solutions are keenly aware of the potential negative effects and want to do what they can to prevent algorithmic bias from affecting their consumers,” said Rumman Chowdhury, managing director of Accenture AI. “It is a smarter move to be ethical by design rather than try to retrofit a system for oversight, bias mitigation and more after developing it.”

While research efforts to make more complex neural networks fully explainable are still in their infancy, companies are creating tools and frameworks that offer at least some degree of insight.

Consultancies like Accenture have hired staff and developed software and guides to help clients make their AI implementations more transparent and accountable. PwC rolled out its responsible AI toolkit in July, IBM open-sourced a set of explainability tools in August and Microsoft released its open-source explainability software, InterpretML, in May.
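For a sense of what these toolkits look like in practice, here is a minimal sketch using InterpretML’s “glassbox” models; the loan-approval scenario, file name and column names are hypothetical, invented for illustration.

```python
# A minimal sketch of a "glassbox" model in Microsoft's open-source
# InterpretML package (pip install interpret). The loan-approval data,
# file name and column names below are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

df = pd.read_csv("applicants.csv")    # hypothetical dataset
X = df.drop(columns=["approved"])     # input features
y = df["approved"]                    # binary outcome
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An Explainable Boosting Machine learns an additive model, so each
# feature's contribution to a prediction can be read off directly.
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# Global view: which features drive the model overall.
show(ebm.explain_global())

# Local view: why the model scored these specific applicants as it did.
show(ebm.explain_local(X_test[:5], y_test[:5]))
```

Glassbox models like this trade some flexibility for a structure that can be inspected directly rather than explained after the fact, closer to the “ethical by design” approach Chowdhury describes.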

“If you think about the opportunity in business to be able to leverage AI and then you think about a black box, the two are only going to go hand in hand if that black box is understandable,” said Beth Smith, IBM Watson data and AI general manager, “if it can be explained, if it’s transparent, if it’s consistent in its answer, if it has accountability in the decisions it makes.”

Chowdhury and others in the AI social research community stress that technical explainability is only one component in a broader push toward fairer and more transparent algorithms.

“Forcing the team that’s building the model to think hard about transparency will actually help a lot because then you’re not just thinking about, ‘Hey, is my prediction accurate?’ But you’re also thinking about [why it’s working],” said Kartik Hosanagar, a marketing professor at the Wharton School and author of a book about algorithms titled A Human’s Guide to Machine Intelligence.

More explainability is also seen as a sensible business precaution as lawmakers look to regulate algorithm development and AI more extensively. It’s often a must-have for AI used in government agencies or legal contexts, where stricter internal processes don’t allow decision makers to simply defer to a black box.

But not everyone believes that explainability is the best route to more accountable algorithms.

This story first appeared in the Nov. 18, 2019, issue of Adweek magazine.
