How to Eliminate Bias in Data-Driven Marketing

A common misstep is not being upfront about limitations in methodology


Problems with data bias are well-documented, from an image recognition algorithm that identified black users as gorillas to language translation services that referred to engineers as male and nurses as female.

And just as bias found its way into these data sets, so, too, can it sometimes be found in the models marketers use to make predictions about their customers.

Alex Andrade-Walz, head of marketing at location intelligence company Spatially, said segmentation via predictive analytics results in a stereotype of an ideal customer that may have higher conversion and retention rates and greater lifetime value. But, by focusing on these regular customers, brands risk not appealing to other market segments.

“For example, a recruiting firm that specializes in tech placements likely works with more men than women,” Andrade-Walz said. “If gender is allowed into the predictive algorithm, it may lead this firm to only advertise to male candidates given a higher historic placement rate.”
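As a rough illustration of that mechanism (the data, feature names and Python/scikit-learn tooling below are hypothetical, not anything Andrade-Walz described), a model trained on male-skewed placement history will score two otherwise identical candidates differently once gender is admitted as a feature:

```python
# Hypothetical sketch: historical placements skew male, and 'is_male' is
# allowed into the model as a feature.
import pandas as pd
from sklearn.linear_model import LogisticRegression

history = pd.DataFrame({
    "is_male":          [1, 1, 1, 1, 1, 0, 0, 0],
    "years_experience": [4, 6, 3, 5, 7, 6, 4, 5],
    "placed":           [1, 1, 0, 1, 1, 1, 0, 0],  # men placed more often historically
})

model = LogisticRegression().fit(
    history[["is_male", "years_experience"]], history["placed"]
)

# Two candidates with identical experience; only gender differs, yet the
# predicted placement probability -- and therefore who sees the ad -- differs.
candidates = pd.DataFrame({"is_male": [1, 0], "years_experience": [5, 5]})
print(model.predict_proba(candidates)[:, 1])
```

Dropping the column is only a partial fix, since other features can act as proxies for gender, but the sketch shows how a historic placement rate quietly becomes a targeting rule.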


Divya Menon, founder at branding and marketing consultancy Bad Brain, noted that when data scientists build forecasts on data that reflects systemic discrimination against genders, races or religions, those forecasts amplify the underlying issues.

Let’s say a real estate company decides to predict housing values based on the predominant race of the neighborhood. This correlation might exist, said Pete Meyers, marketing scientist at Moz, a software company, but it ignores gentrification, socioeconomic factors and the inherent racism built into the ways many neighborhoods were developed in the first place.

“By choosing this data point and ignoring others, you would start to make race-based predictions that further disadvantage people,” Meyers said. “You’ve built your own bias into the system.”

And, Meyers said, the problem can be even more subtle with machine learning, where systems can be biased toward the specific knowledge of the person who built them.

Ikechi Okoronkwo, director of marketing sciences at media and marketing services company Mindshare, said bias comes from data scientists inadvertently allowing their own assumptions to influence the output—or an analytical approach that is one-dimensional.

For instance, a data scientist might conclude a brand should target an older demographic because age appears to correlate with purchase probability, Andrade-Walz said. “However, age may just be a red herring: Income could be the factor that’s tied to conversion and it just so happens that older people typically have more disposable income,” he added.
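A quick way to expose that kind of red herring is to control for the suspected true driver. The simulation below (made-up data, not anything Andrade-Walz provided) generates conversions driven only by income, yet an age-only model makes age look predictive until income enters the equation:

```python
# Simulated illustration of the age/income red herring described above.
# Conversion is driven ONLY by income, but income rises with age, so an
# age-only model makes age look like the cause.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
age = rng.uniform(20, 70, n)
income = 20 + 1.2 * age + rng.normal(0, 10, n)       # income ($k) grows with age
p_convert = 1 / (1 + np.exp(-(income - 70) / 15))    # depends on income only
converted = rng.binomial(1, p_convert)

# Age alone looks predictive because it proxies for income...
print(sm.Logit(converted, sm.add_constant(age)).fit(disp=0).params)

# ...but once income is in the model, the age coefficient collapses toward zero.
both = sm.add_constant(np.column_stack([age, income]))
print(sm.Logit(converted, both).fit(disp=0).params)
```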

Nevertheless, it is possible to eliminate bias from data models.

“Those of us who work within strategy, especially at a level where we are working with statistical analysis, have a real opportunity to change the way brands see consumers by constantly challenging biases and being cognizant of the world around us and the context within which we are seeing data,” Menon said.

This starts with data scientists being aware that biases can be mirrored in predictive models and building mechanisms to safeguard against them, said David Zwickerhill, director of analytics at advertising agency GSD&M. Companies must also be willing to make changes even if some models have worked in the past, he added.

In addition, data scientists need to be transparent about assumptions so stakeholders can provide feedback. For instance, “if you believe that TV spend is going to perform more strongly based on previous campaigns, explain that hypothesis upfront to your clients and teams so that the other stakeholders can chime in with other viewpoints to consider,” Okoronkwo said.

Okoronkwo added another common misstep is not being upfront about other limitations in methodology, pointing to how predictive models have some degree of error, inherent bias and assumptions that fill gaps in data. Transparency, on the other hand, enables data scientists to work with clients to identify new data sources or to challenge assumptions. “Properly managed predictive analytics uses a system where the model can take on new inputs and learn as time passes,” he added.
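One common way to build that kind of model is incremental learning, where the model is refit on each new batch of results rather than freezing its original assumptions. The sketch below uses scikit-learn's partial_fit API; the feature set, batching cadence and data are assumptions for illustration, not Okoronkwo's specification:

```python
# Hypothetical sketch of a model that "can take on new inputs and learn as
# time passes": partial_fit updates the model with each new batch of
# campaign results instead of keeping the original fit frozen.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss", random_state=0)  # loss name per scikit-learn >= 1.1
classes = np.array([0, 1])  # did not convert / converted

def update_with_new_batch(X_batch, y_batch):
    """Incrementally refit on the latest observed data."""
    model.partial_fit(X_batch, y_batch, classes=classes)

# Example: a new week of (made-up) campaign features and observed conversions.
rng = np.random.default_rng(1)
X_week = rng.normal(size=(200, 3))            # e.g. spend, frequency, recency
y_week = (X_week[:, 0] + rng.normal(size=200) > 0).astype(int)
update_with_new_batch(X_week, y_week)
print(model.predict_proba(X_week[:5])[:, 1])  # refreshed conversion scores
```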

Okoronkwo said it is also important to use different approaches for measurement and predictions and to have a culture that questions assumptions and tests whether any outputs conflict with initial findings.

“It’s all part of the scientific method. It’s good to have a hypothesis of what you think you’ll see in the future—the key is to then test that hypothesis in multiple ways,” he said. “You want to gather as much data as possible and test many different options of explanatory variables.”
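In practice, that kind of multi-way testing can be as simple as scoring several candidate feature sets side by side under cross-validation rather than trusting a single specification. The sketch below uses invented marketing variables purely to show the pattern:

```python
# Hedged sketch of testing "many different options of explanatory variables":
# compare several candidate feature sets under cross-validation instead of
# committing to the first model that fits. All data and names are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 1000
data = {
    "tv_spend":     rng.normal(size=n),
    "search_spend": rng.normal(size=n),
    "promo_depth":  rng.normal(size=n),
}
# In this toy world, only search spend actually drives conversion.
converted = (0.8 * data["search_spend"] + rng.normal(size=n) > 0).astype(int)

feature_sets = {
    "tv_only":     ["tv_spend"],
    "search_only": ["search_spend"],
    "all_three":   ["tv_spend", "search_spend", "promo_depth"],
}
for name, cols in feature_sets.items():
    X = np.column_stack([data[c] for c in cols])
    score = cross_val_score(LogisticRegression(), X, y=converted, cv=5).mean()
    print(f"{name:12s} mean CV accuracy: {score:.3f}")
```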

This story first appeared in the April 16, 2018, issue of Adweek magazine.

Lisa Lacy is a senior writer at Adweek, where she focuses on retail and the growing reach of Amazon.
{"taxonomy":"","sortby":"","label":"","shouldShow":""}