'Who Does This Serve?' Dismantling Bias in Generative AI

Tech must reckon with its history of discriminatory programs


In 1932, Major League Baseball could have been integrated when the Philadelphia Athletics offered a position to a Black ballplayer. But for Romare Bearden to become their star pitcher, he would have had to pass as white.

One can assume that Bearden asked himself, “Who does this serve?” Rather than play along, Bearden quit baseball and became one of America’s most renowned and influential artists. Jackie Robinson would go on to break the color barrier in America’s pastime.

In discussions with peers, “Who does this serve?” is a constant question; the answer is often, “Clearly not us.” This “us vs. them” feeling isn’t new to the Black community when it comes to many facets of American life, from education and medicine to government programs and legislation.

Following the trend, much of generative AI has been created and fed data by “them.” Examples include facial recognition technology that fails to recognize Black faces, chatbots that reproduce racial profiling, and social media AI that flags African American Vernacular English as hate speech.

Unfortunately, the people creating these tools aren’t asking questions of inclusion, and the technology gap is becoming difficult to close. A lack of representation in technology research and development, and in the data used to train these artificial intelligences, perpetuates bias and leaves important questions unasked.

To bridge the gap, there are clear moves the tech community should make to ensure we don’t follow the discriminatory patterns of our innovative predecessors.

Employ developers trained in equitable coding practices

We should revamp current AI algorithms with fresh eyes trained in equitable coding practices. Investing in backfilling data and actively including diverse datasets to combat bias will further improve machine learning.
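As a rough illustration of that data work, the Python sketch below audits a training set’s demographic makeup against a reference population to surface the groups most in need of backfilling. The column name, group labels, and reference shares are hypothetical stand-ins, not a prescribed schema.

```python
# A rough sketch of a pre-training representation audit.
# The "group" column and reference shares are hypothetical;
# substitute your own schema and demographic source.
import pandas as pd

def representation_gaps(df: pd.DataFrame, group_col: str,
                        reference: dict) -> pd.Series:
    """Reference share minus observed share, per group.

    Positive values mean a group is underrepresented in the
    training data relative to the reference population.
    """
    observed = df[group_col].value_counts(normalize=True)
    ref = pd.Series(reference)
    gaps = ref - observed.reindex(ref.index).fillna(0.0)
    return gaps.sort_values(ascending=False)

# Toy example: groups "b" and "c" fall well short of their reference shares.
train = pd.DataFrame({"group": ["a"] * 80 + ["b"] * 15 + ["c"] * 5})
gaps = representation_gaps(train, "group", {"a": 0.60, "b": 0.25, "c": 0.15})
print(gaps[gaps > 0.05])  # groups to prioritize when backfilling data
```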

However, development teams made up of the folks who created the biased system will re-dig the same hole. Employing thinkers who code diversity-first to improve existing models is a quicker path to rectifying inequitable AI algorithms.

For any developer who has received accessibility bug tickets after a website launch, think of how much research (and time) it took to clear them. Now think of how much faster those issues could have been resolved by a developer trained in a11y practices. Keeping bias top of mind when retooling an AI product delivers the same kind of focused, cost-effective fix.

Create company guidelines against bias in AI

We need to create documentation that lets us code with equality as the baseline, not an afterthought. Just as WCAG 2.2 is the reference guide for accessibility standards, establishing rules for the planning, development, testing and implementation of generative AI will help mitigate future bias.

Making this a living document, shaped by voices across the industry who reflect the populations we serve, will make this new horizon of technology groundbreaking and transformative.

At present, no centrally organized standards exist, so companies need to craft their own rubrics. IBM and Amazon, for example, have developed their own tools to detect bias in AI. Companies don’t have to wait for a WCAG-style standard to build bias checks into their process before deploying AI.
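To make the rubric idea concrete, here is a minimal sketch of one metric such tools formalize: the disparate impact ratio behind the “four-fifths rule.” The group labels, predictions, and 0.8 threshold below are illustrative assumptions, not a complete fairness audit.

```python
# A minimal sketch of one bias check a company rubric can require:
# the disparate impact ratio (the "four-fifths rule" from U.S.
# employment law). Groups and data here are illustrative only.
import numpy as np

def disparate_impact(y_pred: np.ndarray, group: np.ndarray,
                     privileged: str) -> float:
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    Values below ~0.8 are conventionally treated as evidence of
    adverse impact and a signal to halt and investigate.
    """
    priv_rate = y_pred[group == privileged].mean()
    unpriv_rate = y_pred[group != privileged].mean()
    return unpriv_rate / priv_rate

# Example: model approvals for two hypothetical groups.
preds = np.array([1, 1, 1, 0, 1, 0, 0, 1, 0, 0])
groups = np.array(["p", "p", "p", "p", "p", "u", "u", "u", "u", "u"])
ratio = disparate_impact(preds, groups, privileged="p")
print(f"disparate impact: {ratio:.2f}")  # 0.25 -> fails the 0.8 rule
```

In practice, a team would wire a check like this into its testing rubric and block deployment whenever the ratio falls below the documented threshold.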

Imagine the liability we, as technologists, are leaving our companies and consumers exposed to. People can get hurt when AI models aren’t built by, and tested on, real people with varying backgrounds in real scenarios, much like the sepsis-detection AI used in hospitals that was wrong two-thirds of the time.

Fill the room with the communities being served

We must fill development teams with diverse thinkers by investing in internal talent. If the room is filled with people who look and think alike, you need a different room. If the room has only one or two divergent thinkers, you need a bigger room. If you cannot fill a bigger room, invest in it.

For Black employees, a lack of company support and investment in them and their goals is a major reason for seeking jobs elsewhere. Tapping into internal resources to build and sustain generative AI projects is crucial.

With training, education and focus, those assets can push the conversation within our organizations, making sure that the voices in the room serve those outside of it. If you don’t have the diversity of talent within your ranks to cultivate creative thinkers, you have a different problem to solve.

Development teams need to reflect what society looks like to help remove barriers of mistrust. Not only does this diversify the industry, but having people who can speak to these communities on their own terms means marginalized populations can willingly contribute to the data that feeds existing and future AI systems. Those communities can then be served, wholly, by the technology that comes from it. Cultural understanding builds a foundation of consumer trust like only investment can.

Inclusive generative AI can be revolutionary. Imagine producing something that serves its creator and consumers equally; that’s more than technology. It’s art.

I imagine that’s what our pitcher-turned-painter was chasing when he was asked to join the majors in 1932. When Bearden asked himself, “Who does this serve?” his initial answer was only himself. That wasn’t good enough for him, and it shouldn’t be good enough for us either.