Bias & Cultural Representation

Investigating how AI training data shapes whose aesthetics, narratives, and perspectives get amplified or erased.

Diego is a graphic designer working on a campaign for a community health initiative in a predominantly Latino neighborhood. He uses an AI design tool to generate imagery of families, healthcare workers, and community spaces. Every image the tool produces features light-skinned people in settings that look like affluent suburbs.

Diego manually adjusts his prompts, specifying skin tones, cultural markers, and architectural details specific to the neighborhood. The results improve but still feel off: the AI seems to have a narrower visual vocabulary for non-Western, non-affluent contexts. Poses are stiffer. Compositions are less natural. The tool is clearly better at generating some worlds than others.

Diego brings this up at a design conference. A machine learning researcher explains that the training data overwhelmingly represents commercially produced, Western-centric imagery because that is what dominates the internet. "The AI is a mirror," she says. "It shows us what we have already over-produced and under-produced."

A community organizer in the audience pushes back: "A mirror that only reflects certain people is not a mirror — it's a filter. And deploying that filter in healthcare communications has real consequences. People who don't see themselves in health materials are less likely to seek care."

Diego is forced to weigh his options. He can spend extra time fighting the tool's defaults. He can commission a local photographer instead and skip AI entirely. Or he can accept the imperfect AI output for efficiency and focus his effort elsewhere. Each choice involves a tradeoff between speed, quality, representation, and cost.

What do you think?

DISCUSSION QUESTIONS

• If an AI tool produces biased output, who bears responsibility — the developers, the curators of its training data, or the user who deploys the output?

• Is it enough to fix bias at the tool level, or does the problem require changing what gets created and published in the first place?

• Should AI tools be required to disclose the demographic composition of their training data?

• How should designers handle situations where AI tools are less capable of representing certain communities?

• Can efforts to "de-bias" AI inadvertently flatten cultural differences into a homogenized idea of diversity?