Sustainability

Nina runs a small design studio specializing in packaging for sustainable consumer brands. Her clients choose her specifically because she shares their environmental values. When she integrates AI tools into her workflow, the results are remarkable: she can explore ten times more concepts in the same timeframe, her iteration speed doubles, and her clients are thrilled with the quality.

Then a sustainability consultant she works with sends her a report. The cloud computing infrastructure her AI tools rely on consumes enormous amounts of energy and water. The consultant estimates that Nina's AI-assisted workflow has roughly tripled the carbon footprint of her design process. "You're making packaging for zero-waste companies using the most energy-intensive creative tools available," the consultant notes.

Nina investigates further. She learns that a single AI image generation session can consume as much energy as dozens of smartphone charges. Training the models she relies on required energy comparable to the annual electricity consumption of a small town. The data centers are cooled with millions of gallons of water, often in drought-prone regions.

But she also learns that her studio's total AI energy use is still a fraction of her clients' manufacturing energy. A colleague argues that obsessing over AI's energy consumption is "rearranging deck chairs" when transportation and manufacturing dwarf it. An environmental scientist counters that AI's energy use is growing exponentially and the time to establish sustainable norms is now, not after the infrastructure is locked in.

Nina considers switching to local, smaller AI models that use less energy but produce less sophisticated results. She wonders what she owes her clients, what she owes the environment, and whether these obligations are genuinely in conflict.

What do you think?

DISCUSSION QUESTIONS

• Should creative professionals factor the environmental cost of their tools into their practice — or is that an unreasonable burden?

• Is AI's environmental impact acceptable if the total is small relative to other industries?

• Does working on sustainability-focused projects create a special obligation to use sustainable tools?

• How should the industry weigh the environmental cost of AI against the time and resources it saves?

• Who should bear the cost of making AI infrastructure sustainable — tool providers, users, or governments?

Bias & Cultural Representation

Diego is a graphic designer working on a campaign for a community health initiative in a predominantly Latino neighborhood. He uses an AI design tool to generate imagery of families, healthcare workers, and community spaces. Every image the tool produces features light-skinned people in settings that look like affluent suburbs.

Diego manually adjusts his prompts, specifying skin tones, cultural markers, and architectural details specific to the neighborhood. The results improve but still feel off: the AI seems to have a narrower visual vocabulary for non-Western, non-affluent contexts. Poses are stiffer. Compositions are less natural. The tool is clearly better at generating some worlds than others.

Diego brings this up at a design conference. A machine learning researcher explains that the training data overwhelmingly represents commercially produced, Western-centric imagery because that is what dominates the internet. "The AI is a mirror," she says. "It shows us what we have already over-produced and under-produced."

A community organizer in the audience pushes back: "A mirror that only reflects certain people is not a mirror — it's a filter. And deploying that filter in healthcare communications has real consequences. People who don't see themselves in health materials are less likely to seek care."

Diego is forced to weigh his options. He can spend extra time fighting the tool's defaults. He can commission a local photographer instead and skip AI entirely. Or he can accept the imperfect AI output for efficiency and focus his effort elsewhere. Each choice involves a tradeoff between speed, quality, representation, and cost.

What do you think?

DISCUSSION QUESTIONS

• If an AI tool produces biased output, who bears responsibility — the developers, the training data, or the user who deploys the output?

• Is it enough to fix bias at the tool level, or does the problem require changing what gets created and published in the first place?

• Should AI tools be required to disclose the demographic composition of their training data?

• How should designers handle situations where AI tools are less capable of representing certain communities?

• Can efforts to "de-bias" AI inadvertently flatten cultural differences into a homogenized idea of diversity?

Creator Consent

Tomoko is a manga artist with a distinctive style she developed over fifteen years. One morning, a fan sends her a link to an AI image generator that offers her name as a selectable style option. Users can type "in the style of Tomoko Nakamura" and receive images that closely mimic her line work, color palette, and compositional approach. The model was trained on every image she ever posted online.

Tomoko did not consent to this. She was never contacted, never compensated, never credited. She contacts the AI company, which responds that her publicly posted images were legally scraped under fair use provisions. They compare it to how art students learn by copying masters in museums.

Tomoko finds this comparison offensive. "A student who copies my work in a sketchbook is learning," she says in an interview. "A company that copies my work into a product that sells access to my style for $20/month is profiting." But a technology ethicist complicates the issue: "The model did not copy any single image. It learned statistical patterns across millions of images. Tomoko's style is one signal among millions. Is that really the same as copying?"

The debate splits the creative community. Some artists begin using tools that "poison" their images with invisible data perturbations that corrupt AI training. Others argue for a licensing system where artists opt in and receive royalties proportional to their influence on model outputs. A third group contends that once you publish work publicly, you accept that it becomes part of the cultural commons, just as musicians accept that their riffs will be absorbed into the musical vocabulary.

Tomoko joins a class-action lawsuit. But privately, she also uses AI tools for parts of her own workflow: background generation, color exploration, layout iteration. She wonders whether she is entitled to protections she is not willing to extend to others.

What do you think?

DISCUSSION QUESTIONS

• Is there a meaningful ethical difference between a human studying your work and an AI being trained on it?

• Should creators be able to opt out of AI training? Should opt-out be the default?

• If an AI model learns from millions of works, does any individual creator have a legitimate claim?

• How should compensation work if an AI's output reflects the influence of thousands of contributors?

• Is it hypocritical to oppose AI training on your work while using AI tools that were trained on others' work?

Authorship & Ownership

Priya is a senior graphic designer at a branding agency. A major client asks for a complete visual identity system. Under deadline pressure, Priya uses an AI design tool to generate 200 logo concepts based on a detailed brief she wrote. She narrows these to 15, then to 3, making modifications to each: adjusting proportions, refining typography, shifting color relationships. The client selects one. It becomes the face of a global campaign.

When the client's legal team draws up ownership documents, a dispute emerges. The AI tool's terms of service state that outputs are in the public domain. Priya's agency argues that her selection, refinement, and contextual judgment constitute authorship. The AI company maintains that the generative model performed the creative act. A competing designer, seeing the final logo, claims it closely resembles a concept she uploaded to the AI platform's community gallery two years ago.

The case forces everyone involved to reckon with what authorship actually means. Priya's brief was detailed and specific. Her selection process was informed by twenty years of design experience. Her refinements were substantial. But the underlying forms (the shapes, the spatial relationships, the typographic gestures) were generated by a model she did not build, trained on work she did not create.

Priya's agency hires an intellectual property lawyer who explains that current copyright law in most jurisdictions requires a human author. But the law has not caught up to the question of where the threshold of human contribution lies. "Writing a prompt is like writing a contract with a ghostwriter," the lawyer says. "The question is how much the ghostwriter contributed, and whether the ghostwriter is a person."

What do you think?

DISCUSSION QUESTIONS

• Is selecting from AI-generated options a creative act comparable to making something from scratch?

• Should the person who writes a detailed AI prompt be considered an author? What about someone who writes a vague one?

• If AI-generated outputs cannot be copyrighted, what protections should creators who refine them receive?

• How should credit be handled when multiple people's uploaded work influenced the AI's training data?

• Does the concept of sole authorship still make sense in an era of AI collaboration?

Originality

Amara is a textile designer who has spent years developing signature patterns inspired by West African kente cloth and Japanese sashiko stitching. When a colleague shows her an AI tool that can generate entirely new textile patterns, she experiments with it and produces a striking geometric weave that looks unlike anything she has seen before.

But when she examines the tool's training data, she discovers it was trained on tens of thousands of textile samples from cultures around the world, including the very traditions she has spent years studying. The AI's output is undeniably novel (textile historians she consults confirm they cannot trace it to any single source) but it is also undeniably derivative, assembled from fragments of many creators' life work.

Amara presents the pattern at a design conference. A fellow designer argues that genuine originality requires some irreducible spark that cannot be decomposed into prior influences. "If you can reverse-engineer the output into a weighted average of inputs, it's sophisticated plagiarism, not creation." A computational artist pushes back: "Every human designer is also a weighted average of influences. The only difference is that we can't see the math. If anything, AI makes the process of influence more honest."

Amara begins working with a material scientist to physically produce the AI-generated pattern. During the process, she makes dozens of modifications: adjusting thread tension, altering color relationships, adapting the weave for specific looms. She wonders whether these physical interventions are what make the final product "original," or whether originality was already present in the AI's initial recombination.

The pattern wins a design award. In her acceptance speech, she is unsure how to describe what she made. Did she design it? Did the AI? Did the thousands of anonymous textile artists whose work trained the model?

What do you think?

DISCUSSION QUESTIONS

• Can recombination of existing elements produce something genuinely original, or does originality require creation from nothing?

• If human creativity also recombines influences, what makes AI recombination different — if anything?

• Does the physical act of producing a design (weaving, printing, building) add originality that digital generation alone does not?

• How should awards and recognition handle works where the line between human and AI contribution is unclear?

• At what point does influence become derivation, and derivation become plagiarism?

Authenticity

Kwame is a documentary filmmaker known for deeply personal films about grief and displacement. His latest project explores his own family's experience of migration. While editing, he hits a wall: he cannot find the right visual language to convey a particular childhood memory of leaving home.

A friend suggests an AI video generation tool. Kwame describes the memory in detail: the color of the light, the sound of a door closing, the feeling of watching familiar objects become smaller through a car window. The AI produces a sequence that is startlingly close to what he remembers. Watching it, he feels an emotional recognition so strong that he cries.

But he did not direct the shot. He did not operate a camera. He did not choose the lens or the framing. He described a feeling, and a machine interpreted it.

Kwame screens the sequence for his editor, who is moved by it and says it is the most honest moment in the film. When Kwame reveals it was AI-generated, his editor pauses. "Does it matter? The feeling is yours. The memory is yours. The choice to include it is yours." But a fellow filmmaker at a festival screening argues differently: "The craft of filmmaking is the struggle to translate feeling into image. If you skip the struggle, you skip the art. What you have is illustration, not expression."

Kwame faces a deeper question: if authenticity is about expressing what you feel, does it matter how the expression is produced? Or does the act of making (the imperfect, laborious, human process of translation) constitute an essential part of what makes art authentic?

What do you think?

DISCUSSION QUESTIONS

• If a work perfectly expresses your feeling but you did not physically create it, is it still authentically yours?

• Does the struggle of translating emotion into craft contribute to a work's authenticity, or is that a romantic myth?

• How would you feel watching a documentary knowing the most emotional scene was AI-generated?

• Is there a meaningful difference between directing a human collaborator to execute your vision and directing an AI to do the same?

• Can authenticity exist without vulnerability — the risk of failing to express what you mean?