Tomoko is a manga artist with a distinctive style she developed over fifteen years. One morning, a fan sends her a link to an AI image generator that offers her name as a selectable style option. Users can type "in the style of Tomoko Nakamura" and receive images that closely mimic her line work, color palette, and compositional approach. The model was trained on every image she ever posted online.
Tomoko did not consent to this. She was never contacted, never compensated, never credited. She contacts the AI company, which responds that training on her publicly posted images is, in its view, protected by fair use. The company compares it to how art students learn by copying masters in museums.
Tomoko finds this comparison offensive. "A student who copies my work in a sketchbook is learning," she says in an interview. "A company that copies my work into a product that sells access to my style for $20/month is profiting." But a technology ethicist complicates the picture: "The model did not copy any single image. It learned statistical patterns across millions of images. Tomoko's style is one signal among millions. Is that really the same as copying?"
The debate splits the creative community. Some artists begin using tools that "poison" their images with invisible data perturbations that corrupt AI training. Others argue for a licensing system where artists opt in and receive royalties proportional to their influence on model outputs. A third group contends that once you publish work publicly, you accept that it becomes part of the cultural commons, just as musicians accept that their riffs will be absorbed into the musical vocabulary.
Tomoko joins a class-action lawsuit. But privately, she also uses AI tools for parts of her own workflow: background generation, color exploration, layout iteration. She wonders whether she is entitled to protections she is not willing to extend to others.
What do you think?
DISCUSSION QUESTIONS
• Is there a meaningful ethical difference between a human studying your work and an AI being trained on it?
• Should creators be able to opt out of AI training? Should opt-out be the default?
• If an AI model learns from millions of works, does any individual creator have a legitimate claim?
• How should compensation work if an AI's output reflects the influence of thousands of contributors?
• Is it hypocritical to oppose AI training on your work while using AI tools that were trained on others' work?