Should Creators Use AI? The Honest Answer
The real arguments on both sides, the middle path most working creators have landed on, and what happens to the creative economy from here.
The Question Has Changed
Two years ago the debate was “is AI art even real art?” That debate is over — not because anyone won, but because millions of creators quietly started using AI tools in their workflows and shipped work that audiences responded to.
The question now is more practical: given that AI tools exist, are useful, and are not going away, what is the right way for a working creator to use them? Where do ethical lines actually sit?
The Case Against Using AI
The strongest version of the anti-AI case rests on three points.
Training data theft. Most major AI tools were trained on copyrighted work without licensing or consent. Artists, writers, and photographers argue — reasonably — that their work was stolen to build tools that now compete with them. Several active lawsuits are working through this.
Homogenization. AI tools regress creative output toward the mean. The more creators rely on them, the more the overall creative ecosystem starts to look the same. This is a real effect, visible on any platform with heavy AI content: the thumbnails look alike, the articles read alike, the music sounds alike.
Replacing low-tier work eliminates the career ladder. Junior designers, beginning writers, and stock photographers were the entry points to creative careers. When AI replaces that tier, the only creators left are the established ones. The next generation has nowhere to start.
These are serious arguments. They are not “I do not like the aesthetics.” They are “the economic and cultural foundation of creative work is being removed.”
The Case For Using AI
The strongest version of the pro-AI case also has three points.
Democratization. A working-class kid without an art school education, without connections, without disposable income for software can now produce professional-grade visual work for $20/month. This is real. Dismissing it because it disadvantages incumbents is not a counterargument.
Leverage for small creators. A solo creator can now do the work of a team of five. Writing, editing, thumbnail design, research, transcription. This lets small voices compete with large institutions in ways that were not possible five years ago.
It is a tool like every other tool. Photoshop was once “cheating.” Digital photography was “cheating.” Autotune, sample packs, stock photography, and AutoCAD were all “cheating.” The pattern is that tools that expand creative capability eventually stop being controversial and become infrastructure.
These arguments are also serious. Dismissing them because they benefit new entrants is not a counterargument either.
The Position Most Working Creators Have Landed On
After two years of real-world use, the majority of working creators have converged on something like this:
- Use AI for the parts of the process that are not creative. Research, transcription, boilerplate, summarization, alt text generation, formatting, moodboards. These are the jobs AI actually does well, and they displace no creative decisions.
- Do not use AI for the parts that are creative. The actual ideas, the actual writing, the actual art, the actual decisions about what to make. These still belong to the human.
- Do not ship raw AI output as your work. Everyone can tell. Your audience can tell. Your portfolio will suffer.
- Credit and disclose when asked. If an editor, client, or audience asks whether AI was involved, answer honestly. Do not pretend AI-assisted work is hand-made.
- Pay human artists for human work. When you hire a photographer, illustrator, or writer, pay them. Do not replace the job with a Midjourney subscription and tell yourself it is the same.
This middle path is not philosophically satisfying. It does not resolve the training data question, solve the career ladder problem, or address homogenization. What it does is let a working creator function in 2026 without being either a Luddite or a scab.
What Happens Next
Four things seem likely over the next few years.
Legal clarity on training data. The lawsuits will produce outcomes. Expect some combination of: required opt-outs, required licensing for commercial models, and open-source models in a gray zone.
Quality divergence. Mass-produced AI content will get cheaper and more generic. Premium human-led work (or heavily human-edited AI-assisted work) will become more valuable. The middle tier — generic human work — gets squeezed out.
Platform responses. Search engines, social platforms, and content marketplaces will develop ways to distinguish AI-dominant content from human-dominant content. Some will penalize it, some will embrace it, some will do both inconsistently.
New creative forms emerge. Every major tool has produced new creative forms that did not exist before it. Photography gave us photojournalism. Video editing gave us the music video. It would be strange if AI did not produce its own new forms. The early examples — interactive AI fiction, live-generated visuals, reactive music — suggest the future is more interesting than either the hype or the doom.
The Honest Answer
Use AI. Use it thoughtfully. Use it for the parts of creative work that are not creative. Keep the parts that are creative human. Pay human artists. Do not pretend. Do not panic. The creative economy of 2030 will look different from 2020, but “creative” is not disappearing — the definition is just shifting again, like it does every generation.
The creators who disappear will be the ones who neither adopted the tools nor sharpened the parts of their craft that the tools cannot do. The ones who survive will do both.
Further Reading
For ongoing debate: the Hollywood writers’ and actors’ strikes of 2023 forced studios to negotiate AI clauses into contracts. The resulting agreements are worth reading as a template for other industries. The ongoing lawsuits against OpenAI, Midjourney, and Stability AI will shape what is legal. Groups like the Concept Art Association advocate for working artists in these debates.
Frequently Asked Questions
Is it ethical to use AI for creative work?
Depends on how. Using AI for research, transcription, and draft support is broadly considered ethical. Passing off raw AI output as hand-made work is not. Replacing paid human artists with AI rather than complementing them is contested.
Will AI replace creative jobs?
It has already replaced the lowest tier (stock photos, template designs, filler content). It has not replaced top-tier creative work. The middle tier is the uncertain zone where outcomes are still being decided.
Is AI training on copyrighted work legal?
Currently unresolved. Multiple active lawsuits will set precedent. The US Copyright Office has stated pure AI output is not copyrightable. Training on copyrighted works is being litigated under fair use.
Should I disclose AI use in my work?
Increasingly yes, especially in editorial, journalistic, and commissioned work. If a client or audience would care about the distinction, disclose. Silence when asked is dishonesty; unprompted disclosure in personal work is a judgment call.
Can AI-assisted work win art or writing awards?
Some awards now allow it, some ban it, and most have not decided. Grand Slam poetry competitions have banned AI. Some literary magazines have banned it. The Sony World Photography Awards had a winner decline the prize after revealing the image was AI-generated.
Does AI make creativity easier or harder?
Easier to start, harder to stand out. The floor is much higher than it was. The ceiling requires more craft and taste to distinguish from mass-produced AI work.
Is using AI “cheating”?
The same question was asked about Photoshop, digital photography, autotune, and stock photography. Each generation eventually decides that tools are tools. The real question is always: does the final work offer something worth the audience’s attention?