DON'T LAND ON THE WRONG PROPERTY: There is NO get-out-of-jail-free card for AI copyright infringement
- candyandgrim

- Nov 20, 2025
- 12 min read

Why your choice of AI tool could cost you—or your client—everything
REMEMBER TORRENTING? THIS IS WORSE.
Back in the early 2000s, everyone knew torrenting films and music wasn't technically legal—but it wasn't technically illegal either. Regulation takes time to catch up with technology.
Then one day, it was very illegal. Fines arrived. Warnings flooded in. Some people went to jail.
Then LoveFilm emerged. Netflix followed. Spotify launched. The story ends well—for those who adapted.
We're in that exact moment right now with AI.
Everyone knows there are "dodgy dealings" happening with AI training data. But this time, there's a twist: governments are encouraging AI development. There's an AI arms race to win. Billions are at stake.
So this isn't quite the shady piracy of torrenting—it's more like The Big Short. Corporate interests, unclear regulations, massive financial incentives, and a ticking clock before the reckoning.
And just like 2008, the ones who didn't see it coming will pay the price.
You're a creative professional. You've embraced AI because frankly, you'd be left behind if you didn't. But here's the uncomfortable truth nobody's talking about loudly enough: not all AI is created equal, and some of it could land you in legal hell.
This REALLY goes against the grain for me, and it wasn't fun, but I've spent months researching AI regulations, copyright law, and indemnification clauses across the EU, UK, and US. What I've found is alarming, and most creatives have no idea they're walking a tightrope without a net.
This isn't scaremongering. This is survival. I am sharing this because I care about my creative community!
THE THREE CATEGORIES OF AI (AND WHY IT MATTERS)
Before we dive into the danger zones, let's establish the ground rules. AI in creative work falls into three distinct categories, each with radically different legal implications:
🟢 CATEGORY 1: AI-POWERED TOOLS (Zero Legal Risk)
These are tools that enhance your work or process your licensed content—they don't generate anything from scraped data.
Examples:
Adobe Sensei (content-aware fill, neural filters, smart selection)
Topaz Labs (AI upscaling, denoising, sharpening)
DaVinci Resolve AI (colour matching, object removal)
After Effects AI (rotoscoping, motion tracking)
Descript (transcription, editing YOUR recordings)
3D rendering denoisers (Nvidia OptiX, AMD ProRender)
Why they're safe:
They work WITH your content or licensed assets
No generative AI involved
Zero copyright liability
No disclosure requirements (except in specific regulated industries)
Use them freely—these are your creative power tools with none of the legal baggage.
🟢 CATEGORY 2: COMMERCIALLY SAFE GENERATIVE AI (Protected, With Conditions)
These tools generate content from scratch BUT offer legal protection through indemnification or ethically-sourced training data.
The Indemnified Four:
Adobe Firefly – Full IP indemnification, trained on Adobe Stock + public domain
Shutterstock AI – Full IP protection, human-reviewed datasets
Getty Images AI – $50,000+ per-image indemnity
iStock AI – $10,000 per-image indemnity
The Ethically-Trained:
Soundraw – Music generation trained exclusively on in-house compositions
Synthesia – Avatar/voice generation with "AI Copyright Pledge" indemnification
Canva Magic Studio – Trained on Canva's licensed library
Why they're (relatively) safe:
Full or partial IP indemnification—they'll defend YOU if copyright claims arise
Transparent training data (licensed, public domain, or owned)
EU AI Act compliant with proper disclosure
Cost/time savings: 20-40% reduction compared to 100% human workflows
The catch:
EU clients require disclosure (labels, audit trails, metadata)
Some are enterprise-tier only (Adobe Firefly indemnity)
You must still follow usage guidelines
Use these strategically—they're your legal safety net when budgets demand efficiency.
🔴 CATEGORY 3: UNINDEMNIFIED GENERATIVE AI (Legal Russian Roulette)
These are the tools everyone's excited about; they typically yield the best results, and sadly they're the ones that could destroy you.
The Blacklist:
Image: Midjourney, DALL-E, Stable Diffusion, Ideogram, Flux
Video: Runway Gen-3 (video generation), Pika, Kling, Luma, Sora
Music: Suno, Udio (both facing RIAA lawsuits)
Voice (partial): ElevenLabs music generation (NOT Voice Library stock voices—see grey area section)
⚠️ RUNWAY: A SPLIT SITUATION
Runway requires special attention because they offer BOTH safe tools and risky GenAI:
✅ SAFE: Runway AI-Powered Tools
These enhance YOUR content—no generative training involved:
Background removal (removes BG from YOUR footage)
Object removal (inpainting on YOUR video)
Upscaling (enhances YOUR content)
Colour grading (matches YOUR footage)
Super-slow motion (interpolates YOUR footage)
Why safe: Like Topaz or DaVinci AI—they process YOUR content, don't generate from scraped data.
❌ UNSAFE: Runway Gen-3 (Video Generation)
Trained on YouTube videos without creator permission
No IP indemnification
High copyright infringement risk
The verdict: Use Runway's enhancement tools freely. Avoid Gen-3 video generation for client work unless client explicitly accepts risk.
Why they're dangerous:
1. TRAINING DATA THEFT
Most are trained on scraped content—millions of copyrighted works taken without permission or compensation.
Getty Images is suing Stability AI. The New York Times is suing OpenAI. Artists are suing Midjourney. The list grows daily.
2. NO INDEMNIFICATION (OR WORSE)
Midjourney's Terms of Service explicitly state: If they get sued for copyright infringement and lose, they can counter-sue YOU for using their tool.
Read that again. You generate an image. Midjourney gets sued. You get sued too.
OpenAI/DALL-E? No indemnification. If your content infringes, you're on your own.
ElevenLabs? You indemnify THEM. If someone sues them over YOUR voice clone, YOU pay their legal fees.
3. COPYRIGHT LIABILITY = CRIMINAL IN SOME REGIONS
This isn't just "delete the post and move on." In several jurisdictions, commercial use of copyright-infringing material is a criminal offence, treated much like handling stolen goods:
UK: Up to 7 years imprisonment
US: Up to 5 years (federal charges)
Japan: Up to 10 years
Germany: Up to 5 years
And here's the killer: even if YOU substantially edit the AI output, if the base generation came from a tool trained on stolen content, you can still be liable. To be fair, this is a fringier scenario and would likely be hard to prove either way.
THE EU AI ACT: DISCLOSURE IS MANDATORY
If you're working with EU clients (or any client with EU audiences), the EU AI Act now requires:
✅ Clear disclosure of AI-generated or AI-manipulated content
✅ Audit trails (prompts, tools used, edits made)
✅ C2PA metadata where supported
✅ Labels: "AI-generated" or "AI-enhanced"
Penalties for non-compliance: up to €35 million or 7% of global revenue (whichever is higher) for the most serious breaches; transparency failures sit in a lower tier of up to €15 million or 3%.
Exception: Basic edits (crop, colour correction, exposure) don't require disclosure.
But here's the rub: Even if you heavily edit AI-generated content (say, 80% human work), if AI contributed to content creation, you must disclose.
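To make the audit-trail idea concrete, here's a minimal sketch of a disclosure "sidecar" file you could write next to each deliverable. The Act mandates disclosure and traceability, not this particular format; the field names and the helper function are my own illustration, not any standard.

```python
# Minimal sketch: write a disclosure "sidecar" JSON next to a deliverable.
# The EU AI Act mandates disclosure and traceability, not this exact format;
# the field names here are illustrative, not a standard.
import json
from datetime import datetime, timezone
from pathlib import Path

def write_disclosure_sidecar(deliverable: str, tools: list[str],
                             prompts: list[str], human_edits: str) -> Path:
    record = {
        "deliverable": deliverable,
        "label": "AI-enhanced" if human_edits else "AI-generated",
        "tools": tools,              # e.g. ["Adobe Firefly"]
        "prompts": prompts,          # keep every prompt you ran
        "human_edits": human_edits,  # free-text summary of your edits
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }
    sidecar = Path(deliverable).with_suffix(".disclosure.json")
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar

# Usage: write_disclosure_sidecar("hero_banner.png", ["Adobe Firefly"],
#         ["sunset over mountains, cinematic"], "retouched sky, added logo")
```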
INDUSTRY-SPECIFIC RISKS: NOT ALL CLIENTS ARE EQUAL
Some sectors have stricter rules regardless of which AI you use. If you're working in fintech, pharma, or Web3 (like I do), you're in a high-scrutiny sector, and using unindemnified AI is playing with fire.
THE COMFYUI CONUNDRUM: WHITELIST, GREYLIST, AND HOW TO STAY SAFE
ComfyUI itself is just a framework—a blank canvas. Safety depends entirely on what models and nodes you load into it.
🟢 WHITELIST: SAFE COMFYUI CONFIGURATIONS
These setups use only licensed, owned, or ethically-sourced components:
1. AI-POWERED ENHANCEMENT NODES (Always Safe)
Upscaling models: ESRGAN, RealESRGAN (note: Ultimate SD Upscale re-runs a Stable Diffusion checkpoint over tiles, so it inherits that checkpoint's training-data risk)
Denoising: Any denoiser (enhances, doesn't generate)
Colour grading: AI colour matching on YOUR footage
Face restoration: GFPGAN, CodeFormer (on YOUR images)
Why safe: These enhance existing content—no generative training data involved.
2. ADOBE FIREFLY INTEGRATION (When Available)
Any ComfyUI nodes that connect to Adobe Firefly API
Full IP indemnification carries through
3. YOUR OWN CUSTOM MODELS
LoRAs trained on YOUR images/photos/3D renders
LoRAs trained on client assets (with written permission)
Checkpoint models YOU trained from scratch on YOUR data
Critical: Document what data you trained on. Keep consent forms.
4. PUBLIC DOMAIN MODELS
Models trained exclusively on pre-1930 content (US public domain)
Verified public-domain or CC0 datasets (check each dataset's licence; many "open" datasets are CC BY, not CC0)
Government/NASA imagery (typically public domain)
Verify: Check the model card—if training data isn't disclosed, assume risk.
5. CONTROLNET WITH YOUR REFERENCES
Using YOUR photos as ControlNet guides
Using YOUR 3D renders as depth maps
Using YOUR sketches as composition guides
Safe because: You're using AI to interpret YOUR content, not generating from scraped data.
🟡 GREYLIST: USE WITH EXTREME CAUTION
These require specific conditions or client risk acknowledgment:
1. STABLE DIFFUSION BASE MODELS
Training data: LAION-5B (scraped from the web without permission)
Risk level: 🔴 HIGH for client work
When you might use it:
Internal concept development (not client-facing)
With explicit client acknowledgment of copyright risk
For US clients in non-regulated industries
NEVER for EU clients, pharma, finance, political campaigns
Mitigation: Heavy human editing (80%+ of final work), full disclosure to client
2. CIVITAI CHECKPOINT MODELS
Training data: Unknown, community-contributed, likely includes copyrighted art
Risk level: 🔴 VERY HIGH
My recommendation: Avoid for commercial work entirely. These models are:
Frequently trained on fan art (copyright infringement)
Often tuned to specific artist styles (IP theft)
Covered by no legal protection whatsoever
Only acceptable use: Personal experimentation, never client work
3. COMMUNITY LORAS (Unverified Sources)
Training data: Unknown
Risk level: 🔴 HIGH to VERY HIGH
If you must use:
Only for internal concepting
Never as final deliverable
Full disclosure to client if it influences final work
Document everything (prompt, model, output, human edits)
4. IP-ADAPTER (Depends on Reference Images)
Safe if: Using YOUR images or licensed content as style references
Risky if: Using copyrighted art/photos as style references without permission
❌ BLACKLIST: NEVER USE FOR CLIENT WORK
1. Models Trained on Copyrighted Art
Any LoRA trained on specific artist styles (Greg Rutkowski, Artgerm, etc.)
Anime models trained on copyrighted manga/anime
Character LoRAs (Disney, Marvel, etc.)
Why: Direct copyright infringement, no legal defence
2. "Style of [Artist Name]" Workflows
Even if the model itself is somewhat safe, prompting "in the style of [living artist]" creates derivative works without permission.
Legal status: Grey area, but trending toward infringement in recent case law
3. Any Model with Undisclosed Training Data
If the model card doesn't clearly state what it was trained on, assume it's unsafe.
📋 COMFYUI SAFETY CHECKLIST
Before using ANY ComfyUI workflow for client work:
✅ Check EVERY node in your workflow
What model does it use? What was that model trained on? Can you prove it's licensed, owned, or public domain?
✅ Document your workflow
Screenshot your node setup. Save your workflow JSON. Keep model cards and licence info. (See the audit sketch after this checklist.)
✅ Test the "prompt health" of your workflow
If you prompt "Disney princess," does it generate recognisable IP? If yes → that model is trained on copyrighted content.
✅ Know your client's risk tolerance
EU client? Only whitelist. Pharma/finance? Only whitelist. US tech startup? Maybe greylist with disclosure. Everyone else? Whitelist is safest.
✅ Have a fallback plan
If the ComfyUI output is risky, can you recreate it with Adobe Firefly? Can you achieve the same result with Photoshop's AI tools?
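To make the node check less painful, here's a rough audit script for a saved ComfyUI workflow in API format (a JSON dict of node-id → {"class_type", "inputs"}). It prints every model file the workflow references so you can check each one against its model card. The input keys below cover common stock nodes; custom nodes may use other names, so treat this as a starting point, not a guarantee.

```python
# Rough audit of a saved ComfyUI workflow (API format: node-id -> node dict).
# Lists every model file the workflow references so each can be checked
# against its model card. MODEL_KEYS covers common stock nodes only;
# custom nodes may name their inputs differently.
import json
import sys

MODEL_KEYS = {"ckpt_name", "lora_name", "control_net_name",
              "upscale_model_name", "vae_name", "model_name"}

def audit_workflow(path: str) -> None:
    with open(path) as f:
        workflow = json.load(f)
    for node_id, node in workflow.items():
        inputs = node.get("inputs", {})
        for key, value in inputs.items():
            if key in MODEL_KEYS and isinstance(value, str):
                print(f"node {node_id} ({node.get('class_type')}): "
                      f"{key} = {value}  -> verify licence/training data")

if __name__ == "__main__":
    audit_workflow(sys.argv[1])  # e.g. python audit_workflow.py my_flow.json
```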
COMFYUI SUMMARY:
🟢 Safe = Enhancement nodes + YOUR content + verified public domain
🟡 Risky = Stable Diffusion base models + community LoRAs + unknown training data
🔴 Blacklist = Copyrighted art models + "style of [artist]" + anything undisclosed
Rule: ComfyUI is only as safe as its least-safe node. One risky model = entire workflow is risky.
THE CUSTOM AI LOOPHOLE: TRAIN YOUR OWN (SAFELY)
You CAN train AI on content—IF you have the rights:
✅ Safe custom training:
YOUR own photos, videos, 3D renders
Client's brand assets (with written permission)
Public domain content (pre-1930 in the US, varies by region)
Content you commissioned (check contracts—ensure you own training rights)
❌ Unsafe custom training:
Stock footage/images you purchased (most licenses prohibit AI training)
Client assets without explicit written consent
Any copyrighted material you don't own
Tools for safe custom training:
Leonardo.ai (train on YOUR images)
Adobe Boards Custom (when available—uses your assets)
Recraft (YOUR/client brand assets)
Mago.studio (YOUR assets)
Requirement: Document EVERYTHING. Keep signed consent forms, usage rights contracts, and training logs.
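Here's what "document EVERYTHING" can look like in practice: a minimal sketch of a training-data manifest, one row per asset, recording its source, the rights basis, and a consent reference. A plain CSV is enough; the column names are my own illustration, not any legal standard.

```python
# Minimal sketch of a training-data manifest: one row per asset you train on,
# recording where it came from and what gives you the right to use it.
# A plain CSV is enough; these columns are illustrative, not a standard.
import csv
from datetime import date

FIELDS = ["asset_path", "source", "rights_basis", "consent_ref", "logged_on"]

def log_training_asset(manifest: str, asset_path: str, source: str,
                       rights_basis: str, consent_ref: str = "") -> None:
    # rights_basis: e.g. "own work", "client licence", "public domain"
    with open(manifest, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:  # new file: write the header row first
            writer.writeheader()
        writer.writerow({
            "asset_path": asset_path,
            "source": source,
            "rights_basis": rights_basis,
            "consent_ref": consent_ref,  # e.g. signed consent form ID
            "logged_on": date.today().isoformat(),
        })

# Usage: log_training_asset("lora_manifest.csv", "renders/robot_042.png",
#         "my own 3D render", "own work")
```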
THE GREY AREA: ELEVENLABS—NUANCED BUT NAVIGABLE
ElevenLabs is popular for voice synthesis, and the situation is more nuanced than a simple "unsafe" label.
✅ WHAT'S SAFE:
Voice Library (Stock Voices)
Community-created voices uploaded with consent
Creators voluntarily share their voices and get paid ($1M+ paid out)
Verification system prevents unauthorized cloning
Commercial use license included
Use case: Client presentations, internal videos, low-risk content
⚠️ WHAT'S RISKY:
Music Generation
Training data sources unclear
No indemnification for copyright claims
In ElevenLabs' defence, I do believe they are working with record labels to obtain permission. Can anyone confirm?
The Indemnification Problem
❌ No indemnification—YOU indemnify THEM if issues arise
❌ ToS prohibits: Using output to train other AI
❌ ToS requires: Written consent for ANY voice cloning (even your own)
WHEN YOU CAN USE IT:
✅ Voice Library stock voices for non-regulated content
✅ Custom voice cloning with signed consent + payment agreement
✅ Internal use only (not client-facing in regulated industries)
✅ US campaigns (EU has stricter likeness/voice rights)
WHEN YOU CAN'T:
❌ Music generation for client work
❌ EU campaigns without extensive consent documentation
❌ Commercial voice cloning without explicit contracts
❌ Stock voices for political or pharma content (extra scrutiny)
Bottom line: ElevenLabs Voice Library stock voices = reasonably safe for most uses. Music generation + lack of indemnification = proceed with caution.
THE PROMPT TRAIL PROBLEM: NOBODY'S TRACKING PROPERLY
Here's a dirty secret: most creatives aren't documenting their AI usage—and that's a compliance disaster waiting to happen.
But let's be honest: most creatives can't even remember to fill out a timesheet for billable hours or name their files and layers properly.
You know it's true. We've all got:
final_FINAL_v3_actualfinal_USE_THIS_ONE.psd
Layers called "Layer 1 copy 3"
Timesheets from three weeks ago still blank
So expecting creatives to produce a watertight paper trail of every AI prompt? Beyond unrealistic. It's like asking a cat to fill out a tax return.
The EU AI Act requires audit trails. But how many of you are:
Saving every prompt?
Logging which tools you used?
Recording which assets were AI-generated vs. human-created?
Keeping version history showing human edits?
Nobody. Because it's tedious, disruptive, and frankly, we'd rather be creating than doing paperwork.
The gap in the market: A passive tool (browser extension, desktop app) that:
Automatically captures every prompt across tools (Firefly, Midjourney, ChatGPT, ComfyUI, etc.)
Logs tool names, timestamps, source files
Generates shareable compliance reports (QR code, link)
Flags risky prompts BEFORE submission ("Possible copyright issue: 'in the style of Disney'")
Think: Grammarly meets Read.ai meets legal compliance—but for creatives who can't even name layers consistently.
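For a flavour of how simple the core of that tool could be, here's a toy local prompt logger. A real product would hook the tools themselves via extension or API; this sketch just shows the loop: log every prompt with tool name and timestamp, and flag risky phrasings before you run them. The watchlist patterns are illustrative only.

```python
# Toy version of the idea: log every prompt with a timestamp and tool name,
# and flag risky phrasings before you run them. A real product would hook
# the tools themselves; the watchlist below is illustrative only.
import json
import re
from datetime import datetime, timezone

RISKY_PATTERNS = [
    r"in the style of \w+",        # living-artist style prompts
    r"\b(disney|marvel|pixar)\b",  # obvious third-party IP
]

def log_prompt(logfile: str, tool: str, prompt: str) -> list[str]:
    warnings = [p for p in RISKY_PATTERNS if re.search(p, prompt, re.I)]
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "prompt": prompt,
        "warnings": warnings,
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(entry) + "\n")  # JSON Lines: one entry per line
    return warnings

# Usage:
# if log_prompt("prompts.jsonl", "Midjourney", "castle in the style of Disney"):
#     print("Possible copyright issue: rethink this prompt")
```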
If anyone wants to collaborate on building this, I'm all in. Because if we're being realistic, manual documentation will never happen. We need automation, or we need to accept that we're all going to get fined.
Bird & Bird: have you heard of anyone developing a tool like this for EU compliance?
YOUR AI SAFETY CHECKLIST
Before using ANY AI tool on client work, ask yourself:
✅ THE FIVE SAFETY QUESTIONS:
1. Does this tool offer IP indemnification? If no → consider alternatives or get client sign-off.
2. What was it trained on? Licensed/owned content = safe. Scraped content = high risk. Unknown = assume risk.
3. Is my client in a regulated industry (pharma, finance, political, news)? If yes → use only indemnified tools + full disclosure.
4. Is this for an EU audience? If yes → prepare disclosure labels, audit trails, and metadata.
5. Can I document the entire process (prompts, tools used, human edits, source files)? If no → reconsider the workflow; perhaps stick to a single platform so all prompts can be captured in a single screen grab.
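The five questions fold down into a small decision function. This toy encodes my reading of the risk levels (and anticipates the three-tier system in the next section); it is absolutely not legal advice.

```python
# The five safety questions, folded into one toy decision function.
# This encodes my reading of the risk tiers; it is not legal advice.
def ai_tool_verdict(indemnified: bool, training_data: str,
                    regulated_industry: bool, eu_audience: bool,
                    can_document: bool) -> str:
    # training_data: "licensed", "scraped", or "unknown"
    if training_data in ("scraped", "unknown") and not indemnified:
        return "avoid for client work (Tier 3)"
    if (regulated_industry or eu_audience) and not indemnified:
        return "indemnified tools only for this client"
    if not can_document:
        return "fix your audit trail before using any generative AI"
    if indemnified and training_data == "licensed":
        return "safe with disclosure (Tier 2)"
    return "proceed with caution and client sign-off"

# Usage: ai_tool_verdict(indemnified=False, training_data="scraped",
#         regulated_industry=False, eu_audience=True, can_document=True)
# -> "avoid for client work (Tier 3)"
```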
THE BOTTOM LINE: PROTECT YOURSELF, PROTECT YOUR CLIENTS
Here's my three-tier system for client work:
🟢 TIER 1: AI-ASSISTED (Standard Pricing)
AI-powered tools only (Adobe Sensei, Topaz, DaVinci AI)
Zero legal risk, zero disclosure
Price: Full rate
🟡 TIER 2: SAFE GENERATIVE AI (25-35% Discount)
Adobe Firefly, Shutterstock AI, Soundraw, Synthesia only
Full indemnification + EU compliant
Disclosure labels included
Price: 25-35% reduction
🔴 TIER 3: UNINDEMNIFIED AI (Not Offered for Client Work)
Midjourney, Runway Gen-3, Suno, etc.
I don't offer this for commercial client work—liability falls entirely on client
If client insists: written acknowledgment of ALL risks (and I still probably decline)
Why I don't offer Tier 3: Even with a waiver, if Midjourney gets sued and counter-sues users (as their ToS allows), my client is liable—and I'm complicit. Not worth it.
Think of it like big brands using memes:
Everyone does it
It feels harmless
Most of the time, nothing happens
But when it goes wrong, the legal bill is catastrophic
And "everyone else was doing it" isn't a defence
🎨 PERSONAL USE EXCEPTION:
For learning, experimentation, and personal fun? You're generally fine. I have a page of AI Slop fun at https://www.ssh-creative.com/ai-slop, but none of it is commercially viable.
What's (mostly) safe for personal use:
✅ Generating images for a personal, non-client-facing portfolio (with disclosure; see the caveat below)
✅ Experimenting with tools to learn workflows
✅ Creating content for personal social media
✅ Making birthday cards for your mum
✅ Testing prompts, styles, techniques
What's risky even personally:
❌ Creating deepfakes of real people without consent
❌ Generating content impersonating brands/celebrities
❌ Anything you plan to monetise later (treat as commercial from the start)
❌ Content in regulated spaces (pharma, finance, political) even for "fun"
The line: Personal use = experimenting on your own time with no commercial gain. The moment you:
Pitch it to a client
Use it in a portfolio to win work
Monetise it on YouTube, Patreon, etc.
It becomes commercial use—and all the legal risks apply.
Bottom line: Play with Midjourney on weekends, learn ComfyUI, experiment with Runway Gen-3. But the second it touches client work or generates income, switch to indemnified tools or accept you're taking a legal gamble.
FINAL THOUGHTS: THE AI GOLD RUSH IS OVER—NOW COMES THE RECKONING
For two years, we've been in the "move fast and break things" phase of AI adoption. That era is ending.
Lawsuits are piling up. Regulations are tightening. The first wave of copyright infringement cases will set precedents that define this industry for decades.
Don't be the test case.
Use AI strategically. Use it safely. Document everything. Protect your clients as fiercely as you protect your own work.
Because when the legal hammer falls—and it will—ignorance won't be a defence. It is time to grow up.
RESOURCES & FURTHER READING
📋 EU AI Act: eur-lex.europa.eu
🎬 AI Filmmaking EU Regulations: studio.aifilms.ai/blog
🔒 C2PA Content Credentials: c2pa.org
📸 Adobe Content Credentials: adobe.com/sensei/credentials
Want to discuss AI legal safety or collaborate on compliance tools? Connect with me on LinkedIn or visit ssh-creative.com.
Let's build a safer, more transparent AI future—together.
#AI #ArtificialIntelligence #CreativeIndustries #DigitalCreative #ContentCreation #CreativeTechnology #AICompliance #Copyright #DigitalRights #RegulatoryCompliance #EUAIAct #AIRegulation #AIEthics #AIGovernance #ResponsibleAI #MotionDesign #MotionGraphics #VideoProduction #3DDesign #Animation #VFX #PostProduction



