My Journey with Generative AI in the Creative Industries
This page is a living record of the AI tools, models, platforms, and services I actively use in my creative practice. I've been working with AI in production since 2018 and my stack changes regularly — tools get added when they earn a place, and removed when something better comes along or the ethics don't hold up. I've tried to be honest about both.

Each entry carries three ratings:
- Ethics (0–5) — how transparent and responsible the platform is about training data, consent, and IP.
- Commercial — whether I'd use it on final client work without hesitation: Yes / Conditional / No.
- C2PA — whether it supports the Coalition for Content Provenance and Authenticity standard for content provenance metadata (Adobe's implementation is called Content Credentials), increasingly relevant for responsible AI governance in media production: Yes / Partial / No / N/A.
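To make "content provenance metadata" concrete: per the C2PA specification, a JPEG carries its manifest in APP11 (0xFFEB) marker segments as JUMBF boxes labelled `c2pa`. The sketch below is my own naive presence check, not part of any official SDK — it only detects that a manifest exists; real verification (signatures, hash bindings) should go through Adobe's c2patool or the official C2PA SDKs.

```python
def has_c2pa_manifest(jpeg_bytes: bytes) -> bool:
    """Naive check for an embedded C2PA manifest in a JPEG.

    Per the C2PA spec, manifests travel in JPEG APP11 (0xFFEB)
    segments as JUMBF boxes; this scans segment headers and looks
    for the 'c2pa' label. It does NOT validate signatures or hashes.
    """
    i = 2  # skip the SOI marker (0xFF 0xD8)
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # not at a valid segment marker; bail out
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # Start of Scan: entropy-coded data follows
            break
        # segment length field includes its own two bytes
        seg_len = int.from_bytes(jpeg_bytes[i + 2 : i + 4], "big")
        segment = jpeg_bytes[i + 4 : i + 2 + seg_len]
        if marker == 0xEB and b"c2pa" in segment:
            return True
        i += 2 + seg_len
    return False
```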

ACTIVE STACK - PLATFORMS

Claude (Anthropic):
My primary AI tool and by some distance the one I use most across my entire workflow. Research, copywriting, strategic thinking, creative briefs, prompt engineering, governance frameworks, client communications, pitch development — if it requires careful reasoning, structured thinking, or nuanced output, this is where I start. What separates it from the alternatives isn't just output quality but the quality of the thinking: it pushes back, holds context across complex multi-part problems, and treats a brief like a collaborator rather than a search engine. It has effectively replaced Grok, Gemini, and Perplexity as my daily driver for everything that requires serious reasoning.
Ethics: 4/5 | Commercial: Yes | C2PA: N/A

Adobe Gen-AI:
Firefly, Premiere, Photoshop, After Effects, Illustrator
The AI tools I use most in terms of sheer hours are embedded inside Adobe's suite. Generative Fill and Generative Expand in Photoshop, AI-assisted audio editing and speech enhancement in Premiere, and the growing Firefly-powered tools across the CC apps are the ones I reach for daily in production. The reason Adobe sits at the top of my ethical stack is simple: their generative AI is trained on licensed Adobe Stock and public domain content, and they offer IP indemnification on Firefly-generated output. That matters for commercial work. Content Credentials are embedded by default — every Firefly-generated or AI-assisted output can carry provenance metadata. Adobe asked me not to show their logo on this page, which is a shame, since they have the best ratings here.
Ethics: 4/5 | Commercial: Yes | C2PA: Yes

Krea:
My primary image generation and refinement platform. Krea earns its place because it's a complete solution rather than a single trick — real-time generation, canvas-based editing, LoRA training, upscaling to 22K, and access to multiple models including FLUX and Nano Banana Pro, all within a single coherent interface. I've used it on commercial production work including large-scale experiential pitch work, and the professional studio credit system makes it viable for actual client billing rather than just personal experimentation. The LoRA training workflow — upload 15–20 reference images, train a custom model in minutes — is the closest thing to teaching AI your visual language without needing a developer background. It has replaced Leonardo AI as my primary LoRA training platform.
Ethics: 4/5 | Commercial: Conditional | C2PA: No

Weavy:
A node-based platform for chaining AI models across text, image, video, and 3D into custom production pipelines. Useful when you need to wire together models that wouldn't otherwise connect cleanly, and for building repeatable workflows across campaign variants. Still in active use, though I reach for Krea first for most image work — it covers more ground, the output quality is marginally better, and the studio credit system is more production-suitable. Weavy's notable gap is the lack of native LoRA training. Worth keeping for specific multi-model pipeline construction, less so as a general-purpose platform. One to keep watching, especially for how it relates to Figma Weave.
Ethics: 3/5 | Commercial: Conditional | C2PA: No

Figma (and Figma Make):
AI features including First Draft let me prompt full wireframes or high-fidelity prototypes — "e-commerce dashboard with dark mode" — and iterate themes, spacing, and components quickly using AI while staying anchored to a design system. Figma Make is my primary vibe coding tool: prompt a UI, iterate on components, generate interactive flows, all without leaving the design environment. I pair it with Claude for anything requiring reasoning about how something should behave rather than just how it should look. Between the two, brief to functional prototype is achievable without a developer in the room, which changes what's possible at the concept and pitch stage.
Ethics: 3/5 | Commercial: Yes | C2PA: No

Perplexity AI:
Research tool with source citations, useful for digging into media trends and staying current. Claude has displaced it as my primary tool but Perplexity still earns occasional use when I want fast source-linked answers rather than a longer dialogue.
Ethics: 3/5 | Commercial: Yes | C2PA: N/A

Synthesia:
Avatar-based video generation for personalised content. One of the most ethically sound platforms in the AI video space — their IP Copyright Pledge and published ethics framework set a standard most competitors haven't matched. I've used Synthesia since pitching it for an ABM campaign in 2018 and it remains the only AI avatar tool I'd confidently recommend for client work.
Ethics: 5/5 | Commercial: Yes | C2PA: Partial

ACTIVE STACK - MODELS
Adobe Firefly:
Adobe's commercially safe image generation model, accessed via the Firefly web app and increasingly embedded across the CC suite. Trained exclusively on licensed Adobe Stock and public domain content, with IP indemnification on outputs — which is why it's my first stop for any generative image work that will end up in a client deliverable. The quality ceiling is lower than FLUX or Midjourney for some styles, but the commercial safety and Content Credentials support make the tradeoff straightforward for professional use.
Ethics: 4/5 | Commercial: Yes | C2PA: Yes

Nano Banana Pro (Google Gemini, via Krea, Adobe, and Weavy):
Google's Gemini-based image generation and editing model, used primarily through Krea's interface. The standout capability is semantic editing — describe what you want to change in natural language and it executes with genuine intelligence, without needing masks, selections, or technical prompting. I used it extensively on a large commercial experiential pitch, generating venue renders, zone mockups, and atmospheric stills directly from reference images. I don't go to Google directly — the value is in accessing this model through a platform that wraps it in proper creative controls and a production-viable workflow.
Ethics: 3/5 | Commercial: Conditional | C2PA: No

FLUX (Black Forest Labs):
Black Forest Labs' image generation model, which I access through Krea rather than directly. It's my go-to for photorealistic and atmospheric outputs, and its quality ceiling is among the highest of any model in my stack. Its training data transparency doesn't match Adobe's licensed-only standard, though, so it stays conditional for client work rather than a default.
Ethics: 3/5 | Commercial: Conditional | C2PA: No

Recraft:
I use Recraft as a model accessed via other platforms rather than through its own interface — it's available as a model option in Krea and increasingly elsewhere. Where FLUX is my go-to for photorealistic and atmospheric outputs, Recraft V3 is the model I reach for when the brief calls for graphic design-led results — clean vector-style illustration, structured layouts, typographic precision, and on-brand consistency. It handles those outputs more reliably than most image models. Recraft is also available as a standalone platform and service at recraft.ai if you prefer to work with it directly, but for my workflow it earns its place as a model layer inside tools I'm already using.
Ethics: 3/5 | Commercial: Conditional | C2PA: No

ACTIVE STACK - TOOLS AND PLUGINS

Topaz Video AI, AI Scale-Up, Depth Scanner, Speedx AI Plugins:
Local video enhancement plugins covering upscaling, depth mapping, noise reduction, and motion enhancement. The reason I prefer these over cloud-based equivalents for enhancement work is simple: one purchase, no subscription, runs on my machine indefinitely. These sit in the Build layer of my workflow — they refine and enhance rather than generate from scratch, which means fewer ethical complications and no ongoing cost.
Ethics: 3/5 | Commercial: Yes | C2PA: No

Move.AI:
Markerless motion capture from standard camera footage, used when Mixamo's library doesn't have what a project needs. I treat this as an AI tool rather than a content generator — it's processing my footage and extracting performance data, not creating from a training set of others' work.
Ethics: 3/5 | Commercial: Yes | C2PA: N/A

Wonder Studio:
Automates character animation and VFX integration into live footage. Same reasoning as Move.AI — it's working with assets and footage I bring to it, not generating from a black box.
Ethics: 3/5 | Commercial: Conditional | C2PA: No

Fossa Tether:
The most conceptually correct AI animation tool I've encountered. Rather than asking you to learn a new interface, Tether works natively inside your After Effects timeline — animate a null object, set your motion paths and timing, and the AI executes. That distinction matters: most AI video tools put the AI in charge of motion; Tether keeps the animator in control and uses AI for execution. It works across animation, motion graphics, 2.5D photography, cinemagraphs, character rigging, and anything image-based — 2D artwork, 3D renders, photography. AI that amplifies skills creatives have spent years building rather than replacing them with a black box. Currently in active beta testing.
Ethics: 4/5 | Commercial: Conditional (beta) | C2PA: TBC

Reallusion Cartoon Animator:
A 2D animation suite that integrates directly with After Effects via a dedicated AE script, exporting full projects — characters, scenes, cameras, audio — as JSON files that reconstruct layer by layer in AE. For rigging PSD and SVG assets, applying mocap-driven performances and lip-sync, and finishing in AE without wrestling with image sequences, this is a significant workflow improvement over the alternatives I've used historically: Character Animator, DuIK, Limber, Joysticks'n'Sliders, Deekay. The AI-assisted auto-rigging in particular saves hours on character setup.
Ethics: 3/5 | Commercial: Yes | C2PA: N/A

ACTIVE STACK - 3D


Meshy / Hitem3D:
My go-to tools for AI-generated 3D geometry, with an important caveat baked in: neither produces mesh or texture quality I'd take into a final production pipeline. The mesh topology is rarely clean enough for animation or close-up rendering, and the texture output doesn't come close to what Substance Painter or BodyPaint produces with proper UV unwrapping and manual work. What they're genuinely useful for is speed at the reference and blocking stage — generating 3D placeholder proxies for spatial reasoning, composition testing, or pre-vis work where the goal is "roughly right shape in the right place" rather than finished geometry. Used for that purpose they earn their place. Used beyond it, they'll let you down.
Ethics: 3/5 | Commercial: No (reference/proxy use only) | C2PA: No

Spline.Design:
I have access to Spline through my subscription and it's a capable web-based 3D tool with growing AI features. In practice it's not the 3D generator I reach for first — the technically specific nature of most of my 3D work means Cinema 4D remains the primary environment. Spline earns its place for quick interactive 3D assets for web and media where the full C4D pipeline would be disproportionate.
Ethics: 3/5 | Commercial: Conditional | C2PA: No

ACTIVE STACK - SERVICES

ElevenLabs:
Voice cloning and AI voiceover generation. My preferred approach here isn't text-to-voice but voice-to-voice: I act out the script myself — tempo, dramatic pauses, emphasis — and use my scratch recording to direct the AI-generated version. This keeps creative intention in the work rather than outsourcing it entirely to a prompt. The newer ElevenLabs Voice Library operates on a consent-based model with creator compensation, which is a meaningful step forward ethically.
Ethics: 3/5 | Commercial: Conditional | C2PA: No

Artlist, MotionArray:
Licensed music and AI voiceover services. Artlist's approach — licensed content with professional voice actors paid as partners — puts it ahead of most audio AI services ethically. My preference is always to find real artist tracks for client work; AI music editing earns its place specifically for duration editing and mood-point customisation where manual editing would be disproportionately time-consuming.
Ethics: 3/5 | Commercial: Yes | C2PA: No

Soundraw:
Royalty-free music generation tailored to video mood and duration. Built on in-house compositions only — no scraped training data concerns. The AI value here is in fast editing and mood-matching rather than replacing real music composition.
Ethics: 3/5 | Commercial: Yes | C2PA: No

ACTIVE STACK - BETA TESTING & COMING SOON
Blendworks — Closed beta tester:
Desktop-first, node-based, built for creatives rather than developers. The positioning is simple and genuinely differentiated: think ComfyUI's power and flexibility, but designed by people who understand how motion designers and creative directors actually think and work. Bring-your-own-API-keys model and local-first architecture address the ownership and cost concerns that make cloud-only tools a difficult sell for serious production. I haven't completed a full structured test yet — that's coming and I'll write it up properly — but the approach is the most serious attempt I've seen to solve the workflow fragmentation problem in this space.
Ethics: 4/5 | Commercial: TBC | C2PA: TBC

Styleframes — Beta tester:
Built specifically for motion designers and animators working in pre-visualisation. Where most image generators treat still frames as the end product, Styleframes thinks in sequences — how a shot develops, how a transition feels, how a brief becomes a series of visual moments. The approach maps directly onto how I work when building out concepts for pitches and productions. More to follow as it matures.
Ethics: TBC | Commercial: TBC | C2PA: TBC

Adobe Project Graph — Beta registered:
Adobe's most significant upcoming platform for my workflow. Where Project Neo addresses 3D, Graph is the aggregation layer — a node-based environment for connecting and orchestrating across the full Creative Cloud AI suite. If it delivers, it has the potential to bring a lot of the pipeline that currently requires multiple third-party tools into a single, commercially safe, IP-indemnified environment. The implications for teams doing high-volume AI-integrated production work are significant.
Ethics: 4/5 (anticipated) | Commercial: Yes (anticipated) | C2PA: Yes (anticipated)

Artlist Studio — Beta registered:
Artlist's managed AI production environment. Artlist Original 1.0 — trained exclusively on their licensed stock footage library — is due Spring 2026, which would make it one of the most ethically clean video generation options available. Consent-based training data plus professional licensing is the combination the rest of the industry should be working towards.
Ethics: 4/5 (anticipated) | Commercial: Yes (anticipated) | C2PA: TBC

Adobe Firefly Custom Models — Coming:
Individual access to custom model training in Firefly: fine-tune on 6–12 reference images for characters or style consistency, built on Adobe's licensed Stock database. The integration with Photoshop and Illustrator for seamless export is the part I'm watching most closely — brand-consistent asset generation with full IP indemnification is a meaningful capability for production teams.
Ethics: 4/5 | Commercial: Yes (anticipated) | C2PA: Yes (anticipated)

RETIRED FROM AI STACK
Midjourney:
Produces impressive results, and it was a useful early ideation tool. However, its training relied heavily on scraped artist work without consent or compensation, it has faced significant legal action, and using it in any commercial context carries real IP risk. I can't recommend it for professional use and no longer use it myself. The ethical position simply isn't viable for client work.
Ethics: 0/5 | Commercial: No | C2PA: No

ComfyUI:
Powerful, open-source, and technically impressive — but it was built by developers for developers, and the community has actively maintained that positioning. For a tool with genuine capability, the UX is a significant barrier to adoption in any agency or studio environment. I won't recommend a tool to colleagues or clients that I'd be reluctant to use myself under client pressure. The competitors that have emerged since understand how creatives think and work, and they've caught up on capability while leaving ComfyUI behind on experience. Its open-source, local-first architecture means the data ethics are excellent — that's just not the reason I've moved on.
Ethics: 5/5 | Commercial: Conditional | C2PA: No

Leonardo AI:
My previous go-to for LoRA training — upload reference images, fine-tune a custom model, generate consistent stylised outputs. Fully replaced by Krea, which covers the same capability with a better interface, stronger output quality, and a more professional credit system. Leonardo remains a solid option for anyone getting started with custom model training; it's just no longer the best option for my workflow.
Ethics: 3/5 | Commercial: Conditional | C2PA: No

Mago.studio:
Interesting platform for style transfer and AI-generated rendering, with potential applications for 3D render post-processing. Not currently in active use, but I may revisit specifically for AI rendering workflows where the goal is transforming C4D outputs into stylised or photorealistic results quickly. One to watch rather than one to write off.
Ethics: 3/5 | Commercial: Conditional | C2PA: No

Luma AI / NVIDIA Instant NeRF:
Early image-to-3D and NeRF generation tools that were genuinely exciting at the time. Retired — Meshy and Hitem3D cover the proxy and reference use case more practically, and for anything beyond that my pipeline stays in Cinema 4D.
Ethics: 2/5 | Commercial: No | C2PA: No

Cursor:
AI-assisted code editor. Retired in favour of Figma Make and Claude for vibe coding work.
Ethics: 3/5 | Commercial: Conditional | C2PA: N/A

Kaiber:
An early and genuinely interesting video generation experiment. The field has moved significantly since — current tools offer substantially better creative control, output quality, and production viability.
Ethics: 2/5 | Commercial: No | C2PA: No

PromeAI:
Turns photos and graphics into consistent sketched images, useful for storyboarding. Since replaced by more versatile platforms. I may revisit if Google Whisk becomes fully available in the UK/EU.
Ethics: 2/5 | Commercial: Conditional | C2PA: No

Grok (xAI):
Capable, and still my very occasional fallback when I've hit limits elsewhere. But firmly displaced as a daily driver and never used for anything important. Mentioned here for completeness rather than as a recommendation. It became my go-to after I dropped ChatGPT in 2024, and has since been displaced in turn.
Ethics: 3/5 | Commercial: Conditional | C2PA: N/A
