
ARE YOU BORED OF BOARDS? An à la 'node' menu of creative pipelines & why AI's copycat canvases feel like coming home

  • Writer: candyandgrim
  • Nov 18, 2025
  • 7 min read

Originally there was ComfyUI, comfortable with its niche and waaaaay ahead of its time. Now? Everyone's got a canvas. October 2024 saw an absolute deluge of node-based workflow tools hit the AI video scene. Flora, Freepik Spaces, Krea Nodes, ImagineArt Workflows, Runway Workflows – suddenly everyone had their own infinite canvas with nodes and pipelines.

Erik Knobl wrote a brilliant piece asking whether this new workflow is actually better. His answer? Yes, emphatically yes.

My answer? Yes, but there is more...


I've seen this film before

I've been working with node-based systems in MAXON Cinema 4D's XPRESSO for years. (I won't tell you exactly how many years – it would make me sound ancient.) So when these AI boards started appearing, I didn't feel lost or overwhelmed. I felt at home.

XPRESSO taught me something crucial that the current wave of AI canvas evangelists seems to have forgotten: connecting A to B is just the beginning. The real power lives in what happens between those connections.

Custom sliders. Data fields. Calculations. Combination logic. Decision trees.

Like ChatGPT cementing OpenAI's place in people's minds, being first in a niche matters. ComfyUI wasn't just first – it remains absolute open-source AI heaven for developers. The complexity that makes it a nightmare to set up for less-techy creative users is exactly what gives it unmatched power for those who master it. Then came Flora and Weavy (now Figma Weave), bringing user-friendly interfaces to node workflows. Then October 2024 happened, and suddenly everyone had some sort of node board editor.

Why not? It's a genuinely useful tool. Not the quickest system to set up, admittedly, but presets – whether off-the-shelf or custom templates you've built yourself – help enormously.


Control is coming back to creatives

But node-based canvases are just one part of a much bigger trend: AI platforms are finally putting creative control back in our hands.

Look at what's happening across the ecosystem:


Luma's Ray3 lets you draw annotations directly on frames to control motion paths, camera movements, and character blocking. No more hoping the AI guesses what you meant – you literally draw it.


Tether by Animated Company brings directable AI animation into After Effects using null controls. If you've ever animated with nulls, you know exactly how powerful this is.


Stability AI's Stable Virtual Camera transforms 2D images into immersive 3D videos with full camera control and realistic depth.


NVIDIA's GEN3C delivers 3D-informed video generation with precise camera control using point cloud caching.


Runway's Workflows now include Gemini Flash 2.5 nodes that don't just connect outputs to inputs – they process the data in between. They interpret, refine, and enhance prompts automatically. That's computational nodes, not just connectors.
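The distinction between a connector and a computational node is easiest to see in code. Here's a toy sketch (not any platform's actual API) of a node that refines the data flowing through it rather than simply passing it along:

```python
# Toy sketch of connector vs computational node.
# The refinement list and function names are illustrative assumptions,
# not Runway's or Gemini's real behaviour.

def connector(prompt: str) -> str:
    """A plain connector: output is identical to input."""
    return prompt

def computational_node(prompt: str) -> str:
    """A node that processes the data in between, e.g. enforcing
    a house style before the prompt reaches the next model."""
    refinements = ["cinematic lighting", "35mm", "shallow depth of field"]
    missing = [r for r in refinements if r not in prompt]
    return ", ".join([prompt] + missing)

print(computational_node("a fox running through snow"))
# a fox running through snow, cinematic lighting, 35mm, shallow depth of field
```

The connector is what most canvases offer today; the computational node is where the real leverage lives.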


Then there are the Adobe MAX 2025 Sneaks – those experimental tools that may appear any time between now and 18 months from now:


  • Project Motion Map: animate static artwork with text prompts instead of timelines

  • Project Turn Style: edit 2D objects as if they're 3D, maintaining texture and lighting

  • Project Light Touch: reshape lighting after capture, turning day to night without reshoots

  • Project Frame Forward: make one change in one frame, apply it automatically across all subsequent frames


Add to that the explosion of AI motion capture tools – Autodesk Flow Studio (formerly Wonder Dynamics), Move.ai, DeepMotion, Rokoko Vision, Plask Motion – and you've got a clear pattern.

AI is maturing. The "type a prompt and pray" era is ending. The "direct your vision with precision" era has begun.


But here's what everyone's missing

The conversation about node-based canvases has been stuck on a binary debate: ComfyUI's power versus Flora's accessibility. Advanced versus approachable.

That's the wrong framework entirely.

Because here's what XPRESSO taught me that nobody in the AI canvas space seems to have learnt: you don't need two tiers. You need three.

And here's the uncomfortable truth: how many Cinema 4D users really used XPRESSO in depth? Maybe 5-10%? Most studios had one or two XPRESSO wizards, and everyone else just used pre-built setups they didn't understand.

Sound familiar?

Let me show you what the three-tier future actually looks like – and we don't need to look far for proof it works. After Effects already cracked this with Motion Graphics Templates.



The three-tier future: lessons from MOGRTs


TIER 1: THE BUILDERS (Expression Writers)

These are the technical directors, pipeline architects, the people who write expressions and build complex rigs. In Cinema 4D, they were the XPRESSO masters. In After Effects, they're the expression wizards.

What they need:


  • Maximum flexibility and no guardrails

  • Custom functions, calculations, data manipulation

  • Ability to build custom nodes and tools

  • Documentation, GitHub repos, developer Discord


Tools serving them well: ComfyUI, advanced Runway Workflows, Luvian

Current status: ✅ Well-served

Market size: 5-10% of users


TIER 2: THE PACKAGERS (MOGRT Builders)

These are motion designers, editors, creative directors. They're not writing expressions from scratch, but they understand how to use them. In C4D, they stuck to MoGraph. In After Effects, they build Motion Graphics Templates.

What they need:


  • Powerful but approachable interfaces

  • Pre-built nodes with customisation options

  • Smart defaults that "just work"

  • Ability to save and share templates

  • The computational power without the programming


Here's where it gets interesting: Remember XPRESSO's User Data? You could connect your complex node setup to simple sliders and controls, then hide the entire XPRESSO editor. Other users could manipulate your creation without ever seeing the complexity underneath.

Motion Graphics Templates perfected this. An After Effects wizard builds a complex .aep project with expressions, controllers, and effects. Then they package it as a .mogrt file. Clean. Contained. Controlled.

This is what's missing from current AI canvases. Flora users can build workflows, but they can't easily package them into simplified UIs for others to use.

Imagine:


  • Building a complex character animation workflow once

  • Exposing only 5-6 key controls (outfit colour, pose, background style)

  • Sharing it as a simple form interface

  • Anyone can use it without touching nodes


Invoke is the only platform that's partially cracked this with their Form Builder and Linear UI tools. But even they haven't gone far enough.
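The User Data / .mogrt pattern described above boils down to a simple idea: a complex workflow hidden behind a handful of exposed controls. A minimal sketch, using the hypothetical field names from the example (outfit colour, pose, background style) rather than any real platform's API:

```python
# Hypothetical sketch of the "package the complexity" pattern.
# The field names mirror the example in the article and are
# illustrative, not a real tool's parameters.

from dataclasses import dataclass

@dataclass
class CharacterTemplate:
    # The only controls a Tier 3 user ever sees:
    outfit_colour: str = "red"
    pose: str = "standing"
    background_style: str = "studio"

    def build_prompt(self) -> str:
        # Everything below is the hidden "node graph" the builder
        # wired up once; the end user never touches it.
        return (
            f"full-body character, {self.pose} pose, "
            f"wearing a {self.outfit_colour} outfit, "
            f"{self.background_style} background, consistent identity"
        )

tpl = CharacterTemplate(outfit_colour="teal", pose="running")
print(tpl.build_prompt())
```

Three fields in, one fully specified workflow out: that's the abstraction layer current canvases are missing.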

Tools attempting this: Flora, Freepik Spaces, Runway Workflows, Luvian, Invoke

Current status: 🟡 Increasingly well-served, but missing the abstraction layer

Market size: 30-40% of users


TIER 3: THE USERS (Video Editors Who Never Open After Effects)

These are marketing teams, social media managers, brand coordinators. In the MOGRT world, they're the Premiere Pro editors who customise templates without ever opening After Effects. They change colours, swap text, adjust timing – all through a simple panel.

They don't know what an expression is. They don't need to.

What they need:


  • "Change this text and export 47 ad ratios" simplicity

  • Template-driven, zero learning curve

  • Mass automation and batch processing

  • Absolutely NO exposure to nodes, pipelines, or prompts

  • Brand asset management built in


The market reality: The marketing manager who needs 47 variations of an ad for different platforms doesn't want to learn node-based workflows. They want:


  1. Upload brand assets

  2. Type the new headline

  3. Click "Export All Sizes"

  4. Download and deploy
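The four steps above are, from a software point of view, one template fanned out over a table of platform sizes. A minimal sketch, with hypothetical template names and sizes (the real "47 ratios" would just be a longer table):

```python
# Hypothetical sketch of "Export All Sizes": one headline change,
# many render jobs. Template names, sizes, and the job structure
# are illustrative assumptions, not a real product's API.

PLATFORM_SIZES = {
    "instagram_square": (1080, 1080),
    "instagram_story": (1080, 1920),
    "youtube_thumbnail": (1280, 720),
    "display_banner": (728, 90),
}

def export_all_sizes(template: str, headline: str) -> list[dict]:
    """Describe one render job per platform size."""
    jobs = []
    for name, (w, h) in PLATFORM_SIZES.items():
        # A real tool would re-layout the template around the new
        # headline here; we just describe each render job.
        jobs.append({"file": f"{template}_{name}_{w}x{h}.mp4",
                     "headline": headline, "size": (w, h)})
    return jobs

jobs = export_all_sizes("spring_sale", "50% off this week")
print(jobs[0]["file"])  # spring_sale_instagram_square_1080x1080.mp4
```

The marketing manager sees one button; the loop and the table stay invisible.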


This is the .mogrt moment for AI. The marketing team doesn't need to understand the wizardry that created the template. They just need the end result – fast, reliable, and repeatable.

Tools serving them: Nobody. Not a single one.

Current status: ❌ Completely unserved

Market size: 50-60% of users


Growing up means taking onboarding seriously

We've all had our fun playing with nodes. Now it's time for professionalism.

If a platform wants to be taken seriously, it needs three distinct onboarding pathways:

Tier 1 Onboarding:


  • Comprehensive API documentation

  • GitHub examples and community nodes

  • Developer forums and Discord

  • Assume high technical literacy


Tier 2 Onboarding:


  • Interactive product tours

  • Template library with video walkthroughs

  • "Build Your First Workflow" tutorials

  • Regular webinars and masterclasses

  • Certification programmes


Tier 3 Onboarding:


  • None needed – it should be self-explanatory

  • Contextual tooltips only

  • Pre-populated examples

  • 1-click deploy templates

  • Phone support (yes, really – because this is marketing teams)


The platforms that win will be the ones that build all three tiers and make it easy for Tier 2 to create packaged solutions for Tier 3.


The question we should be asking

Cinema 4D had XPRESSO (Tier 1) and MoGraph (Tier 2), but they never built the "export this as a simple UI" bridge properly. Result? Most studios had one or two XPRESSO wizards, and everyone else just used pre-built setups they didn't understand.

After Effects got it right with Motion Graphics Templates. The .aep file (Tier 2) becomes a .mogrt (Tier 3). The Premiere editor gets exactly what they need – no more, no less.

The question isn't whether node-based canvases are the future. They absolutely are.

The question isn't even whether they're better than the old workflow. They demonstrably are.

The real question is: will AI canvas platforms learn from both XPRESSO's mistakes and MOGRT's success?

Because the real revolution won't be when every motion designer is using node workflows.

It'll be when the marketing manager in Tier 3 is using AI canvas outputs without ever knowing they exist – just like how thousands of editors use Motion Graphics Templates daily without ever opening After Effects.

That's when we'll know these tools have grown up.


The tools pushing control forward

If you're exploring these new control paradigms, here are the platforms worth watching:

Node-Based Workflow Platforms:


  • ComfyUI

  • Flora

  • Freepik Spaces

  • Krea Nodes

  • ImagineArt Workflows

  • Runway Workflows

  • Figma Weave (formerly Weavy)

  • Invoke


Annotation & Drawing Controls:


  • Luma Ray3

  • Tether by Animated Company


Camera Control Systems:


  • Stability AI (Stable Virtual Camera)

  • NVIDIA (GEN3C)

  • Runway Gen-3

  • Qwen Image Edit


3D & Motion Control:


  • Autodesk Flow Studio (formerly Wonder Dynamics)

  • Move.ai

  • DeepMotion

  • Rokoko Vision

  • Plask Motion


Adobe MAX 2025 Sneaks:


  • Project Motion Map

  • Project Turn Style

  • Project Light Touch

  • Project Frame Forward


Node-based canvases aren't a fad. They're the logical next step for serious AI video creation.


But the winners won't be the ones with the most nodes, the prettiest interface, or even the best AI models.


The winners will be the ones who understand that different users need different experiences – and that the most important tier is the one nobody's building for yet.


Are you bored of boards? You shouldn't be. But you should be asking harder questions about who these tools are actually for.

