I WAS VISITED BY AI SPIRITS PAST, PRESENT, AND FUTURE. HEED THIS WARNING OF MINE:
- candyandgrim

- Dec 9, 2025
- 16 min read
Updated: Dec 12, 2025

After years of text-prompt winter, hands-on tools are emerging. Here's what we lost, what we have now, and why it feels like Christmas has come early. 2026 might be the miracle we have been wishing for.
It was a cold December night when they came for me. Three spirits of AI creative tools, each bearing visions that would change how I see this industry forever.
I've spent 15+ years in motion graphics and 3D, watching tools come and go. But between 2022 and 2024, something darker happened. The industry discovered AI generation, fell in love with text prompts, and systematically stripped away every hands-on control that made creative work feel like... well, creative work.
But the spirits showed me something I'd missed: a pattern. A turning point. A renaissance hiding in plain sight.
This is what they revealed.

🕯️ the ghost of Christmas past: "as dead as a doornail"
The pattern we needed to see
The first spirit took me back through the graveyards of abandoned features. Not to mourn but to understand. Because every dead tool teaches us what the industry truly values when the marketing fades.
Runway Gen-2 multi motion brush (Nov 2023 - Mid 2024)
STATUS: removed when Gen-3 launched
What we had:
Up to 5 independent motion regions
Separate horizontal/vertical/proximity controls per region
Direct manipulation of WHERE and HOW things moved
What we learned: When Runway launched Gen-3, this vanished. The "smarter" model gave us better quality, but stripped away granular control. Pattern identified: advanced control is always first to be sacrificed for "intelligence."
Adobe neural filters (2019 - present)
STATUS: zombie, six years in beta
The promise:
Revolutionary AI-powered effects
"The future of Photoshop"
Community excitement, industry buzz
The reality:
Photo restoration randomly disabled since 2023
Community forums: "Have neural filters been abandoned?" (Jan 2023)
Still in beta after 6 years
No communication, no roadmap, no sunset announcement
What we learned: Launch with fanfare → beta forever → quiet deprecation is the Adobe playbook. If a feature stays in beta for years, it's already dead. They just haven't told you yet.
Project comic blast (Adobe MAX sneak)
STATUS: vanished, never released
The tease: Exciting MAX demonstration, crowd loved it, seemed production-ready.
The outcome: Complete silence. No beta, no mention, no explanation.
What we learned: MAX sneaks have a ~30% ship rate. They're not promises, they're experiments the public gets to see. Getting excited without understanding this pattern leads to heartbreak.
The meta-pattern
The ghost of Christmas past wasn't showing me failures, it was showing me the industry's priorities:
Text prompts scale better than UI (one input field vs. complex controls that require drawn-out development and UX testing)
"Smarter" models justify removing features ("You don't need motion regions anymore, just describe it!"), even when describing takes longer and produces the wrong results. And let's be fair: if you're using Runway, you work in video and motion, not copy/script writing
Beta purgatory is where tools go to die quietly (Adobe especially)
Hands-on control is expensive to develop and maintain (and the first thing cut), yet if you crack it, EVERYONE will copy your hard work, just ask ComfyUI
But here's what I learned: The period between 2022-2024 wasn't the new normal. It was text-prompt winter, a temporary ice age where the industry prioritized scale over craft. And like all winters... spring is around the corner.

🔥 the ghost of Christmas present: "god bless us, every one!"
The tools fighting back RIGHT NOW
The second spirit grabbed me by the collar: "Stop mourning the past! Look at what's ALIVE! Look at who's FIGHTING BACK!"
And suddenly I saw them, dozens of tools that understand something crucial: text prompts are the START of the creative process, not the destination.
Video motion control: the resurrection
KLING AI MOTION BRUSH ⭐ https://klingai.com
Draw motion paths for up to 6 elements simultaneously
Static brush to freeze areas
Auto-segmentation that actually works
Available NOW in v1.0 and v1.5 (not v1.6 yet)
Why it matters: Kling didn't just bring back motion brushes, they improved them. Multiple regions, better segmentation, intuitive controls. They saw what Runway abandoned and said "We'll do it better."
KAIBER MOTION BRUSH https://kaiber.ai
Brush sizes 1px-50px for precision control
Area-by-area animation
Audioreactive capabilities (motion synced to sound)
One-click segment selection
Why it matters: Kaiber added audio reactivity, connecting motion control to another input channel. Not just "move this," but "move this IN TIME with this sound."
PIKA LABS https://pika.art
Regional editing/modification (inpainting-style control)
Camera controls via parameters (pan 0-4, zoom 0-4, rotate 0-4)
Pikaframes for keyframe transitions (1-10 second clips)
Motion intensity sliders
Why it matters: Pika's giving us the camera controls that cinematographers understand. Not "describe camera movement" but "pan left 2.5, zoom in 3."
ADOBE FIREFLY VIDEO MODEL https://firefly.adobe.com
Keyframe system (set start/end frames explicitly)
Camera controls (angle, motion, zoom, shot type)
Composition reference (upload reference video, transfer structure)
Style presets with real-time adjustment
Why it matters: Adobe learned from neural filters. Firefly video shipped with controls from day one, not as an afterthought.
FOSSA TETHER ⭐⭐⭐ (the crown jewel) STATUS: private beta (Nov 2025) - Animated Company https://forms.animatedcompany.com/pvtbeta
This is the one that made me believe again.
After Effects plugin, works INSIDE your timeline
Control AI video generation with null objects
Animate the null, AI character/camera follows your motion paths
NO ComfyUI, NO node graphs, NO separate software
Editable motion paths and curves
Position, scale, rotation, 3D depth control
"Small teams completing in minutes what took large studios months"
Why it matters: Tether solves the integration problem. Every other AI video tool makes you leave your workspace, generate, import, hope it's right, repeat. Tether keeps you in the timeline, the place animators THINK. It's not bolting AI onto After Effects; it's making AI speak After Effects' native language.
The pattern: These aren't companies adding motion brushes as an afterthought. They're building entire products AROUND the principle that creatives need hands-on control.
3D/perspective control: the space revolution
PROJECT NEO (Adobe public beta - Feb 2025) STATUS: LIVE at https://projectneo.adobe.com
Web-based 3D for designers (not 3D specialists)
Illustrator-style familiar controls
Real-time lighting/shadow adjustment
Boolean operations, extrusion, beveling
SVG import/export to Illustrator
Scene to image: AI layout control within 3D space
Collaborative (share link, real-time viewing)
62,000+ beta users already
Why it matters: Neo isn't asking designers to learn Blender. It's bringing 3D to where designers already work, with tools that feel familiar. And "scene to image" is brilliant: use 3D for spatial control, then AI for style/texture. Best of both worlds.
PROJECT TURNTABLE (Adobe Illustrator beta 2024) STATUS: fully available in Illustrator (not gated)
Rotate 2D vector art in 3D space
Slider-based rotation control
AI generates missing angles/views
Coming to Photoshop next... please don't forget After Effects and Character Animator
Created by Adobe research scientist Zhiqin Chen
Why it matters: Take a flat logo, rotate it in space, AI fills in the perspective. This is spatial control meeting generative AI: you define the angle, AI handles the rendering.
HIGGSFIELD ANGLES https://higgsfield.ai
Interactive 3D cube for visual angle selection
Manual sliders: 5 rotation options (180° span), 3 zoom levels, 3 vertical angles
"Generate from all angles": automatic 12-perspective set
Maintains subject consistency across angles
Why it matters: Higgsfield understood that "generate a 3D view" is too vague. They built a UI where you SEE the angles as you select them. Spatial reasoning through direct manipulation.
QWEN CAMERA CONTROL (multiple implementations) https://openart.ai (has visual interface version)
Text prompts for camera positioning ("Rotate camera 45° left")
Visual interface versions emerging
Maintains subject consistency across moves
Why it matters: Even "text-based" camera control is getting visual interfaces. The industry is learning that describing camera moves in words is the HARD way.
Compositing: the integration masters
ADOBE HARMONIZE (Photoshop desktop/web/mobile) STATUS: fully released (Aug 2025)
From MAX sneak "Project Perfect Blend" to shipping product in 10 months. Adobe CAN move fast when they want to.
The workflow:
Import background scene
Place subject on new layer
Remove background (Photoshop's cloud-based removal)
Position and scale
Click harmonize
Choose from 3 variations (or regenerate)
What it does:
Auto-matches lighting, color, shadow
Regenerates the entire composite (not just adjustments)
Intelligently alters areas around subject to cast shadows INTO scene
Uses 1 Firefly credit per generation
The limitation (and it's honest):
Reduces resolution
Best for concept work and quick mockups
Not recommended for final production composites
Why it matters: Harmonize doesn't pretend to be magic. It's a speed tool: get 80% there in seconds, refine manually if needed. It acknowledges that AI is part of the workflow, not the whole workflow.
The controls:
Works on pixel layers only (clear constraints)
Toggle before/after view
Generate multiple variations
Still have full Photoshop toolset for refinement
Why THIS matters: It's AI that knows its place. Fast composite for direction, then hand off to traditional tools for final polish. Not trying to replace compositors, trying to speed them up.
Node-based systems: the power user path
COMFYUI ECOSYSTEM https://github.com/comfyanonymous/ComfyUI STATUS: open source, rapidly evolving
The reality: Still uber-geek territory. Intimidating UI. Steep learning curve.
But look at what's emerging:
ComfyUI_RealtimeNodes:
Map MIDI controllers to any workflow parameter
Gamepad input for live parameter control
Turn node workflows into performance instruments
ComfyUI-Interactive:
Button-based workflow selection
Interactive gates (human decision points in AI generation)
Brings UI/UX thinking to node systems
ControlNet systems:
Real-time strength adjustment
Multiple ControlNet chaining
Visual preview of control influence
Why it matters: ComfyUI is proving that open source can compete with corporate AI. Every new node and integration that makes ComfyUI easier to set up and use is a vote against text-prompt-only futures.
The trajectory:
Barrier: still requires technical knowledge
Opportunity: extensible, transparent, community-driven
Future: either gets its "capsules moment" (becomes accessible) OR remains the power-user tool while easier wrappers emerge
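For the curious, here's what that "uber-geek territory" actually looks like under the hood: a minimal sketch of a ComfyUI custom node. The class and parameter names here are invented for illustration, not taken from any shipping node pack; the conventions shown (an `INPUT_TYPES` classmethod, `RETURN_TYPES`, a `FUNCTION` entry point, and a `NODE_CLASS_MAPPINGS` registration dict) are the standard custom-node pattern ComfyUI scans for.

```python
# Illustrative sketch of a ComfyUI custom node.
# Names (MotionStrengthGate, "example/motion") are invented for this example.
# ComfyUI loads any class listed in NODE_CLASS_MAPPINGS from a custom_nodes
# package and renders its declared inputs as UI widgets.

class MotionStrengthGate:
    """Scales a motion-strength value, clamped to a safe range."""

    @classmethod
    def INPUT_TYPES(cls):
        # Declares the widgets ComfyUI draws for this node:
        # float sliders with default/min/max/step.
        return {
            "required": {
                "strength": ("FLOAT", {"default": 1.0, "min": 0.0,
                                       "max": 2.0, "step": 0.05}),
                "multiplier": ("FLOAT", {"default": 1.0, "min": 0.0,
                                         "max": 4.0, "step": 0.05}),
            }
        }

    RETURN_TYPES = ("FLOAT",)    # one float output socket
    FUNCTION = "apply"           # method ComfyUI calls on execution
    CATEGORY = "example/motion"  # where the node appears in the add-node menu

    def apply(self, strength, multiplier):
        # Outputs must be returned as a tuple matching RETURN_TYPES.
        value = max(0.0, min(2.0, strength * multiplier))
        return (value,)

# Registration hook ComfyUI scans for when loading custom nodes.
NODE_CLASS_MAPPINGS = {"MotionStrengthGate": MotionStrengthGate}
```

Because every parameter is declared this explicitly, external controllers (like the MIDI and gamepad mappings in ComfyUI_RealtimeNodes) can target any of them. The node system IS the API, which is exactly why the ecosystem is so hard to kill.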
The present pattern: three tribes forming
The ghost of Christmas present showed me that the tools aren't random, they're three distinct approaches:
TRIBE 1: motion control natives. Kling, Kaiber, Pika, Tether. Built around the principle that creative control = better output.
TRIBE 2: the integrators. Tether, Weavy, Canva Magic Studio, Harmonize, Neo, Turntable, Firefly video. Taking AI into existing professional workflows.
TRIBE 3: the open source maximalists. The ComfyUI ecosystem. Full transparency, infinite control, steep learning curve.
And here's the revelation: They're not competing. They're creating a spectrum of control where creatives can choose their complexity level based on the task.
Need quick concept? Harmonize. Need precise animation? Tether. Need absolute control of every parameter? ComfyUI.
The winter is ending because these tools understand: text prompts are for IDEATION. Hands-on controls are for EXECUTION.

👻 the ghost of Christmas future: "shadows of things that may be"
Three paths diverging
The final spirit showed me not ONE future, but THREE, each possible, each with champions, each with risks.
"These are shadows of things that MAY be, not things that WILL be," the spirit warned. "The future depends on which path the industry chooses and which path YOU support with your attention and money."
Path 1: the Figma invasion
FIGMA WEAVE (acquired Weavy - Oct 2025, ~$200M)
What it is:
Image, video, animation, motion design, VFX generation IN Figma
Node-based workflow (visual, modular)
Model-agnostic (choose best AI for each task)
Founded Tel Aviv 2024, $4M seed funding
The vision:
Generate/animate interface elements developers can immediately test
AI understands hierarchy, motion timing, component structure
Can train on design system for consistent brand outputs
Brings together leading AI models + professional editing tools
The implications:
Challenge to: Rive, Lottie, After Effects for web animation
Expansion: Figma moving beyond UI/product design into motion/video/VFX
Threat to Adobe: the AE → Lottie → web pipeline
Next likely: WebGL/3D integration
The promise: Universal creative canvas where design, motion, and code converge.
The risk: Figma REJECTED Adobe's $20B acquisition in December 2023 after 15 months of regulatory scrutiny. EU and UK regulators blocked the deal over antitrust concerns. Adobe paid Figma a $1 billion breakup fee. Figma remains fiercely independent and likely more motivated than ever to compete directly with Adobe after nearly being absorbed.
Will Weave remain truly "model-agnostic" or become a corporate acquisition target itself? Independence is powerful but expensive to maintain.
What this path means: If Figma Weave succeeds, Figma isn't just competing with Adobe, it's building the creative platform Adobe TRIED to buy and failed to acquire. The regulatory win proved that innovation beats consolidation. Sometimes.
Path 2: the Canva democratization
CANVA + AFFINITY (acquired March 2024, ~$380M-$1B)
The deal:
Canva acquired Serif (Affinity's parent)
Deal started via LinkedIn DM, closed in 7 weeks
All Affinity staff retained
The four pledges:
Perpetual licenses will continue alongside optional subscription
Double down on expanding Affinity products
Keep Affinity free for schools/nonprofits
Fair, transparent, affordable pricing
Affinity relaunch (2025):
Fully reimagined professional design app
Photo editing + vector design + page layout in one platform
NOW COMPLETELY FREE for everyone
New brand identity
The strategic vision: Professional designers create in Affinity → teams deploy in Canva
"Creative and brand management teams need professional tools to create and edit assets... Being able to quickly sync those assets to Canva for the wider organisation to easily use"
Integration targets:
Direct Affinity → Canva asset sync
Whiteboard integration likely
Enterprise workflow: create (Affinity) → distribute (Canva)
The promise: Professional-grade creative tools, accessible pricing, preserved perpetual licenses, direct Adobe challenger.
The skepticism (and it's earned):
Will perpetual licenses survive long-term pressure?
Subscription creep despite pledges?
Feature gating for "broader audience"?
History says: acquisitions always start with promises
What this path means: If Canva keeps its promises, professional creative tools become accessible to everyone without subscription extortion. If Canva breaks its promises, it's just another corporate bait-and-switch.
Path 3: ComfyUI forever
Open source as permanent alternative
Current state:
ComfyUI: open source node-based AI workflow system
Barrier: still uber-geek, intimidating UI
Community: rapidly building plugins, wrappers, abstractions
The trajectory:
Pessimistic: ComfyUI remains power-user-only, never gets its "capsules moment," stays niche
Optimistic: Community builds enough abstraction layers that "normal" creatives can access the power without the pain
Realistic: ComfyUI becomes the "Linux of AI creative tools," powers a lot of infrastructure, most people use friendlier wrappers built on top
Why this path matters:
Proves corporate AI isn't the only option
Transparent (you see exactly what's happening)
Extensible (if you need it, you can build it)
Can't be abandoned via corporate decision
The limitation: Community-driven means:
Fragmented documentation
Inconsistent UX
Requires technical literacy
No one to sue if it breaks
What this path means: Open source guarantees that SOMEONE will always maintain hands-on control tools, even if corporations abandon them. It's the insurance policy.
The three paths, visualized:
Figma path: nodes + collaboration → universal creative canvas
Canva path: perpetual licenses + accessibility → democratized pro tools
ComfyUI path: open source + transparency → power user permanence
The ghost showed me that these aren't mutually exclusive. We might get ALL THREE. But we might also get corporate consolidation that kills two of them.
The Adobe sneaks: reading the tea leaves
MAX 2024 announced 9 sneaks. Here's the scorecard:
✅ SHIPPED (3):
Project Turntable (Illustrator beta → full release)
Project Neo (public beta Feb 2025)
Project Perfect Blend → shipped as Harmonize
🔬 STILL EXPERIMENTAL (6):
Project Scenic - Build 3D scene layouts with copilot, generate 2D images with full object/camera control
Project Hi-Fi - Capture ANY screen area, use as reference for AI generation
Project In Motion - After Effects shape + text prompt → animated video
Project Clean Machine - Remove camera flashes/fireworks glare from video/photo
Project Remix A Lot - Sketch → finished design with style reference, adapts to any aspect ratio
Project Super Sonic - Click objects in video to generate sound effects OR voice-controlled timing
Historical pattern:
~30% make it to production
~50% become "beta forever"
~20% vanish silently
What to watch: Project In Motion (AE integration), Project Super Sonic (audio as control method), Project Scenic (3D scene building). These are interaction paradigms, not just features.
Why MAX sneaks matter: Adobe is testing UI patterns. Screen capture as reference, object clicking for sound, 3D scene building, these are the control methods of the future. Some will ship, some will vanish, but they reveal where Adobe thinks interaction is headed.
What name was engraved upon the tombstone?
The ghost of Christmas future showed me one last vision before vanishing, a tombstone in a desolate creative landscape. The inscription was weathered but legible:
"HERE LIES HUMAN CREATIVE CONTROL 2020-2027 Killed by convenience, regulation, and our own complacency"
"Spirit!" I cried. "Are these the shadows of things that WILL be, or things that MAY be?"
The ghost pointed to the date. 2027.
Not 2025. Not 2026. 2027.
The regulation storm is coming
The spirits showed me what few in the creative industries have noticed while we've been arguing about prompt engineering and AI ethics:
The regulatory hammer drops in 2026.
EU AI Act (full enforcement: August 2, 2026):
AI-generated content MUST be clearly labeled
Transparency obligations for generative AI
Copyright compliance mandatory for training data
High-impact models face strict risk assessments
California AI transparency act - SB 942 (effective: January 1, 2026):
Generative AI systems with 1M+ monthly users must provide AI detection tools
AI-generated content requires "clear and conspicuous" disclosure
$5,000 penalty per violation per day
California AB 2013 (effective: January 1, 2026):
Developers must publicly disclose training data summaries
Rights holders can request information on copyrighted materials used
30-day response requirement
California AB 412 (proposed for 2026):
Developers must document ALL copyrighted materials in training
Public mechanism for rights holders to verify usage
Each day of noncompliance = separate violation
Colorado AI act (effective: February 2026):
First comprehensive state AI legislation in US
Risk management requirements for high-risk systems
Algorithmic impact assessments mandatory
China's AI content labeling (effective: September 1, 2025):
ALL AI-generated content must be clearly labeled
Mandatory for online services creating/distributing AI content
Government-supervised compliance enforcement
What this means for interactive control tools
The ghost showed me two futures branching from 2026:
FUTURE A: tools with provenance survive
Tools that offer:
Clear human input tracking (Tether's null objects, Kling's motion paths)
Transparent attribution (which parts are AI, which parts are human control)
Audit trails (show how creative decisions were made)
Explicit control points (demonstrate human authorship/direction)
These tools have something crucial: They can prove human creative involvement.
When regulators ask "Is this AI-generated?" you can answer: "This was human-directed AI execution. Here's the timeline. Here's the null object animation. Here's the motion path I drew."
FUTURE B: black box tools get regulated to death
Tools that offer:
Text-only prompts (no provenance, no human input trail)
Pure generative output (can't prove what's human vs AI)
No audit capability (can't demonstrate creative decisions)
Opaque processes (regulatory nightmare)
These tools face existential threats:
Expensive compliance overhead
Content licensing investigations
Platform bans (California, EU)
User liability exposure
The tombstone's true warning
"Spirit, I understand now. The date on the tombstone, 2027, it's not when regulation kills creative control. It's when the industry CHOOSES automation over agency because compliance with interactive tools was 'too hard.'"
Here's the dark timeline:
2026: Regulations hit. Companies panic.
2026-2027: Industry calculates:
Text prompts = easier to label as "AI-generated"
Interactive controls = complex attribution requirements
Audit trails = expensive to implement
Black box = cheaper than transparency
2027: Boardroom decisions:
"Kill the motion brushes, too complicated to document"
"Strip the timeline integration, audit trail is expensive"
"Remove manual controls, just use prompts, easier to label"
"Let them type, don't let them touch, compliance is simpler"
The tombstone's inscription isn't a prediction. It's a warning.
If we're not LOUD about demanding tools with transparent, traceable, human-directed control, regulation will accidentally become the excuse that kills them.
Why 2026 is the inflection point
We're at the ONE MOMENT where:
Interactive tools exist and are improving (Tether, Kling, Harmonize, Neo)
Regulations aren't yet enforced (most hit Jan/Aug 2026)
Companies haven't decided which tools to build for compliance
THIS IS THE WINDOW.
If we support interactive control tools NOW (with money, attention, feedback, loud demands), companies will build compliance AROUND them, not INSTEAD of them.
If we stay quiet and just accept "easier" text-only tools, regulation becomes the perfect excuse to strip away every hands-on control we just got back.
The choice Scrooge faced, we face now
The spirits gave Scrooge a choice: change, or become the man on the tombstone.
They're giving us the same choice:
OPTION 1: fight for traceable creative control
Use tools like Tether, Kling, Harmonize NOW
Demand audit trails and provenance tracking
Support tools that show human creative decisions
Make "transparent AI collaboration" louder than "easier prompts"
Prove to regulators that human-directed AI is different from black-box generation
Result: Regulation distinguishes between "AI-generated" and "human-directed AI" because WE SHOWED THEM the difference. Interactive tools become the COMPLIANT choice.
OPTION 2: accept whatever's easiest or provides the best results no matter the cost
Just use text prompts (simpler!)
Don't demand interactive controls (who needs them?)
Let companies strip features for "compliance"
Stay quiet when motion brushes vanish again
Accept that "AI-generated" labels cover everything
Chase the newest model even if it removes the controls you need
Result: 2027 tombstone. Regulation becomes the excuse that kills creative agency. Companies say "We'd LOVE to give you controls, but compliance..." while knowing we never demanded it loud enough. And we'll have accepted it because the output looked 2% better, even though we lost 100% of the control.
What did I behold, engraved on the tombstone?
Not a death date. A choice date.
The tombstone shows 2027 because that's when the compliance deadline excuses finish rolling out. That's when companies will have made their tooling decisions. That's when the interactive controls we have NOW will either be:
A) Built upon, improved, and compliance-ready
OR
B) Quietly deprecated with "regulatory burden" as the explanation
The name on the tombstone isn't "AI creative tools."
It's "human creative control."
And whether that tombstone becomes reality depends on what we do in the next 12 months.
🎄 I will honour creative control in my heart
I woke from the visitation changed, not because the spirits showed me doom, but because they showed me CHOICE.
Christmas hasn't passed. 2026 hasn't arrived. The tombstone isn't inevitable.
The tools are here. The window is open. The regulations are coming but haven't landed.
This is our moment.
Text-prompt winter is ending. Interactive tools are emerging. Regulation could either PROTECT them (if we demand traceable creative control) or KILL them (if we accept whatever's easiest or produces the best results no matter the cost).
The ghost of Christmas future showed me 2027 on that tombstone because that's when compliance decisions become permanent.
We have 12 months to make interactive creative control so loud, so demanded, so valuable that companies build compliance AROUND it, not INSTEAD of it.
Use these tools. Demand these tools. Make interactive control LOUD.
Because 2026 brings regulation, and regulation brings corporate decisions, and corporate decisions become permanent infrastructure.
Tether is in private beta: https://forms.animatedcompany.com/pvtbeta - SIGN UP. Show demand exists.
Kling motion brush: https://klingai.com - USE IT. Make work with it. Post about it.
Harmonize: Already in Photoshop - PUSH IT to its limits. Request features.
Project Neo: https://projectneo.adobe.com - BUILD WORKFLOWS. Integrate it. Make it indispensable.
Make interactive control so LOUD that when companies design for 2026 compliance, they design systems that PRESERVE creative agency, not systems that use regulation as an excuse to strip it.
God bless us, every one.
Now go use those tools before regulation becomes the excuse to take them away.
Tool reference: what to use right now
Video motion control:
Kling AI (v1.0, v1.5): https://klingai.com - Motion brushes, up to 6 elements
Kaiber: https://kaiber.ai - Motion + audioreactivity
Pika Labs: https://pika.art - Camera parameters, keyframes
Adobe Firefly video: https://firefly.adobe.com - Keyframe system
FOSSA Tether: https://forms.animatedcompany.com/pvtbeta - Private beta signup
3D/spatial control:
Project Neo: https://projectneo.adobe.com (open beta)
Project Turntable: Illustrator (full release)
Higgsfield Angles: https://higgsfield.ai - Interactive 3D selection
Qwen camera control: https://openart.ai - OpenArt interface
Compositing:
Adobe Harmonize: Photoshop desktop/web/mobile (full release)
Power user:
ComfyUI: https://github.com/comfyanonymous/ComfyUI - Open source node system
ComfyUI_RealtimeNodes: MIDI/gamepad control
ComfyUI-Interactive: Button-based workflows
Regulation deadlines to watch:
January 1, 2026: California SB 942, AB 2013 (transparency, training data disclosure)
February 2026: Colorado AI act (comprehensive requirements)
August 2, 2026: EU AI act full enforcement (labeling, transparency, copyright)
September 1, 2025: China AI content labeling (already in effect)
The spirits came. They warned. They showed three paths.
Which future we get depends on which tools we support TODAY.
Heed the warning.



