𝗧𝗛𝗘 𝗠𝗜𝗫𝗔𝗠𝗢 𝗙𝗢𝗥 𝗧𝗛𝗘 𝗚𝗘𝗡-𝗔𝗜 𝗔𝗚𝗘
- candyandgrim

- Dec 7, 2025
- 2 min read

I'm an advocate for more creative control in gen-AI tools. Not node-based editing on infinite canvas boards. Wrong sort of control.
I want DIRECT manipulation. Visual thinking. Doing, not describing.
This is why I'm excited about Kinetix (https://www.kinetix.tech):
✅ Upload your images (style reference, character, and scene)
✅ Film your own action scene, or use their reference library
✅ Select camera movement from library of live previews
✅ Supplement with simple text prompts for best results
This is the future I want. Fewer hours setting up custom node boards. Less typing, more doing.
Let's face it: visual creatives don't generally aspire to write copy/scripts or code (even when the code is hiding behind nodes).
𝗟𝗲𝘀𝘀 𝘁𝗲𝘅𝘁 𝗽𝗿𝗼𝗺𝗽𝘁𝗶𝗻𝗴. 𝗠𝗼𝗿𝗲 𝗱𝗶𝗿𝗲𝗰𝘁 𝗰𝗼𝗻𝘁𝗿𝗼𝗹.
𝗪𝗵𝗮𝘁 𝗜'𝗱 𝗹𝗼𝘃𝗲 𝘁𝗼 𝘀𝗲𝗲 𝗻𝗲𝘅𝘁:
𝗦𝘁𝗮𝗻𝗱𝗮𝗹𝗼𝗻𝗲 𝗰𝗮𝗺𝗲𝗿𝗮 𝗰𝗼𝗻𝘁𝗿𝗼𝗹: I don't work with 2D, 3D, or live-action characters every day. But I DO need camera control constantly. Let me use just the camera controls on their own, without the character motion system. Camera path, target, movement preview. That's the tool I'd use daily. Think product shots, flythroughs, etc.
𝗗𝗶𝗿𝗲𝗰𝘁 𝗶𝗻𝘁𝗲𝗴𝗿𝗮𝘁𝗶𝗼𝗻: I want this IN Adobe Creative Cloud After Effects or Cinema 4D. Not export/import/hope. Native timeline integration like Tether is doing. Control the camera in my workspace where I'm already thinking.
𝗠𝗼𝘁𝗶𝗼𝗻 𝗰𝗹𝗶𝗽 𝗯𝗹𝗲𝗻𝗱𝗶𝗻𝗴 (𝗹𝗶𝗸𝗲 Maxon 𝗖𝗶𝗻𝗲𝗺𝗮 𝟰𝗗'𝘀 𝗠𝗼𝘁𝗶𝗼𝗻 𝗖𝗹𝗶𝗽 𝗦𝘆𝘀𝘁𝗲𝗺): Merge different mocap sequences together. Some action shots aren't safe to film at home (jumping out a window, falling three stories).
𝗦𝗼𝗹𝘂𝘁𝗶𝗼𝗻: convert video to AI mocap → timeline editor to set action sequences with time and position controls → blend multiple captures into one performance using motion clip blending. Chaining sequential clips this way would also let you extend longer runs.
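To make the blending step concrete, here's a minimal sketch of what motion clip blending means under the hood: crossfade the joint channels of two captures over an overlap window. The data layout and function names are illustrative assumptions, not Kinetix's (or Cinema 4D's) actual API.

```python
# Hypothetical sketch: crossfade two mocap clips over an overlap window.
# A "clip" here is just a list of frames; each frame is a list of joint channels.

def blend_clips(clip_a, clip_b, overlap):
    """Concatenate clip_a and clip_b, linearly crossfading joint values
    across the last `overlap` frames of A and the first `overlap` of B."""
    blended = clip_a[:-overlap]
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)  # blend weight ramps toward clip B
        frame_a = clip_a[len(clip_a) - overlap + i]
        frame_b = clip_b[i]
        blended.append([(1 - w) * a + w * b for a, b in zip(frame_a, frame_b)])
    blended.extend(clip_b[overlap:])
    return blended

# Two toy clips, 6 frames each, 2 joint channels per frame
run = [[float(f), 0.0] for f in range(6)]
jump = [[5.0 + f, 1.0] for f in range(6)]
merged = blend_clips(run, jump, overlap=3)
print(len(merged))  # 9 frames: 3 from A + 3 blended + 3 from B
```

Real tools would slerp joint rotations and handle root motion, but the timeline logic (trim, overlap, weighted mix) is the same idea.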
𝗖𝘂𝘀𝘁𝗼𝗺 𝗰𝗮𝗺𝗲𝗿𝗮 𝗲𝗱𝗶𝘁𝗼𝗿: The library is great, but what can I say, I am greedy. A-to-B tween controls. Camera path and target rig. Advanced settings: field of view, f-stop, focus distance, etc.
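The A-to-B tween I'm asking for is simple to describe: interpolate camera position, look-at target, and lens settings between two keyframes with an easing curve. Below is a hedged sketch of that idea; the data shapes and names are my own assumptions, not any tool's real API.

```python
# Hypothetical A-to-B camera tween: interpolate position, look-at target,
# and field of view between two keyframes with ease-in-out timing.

def ease_in_out(t):
    """Smoothstep easing: ramps 0 -> 1 with a gentle start and stop."""
    return t * t * (3 - 2 * t)

def lerp(a, b, t):
    """Componentwise linear interpolation between two xyz tuples."""
    return tuple((1 - t) * x + t * y for x, y in zip(a, b))

def tween_camera(cam_a, cam_b, frames):
    """Yield per-frame camera states from keyframe A to keyframe B.
    Each keyframe is a dict with 'pos', 'target' (xyz) and 'fov' (degrees)."""
    for f in range(frames):
        t = ease_in_out(f / (frames - 1))
        yield {
            "pos": lerp(cam_a["pos"], cam_b["pos"], t),
            "target": lerp(cam_a["target"], cam_b["target"], t),
            "fov": (1 - t) * cam_a["fov"] + t * cam_b["fov"],
        }

a = {"pos": (0.0, 2.0, -10.0), "target": (0.0, 1.0, 0.0), "fov": 50.0}
b = {"pos": (5.0, 3.0, -2.0), "target": (0.0, 1.5, 0.0), "fov": 35.0}
path = list(tween_camera(a, b, frames=5))
print(path[0]["fov"], path[-1]["fov"])  # 50.0 35.0
```

F-stop and focus distance would tween the same way as FOV: one scalar, one easing curve, exposed as sliders rather than prompts.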
𝗠𝘂𝗹𝘁𝗶-𝗰𝗵𝗮𝗿𝗮𝗰𝘁𝗲𝗿 𝘀𝘂𝗽𝗽𝗼𝗿𝘁: Currently single-character only (unless Kinetix partners with Move.ai). Multiple characters means creating the same sequence multiple times, which gets complicated fast.
But even with these missing features, Kinetix gets the fundamentals RIGHT:
𝗦𝗵𝗼𝘄 𝘄𝗵𝗮𝘁 𝘆𝗼𝘂 𝘄𝗮𝗻𝘁. 𝗖𝗼𝗻𝘁𝗿𝗼𝗹 𝗵𝗼𝘄 𝗶𝘁'𝘀 𝗳𝗶𝗹𝗺𝗲𝗱. 𝗟𝗲𝘁 𝗔𝗜 𝗵𝗮𝗻𝗱𝗹𝗲 𝘁𝗵𝗲 𝗿𝗲𝗻𝗱𝗲𝗿𝗶𝗻𝗴.
That's the interactive control future we need.
Not "describe this perfectly in text."
Not "spend 3 hours wiring nodes."
𝗔𝗖𝗧. 𝗗𝗜𝗥𝗘𝗖𝗧. 𝗖𝗥𝗘𝗔𝗧𝗘.