3D Community News | Tuesday, 24 February 2026
Nuno Silva - Which AI Video Model Actually Wins?

AI video models are evolving quickly, but which ones actually hold up in real production settings? In his most recent comparison, Nuno Silva tested all of the major AI video models using identical prompts and progressively more complex scenes. After weeks of testing and hundreds of credits, the verdict is unmistakable: there is no overall winner, only specialists for particular jobs. Let’s break down the results!

The Testing Setup: Real Renders, Real Production Scenarios.

Instead of abstract prompts, Nuno used architectural renders—exactly the type of images archviz artists deal with on a daily basis. To give a clear, useful takeaway, each model was evaluated under identical conditions and given a score between 1 and 5.

Higgsfield was used for all tests, enabling direct comparison between several models within a single interface.

The models tested included:

  • Kling 3.0
  • Google Veo 3.1
  • Seedance 1.5 Pro
  • Minimax 2.3 / Hailuo 02
  • Sora Pro

Each round increased in difficulty, from simple camera moves to physics-heavy human interaction and complex start/end frame interpolation.


Round 1 – The Basic Dolly In (Camera Movement Test).

Scenario: A static home office render | Prompt: Smooth camera dolly in, 5 seconds, 1080p.

This is the most fundamental animation in architectural visualization. If a model can’t handle this cleanly, it’s a red flag.

Results:

  • Kling 3.0 – Flawless execution. Smooth movement, no artifacts. ★★★★★
  • Kling 2.1 – Camera started rotating mid-animation, visible shelf glitches. ★★★
  • Google Veo 3.1 – Completely hallucinated a different interior. Sharp image, wrong scene. ★★
  • Seedance 1.5 Pro – Correct movement but acceleration and hallucination toward the end. ★★★
  • Sora Pro – Most expensive model, yet visible glitches and poor prompt adherence. ★

Winner for clean camera movement: Kling 3.0.


Round 2 – Human Physics & Interaction.

Scenario: A woman sits with her legs crossed, drinking coffee; she then stands up and closes her laptop. This test evaluates leg physics, collision handling, object interaction, and natural movement continuity.

Results:

  • Kling 3.0 – Foot clipping through leg; incomplete sequence. ★★★
  • Kling 2.5 Turbo – Similar clipping but strong overall performance for a faster/cheaper model. ★★★★
  • Google Veo 3.1 – Foot stayed grounded; even chair cushion reacted. Minor laptop closing issue. ★★★★
  • Seedance – Coffee drops mid-air, glitches during laptop closing. ★
  • Minimax 2.3 – Most natural foot movement, no ghost clipping. Minor smoke freeze issue. ★★★★

Standout for physics: Minimax 2.3 (with Google Veo close behind).


Round 3 – Natural Elements (Wind Simulation).

Prompt: Strong wind breaking through the leaves.

Wind is surprisingly difficult for AI. It must feel fluid and not like an invisible object colliding with the scene.

Results:

  • Kling 3.0 & Omni – Wind looks like something hitting the tree rather than passing through it. ★★★
  • Google Veo 3.1 – Natural wind behavior. Best interpretation. ★★★★★
  • Minimax 2.3 – Gradual build-up of wind intensity, highly realistic. ★★★★★
  • Sora Pro – Stop-motion feel, minimal fluidity. ★
  • Seedance – Camera drift, unnatural movement. ★

Winners for natural elements: Google Veo 3.1 and Minimax.


Round 4 – Start & End Frame Interpolation (Advanced Cinematic Test).

Scenario: Camera orbits a woman in a leather armchair while she flips through a book.
Models tested: Kling 3.0, Kling Omni, Google Veo, Seedance Pro, Minimax Hailuo 02

This was one of the hardest AI tasks: it required a precise camera orbit, page flipping, consistency with the defined start and end frames, and background animation (the fireplace).

Results:

  • Kling 3.0 – Good orbit, weak page flipping, static fireplace. ★★★
  • Google Veo 3.1 – Sharpest image, active fireplace, minor finger glitching. ★★★★
  • Kling Omni – Strong fireplace + page flipping, but deviated from start/end frames. ★★★★
  • Seedance Pro 1.5 – Most stable shot, few glitches. ★★★★
  • Minimax Hailuo 02 – Significant page glitches, inactive fireplace. ★★

Most balanced cinematic output: Google Veo 3.1 and Kling Omni.


The Final Verdict.

After extensive testing, the takeaway is refreshingly honest: "There is no single AI video model that dominates every scenario."

Instead:

  • Kling 3.0 → Best for clean camera movement
  • Google Veo 3.1 → Best for natural elements like wind
  • Minimax → Strongest for physics-driven character motion
  • Sora Pro → Surprisingly underwhelming despite its price point

For architectural visualization professionals, this means strategy matters. Choosing the right model depends entirely on the type of motion you need. AI video is not yet a “one-click solution.” But in the right context—and with careful prompt engineering—it can already transform static renders into compelling animated sequences.

The real win? Understanding which tool fits which job.


Where AI Still Struggles (And Why That Matters).

Nuno is transparent about AI’s current limitations. Physics-based furniture movement is still inconsistent, particularly when compared with more conventional approaches like Lumion phasing or 3ds Max particle systems.

Classic programs like Lumion and 3ds Max still provide more control for intricate simulations. However, AI methods significantly shorten production times for quick concept animations and design exploration.


A Glimpse into the Future of ArchViz Workflows.

What makes this breakdown so valuable is its realism. When applied carefully, AI is positioned as a potent accelerator rather than a substitute for conventional archviz tools.

By combining Kling for animation, Nano Banana Pro for image consistency, and node-based logic inside Fuser Studio, Nuno Silva demonstrates methods that are not only creative but also repeatable, scalable, and financially viable.

🎥 Watch Nuno's full video for a deep explanation of which AI video model actually wins:


About Nuno Silva.

Nuno Silva is a Portugal-based 3D artist and educator specializing in architectural visualization. With more than a decade of experience creating renders for international clients, he is known for combining technical clarity with cinematic storytelling. Through his online courses, tutorials, and carefully structured workflows, he helps thousands of artists develop professional-level rendering skills in Lumion and other visualization tools. His teaching style focuses on efficiency, realism, and creativity, allowing beginners and advanced users alike to elevate their architectural presentations.


Start Rendering with Rebusfarm today