Generative over pre-rendered: when real-time content wins

Pre-rendered video has been the default for immersive installations for twenty years. It's also increasingly the wrong default. Here's when real-time generative content is the better answer and how to tell the difference.

[Image: projection dome at night with generative visuals]

The default assumption for an immersive installation is pre-rendered video. You storyboard the show, render the content, ship it to the venue, loop it. That workflow made sense for a long time - render farms were cheap, playback hardware was dumb, and generative tools were niche. None of those conditions still hold, and the default is overdue for a rethink.

This is a post about when real-time generative content beats pre-rendered video, when it doesn't, and how producers should think about the choice.

The three things pre-rendered can't do

Pre-rendered video is deterministic. That's its strength for cinematic work and its weakness for installations. An installation that's going to run for three months has at least three problems a pre-rendered loop can't solve.

  • Repetition. Visitors who come twice see the same show. Over an eight-hour day, a 10-minute loop plays 48 times - the staff alone watch it to exhaustion.
  • Responsiveness. The content can't know anything about the room it's playing in - the crowd, the time of day, the music that happens to be playing, the weather outside.
  • Size. A 4K 10-minute loop is gigabytes. Twenty of them is a logistics problem. A generative patch is a few megabytes.
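The size gap is easy to estimate from bitrate alone. A minimal back-of-envelope sketch, assuming a bitrate of roughly 700 Mbit/s - in the ballpark of ProRes 422 HQ at UHD/30; actual figures vary by codec and frame rate:

```python
# Rough size of a pre-rendered 4K loop. The bitrate is an assumption
# (~700 Mbit/s, plausible for ProRes 422 HQ at UHD/30), not a spec.
BITRATE_MBIT_S = 700
LOOP_MINUTES = 10

loop_gb = BITRATE_MBIT_S * LOOP_MINUTES * 60 / 8 / 1000  # Mbit -> MByte -> GB
print(f"one loop:     {loop_gb:.1f} GB")          # ~52.5 GB
print(f"twenty loops: {loop_gb * 20 / 1000:.2f} TB")
```

Twenty loops at that rate is over a terabyte to ship, verify, and keep in sync across venues; a generative patch of a few megabytes travels as an email attachment.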

For a cinema, none of these matter. For a dome, a lobby installation, or a permanent museum exhibit, they matter a lot.

What "real-time" actually means

A useful working definition: the content is generated on the display machine at performance time, at target frame rate, using math that responds to live inputs. Those inputs can be any mix of time, audio, sensors, cameras, or show-control cues.
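The definition boils down to a per-frame function of time and live inputs. A minimal sketch - every name and mapping here is illustrative, not a real engine API:

```python
import math

def frame_params(t, audio_level, crowd_count):
    """Compute per-frame visual parameters from live inputs.

    t: show time in seconds; audio_level: 0..1 from a mic;
    crowd_count: from a sensor or camera. All hypothetical inputs.
    """
    pulse = 0.5 + 0.5 * math.sin(t * 2.0)     # free-running motion from time
    brightness = min(1.0, 0.3 + audio_level)  # louder room -> brighter image
    density = 10 + 2 * crowd_count            # more people -> busier field
    return {"pulse": pulse, "brightness": brightness, "density": density}

# Called once per frame by the render loop. Run the same code on two
# different evenings and the show differs, because the inputs differ.
```

In a real stack this function would live in a shader or a TouchDesigner network rather than Python, but the shape is the same: inputs in, parameters out, every frame.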

Practical stacks for real-time generative work in 2026: TouchDesigner (the most common for permanent installations), Notch (the most common for live events), Unity or Unreal with custom shaders (for anything that wants game-engine geometry), or a bespoke WebGL/Three.js build (for anything browser-delivered, like kiosks). All four compile to GPU shaders underneath; the wrapper is a matter of team skill and delivery target.

When pre-rendered is still the right choice

  • Hero cinematics where every frame needs to be art-directed - a 30-second film that opens the show with specific camera moves, lighting, character performance. Render it.
  • Content with heavy geometry that's more expensive to generate than to stream - photo-real humans, complex cloth simulations, fluid sims that would cost minutes per frame.
  • Any situation where the venue's playback hardware isn't capable of real-time at target resolution. A 12K dome driven by a mid-range laptop is going to run pre-rendered content just fine and fall over running generative.

The best shows are usually hybrid: pre-rendered for hero moments, generative for the ambient material that fills the 80 percent of runtime where the show is not in a climax. Purely pre-rendered shows feel dead on the loop; purely generative shows feel like screensavers.
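One way to think about the hybrid structure is as a cue list where each cue declares its playback mode. A sketch, with invented cue names and durations:

```python
from dataclasses import dataclass

@dataclass
class Cue:
    name: str
    mode: str         # "prerendered" or "generative"
    duration_s: float

# A hypothetical show order: deterministic hero moments bracketing
# long generative stretches that fill the rest of the runtime.
SHOW = [
    Cue("opening film", "prerendered", 30),
    Cue("ambient field", "generative", 480),
    Cue("finale", "prerendered", 45),
    Cue("exit loop", "generative", 9999),  # runs until doors close
]

def active_cue(t):
    """Return the cue playing at show time t (seconds)."""
    for cue in SHOW:
        if t < cue.duration_s:
            return cue
        t -= cue.duration_s
    return SHOW[-1]
```

The show-control system only needs to know the mode of the active cue: hand the output to the media server for pre-rendered cues, to the generative patch for the rest.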

The right question isn't real-time versus pre-rendered. It's "which moments in this show need determinism, and which ones need life?"

What audiences actually notice

In side-by-side testing, audiences rarely articulate whether content is real-time or not. What they do notice, consistently, is whether the room feels alive. Content that responds - even subtly, even to something the audience can't name - registers as alive. Content that doesn't respond registers as a recording, regardless of how good the imagery is.

This is the core argument for generative: it's not about showing off the technology. It's about the room reading as present instead of played-back. The first time a projection wall reacts to applause volume, the audience doesn't think "that was generative." They think "this space is listening." That's the effect worth chasing.
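Reacting to applause volume is usually done with an envelope follower: smooth the raw mic level so the visuals swell quickly with the applause and decay gently afterward, instead of flickering with every sample. A minimal sketch; the coefficients are illustrative:

```python
def envelope_follower(samples, attack=0.5, release=0.05):
    """Smooth a raw 0..1 level signal with fast attack, slow release.

    attack/release are per-step smoothing coefficients, chosen here
    for demonstration; real installations tune them to the room.
    """
    env, out = 0.0, []
    for level in samples:
        coeff = attack if level > env else release
        env += coeff * (level - env)
        out.append(env)
    return out

# A burst of applause after silence: the envelope jumps fast,
# then relaxes slowly once the room quiets.
levels = [0.0] * 3 + [1.0] * 3 + [0.0] * 4
print([round(e, 2) for e in envelope_follower(levels)])
```

Feed the envelope into a single visual parameter - brightness, bloom, particle emission - and the room "listens" without a single explicit cue.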
