The Rise of Real-Time AI Video: Open Source, World Models, and the Next Computing Layer
1. A New Frontier in AI: From Frames to Worlds
If 2023 was the year text-to-video exploded, 2025 is the year video comes alive.
Until recently, video generation was a static affair — sequences of images stitched together, with no understanding of continuity or physics. Models like StreamDiffusion represented an early leap: turning diffusion models into live, real-time generators. But they lacked temporality — no concept of cause, persistence, or spatial coherence.
Now, we’re entering the world model era — systems that don’t just render visuals but simulate consistent environments over time. They understand how the world moves.
This evolution transforms video from a final output into an interactive medium — a living simulation that can respond, generate, and adapt in real time. It’s a shift as profound as moving from photographs to film, or film to games.
2. Real-Time as the Breakthrough
What makes this moment unique is latency — or rather, the removal of it.
In real-time AI, milliseconds matter. A delay of even one second breaks immersion. That’s why this wave of models demands a new kind of infrastructure — distributed GPU networks capable of continuous inference at scale. Traditional cloud pipelines, built for asynchronous jobs, simply can’t handle it.
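To make the latency constraint concrete, here is a minimal sketch (plain Python, with a stub standing in for the actual generator; the numbers are illustrative, not benchmarks) of the per-frame budget a live stream has to hit. At 24 fps each frame must be produced in roughly 42 ms; anything slower shows up as stutter, and a one-second stall breaks the stream entirely.

```python
import time

TARGET_FPS = 24
FRAME_BUDGET = 1.0 / TARGET_FPS   # ~41.7 ms to produce each frame

def generate_frame(frame_index: int) -> bytes:
    """Stub for a real-time generator; a real diffusion step must fit in the budget."""
    return b"frame-%d" % frame_index

def stream(num_frames: int = 120) -> None:
    missed = 0
    for i in range(num_frames):
        start = time.perf_counter()
        frame = generate_frame(i)
        elapsed = time.perf_counter() - start
        if elapsed > FRAME_BUDGET:
            missed += 1  # in a live stream this shows up as visible stutter
        # sleep off the remainder of the budget to hold a steady frame rate
        time.sleep(max(0.0, FRAME_BUDGET - elapsed))
    print(f"missed {missed} of {num_frames} frame budgets")

if __name__ == "__main__":
    stream()
```

This is why per-frame latency, not aggregate throughput, is the metric that matters for live generation.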
This is where Livepeer and Daydream enter the story.

Daydream, incubated within Livepeer, is building the real-time video inference layer — the connective tissue that lets developers and creative technologists stream, generate, and remix live AI video without managing GPUs. It’s what CTO Eric Tang calls “the Hugging Face for real-time video models”: open source at its core, with a scalable inference network powering everything underneath.
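Daydream has not published the interface sketched below; the endpoint, message format, and function names are invented here purely to illustrate the shape of such a service: a client pushes live frames up a socket and pulls generated frames back, while the GPU work happens somewhere on the network.

```python
# Hypothetical sketch only. The URL, message format, and parameters below are
# invented for illustration; they are not Daydream's actual API.
import asyncio
import websockets  # pip install websockets

async def remix_live_feed() -> None:
    uri = "wss://example.invalid/realtime?prompt=neon+watercolor"  # made-up endpoint
    async with websockets.connect(uri) as ws:
        for frame in capture_frames():      # e.g. a webcam or TouchDesigner feed
            await ws.send(frame)            # push the raw frame upstream
            stylized = await ws.recv()      # receive the generated frame back
            display(stylized)

def capture_frames():
    """Placeholder capture loop; a real client would read from a camera or a TOP."""
    for _ in range(3):
        yield b"\x00" * 1024

def display(frame: bytes) -> None:
    print(f"received {len(frame)} bytes")

if __name__ == "__main__":
    asyncio.run(remix_live_feed())
```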
3. Open Source as the Growth Engine
Open source isn’t a marketing choice — it’s the only path to scale in such a fast-moving ecosystem.
As Hunter (Head of Product) explained, Daydream’s approach is simple: community first, monetization second. Every major leap in model capability — from temporality to controllability — starts in the open. Researchers and tinkerers experiment locally, publish results, and share configs. Builders remix those into live experiences.
That loop — research → open source → application → inference — is how real-time AI grows.
The community is the funnel. The inference API is the business.
4. The Early Market: Creative Technologists as Catalysts
Every platform needs a beachhead — the early believers who shape its culture.

For Daydream, that wedge is the TouchDesigner ecosystem: a passionate community of interactive artists and live VJs who have long pushed the boundaries of real-time visuals. By partnering with the creator of the official TouchDesigner plug-in and building integrations directly into their workflow, Daydream tapped into an unmet need — real-time diffusion without GPU pain.
The results were immediate: over 500 developers and artists signed up for the API waitlist within a week of launch. As one user put it, “I finally don’t need a 4090 to create what’s in my head.”
But this is just the start. As raw API access grows, new personas are emerging, from game developers to robotics researchers. They don't want a plug-in. They want a foundation.
5. From Tools to Worlds
Today’s models generate images that move. Tomorrow’s will generate worlds that persist.

The next wave — video world models — is already blurring the line between simulation, robotics, and storytelling. A world model can learn the physics of a scene, generate consistent perspectives, and predict causal behavior. It’s not just showing you what something looks like; it’s teaching machines what the world is.
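A rough way to picture what "predicting causal behavior" means in code: the sketch below uses toy numpy stand-ins (not any real architecture) to roll a latent state forward one step per frame, conditioned on an action, and decode each state into a frame. The point is that every frame depends on the history of states and actions, which is exactly the persistence that frame-by-frame generators lack.

```python
import numpy as np

# Toy stand-ins: a real video world model would be a large learned network.
STATE_DIM, ACTION_DIM = 8, 2
rng = np.random.default_rng(0)
A = rng.normal(scale=0.1, size=(STATE_DIM, STATE_DIM))   # "physics" of the latent scene
B = rng.normal(scale=0.1, size=(STATE_DIM, ACTION_DIM))  # how actions perturb the scene

def predict_next_state(state: np.ndarray, action: np.ndarray) -> np.ndarray:
    """One world-model step: next latent state from current state and action."""
    return np.tanh(A @ state + B @ action)

def decode_frame(state: np.ndarray) -> np.ndarray:
    """Render the latent state to a (tiny) image; a real model uses a learned decoder."""
    return np.outer(state, state)

def rollout(actions: list[np.ndarray]) -> list[np.ndarray]:
    state = np.zeros(STATE_DIM)
    frames = []
    for action in actions:
        # Each frame depends on the whole state/action history, not on a single
        # prompt: this is the persistence and causality described above.
        state = predict_next_state(state, action)
        frames.append(decode_frame(state))
    return frames

frames = rollout([rng.normal(size=ACTION_DIM) for _ in range(30)])
print(len(frames), frames[0].shape)
```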
This unlocks entirely new verticals:
- Gaming and interactive experiences, where every frame reacts to player input.
- Robotics, where real-time video models power training and simulation.
- Live performance and entertainment, where visual effects respond to motion, voice, or emotion in real time.
As Hunter summarized, “There’s a home for every model on Hugging Face. But Daydream is where they come alive.”
6. The Emerging Ecosystem: Hugging Face vs. Daydream
| Competitor | Focus | Overlap with Daydream | Potential Risk / Opportunity |
|---|---|---|---|
| Runway ML | Creative tool + video generation | Model generation + creative workflows | Risk: creative side moves toward real-time |
| Fireworks AI | Inference infrastructure | Serving open models at scale | Risk: becomes preferred inference layer |
| Kling AI / Sora 2 Pro | Text-to-video generation models | Cutting-edge model capabilities | Opportunity: Daydream's real-time niche remains |
| Together AI | Model infra + multi-modal models | Infrastructure + open-source models | Risk: could expand into live video dev tooling |
Hugging Face defined the open-source playbook for AI — hosting, sharing, and collaboration. But video breaks that model.
Running a real-time world model isn’t like hosting a static checkpoint. It’s continuous, compute-intensive, and highly interactive. It requires infrastructure tuned for low latency streaming, not simple inference.
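The difference is easiest to see as code. The sketch below (asyncio only, with a sleep standing in for the model; names and numbers are illustrative) shows the shape of a streaming pipeline: a producer that never stops, a bounded queue providing backpressure, and a generator that carries temporal state from frame to frame while trying to stay inside the latency budget. None of this maps onto a submit-a-job-and-poll cloud pipeline.

```python
import asyncio
import time

FRAME_BUDGET = 1 / 24  # seconds per frame at 24 fps

async def camera(out_q: asyncio.Queue) -> None:
    """Producer: a live source keeps emitting frames; there is no job to 'finish'."""
    for i in range(96):
        await out_q.put((i, time.perf_counter()))
        await asyncio.sleep(FRAME_BUDGET)
    await out_q.put(None)

async def generator(in_q: asyncio.Queue, out_q: asyncio.Queue) -> None:
    """Consumer: must handle every frame within the budget, carrying state across frames."""
    state = None  # temporal state a real model would carry between frames
    while (item := await in_q.get()) is not None:
        idx, t_captured = item
        await asyncio.sleep(0.01)   # stand-in for one low-step diffusion pass
        state = idx                 # a real model updates its latents here
        await out_q.put((idx, time.perf_counter() - t_captured))
    await out_q.put(None)

async def main() -> None:
    frames_in: asyncio.Queue = asyncio.Queue(maxsize=2)  # bounded: backpressure, not batching
    frames_out: asyncio.Queue = asyncio.Queue()
    producer = asyncio.create_task(camera(frames_in))
    consumer = asyncio.create_task(generator(frames_in, frames_out))
    worst = 0.0
    while (item := await frames_out.get()) is not None:
        worst = max(worst, item[1])
    await producer
    await consumer
    print(f"worst end-to-end latency: {worst * 1000:.1f} ms")

asyncio.run(main())
```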
That’s why new players like Foul and Daydream are emerging — purpose-built for the next era. Foul experiments with creative video pipelines; Daydream focuses on real-time, distributed inference and open-source collaboration.
In short:
- Hugging Face → Model hosting & research.
- Foul → Creative video experimentation.
- Daydream → Real-time, applied world models.
7. Why This Matters
Real-time AI video represents more than a new creative medium — it’s a new computing layer.
In the same way browsers abstracted the web, and operating systems abstracted hardware, world models will abstract physical simulation. They’ll become the canvas for autonomous systems, virtual environments, and generative entertainment.
And the infrastructure built today — open, distributed, and real time — will determine who owns that layer.
8. The Decade Ahead
We are witnessing the convergence of inference, interaction, and imagination.
As models evolve from generating pixels to understanding causality, and as communities like Daydream’s turn open research into live systems, real-time AI video will stop being a demo — and start being the interface of the future.
It won’t just show the world.
It will become the world.