⚡ Delivering High Quality GenAI Video

Olivia Walker
Editor in Chief
Jul 30, 2025
3:45 - 4:00
T-Mobile Park, Main Stage
Traditional video infrastructure was built around a "one-to-many" paradigm: a camera captures reality, a fixed encoder compresses it, and a CDN fans it out to passive viewers. Generative-AI video flips every assumption in that pipeline. Content is synthesized rather than filmed. Video is shifting from being created once for millions of viewers to being generated on demand, uniquely for each user. Creators and consumers blur into the same role, and distribution increasingly happens at the edge.
This talk dissects three fault lines where today's natural-video stack breaks under generative workloads, and discusses potential solutions (a brief sketch of the first idea follows the list):
Creation → Encoding: Diffusion models produce frames whose temporal coherence is algorithmic, not physical. Standard codecs waste bits preserving pseudo-motion that could instead be regenerated from seeds or latent tokens.
Encoding → Delivery: Generative streams are many-to-many: millions of personalized outputs, each short-lived. That breaks GOP-based CDN caching and overwhelms origin bandwidth.
Delivery → Experience: Interactivity expectations (prompt-to-pixel < 200 ms) demand tight co-design of model partitioning, adaptive bitrate in latent space, and real-time provenance watermarks.
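To make the Creation → Encoding idea concrete, here is a minimal, purely illustrative Python sketch of shipping a compact "generation recipe" (prompt, seed, optional latent tokens) instead of an encoded bitstream, and regenerating frames closer to the viewer. Every name in it (GenerationRecipe, regenerate_at_edge, render_fn, the model id) is hypothetical and stands in for a real diffusion sampler and transport layer, not any specific library or the speaker's actual system.

```python
# Illustrative sketch only: ship a small "generation recipe" instead of pixels,
# then regenerate frames deterministically at the edge. All names are hypothetical.
from dataclasses import dataclass, field
import json
import zlib


@dataclass
class GenerationRecipe:
    prompt: str                  # text prompt driving the video model
    seed: int                    # RNG seed that makes regeneration deterministic
    model_id: str                # which model/checkpoint the edge node should run
    num_frames: int = 48         # length of the clip to synthesize
    latent_tokens: list[int] = field(default_factory=list)  # optional conditioning

    def to_wire(self) -> bytes:
        """Serialize and compress the recipe for delivery instead of a bitstream."""
        payload = json.dumps(self.__dict__).encode("utf-8")
        return zlib.compress(payload)

    @staticmethod
    def from_wire(blob: bytes) -> "GenerationRecipe":
        return GenerationRecipe(**json.loads(zlib.decompress(blob)))


def regenerate_at_edge(recipe: GenerationRecipe, render_fn) -> list:
    """Re-synthesize frames from the recipe; `render_fn` stands in for a real
    diffusion sampler (model + scheduler) running on an edge GPU."""
    return [render_fn(recipe.prompt, recipe.seed, i) for i in range(recipe.num_frames)]


if __name__ == "__main__":
    recipe = GenerationRecipe(prompt="a drone shot over Seattle at dusk",
                              seed=1234, model_id="example-video-model")
    wire = recipe.to_wire()
    # A few hundred bytes on the wire versus megabytes of encoded pseudo-motion.
    print(f"recipe size on the wire: {len(wire)} bytes")
    frames = regenerate_at_edge(GenerationRecipe.from_wire(wire),
                                render_fn=lambda p, s, i: f"frame-{i}")
    print(f"regenerated {len(frames)} frames")
```

The point of the sketch is the size asymmetry: a few hundred compressed bytes of recipe can stand in for a full encoded clip, provided the receiving edge node can regenerate the frames deterministically from the same seed and conditioning.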
Attendees will leave with a mental framework for re-architecting encoding and delivery around the distinctive physics of generative video.
Reserve Your Spot: In Person or Online!
This session is available in person on Day 2 of BuffConf at SURF Incubator in Seattle and live on Twitch. Spots are limited for in-person attendees, so RSVP soon!
Add your name and email and we’ll send you a reminder the day before the event so you don’t miss it. It’s quick, easy, and totally worth it.