⚡Delivering High-Quality GenAI Video
Traditional video infrastructure was built around a "one-to-many" paradigm: a camera captures reality, a fixed encoder compresses it, and a CDN fans it out to passive viewers. Generative-AI video flips every assumption in that pipeline. Content is synthesized rather than filmed. Videos are shifting from being created once for millions of viewers to being generated on demand, uniquely per user. Creators and consumers blur into the same role, and distribution increasingly happens at the edge.
This talk dissects three fault lines where today's natural-video stack breaks under generative workloads, and discusses potential solutions:

- Creation → Encoding: Diffusion models produce frames whose temporal coherence is algorithmic, not physical. Standard codecs waste bits preserving pseudo-motion that could instead be regenerated from seeds or latent tokens.
- Encoding → Delivery: Generative streams are many-to-many: millions of personalized outputs, each short-lived. That breaks GOP-based CDN caching and overwhelms origin bandwidth.
- Delivery → Experience: Interactivity expectations (prompt-to-pixel < 200 ms) demand tight co-design of model partitioning, adaptive bitrate in latent space, and real-time provenance watermarks.
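To make the prompt-to-pixel target concrete, here is a minimal sketch of how a 200 ms interactivity budget might decompose across pipeline stages. The stage names and all timing numbers are illustrative assumptions, not measurements from any real system:

```python
# Hypothetical prompt-to-pixel latency budget for interactive generative video.
# Stage names and millisecond values are illustrative assumptions only.
BUDGET_MS = 200  # interactivity target: prompt-to-pixel < 200 ms

stages = {
    "prompt upload (client -> edge)": 20,
    "model inference (edge-partitioned)": 120,
    "latent-space encode": 15,
    "delivery (edge -> client)": 25,
    "client decode + render": 15,
}

total = sum(stages.values())
headroom = BUDGET_MS - total
print(f"total: {total} ms, headroom: {headroom} ms")
for name, ms in stages.items():
    print(f"  {name:36s} {ms:4d} ms ({ms / BUDGET_MS:.0%} of budget)")
```

Even under these optimistic assumptions, model inference dominates the budget, which is why the talk frames model partitioning and latent-space adaptive bitrate as co-design problems rather than independent optimizations.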
Attendees will leave with a mental framework for re-architecting encoding and delivery around the distinctive physics of generative video.

Adam Brown
Co-Founder & CTO
Mux
Adam Brown co-founded Mux in 2015 and leads technology and architecture for the developer-first video infrastructure platform. With deep roots in video technology, Adam has built high-performance encoding systems, low-latency live streaming pipelines, and scalable cloud video infrastructure, including during his time at Zencoder and Brightcove, with additional experience in VR rendering at Otoy.
Known for merging engineering rigor with developer empathy, he’s focused on enabling seamless, scalable video delivery and real-time analytics through API-first products like Mux Video and Mux Data.