Accelerating GenAI Deployment at Scale with Media Understanding and Evaluation

Olivia Walker
Editor in Chief
Jul 30, 2025
2:45 - 3:10
T-Mobile Park, Main Stage
Deploying GenAI applications is challenging because AI models are typically validated on research content and do not automatically perform well on User-Generated Content (UGC), which can differ significantly due to its non-pristine quality and high diversity.
In general, UGC may have lower source quality than research datasets because it has often already been compressed.
On the audio generation side, such as auto-dubbing translation applications, the presence of echoes, multiple speakers, and ambient noise is challenging for research models.
On the video generation side, such as lip-sync generation, it is challenging to produce natural lip movements from occluded or partial faces, or faces at extreme angles.
For enhancement of AI-generated videos, strategies need to be tuned to content characteristics, such as multiple shots and scene boundaries, or the presence of black borders.
To address these challenges when deploying GenAI on UGC, Meta leverages both video understanding, for optimal performance of models and algorithms, and media-quality evaluation, for efficient model iteration. This session will showcase the designs and insights behind both.
Reserve Your Spot — In Person or Online!
This hands-on workshop is available in person on Day 2 of BuffConf at SURF Incubator in Seattle and live on Twitch. Spots are limited for in-person attendees, so RSVP soon!
Add your name and email and we’ll send you a reminder the day before the event so you don’t miss it.