UniVidX: One Diffusion Backbone for RGB, Intrinsic Maps, and RGBA Video Generation


UniVidX proposes a single framework that handles multiple video generation tasks — RGB synthesis, intrinsic map generation, and RGBA layer decomposition — without separate models. Three components enable this: Stochastic Condition Masking (SCM) randomly partitions modalities into conditions and targets during training; Decoupled Gated LoRA (DGL) applies per-modality adaptations; Cross-Modal Self-Attention (CMSA) shares information across modalities. The system achieves competitive performance while training on fewer than 1,000 videos.
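The SCM idea — randomly splitting the available modalities into a condition set and a target set at each training step — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the modality names and the uniform partition rule are assumptions.

```python
import random

# Hypothetical modality list; the paper's exact set may differ.
MODALITIES = ["rgb", "albedo", "normal", "depth", "alpha"]

def scm_partition(modalities, rng):
    """Stochastic Condition Masking sketch: pick a non-empty random
    subset as generation targets; the remaining modalities act as
    clean conditioning inputs for this training step."""
    k = rng.randint(1, len(modalities))        # how many modalities to generate
    targets = set(rng.sample(modalities, k))   # targets get noised/denoised
    conditions = [m for m in modalities if m not in targets]
    return conditions, sorted(targets)

rng = random.Random(0)
conditions, targets = scm_partition(MODALITIES, rng)
print("conditions:", conditions)
print("targets:", targets)
```

Because every condition/target split is sampled during training, a single backbone learns all directional tasks (e.g. RGB→intrinsics, intrinsics→RGB, joint generation) at once — which is what lets one model cover the three task families above.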

Why it matters

Consolidating multiple video generation tasks into one backbone without degrading native capabilities is a key efficiency goal for production video models. The approach requires minimal training data, lowering the barrier for multi-task video generation research. The paper led HF Daily Papers on May 4 with 70 upvotes.

Importance: 3/5

Top HF Daily Papers May 4 with 70 upvotes; unified multi-task video generation from a single diffusion backbone.
