4DVD: Cascaded Dense-view Video Diffusion Model for High-quality 4D Content Generation
arXiv 2025
Shuzhou Yang1, Xiaodong Cun2, Xiaoyu Li*3, Yaowei Li1, Jian Zhang*1
1 Peking University Shenzhen Graduate School 2 Great Bay University 3 Tencent
Abstract
Given the high complexity of directly generating high-dimensional data such as 4D content, we present 4DVD, a cascaded video diffusion model that generates 4D content in a decoupled manner. Unlike previous multi-view video methods that model 3D space and temporal dynamics simultaneously with stacked cross-view/temporal attention modules, 4DVD decouples the task into two sub-tasks, coarse novel-view generation and structure-aware conditional generation, and effectively unifies them. Specifically, given a monocular video, 4DVD first predicts dense-view content from its low-resolution version, which yields better cross-view and temporal consistency. A structure-aware spatio-temporal generation branch then takes the predicted dense-view videos as coarse structural priors and the original high-quality monocular video as the generation condition to produce the final dense-view videos. From these, an explicit 4D representation (such as 4D Gaussians) can be optimized accurately, enabling wider practical application. To train 4DVD, we collect a dynamic 3D object dataset from the Objaverse benchmark and render 16 videos of 21 frames each per object. Extensive experiments demonstrate state-of-the-art performance on both novel view synthesis and 4D generation.
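The two-stage cascade described above can be sketched schematically. This is a minimal, hypothetical NumPy illustration of the data flow only (the function names, shapes, and the downsample/upsample stand-ins are assumptions, not the paper's actual diffusion branches): a coarse branch predicts low-resolution dense views from a downsampled monocular video, and a structure-aware branch refines them to full resolution conditioned on the original video.

```python
import numpy as np

def coarse_novel_view_branch(mono_video_lr, num_views=16):
    # Hypothetical stand-in for the coarse branch: predict low-resolution
    # dense-view videos from a low-resolution monocular video.
    # Input shape: (frames, H, W, C) -> output: (views, frames, H, W, C).
    return np.repeat(mono_video_lr[None], num_views, axis=0)

def structure_aware_branch(coarse_views, mono_video_hq, scale=4):
    # Hypothetical stand-in for the structure-aware branch: refine the
    # coarse dense views to full resolution, conditioned on the original
    # high-quality monocular video (here just nearest-neighbor upsampling).
    return coarse_views.repeat(scale, axis=2).repeat(scale, axis=3)

def cascaded_4dvd(mono_video_hq, scale=4, num_views=16):
    # Stage 1: coarse dense-view prediction at low resolution.
    lr = mono_video_hq[:, ::scale, ::scale]
    coarse = coarse_novel_view_branch(lr, num_views)
    # Stage 2: structure-aware refinement back to full resolution.
    return structure_aware_branch(coarse, mono_video_hq, scale)

video = np.zeros((21, 64, 64, 3))  # 21 frames, as in the rendered dataset
out = cascaded_4dvd(video)
print(out.shape)  # (16, 21, 64, 64, 3): 16 views, 21 frames each
```

The dense-view outputs would then serve as multi-view supervision for fitting an explicit 4D representation such as 4D Gaussians.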
Demo
Video Comparison
4D Comparison