Distilling Multi-view Diffusion Models into 3D Generators


Hao Qin1, Luyuan Chen2, Ming Kong1,3†, Mengxu Lu1, Qiang Zhu1

1Zhejiang University    2Beijing Information Science and Technology University    3Hikvision Research Institute
†Corresponding Author

Abstract




We introduce DD3G, a formulation that Distills a multi-view Diffusion model (MV-DM) into a 3D Generator using Gaussian Splatting. DD3G compresses and integrates extensive visual and spatial geometric knowledge from the MV-DM by simulating its ordinary differential equation (ODE) trajectory, ensuring the distilled generator generalizes better than those trained solely on 3D data. Unlike previous amortized-optimization approaches, we align the representation spaces of the MV-DM and the 3D generator to transfer the teacher’s probabilistic flow to the student, avoiding inconsistencies in optimization objectives caused by probabilistic sampling. The injected probabilistic flow and the coupling of the various attributes in 3D Gaussians pose challenges for the generation process. To tackle this, we propose PEPD, a generator consisting of Pattern Extraction and Progressive Decoding phases, which enables efficient fusion of the probabilistic flow and converts a single image into 3D Gaussians within 0.06 seconds. Furthermore, to reduce knowledge loss and overcome sparse-view supervision, we design a joint optimization objective that ensures the quality of generated samples through explicit supervision and implicit verification. Leveraging existing 2D generation models, we compile 120k high-quality RGBA images for distillation. Experiments on synthetic and public datasets demonstrate the effectiveness of our method.
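The core distillation idea — training a one-step student to reproduce the endpoint of the teacher's ODE trajectory — can be illustrated with a toy 1D probability-flow ODE. This is a minimal sketch, not the authors' implementation: `teacher_velocity`, `simulate_ode`, and the linear one-step student are hypothetical stand-ins for the MV-DM and PEPD.

```python
import random

# Toy teacher: a probability-flow ODE dx/dt = v(x, t). Here v is a simple
# linear contraction toward a fixed "data" point, standing in for the
# multi-view diffusion model's learned velocity field.
DATA_POINT = 2.0

def teacher_velocity(x, t):
    # Hypothetical velocity field pulling the sample toward the data point.
    return DATA_POINT - x

def simulate_ode(x0, steps=100):
    """Euler-integrate the teacher ODE from t=0 to t=1."""
    x, dt = x0, 1.0 / steps
    for i in range(steps):
        x += dt * teacher_velocity(x, i * dt)
    return x

# One-step student: x0 -> a*x0 + b, trained so a single forward pass
# matches the endpoint of the teacher's multi-step ODE trajectory.
a, b = 1.0, 0.0
lr = 0.1
random.seed(0)
for _ in range(500):
    x0 = random.gauss(0.0, 1.0)   # sampled "noise" input
    target = simulate_ode(x0)     # teacher's ODE endpoint
    err = (a * x0 + b) - target
    a -= lr * err * x0            # gradient step on the squared error
    b -= lr * err
# After distillation, the student maps any noise sample to (approximately)
# the teacher's trajectory endpoint without iterative sampling.
```

Because the toy ODE is linear, the student converges to the exact Euler-endpoint map; in DD3G the same principle transfers the teacher's probabilistic flow into a single-pass 3D generator.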


Overview





Interpolation Results


The smooth transformation indicates that PEPD learns a continuous latent space capturing
meaningful geometric variations rather than merely memorizing the training set.
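Latent-space interpolation of this kind amounts to decoding codes sampled along a straight line between two latents. A minimal sketch, assuming an element-wise latent representation — the `lerp` helper and the latent codes are hypothetical, and DD3G's actual encoder/decoder are omitted:

```python
def lerp(z0, z1, alpha):
    """Linearly interpolate between two latent codes, element-wise."""
    return [(1 - alpha) * u + alpha * v for u, v in zip(z0, z1)]

# Two hypothetical latent codes (e.g. obtained from two input images).
z_start = [0.0, 1.0, -2.0]
z_end = [4.0, -1.0, 2.0]

# Decoding each intermediate code (decoder not shown) would yield the
# smooth geometric morphs described above.
path = [lerp(z_start, z_end, k / 4) for k in range(5)]
```

A smooth, semantically meaningful sequence of decoded shapes along `path` is the evidence that the latent space is continuous rather than a lookup over training samples.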


Results on Photographs


DD3G has a strong ability to handle data variations inherent in real-world photography,
such as varying illumination, viewpoint changes, background complexity, and slight noise.


Comparisons


Thanks to the extraction and reconstruction of rich visual knowledge during the distillation process,
our method (DD3G) is capable of generating plausible 3D geometries.


More Results

