August 9th, 2024

VFusion3D: Learning Scalable 3D Generative Models from Video Diffusion Models

VFusion3D is a project developing scalable 3D generative models using video diffusion, to be presented at ECCV 2024. It offers pretrained models and a Gradio application for user interaction.


VFusion3D is a project aimed at developing scalable 3D generative models using video diffusion models, authored by Junlin Han, Filippos Kokkinos, and Philip Torr. It will be presented at the European Conference on Computer Vision (ECCV) in 2024. The project combines a small amount of 3D data with large quantities of synthetic multi-view data to train a large, feed-forward 3D generative model, aiming to push 3D generative and reconstruction models toward a robust 3D foundation model. Users can clone the repository, install the necessary dependencies, and use the pretrained models for inference tasks such as rendering videos and exporting meshes. The project also includes a local Gradio application for user interaction. The inference code is derived from the OpenLRM project, and the licensing is primarily CC-BY-NC, with some components under different licenses. For further details, users can access the project page and the GitHub repository.

- VFusion3D focuses on scalable 3D generative models from video diffusion models.

- The project will be presented at ECCV 2024.

- Users can clone the repository and set up a conda environment for installation.

- Pretrained models are available for various inference tasks.

- The project is licensed under CC-BY-NC with some components under different licenses.
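The setup described above (clone, conda environment, pretrained inference, Gradio app) would typically look something like the following sketch. The repository URL, script names, and flags are assumptions based on common conventions for this kind of release; check the project's README for the exact commands and the pretrained-weight download instructions.

```shell
# Clone the repository (URL assumed; verify against the project page)
git clone https://github.com/facebookresearch/vfusion3d.git
cd vfusion3d

# Create and activate a conda environment, then install dependencies
# (environment name and requirements file are assumptions)
conda create -n vfusion3d python=3.10 -y
conda activate vfusion3d
pip install -r requirements.txt

# Run inference with the pretrained model on a single input image
# (script name and flags are hypothetical; see the repo for the real CLI)
python run_inference.py --input examples/chair.png --export-mesh --render-video

# Launch the local Gradio application for interactive use
python app.py
```

The feed-forward design mentioned in the summary means a single forward pass produces the 3D output, so inference commands like these return results without per-object optimization.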

2 comments
By @billconan - 2 months
I tried a few samples; the quality doesn't feel as good as Stable Fast 3D.