3D Content Generation Framework
8.8k 2026-04-18
ashawkey/stable-dreamfusion
A PyTorch implementation of DreamFusion, enabling text-to-3D and image-to-3D content generation using NeRF and Stable Diffusion.
Core Features
Text-to-3D model generation
Image-to-3D model generation
Mesh export for generated 3D content
Integration with Stable Diffusion and DeepFloyd-IF models
Fast rendering with Instant-NGP NeRF backbone
Quick Start
git clone https://github.com/ashawkey/stable-dreamfusion.git
cd stable-dreamfusion
pip install -r requirements.txt

Detailed Introduction
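After installation, generation is driven through `main.py`. The commands below follow the project's documented usage at the time of writing; flags may change as the project evolves, and a CUDA-capable GPU is required:

```shell
# Text-to-3D: optimize a NeRF using Stable Diffusion as 2D guidance.
# The -O flag enables the recommended Instant-NGP + mixed-precision preset.
python main.py --text "a hamburger" --workspace trial -O

# Render the trained model from novel views and export a textured mesh.
python main.py --workspace trial -O --test --save_mesh
```

Results and exported meshes are written into the chosen `--workspace` directory.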
Stable-Dreamfusion is an open-source PyTorch implementation of the DreamFusion text-to-3D method, using Stable Diffusion as the 2D prior that guides optimization. It lets users generate 3D models from text prompts or input images: scenes are represented with NeRF (Neural Radiance Fields), and finished models can be exported as meshes. While still a work in progress, the project aims to provide a flexible framework for 3D content creation, incorporating techniques such as Instant-NGP for faster rendering and Perp-Neg for improved generation quality.
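At the heart of DreamFusion is score distillation sampling (SDS): the NeRF's rendered image is noised, a diffusion model predicts that noise, and the prediction error is pushed back into the NeRF parameters. The sketch below illustrates just this update rule in NumPy; `fake_noise_predictor` is a hypothetical stand-in for Stable Diffusion's text-conditioned UNet, not part of the stable-dreamfusion codebase.

```python
# Minimal conceptual sketch of the SDS gradient, under the assumption of a
# toy stand-in denoiser (a real setup would call Stable Diffusion's UNet).
import numpy as np

rng = np.random.default_rng(0)

def fake_noise_predictor(noisy_image, t):
    # Hypothetical stand-in for the diffusion UNet: "predicts" a scaled
    # copy of its input. A real model is conditioned on the text prompt.
    return 0.1 * noisy_image

def sds_gradient(rendered, t, alpha_bar):
    """Gradient SDS backpropagates into the rendered image.

    grad = w(t) * (eps_hat - eps), skipping the UNet Jacobian entirely,
    which is what makes the method cheap enough to run per iteration.
    """
    eps = rng.standard_normal(rendered.shape)            # injected noise
    noisy = np.sqrt(alpha_bar) * rendered + np.sqrt(1 - alpha_bar) * eps
    eps_hat = fake_noise_predictor(noisy, t)             # model's estimate
    w = 1.0 - alpha_bar                                  # common weighting
    return w * (eps_hat - eps)

image = rng.standard_normal((4, 4, 3))                   # toy rendered image
grad = sds_gradient(image, t=500, alpha_bar=0.5)
assert grad.shape == image.shape
```

In the actual training loop this per-pixel gradient is chained through the NeRF renderer by autograd, so the 3D representation gradually moves toward images the diffusion model considers likely for the prompt.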