

Open-Sora
Democratizing Efficient Video Production for All
We design and implement Open-Sora, an initiative dedicated to efficiently producing high-quality video. We aim to make the model, tools, and all details accessible to everyone. By embracing open-source principles, Open-Sora not only democratizes access to advanced video generation techniques but also offers a streamlined, user-friendly platform that simplifies the complexities of video generation. With Open-Sora, our goal is to foster innovation, creativity, and inclusivity in content creation.
🎬 For a professional AI video-generation product, try Video Ocean — powered by a superior model.
Quickstart
Installation
```bash
# create a virtual env and activate (conda as an example)
conda create -n opensora python=3.10
conda activate opensora

# download the repo
git clone https://github.com/hpcaitech/Open-Sora
cd Open-Sora

# Ensure torch >= 2.4.0
pip install -v .  # for development mode, `pip install -v -e .`
pip install xformers==0.0.27.post2 --index-url https://download.pytorch.org/whl/cu121  # install xformers according to your CUDA version
pip install flash-attn --no-build-isolation
```
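Since the build requires torch >= 2.4.0, a quick version check before installing can save a failed compile. A minimal sketch; the version-parsing helper below is our own illustration, not part of Open-Sora:

```python
def version_tuple(v: str) -> tuple:
    """Parse a version string like '2.4.1+cu121' into a comparable tuple."""
    core = v.split("+")[0]  # drop local build tags such as '+cu121'
    return tuple(int(part) for part in core.split(".")[:3])

def torch_is_recent_enough(version: str, minimum: str = "2.4.0") -> bool:
    """True when the installed version meets the minimum requirement."""
    return version_tuple(version) >= version_tuple(minimum)

# Example usage before running `pip install -v .`:
# import torch
# assert torch_is_recent_enough(torch.__version__), "Upgrade torch to >= 2.4.0"
```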
Optionally, you can install FlashAttention 3 for additional speedup:
```bash
git clone https://github.com/Dao-AILab/flash-attention  # 4f0640d5
cd flash-attention/hopper
python setup.py install
```
Model Download
Our 11B model supports 256px and 768px resolutions, and a single model handles both text-to-video (T2V) and image-to-video (I2V). It is available on 🤗 Huggingface and 🤖 ModelScope.
Download from Huggingface:
```bash
pip install "huggingface_hub[cli]"
huggingface-cli download hpcai-tech/Open-Sora-v2 --local-dir ./ckpts
```
Download from ModelScope:
```bash
pip install modelscope
modelscope download hpcai-tech/Open-Sora-v2 --local_dir ./ckpts
```
Text-to-Video Generation
Our model is optimized for image-to-video generation, but it can also be used for text-to-video generation. To generate high-quality videos, we build a text-to-image-to-video pipeline with the help of the Flux text-to-image model. For 256x256 resolution:
```bash
# Generate one given prompt
torchrun --nproc_per_node 1 --standalone scripts/diffusion/inference.py configs/diffusion/inference/t2i2v_256px.py --save-dir samples --prompt "raining, sea"

# Save memory with offloading
torchrun --nproc_per_node 1 --standalone scripts/diffusion/inference.py configs/diffusion/inference/t2i2v_256px.py --save-dir samples --prompt "raining, sea" --offload True

# Generation with csv
torchrun --nproc_per_node 1 --standalone scripts/diffusion/inference.py configs/diffusion/inference/t2i2v_256px.py --save-dir samples --dataset.data-path assets/texts/example.csv
```
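The `--dataset.data-path` option reads prompts from a CSV file such as `assets/texts/example.csv`. The sketch below generates such a file assuming a single `text` column; that column name is a guess, so check the bundled example file in the repo for the authoritative schema:

```python
import csv

# Hypothetical prompt file; the "text" column name is an assumption --
# consult assets/texts/example.csv in the repo for the real format.
prompts = ["raining, sea", "a red barn on rolling green hills at sunset"]

with open("my_prompts.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["text"])
    writer.writeheader()
    writer.writerows({"text": p} for p in prompts)
```

Pass the resulting file via `--dataset.data-path my_prompts.csv` to generate one video per row.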
For 768x768 resolution:
```bash
# One GPU
torchrun --nproc_per_node 1 --standalone scripts/diffusion/inference.py configs/diffusion/inference/t2i2v_768px.py --save-dir samples --prompt "raining, sea"

# Multi-GPU with colossalai sp
torchrun --nproc_per_node 8 --standalone scripts/diffusion/inference.py configs/diffusion/inference/t2i2v_768px.py --save-dir samples --prompt "raining, sea"
```
You can adjust the generation aspect ratio with `--aspect_ratio` and the generation length with `--num_frames`. Candidate values for `aspect_ratio` include `16:9`, `9:16`, `1:1`, and `2.39:1`. Candidate values for `num_frames` should be of the form `4k+1` and less than 129.
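The frame-count rule above (of the form 4k+1 and below 129, so the largest valid value is 125) is easy to get wrong on the command line. This small validator is our own illustration, not part of the Open-Sora CLI:

```python
# Allowed aspect ratios, as listed in the usage notes above.
VALID_ASPECT_RATIOS = {"16:9", "9:16", "1:1", "2.39:1"}

def valid_num_frames(n: int) -> bool:
    """A valid --num_frames value has the form 4k+1 and is less than 129."""
    return n % 4 == 1 and 0 < n < 129
```

For example, 125 passes the check while 128 and 129 both fail.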
You can also run direct text-to-video generation with:
```bash
# One GPU for 256px
torchrun --nproc_per_node 1 --standalone scripts/diffusion/inference.py configs/diffusion/inference/256px.py --prompt "raining, sea"

# Multi-GPU for 768px
torchrun --nproc_per_node 8 --standalone scripts/diffusion/inference.py configs/diffusion/inference/768px.py --prompt "raining, sea"
```
Image-to-Video Generation
Given a prompt and a reference image, you can generate a video with the following command:
```bash
# 256px
torchrun --nproc_per_node 1 --standalone scripts/diffusion/inference.py configs/diffusion/inference/256px.py --cond_type i2v_head --prompt "A plump pig wallows in a muddy pond on a rustic farm, its pink snout poking out as it snorts contentedly. The camera captures the pig's playful splashes, sending ripples through the water under the midday sun. Wooden fences and a red barn stand in the background, framed by rolling green hills. The pig's muddy coat glistens in the sunlight, showcasing the simple pleasures of its carefree life." --ref assets/texts/i2v.png

# 256px with csv
torchrun --nproc_per_node 1 --standalone scripts/diffusion/inference.py configs/diffusion/inference/256px.py --cond_type i2v_head --dataset.data-path assets/texts/i2v.csv

# Multi-GPU 768px
torchrun --nproc_per_node 8 --standalone scripts/diffusion/inference.py configs/diffusion/inference/768px.py --cond_type i2v_head --dataset.data-path assets/texts/i2v.csv
```