Generate AI Videos with ControlNext and SVD V2

The methodology is to use the ControlNeXt model (released by dvlab-research) with SVD v2 (by Stability AI) to create consistent AI videos. The architecture essentially mirrors the way AnimateAnyone works. The model has been trained on higher-quality videos with human pose alignment to produce more realistic results, especially for dancing videos.

The number of training batch frames has been increased to 24 so the model handles video generation in a more generic fashion. Additionally, the height and width have been increased to a resolution of 576 × 1024 to match the Stable Video Diffusion baseline. For more detail, see the relevant research papers.
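To make these numbers concrete, here is a minimal sketch using the plain StableVideoDiffusionPipeline from diffusers with the frame count and resolution quoted above. Note this is vanilla SVD for illustration only; the actual pose-conditioned ControlNeXt-SVD v2 pipeline lives in the dvlab-research repository, and "reference.png" is a placeholder file name.

```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Plain SVD pipeline, shown only to illustrate the settings
# (the ControlNeXt pose-conditioned pipeline is separate).
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt-1-1",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.enable_model_cpu_offload()

# Conditioning image resized to the 576 x 1024 training resolution.
image = load_image("reference.png").resize((1024, 576))

# 24 frames at 576 x 1024, matching the figures quoted above.
frames = pipe(
    image,
    num_frames=24,
    height=576,
    width=1024,
    decode_chunk_size=4,  # smaller chunks trade speed for lower VRAM use
).frames[0]

export_to_video(frames, "output.mp4", fps=8)
```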

Now, you can run this model on your machine with ComfyUI using custom nodes. 

Installation:

1. First, install ComfyUI and update it by clicking “Update All”.


2. Install the custom nodes by Kijai. Open the ComfyUI Manager and click “Custom Nodes Manager”. Then, search for “ControlNeXt-SVD” by Kijai and click the “Install” button.

3. Restart ComfyUI for the changes to take effect.


4. Now download the converted UNet model (controlnext-svd_v2-unet-fp16_converted.safetensors) from Kijai’s Hugging Face page and save it inside the “ComfyUI/models/unet” folder, or use the scripted download sketched below.
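If you prefer to script the download, huggingface_hub can fetch the file straight into the right folder. The repo id below is an assumption; confirm the exact repository name on Kijai’s Hugging Face profile before running.

```python
from huggingface_hub import hf_hub_download

# NOTE: the repo id is an assumption -- verify it on Kijai's
# Hugging Face profile before running.
hf_hub_download(
    repo_id="Kijai/ControlNeXt-SVD-V2-Comfy",
    filename="controlnext-svd_v2-unet-fp16_converted.safetensors",
    local_dir="ComfyUI/models/unet",
)
```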


5. Next, download the SVD XT 1.1 model from Stability AI’s Hugging Face page and put it inside the “ComfyUI/models/checkpoints/svd” folder.
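The same scripted approach works here. This repository is gated on Hugging Face, so accept the license on the model page and log in first (e.g. with `huggingface-cli login`); double-check the file name on the model page, as it may change.

```python
from huggingface_hub import hf_hub_download

# Gated repo: accept the license on the model page and authenticate
# (e.g. `huggingface-cli login`) before running this.
hf_hub_download(
    repo_id="stabilityai/stable-video-diffusion-img2vid-xt-1-1",
    filename="svd_xt_1_1.safetensors",  # verify the file name on the model page
    local_dir="ComfyUI/models/checkpoints/svd",
)
```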

6. Finally, the example workflows can be found inside the “ComfyUI/custom_nodes/ComfyUI_ControlNeXt-SVD/example” folder.

Here, you will find two workflows: one for ComfyUI and one for Diffusers. Drag and drop the ComfyUI workflow onto the ComfyUI canvas to load it.
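If you are unsure which file is the ComfyUI workflow, a quick sanity check is sketched below; the folder path is taken from step 6 and may differ on your install.

```python
import json
from pathlib import Path

# Folder from step 6 above; adjust if your install differs.
examples = Path("ComfyUI/custom_nodes/ComfyUI_ControlNeXt-SVD/example")

for wf in sorted(examples.glob("*.json")):
    data = json.loads(wf.read_text())
    # ComfyUI graph workflows carry a top-level "nodes" list.
    if "nodes" in data:
        print(f"{wf.name}: ComfyUI workflow with {len(data['nodes'])} nodes")
    else:
        print(f"{wf.name}: not a ComfyUI graph (likely the diffusers example)")
```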