AnimateDiff is a powerful text-to-video model that has quickly gained popularity, and the community is using it to generate genuinely impressive videos.
Here, we walk you through the installation and the workflow, along with the fine-grained settings that make your generations stronger. This workflow uses a Stable Diffusion 1.5 checkpoint; for Stable Diffusion XL, follow our AnimateDiff SDXL tutorial.
1. Install ComfyUI on your machine. Open the ComfyUI Manager and click the “Install Custom Nodes” option.
Search for “animatediff” in the search box and install the node authored by “Kosinkadink”. Then restart ComfyUI for the change to take effect.
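If you prefer a manual install over the Manager, cloning the node's GitHub repository into your custom_nodes folder works too. A minimal Python sketch, assuming git is on your PATH (adjust the path to match your install):

```python
import subprocess
from pathlib import Path

# Path to ComfyUI's custom_nodes folder -- adjust to your install
custom_nodes = Path("ComfyUI_windows_portable/ComfyUI/custom_nodes")

# Clone Kosinkadink's AnimateDiff-Evolved node, then restart ComfyUI
subprocess.run(
    ["git", "clone", "https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved.git"],
    cwd=custom_nodes,
    check=True,
)
```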
2. Next, download the three motion models shown in the image above from the Hugging Face repository.
After downloading, save them inside the “ComfyUI_windows_portable/custom_nodes/ComfyUI-AnimateDiff-Evolved/models” folder.
For an extra edge in controlling and manipulating the rendering process, you can also download the motion LoRA models, which add camera motions such as panning, zooming, tilting, and rolling. Save them inside the “ComfyUI_windows_portable/custom_nodes/ComfyUI-AnimateDiff-Evolved/models/MotionLoRA” folder.
3. Then download the SD VAE model (the vae-ft-mse-840000-ema-pruned.safetensors file) from the Hugging Face repository, put it into the “ComfyUI_windows_portable/ComfyUI/models/vae” folder, and restart ComfyUI.
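If you would rather script the downloads in steps 2 and 3 than fetch the files by hand, the huggingface_hub client can place them directly into the right folders. A minimal sketch, assuming the motion modules and motion LoRAs live in the guoyww/animatediff repository and the VAE in stabilityai/sd-vae-ft-mse-original (verify the exact filenames against the repositories before running):

```python
from huggingface_hub import hf_hub_download

# Motion modules (step 2); filenames assumed from the guoyww/animatediff repo
for name in ["mm_sd_v14.ckpt", "mm_sd_v15.ckpt", "mm_sd_v15_v2.ckpt"]:
    hf_hub_download(
        repo_id="guoyww/animatediff",
        filename=name,
        local_dir="ComfyUI_windows_portable/custom_nodes/ComfyUI-AnimateDiff-Evolved/models",
    )

# Optional motion LoRAs for camera moves (two examples; the repo has more)
for name in ["v2_lora_PanLeft.ckpt", "v2_lora_ZoomIn.ckpt"]:
    hf_hub_download(
        repo_id="guoyww/animatediff",
        filename=name,
        local_dir="ComfyUI_windows_portable/custom_nodes/ComfyUI-AnimateDiff-Evolved/models/MotionLoRA",
    )

# SD VAE (step 3)
hf_hub_download(
    repo_id="stabilityai/sd-vae-ft-mse-original",
    filename="vae-ft-mse-840000-ema-pruned.safetensors",
    local_dir="ComfyUI_windows_portable/ComfyUI/models/vae",
)
```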
4. Now download the basic workflows that we have listed in our Hugging Face repository.
Drag and drop one onto the ComfyUI canvas; we are using the Text2Img workflow here.
5. Download the relevant Stable Diffusion 1.5 checkpoint. For illustration, we are using the DreamShaper model (fine-tuned on SD 1.5), but you can choose your favorite.
Now, enter your positive and negative prompts. You can also try our Stable Diffusion Prompt Generator for ideas. Click “Queue Prompt” to start the generation; the time it takes will depend on your workflow settings and your machine.
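As a side note, the “Queue Prompt” button just posts the workflow to ComfyUI's built-in HTTP API, so you can also queue generations from a script. A minimal sketch, assuming the default server at 127.0.0.1:8188 and a workflow exported via ComfyUI's “Save (API Format)” option (the JSON filename is hypothetical):

```python
import json
import urllib.request

# Load a workflow exported with "Save (API Format)" (hypothetical filename)
with open("text2img_workflow_api.json") as f:
    workflow = json.load(f)

# POST it to ComfyUI's /prompt endpoint (default port 8188)
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))  # contains the prompt_id of the queued job
```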
6. After the generation, you will find the result in frames (as a GIF image) inside the “ComfyUI_windows_portable/ComfyUI/output” folder. The accompanying PNG image stores the workflow settings as metadata.
If you want to reuse it later, simply drag and drop that image onto the canvas and the workflow loads instantly.
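You can also inspect that metadata programmatically, since ComfyUI stores the graph as a text chunk inside the PNG. A minimal sketch with Pillow (the output filename is hypothetical):

```python
import json
from PIL import Image

# Open a generated image from the output folder (hypothetical filename)
img = Image.open("ComfyUI_windows_portable/ComfyUI/output/ComfyUI_00001_.png")

# ComfyUI writes the graph into the "workflow" text chunk of the PNG
workflow = img.info.get("workflow")
if workflow:
    print(json.dumps(json.loads(workflow), indent=2)[:400])  # peek at the graph
```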
To get an MP4 video, load the generated GIF into any video editing tool and convert it; an online converter is another option.
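For instance, a few lines of Python with imageio can do the conversion locally. The filenames below are placeholders, and the FPS should match what you generated with:

```python
import imageio.v2 as imageio

# Re-encode the AnimateDiff GIF as an MP4
# (requires the ffmpeg backend: pip install imageio[ffmpeg])
reader = imageio.get_reader("animation.gif")         # placeholder filename
writer = imageio.get_writer("animation.mp4", fps=8)  # match your generation FPS

for frame in reader:
    writer.append_data(frame[..., :3])  # drop alpha if present; MP4 expects RGB

writer.close()
```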
Getting perfect results takes multiple tries, tweaking the settings as you go. For upscaling, you can use the Upscale node (which adds rendering time) or any third-party editing tool.
1. First, install Automatic1111 on your machine.
2. Here, we are using ToonYou (fine-tuned on Stable Diffusion 1.5) from CivitAI as the supporting checkpoint. Make sure to use an appropriate fine-tuned model as the checkpoint; you can choose your favorite. Save it inside the “stable-diffusion-webui/models/Stable-diffusion” folder.
3. Back in Automatic1111, head over to the “Extensions” tab, click “Available”, then click “Load from:”. Search for “animatediff” in the search box; an extension named “sd-webui-animatediff” will appear. Click the “Install” button to start the installation.
Once it is installed, click “Apply and restart UI” to activate it.
4. Next, download the AnimateDiff motion models from the Hugging Face repository and save them inside the “stable-diffusion-webui/extensions/sd-webui-animatediff/model” folder.
5. Move back to Automatic1111, where a new “AnimateDiff” section now appears above the ControlNet section. Click on it to open it.
Put your positive and negative prompts into the prompt boxes and apply the settings recommended for your model checkpoint; you can find these on the model's CivitAI page. For the ToonYou model, we used these recommended settings:
6. To make it animate, open the “AnimateDiff” section and enable it. Set the number of frames to around 16-18 and the FPS to 8-10 (or higher for a longer generation), and tick the “Save as GIF” checkbox. Finally, click “Generate”.
For smoother motion, use a higher frame count and FPS, at the cost of longer rendering time; the clip length is simply the frame count divided by the FPS (16 frames at 8 FPS is about 2 seconds). The result is generated in GIF format, so to get an MP4 video you can use any conversion tool, such as the imageio sketch shown earlier.
AnimateDiff is a text-to-video model that can be used in Stable Diffusion WebUIs to create stunning AI-generated videos.