So, the wait is over: Stability AI has officially released a new model, and it is a defining moment for the image generation community.
We have been experimenting with it for a couple of months, including the released ControlNet, which gives an extra edge over other tools and is a great help to AI enthusiasts working on image generation.
1. First of all, you need to download and install the ControlNet models from Hugging Face.
2. It is helpful to download every model available in the ControlNet repository on Hugging Face, so you can work with every aspect of ControlNet and take full advantage of it. This is optional, though, and depends on your requirements.
3. Head over to the ControlNet model repository on Hugging Face:
https://huggingface.co/lllyasviel/ControlNet-v1-1/tree/main
Here you will see file names containing the letter “e”, which marks models meant for experimental use, while names containing the letter “p” mark production-ready models that can be deployed on a server for standalone applications. The “sd15” part of the name indicates that Stable Diffusion 1.5 is the base model.
4. After downloading, move the files into the folder where the ControlNet extension expects them (a command-line sketch is given after this list).
5. Move to your Stable Diffusion root folder and open the following path:
stable-diffusion-webui\webui\extensions\sd-webui-controlnet\models
6. Save your files inside the “models” folder.
7. Now go back to Automatic1111 and click the “Refresh” button so the change takes effect and all the models load in the ControlNet section.
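If you prefer the command line, here is a minimal sketch of the download step for the Windows Command Prompt. It assumes the Canny model as an example and the extension path shown above; swap in whichever model files and paths match your own setup.

rem move into the ControlNet extension's models folder (adjust to your install path)
cd stable-diffusion-webui\webui\extensions\sd-webui-controlnet\models
rem download one model (.pth) and its matching .yaml config from the Hugging Face repository
curl -L -O https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/control_v11p_sd15_canny.pth
curl -L -O https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/control_v11p_sd15_canny.yaml

After that, the “Refresh” button from step 7 makes the new model appear in the ControlNet dropdown.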
0. First, let’s see how to install the Manager tab that usually appears on the right panel of ComfyUI. To install ComfyUI Manager, go to its GitHub repository and click the copy button to copy the clone link.
1. Press Windows key + R, type “cmd” and hit Enter to open the Command Prompt, then run the following command to clone the repository:
git clone https://github.com/ltdrdata/ComfyUI-Manager.git
If you downloaded the repository as a zip file instead, use an extractor (such as 7-Zip or WinRAR) to unzip it. After cloning or extracting, copy the ComfyUI-Manager folder into the “custom_nodes” folder of your ComfyUI installation. If you use the portable version, this sits inside the “ComfyUI_windows_portable” folder.
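Alternatively, you can clone ComfyUI-Manager straight into the right folder and skip the copy step. A minimal sketch, assuming the portable build’s default folder names (adjust them to your installation):

rem clone ComfyUI-Manager directly into ComfyUI's custom_nodes folder
cd ComfyUI_windows_portable\ComfyUI\custom_nodes
git clone https://github.com/ltdrdata/ComfyUI-Manager.git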
2. After installing, restart ComfyUI for the change to take effect. You will see a MANAGER tab, as shown in the image above. Click on the Manager tab.
3. A new small pop-up with multiple options will appear. Just click on Install Custom Nodes.
4. A new window will appear. On the rightmost side of the window you will see a search box. Search for “CONTROL NET” and click the Search button in the top right corner. You will see various preprocessors related to ControlNet.
5. Click the Install button on the right of the dashboard to install all the ControlNet-related preprocessors.
6. That’s it. Close the window by clicking the Close button and restart ComfyUI, because the changes do not show up automatically.
7. Open ComfyUI again, right-click on the dashboard, and navigate to the following option:
Add Node > Control Net PreProcessors >
Inside this menu, you will find all the preprocessors we just installed.
8. Next, move to the Hugging Face portal and download the Stable Diffusion XL models released by Stability AI. Select the link for Control-LoRA (despite the name, these are not the LoRAs you may know from the Stable Diffusion fine-tuning world).
These files are smaller and more memory-efficient. You have to clone the repository to get them (a command-line sketch is given after these steps).
9. Place the downloaded files into the ComfyUI/models/controlnet folder.
10. Here is the result after putting all the downloaded files into the ComfyUI installation folder.
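Steps 8 and 9 can also be done from the command line. A minimal sketch, assuming the files come from Stability AI’s control-lora repository on Hugging Face and that you run the portable build (adjust the repository and paths to your setup):

rem Git LFS is needed because the model weights are stored with Git LFS
git lfs install
git clone https://huggingface.co/stabilityai/control-lora
rem then copy the .safetensors files you need from the cloned folder into
rem ComfyUI_windows_portable\ComfyUI\models\controlnet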
Now let’s see what these ControlNet preprocessors can do.
If you are loading the newly installed preprocessors for the first time, it will take a while, because ComfyUI downloads the files required to run them in the background.
You can also use some of these features to add depth or sharpness to your images.