Enhance Your Images: Elevate with SUPIR

Forget those older Photoshop techniques that demand editing skills but still don’t deliver perfect resolution. SUPIR (Scaling-UP Image Restoration), built on the LoRA and Stable Diffusion XL (SDXL) frameworks and released by the XPixel group, helps you upscale your images in no time.

The model is trained on 20 million high-resolution images, each with descriptive text annotations. It gives you the power to restore images guided by detailed positive and negative text prompts.

SUPIR is released under a non-commercial license, so you can use it for research purposes only. For an in-depth understanding, you can go through their research paper.

Installation:

1. Install ComfyUI if you are new to it.

2. Update ComfyUI from Manager by choosing “Update ComfyUI“.

3. Move to the “ComfyUI/custom_nodes” folder. Click into the folder’s address bar, type “cmd”, and press Enter to open a command prompt.

Clone the repository with the following command:

git clone https://github.com/kijai/ComfyUI-SUPIR.git

4. Next, install the dependent libraries using the following command:

For a normal ComfyUI user:

pip install -r requirements.txt

For a ComfyUI portable user, move inside the “ComfyUI_windows_portable” folder. Open a command prompt there and run:

python_embeded\python.exe -m pip install -r ComfyUI\custom_nodes\ComfyUI-SUPIR\requirements.txt

5. Xformers (which speeds up render time) will be detected automatically if present. If you want to install it, use this command.

For a normal ComfyUI user:

pip install -U xformers --no-dependencies

For a ComfyUI portable user:

python_embeded\python.exe -m pip install -U xformers --no-dependencies

6. Download the SUPIR pruned models from Kijai’s Hugging Face repository. The original raw models (if you want them) can be downloaded from Camenduru’s repository.

As usual, put them inside the “ComfyUI/models/checkpoints” folder.

Workflow:

1. Load the workflow into ComfyUI by drag and drop. You can find the workflow inside the “ComfyUI/custom_nodes/ComfyUI-SUPIR/examples” folder.

2. Upload the blurry image into the “Load Image” node.

3. Choose an SDXL-based model (like DreamShaperXL, JuggernautXL, etc.) from the sdxl_model option.

4. Choose the relevant model from the SUPIR model loader option. There are two variants:

  • SUPIR-v0F_fp16 – Trained with a light degradation configuration.
  • SUPIR-v0Q_fp16 – Trained with default settings. This one generalizes well and produces high image quality in most cases.

Set the “scale by” parameter to choose the upscaling factor, such as 2x, 3x, or 4x. The more pixels you want, the longer the processing time; it depends entirely on your system configuration. For CFG, steps, sampler, and other parameters, select what works best with the SDXL model you use.
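A quick back-of-the-envelope check of why higher scale factors slow things down: the pixel count grows with the square of the scale factor. The 1024x1024 input size below is just an assumed example:

```shell
# Pixel math for upscaling: scaling by N multiplies the pixel count by N^2.
# The 1024x1024 input size is an assumed example, not a SUPIR requirement.
w=1024; h=1024
for scale in 2 3 4; do
  pixels=$(( w * scale * h * scale ))
  echo "${scale}x -> $(( w * scale ))x$(( h * scale )) = ${pixels} pixels ($(( scale * scale ))x the work)"
done
```

So a 4x upscale processes four times as many pixels as a 2x upscale of the same image, which is why render time climbs quickly on weaker GPUs.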

5. As officially reported, SUPIR uses the LLaVA LLM in the background to enhance overall performance, but it can also work without it. You can include it if your system can handle it.

6. Select the floating-point type. FP8 works great for the UNet, as it reduces VRAM usage considerably and keeps you away from out-of-memory errors. Use the “tiled_vae” option for the VAE. The tile sizes can be reduced from 1024 and 512 down to 512 and 256 pixels.

Keep in mind that using the tiled option also reduces VRAM usage but increases system RAM usage.
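To see why smaller tiles lower peak VRAM at the cost of more work, here is a rough sketch. The 4096x4096 output size is an assumed example, and real tiled VAE decoding adds overlap between tiles, which this ignores:

```shell
# Halving the VAE tile size quadruples the number of tiles to decode,
# but each tile holds only a quarter as many pixels (less VRAM per step).
side=4096   # assumed output resolution for illustration
for tile in 1024 512; do
  tiles=$(( (side / tile) * (side / tile) ))
  echo "tile ${tile}px -> ${tiles} tiles of $(( tile * tile )) pixels each"
done
```

That trade-off is the reason tiling trades speed (and some system RAM for buffering) for a smaller VRAM footprint.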

We tested on an RTX 3090 with 24 GB VRAM, where 3x and 4x upscaling was pretty smooth, but on an RTX 3060 with 12 GB VRAM it caused out-of-memory errors.

7. Finally, click Queue to start rendering.

Conclusion:

As mentioned, the SUPIR model is released under a non-commercial license. Enterprises interested in commercial use should obtain permission from Dr. Jinjin Gu (jinjin.gu@suppixel.ai).

Published by
admage
