Looking for a model that can fix your low-quality pictures? Restoring them is now a cakewalk. InstantIR (Instant-reference Image Restoration), released by Peking University, the InstantX Team, and The Chinese University of Hong Kong, can restore low-quality images with realistic texture and fine detail.
(Image source: InstantIR official page)
It is a novel diffusion-based blind image restoration (BIR) method that reconstructs images dynamically at inference time.
Further detail can be added through customized editing with extra text prompts. The model is released under the Apache 2.0 license, meaning it is free for personal and commercial use. Refer to their research paper for an in-depth understanding.
1. Install ComfyUI to your machine.
2. Move to the “ComfyUI/custom_nodes” directory. Click the folder address bar, type “cmd” to open a command prompt, then run the cloning command below.
git clone https://github.com/smthemex/ComfyUI_InstantIR_Wrapper.git
3. Install the required dependencies:
For normal ComfyUI user:
pip install -r requirements.txt
For Comfy Portable users:
python_embeded\python.exe -m pip install -r requirements.txt
4. Download the models (adapter.pt, aggregator.pt, previewer_lora_weights.bin) from the InstantX Hugging Face page. Create a new directory named “models” inside the “ComfyUI/models/InstantIR” folder and save the files there, i.e. in “ComfyUI/models/InstantIR/models”.
You also need to download Facebook’s DINOv2-Large model (model.safetensors) and the LCM LoRA SDXL weights (pytorch_lora_weights.safetensors).
Inside the “ComfyUI/models/InstantIR” folder, create another folder with a descriptive name and save these two models in it.
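If you prefer scripting the downloads, the steps above can be sketched with `huggingface_hub`. The repo IDs and in-repo file paths below are assumptions based on the article, and the two subfolder names are hypothetical; verify everything against the respective Hugging Face pages before running.

```python
from pathlib import Path

def target_dir(comfy_root: str, sub: str = "") -> str:
    """Build a destination folder under ComfyUI/models/InstantIR."""
    base = Path(comfy_root) / "models" / "InstantIR"
    return str(base / sub) if sub else str(base)

# (repo_id, in-repo filename, destination subfolder) -- assumed layout.
DOWNLOADS = [
    ("InstantX/InstantIR", "models/adapter.pt", ""),
    ("InstantX/InstantIR", "models/aggregator.pt", ""),
    ("InstantX/InstantIR", "models/previewer_lora_weights.bin", ""),
    ("facebook/dinov2-large", "model.safetensors", "dinov2-large"),
    ("latent-consistency/lcm-lora-sdxl",
     "pytorch_lora_weights.safetensors", "lcm-lora-sdxl"),
]

def fetch_all(comfy_root: str = "ComfyUI") -> None:
    """Download every file into its target folder (needs network access)."""
    # Imported here so the path helper works even without huggingface_hub.
    from huggingface_hub import hf_hub_download
    for repo_id, filename, sub in DOWNLOADS:
        hf_hub_download(repo_id=repo_id, filename=filename,
                        local_dir=target_dir(comfy_root, sub))
```

Call `fetch_all()` with your ComfyUI root; the InstantIR files keep their `models/` subfolder from the repo, so they land in “ComfyUI/models/InstantIR/models” as described above.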
5. Restart ComfyUI for the changes to take effect.
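Before relaunching, a quick sanity check can confirm the files landed where the wrapper expects them. The first three paths follow the “models” subfolder from step 4; the last two use hypothetical folder names, so substitute whatever names you created.

```python
from pathlib import Path

# Expected files relative to the ComfyUI root; the "dinov2-large" and
# "lcm-lora-sdxl" folder names are hypothetical -- use your own names.
EXPECTED = [
    "models/InstantIR/models/adapter.pt",
    "models/InstantIR/models/aggregator.pt",
    "models/InstantIR/models/previewer_lora_weights.bin",
    "models/InstantIR/dinov2-large/model.safetensors",
    "models/InstantIR/lcm-lora-sdxl/pytorch_lora_weights.safetensors",
]

def missing_files(comfy_root: str = "ComfyUI") -> list[str]:
    """Return the expected model files that are not present on disk."""
    root = Path(comfy_root)
    return [rel for rel in EXPECTED if not (root / rel).is_file()]
```

`missing_files()` returns an empty list when everything is in place.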
1. Find the workflow file named “workflow.json” inside your “ComfyUI/custom_nodes/ComfyUI_InstantIR_Wrapper” folder.
2. Drag and drop it into ComfyUI.
3. Load your checkpoint. It is recommended to use SDXL-based checkpoints only (e.g., JuggernautXL, DreamShaperXL); you can find plenty of them on Hugging Face or CivitAI.
4. Select the adapter checkpoint, aggregator checkpoint, SDXL LCM LoRA model, DINO model, and InstantIR LoRA checkpoint inside the InstantIR Loader node.
Load your target image.
5. Recommended settings:
CFG: 7
Steps: 20
The output can show some hallucination, which you can reduce by feeding in positive prompts. An LLM is also a good option for writing detailed instructions, yielding more accurate results.