Adding lighting effects to our images is now easier than ever, thanks to the IC-Light (Imposing Consistent Light) model in ComfyUI. Remember the days when we had to juggle multiple masks and adjustment settings in Adobe Photoshop to get the desired result?
Now it can be done in a single click. lllyasviel has released two versions: a text-conditioned relighting model and a background-conditioned model.
Let’s walk through the installation and workflow in ComfyUI and Automatic1111.
1. Install ComfyUI on your machine and get a basic understanding of how it works.
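If you do not have ComfyUI yet, one common way to set it up is to clone the official repository and install its Python dependencies. This is only a minimal sketch: it assumes git and a suitable Python are available, and GPU-specific PyTorch builds may still need to be installed separately as described in the ComfyUI README.

git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI
pip install -r requirements.txt
python main.py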
2. Make sure you are logged in to GitHub. Download the workflow from the GitHub repository, then drag and drop it onto the ComfyUI canvas.
3. After dropping it in, the missing nodes will show up in red. Open the ComfyUI Manager, select “Install missing custom nodes”, and install the ComfyUI-IC-Light node.
Alternative:
Navigate to the “ComfyUI/custom_nodes” folder. Open a command prompt by typing “cmd” into the folder’s address bar, then clone the repository by copying and pasting the command below:
git clone https://github.com/kijai/ComfyUI-IC-Light.git
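If the cloned node pack ships a requirements.txt (check inside the new ComfyUI-IC-Light folder), its Python dependencies also need to be installed into the environment ComfyUI runs in. A hedged sketch, assuming a plain pip-based setup:

cd ComfyUI-IC-Light
pip install -r requirements.txt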
4. Restart ComfyUI for the changes to take effect, then click “Refresh”.
5. Now it’s time to download the IC-Light models. Open the Manager and select “Install models”. Search for “ic-light” in the search box and download the models listed. All of them are installed into the “ComfyUI/models/unet/IC-Light” folder.
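If you prefer the command line, the same checkpoints can also be fetched manually. This is a sketch that assumes the files are pulled from the lllyasviel/ic-light repository on Hugging Face and that your ComfyUI install uses the default folder layout:

cd ComfyUI/models/unet
mkdir IC-Light
cd IC-Light
curl -L -O https://huggingface.co/lllyasviel/ic-light/resolve/main/iclight_sd15_fc.safetensors
curl -L -O https://huggingface.co/lllyasviel/ic-light/resolve/main/iclight_sd15_fbc.safetensors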
According to user case studies, the default “iclight_sd15_fc.safetensors” model (trained without offset noise) performs slightly better than “iclight_sd15_fcon.safetensors” (the same model trained with offset noise), which is why the version without offset noise is the default. So you can ignore the “iclight_sd15_fcon.safetensors” model.
6. Load your checkpoint. We have used the Juggernaut model. Always load a Stable Diffusion 1.5 checkpoint. Then load your image into the “Load Image” node.
7. Select the IC-Light model in “Load and Apply IC Light”. The fc version is conditioned on the foreground (text-conditioned relighting), while the fbc version also takes a background image as a condition. Add a prompt describing the realistic lighting effect you want, for example: studio lighting, spotlight, sunlight, red neon lights, cinematic, etc.
8. The “Create Shape Mask” node is a great place to experiment: tweak its settings to change the mask’s size and shape, and you can get circles, squares, and triangles.
Keep the KSampler settings at their defaults, or play with them to get something new.
9. Increase or decrease the mask blur using the “Grow Mask With Blur” node.
10. Finally, click “Queue Prompt” to generate the output.
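The same workflow can also be queued without the browser through ComfyUI’s HTTP API. This is only a sketch: it assumes ComfyUI is running on the default port 8188 and that workflow_api.json is a workflow exported in API format and wrapped under a top-level “prompt” key:

curl -X POST http://127.0.0.1:8188/prompt -H "Content-Type: application/json" -d @workflow_api.json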
Subject
First try: generated with the IC-Light model
We tested this with our subject using the positive prompt “detailed face, shadow from window”, and here is the result. It looks like a professionally shot photo. You can see how the shadow coming from the window covers the subject’s entire face.
Second try: generated with the IC-Light model
To minimize the effect further, you can adjust the prompt as needed. Here we used “detailed face, minimal, shadow from window”, and the result was significantly better.
1. Make sure you have Automatic1111 or ForgeUI installed.
2. For Automatic1111, first install the SD WebUI model patcher extension: paste the link below into the git URL field of Automatic1111’s Extensions tab, click “Install”, and then restart the WebUI by clicking “Apply and restart UI”.
https://github.com/huchenlei/sd-webui-model-patcher.git
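Alternatively, the extension can be installed from the command line by cloning it straight into the WebUI’s extensions folder (a sketch, assuming a default Automatic1111 folder layout):

cd stable-diffusion-webui/extensions
git clone https://github.com/huchenlei/sd-webui-model-patcher.git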
3. Download the two IC-Light models from the GitHub repository.
4. Now create a new folder named “ic-light” inside your WebUI’s “models” folder and put the downloaded models into it.
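From the command line this is just one new folder — a sketch assuming a default Automatic1111 install location:

cd stable-diffusion-webui/models
mkdir ic-light

Then move the downloaded model files into the new “ic-light” folder.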
The light map position (left, right, top, or bottom) can be selected to influence where the lighting effect comes from.
Make sure to work with Stable Diffusion 1.5 based checkpoints only; otherwise, it will generate errors.
Subject
Output