Enhanced ControlNet Alpha for Flux Inpainting in ComfyUI

How to do inpainting with the Flux model

The Inpainting ControlNet Alpha model for FLUX.1-dev was released by the Alimama Creative Team, which works under Alibaba. The model weights fall under the FLUX.1-dev non-commercial license.

It was trained on 12 million images from the LAION-2B dataset and other internal sources, all at a resolution of 768×768. This is the optimal size for inference; using other resolutions can lead to subpar results.
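Because of that, it helps to bring your source image to 768×768 before inpainting. Here is a minimal preprocessing sketch using Pillow; the file names are placeholders:

```python
# Resize the source image to the 768x768 resolution the ControlNet
# was trained at. Paths are placeholders.
from PIL import Image

TARGET_SIZE = (768, 768)

image = Image.open("input.png").convert("RGB").resize(TARGET_SIZE, Image.LANCZOS)
image.save("input_768.png")
```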

As officially reported, the model enhances your image generation capabilities and offers clear advantages over the older SDXL inpainting model.


Installation:

1. First, install ComfyUI. If you have not already, also follow the basic Flux installation workflow.

2. From the ComfyUI Manager, click “Install Custom Nodes”. Then, install “ComfyUI Essentials” by cubiq.


3. Next, download the Flux inpainting ControlNet model weights (safetensors) from Alimama Creative’s Hugging Face repository.

After downloading, save the file inside the “ComfyUI/models/controlnet” folder. You can rename it to something descriptive like “Alimama-Flux-controlnet-inpainting.safetensors” to keep your workflow well organized.
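If you prefer scripting the download, here is a minimal sketch using the huggingface_hub library. The repo id and weight filename are assumptions based on the Alimama Creative page; verify them on the model page before running:

```python
# Download the ControlNet weights straight into ComfyUI's controlnet folder.
# Repo id and filename are assumptions; confirm them on the model page.
import os
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="alimama-creative/FLUX.1-dev-Controlnet-Inpainting-Alpha",
    filename="diffusion_pytorch_model.safetensors",
    local_dir="ComfyUI/models/controlnet",
)

# Optional: rename to something descriptive, as suggested above.
os.rename(path, os.path.join(os.path.dirname(path),
                             "Alimama-Flux-controlnet-inpainting.safetensors"))
```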

Workflow:

1. Download the workflow from Alimama Creative’s Hugging Face page. It uses the native Flux Dev workflow.

2. Drag and drop it into ComfyUI. If you see red error nodes, open the Manager and click “Install Missing Custom Nodes”. Restart and refresh ComfyUI for the changes to take effect.


3. Choose “Flux.1 Dev” as the model weight in the “Load Diffusion Model” node. Load the ControlNet inpainting model you downloaded in the “Load ControlNet Model” node.


4. Load your target image in the “Load Image” node. Right-click the image and select “Open in MaskEditor”. Mask the area you want to inpaint and click “Save to node”.
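The MaskEditor is the easiest way to paint a mask by hand, but for repeatable or batch jobs you can also build the mask in code. A small sketch with Pillow; the rectangle coordinates are just an example:

```python
# Build an inpainting mask programmatically: white (255) marks the region
# to repaint, black (0) is preserved. Coordinates are hypothetical.
from PIL import Image, ImageDraw

mask = Image.new("L", (768, 768), 0)            # start fully preserved
draw = ImageDraw.Draw(mask)
draw.rectangle((200, 250, 560, 620), fill=255)  # region to inpaint
mask.save("mask_768.png")
```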


5. Add descriptive positive prompts in the “CLIP Text Encode” node.

6. Configure the settings:

Using the “t5xxl-FP16” and “flux1-dev-fp8” models for 28-step inference consumes significant GPU memory (around 27 GB). Here are some key insights to keep in mind:


Inference time: With CFG = 3.5, you can expect an inference time of about 27 seconds. Dropping to CFG = 1 reduces that to approximately 15 seconds.

Acceleration: Another tip is to use the Hyper-FLUX LoRA to speed up your inference times.

Parameter adjustments: For optimal results, tune control-strength, control-end-percent, and CFG. A good starting point is control-strength = 0.9, control-end-percent = 1.0, and CFG = 3.5.

For best performance, set “controlnet_conditioning_scale” between 0.9 and 0.95; the script sketch after these steps shows the same settings applied outside of ComfyUI.

7. Finally, click the “Queue” button to start your image generation.
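If you would rather skip the ComfyUI graph entirely, the same settings map onto a Diffusers script. The sketch below is a minimal example, assuming the controlnet_flux.py and pipeline_flux_controlnet_inpaint.py files distributed in the Alimama Creative repository sit next to the script; class names, argument names, and file names should be checked against the current model card.

```python
# Sketch: the recommended settings applied in a Diffusers script. Assumes the
# pipeline/ControlNet classes shipped in the Alimama Creative repo are on the
# Python path; verify class and argument names against the model card.
import torch
from diffusers.utils import load_image
from controlnet_flux import FluxControlNetModel
from pipeline_flux_controlnet_inpaint import FluxControlNetInpaintingPipeline

controlnet = FluxControlNetModel.from_pretrained(
    "alimama-creative/FLUX.1-dev-Controlnet-Inpainting-Alpha",
    torch_dtype=torch.bfloat16,
)
pipe = FluxControlNetInpaintingPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    controlnet=controlnet,
    torch_dtype=torch.bfloat16,
).to("cuda")
# pipe.enable_model_cpu_offload()  # alternative to .to("cuda") if ~27 GB VRAM is too much
# pipe.load_lora_weights("ByteDance/Hyper-SD",
#                        weight_name="Hyper-FLUX.1-dev-8steps-lora.safetensors")
#                        # optional Hyper-FLUX speed-up (assumed repo/filename)

image = load_image("input_768.png")  # placeholder paths, already 768x768
mask = load_image("mask_768.png")

result = pipe(
    prompt="a cozy red brick fireplace",  # hypothetical prompt
    height=768,
    width=768,
    control_image=image,
    control_mask=mask,
    num_inference_steps=28,
    controlnet_conditioning_scale=0.9,    # recommended range: 0.9-0.95
    guidance_scale=3.5,                   # CFG; use 1 for ~15 s inference
    generator=torch.Generator(device="cuda").manual_seed(42),
).images[0]
result.save("result.png")
```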

Conclusion:

This is the alpha version, still in its testing stage, so you may not always get the results you expect. Still, it is genuinely helpful for the community that these tech giants are training and releasing their own ControlNet models.