Whether you are touching up photos, creating digital art, or developing innovative applications, FLUX.1 Tools, released by Black Forest Labs, is a powerful suite of models that puts fine-grained control and flexibility right at your fingertips.
It includes four features (Fill, Depth, Canny, and Redux) that are available to the community for FLUX.1 Dev and FLUX.1 Pro. They work much like ControlNet and IP-Adapter techniques, but are far more refined than any of the third-party Flux ControlNet models.
We have also covered other third-party Flux ControlNets, LoRAs, and Flux inpainting models in an earlier article, if you haven't checked it yet.
Four distinct features have been officially released, illustrated below:
Flux Inpainting illustration
Flux Outpainting illustration
(a) FLUX.1 Fill- Based on a 12-billion-parameter rectified flow transformer, this model is capable of inpainting and outpainting, opening up editing functionality driven by textual input. Compared with models like Ideogram 2.0 or Alimama's ControlNet Flux inpainting, it gives you more natural results with more refined editing at inference.
Flux depth illustration
(b) FLUX.1 Depth- Like other depth models, it extracts a depth map from an input image and conditions generation on it together with a text prompt. There are two variants: a LoRA for easier editing, and the raw model for maximum performance without compromising quality.
Flux Canny illustration
(c) FLUX.1 Canny- The model detects and extracts Canny edges from your input image and conditions generation on them together with a text prompt.
Again, this model is released in two variants: a LoRA, and the raw model for maximum output quality.
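For intuition about what the Canny model consumes, edge extraction boils down to finding strong intensity gradients. The sketch below is a deliberately simplified stand-in in plain Python (a Sobel gradient plus a fixed threshold, not the full Canny pipeline with Gaussian smoothing, non-maximum suppression, and hysteresis):

```python
# Simplified edge-detection sketch (illustrative only). Real Canny, as used
# by FLUX.1 Canny preprocessing, adds smoothing and hysteresis on top of this.

def sobel_edges(img, threshold=4):
    """Return a binary edge map (1 = edge) for a 2D grayscale grid."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Horizontal and vertical Sobel responses.
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            if abs(gx) + abs(gy) >= threshold:
                edges[y][x] = 1
    return edges

# A tiny image: dark left half, bright right half -> one vertical edge.
img = [[0, 0, 0, 9, 9, 9] for _ in range(5)]
edges = sobel_edges(img)
```

Only the pixels along the dark-to-bright boundary light up in the edge map; flat regions stay zero, which is exactly the kind of structural outline the Canny model uses to guide generation.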
(d) FLUX.1 Redux- A new tool that works like an IP-Adapter, giving you a way to mix and recreate your input images with detailed image embeddings. The model is capable of generating new stylized images from your reference input image.
1. If you are a new user, install ComfyUI and the official Flux model weights released by Black Forest Labs.
2. Update ComfyUI from the Manager menu by clicking on “Update All“.
Make sure you are using the raw FLUX.1 Dev model released by Black Forest Labs, saved to the “ComfyUI/models/unet” folder, and not any other variant.
3. Download the models you want to use:
Before downloading any of the models, you have to accept the terms of use on their Hugging Face repository pages. So, make sure you are logged in to your Hugging Face account.
(a) Flux Fill– Download Flux Fill Dev(flux1-fill-dev.safetensors) from Hugging Face repository and save it to “ComfyUI/models/diffusion_models” folder.
(b) Flux Depth- Download Flux Depth Dev (flux1-depth-dev.safetensors) from Hugging Face repository and save it to “ComfyUI/models/diffusion_models” folder.
(c) Flux Depth LoRA- Download Flux Depth Dev LoRA (flux1-depth-dev-lora.safetensors) from Hugging Face repository and save it to “ComfyUI/models/loras” folder.
(d) Flux Canny- Download Flux Canny Dev (flux1-canny-dev.safetensors) from Hugging Face repository and put it to your “ComfyUI/models/diffusion_models” folder.
(e) Flux Canny LoRA- Download Flux Canny Dev LoRA (flux1-canny-dev-lora.safetensors) from Hugging Face repository and put it to “ComfyUI/models/loras” folder.
(f) Flux Redux- Download Flux Redux Dev (flux1-redux-dev.safetensors) from Hugging Face repository and place it into your “ComfyUI/models/style_models” folder. You also need to download the SigLIP CLIP vision encoder and save it into your “ComfyUI/models/clip_vision” folder.
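The folder layout from the steps above can be summarized in a small Python sketch. The `COMFYUI_ROOT` value is an assumption (adjust it to wherever your ComfyUI install lives); the filenames and subfolders match the download list:

```python
# Sketch: where each FLUX.1 Tools file belongs inside the ComfyUI tree,
# mirroring the download steps above.
import os

COMFYUI_ROOT = "ComfyUI"  # assumption: adjust to your actual install path

MODEL_FOLDERS = {
    "flux1-fill-dev.safetensors":       "models/diffusion_models",
    "flux1-depth-dev.safetensors":      "models/diffusion_models",
    "flux1-depth-dev-lora.safetensors": "models/loras",
    "flux1-canny-dev.safetensors":      "models/diffusion_models",
    "flux1-canny-dev-lora.safetensors": "models/loras",
    "flux1-redux-dev.safetensors":      "models/style_models",
}

def dest_path(filename):
    """Full destination path for a downloaded model file."""
    return os.path.join(COMFYUI_ROOT, MODEL_FOLDERS[filename], filename)
```

For example, `dest_path("flux1-redux-dev.safetensors")` resolves to `ComfyUI/models/style_models/flux1-redux-dev.safetensors`, so a quick check with this helper before restarting ComfyUI can save a round of "model not found" errors.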
4. Restart ComfyUI for the changes to take effect.
1. Download any or all of the workflows listed under Flux Workflows in our Hugging Face repository.
2. Just drag and drop the workflow file into ComfyUI.
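Besides drag-and-drop, a workflow can also be queued programmatically through ComfyUI's local HTTP API. The sketch below assumes ComfyUI is running at its default address (127.0.0.1:8188) and that the workflow was exported in API format (via “Save (API Format)” in ComfyUI); the filename `workflow_api.json` is a hypothetical placeholder:

```python
# Minimal sketch: queue an API-format ComfyUI workflow over the local HTTP API.
# Assumes a running ComfyUI instance at the default host/port.
import json
import urllib.request

def build_payload(workflow):
    """Wrap an API-format workflow dict in the body ComfyUI's /prompt endpoint expects."""
    return json.dumps({"prompt": workflow}).encode("utf-8")

def queue_workflow(path, host="127.0.0.1", port=8188):
    """Load a workflow JSON file and submit it to ComfyUI's queue."""
    with open(path) as f:
        workflow = json.load(f)
    req = urllib.request.Request(
        f"http://{host}:{port}/prompt",
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req).read()

# Usage (with ComfyUI running): queue_workflow("workflow_api.json")
```

Note that a workflow saved for drag-and-drop uses the UI format; the /prompt endpoint expects the separate API-format export, so use the right save option if you script it this way.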