
Until now, high-performance image editing with generative models was locked behind closed APIs and proprietary tools, limiting innovation, control, and accessibility for developers and researchers. FLUX.1 Kontext changes that. Built by Black Forest Labs, this open-source 12B-parameter model delivers proprietary-level image editing while running on consumer hardware.
Types:
(a) FLUX.1 Kontext [pro] (used via API) – the commercial variant, geared toward fast iterative editing.
(b) FLUX.1 Kontext [max] (used via API) – the experimental variant, with stronger prompt adherence.
(c) FLUX.1 Kontext [dev] – the open-source variant, released under a non-commercial license, which you can run in your own applications.
*Flux Kontext working (Ref. official Hugging Face repo)*
Here, we will focus on FLUX.1 Kontext [dev], which is free for research and non-commercial use under the FLUX.1 Non-Commercial License, with full support for ComfyUI, Hugging Face Diffusers, and TensorRT available from day one. You can find more in-depth information in the research paper.
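Since Diffusers support is available from day one, the same edit can also be run outside ComfyUI. Below is a minimal sketch, assuming a recent diffusers release that ships `FluxKontextPipeline` and a CUDA GPU with enough VRAM; the weights on Hugging Face are gated behind the FLUX.1 Non-Commercial License. Imports are kept inside the function so the sketch can be read (and loaded) without diffusers installed:

```python
def edit_image(image_path: str, prompt: str,
               guidance_scale: float = 2.5, num_inference_steps: int = 28):
    """Edit an image with FLUX.1 Kontext [dev] via Hugging Face Diffusers.

    Requires `pip install diffusers transformers accelerate` and a CUDA GPU;
    imports live inside the function so this sketch loads without them.
    """
    import torch
    from diffusers import FluxKontextPipeline
    from diffusers.utils import load_image

    pipe = FluxKontextPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
    )
    pipe.to("cuda")

    result = pipe(
        image=load_image(image_path),       # the image to edit
        prompt=prompt,                      # the editing instruction
        guidance_scale=guidance_scale,      # ~2.5 works well for Kontext
        num_inference_steps=num_inference_steps,
    )
    return result.images[0]

# Example (downloads the ~12B weights on first run):
# edit_image("input.png", "girl is wearing black beautiful gown").save("output.png")
```

The defaults mirror the settings used in the ComfyUI walkthrough below (CFG 2.5, 28 steps).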
Evaluated on KontextBench, FLUX.1 Kontext [dev] outperforms open models like ByteDance Bagel and HiDream-E1-Full, and even closed systems like Google's Gemini Flash Image. Independent evaluations by Artificial Analysis back these findings, validating its lead in categories like character preservation, iterative/local/global editing, and scene consistency.
Installation
Set up ComfyUI if you are a new user, or update ComfyUI from the Manager section if you already have it installed.
TYPE: A
1. Set up the native Flux settings. If you are already using Flux workflows, you do not need to download the VAE and text encoders again, as Kontext uses the same models.
2. Download Flux1-dev-kontext and save it into the ComfyUI/models/diffusion_models folder.
3. Download the VAE and save it into the ComfyUI/models/vae folder.
4. Download the text encoders (clip_l, and either t5xxl_fp16 or t5xxl_fp8_e4m3fn_scaled) and save them into the ComfyUI/models/text_encoders folder.
5. Restart ComfyUI and refresh the browser tab for the changes to take effect.
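The file placement in steps 2–4 can be sketched as a small helper. `COMFYUI_ROOT` and the exact filenames are assumptions for illustration (your downloads may be named differently); the subfolder mapping follows the steps above:

```python
from pathlib import Path

# Assumed ComfyUI install location -- adjust to your setup.
COMFYUI_ROOT = Path("ComfyUI")

# Map each downloaded file (hypothetical filenames) to its ComfyUI subfolder.
MODEL_LAYOUT = {
    "flux1-dev-kontext.safetensors": "models/diffusion_models",
    "ae.safetensors": "models/vae",  # the Flux VAE
    "clip_l.safetensors": "models/text_encoders",
    "t5xxl_fp8_e4m3fn_scaled.safetensors": "models/text_encoders",
}

def destination(filename: str) -> Path:
    """Return the full path a downloaded file should be saved to."""
    return COMFYUI_ROOT / MODEL_LAYOUT[filename] / filename

print(destination("ae.safetensors"))  # ComfyUI/models/vae/ae.safetensors on POSIX
```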
TYPE: B
You can also use the GGUF variant, which reduces memory requirements and can improve inference speed on VRAM-limited GPUs. Follow these instructions if you have not set it up yet.
(a) Set up the Flux GGUF custom node (ComfyUI-GGUF) by city96.
(b) Download a Flux Kontext GGUF model (quantizations range from Q2, faster with lower quality, to Q8, slower with higher quality) from Hugging Face and save it into the ComfyUI/models/unet folder.
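The Q2-to-Q8 trade-off above can be made concrete with a hypothetical quant picker. The quant names follow common GGUF conventions, and the GiB thresholds are illustrative guesses, not official requirements:

```python
# Hypothetical VRAM thresholds (GiB) for common GGUF quantization levels,
# ordered from smallest/fastest to largest/highest quality.
# These numbers are illustrative, not official requirements.
QUANT_LEVELS = [
    ("Q2_K", 8),   # fastest, lowest quality
    ("Q4_K_M", 12),
    ("Q6_K", 16),
    ("Q8_0", 20),  # slowest, highest quality
]

def pick_quant(vram_gib: float) -> str:
    """Pick the highest-quality quant that fits the given VRAM budget."""
    chosen = QUANT_LEVELS[0][0]  # fall back to the smallest quant
    for name, needed in QUANT_LEVELS:
        if vram_gib >= needed:
            chosen = name
    return chosen

print(pick_quant(13))  # -> Q4_K_M
```

The point is simply that you pick the largest quant your GPU can hold; dropping below Q4 costs noticeable quality.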
Workflow
1. Make sure you have the latest ComfyUI version. Open ComfyUI, go to the Workflow section (on the left) >> Browse Templates >> Flux >> Flux.1 Kontext Dev, then click the template to load and run the workflow.
You can also download the workflows from our Hugging Face repository.
2. Drag and drop the workflow file into ComfyUI.
3. Upload your image and type what you want to do with it into the prompt box.
Load the Flux Kontext [dev] model, VAE, and text encoders, then hit the Run button to start the workflow.
*Input image*
This is our input image. We put the following prompt into the prompt box:
Prompt: girl is wearing black beautiful gown
CFG: 2.5
Steps: 28
*Output image*