Flux.1 Kontext Dev: Top Image Editing & Styling Software

Install Flux Kontext in ComfyUI

Until now, high-performance image editing with generative models was locked behind closed APIs and proprietary tools, limiting innovation, control, and accessibility for developers and researchers. FLUX.1 Kontext changes that. Built by Black Forest Labs, this open-source 12B-parameter model delivers proprietary-level image editing while running on consumer hardware.

Types:

(a) FLUX.1 Kontext [pro] (API only) – the commercial variant, built for fast, iterative editing.

(b) FLUX.1 Kontext [max] (API only) – an experimental variant with stronger prompt adherence.

(c) FLUX.1 Kontext [dev] – the open-source variant, released under a non-commercial license, that you can run in your own applications.

Flux.1 Kontext working showcase (ref: official Hugging Face repo)

Here, we will talk about Flux.1 Kontext Dev, which is free for research and non-commercial use under the FLUX.1 Non-Commercial License, with full support for ComfyUI, HuggingFace Diffusers, and TensorRT available from day one. You can find more in-depth information in the research paper.
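Since day-one Diffusers support is mentioned above, here is a minimal sketch of editing an image through HuggingFace Diffusers (assuming a recent diffusers release that ships `FluxKontextPipeline`; the file names and prompt are illustrative, and imports are kept inside the function so the sketch can be read without torch/diffusers installed):

```python
def edit_image(input_path: str, prompt: str,
               guidance_scale: float = 2.5, num_steps: int = 28):
    """Edit an image with FLUX.1 Kontext [dev] via HuggingFace Diffusers.

    Running this requires a GPU with enough VRAM for the 12B model and
    a local/authorized copy of the gated weights.
    """
    import torch
    from diffusers import FluxKontextPipeline
    from diffusers.utils import load_image

    pipe = FluxKontextPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-Kontext-dev",
        torch_dtype=torch.bfloat16,
    )
    pipe.to("cuda")

    image = load_image(input_path)  # the full input image, no mask needed
    result = pipe(
        image=image,
        prompt=prompt,
        guidance_scale=guidance_scale,   # CFG; 2.5 is a common starting point
        num_inference_steps=num_steps,
    ).images[0]
    return result

# Example (downloads the full model weights on first run):
# edit_image("portrait.png", "girl is wearing black beautiful gown").save("out.png")
```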

Evaluated on KontextBench, FLUX.1 Kontext [dev] outperforms open models such as ByteDance's Bagel and HiDream-E1-Full, and even closed systems like Google's Gemini Flash Image. Independent evaluations by Artificial Analysis back these findings, validating its lead in categories like character preservation, iterative/local/global editing, and scene consistency.

Installation

New users: set up ComfyUI first. Existing users: update ComfyUI from the Manager section.

TYPE A: Native models

1. Set up the native Flux requirements. If you already use Flux workflows, you can skip downloading the VAE and text encoders, as Kontext uses the same models.

2. Download the Flux1-dev-kontext model and save it into the ComfyUI/models/diffusion_models folder.

3. Download the VAE and save it into the ComfyUI/models/vae folder.

4. Download the text encoders (clip_l plus t5xxl_fp16 or t5xxl_fp8_e4m3fn_scaled) and save them into the ComfyUI/models/text_encoders folder.

5. Restart ComfyUI and refresh the browser for the changes to take effect.
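To sanity-check the downloads above, a small script can verify that each file landed in the expected ComfyUI subfolder (the file names below are placeholders; substitute whatever you actually downloaded):

```python
from pathlib import Path

# Expected layout from the steps above; file names are illustrative.
EXPECTED = {
    "models/diffusion_models": "flux1-dev-kontext.safetensors",
    "models/vae":              "ae.safetensors",
    "models/text_encoders":    "t5xxl_fp8_e4m3fn_scaled.safetensors",
}

def missing_files(comfy_root: str) -> list[str]:
    """Return the expected model files that are not present under comfy_root."""
    root = Path(comfy_root)
    return [f"{sub}/{name}"
            for sub, name in EXPECTED.items()
            if not (root / sub / name).exists()]
```

If `missing_files("ComfyUI")` returns an empty list, the models are in place and a ComfyUI restart should pick them up.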

TYPE B: GGUF variant

You can also use the GGUF variant, which reduces VRAM usage and speeds up inference. Follow the instructions below if you have not set it up yet.

Download Flux Kontext GGUF model

(a) Set up the Flux GGUF custom node (ComfyUI-GGUF) by city96.

(b) Download the Flux Kontext GGUF model from Hugging Face (quantization levels range from Q2, faster generation at lower quality, to Q8, higher quality at slower speed) and save it into the ComfyUI/models/unet folder.

Workflow

Setup Flux Kontext workflow

1. Make sure you have the latest ComfyUI version. Open ComfyUI and go to the Workflow section (on the left) >> Browse Templates >> Flux >> Flux.1 Kontext Dev. Click the template to load the workflow.

download Flux Kontext workflow

You can also download the workflows from our Hugging Face repository.

2. Drag and drop the workflow file into ComfyUI.

3. Upload your image, and type the edit you want applied to it into the prompt box.

Load the Flux Kontext dev model, VAE, and text encoders, then hit the Run button to start the workflow.

Input image

This is our input image. We used the following settings:

Prompt: girl is wearing black beautiful gown

CFG: 2.5

Steps: 28

Output image

Here, the result is not cherry-picked; this is what we got on the first generation.
If you think Flux Kontext and Flux Fill are the same, well, they're not. Both handle image editing, but they are built for different editing types and use cases. FLUX.1 Kontext takes the full image and regenerates it based on your prompt, not just a masked region. It's ideal for applying style shifts, restructuring scenes, or replacing characters while keeping the general layout and context intact.
It can introduce artifacts, especially during multi-turn edits (as noted by Black Forest Labs themselves), because the entire image is reprocessed on every pass, gradually degrading sharpness and detail.
FLUX Fill is more like traditional inpainting: it modifies only the selected/masked region while preserving the rest of the image as-is.
Since Fill doesn't touch unmasked areas, it's less prone to artifacts or degradation with repeated use.
Use Flux Kontext when you want a broad, prompt-driven transformation like "make this photo into an anime scene." Use Flux Fill when you need surgical edits like "remove the tree from this spot."
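The difference shows up in the Diffusers APIs as well: Kontext's pipeline takes only the full image plus a prompt, while Fill's pipeline additionally requires a mask. A sketch, assuming `FluxKontextPipeline` and `FluxFillPipeline` are available in your diffusers version (imports live inside the functions so the two call shapes can be compared without the libraries installed):

```python
def kontext_edit(image, prompt: str):
    """Full-image, prompt-driven edit: the whole picture is regenerated."""
    import torch
    from diffusers import FluxKontextPipeline
    pipe = FluxKontextPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
    ).to("cuda")
    return pipe(image=image, prompt=prompt).images[0]

def fill_edit(image, mask, prompt: str):
    """Inpainting-style edit: only the masked region is changed."""
    import torch
    from diffusers import FluxFillPipeline
    pipe = FluxFillPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
    ).to("cuda")
    return pipe(image=image, mask_image=mask, prompt=prompt).images[0]
```

The extra `mask` argument is exactly why Fill leaves unmasked pixels untouched, while Kontext regenerates everything.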