Automated IP Adapter Setup and Workflow Optimization


If you are struggling to generate images in a particular style from a reference image, then IP Adapter (Image Prompt Adapter) will prove to be a life saver.

IP Adapter working explanation

According to the research paper, this method lets pre-trained diffusion models create images in any art style. Previously, we had to fine-tune pre-trained models to generate images in different styles and poses, but IP Adapter claims to produce results that compare well with fine-tuned models, with no extra training.
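For readers who prefer scripting outside the WebUI, here is a minimal sketch of that idea using the diffusers library (the model id, weight file name, and scale value are assumptions to adapt to your own setup): the adapter is simply loaded on top of a frozen, pre-trained pipeline, with no fine-tuning step involved.

# Minimal sketch: image-prompt conditioning with IP Adapter on a frozen SD 1.5 pipeline.
# Assumes diffusers, transformers, and accelerate are installed and a CUDA GPU is available.
import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import load_image

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # assumption: any SD 1.5 checkpoint works
).to("cuda")

# Load the IP Adapter weights on top of the pre-trained model -- no training involved.
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
pipe.set_ip_adapter_scale(0.6)  # how strongly the reference image steers the result

reference = load_image("reference.png")  # your style/pose reference image
image = pipe(
    prompt="wearing sunglasses",
    ip_adapter_image=reference,
    num_inference_steps=50,
).images[0]
image.save("result.png")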

Features:

1. Most suitable for beginners who don’t want to get into complicated setups.

2. No need to train or fine-tune diffusion models.

3. Saves time, because no training is required.

4. Suitable for production use, since it saves the storage otherwise needed for fine-tuned models.

5. Image generation in any style, in a controlled way.

Installation in Automatic1111:

0. First, install and update Automatic1111 if you have not already done so.

1. Make sure you have ControlNet SD1.5 and ControlNet SDXL installed.


2. Navigate to the models recommended for IP Adapter in the official Hugging Face repository and open the “models” section.

Downloading IP Adapter models

Download the IP Adapter model “ip-adapter-plus-face_sd15.bin” and rename its extension from “.bin” to “.pth” before using it.

And put it into your “stable-diffusion-webui\extensions\sd-webui-controlnet\models” or “stable-diffusion-webui\models\ControlNet” folder.
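If you prefer to script this step, here is a small sketch (the download location and WebUI root below are assumptions; adjust them to your machine) that renames the .bin file to .pth and moves it into the ControlNet models folder:

# Rename ip-adapter-plus-face_sd15.bin to .pth and move it into the ControlNet models folder.
from pathlib import Path
import shutil

downloads = Path.home() / "Downloads"              # assumption: where the .bin file was saved
webui_root = Path("stable-diffusion-webui")        # assumption: your WebUI root folder
target_dir = webui_root / "models" / "ControlNet"  # or extensions/sd-webui-controlnet/models
target_dir.mkdir(parents=True, exist_ok=True)

src = downloads / "ip-adapter-plus-face_sd15.bin"
dst = target_dir / "ip-adapter-plus-face_sd15.pth"  # same file, just a .pth extension
shutil.move(str(src), str(dst))
print(f"Moved {src.name} -> {dst}")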

3. Next, move to the SDXL collection in the repository and download the three updated IP Adapter models from Hugging Face.


Also, download these files:

  • ip-adapter_sd15.pth
  • ip-adapter_sd15_plus.pth
  • ip-adapter_xl.pth

And put them into your “stable-diffusion-webui\models\ControlNet” folder.
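These downloads can also be scripted. The sketch below deliberately leaves the URLs as placeholders; copy the actual download links from the Hugging Face pages mentioned above before running it.

# Download the three IP Adapter .pth files straight into the ControlNet models folder.
from pathlib import Path
import requests

target_dir = Path("stable-diffusion-webui") / "models" / "ControlNet"  # adjust to your install
target_dir.mkdir(parents=True, exist_ok=True)

files = {
    "ip-adapter_sd15.pth": "PASTE-DOWNLOAD-LINK-HERE",
    "ip-adapter_sd15_plus.pth": "PASTE-DOWNLOAD-LINK-HERE",
    "ip-adapter_xl.pth": "PASTE-DOWNLOAD-LINK-HERE",
}

for name, url in files.items():
    dest = target_dir / name
    if dest.exists():
        continue  # skip files that are already in place
    with requests.get(url, stream=True, timeout=60) as response:
        response.raise_for_status()
        with open(dest, "wb") as f:
            for chunk in response.iter_content(chunk_size=1 << 20):  # 1 MiB chunks
                f.write(chunk)
    print(f"Saved {dest}")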

4. For this illustration, we are using two checkpoints. Download Realistic Vision (SDXL Base 1.0) and Rev Animated (Stable Diffusion 1.5) from CivitAI. You can use other relevant models for your workflow, but we are using these:

  • Realistic Vision 5.1 (SDXL Base 1.0) (download the inpainting and safetensors files)
  • Rev Animated (Stable Diffusion 1.5)

Store them in the checkpoints folder. Finally, download the relevant VAEs and store them in the “VAE” folder.

5. Now, open Automatic1111 and use the settings below to work through the workflow.

IP Adapter Workflow (Automatic1111):

This is the workflow in the Automatic1111 WebUI, where we will look at various ways to use IP Adapter effectively.

Method 1 (Generating the final image from a reference image):

Here, we want to generate an image with different styles and objects. Let’s say we want to generate an image of the same face wearing red sunglasses.

IP Adapter settings and configuration

Select the txt2img tab and set up these settings:

- Positive prompt: “wearing sunglasses”

- Sampling Method: DPM++ 2M SDE Karras

- Sampling Steps: 50

- Hires Fix: Upscaler: 4x, Hires Steps: 10, Denoising Strength: 0.3

- Width: 1024

- Height: 1024

- Refiner: SDXL Refiner 1.0

- Switch at: 0.6

ControlNet Unit 0 tab: Drag and drop your reference image (pose/style).

Check the “Enable” box and set Control Type: IP Adapter.

Preprocessor: IP Adapter CLIP SDXL

Model: ip-adapter_xl

ControlNet Unit 1 tab: Drag and drop the same image loaded earlier.

Check the “Enable” box and set Control Type: OpenPose.

Preprocessor: OpenPose Full (to preview the preprocessor result, click the star button)

Model: sd_xl OpenPose

Then click the “Generate” button to create your newly styled image.
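If you drive Automatic1111 through its API instead of the UI (start the WebUI with the --api flag), the same settings can be expressed as a JSON payload. The sketch below is a rough outline, not a verified recipe: the ControlNet “alwayson_scripts” field names and the exact preprocessor/model strings vary between extension versions, so check them against your local /docs page.

# Sketch: the txt2img settings above, sent to a locally running Automatic1111 instance.
import base64
import requests

def b64(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

reference = b64("reference.png")  # the image dropped into both ControlNet units

payload = {
    "prompt": "wearing sunglasses",
    "sampler_name": "DPM++ 2M SDE Karras",
    "steps": 50,
    "width": 1024,
    "height": 1024,
    "enable_hr": True,
    "hr_upscaler": "R-ESRGAN 4x+",        # assumption: any 4x upscaler available in your install
    "hr_second_pass_steps": 10,
    "denoising_strength": 0.3,
    "refiner_checkpoint": "sd_xl_refiner_1.0.safetensors",  # assumption: name as shown in your UI
    "refiner_switch_at": 0.6,
    "alwayson_scripts": {
        "controlnet": {
            "args": [
                {   # Unit 0: IP Adapter
                    "enabled": True,
                    "input_image": reference,
                    "module": "ip-adapter_clip_sdxl",
                    "model": "ip-adapter_xl",      # use the exact name from your dropdown
                },
                {   # Unit 1: OpenPose
                    "enabled": True,
                    "input_image": reference,
                    "module": "openpose_full",
                    "model": "sd_xl_openpose",     # use the exact name from your dropdown
                },
            ]
        }
    },
}

response = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload, timeout=600)
response.raise_for_status()
images = response.json()["images"]  # list of base64-encoded PNGs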

We tried generating the sunglasses multiple times.

IP Adapter results 1 to 4
We generated four different results. You can see that the first result gets the corners of the sunglasses right, but the rest are not satisfactory. This is largely due to the impact of the denoising strength.
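To see that effect for yourself, one option is to sweep the denoising strength over the API payload sketched earlier (again only a sketch; the ControlNet block from the previous example would be merged into base_payload for the full setup):

# Sweep the Hires Fix denoising strength and save one result per value for comparison.
import base64
import requests

base_payload = {
    "prompt": "wearing sunglasses",
    "steps": 50,
    "width": 1024,
    "height": 1024,
    "enable_hr": True,
    "hr_second_pass_steps": 10,
    # ControlNet "alwayson_scripts" block from the earlier sketch goes here as well.
}

for strength in (0.2, 0.3, 0.4, 0.5):
    payload = dict(base_payload, denoising_strength=strength)
    r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload, timeout=600)
    r.raise_for_status()
    png = base64.b64decode(r.json()["images"][0])
    with open(f"sunglasses_denoise_{strength}.png", "wb") as f:
        f.write(png)
    print(f"denoising_strength={strength} -> sunglasses_denoise_{strength}.png")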

Installation in ComfyUI:

A recent update of IP Adapter Plus (V2) in ComfyUI created a lot of problems in the AI community. The main reason the developer rewrote the code is that the previous codebase was not suitable for further upgrades.
Below is a simple step-by-step installation guide for the relevant models.

Instructions:

To find the real solution, we joined the developer’s one-hour live session, where he recommended:
  • IP Adapter (V1): If you don’t want to break your previous workflows built with the older IP Adapter, you don’t have to upgrade to V2; but because V1 is deprecated, you won’t get future updates.
  • IP Adapter (V2): If you want to upgrade to IP Adapter V2 and receive regular updates, you will need to rebuild your workflows from scratch, because the previous workflows will not be compatible.
Apart from that, a new IPAdapter Unified Loader is embedded in Version 2 to manage all the old models (checkpoints) in the background.

Upgrading to IP Adapter V2:

Updating all in ComfyUI
1. First, open ComfyUI, navigate to the “Manager”, and click “Update All” to update ComfyUI and its nodes.
2. If you don’t have the relevant nodes installed and you are getting a missing-node error, there are two ways to install them:
Method 1 (Manual):
(a) Download the nodes from the official IP Adapter V2 repository; for easy access, the same models have been listed below. Download these two models, put them inside the “ComfyUI_windows_portable\ComfyUI\models\clip_vision” folder, and rename them as shown in the table below (a scripted download sketch follows this step).

Sl. No. | Rename the downloaded model to (copy this name) | Download details
1 | CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors | Link (this is the first model’s new name)
2 | CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors | Link (this is the second model’s new name)

If you are a Windows user, you need to enable the “File name extensions” option under the View menu in File Explorer; otherwise, you can’t change the files’ extensions while renaming them.
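As an alternative to downloading and renaming by hand, this step can be scripted with huggingface_hub. The repository id and file paths below reflect the IP Adapter Hugging Face repository layout at the time of writing and should be treated as assumptions to verify before running:

# Fetch the two CLIP vision encoders and store them under the names the ComfyUI nodes expect.
from pathlib import Path
import shutil
from huggingface_hub import hf_hub_download

clip_vision_dir = Path("ComfyUI_windows_portable/ComfyUI/models/clip_vision")
clip_vision_dir.mkdir(parents=True, exist_ok=True)

# (assumed) file in the h94/IP-Adapter repo -> required local name
encoders = {
    "models/image_encoder/model.safetensors": "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors",
    "sdxl_models/image_encoder/model.safetensors": "CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors",
}

for repo_path, local_name in encoders.items():
    cached = hf_hub_download(repo_id="h94/IP-Adapter", filename=repo_path)
    shutil.copy(cached, clip_vision_dir / local_name)
    print(f"Installed {local_name}")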

(b) Next, download the models listed below and save them inside the “ComfyUI_windows_portable\ComfyUI\models\ipadapter” directory. Here you don’t need to rename any model; just save them as they are. (A scripted download sketch follows the table.)
Sl. No. | Model name | Description
1 | ip-adapter_sd15.safetensors | Basic model, average strength
2 | ip-adapter_sd15_light_v11.bin | Light impact model
3 | ip-adapter-plus_sd15.safetensors | Plus model, very strong
4 | ip-adapter-plus-face_sd15.safetensors | Face model, portraits
5 | ip-adapter-full-face_sd15.safetensors | Stronger face model, not necessarily better
6 | ip-adapter_sd15_vit-G.safetensors | Base model, requires bigG CLIP vision encoder
7 | ip-adapter_sdxl_vit-h.safetensors | SDXL model
8 | ip-adapter-plus_sdxl_vit-h.safetensors | SDXL plus model
9 | ip-adapter-plus-face_sdxl_vit-h.safetensors | SDXL face model
10 | ip-adapter_sdxl.safetensors | vit-G SDXL model, requires bigG CLIP vision encoder
11 | ip-adapter_sd15_light.safetensors (deprecated) | v1.0 light impact model
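A similar sketch can fetch models from the table above in one go. Again, the repository id and subfolders are assumptions based on the IP Adapter Hugging Face repository layout; extend the lists with whichever entries from the table you actually need:

# Fetch a subset of the IP Adapter models listed above into ComfyUI's ipadapter folder.
from pathlib import Path
import shutil
from huggingface_hub import hf_hub_download

ipadapter_dir = Path("ComfyUI_windows_portable/ComfyUI/models/ipadapter")
ipadapter_dir.mkdir(parents=True, exist_ok=True)

# (assumed) subfolder in the h94/IP-Adapter repo -> files to fetch from it
wanted = {
    "models": ["ip-adapter_sd15.safetensors", "ip-adapter-plus_sd15.safetensors"],
    "sdxl_models": ["ip-adapter_sdxl_vit-h.safetensors", "ip-adapter-plus_sdxl_vit-h.safetensors"],
}

for subfolder, names in wanted.items():
    for name in names:
        cached = hf_hub_download(repo_id="h94/IP-Adapter", filename=f"{subfolder}/{name}")
        shutil.copy(cached, ipadapter_dir / name)  # keep the original file name
        print(f"Installed {name}")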
(c) If you want to work with the FaceID feature, download these models and save them inside the “ComfyUI_windows_portable\ComfyUI\models\ipadapter” folder. If you don’t have an “ipadapter” folder, you need to create it manually.
Sl. No. | Model name | Download | Description
1 | ip-adapter-faceid_sd15.bin | Link | Base FaceID model
2 | ip-adapter-faceid-plusv2_sd15.bin | Link | FaceID plus v2
3 | ip-adapter-faceid-portrait-v11_sd15.bin | Link | Text prompt style transfer for portraits
4 | ip-adapter-faceid_sdxl.bin | Link | SDXL base FaceID
5 | ip-adapter-faceid-plusv2_sdxl.bin | Link | SDXL plus v2
6 | ip-adapter-faceid-portrait_sdxl.bin | Link | SDXL text prompt style transfer
7 | ip-adapter-faceid-plus_sd15.bin (deprecated) | Link | FaceID plus v1
8 | ip-adapter-faceid-portrait_sd15.bin (deprecated) | Link | v1 of the portrait model
(d) Many of the FaceID models use a LoRA in the background, so use the “IPAdapter Unified Loader FaceID” node and everything will be managed automatically. If you need to work with the LoRAs directly, download these models and save them inside the “ComfyUI_windows_portable\ComfyUI\models\loras” folder.
While using these FaceID models, make sure you pair them with the compatible LoRA models listed below:
Sl. No. | Model name | Download | Description
1 | ip-adapter-faceid_sd15_lora.safetensors | Link | For SD1.5
2 | ip-adapter-faceid-plusv2_sd15_lora.safetensors | Link | For SD1.5
3 | ip-adapter-faceid_sdxl_lora.safetensors | Link | SDXL FaceID LoRA
4 | ip-adapter-faceid-plusv2_sdxl_lora.safetensors | Link | SDXL plus v2 LoRA
5 | ip-adapter-faceid-plus_sd15_lora.safetensors (deprecated) | Link | LoRA for the deprecated FaceID plus v1 model
Method 2 (Alternative):
This is an alternative way to get all the required nodes from within ComfyUI itself.
1. First, download the ComfyUI workflows with IP Adapter V2 enabled from our Hugging Face repository.
2. Open the workflows one by one; any missing nodes will show up colored red.
To install them, move to the ComfyUI Manager, select “Install Custom Nodes”, search for the relevant missing nodes, and install each one by clicking its “Install” button.

IP Adapter Workflow (ComfyUI):

ComfyUI workflow
Image source: IP Adapter (Matteo’s) GitHub repository
For easy understanding, we have listed different official ComfyUI workflows with IP Adapter (V2). You can download them from our Hugging Face repository:

Errors while upgrading to IP Adapter V2:

InsightFace missing error
1. InsightFace missing error: If you get an error while installing InsightFace, you need to download and install the necessary InsightFace wheel files from the respective GitHub repository.
InsightFace files for different Python versions
Here, there are multiple files for different Python versions.
To check your Python version, move to the “comfyui\python_embeded” folder and open the “python.exe” file; the Python version will be shown in the command prompt.
Then figure out which file matches your Python version and download the InsightFace .whl file compatible with it.
After this, you also need to set the environment path. To do that, search Windows for “environment variables” and open it.
Select the Path variable and edit it using the Edit button. Now you need to make these modifications:
  • Paste the path of your python.exe file and add an extra semicolon (;). To get the path, find the “python.exe” file inside the “comfyui\python_embeded” folder, right-click it, and copy the path.
  • Paste the path of the “python_embeded” folder. To get the path, find the “python_embeded” folder, right-click it, and copy the path.
  • Paste the path of the “Scripts” folder. To get the path, find the “Scripts” folder, right-click it, and copy the path.
(Windows 10/8/7 users) Copy the path of the respective folder by right-clicking it, selecting Properties, and copying the location path.
Then click OK to save the changes.
Copy file path on Windows 11
(Windows 11/10 users) Copy the path of the InsightFace wheel file you just downloaded by right-clicking on it.
Copy file path on Windows 10/8/7
(Windows 8/7 users) Copy the path of the InsightFace wheel file you just downloaded by right-clicking it, selecting Properties, and copying the location, followed by a backslash and the file name.
Your final path should look like “insight-face-file-path\insight-face-file-name”. In our case it is “C:\Users\MY-PC\Downloads\Documents\insight-face-311-amd64.whl”.
Move to your ComfyUI root folder, place your cursor in the address bar, and type “cmd” to open a command prompt. Now paste the command given below:
.\python_embeded\python.exe -m pip install onnxruntime XXXX-Paste-Your-Insight-face-file-path-XXXX
Now, replace “XXXX-Paste-Your-Insight-face-file-path-XXXX” with the path you copied earlier (Ctrl+V) and press Enter to start the process.
This command will install (or upgrade) onnxruntime and the InsightFace wheel. Just wait for all the files to download, and everything will be fixed.
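Once the install finishes, a quick way to confirm it worked is a tiny import check run with the embedded interpreter, for example saved as check_insightface.py (the file name is just an example) and launched with .\python_embeded\python.exe check_insightface.py:

# Confirms that insightface and onnxruntime are importable after the pip install above.
import insightface
import onnxruntime

print("insightface:", insightface.__version__)
print("onnxruntime:", onnxruntime.__version__)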
If you are still getting an error after this, you can raise your problem in the repository’s issues section.
2. Missing nodes error: If you are facing errors with red-colored missing nodes, you need to install them manually, as discussed in the Method 2 section.
3. Managing model checkpoints: The IPAdapter Unified Loader already takes care of this in the background.

Conclusion:

IP Adapter is an image-to-image conditioning model that helps you transfer any style and pose from a reference image to your subject. IP Adapter can be used with Stable Diffusion XL or Stable Diffusion 1.5 base models.
Many people have issues using it; you can raise yours in the IP Adapter V2 issues section or the IP Adapter V1 issues section.