Enhance Images and Videos with Face Gestures: LivePortrait Technology

[Image: LivePortrait installation in ComfyUI]

Multiple reference-based portrait animation frameworks have been released on top of diffusion models, but this one is different. LivePortrait is a video-driven portrait animation framework trained on 69 million high-quality frames.

Instead of following diffusion principles, it extends an implicit-keypoint-based framework, which gives it better computational efficiency and controllability without putting much stress on your GPU.

The research paper can be accessed from the respective link. According to the results published in the paper, generation takes about 12.8 ms per frame on an RTX 4090 GPU with PyTorch, which works out to roughly 78 frames per second.

We have already seen video generation with lip sync, but this goes beyond that. LivePortrait can now be installed in ComfyUI, so let’s walk through the installation.

Installation:

1. To use LivePortrait, you first have to install ComfyUI on your PC.

2. Move into the “ComfyUI/custom_nodes” folder. Click the folder’s address bar, type “cmd” to open a command prompt there, paste the command provided below, and wait for the installation to complete:

git clone https://github.com/kijai/ComfyUI-LivePortraitKJ.git

The required models are downloaded automatically into the “ComfyUI/models/liveportrait” folder; you don’t need to download them manually.

Alternative:

[Image: installing the nodes from ComfyUI Manager]

You can also install it from ComfyUI Manager. Open the Manager, click “Install Custom Nodes”, search for “liveportrait”, and click “Install” on the entry published by “Kijai”.

Then download all the models from the Hugging Face repository. Move into the “ComfyUI/models” folder, create a new folder named “liveportrait”, and save the downloaded models inside the “ComfyUI/models/liveportrait” folder.
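If you prefer to script this step, the sketch below uses the huggingface_hub library. Note that the repo id “Kijai/LivePortrait_safetensors” is an assumption here; substitute the repository actually linked from the node’s README if it differs:

# Sketch: fetch the LivePortrait models into ComfyUI's models folder.
# Requires: pip install huggingface_hub
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="Kijai/LivePortrait_safetensors",  # assumed repo id; check the node's README
    local_dir="ComfyUI/models/liveportrait",   # target folder from this guide
)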

3. InsightFace is required as a dependency, so make sure you have already installed it; if not, use these commands:

For normal ComfyUI users, open the command prompt and type:

pip install insightface

Keep in mind that InsightFace comes under a non-commercial license, meaning you can use it for research but not for commercial purposes.

For ComfyUI portable users, open the command prompt and type:

python_embeded\python.exe -m pip install insightface
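Once installed, you can verify the setup with a short sanity check like this sketch (the “buffalo_l” model pack downloads automatically on first run; pass ctx_id=-1 to stay on CPU):

# Sanity check: import insightface and load the default detector pack.
from insightface.app import FaceAnalysis

app = FaceAnalysis(name="buffalo_l")        # default detection/recognition pack
app.prepare(ctx_id=0, det_size=(640, 640))  # ctx_id=0 = first GPU, -1 = CPU
print("InsightFace is ready")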

Alternative:

Instead of InsightFace, you can now use Google’s MediaPipe (open source) for commercial deployment, but its face detection will not be as good as InsightFace’s.
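For reference, a minimal MediaPipe face-detection sketch looks like the following; it assumes mediapipe and opencv-python are installed and uses a hypothetical test image named face.jpg:

# Minimal MediaPipe face detection on a single image.
import cv2
import mediapipe as mp

image = cv2.imread("face.jpg")  # hypothetical test image
detector = mp.solutions.face_detection.FaceDetection(
    model_selection=1,            # 1 = full-range model, 0 = short-range
    min_detection_confidence=0.5,
)
# MediaPipe expects RGB; OpenCV loads BGR.
results = detector.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
print(f"Faces found: {len(results.detections or [])}")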

If you run into any installation errors with InsightFace, they can be fixed using the troubleshooting section.

4. Then restart ComfyUI and click “Refresh” to clear the cache.

Workflow:

1. The workflows can be found inside the “ComfyUI/custom_nodes/ComfyUI-LivePortraitKJ/examples” folder. Alternatively, they can be downloaded from the GitHub link. Three workflow types are provided: (a) Image to Video, (b) Video to Video, and (c) real-time face capture using a webcam.

Just drag and drop the workflow file into ComfyUI.

After loading the workflow, any missing nodes will show up in red. Open ComfyUI Manager, click “Install Missing Custom Nodes”, install them one by one, and restart ComfyUI for the changes to take effect.
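If you would rather queue the workflow from a script than from the UI, ComfyUI exposes an HTTP endpoint on its local server. The sketch below assumes the default address 127.0.0.1:8188 and a workflow exported with “Save (API Format)” under the hypothetical name liveportrait_workflow_api.json:

# Queue a LivePortrait workflow through ComfyUI's HTTP API.
import json
import urllib.request

with open("liveportrait_workflow_api.json") as f:  # hypothetical export
    workflow = json.load(f)

payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",               # default local ComfyUI server
    data=payload,
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())  # response includes the prompt id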

[Image: uploading the LivePortrait workflow]

2. Upload the image you want to animate into the “Load Image” node.

3. Upload your reference driving video into the “Load Video” node. Example videos can be found inside the “ComfyUI/custom_nodes/ComfyUI-LivePortraitKJ/assets/examples/driving” folder. Alternatively, you can download them from GitHub.

[Image: default settings]

Leave all the settings at their defaults and click “Queue Prompt” to generate a video. After a few seconds, the output will appear in the “Video Combine” node.

You can control head movement and eye movement from the “Live Portrait Process” node. For best results, always upload a square image (e.g., 1024 by 1024, 768 by 768, or 512 by 512).
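If your source image is not square, a small Pillow sketch like this one can center-crop and resize it before you upload it (assumes Pillow is installed; input.jpg is a hypothetical file name):

# Center-crop an image to a square and resize it for LivePortrait.
from PIL import Image

img = Image.open("input.jpg")          # hypothetical source image
side = min(img.size)                   # largest centered square that fits
left = (img.width - side) // 2
top = (img.height - side) // 2
img = img.crop((left, top, left + side, top + side)).resize((1024, 1024))
img.save("input_square.png")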

[Image: generated output using LivePortrait]

Here are some tests done by us. The eye blinking, lip, and eyebrow movements have been captured from the reference video. Note that this only works with face gestures; it cannot mimic whole-body movement.