Enhance Images and Videos with Face Gestures: Live Portrait Technology

There are multiple portrait-animation frameworks built on top of diffusion models, but this one is different. LivePortrait is a video-driven portrait animation framework trained on 69 million high-quality frames.

Instead of following diffusion-model principles, it extends an implicit-keypoint-based approach, which gives it better computational efficiency and controllability without putting much stress on your GPU.

The research paper can be accessed from the respective link. The results published in it report a generation speed of 12.8 ms per frame on an RTX 4090 GPU under PyTorch, which works out to roughly 78 frames per second.

We have already seen video generation with lip sync, but this goes beyond that. It can now be installed into ComfyUI, so let’s walk through the installation.

Installation:

1. To use LivePortrait, you first have to install ComfyUI on your PC.

2. Move into the “ComfyUI/custom_nodes” folder. Click into the folder’s address bar, type “cmd” to open a command prompt there, paste the command provided below, and wait for the installation to complete:

git clone https://github.com/kijai/ComfyUI-LivePortraitKJ.git
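If the cloned node ships a requirements.txt (most ComfyUI custom nodes do, and this one is assumed to as well), install its Python dependencies from the same prompt:

pip install -r ComfyUI-LivePortraitKJ/requirements.txt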

The required models are downloaded automatically into the “ComfyUI/models/liveportrait” folder. You don’t need to download them manually.

Alternative:

You can also install it from the ComfyUI Manager. Open the Manager, click “Install Custom Nodes”, search for “liveportrait”, and install the one authored by “Kijai”.

Then download all the models from the Hugging Face repository. Move into the “ComfyUI/models” folder, create a new folder named “liveportrait”, and save the downloaded models inside the “ComfyUI/models/liveportrait” folder. A one-line command-line alternative is sketched below.
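If you prefer the command line, the huggingface-cli tool (installed alongside the huggingface_hub package) can fetch a whole repository in one go. The repository id below is an assumption; substitute the one linked from the custom node’s README:

huggingface-cli download Kijai/LivePortrait_safetensors --local-dir ComfyUI/models/liveportrait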

3. InsightFace is required as a dependency, so make sure you have already installed it; if not, use these commands:

For normal ComfyUI users, open the command prompt and type:

pip install insightface

Keep in mind that InsightFace comes under a non-commercial license, meaning you can use it for research but not for commercial purposes.

For ComfyUI portable users, open the command prompt in the portable installation’s root folder and type:

python_embeded\python.exe -m pip install insightface
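One more note: InsightFace runs its models through onnxruntime, so if ComfyUI reports that package as missing, installing it the same way usually resolves it (onnxruntime-gpu is the usual pick for NVIDIA cards):

pip install onnxruntime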

Alternative:

Instead of InsightFace, you can now also use MediaPipe (open source, from Google) for commercial deployments, but its face detection will not be as good as InsightFace’s.
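MediaPipe ships as a regular Python package, so installing it is a standard pip call (portable users should run it through python_embeded\python.exe -m pip as above):

pip install mediapipe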

In case of any installation errors with InsightFace, they can be fixed from the troubleshooting section.

4. Then restart ComfyUI and click “Refresh” to clear the cache.

Workflow:

1. The workflows can be found inside the “ComfyUI/custom_nodes/ComfyUI-LivePortraitKJ/examples” folder. Alternatively, they can be downloaded from the GitHub repository. Several variants are provided: (a) image to video, (b) video to video, and (c) real-time face capture using a webcam.

Just drag and drop one of them into ComfyUI.

After loading the workflow, any missing nodes will show up in red. Open the ComfyUI Manager, click “Install missing custom nodes”, install them one by one, and restart ComfyUI for the changes to take effect.

2. Upload the image you want to animate into the “Load Image” node.

3. Upload your reference (driving) video into the “Load Video” node. Example videos can be found inside the “ComfyUI/custom_nodes/ComfyUI-LivePortraitKJ/assets/examples/driving” folder. Alternatively, you can download them from GitHub.

Leave all the settings at their defaults and click “Queue Prompt” to generate a video. After a few seconds, you will get the output in the “Video Combine” node.

You can control head movement and eye movement from the “Live Portrait Process” node. For best results, always upload a square image (e.g., 1024 by 1024, 768 by 768, or 512 by 512).

Here are some tests we ran. The eye blinking, lip, and eyebrow movements were captured from the reference video. Note that this works only with facial gestures; it cannot mimic whole-body movement.
