Animate Any Portrait: Introducing LivePortrait, Your Open-Source AI Animator
June 04, 2025
LivePortrait
Project Description
LivePortrait is an open-source PyTorch implementation for efficient portrait animation. It animates still portraits (humans, cats, and dogs) from a driving video or motion template, with stitching and retargeting control to bring them to life. The project is actively developed by researchers from Kuaishou Technology, the University of Science and Technology of China, and Fudan University.
Usage Instructions
Environment Setup
- Clone the repository:
git clone https://github.com/KwaiVGI/LivePortrait
cd LivePortrait
- Create and activate a Conda environment:
conda create -n LivePortrait python=3.10
conda activate LivePortrait
- Install PyTorch (CUDA compatible for Linux/Windows):
First, check your CUDA version:
nvcc -V
Then install the corresponding PyTorch version.
- For CUDA 11.8:
pip install torch==2.3.0 torchvision==0.18.0 torchaudio==2.3.0 --index-url https://download.pytorch.org/whl/cu118
- For CUDA 12.1:
pip install torch==2.3.0 torchvision==0.18.0 torchaudio==2.3.0 --index-url https://download.pytorch.org/whl/cu121
- For other CUDA versions, refer to the PyTorch Official Website.
- Note for Windows users: Downgrading CUDA to 11.8 might resolve issues with higher CUDA versions.
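The mapping from the version reported by nvcc to the wheel index URL can be sketched in Python. This is a hypothetical helper, not part of LivePortrait, and it only covers the two indexes listed above; any other CUDA version should be looked up on the PyTorch site.

```python
import re

def torch_index_url(nvcc_output):
    # Parse the "release X.Y" version from `nvcc -V` output and map it to
    # a PyTorch wheel index URL. Only cu118/cu121 (the versions mentioned
    # above) are covered; return None for anything else.
    m = re.search(r"release (\d+)\.(\d+)", nvcc_output)
    if m is None:
        return None
    tag = f"cu{m.group(1)}{m.group(2)}"
    if tag in {"cu118", "cu121"}:
        return f"https://download.pytorch.org/whl/{tag}"
    return None

print(torch_index_url("Cuda compilation tools, release 11.8, V11.8.89"))
```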
- Install remaining dependencies:
pip install -r requirements.txt
- For macOS with Apple Silicon:
(The X-Pose dependency is skipped, so Animals mode is not supported.)
pip install -r requirements_macOS.txt
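After installation, a quick sanity check that the core packages resolve can save a failed first run. This is a minimal sketch to run inside the activated environment; it only checks importability, not versions.

```python
import importlib.util

def installed(modules):
    # Report which of the expected modules can be found in this environment.
    return {name: importlib.util.find_spec(name) is not None for name in modules}

# torch/torchvision/torchaudio come from the PyTorch step above;
# everything else is pulled in by requirements.txt.
for name, ok in installed(["torch", "torchvision", "torchaudio"]).items():
    print(f"{name}: {'ok' if ok else 'MISSING'}")
```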
Download Pretrained Weights
- From HuggingFace (recommended):
# !pip install -U "huggingface_hub[cli]"
huggingface-cli download KwaiVGI/LivePortrait --local-dir pretrained_weights --exclude "*.git*" "README.md" "docs"
- Using hf-mirror (if HuggingFace is inaccessible):
# !pip install -U "huggingface_hub[cli]"
export HF_ENDPOINT=https://hf-mirror.com
huggingface-cli download KwaiVGI/LivePortrait --local-dir pretrained_weights --exclude "*.git*" "README.md" "docs"
- Alternatively, download the weights from Google Drive or Baidu Yun and place them in ./pretrained_weights.
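Whichever route you take, it is worth confirming the download landed where inference expects it. The subdirectory names below are an assumption based on the HuggingFace repo layout, so compare them against what the download actually produced.

```python
from pathlib import Path

def missing_weight_dirs(root="pretrained_weights",
                        expected=("liveportrait", "insightface")):
    # `expected` is an assumed layout of the weights repo; adjust it to
    # match the directories your download actually contains.
    base = Path(root)
    return [name for name in expected if not (base / name).is_dir()]

print(missing_weight_dirs() or "all expected weight directories present")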
Inference
Humans Mode
- Run with default inputs:
python inference.py
(For macOS with Apple Silicon, use PYTORCH_ENABLE_MPS_FALLBACK=1 python inference.py instead.)
- Specify source and driving inputs:
# Source is an image
python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d0.mp4
# Source is a video
python inference.py -s assets/examples/source/s13.mp4 -d assets/examples/driving/d0.mp4
Animals Mode (Linux/Windows with NVIDIA GPU only)
- Build MultiScaleDeformableAttention OP:
cd src/utils/dependencies/XPose/models/UniPose/ops
python setup.py build install
cd -
- Run inference:
python inference_animals.py -s assets/examples/source/s39.jpg -d assets/examples/driving/wink.pkl --driving_multiplier 1.75 --no_flag_stitching
Driving Video Auto-cropping
- Enable auto-cropping:
python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d13.mp4 --flag_crop_driving_video
(Adjust --scale_crop_driving_video or --vy_ratio_crop_driving_video for better results.)
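To make those two tuning flags concrete, here is a hypothetical sketch of how a square (1:1) crop around a detected head could be shaped by a scale factor and a vertical-offset ratio. The real implementation differs; the parameter names here are only meant to mirror the flags.

```python
def square_crop(cx, cy, head_size, frame_w, frame_h, scale=2.0, vy_ratio=0.0):
    # Hypothetical sketch: `scale` widens the square box relative to the
    # detected head size; `vy_ratio` shifts the box vertically as a
    # fraction of the box side (negative = upward). Clamped to the frame.
    side = min(head_size * scale, frame_w, frame_h)
    cy = cy + vy_ratio * side
    x0 = max(0.0, min(frame_w - side, cx - side / 2))
    y0 = max(0.0, min(frame_h - side, cy - side / 2))
    return int(x0), int(y0), int(side), int(side)

print(square_crop(320, 180, 120, 640, 360, scale=2.3, vy_ratio=-0.1))
```

A larger scale gives more headroom around the face; a negative vy_ratio pulls the crop upward, which helps when the head sits low in the frame.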
Motion Template Making
- Use auto-generated .pkl files for faster inference and privacy protection:
python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d5.pkl # portrait animation
python inference.py -s assets/examples/source/s13.mp4 -d assets/examples/driving/d5.pkl # portrait video editing
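A motion template is a pickled Python object, so you can peek at one before reusing it. The exact contents of LivePortrait's .pkl templates are not documented here, so treat the toy keys below as an assumption to verify; also note that pickle can execute code on load, so only open templates you trust.

```python
import os
import pickle
import tempfile

def describe_template(path):
    # Load a motion-template .pkl and summarize its top-level structure:
    # the sorted keys if it is a dict, else the type name.
    with open(path, "rb") as f:
        data = pickle.load(f)
    return sorted(data) if isinstance(data, dict) else type(data).__name__

# Demo on a toy template; "motion" and "n_frames" are made-up keys.
with tempfile.NamedTemporaryFile(suffix=".pkl", delete=False) as f:
    pickle.dump({"motion": [], "n_frames": 0}, f)
print(describe_template(f.name))
os.remove(f.name)
```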
Gradio Interface
- Humans Mode:
# For Linux and Windows
python app.py
# For macOS with Apple Silicon
PYTORCH_ENABLE_MPS_FALLBACK=1 python app.py
- Animals Mode (Linux with NVIDIA GPU only):
python app_animals.py
- Acceleration: Use --flag_do_torch_compile for 20-30% faster subsequent inferences (not supported on Windows/macOS).
python app.py --flag_do_torch_compile
- An online HuggingFace Space is also available.
Key Features
- Efficient Portrait Animation: Animates still images of humans, cats, and dogs.
- Stitching and Retargeting Control: Provides fine-grained control over the animation process.
- Diverse Driving Inputs: Supports driving motion from video, images, or pre-generated motion templates (.pkl files).
- Video-to-Video (v2v) Editing: Supports editing existing portrait videos.
- Gradio Interface: User-friendly web interface for easier use and precise portrait editing.
- Regional Control & Image-Driven Mode: Offers advanced control and different driving options.
- Pose Editing: Allows editing of source portrait poses within the Gradio interface.
- Auto-cropping: Automatically crops driving videos to a 1:1 aspect ratio focusing on the head.
- Privacy Protection: Motion templates can be used to protect the privacy of subjects in driving videos.
- Cross-Platform Support: Supports Linux, Windows (with NVIDIA GPU), and macOS with Apple Silicon.
- Optimized Performance: Includes a torch.compile option to accelerate inference.
Target Users
- Video Content Creators: For animating portraits in their videos.
- Animators: To quickly generate facial animations for characters.
- Researchers and Developers: As a base for further development in portrait animation, image/video generation, and face animation.
- General Enthusiasts: Individuals interested in bringing their photos to life or experimenting with AI animation.
- Major Video Platforms: Already adopted by platforms like Kuaishou, Douyin, Jianying, and WeChat Channels.
Project Links
- GitHub Repository: https://github.com/KwaiVGI/LivePortrait
- Project Homepage: liveportrait.github.io
- HuggingFace Online Demo: https://huggingface.co/spaces/KwaiVGI/LivePortrait
- arXiv Paper: arXiv:2407.03168
Application Scenarios
- Social Media Content: Creating engaging animated profile pictures or short video clips from still images.
- Marketing and Advertising: Generating animated characters or spokespersons from static images.
- Film and Game Development: Rapid prototyping of character animations for facial expressions.
- Virtual Avatars and Digital Humans: Animating static avatars for virtual meetings or simulations.
- Personal Use: Bringing old family photos or beloved pet pictures to life.