...

Face Animation | Transform Static Images into Lifelike Animations

Animate a Face with the Thin-Plate Spline Motion Model

Welcome to our tutorial! Image animation brings the static face in a source image to life according to a driving video, using the Thin-Plate Spline Motion Model, and produces great face animation.

In this tutorial, we’ll take you through the entire process, from setting up the required environment to running your very own animations.

Thin-Plate Spline Motion Model (TPSMM) is a deep learning framework for realistic face animation, built on PyTorch.

It brings static images to life by transferring motion from a driving video onto a source photo, producing natural movements such as head turns, expressions, and lip-sync.

Unlike earlier approaches, TPSMM uses thin-plate spline transformations and multi-resolution occlusion masks, which better capture complex, non-linear motion and handle hidden areas of the face more effectively.
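As a rough sketch of the math behind the name (the classical 2D thin-plate spline warp, not the paper's exact notation), a TPS transformation maps an image point p as

T(p) = A [p ; 1] + \sum_{i=1}^{K} w_i \, U(\lVert p_i - p \rVert), \qquad U(r) = r^2 \log r^2

where A is an affine matrix, the p_i are keypoints estimated from the images, and the w_i are weights. TPSMM predicts several such transformations and blends the warped features with multi-resolution occlusion masks to fill in the parts of the face that are hidden in the source image.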

The project includes pretrained models, demo scripts, and training configurations for datasets like VoxCeleb, TaiChiHD, and TED, making it ideal for researchers, developers, and creators exploring high-quality face animation and image reenactment.

What You’ll Learn :

Part 1: Setting up the Environment: We’ll walk you through creating a Conda environment with the right Python libraries to ensure a smooth animation process.

Part 2: Clone the GitHub Repository

Part 3: Download the Model Weights

Part 4: Demo 1: Run the Demo with the Provided Assets

Part 5: Demo 2: Use Your Own Images and Video

You can find more tutorials and join my newsletter here : https://eranfeit.net/

Check out our tutorial here : https://youtu.be/oXDm6JB9xak

You can find the Instructions file for this video here : https://ko-fi.com/s/7e449822e3


Here is the instruction file for face animation :

https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model
=========================================================

conda create -n moveFace39 python=3.9
conda activate moveFace39

git clone https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model.git
cd Thin-Plate-Spline-Motion-Model

mkdir checkpoints

Download the files from here : https://drive.google.com/drive/folders/1pNDo1ODQIb5HVObRtCmubqJikmR7VVLT into the checkpoints folder

nvcc --version
I am using CUDA 11.6

# install PyTorch 1.13.1
conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.6 -c pytorch -c nvidia

pip install matplotlib==3.4.3
pip install PyYAML==5.4.1
pip install tqdm==4.62.3
pip install scipy==1.7.1
pip install imageio==2.9.0
pip install imageio-ffmpeg==0.4.5
pip install scikit-image==0.18.3
pip install scikit-learn==1.0
pip install face-alignment==1.3.5

python demo.py --config config/vox-256.yaml --checkpoint checkpoints/vox.pth.tar --source_image assets/source.png --driving_video assets/driving.mp4

Copy Eran.jpg and french.mp4 to the assets folder

More options :

python demo.py --config config/vox-256.yaml --checkpoint checkpoints/vox.pth.tar --source_image assets/pic4.jpg --driving_video assets/french.mp4 --find_best_frame --result_video myResult.mp4

python demo.py --config config/vox-256.yaml --checkpoint checkpoints/vox.pth.tar --source_image assets/pic4.jpg --driving_video assets/french.mp4 --result_video myResult.mp4

More flags :
--find_best_frame
--result_video myResult.mp4
--mode "relative" (default) / "standard" / "avd"

Add the audio :
============
ffmpeg -an -i myResult.mp4 -vn -i assets/french.mp4 -c:a copy -c:v copy myResult_withAudio.mp4
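If you prefer to fetch the checkpoints from the command line instead of the browser, one option (not part of the original instructions) is the third-party gdown package, assuming its download_folder API and a publicly shared Drive folder:

# Hypothetical helper: download the pretrained checkpoints into ./checkpoints
# using the gdown package (pip install gdown), assuming gdown >= 4.x.
import gdown

DRIVE_FOLDER = "https://drive.google.com/drive/folders/1pNDo1ODQIb5HVObRtCmubqJikmR7VVLT"

# Saves every file from the shared folder under ./checkpoints
gdown.download_folder(url=DRIVE_FOLDER, output="checkpoints", quiet=False)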
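Before running the demo, it is worth confirming that the PyTorch build you installed actually sees the GPU. A minimal sanity check, run inside the moveFace39 environment:

# Quick sanity check for the moveFace39 environment:
# prints the installed PyTorch version and whether CUDA is visible.
import torch

print("PyTorch version :", torch.__version__)           # expect 1.13.1
print("CUDA available  :", torch.cuda.is_available())   # expect True on a GPU machine
print("CUDA build      :", torch.version.cuda)          # expect 11.6 for this setup

if torch.cuda.is_available():
    print("GPU             :", torch.cuda.get_device_name(0))
else:
    print("No GPU detected - the animation will run much slower without CUDA.")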
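The last two steps (running demo.py and then re-attaching the audio with ffmpeg) can also be chained from a small Python wrapper. This is only a convenience sketch built on the exact commands above; paths such as assets/pic4.jpg and assets/french.mp4 are the tutorial's example files, so swap in your own.

# Convenience sketch: run the TPSMM demo, then mux the driving video's audio
# into the result, mirroring the two commands from the instruction file.
# Run it from inside the Thin-Plate-Spline-Motion-Model folder.
import subprocess

SOURCE  = "assets/pic4.jpg"      # your source image (copied into assets/)
DRIVING = "assets/french.mp4"    # your driving video (copied into assets/)
RESULT  = "myResult.mp4"
FINAL   = "myResult_withAudio.mp4"

# 1) Animate the source image with the driving video.
subprocess.run([
    "python", "demo.py",
    "--config", "config/vox-256.yaml",
    "--checkpoint", "checkpoints/vox.pth.tar",
    "--source_image", SOURCE,
    "--driving_video", DRIVING,
    "--find_best_frame",
    "--result_video", RESULT,
], check=True)

# 2) Copy the audio track from the driving video onto the silent result.
subprocess.run([
    "ffmpeg", "-an", "-i", RESULT,
    "-vn", "-i", DRIVING,
    "-c:a", "copy", "-c:v", "copy",
    FINAL,
], check=True)

print("Done:", FINAL)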



Connect :

☕ Buy me a coffee — https://ko-fi.com/eranfeit

🖥️ Email : feitgemel@gmail.com

🌐 https://eranfeit.net

🤝 Fiverr : https://www.fiverr.com/s/mB3Pbb

Enjoy,

Eran
