
Transform a Single Photo into Lifelike Animation

Make a Face Move with the Thin-Plate Spline Motion Model

Welcome to our tutorial! Image animation brings the static face in a source image to life according to a driving video, using the Thin-Plate Spline Motion Model.

By the end, you’ll turn one static face photo into a moving, talking head driven by a short video.

Thin-Plate Spline Motion Model (TPSMM) is a deep learning framework for realistic face animation, built on PyTorch.

It brings static images to life by transferring motion from a driving video onto a source photo, producing natural movements such as head turns, expressions, and lip-sync.

Unlike earlier approaches, TPSMM uses thin-plate spline transformations and multi-resolution occlusion masks, which better capture complex, non-linear motion and handle hidden areas of the face more effectively.

The project includes pretrained models, demo scripts, and training configurations for datasets like VoxCeleb, TaiChiHD, and TED, making it ideal for researchers, developers, and creators exploring high-quality face animation and image reenactment.
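To get a feel for what a thin-plate spline warp does, here is a toy sketch using SciPy’s RBFInterpolator with its thin_plate_spline kernel. This is only an illustration of the transformation family TPSMM predicts around keypoints, not the model’s actual code; the control points below are made up:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical source and target control points (think: facial keypoints).
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
dst = src + np.array([[0.0, 0.1], [0.0, 0.0], [0.1, 0.0], [0.0, 0.0], [0.05, 0.05]])

# A thin-plate spline fits the smoothest deformation that passes exactly
# through the control points (smoothing=0 is the default).
warp = RBFInterpolator(src, dst, kernel="thin_plate_spline")

# Apply the warp to a dense grid of coordinates, as a motion model would
# do per pixel; nearby points move coherently with the control points.
grid = np.stack(np.meshgrid(np.linspace(0, 1, 4), np.linspace(0, 1, 4)), -1).reshape(-1, 2)
warped = warp(grid)
```

TPSMM learns several such local warps plus occlusion masks, then combines them; this snippet just shows the interpolation behavior of a single spline.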

What You’ll Learn :

Part 1: Setting up the Environment: We’ll walk you through creating a Conda environment with the right Python libraries to ensure a smooth animation process.

Part 2: Clone the GitHub Repository

Part 3: Download the Model Weights

Part 4: Demo 1: Run the Provided Demo

Part 5: Demo 2: Use Your Own Images and Video

You can find more tutorials and join my newsletter here : https://eranfeit.net/

Check out our tutorial here : https://www.youtube.com/watch?v=oXDm6JB9xak

You can find the Instructions file for this video here : https://ko-fi.com/s/7e449822e3


Requirements

  • Python: 3.9 recommended
  • GPU (optional but recommended): NVIDIA with matching CUDA drivers
  • OS: Windows 10/11, Ubuntu 20.04+ (tested)
  • Tools: Git, Conda (or venv), FFmpeg

You can run on CPU to validate the pipeline, but generation will be slow.

Setup: Version‑Pinned Environment

Use Conda (or adapt to python -m venv). Adjust CUDA if your machine differs.

Repository: https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model
=========================================================

conda create -n moveFace39 python=3.9
conda activate moveFace39

git clone https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model.git
cd Thin-Plate-Spline-Motion-Model

# Check your CUDA version (this tutorial uses CUDA 11.6)
nvcc --version

# Install PyTorch 1.13.1 with the matching CUDA build
conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.6 -c pytorch -c nvidia

pip install matplotlib==3.4.3
pip install PyYAML==5.4.1
pip install tqdm==4.62.3
pip install scipy==1.7.1
pip install imageio==2.9.0
pip install imageio-ffmpeg==0.4.5
pip install scikit-image==0.18.3
pip install scikit-learn==1.0
pip install face-alignment==1.3.5


Get Pretrained Checkpoints

mkdir checkpoints

Download the checkpoint files from https://drive.google.com/drive/folders/1pNDo1ODQIb5HVObRtCmubqJikmR7VVLT into the checkpoints folder.


Run the Minimal Working Command

python demo.py --config config/vox-256.yaml --checkpoint checkpoints/vox.pth.tar --source_image assets/source.png --driving_video assets/driving.mp4

Copy your own files (e.g., Eran.jpg and french.mp4) into the assets folder.

More options:

python demo.py --config config/vox-256.yaml --checkpoint checkpoints/vox.pth.tar --source_image assets/pic4.jpg --driving_video assets/french.mp4 --find_best_frame --result_video myResult.mp4

python demo.py --config config/vox-256.yaml --checkpoint checkpoints/vox.pth.tar --source_image assets/pic4.jpg --driving_video assets/french.mp4 --result_video myResult.mp4

Additional flags:

--find_best_frame
--result_video myResult.mp4
--mode "relative" (default) / "standard" / "avd"


Helpful Flags

  • --find_best_frame → auto‑aligns source to the driver’s pose (great for mismatched angles).
    Example:
    python demo.py --config config/vox-256.yaml --checkpoint checkpoints/vox.pth.tar --source_image assets/source.png --driving_video assets/driving.mp4 --result_video result_best.mp4 --find_best_frame
  • --mode → motion transfer strategy. Try relative (default), standard, or avd if you see drift/warping.
  • Crop tighter around the face if landmarks seem off.
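These commands get long quickly. A small Python helper can assemble the demo.py invocation; the flag names come from the commands above, but the helper itself (build_demo_cmd) is our own convenience, not part of the repository:

```python
import shlex

def build_demo_cmd(source, driving, result="result.mp4",
                   mode="relative", find_best_frame=False):
    """Assemble a demo.py argument list (hypothetical helper, not in the repo)."""
    cmd = ["python", "demo.py",
           "--config", "config/vox-256.yaml",
           "--checkpoint", "checkpoints/vox.pth.tar",
           "--source_image", source,
           "--driving_video", driving,
           "--result_video", result,
           "--mode", mode]
    if find_best_frame:
        cmd.append("--find_best_frame")
    return cmd

# Print a shell-ready command line for a source photo and driving clip.
print(shlex.join(build_demo_cmd("assets/source.png", "assets/driving.mp4",
                                find_best_frame=True)))
```

This also makes it easy to loop over several source images with the same driving video.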

Add Audio (Mux from the Driver Clip)

Add the audio (video from the animation result, audio from the driving clip):

ffmpeg -an -i myResult.mp4 -vn -i assets/french.mp4 -c:a copy -c:v copy myResult_withAudio.mp4

Here -an drops the audio of the first input and -vn drops the video of the second, so the output stream-copies the animated video together with the original audio.
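If you prefer explicit stream mapping over input-side -an/-vn, the same mux can be expressed with -map. The sketch below builds that variant of the ffmpeg argv in Python (the mux_audio_cmd helper name is ours); pass the list to subprocess.run to execute it:

```python
import shlex

def mux_audio_cmd(video_in, audio_src, out):
    """Build an ffmpeg argv that takes video from the first input and
    audio from the second, stream-copying both (hypothetical helper)."""
    return ["ffmpeg",
            "-i", video_in,    # animated result (video source)
            "-i", audio_src,   # driving clip (audio source)
            "-map", "0:v:0",   # video stream from input 0
            "-map", "1:a:0",   # audio stream from input 1
            "-c", "copy",      # no re-encoding
            "-shortest",       # stop at the shorter stream
            out]

print(shlex.join(mux_audio_cmd("myResult.mp4", "assets/french.mp4",
                               "myResult_withAudio.mp4")))
```

The -shortest flag avoids trailing silence or frozen frames when the two streams differ in length.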


Troubleshooting (Read This If Something Breaks)

1) PyTorch can’t see GPU

  • Your driver/CUDA runtime may not match the PyTorch build. Reinstall PyTorch with the matching CUDA version (after a correct install, torch.cuda.is_available() should return True), or validate the pipeline on CPU first.
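A quick standalone probe for this check; the guarded import is our addition so the script also runs (and reports) in environments where torch is not installed:

```python
import importlib.util

def cuda_status() -> str:
    """Report whether PyTorch is installed and sees a CUDA device."""
    if importlib.util.find_spec("torch") is None:
        return "torch not installed in this environment"
    import torch
    return f"torch {torch.__version__}, cuda={torch.cuda.is_available()}"

print(cuda_status())
```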

2) Missing checkpoints / wrong path

  • Ensure the file name and path exactly match your command, e.g., checkpoints/vox.pth.tar.
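A tiny diagnostic along these lines (the resolve_checkpoint helper is hypothetical, not part of the repository):

```python
import os

def resolve_checkpoint(path="checkpoints/vox.pth.tar"):
    """Return the path if the file exists; otherwise return a diagnostic
    listing what the folder actually contains, to spot typos fast."""
    if os.path.isfile(path):
        return path
    folder = os.path.dirname(path) or "."
    found = os.listdir(folder) if os.path.isdir(folder) else []
    return f"missing {path}; folder contains {found}"

print(resolve_checkpoint())
```

Run it from the repository root so the relative checkpoints/ path resolves correctly.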

3) Face drift or tearing

  • Use --find_best_frame, try --mode standard, and crop more tightly. Make sure the driver clip has a similar head size/angle.

4) FFmpeg audio errors

  • Update FFmpeg and confirm which input actually has audio. Try -map 1:a:0? to make audio optional if needed.

5) Windows build issues

  • If face-alignment or scikit-image fails, install prebuilt wheels, or downgrade to a known‑good version.

Ethical Use

Use consented images and videos. Disclose edits when appropriate. Respect platform policies and local laws. Don’t use this to mislead or impersonate people.

More cool Python projects

FAQ

Can I run on CPU? Yes—slow but fine for validation. A GPU dramatically speeds things up.
Best resolution? 256×256 is the default; generate at 256 and upscale later with a separate tool for best speed/quality trade‑off.
How long should the driver clip be? 5–15 seconds is plenty for a demo; avoid extreme rotations or fast motion at first.
Do I need to train? No—pretrained checkpoints are enough for inference.
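On the resolution question: upscaling after generation can be as simple as a resize, though a dedicated super-resolution tool gives better quality. A minimal NumPy sketch of integer-factor nearest-neighbor upscaling (toy example only):

```python
import numpy as np

def upscale_nearest(frame: np.ndarray, factor: int) -> np.ndarray:
    """Nearest-neighbor upscale of an HxWxC frame by an integer factor."""
    return frame.repeat(factor, axis=0).repeat(factor, axis=1)

# A generated 256x256 RGB frame becomes 512x512.
frame = np.zeros((256, 256, 3), dtype=np.uint8)
print(upscale_nearest(frame, 2).shape)  # (512, 512, 3)
```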



Here is the full instructions file for face animation :

https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model
=========================================================

conda create -n moveFace39 python=3.9
conda activate moveFace39

git clone https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model.git
cd Thin-Plate-Spline-Motion-Model

mkdir checkpoints

Download the checkpoint files from https://drive.google.com/drive/folders/1pNDo1ODQIb5HVObRtCmubqJikmR7VVLT into the checkpoints folder.

# Check your CUDA version (this tutorial uses CUDA 11.6)
nvcc --version

# Install PyTorch 1.13.1 with the matching CUDA build
conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.6 -c pytorch -c nvidia

pip install matplotlib==3.4.3
pip install PyYAML==5.4.1
pip install tqdm==4.62.3
pip install scipy==1.7.1
pip install imageio==2.9.0
pip install imageio-ffmpeg==0.4.5
pip install scikit-image==0.18.3
pip install scikit-learn==1.0
pip install face-alignment==1.3.5

# Run the basic demo
python demo.py --config config/vox-256.yaml --checkpoint checkpoints/vox.pth.tar --source_image assets/source.png --driving_video assets/driving.mp4

# Copy your own files (e.g., Eran.jpg and french.mp4) into the assets folder

# More options:
python demo.py --config config/vox-256.yaml --checkpoint checkpoints/vox.pth.tar --source_image assets/pic4.jpg --driving_video assets/french.mp4 --find_best_frame --result_video myResult.mp4

python demo.py --config config/vox-256.yaml --checkpoint checkpoints/vox.pth.tar --source_image assets/pic4.jpg --driving_video assets/french.mp4 --result_video myResult.mp4

# Additional flags:
# --find_best_frame
# --result_video myResult.mp4
# --mode "relative" (default) / "standard" / "avd"

# Add the audio:
ffmpeg -an -i myResult.mp4 -vn -i assets/french.mp4 -c:a copy -c:v copy myResult_withAudio.mp4



Connect :

☕ Buy me a coffee — https://ko-fi.com/eranfeit

🖥️ Email : feitgemel@gmail.com

🌐 https://eranfeit.net

🤝 Fiverr : https://www.fiverr.com/s/mB3Pbb

Planning a trip and want ideas you can copy fast?
Here are three detailed guides from our travels:

• 5-Day Ireland Itinerary: Cliffs, Castles, Pubs & Wild Atlantic Views
https://eranfeit.net/unforgettable-trip-to-ireland-full-itinerary/

• My Kraków Travel Guide: Best Places to Eat, Stay & Explore
https://eranfeit.net/my-krakow-travel-guide-best-places-to-eat-stay-explore/

• Northern Greece: Athens, Meteora, Tzoumerka, Ioannina & Nafpaktos (7 Days)
https://eranfeit.net/my-amazing-trip-to-greece/

Each guide includes maps, practical tips, and family-friendly stops—so you can plan in minutes, not hours.

Enjoy,

Eran
