
Animate Face Photo Free with TPSMM: Realistic AI Face Animation

Face Animation with the Thin-Plate Spline Motion Model

Last Updated on 08/10/2025 by Eran Feit

Animate your face photo free — give life to static images with motion, expressions, or even lip sync. In this post, you’ll learn how to turn a single face photo into a dynamic animated portrait—for free.

We’ll walk you through:

  1. Why face animation works (the tech behind it)
  2. A step-by-step tutorial using open source methods
  3. Easy free tools and apps you can use without coding
  4. Tips, limitations, and best practices

Free Tools & Apps to Animate Face Photos (Online & Open Source)

Here are some free tools (web-based or open-source) that let you animate face photos with minimal effort:

  • Thin-Plate Spline Motion Model (TPSMM): A deep learning framework for realistic face animation built on PyTorch, offering state-of-the-art motion transfer using thin-plate spline transformations. Free and open source; requires Python and a GPU for best results. Excellent for developers and researchers who want full control.
  • MyHeritage Deep Nostalgia: Converts a face photo into a short animation (facial movements). Free with limitations (watermark, limited animations).
  • First Order Motion Model Demo: Upload a face image plus a driving video to animate. The demo is free; quality depends on your input images.
  • Wombo AI: Animates portraits to sing or move. Some features are paywalled; a free plan is available.
  • Avatarify / Live Avatar apps: Turn selfies or photos into animated avatars live. May require an app store download; check the free tier.

💡 Why TPSMM Stands Out:
Unlike consumer apps that limit features or watermark outputs, TPSMM is an open-source framework built on PyTorch. This means you can animate face photos completely free, customize the model, and even extend it for research or creative projects.


Welcome to our tutorial! Image animation brings the static face in a source image to life by following the motion of a driving video, using the Thin-Plate Spline Motion Model.

Thin-Plate Spline Motion Model (TPSMM) is a deep learning framework for realistic face animation, built on PyTorch.

It brings static images to life by transferring motion from a driving video onto a source photo, producing natural movements such as head turns, expressions, and lip-sync.

Unlike earlier approaches, TPSMM uses thin-plate spline transformations and multi-resolution occlusion masks, which better capture complex, non-linear motion and handle hidden areas of the face more effectively.

The project includes pretrained models, demo scripts, and training configurations for datasets like VoxCeleb, TaiChiHD, and TED, making it ideal for researchers, developers, and creators exploring high-quality face animation and image reenactment.
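To make the core idea concrete, here is a minimal, self-contained sketch of a thin-plate spline warp, the geometric building block TPSMM learns to predict. This is not the repository's code: the keypoint coordinates are invented for illustration, and SciPy's RBFInterpolator stands in for the learned keypoint detector.

import numpy as np
import imageio
from scipy.interpolate import RBFInterpolator
from scipy.ndimage import map_coordinates

# Load the source portrait (assumes an RGB image, H x W x 3).
src = imageio.imread("assets/source.png").astype(np.float64)
h, w = src.shape[:2]

# Hypothetical matched keypoints in (row, col) order: where each landmark sits
# in the driving frame vs. where it sits in the source photo.
driving_pts = np.array([[60, 80], [60, 180], [150, 130], [200, 90], [200, 170]], dtype=float)
source_pts = np.array([[65, 78], [62, 185], [155, 128], [205, 95], [198, 172]], dtype=float)

# Fit a thin-plate spline that maps output (driving-aligned) coordinates back to
# source coordinates, so we can sample the source image (backward warping).
tps = RBFInterpolator(driving_pts, source_pts, kernel="thin_plate_spline")

rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
grid = np.stack([rows.ravel(), cols.ravel()], axis=1).astype(float)
sample = tps(grid)  # (H*W, 2) source positions to sample for every output pixel

warped = np.stack(
    [map_coordinates(src[..., c], [sample[:, 0], sample[:, 1]], order=1).reshape(h, w)
     for c in range(3)],
    axis=-1,
)
imageio.imwrite("warped.png", warped.clip(0, 255).astype(np.uint8))

TPSMM's contribution is learning where those keypoints are and predicting many local TPS transformations plus occlusion masks, rather than using a single hand-placed warp like this sketch.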

What You’ll Learn:

Part 1: Set up your environment (Python, local)

Part 2: Clone the GitHub repository

Part 3: Download the model weights

Part 4: Demo 1: Run the provided demo

Part 5: Demo 2: Use your own images and video

You can find more tutorials, and join my newsletter here : https://eranfeit.net/

Check out our tutorial here : https://www.youtube.com/watch?v=oXDm6JB9xak

You can find the Instructions file for this video here : https://ko-fi.com/s/7e449822e3


Requirements

  • Python: 3.9 recommended
  • GPU (optional but recommended): NVIDIA with matching CUDA drivers
  • OS: Windows 10/11, Ubuntu 20.04+ (tested)
  • Tools: Git, Conda (or venv), FFmpeg
  • CPU: You can run on CPU to validate the pipeline, but generation will be slow.

A. Setup: Version‑Pinned Environment

Use Conda (or adapt to python -m venv). Adjust CUDA if your machine differs.

Repository: https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model

conda create -n moveFace39 python=3.9
conda activate moveFace39

git clone https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model.git
cd Thin-Plate-Spline-Motion-Model

# Check your CUDA version (this walkthrough uses CUDA 11.6)
nvcc --version

# Install PyTorch 1.13.1 with the matching CUDA build
conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.6 -c pytorch -c nvidia

pip install matplotlib==3.4.3
pip install PyYAML==5.4.1
pip install tqdm==4.62.3
pip install scipy==1.7.1
pip install imageio==2.9.0
pip install imageio-ffmpeg==0.4.5
pip install scikit-image==0.18.3
pip install scikit-learn==1.0
pip install face-alignment==1.3.5
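After installing, a quick sanity check (a minimal sketch, run inside the activated moveFace39 environment) confirms the pinned packages import and that PyTorch can see your GPU:

# Environment sanity check for the moveFace39 environment.
import torch, torchvision, yaml, scipy, skimage, imageio, face_alignment

print("torch:", torch.__version__)               # expect 1.13.1
print("torchvision:", torchvision.__version__)   # expect 0.14.1
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))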


B. Get Pretrained Checkpoints

mkdir checkpoints

Download the files from here: https://drive.google.com/drive/folders/1pNDo1ODQIb5HVObRtCmubqJikmR7VVLT and place them in the checkpoints folder.
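Before running the demo, it is worth confirming the download is intact. A small sketch, assuming vox.pth.tar is a regular PyTorch checkpoint file (which is how the repository distributes it):

import os
import torch

ckpt_path = "checkpoints/vox.pth.tar"
print("exists:", os.path.exists(ckpt_path),
      "| size (MB):", round(os.path.getsize(ckpt_path) / 1e6, 1))

# Loading on CPU is enough to verify the file is not truncated or corrupted.
checkpoint = torch.load(ckpt_path, map_location="cpu")
print("top-level keys:", list(checkpoint.keys()))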


C. Animate a face photo (animate face photo free method)

python demo.py --config config/vox-256.yaml --checkpoint checkpoints/vox.pth.tar --source_image assets/source.png --driving_video assets/driving.mp4

Copy Eran.jpg and french.mp4 into the assets folder to try your own files.

More options:

python demo.py --config config/vox-256.yaml --checkpoint checkpoints/vox.pth.tar --source_image assets/pic4.jpg --driving_video assets/french.mp4 --find_best_frame --result_video myResult.mp4

python demo.py --config config/vox-256.yaml --checkpoint checkpoints/vox.pth.tar --source_image assets/pic4.jpg --driving_video assets/french.mp4 --result_video myResult.mp4

Additional flags:

--find_best_frame
--result_video myResult.mp4
--mode "relative" (default) / "standard" / "avd"
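If you want to animate several photos with the same driving clip, a small wrapper around the commands above saves retyping. This is only a convenience sketch; the source file names are placeholders for your own assets.

# Run the repo's demo.py over several source photos with one driving video.
import subprocess

sources = ["assets/pic1.jpg", "assets/pic2.jpg", "assets/pic3.jpg"]  # placeholders
driving = "assets/french.mp4"

for src in sources:
    out = src.rsplit("/", 1)[-1].rsplit(".", 1)[0] + "_animated.mp4"
    subprocess.run(
        [
            "python", "demo.py",
            "--config", "config/vox-256.yaml",
            "--checkpoint", "checkpoints/vox.pth.tar",
            "--source_image", src,
            "--driving_video", driving,
            "--find_best_frame",
            "--result_video", out,
        ],
        check=True,
    )
    print("wrote", out)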


Helpful Flags

  • --find_best_frame → auto-aligns the source to the driver’s pose (great for mismatched angles).
    Example:
    python demo.py --config config/vox-256.yaml \
      --checkpoint checkpoints/vox.pth.tar \
      --source_image assets/source.png \
      --driving_video assets/driving.mp4 \
      --result_video result_best.mp4 \
      --find_best_frame
  • --mode → motion transfer strategy. Try relative (default), standard, or avd if you see drift/warping.
  • Crop tighter around the face if landmarks seem off.

D. Add Audio (Mux from the Driver Clip)

Add the audio:

ffmpeg -an -i myResult.mp4 -vn -i assets/french.mp4 -c:a copy -c:v copy myResult_withAudio.mp4
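If the stream-copy command above complains (for example, the driving clip has no audio track, or its audio codec does not fit the MP4 container), a more explicit mapping is a common FFmpeg alternative. This is a general FFmpeg pattern, not something specific to TPSMM; the ? makes the audio stream optional and -shortest trims the audio to the video length:

ffmpeg -i myResult.mp4 -i assets/french.mp4 -map 0:v:0 -map 1:a:0? -c:v copy -c:a aac -shortest myResult_withAudio.mp4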


Troubleshooting (Read This If Something Breaks)

1) PyTorch can’t see GPU

  • Your driver/CUDA runtime may not match the PyTorch build. Reinstall PyTorch with the matching CUDA version, or fall back to CPU while you debug. When the GPU is set up correctly, torch.cuda.is_available() returns True; a quick check is shown below.
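A minimal diagnostic sketch comparing what your installed PyTorch build was compiled for with what it can actually see:

import torch

print("torch:", torch.__version__)
print("built for CUDA:", torch.version.cuda)   # None on a CPU-only build
print("GPU visible:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))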

2) Missing checkpoints / wrong path

  • Ensure the file name and path exactly match your command, e.g., checkpoints/vox.pth.tar.

3) Face drift or tearing

  • Use --find_best_frame, try --mode standard, and crop more tightly. Make sure the driver clip has a similar head size/angle.

4) FFmpeg audio errors

  • Update FFmpeg and confirm which input actually has audio. Try -map 1:a:0? to make audio optional if needed.

5) Windows build issues

  • If face-alignment or scikit-image fails, install prebuilt wheels, or downgrade to a known‑good version.

Ethical Use

Use consented images and videos. Disclose edits when appropriate. Respect platform policies and local laws. Don’t use this to mislead or impersonate people.

FAQ :

How can I animate a face photo for free with TPSMM?

You can animate a face photo free by running the Thin-Plate Spline Motion Model (TPSMM) locally with PyTorch. Provide a single source portrait and a short driving video, then run the inference script to generate an animated result. This tutorial includes step-by-step setup, commands, and tips for best quality.

What do I need installed to use TPSMM?

You’ll need Python 3.9, PyTorch, the pinned dependencies from the setup section (SciPy, imageio, scikit-image, face-alignment, etc.), and FFmpeg for optional audio muxing. A CUDA-enabled GPU is recommended for speed and higher fidelity, but CPU can work for small tests.

Can I animate face photos free without coding?

Yes. Several online tools offer free tiers that animate portraits without code. However, they may add watermarks, limit resolution, or restrict exports. TPSMM gives you full control and truly free results, but requires a simple local setup.

Which image works best for realistic free face animation?

Use a sharp, front-facing portrait with even lighting and minimal occlusions (no sunglasses, hair over face). Crop so the face fills most of the frame and, if needed, upscale to 256–512px before animation for cleaner motion.
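As a rough illustration of that preparation step (a sketch assuming Pillow is available in your environment; the crop box and file names are placeholders):

# Crop roughly around the face and resize to 256x256 before running the demo.
from PIL import Image

img = Image.open("assets/Eran.jpg")
left, top, right, bottom = 120, 60, 620, 560   # adjust so the face fills the crop
face = img.crop((left, top, right, bottom)).resize((256, 256), Image.LANCZOS)
face.save("assets/source.png")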

What driving video should I use to animate my photo?

Pick a short video with natural head motions and expressions, similar head angle to your source image. Gentle movements transfer better and reduce artifacts. Keep clips 3–10 seconds for quick experimentation and consistent results.

Does TPSMM require a GPU to animate face photos free?

A GPU is not strictly required, but it significantly speeds up processing and can improve quality for higher resolutions. On CPU, expect longer runtimes; for production-level results, a CUDA-capable GPU is recommended.

How do I fix jitter or distortions in the animated result?

Try a better aligned source portrait, stabilize or replace the driving video, reduce motion intensity, and ensure faces are centered. You can also pre-crop, denoise, or lightly sharpen the image. If possible, test multiple driving clips and pick the cleanest output.

Can I add audio or lip-sync to my animated face photo?

Yes. After generating the animation, you can mux audio with FFmpeg or use a separate lip-sync model to align mouth movements. Keep clips short and match audio length to avoid desynchronization.

Is it legal and ethical to animate someone’s face photo?

Only animate images you own or have permission to use. Avoid impersonation, deception, or sensitive content. Clearly disclose edits when relevant and follow local laws and platform guidelines.

Are there free alternatives if I don’t want to install anything?

Yes—several online tools provide free tiers to animate portraits without setup. They are convenient for quick tests but may add watermarks, compress outputs, or limit exports. TPSMM remains the best option for full, free control and highest customization.

More cool Python projects




Connect :

☕ Buy me a coffee — https://ko-fi.com/eranfeit

🖥️ Email : feitgemel@gmail.com

🌐 https://eranfeit.net

🤝 Fiverr : https://www.fiverr.com/s/mB3Pbb

Planning a trip and want ideas you can copy fast?
Here are three detailed guides from our travels:

• 5-Day Ireland Itinerary: Cliffs, Castles, Pubs & Wild Atlantic Views
https://eranfeit.net/unforgettable-trip-to-ireland-full-itinerary/

• My Kraków Travel Guide: Best Places to Eat, Stay & Explore
https://eranfeit.net/my-krakow-travel-guide-best-places-to-eat-stay-explore/

• Northern Greece: Athens, Meteora, Tzoumerka, Ioannina & Nafpaktos (7 Days)
https://eranfeit.net/my-amazing-trip-to-greece/

Each guide includes maps, practical tips, and family-friendly stops—so you can plan in minutes, not hours.

Enjoy,

Eran
