Installation

This page outlines a practical setup to run Wan Animate locally. The steps assume a Linux workstation with a recent NVIDIA GPU and a compatible CUDA toolkit (CUDA requires an NVIDIA GPU, so Apple-silicon Macs are not covered here). Adjust versions to match your system.

Prerequisites

  • GPU with sufficient VRAM for 720p outputs.
  • Python 3.10 or newer.
  • CUDA toolkit and drivers compatible with your PyTorch build.
  • Disk space for model weights and sample data.
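The Python-version and disk-space checks can be automated before installing anything. The sketch below uses only the standard library; the 50 GB free-space threshold is an illustrative assumption, not an official requirement, since actual usage depends on which weights you download.

```python
# Minimal prerequisite check: Python version and free disk space.
# min_free_gb=50 is an assumed threshold, not an official figure.
import shutil
import sys

def check_prerequisites(min_python=(3, 10), min_free_gb=50, path="."):
    """Return a dict of prerequisite checks; True means the check passed."""
    free_gb = shutil.disk_usage(path).free / 1e9
    return {
        "python_ok": sys.version_info[:2] >= min_python,
        "disk_ok": free_gb >= min_free_gb,
        "free_gb": round(free_gb, 1),
    }

print(check_prerequisites())
```

GPU and CUDA compatibility are easiest to confirm after PyTorch is installed (step 5 below), so they are left out of this pre-install check.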

Environment Setup

  1. Create a clean Python environment using your preferred tool.
  2. Install PyTorch for your CUDA version.
  3. Install Wan-related packages and video I/O dependencies.
  4. Download the Wan Animate model weights as instructed by the repository.
  5. Run a short test to verify the setup.
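Step 5 can start with a cheap import check before running any model code. The package names below (torch, torchvision, imageio, numpy) are assumptions based on typical video-generation stacks; match them to the repository's actual requirements file.

```python
# Sketch of step 5: confirm the core dependencies are installed,
# without importing them. Package names are assumptions; adjust to
# the repository's requirements file.
from importlib import util

def verify_packages(names=("torch", "torchvision", "imageio", "numpy")):
    """Return (found, missing) lists of package names."""
    found = [n for n in names if util.find_spec(n) is not None]
    missing = [n for n in names if util.find_spec(n) is None]
    return found, missing

found, missing = verify_packages()
print("found:", found)
print("missing:", missing)
```

If everything is present, a follow-up check of `torch.cuda.is_available()` confirms that the PyTorch build actually sees your GPU.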

Quality Profiles

  • wan-pro: 25fps, 720p
  • wan-std: 15fps, 720p
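The frame-rate difference between the two profiles translates directly into how many frames are generated per clip, which drives render time. A quick arithmetic sketch:

```python
# Frame counts implied by the two profiles for a clip of a given length.
PROFILES = {"wan-pro": 25, "wan-std": 15}  # frames per second

def frame_count(profile: str, seconds: float) -> int:
    """Total frames a profile produces for a clip of the given duration."""
    return round(PROFILES[profile] * seconds)

print(frame_count("wan-pro", 10))  # 250 frames
print(frame_count("wan-std", 10))  # 150 frames
```

At the same resolution, wan-std generates 40% fewer frames than wan-pro, which is why it suits quick iteration.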

Input Limits

  • Video: under 200 MB; shortest side above 200 px; longest side under 2048 px; 2–30 seconds long; aspect ratio from 1:3 to 3:1; mp4/avi/mov.
  • Image: under 5 MB; shortest side above 200 px; longest side under 4096 px; jpg/png/jpeg/webp/bmp.
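Checking these limits locally before uploading saves a round trip. The sketch below mirrors the video limits listed above; exact enforcement lives in the service, so treat this as a pre-flight convenience, not the authoritative check.

```python
# Pre-flight check of the video input limits listed above.
# Thresholds mirror this page; the service performs the real validation.
def check_video(size_mb, width, height, seconds):
    """Return a list of limit violations (empty list means OK)."""
    short, long_ = min(width, height), max(width, height)
    aspect = width / height
    problems = []
    if size_mb >= 200:
        problems.append("file must be under 200 MB")
    if short <= 200:
        problems.append("shortest side must exceed 200 px")
    if long_ >= 2048:
        problems.append("longest side must be under 2048 px")
    if not 2 <= seconds <= 30:
        problems.append("duration must be 2-30 seconds")
    if not 1 / 3 <= aspect <= 3:
        problems.append("aspect ratio must be between 1:3 and 3:1")
    return problems

print(check_video(50, 1280, 720, 10))   # [] -- within all limits
print(check_video(250, 4000, 100, 45))  # multiple violations
```

An analogous check for images would swap in the 5 MB size cap and the 4096 px longest-side limit.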

Basic Workflow

  1. Prepare a character image with clear facial features.
  2. Prepare a reference video with the performer centered and well lit.
  3. Select Move Mode (animate the character image with the video's motion) or Mix Mode (replace the performer in the video with the character).
  4. Choose wan-std for quick tests, wan-pro for final results.
  5. Export the final video after reviewing the output frames.
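The workflow above can be sketched as a small job builder. The `build_job` function and its field names are hypothetical illustrations, not an official API; only the mode and profile names come from this page, and the actual render is left to the official tools.

```python
# The workflow above as a hypothetical job builder (sketch only).
# Mode and profile names come from this page; build_job itself is
# illustrative, not part of any official API.
def build_job(character_image, reference_video, mode="move", final=False):
    """Assemble a job description for a Wan Animate run."""
    if mode not in ("move", "mix"):
        raise ValueError("mode must be 'move' (animate) or 'mix' (replace)")
    return {
        "image": character_image,    # step 1: character image
        "video": reference_video,    # step 2: reference video
        "mode": mode,                # step 3: Move or Mix
        "profile": "wan-pro" if final else "wan-std",  # step 4
    }

job = build_job("character.png", "performance.mp4", mode="mix", final=True)
print(job["profile"])  # wan-pro
```

Keeping the test/final switch in one place makes it easy to iterate on wan-std and flip to wan-pro only for the export in step 5.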

Note: This page summarizes installation concepts for educational use. Refer to official repositories for exact commands and updates.