Installation
This page outlines a practical setup for running Wan Animate locally. The steps assume a Linux workstation with a recent NVIDIA GPU and a compatible CUDA toolkit (current macOS machines do not support CUDA, so GPU-specific steps apply to Linux). Adjust versions to match your system.
Prerequisites
- GPU with sufficient VRAM for 720p outputs.
- Python 3.10 or newer.
- CUDA toolkit and drivers compatible with your PyTorch build.
- Disk space for model weights and sample data.
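A quick pre-flight script can confirm most of these points before installing anything else. The following is a minimal sketch in Python; the checks are illustrative, and the actual VRAM you need depends on the model variant and resolution you run.

```python
# Pre-flight check for the prerequisites above (illustrative, not exhaustive).
import shutil
import subprocess
import sys

# Python 3.10+
assert sys.version_info >= (3, 10), f"Python 3.10+ required, found {sys.version.split()[0]}"

# NVIDIA driver and GPU visibility (nvidia-smi ships with the driver).
try:
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total,driver_version", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    print("GPU(s):", out.stdout.strip())
except (FileNotFoundError, subprocess.CalledProcessError):
    print("nvidia-smi not found or failed; check the driver installation.")

# Free disk space for model weights and sample data.
free_gib = shutil.disk_usage(".").free / 1024**3
print(f"Free disk space here: {free_gib:.0f} GiB")
```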
Environment Setup
- Create a clean Python environment using your preferred tool.
- Install PyTorch for your CUDA version.
- Install Wan-related packages and video I/O dependencies.
- Download the Wan Animate model weights as instructed by the repository.
- Run a short test to verify the setup, as sketched below.
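A minimal verification sketch, assuming PyTorch has been installed for your CUDA version; the Wan-specific packages and weights are exercised separately by the repository's own examples.

```python
# verify_setup.py -- smoke test after installing PyTorch and the video I/O dependencies.
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    device = torch.device("cuda:0")
    print("GPU:", torch.cuda.get_device_name(device))
    # Tiny matmul on the GPU to confirm CUDA kernels actually run.
    x = torch.randn(1024, 1024, device=device)
    y = x @ x
    torch.cuda.synchronize()
    print("GPU matmul OK, mean:", y.mean().item())
else:
    print("CUDA not available; check that your PyTorch build matches the installed CUDA toolkit.")
```

If the GPU matmul completes, the driver, CUDA runtime, and PyTorch build are consistent; remaining failures usually point to missing Wan packages or weights.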
Quality Profiles
- wan-pro: 25 fps, 720p
- wan-std: 15 fps, 720p
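If you script your runs, it can help to keep the two profiles in a small lookup table. A sketch; the dictionary structure and helper are assumptions, and only the fps/resolution values come from the list above.

```python
# Illustrative profile table; structure is an assumption, values are from the list above.
QUALITY_PROFILES = {
    "wan-pro": {"fps": 25, "resolution": "720p"},  # final results
    "wan-std": {"fps": 15, "resolution": "720p"},  # quick tests
}

def get_profile(name: str) -> dict:
    """Look up a profile, falling back to wan-std for fast iteration."""
    return QUALITY_PROFILES.get(name, QUALITY_PROFILES["wan-std"])

print(get_profile("wan-pro"))  # {'fps': 25, 'resolution': '720p'}
```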
Input Limits
- Video: under 200 MB; shortest side above 200 px; longest side under 2048 px; 2–30 seconds; aspect ratio from 1:3 to 3:1; mp4/avi/mov.
- Image: under 5 MB; shortest side above 200 px; longest side under 4096 px; jpg/png/jpeg/webp/bmp.
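Checking these limits locally before submitting a job saves round trips. Below is a sketch of a video check in Python; probing width, height, and duration (for example with ffprobe or OpenCV) is left out, so the function takes those values as arguments. The image check is analogous with the 5 MB / 4096 px limits.

```python
# Sketch of client-side validation of the video limits above.
import os

VIDEO_EXTS = {".mp4", ".avi", ".mov"}

def check_video(path: str, width: int, height: int, duration_s: float) -> list[str]:
    """Return a list of violated limits (an empty list means the clip looks acceptable)."""
    errors = []
    if os.path.splitext(path)[1].lower() not in VIDEO_EXTS:
        errors.append("unsupported container (use mp4/avi/mov)")
    if os.path.getsize(path) > 200 * 1024 * 1024:
        errors.append("file larger than 200 MB")
    if min(width, height) <= 200:
        errors.append("shortest side must exceed 200 px")
    if max(width, height) >= 2048:
        errors.append("longest side must be under 2048 px")
    if not (2 <= duration_s <= 30):
        errors.append("duration must be 2-30 seconds")
    if not (1 / 3 <= width / height <= 3):
        errors.append("aspect ratio must stay between 1:3 and 3:1")
    return errors

# Example (assumes performance.mp4 exists locally):
# print(check_video("performance.mp4", 1280, 720, 12.0))
```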
Basic Workflow
- Prepare a character image with clear facial features.
- Prepare a reference video with the performer centered and well lit.
- Select Move Mode (animate the character image with the motion from the reference video) or Mix Mode (replace the performer in the reference video with the character).
- Choose wan-std for quick tests, wan-pro for final results.
- Export the final video after reviewing the output frames.
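Tying the workflow together, here is a hypothetical driver. None of these names come from the Wan Animate codebase: `AnimateJob`, `run_job`, and the mode strings are placeholders that show how the choices above fit together; consult the official repository for the real entry points.

```python
# Hypothetical workflow driver; names and structure are placeholders, not the real Wan Animate API.
from dataclasses import dataclass

@dataclass
class AnimateJob:
    character_image: str       # clear facial features, within the image limits above
    reference_video: str       # performer centered and well lit, within the video limits above
    mode: str = "move"         # "move" (animate the image) or "mix" (replace the performer)
    profile: str = "wan-std"   # wan-std for drafts, wan-pro for the final render

def run_job(job: AnimateJob) -> str:
    """Placeholder: validate the choices, run inference, and return the output path."""
    assert job.mode in {"move", "mix"}
    assert job.profile in {"wan-std", "wan-pro"}
    # ... load weights, run inference with the chosen profile, encode the frames ...
    return f"output_{job.mode}_{job.profile}.mp4"

draft = run_job(AnimateJob("character.png", "performance.mp4"))
print("Review the draft, then re-run with profile='wan-pro' for the final export:", draft)
```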
Note: This page summarizes installation concepts for educational use. Refer to official repositories for exact commands and updates.