Wan2.2-Animate: Open-Source AI That Solves Character Consistency

Introduction
I’m excited to share that Wan 2.2 Animate has been released, building on the Wan 2.2 model. The announcement showcases strong results in two areas that matter most for creators: animating static characters and replacing characters in existing videos while maintaining motion, timing, and expression fidelity. The examples highlight consistent facial expressions, convincing pose transfer, and stable scene preservation.
This article walks through what Wan 2.2 Animate is, how it works at a high level, what you can expect from each mode, and how to try it through the online demo. I’ll keep the flow and order aligned with the original overview.
What Is Wan 2.2 Animate?
Wan 2.2 Animate focuses on two tasks:
- Character animation: Animate a character image using the motion from a reference video.
- Character replacement: Replace a character in a video with a different character while keeping the background and motion intact.
These two modes cover the common needs for animation pipelines: bringing a character to life and swapping characters into existing footage without disrupting the scene.
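Since the model is openly released, you can also fetch the weights for local experimentation. Below is a minimal sketch using the huggingface_hub client; note that the repo id is my assumption based on the public release naming, so verify it on the model hub before running.

```python
from huggingface_hub import snapshot_download  # pip install huggingface_hub

# Assumed repo id for the public release; confirm on huggingface.co first.
local_dir = snapshot_download("Wan-AI/Wan2.2-Animate-14B")
print("Model files downloaded to:", local_dir)
```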
Character Animation (Input Image + Reference Video)
In character animation, you provide:
- One or more images of the character you want to animate.
- A reference video that contains the motion, pacing, and expressions you want the character to follow.
The model learns from the inputs and outputs a new video where your character follows the motion in the reference. Facial expressions are a standout: the model captures nuanced changes in the face and transfers them to the character with strong consistency.
Character Replacement (Swap a Character in a Video)
In character replacement, you provide:
- A source video where you want to replace a character.
- An image or images of the new character.
The output is a video where the original character is swapped with the new one, but everything else—background, scene layout, lighting, camera motion—remains consistent. The model mirrors the original performance: expressions, body pose, and actions are transferred cleanly, creating a believable swap.
Table Overview
Here’s a quick comparison of the two modes:
| Mode | Inputs | Output | Preserves Background | Core Use Case |
|---|---|---|---|---|
| Character Animation | Character image(s) + reference video | Animated character following the reference motion | Not applicable | Animate a static character using a motion clip |
| Character Replacement | Source video + new character image(s) | Video with the original character replaced | Yes | Swap a character in an existing video |
Key Features
Unified Character Animation and Replacement
The model supports both creating an animated performance from a still character and replacing a character inside a live-action or stylized video. This unified approach lets you work across different content types without jumping between separate tools.
Expression Fidelity
Facial expression transfer is a highlight. Subtle changes—eyebrow raises, smiles, lip movements—are reproduced with strong consistency. This helps keep the character’s performance coherent across frames.
Pose and Motion Consistency
For replacement, the model follows the source performance almost one-to-one. Movements, gestures, and timing carry over to the new character so the action feels natural in context.
Dynamic Motion and Camera Support
The system handles motion-heavy shots and dynamic camera movement. It tracks the original motion and maintains alignment over time, which is crucial for any footage that isn’t static.
Background Preservation for Replacement
In replacement mode, the background environment stays intact. The system focuses on swapping the character while retaining scene elements such as lighting, perspective, and camera motion.
Works with Human and Stylized Characters
The demos illustrate human subjects and stylized characters. The method applies to a range of character designs while keeping expression and motion fidelity at the center.
How to Use Wan 2.2 Animate
The workflow is straightforward. You decide which mode you need—character animation or character replacement—then prepare the required inputs.
Character Animation: Step‑by‑Step
1. Prepare character image(s)
   - Use a clear image of your character. Multiple angles can help, but a single well-framed image often works.
   - Ensure the character’s face is visible if you want strong expression transfer.
2. Prepare a reference video
   - Choose a clip that contains the motion, gestures, and expressions you want the character to mimic.
   - Short clips typically process faster and are easier to evaluate (a quick input check is sketched after this list).
3. Upload inputs to the tool or demo
   - Select the character image(s).
   - Select the reference video.
4. Generate the animation
   - Start the process and wait for the output. Typical processing takes around 1–3 minutes per short clip in the online demo.
5. Review the result
   - Check expression accuracy, pose alignment, and overall motion.
   - Iterate by adjusting the input image or trying a different reference video if you want a different style of movement.
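Before uploading, it can save iteration time to sanity-check the inputs from steps 1 and 2. Here is a minimal Python sketch, assuming Pillow and ffmpeg are installed; the file names and thresholds are placeholders to adapt.

```python
import subprocess
from pathlib import Path

from PIL import Image  # pip install pillow


def check_inputs(image_path: str, video_path: str, max_seconds: float = 10.0) -> None:
    """Sanity-check a character image and reference clip before uploading."""
    # Confirm both files exist.
    for p in (image_path, video_path):
        if not Path(p).is_file():
            raise FileNotFoundError(p)

    # Warn if the character image is small; sharper inputs transfer better.
    with Image.open(image_path) as img:
        w, h = img.size
        if min(w, h) < 512:
            print(f"Warning: {image_path} is only {w}x{h}; a larger image may help.")

    # Probe the clip length with ffprobe (ships with ffmpeg).
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", video_path],
        capture_output=True, text=True, check=True,
    )
    duration = float(out.stdout.strip())
    if duration > max_seconds:
        print(f"Warning: clip is {duration:.1f}s; shorter clips iterate faster.")


check_inputs("character.png", "reference.mp4")
```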
Tips for Better Animation Results
- Use a high‑quality character image with good lighting.
- Match the reference video’s framing to the style you want (close‑ups for expression transfer, mid‑shots for full‑body motion).
- Keep the reference motion clear and free of heavy occlusions.
Character Replacement: Step‑by‑Step
1. Prepare the source video
   - Pick the footage containing the character you want to replace.
   - Ensure the character is visible and not heavily obstructed for long periods.
2. Prepare the new character image(s)
   - Use a clear image that defines the character’s appearance.
   - Front‑facing images with visible facial features help with expression transfer.
3. Upload inputs to the tool or demo
   - Select the source video.
   - Select the new character image(s).
4. Generate the replacement
   - Start the process. For short clips, the online demo often returns results in a couple of minutes.
5. Review and refine
   - Confirm that only the character changed while the background stayed the same.
   - Check expressions, pose alignment, and timing.
   - If needed, try a different character image for better fidelity.
Tips for Better Replacement Results
- Use source videos with clear visibility of the subject you want to replace.
- Provide a character image that matches the intended style (lighting, angle) as closely as possible.
- Keep clip lengths manageable for faster iteration (a trimming sketch follows these tips).
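On that last tip, a quick way to keep clips manageable is to trim the source before uploading. Below is a minimal sketch that shells out to the standard ffmpeg CLI (ffmpeg must be installed); the paths and timestamps are placeholders.

```python
import subprocess


def trim_clip(src: str, dst: str, start: float = 0.0, seconds: float = 5.0) -> None:
    """Cut a short segment out of the source video with ffmpeg."""
    subprocess.run(
        ["ffmpeg", "-y",            # overwrite the output if it exists
         "-ss", str(start),         # start offset in seconds
         "-i", src,                 # input video
         "-t", str(seconds),        # segment length
         "-c:v", "libx264", "-an",  # re-encode video, drop audio
         dst],
        check=True,
    )


# Example: keep the first 5 seconds of the source footage for a quick test.
trim_clip("source.mp4", "source_short.mp4", start=0.0, seconds=5.0)
```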
Online Demo on ModelScope (China Server)
There’s a hosted demo available on ModelScope that mirrors the main workflow. It’s helpful if you want to try Wan 2.2 Animate without setting up anything locally.
What You’ll Need
- A character image (the face or full body of the character you want in the output).
- A reference template video (for animation) or a source video (for replacement).
Steps on the Demo
- Visit the ModelScope demo page for Wan 2.2 Animate.
- Read the on‑page instructions to confirm the input requirements.
- Upload the character image (the main character for your output).
- Upload the reference template video (for character animation) or the source video (for character replacement).
- Click Generate Video.
- Wait for processing. Typical time is about 1–3 minutes for short clips.
- Download or preview the result.
The demo replicates the examples you’ve seen in the announcement, with consistent expression transfer and motion tracking. It’s a fast way to validate your inputs and understand how the model responds to different characters and videos.
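If the hosted demo exposes a Gradio API (many ModelScope studios do), you can also drive it programmatically with the gradio_client package. Everything below except the gradio_client API itself is an assumption: the demo URL and endpoint name are placeholders to copy from the demo page’s “Use via API” panel, if one is available.

```python
from gradio_client import Client, handle_file  # pip install gradio_client

# Placeholder URL and endpoint: copy the real values from the demo page's
# "Use via API" panel (if exposed) before running.
client = Client("https://<wan-2.2-animate-demo-url>")

result = client.predict(
    handle_file("character.png"),  # character image
    handle_file("reference.mp4"),  # reference/template or source video
    api_name="/generate",          # hypothetical endpoint name
)
print("Result:", result)
```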
Practical Notes on Results
- Expression accuracy: The model pays close attention to facial changes. Eyelines, smiles, and subtle shifts tend to carry over clearly.
- Motion fidelity: Body pose and gestures from the reference video map well to the target character.
- Scene stability: In replacement mode, the background is preserved, which helps maintain continuity across frames with moving cameras.
- Content length: Shorter clips process faster. You can chain outputs later if needed (a concatenation sketch follows these notes).
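For that chaining note, ffmpeg’s concat demuxer can join short outputs losslessly as long as the clips share codec and resolution. A minimal sketch; the file names are placeholders.

```python
import subprocess
import tempfile
from pathlib import Path


def concat_clips(clips: list[str], dst: str) -> None:
    """Join same-codec clips end to end with ffmpeg's concat demuxer."""
    # The concat demuxer resolves paths relative to the list file,
    # so write absolute paths to be safe.
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        for clip in clips:
            f.write(f"file '{Path(clip).resolve()}'\n")
        list_path = f.name

    subprocess.run(
        ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
         "-i", list_path, "-c", "copy", dst],
        check=True,
    )


concat_clips(["part1.mp4", "part2.mp4"], "full.mp4")
```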
Troubleshooting and Iteration
If your first result isn’t what you expected, iterate on the inputs:
- Improve character imagery: Use sharper, higher‑resolution images with a clear face and minimal occlusion.
- Adjust the reference motion: Pick a video with clean, readable movements in the frame you want the character to emulate.
- Reframe the shot: For tight expression transfer, close‑ups of the reference help. For full‑body movement, use mid‑ or wide‑shots.
- Check exposure: Ensure both the character image and the video are well lit to avoid artifacts (a quick Pillow adjustment is sketched below).
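For the imagery and exposure points above, simple Pillow adjustments are often enough before re-uploading. A minimal sketch; the enhancement factors are starting points to tune by eye.

```python
from PIL import Image, ImageEnhance  # pip install pillow

# Load the character image, then nudge brightness and sharpness.
img = Image.open("character.png").convert("RGB")
img = ImageEnhance.Brightness(img).enhance(1.15)  # >1.0 brightens
img = ImageEnhance.Sharpness(img).enhance(1.5)    # >1.0 sharpens
img.save("character_adjusted.png")
```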
FAQs
What inputs do I need for character animation?
You need a character image (or images) and a reference video. The output is a video of your character following the reference motion and expressions.
What inputs do I need for character replacement?
You need the source video containing the original character and a new character image (or images). The output is the same video with the character replaced, while keeping the background.
How long does it take to generate results?
On the online demo, short clips typically take around 1–3 minutes to process. Actual time depends on server load and clip length.
Does it keep the background unchanged for replacement?
Yes. In character replacement mode, the background environment remains the same as the source video. The model focuses on replacing the character while preserving the rest of the scene.
Can it handle dynamic camera motion?
Yes. The system supports dynamic motion and camera movement. It keeps alignment over time so the result tracks the original shot.
Does it work on both humans and stylized characters?
Yes. The examples cover both human subjects and stylized characters. The system is designed to apply expression and motion transfer across different character types.
Do I need multiple character images?
The demo accepts one or more images. A single clear image often works, but additional views can help in some cases.
Where can I try it online?
You can try the demo on ModelScope. Upload the required inputs and click Generate Video to get a result.
Conclusion
Wan 2.2 Animate brings together two core capabilities—character animation and character replacement—into a single system focused on expression fidelity, pose consistency, and scene stability. With character animation, you animate a static character using a reference video. With character replacement, you swap a character into existing footage while keeping the background and camera motion intact.
The online ModelScope demo makes it easy to test your own inputs: upload a character image and a reference or source video, generate the output, and evaluate the results in a few minutes. If quality isn’t where you want it, iterate by refining the character image, choosing a clearer reference motion, or adjusting framing.
For creators who need reliable expression transfer, convincing pose mapping, and background preservation, Wan 2.2 Animate provides a practical way to bring characters to life or replace them in existing videos with consistent, repeatable results.