wan22-fp16-encoders

FP16 · by wangkanai · Video Model · 1 language · 0 downloads · New (early-stage)
Edge AI targets: Mobile · Laptop · Server
Quick Summary

High-precision FP16 text encoders for the WAN (Worldly Advanced Network) 2.2 video generation pipeline.
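For a rough sense of why FP16 matters here: halving the bytes per weight halves encoder memory. A minimal back-of-envelope sketch, where the parameter count is an illustrative assumption rather than a measured figure for these files:

```python
# Back-of-envelope size for a T5-XXL-class text encoder.
# ~4.7e9 parameters is an assumed, illustrative figure.
params = 4.7e9
fp16_gib = params * 2 / 2**30  # 2 bytes per weight in FP16
fp32_gib = params * 4 / 2**30  # 4 bytes per weight in FP32
print(f"FP16: ~{fp16_gib:.1f} GiB vs FP32: ~{fp32_gib:.1f} GiB")
```

The same arithmetic explains why an FP16 encoder fits on consumer GPUs where an FP32 copy would not.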

Code Examples

Usage Example (Python / PyTorch)
from diffusers import DiffusionPipeline
import torch

# Load WAN2.2 pipeline with custom text encoders
# (text_encoder_path as a from_pretrained kwarg assumes pipeline-specific
#  support; stock diffusers pipelines take a preloaded `text_encoder` component)
pipe = DiffusionPipeline.from_pretrained(
    "your-wan22-model",
    text_encoder_path="E:/huggingface/wan22-fp16-encoders/text_encoders/t5-xxl-fp16.safetensors",
    torch_dtype=torch.float16,
    variant="fp16"
).to("cuda")

# Generate video from text
prompt = "A serene mountain landscape at sunset with flowing clouds"
video = pipe(prompt, num_frames=24, height=512, width=512).frames

# Save output (frames are PIL images; use the diffusers helper to write a video)
from diffusers.utils import export_to_video
export_to_video(video[0], "output_video.mp4")
Multilingual Example (Python / PyTorch)
from diffusers import DiffusionPipeline
import torch

# Load with multilingual encoder
pipe = DiffusionPipeline.from_pretrained(
    "your-wan22-model",
    text_encoder_path="E:/huggingface/wan22-fp16-encoders/text_encoders/umt5-xxl-fp16.safetensors",
    torch_dtype=torch.float16,
).to("cuda")

# Generate with multilingual prompt
prompt = "東京の夜景、ネオンライトと雨"  # Japanese: Tokyo nightscape with neon lights and rain
video = pipe(prompt, num_frames=48, height=768, width=768).frames
Low-VRAM Example (Python / PyTorch)
from diffusers import DiffusionPipeline
import torch

# Enable CPU offloading for lower VRAM systems
pipe = DiffusionPipeline.from_pretrained(
    "your-wan22-model",
    text_encoder_path="E:/huggingface/wan22-fp16-encoders/text_encoders/t5-xxl-fp16.safetensors",
    torch_dtype=torch.float16,
).to("cuda")

# Enable model CPU offload
pipe.enable_model_cpu_offload()

# Enable attention slicing for further memory reduction
pipe.enable_attention_slicing(1)

# Generate with reduced memory footprint
prompt = "A serene mountain landscape at sunset with flowing clouds"
video = pipe(prompt, num_frames=16).frames
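To see why attention slicing with slice_size=1 lowers peak memory, consider the attention score matrix, which is materialised one head at a time instead of all heads at once. The head count and sequence length below are illustrative assumptions, not WAN 2.2's actual dimensions:

```python
# Peak size of the FP16 attention score matrix, with and without slicing.
# batch, heads, seq are illustrative assumptions.
batch, heads, seq, bytes_fp16 = 1, 24, 4096, 2
full_mib = batch * heads * seq * seq * bytes_fp16 / 2**20    # all heads at once
sliced_mib = batch * 1 * seq * seq * bytes_fp16 / 2**20      # one head per step
print(f"all heads: {full_mib:.0f} MiB, sliced: {sliced_mib:.0f} MiB")
```

The trade-off is speed: slicing runs the heads sequentially, so it is slower but keeps the peak allocation to a single head's scores.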

Deploy This Model

Production-ready deployment in minutes.

Together.ai (fastest API) — instant API access to this model. Production-ready inference API; start free, scale to millions.

Replicate (easiest setup) — one-click model deployment. Run models in the cloud with a simple API; no DevOps required.

Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.