wan21-fp16-480p

14.0B parameters · FP16
by wangkanai
Image-to-Video Model

Edge AI targets: Mobile · Laptop · Server (32GB+ RAM)
Quick Summary

Wan 2.1 image-to-video (i2v) diffusion model with 14B parameters in FP16, generating 480p video from a single input image and a text prompt.

Device Compatibility

Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 14GB+ RAM
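
The RAM figures above follow from simple arithmetic: at FP16, each parameter takes 2 bytes, so the 14B-parameter checkpoint alone occupies roughly 26 GiB, which is why full-precision loading calls for 32GB+ RAM while CPU offloading can get by with less. A quick back-of-envelope sketch (the function name is illustrative, not part of any library):

```python
def fp16_weight_gib(num_params: int) -> float:
    """Approximate size of FP16 weights in GiB (2 bytes per parameter)."""
    return num_params * 2 / 1024**3

# 14B-parameter model
size = fp16_weight_gib(14_000_000_000)
print(f"{size:.1f} GiB")  # ~26.1 GiB
```

This counts weights only; activations, the VAE, and the text encoder add further overhead on top.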

Code Examples

Export video

```python
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video
import torch

# Load model with memory optimizations
pipe = DiffusionPipeline.from_single_file(
    "E:/huggingface/wan21-fp16-480p/diffusion_models/wan/wan21-i2v-480p-14b-fp16.safetensors",
    torch_dtype=torch.float16,
    use_safetensors=True
)

# Enable memory-efficient attention
pipe.enable_attention_slicing()
pipe.enable_vae_slicing()

# For even lower VRAM usage, offload weights to the CPU between steps.
# Note: skip pipe.to("cuda") when CPU offload is enabled; the offload
# hook manages device placement itself.
pipe.enable_model_cpu_offload()

# Load the conditioning image (replace with your own file)
input_image = load_image("input.jpg")

# Generate video with optimizations
video = pipe(
    image=input_image,
    prompt="your prompt here",
    num_frames=16,  # Reduce frames for lower memory
    num_inference_steps=30,  # Fewer steps for faster generation
    guidance_scale=7.5
).frames[0]

# Write the generated frames out as an MP4
export_to_video(video, "output.mp4", fps=8)
```

Troubleshooting

If you hit out-of-memory errors, enable every memory optimization and reduce the generation parameters:

```python
# Enable all memory optimizations
pipe.enable_attention_slicing()
pipe.enable_vae_slicing()
pipe.enable_model_cpu_offload()

# Reduce generation parameters
video = pipe(
    image=input_image,
    prompt="your prompt here",
    num_frames=16,           # instead of 24
    num_inference_steps=30   # instead of 50
).frames[0]
```
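
The troubleshooting knobs above can be wrapped in a small helper that picks conservative generation settings from your VRAM budget. The function and its thresholds are illustrative assumptions, not measured requirements; profile on your own hardware:

```python
def pick_settings(vram_gib: float) -> dict:
    """Choose conservative generation settings for a given VRAM budget.

    Thresholds are illustrative assumptions, not measured requirements.
    """
    if vram_gib >= 24:
        # Plenty of headroom: full-quality settings, no offload needed
        return {"num_frames": 24, "num_inference_steps": 50, "cpu_offload": False}
    if vram_gib >= 12:
        # Mid-range card: trim frames and steps
        return {"num_frames": 16, "num_inference_steps": 30, "cpu_offload": False}
    # Low VRAM: minimum settings plus CPU offload
    return {"num_frames": 8, "num_inference_steps": 25, "cpu_offload": True}

print(pick_settings(8))
```

The returned dict maps directly onto the pipeline call: pass `num_frames` and `num_inference_steps` through, and call `pipe.enable_model_cpu_offload()` when `cpu_offload` is true.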

Deploy This Model

Production-ready deployment in minutes

Together.ai

Instant API access to this model

Fastest API

Production-ready inference API. Start free, scale to millions.


Replicate

One-click model deployment

Easiest Setup

Run models in the cloud with simple API. No DevOps required.


Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.