wan25-fp8-i2v
by wangkanai
Image Model
Quick Summary
FP8-optimized WAN 2.5 image-to-video (I2V) model with 14B parameters, animating a static input image into a short video clip.
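As a rough back-of-envelope sketch of why FP8 matters here (assuming 1 byte per parameter for FP8 e4m3 and 2 bytes for FP16; these are estimates for the 14B-parameter transformer alone, not measured figures, and exclude the VAE and activations):

```python
# Estimated weight footprint of a 14B-parameter model at two precisions.
params = 14e9
fp16_gb = params * 2 / 1e9  # FP16: 2 bytes per parameter
fp8_gb = params * 1 / 1e9   # FP8 (e4m3): 1 byte per parameter
print(f"FP16 ~{fp16_gb:.0f} GB, FP8 ~{fp8_gb:.0f} GB")  # FP16 ~28 GB, FP8 ~14 GB
```

Halving the weight footprint is what makes a 14B model plausible on a single consumer GPU.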
Code Examples
Repository Contents

wan25-fp8-i2v/
├── diffusion_models/
│   └── wan/          # Base WAN 2.5 I2V models (empty)
└── README.md         # This file (15KB)

Usage Examples
from diffusers import DiffusionPipeline, AutoencoderKL
from diffusers.utils import export_to_video
from PIL import Image
import torch

# Load the WAN 2.5 FP8 I2V model (when released)
pipe = DiffusionPipeline.from_single_file(
    "E:/huggingface/wan25-fp8-i2v/diffusion_models/wan/wan25-i2v-14b-fp8-high-scaled.safetensors",
    torch_dtype=torch.float8_e4m3fn,  # FP8 precision
)

# Load the shared VAE from the wan25-vae repository
pipe.vae = AutoencoderKL.from_single_file(
    "E:/huggingface/wan25-vae/vae/wan25-vae-fp8.safetensors",
    torch_dtype=torch.float8_e4m3fn,
)
pipe.to("cuda")

# Load the input image
input_image = Image.open("input.jpg")

# Generate a video from the static image
video = pipe(
    image=input_image,
    prompt="The scene comes to life with gentle, natural movement",
    num_frames=48,            # 2 seconds at 24 fps
    num_inference_steps=40,
    guidance_scale=6.5,
    motion_intensity=0.7,     # Control motion strength (0-1)
).frames

# Save the video
export_to_video(video, "animated_output.mp4", fps=24)
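The num_frames/fps relationship used above (48 frames for 2 seconds at 24 fps) can be wrapped in a small helper; the function name is hypothetical and not part of any model API:

```python
def frames_for(seconds: float, fps: int = 24) -> int:
    """Number of frames needed for a clip of the given duration and frame rate."""
    return round(seconds * fps)

print(frames_for(2, 24))  # 48, matching num_frames=48 in the example above
```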
# Use the low-scaled variant for faster inference (lower VRAM)
pipe_low = DiffusionPipeline.from_single_file(
    "E:/huggingface/wan25-fp8-i2v/diffusion_models/wan/wan25-i2v-14b-fp8-low-scaled.safetensors",
    torch_dtype=torch.float8_e4m3fn,
)

# Use the high-scaled variant for maximum quality
pipe_high = DiffusionPipeline.from_single_file(
    "E:/huggingface/wan25-fp8-i2v/diffusion_models/wan/wan25-i2v-14b-fp8-high-scaled.safetensors",
    torch_dtype=torch.float8_e4m3fn,
)
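Choosing between the two checkpoints could be automated from the available VRAM budget; the helper below is a sketch, and the 20 GB threshold is an assumed placeholder to tune for your hardware, not a measured requirement:

```python
def pick_variant(free_vram_gb: float, threshold_gb: float = 20.0) -> str:
    """Return the checkpoint filename for a given VRAM budget (GB).

    The default threshold is a placeholder assumption, not a measured figure.
    """
    if free_vram_gb >= threshold_gb:
        return "wan25-i2v-14b-fp8-high-scaled.safetensors"
    return "wan25-i2v-14b-fp8-low-scaled.safetensors"

print(pick_variant(24.0))  # high-scaled on a 24 GB card
```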
# Generate with the high-quality model
portrait_image = Image.open("portrait.jpg")
video = pipe_high(
    image=portrait_image,
    prompt="Subtle facial expressions and natural head movement",
    num_frames=48,
    guidance_scale=7.0,
).frames

Citation
@misc{wan25-i2v-fp8,
  title={WAN 2.5 I2V: Advanced Image-to-Video Generation with FP8 Optimization},
  author={WAN Team},
  year={2025},
  howpublished={\url{https://huggingface.co/wan-models/wan-2.5-fp8-i2v}},
  note={FP8-optimized image-to-video generation model with 14B parameters}
}