stable-diffusion-1.5_io32_amdgpu

by amd · Image Model · 5B params · 32 downloads · New · Early-stage

Edge AI: Mobile · Laptop · Server (12GB+ RAM recommended)
Quick Summary

Stable Diffusion v1.5 text-to-image model exported to ONNX and optimized by AMD for DirectML inference on AMD GPUs. It generates images from text prompts.

Device Compatibility

Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 5GB+ RAM

Code Examples

1. Requirements

```text
accelerate
numpy==1.26.4  # newer versions of numpy change the dtype when multiplying
diffusers
torch
transformers
onnxruntime-directml
```

2. Inference Demo
```python
import onnxruntime as ort
from diffusers import OnnxStableDiffusionPipeline

# Path to the downloaded model directory
model_dir = "D:\\Models\\stable-diffusion-v1-5_io32"

batch_size = 1
num_inference_steps = 30
image_size = 512
guidance_scale = 7.5
prompt = "a beautiful cabin in the mountains of Lake Tahoe"

# 3 = log errors only
ort.set_default_logger_severity(3)

sess_options = ort.SessionOptions()
sess_options.enable_mem_pattern = False

# Pin the UNet's free (dynamic) dimensions to static sizes so DirectML can
# compile optimized kernels. Batch dimensions are doubled because
# classifier-free guidance runs the conditional and unconditional passes
# together; height/width are divided by 8, the VAE's downsampling factor;
# 77 is the CLIP text encoder's sequence length.
sess_options.add_free_dimension_override_by_name("unet_sample_batch", batch_size * 2)
sess_options.add_free_dimension_override_by_name("unet_sample_channels", 4)
sess_options.add_free_dimension_override_by_name("unet_sample_height", image_size // 8)
sess_options.add_free_dimension_override_by_name("unet_sample_width", image_size // 8)
sess_options.add_free_dimension_override_by_name("unet_time_batch", batch_size)
sess_options.add_free_dimension_override_by_name("unet_hidden_batch", batch_size * 2)
sess_options.add_free_dimension_override_by_name("unet_hidden_sequence", 77)

pipeline = OnnxStableDiffusionPipeline.from_pretrained(
    model_dir, provider="DmlExecutionProvider", sess_options=sess_options
)

result = pipeline(
    [prompt] * batch_size,
    num_inference_steps=num_inference_steps,
    callback=None,
    height=image_size,
    width=image_size,
    guidance_scale=guidance_scale,
    generator=None,
)

output_path = "output.png"
result.images[0].save(output_path)

print(f"Generated {output_path}")
```
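The numpy==1.26.4 pin in the requirements exists because NumPy 2.x changed its type-promotion rules (NEP 50): multiplying a float32 array by a NumPy float64 scalar now yields float64 instead of float32, which can feed the ONNX UNet inputs of the wrong dtype. A minimal sketch of the difference (the latent shape and sigma value are illustrative, not taken from the pipeline):

```python
import numpy as np

# Diffusion latents are float32; schedulers often scale them by float64 scalars.
latents = np.ones((1, 4, 64, 64), dtype=np.float32)
sigma = np.float64(14.6)

scaled = latents * sigma
print(scaled.dtype)
# numpy 1.26.x (legacy value-based promotion): float32, the dtype the ONNX model expects
# numpy 2.x (NEP 50): float64, which causes an input-dtype mismatch at inference
```

If you do need NumPy 2.x, casting back with `scaled.astype(np.float32)` restores the expected dtype.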

Deploy This Model

Production-ready deployment in minutes.

Together.ai (fastest API): instant API access to this model. Production-ready inference API; start free and scale to millions of requests.

Replicate (easiest setup): one-click model deployment. Run models in the cloud through a simple API, with no DevOps required.

Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.