sdxl-lora-juguete-v1
by Juanpeg1729
Image Model · 1.0B params · 28 downloads
Early-stage
Edge AI: Mobile · Laptop · Server (3GB+ RAM)
Quick Summary
A LoRA fine-tune for image generation, loaded on top of the Stable Diffusion XL base model (`stabilityai/stable-diffusion-xl-base-1.0`).
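The example prompt in the inference code below suggests this LoRA uses a DreamBooth-style trigger phrase, `sks bear toy`, which prompts should include. A tiny hypothetical helper sketching that convention (`build_prompt` and the exact trigger are assumptions inferred from the example prompt, not documented by the author):

```python
TRIGGER = "sks bear toy"  # assumed trigger phrase, taken from the example prompt

def build_prompt(scene: str, style: str = "8k, highly detailed") -> str:
    """Compose a prompt that always includes the LoRA trigger phrase."""
    return f"photo of {TRIGGER} {scene}, {style}"

print(build_prompt("in a cyberpunk city", "neon lights, 8k"))
# → photo of sks bear toy in a cyberpunk city, neon lights, 8k
```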
Device Compatibility
Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 1GB+ RAM
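The tiers above assume the SDXL pipeline runs in fp16 on a GPU; on smaller devices it can fall back to CPU. A minimal sketch of that check, assuming PyTorch is installed (`pick_device` is a hypothetical helper, not part of this model):

```python
import torch

def pick_device(min_vram_gb: float = 6.0) -> str:
    """Return "cuda" only if a GPU with at least min_vram_gb of memory exists."""
    if torch.cuda.is_available():
        total_gb = torch.cuda.get_device_properties(0).total_memory / 1e9
        if total_gb >= min_vram_gb:
            return "cuda"
    return "cpu"  # fallback: slower, but fits the RAM tiers above

device = pick_device()
print(f"Running on: {device}")
```

With this, `pipe.to(device)` replaces the hard-coded `pipe.to("cuda")`; on CPU, the fp16 `torch_dtype` is typically swapped for fp32.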
Code Examples
🚀 Inference Code (Python / PyTorch)
import torch
from diffusers import DiffusionPipeline, AutoencoderKL

# 0. Safety check (GPU required)
if not torch.cuda.is_available():
    raise RuntimeError("❌ ERROR: No GPU detected. In Colab, go to 'Runtime' > 'Change runtime type' > 'T4 GPU'.")

# 1. Load the base model
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
)
pipe.to("cuda")

# 2. Load the LoRA
pipe.load_lora_weights("Juanpeg1729/sdxl-lora-juguete-v1")

# 3. Generate an image
print("Generating...")
prompt = "photo of sks bear toy in a cyberpunk city, neon lights, 8k"
image = pipe(prompt, num_inference_steps=30).images[0]

# 4. Save and display
image.save("resultado.png")
image
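Each run of the pipeline above samples fresh noise, so outputs differ between runs. Passing a seeded `torch.Generator` via the standard `generator=` argument of the pipeline call makes results reproducible; a minimal sketch (the commented call assumes the `pipe` and `prompt` objects from the code above):

```python
import torch

def make_generator(seed: int, device: str = "cpu") -> torch.Generator:
    """Seeded noise source: the same seed reproduces the same sampling path."""
    return torch.Generator(device).manual_seed(seed)

# Same seed -> identical noise draws (hence identical images for a fixed prompt)
g1, g2 = make_generator(42), make_generator(42)
assert torch.equal(torch.randn(4, generator=g1), torch.randn(4, generator=g2))

# Hypothetical use with the pipeline loaded above:
# image = pipe(prompt, num_inference_steps=30,
#              generator=make_generator(42, "cuda")).images[0]
```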