PromptBridge-0.6b-Alpha
license:apache-2.0
by retowyss
Language Model · 0.6B params
173 downloads · Early-stage
Edge AI: Mobile · Laptop · Server (2GB+ RAM)
Quick Summary
PromptBridge-0.6b-Alpha is a 0.6B-parameter language model for bidirectional prompt transformation in image generation: it expands terse, tag-style prompts into detailed descriptions, and compresses verbose prompts into a single sentence or keyword form.
Device Compatibility
Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 1GB+ RAM
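For the lower end of these budgets, the model can also be loaded in a reduced-memory, CPU-only configuration. The sketch below is one illustrative option (bfloat16 weights, no GPU), not a setup prescribed by the model card.

# Minimal sketch: low-memory CPU-only load (illustrative; not prescribed by the model card)
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "retowyss/PromptBridge-0.6b-Alpha"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,   # ~1.2GB of weights for 0.6B params, vs ~2.4GB in float32
    device_map="cpu",             # avoids the GPU listed for servers; inference is slower
)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)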
Code Examples
Usage (Python, transformers)
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "retowyss/PromptBridge-0.6b-Alpha"
model = AutoModelForCausalLM.from_pretrained(
model_name,
trust_remote_code=True,
torch_dtype=torch.bfloat16,
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
# Choose one of the supported transformation modes:
# mode = "Compress the prompt into one sentence."
# mode = "Compress the prompt into keyword format."
mode = "Expand the prompt."

# Expansion example: turn a terse tag-style prompt into a detailed description
messages = [
    {"role": "system", "content": mode},
    {"role": "user", "content": "woman, flowing red dress, standing, sunset beach"}
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(
**inputs,
max_new_tokens=512,
temperature=0.2,
top_p=0.9,
do_sample=True
)
response = tokenizer.decode(outputs[0][inputs['input_ids'].shape[1]:], skip_special_tokens=True)
print(response)
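The same pipeline handles compression by switching the system message to one of the compression modes. The sketch below reuses the model and tokenizer loaded above; the input prompt and generation parameters are illustrative, mirroring the expansion example rather than values specified by the model card.

# Compression example (illustrative input; reuses model and tokenizer from above)
messages = [
    {"role": "system", "content": "Compress the prompt into keyword format."},
    {"role": "user", "content": "A woman in a flowing red dress stands on a beach at sunset, waves lapping at her feet."}
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, temperature=0.2, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))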
Citation (BibTeX)
@misc{promptbridge_0.6b_alpha,
author = {Wyss, Reto},
title = {PromptBridge-0.6b-Alpha: Bidirectional Prompt Transformation for Image Generation},
year = {2026},
publisher = {HuggingFace},
howpublished = {\url{https://huggingface.co/retowyss/PromptBridge-0.6b-Alpha}}
}