Solar-Open-69B-REAP
by Akicou
Language Model · 69B params · New · 46 downloads · Early-stage
Edge AI: Mobile · Laptop · Server (155GB+ RAM)
Quick Summary
Solar-Open-69B-REAP is a 69B-parameter language model by Akicou, distributed as a quantized GGUF for local inference with llama.cpp.
Device Compatibility
- Mobile: 4-6GB RAM
- Laptop: 16GB RAM
- Server: GPU
Minimum recommended: 65GB+ RAM
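The RAM figures above roughly track quantized model size. A back-of-envelope estimate is parameter count times effective bits per weight, plus some runtime overhead; the bits-per-weight and overhead values below are approximations for illustration, not official figures for this model:

```python
def gguf_size_gb(n_params_b: float, bits_per_weight: float,
                 overhead: float = 1.1) -> float:
    """Approximate in-memory size of a quantized model, in GB.

    n_params_b: parameter count in billions
    bits_per_weight: effective bits per weight for the quant (approximation)
    overhead: multiplier for KV cache / runtime buffers (assumption)
    """
    bytes_total = n_params_b * 1e9 * bits_per_weight / 8
    return bytes_total * overhead / 1e9

# Q4_K_M averages roughly 4.8 bits/weight (approximation)
print(round(gguf_size_gb(69, 4.8), 1))  # → 45.5
# FP16, for comparison:
print(round(gguf_size_gb(69, 16), 1))   # → 151.8
```

Under these assumptions, the Q4_K_M quant of a 69B model lands in the tens of gigabytes, consistent with the 65GB+ minimum recommendation above.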
Code Examples
1. Pull the latest fixed template from this HF repo (Python, transformers + llama-cpp-python):

```python
from llama_cpp import Llama
from transformers import AutoTokenizer

# 1. Pull the latest fixed chat template from this HF repo
repo_id = "Akicou/Solar-Open-69B-REAP"
tokenizer = AutoTokenizer.from_pretrained(repo_id)

# 2. Initialize the Llama model
llm = Llama(
    model_path="./Solar-Open-69B-REAP.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,  # -1 offloads all layers to the GPU
)

# 3. Render the prompt with the fixed template, then generate.
#    (create_chat_completion has no chat_template parameter, so the
#    repo's template is applied via the HF tokenizer instead.)
messages = [
    {"role": "system", "content": "You are a concise and helpful assistant."},
    {"role": "user", "content": "Explain the benefits of MoE pruning."},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
response = llm.create_completion(prompt, max_tokens=512, temperature=0.7)
print(response["choices"][0]["text"])
```
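For intuition, `apply_chat_template` renders the message list through the Jinja chat template shipped with the tokenizer. A minimal hand-rolled equivalent looks like the sketch below; the `<|im_start|>`/`<|im_end|>` tags are illustrative ChatML-style markers, not this model's actual template:

```python
def render_chat(messages, add_generation_prompt=True):
    """Render a message list into a single prompt string.

    ChatML-style tags are used purely for illustration; the real
    template ships with the tokenizer in the HF repo.
    """
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    if add_generation_prompt:
        # Open the assistant turn so the model continues from here
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are a concise and helpful assistant."},
    {"role": "user", "content": "Explain the benefits of MoE pruning."},
]
prompt = render_chat(messages)
```

Using the repo's own template rather than a hand-rolled one matters because the model was trained on that exact turn formatting; mismatched tags degrade output quality.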