ST-Coder-14B-LoRA
by RnniaSnow
Code Model · 14B params · llama-factory · OTHER · New · Early-stage
21 downloads
Quick Summary
A LoRA adapter for Qwen/Qwen2.5-Coder-14B-Instruct, trained with LLaMA-Factory and specialized in generating IEC 61131-3 Structured Text (ST) code for PLC programming.
Device Compatibility
Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum Recommended: 14GB+ RAM
See the 4-bit loading sketch after the Quick Start for fitting within these memory budgets.
Code Examples
💻 Quick Start

```bash
pip install transformers peft torch accelerate
```

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
# 1. Load Base Model
base_model_path = "Qwen/Qwen2.5-Coder-14B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
    base_model_path,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(base_model_path, trust_remote_code=True)
# 2. Load LoRA Adapter
lora_path = "RnniaSnow/ST-Coder-14B-LoRA"
model = PeftModel.from_pretrained(model, lora_path)
# 3. Generate Code
prompt = "Write a Function Block (ST) for a PID controller with anti-windup mechanism."
messages = [
    {"role": "system", "content": "You are an expert IEC 61131-3 PLC programmer."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024,
    do_sample=True,   # enable sampling so temperature/top_p take effect
    temperature=0.2,  # low temperature for code precision
    top_p=0.9
)
# Strip the prompt tokens so only the newly generated completion is decoded
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
output_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(output_text)
```
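The laptop and "Minimum Recommended" figures in Device Compatibility are only realistic if the 14B base model is quantized: the weights alone are roughly 28 GB in 16-bit precision but closer to 9 GB in 4-bit. Below is a minimal 4-bit loading sketch using transformers' BitsAndBytesConfig; it assumes a CUDA GPU and the bitsandbytes package, which is not included in the install command above.

```python
# Minimal 4-bit loading sketch (assumption: CUDA GPU + `pip install bitsandbytes`)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model_path = "Qwen/Qwen2.5-Coder-14B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
    base_model_path,
    quantization_config=bnb_config,  # quantize the base weights to ~9 GB
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(base_model_path)

# The LoRA adapter is applied on top of the quantized base model, as in the Quick Start
model = PeftModel.from_pretrained(model, "RnniaSnow/ST-Coder-14B-LoRA")
```

Generation then proceeds exactly as in the Quick Start above.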
Deploy This Model
Production-ready deployment in minutes.
Together.ai: Instant API access to this model. Production-ready inference API. Start free, scale to millions.
Replicate: One-click model deployment. Run models in the cloud with a simple API. No DevOps required.
Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.
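Hosted providers and many serving stacks expect a single merged checkpoint rather than a base model plus a separate adapter. The sketch below merges the adapter into the base weights with peft's merge_and_unload; the output directory name is an illustrative choice, not part of this repo.

```python
# Merge the LoRA deltas into the base weights and save a standalone checkpoint.
# Merging in 16-bit needs roughly 28 GB of RAM for a 14B model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_path = "Qwen/Qwen2.5-Coder-14B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base_model_path, torch_dtype="auto")
model = PeftModel.from_pretrained(model, "RnniaSnow/ST-Coder-14B-LoRA")

merged = model.merge_and_unload()            # fold the LoRA weights into the base model
merged.save_pretrained("st-coder-14b-merged")  # illustrative output path

tokenizer = AutoTokenizer.from_pretrained(base_model_path)
tokenizer.save_pretrained("st-coder-14b-merged")
```

The merged directory can then be uploaded to a hub or pointed at directly by most serving frameworks.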
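If you prefer self-hosting over a managed API, vLLM can serve the base model and apply the LoRA adapter per request instead of merging. This is a sketch under the assumption that you have a GPU large enough for the 14B base model; the adapter name "st-coder" is arbitrary.

```python
# Self-hosted serving sketch with vLLM's LoRA support (assumption: vLLM installed, large GPU).
from huggingface_hub import snapshot_download
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

base = "Qwen/Qwen2.5-Coder-14B-Instruct"
lora_path = snapshot_download("RnniaSnow/ST-Coder-14B-LoRA")  # vLLM wants a local adapter path

# Build the chat prompt with the same template as the Quick Start
tokenizer = AutoTokenizer.from_pretrained(base)
messages = [
    {"role": "system", "content": "You are an expert IEC 61131-3 PLC programmer."},
    {"role": "user", "content": "Write a Function Block (ST) for a PID controller with anti-windup mechanism."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

llm = LLM(model=base, enable_lora=True)
outputs = llm.generate(
    [prompt],
    SamplingParams(temperature=0.2, top_p=0.9, max_tokens=1024),
    lora_request=LoRARequest("st-coder", 1, lora_path),  # adapter applied at request time
)
print(outputs[0].outputs[0].text)
```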