RPBizkit-v4-12B_Lorablated
by RicardoEstep
Language Model · OTHER · 12B params
New · Early-stage · 35 downloads
Edge AI: Mobile / Laptop / Server (27GB+ RAM)
Quick Summary
A 12B-parameter language model: the RPBizkit-v4 base merged with an abliterated (refusal-reducing) LoRA and paired with a ChatML tokenizer, produced by the script below.
Device Compatibility
Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 12GB+ RAM
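Rough arithmetic behind these tiers (an illustrative sketch, not official figures): weight-only memory is parameter count times bytes per parameter, so the precision you load at largely decides which tier a 12B model fits.

```python
# Back-of-envelope weight-only memory for a 12B-parameter model.
# Illustrative only: real usage also needs KV cache and activations.
PARAMS = 12e9

def weights_gib(bytes_per_param: float) -> float:
    """Approximate weight memory in GiB at a given precision."""
    return PARAMS * bytes_per_param / 1024**3

print(f"bf16  (2 B/param):   {weights_gib(2.0):.1f} GiB")  # server/GPU tier
print(f"int8  (1 B/param):   {weights_gib(1.0):.1f} GiB")  # fits a 16GB laptop
print(f"4-bit (0.5 B/param): {weights_gib(0.5):.1f} GiB")  # near the mobile tier
```

At bf16 this lands around 22 GiB, which is consistent with the 27GB+ RAM figure above once cache and activation overhead are added.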
Code Examples
Python script used (requires torch, transformers, and peft):
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
# --- CONFIGURATION ---
base_model_path = "RicardoEstep/RPBizkit-v4-12B"
lora_path = "nbeerbower/Mistral-Nemo-12B-abliterated-LORA"
tokenizer_path = "yamatazen/EtherealAurora-12B"
output_path = "./RPBizkit-v4-12B-Abliterated-ChatML"
print("Loading base model...")
# We load without device_map="auto" initially to avoid naming issues with accelerate
model = AutoModelForCausalLM.from_pretrained(
    base_model_path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True
)
# 1. FIX THE VOCAB SIZE (The 131075 -> 131072 issue)
print(f"Resizing from {model.get_input_embeddings().weight.shape[0]} to 131072...")
model.resize_token_embeddings(131072)
# 2. APPLY THE LORA MANUALLY
print("Applying LoRA...")
# We use from_pretrained but specify the exact model to avoid double-nesting
model = PeftModel.from_pretrained(
    model,
    lora_path,
    adapter_name="default"
)
# 3. MERGE THE WEIGHTS
print("Merging weights into base...")
model = model.merge_and_unload()
# 4. FIX THE TOKENIZER
print("Finalizing tokenizer...")
tokenizer = AutoTokenizer.from_pretrained(tokenizer_path, trust_remote_code=True)
# 5. SAVE
model.save_pretrained(output_path)
tokenizer.save_pretrained(output_path)
print("Process Complete!")
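The resize_token_embeddings call is the crux of the script: the base checkpoint carries 131075 embedding rows while the LoRA expects 131072. The same fix can be demonstrated on a throwaway GPT-2-style model (a sketch only; the tiny config below is illustrative and unrelated to the 12B checkpoint):

```python
from transformers import GPT2Config, GPT2LMHeadModel

# Tiny throwaway model reproducing the "3 extra rows" vocab mismatch.
cfg = GPT2Config(vocab_size=131075, n_embd=8, n_layer=1, n_head=2,
                 n_positions=16)
model = GPT2LMHeadModel(cfg)
assert model.get_input_embeddings().weight.shape[0] == 131075

# Truncate the embedding matrix (and the tied LM head) to 131072 rows.
model.resize_token_embeddings(131072)
print(model.get_input_embeddings().weight.shape[0])  # 131072
```

Doing the resize before applying the LoRA is what lets PeftModel.from_pretrained load the adapter without a shape mismatch.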