PathoPreter-4B-SNV-Pathogen-ClinVar-gnomAD
License: apache-2.0
Author: YADAV0206
Type: Language Model
Parameters: 4B
Downloads: 7
Status: Early-stage
Edge AI: mobile, laptop, or server (9GB+ RAM)
Quick Summary
A LoRA adapter for the Qwen3-4B instruct base model that classifies single-nucleotide variants (SNVs) as Pathogenic or Benign, given a gene, variant, genomic location, and gnomAD allele frequency as context (ClinVar/gnomAD, per the model name).
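The adapter consumes a fixed instruction-style prompt built from variant fields (the exact template appears in the Code Examples section). A minimal sketch of assembling it, where the helper name `build_prompt` is hypothetical and the field values come from the sample below:

```python
def build_prompt(gene: str, variant: str, location: str, gnomad_af: float) -> str:
    """Assemble the instruction prompt this adapter was trained on."""
    return (
        "Below is a biological context regarding a genetic variant. "
        "Determine if it is Pathogenic or Benign.\n\n"
        "### Input:\n"
        f"Gene: {gene}\n"
        f"Variant: {variant}\n"
        f"Location: {location}\n"
        f"gnomAD Frequency: {gnomad_af:.6f}\n\n"
        "### Response:"
    )

prompt = build_prompt(
    "CDK8",
    "NM_001260.3(CDK8):c.563C>G (p.Ala188Gly)",
    "chr13:26385259 C>G",
    0.0,
)
print(prompt)
```

The model is expected to complete the text after `### Response:` with the verdict.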
Device Compatibility
Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 4GB+ RAM
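These tiers roughly track weight precision. A back-of-envelope estimate, assuming 4B parameters and counting weights only (KV cache, activations, and runtime overhead add more on top):

```python
# Approximate weight memory for a 4B-parameter model at different precisions.
PARAMS = 4e9

def weight_gb(bits_per_param: float) -> float:
    """Weight memory in GB for the given bits per parameter."""
    return PARAMS * bits_per_param / 8 / 1e9

print(f"fp16 : {weight_gb(16):.1f} GB")  # ~8 GB -> server / 16GB laptop
print(f"8-bit: {weight_gb(8):.1f} GB")   # ~4 GB
print(f"4-bit: {weight_gb(4):.1f} GB")   # ~2 GB -> fits the 4-6GB mobile tier
```

This is why the code example below loads the base model in 4-bit.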
Code Examples
!pip install transformers accelerate peft bitsandbytes #bitsandbytes-0.49.1
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from peft import PeftModel
base_model = "unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit" # same base your adapter was trained on
adapter_repo = "YADAV0206/PathoPreter-4B-SNV-Pathogen-ClinVar-gnomAD"
# Load tokenizer from your repo (it contains added tokens + merges)
tok = AutoTokenizer.from_pretrained(adapter_repo)
# Load base model in 4-bit to save RAM
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    load_in_4bit=True,
    device_map="auto",
)
# Load LoRA adapter weights
model = PeftModel.from_pretrained(model, adapter_repo)
# The model is already dispatched across devices above, so don't pass device_map again
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tok,
)
# This sample should output "Pathogenic"
sample = {
    "text": "Below is a biological context regarding a genetic variant. Determine if it is Pathogenic or Benign.\n\n### Input:\nGene: CDK8\nVariant: NM_001260.3(CDK8):c.563C>G (p.Ala188Gly)\nLocation: chr13:26385259 C>G\ngnomAD Frequency: 0.000000\n\n### Response:",
    "variant_id": "NM_001260.3(CDK8):c.563C>G (p.Ala188Gly)",
    "join_key": "NM_001260.3(CDK8):c.563C>G (p.Ala188Gly)",
    "Name": "NM_001260.3(CDK8):c.563C>G (p.Ala188Gly)",
    "Assembly": "GRCh38",
    "chrom": "13",
    "pos": "26385259",
    "ref": "C",
    "alt": "G",
    "gnomad_af": 0,
}
prompt = sample["text"]
print(prompt)
# Expected output: Pathogenic
out = pipe(
    prompt,
    max_new_tokens=50,
    do_sample=True,
    temperature=0.2,
)[0]["generated_text"]
print("\n------ OUTPUT ------")
print(out)
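The pipeline returns free-form text rather than a class label. A minimal post-processing sketch (the helper name `extract_label` is hypothetical) that pulls the verdict out of a completion such as `out` above:

```python
def extract_label(generated_text: str) -> str:
    """Extract the Pathogenic/Benign verdict from a model completion.

    The prompt ends with '### Response:', so we look only at the text
    after it and scan for either label, case-insensitively.
    """
    response = generated_text.split("### Response:")[-1].lower()
    # Check "pathogenic" first in case the model emits a longer phrase
    if "pathogenic" in response:
        return "Pathogenic"
    if "benign" in response:
        return "Benign"
    return "Unknown"

print(extract_label("...### Response:\nPathogenic"))  # Pathogenic
```

For the CDK8 sample above, this should yield "Pathogenic"; anything the parser cannot match is reported as "Unknown" rather than silently guessed.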