fabric-llm-finetune-bitnet
by qvac
License: Apache-2.0
Language Model
67 downloads
Early-stage
Edge AI: Mobile, Laptop, Server
Quick Summary
A 1.58-bit BitNet base language model (GGUF, TQ1_0 quantization) paired with a biomedical fine-tuned LoRA adapter, intended for efficient edge inference via llama.cpp.
Code Examples
Step 2: Download Base Model & Adapter

```bash
# Create directories for the base model and the adapter
mkdir -p models adapters

# Base model (1.58-bit BitNet, TQ1_0 quantization)
wget -P models https://huggingface.co/qvac/fabric-llm-finetune-bitnet/resolve/main/1bitLLM-bitnet_b1_58-xl-tq1_0.gguf

# Biomedical LoRA adapter trained against that quantization
wget -P adapters https://huggingface.co/qvac/fabric-llm-finetune-bitnet/resolve/main/tq1_0-biomed-trained-adapter.gguf
```

Note: Always pair an adapter with the quantization it was trained on. The adapters are stored in FP16, but they must be applied to the exact base model they were trained with.

Step 3: Run Inference with Adapter
```bash
# Interactive chat mode
# (models/base.gguf and adapters/adapter.gguf stand in for the files
# downloaded in Step 2)
./bin/llama-cli \
  -m models/base.gguf \
  --lora adapters/adapter.gguf \
  -ngl 999 \
  -c 2048 \
  --temp 0.7 \
  -i \
  -p "Q: Does vitamin D supplementation prevent fractures?\nA:"
```
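The pairing requirement from Step 2 can be sanity-checked straight from the filenames before a long run. This is only a string-parsing sketch: the tag-extraction logic is an assumption based on this repo's naming scheme (quantization tag at the end of the base name, at the front of the adapter name), not an official tool.

```shell
# Hypothetical helper: confirm the base model and the adapter share the
# same quantization tag (e.g. "tq1_0") before running inference.
base="1bitLLM-bitnet_b1_58-xl-tq1_0.gguf"
adapter="tq1_0-biomed-trained-adapter.gguf"

# Last hyphen-separated field of the base name (minus the extension),
# first hyphen-separated field of the adapter name.
base_tag="${base##*-}"; base_tag="${base_tag%.gguf}"
adapter_tag="${adapter%%-*}"

if [ "$base_tag" = "$adapter_tag" ]; then
  echo "OK: both files use quantization $base_tag"
else
  echo "MISMATCH: base=$base_tag adapter=$adapter_tag" >&2
fi
```

The same check generalizes to any base/adapter pair that follows this naming convention.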
```bash
# Single prompt mode
./bin/llama-cli \
  -m models/base.gguf \
  --lora adapters/adapter.gguf \
  -ngl 999 \
  -p "Explain the mechanism of action for beta-blockers in treating hypertension."
```

Custom Temperature & Sampling
```bash
# --temp 0.3           lower = more focused (good for medical answers)
# --top-p 0.9          nucleus sampling
# --top-k 40           top-k sampling
# --repeat-penalty 1.1 discourage repetition
# -n 512               max tokens to generate
./bin/llama-cli \
  -m models/base.gguf \
  --lora adapters/adapter.gguf \
  -ngl 999 \
  --temp 0.3 \
  --top-p 0.9 \
  --top-k 40 \
  --repeat-penalty 1.1 \
  -n 512 \
  -p "Your prompt"
```

Batch Processing
```bash
# Create a prompts file
cat > prompts.txt << 'EOF'
Q: Does vitamin D supplementation prevent fractures?
Q: Is aspirin effective for primary prevention of cardiovascular disease?
Q: Do statins reduce mortality in patients with heart failure?
EOF

# Process every prompt in the file
while IFS= read -r prompt; do
  echo "=== Processing: $prompt ==="
  ./bin/llama-cli \
    -m models/base.gguf \
    --lora adapters/adapter.gguf \
    -ngl 999 \
    --temp 0.4 \
    -p "$prompt\nA:"
  echo ""
done < prompts.txt
```

Mobile-Specific Flags
```bash
# -ngl 99   partial GPU offload
# -c 512    smaller context window
# -b 128    smaller logical batch
# -ub 128   smaller physical (micro-)batch
# -fa off   disable flash attention (Vulkan)
./bin/llama-cli \
  -m model.gguf \
  --lora adapter.gguf \
  -ngl 99 \
  -c 512 \
  -b 128 \
  -ub 128 \
  -fa off
```

Platform-Specific Guides
```bash
# Use a smaller batch size and disable flash attention
./bin/llama-cli -m model.gguf --lora adapter.gguf -ngl 99 -c 512 -b 128 -ub 128 -fa off
```

```bash
# Reduce the context size or use a smaller model
./bin/llama-cli -m model.gguf --lora adapter.gguf -ngl 50 -c 512
```

```bash
# Offload fewer layers to the GPU
./bin/llama-cli -m model.gguf --lora adapter.gguf -ngl 20
```

```bash
# Verify the adapter file exists and matches the model architecture
ls -lh adapters/
./bin/llama-cli -m model.gguf --lora adapter.gguf --verbose
```
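If a download fails to load at all, one quick local check can distinguish a truncated file or an HTML error page from a real model: every valid GGUF file begins with the 4-byte magic `GGUF`. The `check_gguf` helper below is illustrative, not part of llama.cpp.

```shell
# A valid GGUF file starts with the 4-byte magic "GGUF".
check_gguf() {
  if [ "$(head -c 4 "$1")" = "GGUF" ]; then
    echo "$1: looks like a GGUF file"
  else
    echo "$1: NOT a GGUF file (truncated or wrong download?)" >&2
    return 1
  fi
}

# Demo with a throwaway file; real use: check_gguf models/base.gguf
printf 'GGUF\0\0\0\0' > demo.gguf
check_gguf demo.gguf
```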
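As a closing tip, the loop from the Batch Processing section can be dry-run without loading any model by substituting a stub for llama-cli, which makes it easy to debug the file-handling logic on any machine. The `infer` function below is purely a placeholder, not part of llama.cpp.

```shell
# Stand-in for llama-cli so the batch loop's plumbing can be tested
# quickly, without a model or a GPU.
infer() { printf 'A: (model output for: %s)\n' "$1"; }

cat > prompts.txt << 'EOF'
Q: Does vitamin D supplementation prevent fractures?
Q: Is aspirin effective for primary prevention of cardiovascular disease?
EOF

i=0
while IFS= read -r prompt; do
  i=$((i+1))
  infer "$prompt" > "answer_$i.txt"   # one output file per prompt
done < prompts.txt
echo "wrote $i answers"
```

Swapping `infer` back for the real `./bin/llama-cli` invocation restores the original behavior, with one answer file per prompt.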