Nemotron2Gemma-AURORA-LoRA-27B-IT-0p95
by win10 · llama · Other · 27B params · New · 0 downloads · Early-stage
Edge AI: Mobile · Laptop · Server · 61GB+ RAM
Quick Summary
A LoRA adapter for Gemma-3-27B-IT, distilled from Llama-3.1-Nemotron-70B-Instruct via AURORA-mode SVD (energy threshold 0.95); see the build command under Reproducibility below.
Device Compatibility
- Mobile: 4-6GB RAM
- Laptop: 16GB RAM
- Server: GPU
- Minimum recommended: 26GB+ RAM
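The tiers above roughly track the weight-only memory of a 27B-parameter model at common precisions; activations and KV cache add overhead on top. A back-of-envelope sketch (mapping the tiers to quantization widths is our assumption, not stated by the card):

```python
PARAMS = 27e9  # 27B parameters

# Approximate weight footprint at common precisions (weights only;
# KV cache and activations are extra).
for name, bytes_per_param in [("bf16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"{name:>5}: ~{PARAMS * bytes_per_param / 1e9:.0f} GB")

# bf16 ≈ 54 GB -> consistent with the 61GB+ full-precision figure, with headroom
# int8 ≈ 27 GB -> close to the 26GB+ minimum recommended
# int4 ≈ 14 GB -> fits the 16GB laptop tier
```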
Training Data Analysis
🟡 Average (4.3/10) — the mean of the three dataset scores below: (2.5 + 5 + 5.5) / 3 ≈ 4.3
Quality assessment of the training datasets researched for Nemotron2Gemma-AURORA-LoRA-27B-IT-0p95.
Specialized For: general · science · multilingual · reasoning
Training Datasets (3)
Common Crawl — 🔴 2.5/10 (general, science)

Key Strengths
- Scale and Accessibility: At 9.5+ petabytes, Common Crawl provides unprecedented scale for training d...
- Diversity: The dataset captures billions of web pages across multiple domains and content types, ena...
- Comprehensive Coverage: Despite limitations, Common Crawl attempts to represent the broader web acro...

Considerations
- Biased Coverage: The crawling process prioritizes frequently linked domains, making content from dig...
- Large-Scale Problematic Content: Contains significant amounts of hate speech, pornography, violent c...
Wikipedia — 🟡 5/10 (science, multilingual)

Key Strengths
- High-Quality Content: Wikipedia articles are subject to community review, fact-checking, and citatio...
- Multilingual Coverage: Available in 300+ languages, enabling training of models that understand and ...
- Structured Knowledge: Articles follow consistent formatting with clear sections, allowing models to ...

Considerations
- Language Inequality: Low-resource language editions have significantly lower quality, fewer articles...
- Biased Coverage: Reflects biases in contributor demographics; topics related to Western culture and ...
arXiv — 🟡 5.5/10 (science, reasoning)

Key Strengths
- Scientific Authority: Preprints from an established research repository (moderated, though not formally peer-reviewed)
- Domain-Specific: Specialized vocabulary and concepts
- Mathematical Content: Includes complex equations and notation

Considerations
- Specialized: Primarily technical and mathematical content
- English-Heavy: Predominantly English-language papers
Code Examples
Quickstart (Transformers + PEFT)

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Base checkpoint and the LoRA adapter trained on top of it.
base_id = "Changgil/google-gemma-3-27b-it-text"
adapter_id = "win10/Nemotron2Gemma-AURORA-LoRA-27B-IT-0p95"

tokenizer = AutoTokenizer.from_pretrained(base_id, use_fast=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Attach the adapter weights to the frozen base model.
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain knowledge distillation in 5 bullet points."},
]

# Render the chat into the model's prompt format and tokenize.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_tensors="pt",
)

with torch.no_grad():
    out = model.generate(
        inputs.to(model.device),
        max_new_tokens=512,
        do_sample=False,
    )
print(tokenizer.decode(out[0], skip_special_tokens=True))
```
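The quickstart decodes greedily (`do_sample=False`). For more varied output, the standard sampling knobs of `generate()` apply; a minimal variant with illustrative values:

```python
# Sampled decoding instead of greedy; temperature/top_p values are illustrative.
out = model.generate(
    inputs.to(model.device),
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
```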
Optional: Merge the adapter into the base weights

```python
# Fold the LoRA weights into the base model so it can be served without PEFT
# (continues from the quickstart above).
merged = model.merge_and_unload()
merged.save_pretrained("./merged_model", safe_serialization=True)
tokenizer.save_pretrained("./merged_model")
```
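Once merged, the checkpoint loads like any plain Transformers model, with no PEFT dependency at inference time. A minimal sketch:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the merged checkpoint saved above; no adapter handling needed.
model = AutoModelForCausalLM.from_pretrained(
    "./merged_model",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("./merged_model")
```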
Reproducibility (build command)

```bash
python universal_distill_v4_1_0_aurora_svd_innovations.py \
  --teacher E:\text-generation-webui-1.14\user_data\models\Llama-3.1-Nemotron-70B-Instruct-HF \
  --student E:\text-generation-webui-1.14\user_data\models\google-gemma-3-27b-it-text \
  --output ./Llama-3.1-Nemotron-70B-Instruct-HF-gemma-3-27b-it-text-lora-adaptive \
  --svd-mode aurora \
  --energy-threshold 0.95 \
  --min-rank 256 \
  --max-rank 5376 \
  --interp-mode lsq \
  --svd-rand-iter 2 \
  --svd-rand-oversamples 8 \
  --svd-aurora-steps 100 \
  --svd-aurora-order 2 \
  --calib-format alpaca \
  --calib-alpaca-template classic \
  --calib-max-samples 128 \
  --calib-max-length 65536 \
  --calib-batch-size 2 \
  --calib-save .\calib_stats_Yi-70B-200k_alpaca-taiwan-dataset.safetensors \
  --calib-mode rms \
  --include "self_attn|mlp"
```
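The distillation script itself is not published here, so the following is only a sketch of the mechanism these flags suggest: per weight matrix, pick the smallest SVD rank whose singular values capture the requested energy fraction (`--energy-threshold 0.95`), clamp it to `--min-rank 256` / `--max-rank 5376`, and split the truncated SVD into LoRA A/B factors. The AURORA-specific refinement steps (`--svd-aurora-steps`, `--svd-aurora-order`) and the calibration-weighted variants are not reproduced.

```python
import torch

def select_rank(delta_w: torch.Tensor, energy_threshold: float = 0.95,
                min_rank: int = 256, max_rank: int = 5376) -> int:
    """Smallest rank whose squared singular values reach the energy threshold."""
    s = torch.linalg.svdvals(delta_w.float())
    energy = torch.cumsum(s**2, dim=0) / torch.sum(s**2)
    rank = int(torch.searchsorted(energy, energy_threshold).item()) + 1
    return max(min_rank, min(rank, max_rank, s.numel()))

def lora_factors(delta_w: torch.Tensor, rank: int):
    """Split a truncated SVD of delta_w into LoRA factors: delta_w ≈ B @ A."""
    U, S, Vh = torch.linalg.svd(delta_w.float(), full_matrices=False)
    root = torch.sqrt(S[:rank])
    B = U[:, :rank] * root           # (out_features, rank)
    A = root[:, None] * Vh[:rank]    # (rank, in_features)
    return A, B
```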
Deploy This Model
Production-ready deployment in minutes.

Together.ai — instant API access to this model. Production-ready inference API; start free, scale to millions.
Replicate — one-click model deployment. Run models in the cloud with a simple API; no DevOps required.

Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.