# KannadaGPT-0.6B

Author: Mithun501 · License: Apache-2.0 · 0.6B parameters
## Quick Summary

KannadaGPT-0.6B is a Kannada language model distributed as a LoRA adapter on top of Qwen/Qwen3-0.6B.
## Device Compatibility

| Device | Recommended |
|--------|-------------|
| Mobile | 4-6GB RAM   |
| Laptop | 16GB RAM    |
| Server | GPU         |

Minimum: 1GB+ RAM

## Code Examples

### Installation

```bash
pip install transformers peft torch accelerate
```
### Inference (Transformers + PEFT)

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-0.6B",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Mithun501/KannadaGPT-0.6B")

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "Mithun501/KannadaGPT-0.6B")

# Generate text
messages = [
    {"role": "user", "content": "ಭಾರತದ ರಾಜಧಾನಿ ಯಾವುದು?"}  # "What is the capital of India?"
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False
)

inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.7, top_p=0.8)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
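To make the `apply_chat_template` call above less opaque, the sketch below approximates the ChatML-style prompt that Qwen-family templates produce. This is an illustration only, not the model's actual template (which ships as `chat_template.jinja` in this repo); the empty `<think>` block shown for `enable_thinking=False` is an assumption based on Qwen3's documented non-thinking mode.

```python
# Illustrative approximation of the prompt string built by
# apply_chat_template(..., add_generation_prompt=True, enable_thinking=False).
# The authoritative template is chat_template.jinja; this only shows the shape.

def format_chat(messages, add_generation_prompt=True):
    parts = []
    for m in messages:
        # Each turn is wrapped in ChatML-style <|im_start|>/<|im_end|> markers.
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    if add_generation_prompt:
        # enable_thinking=False corresponds to an empty <think> block,
        # prompting the model to answer directly without a reasoning trace.
        parts.append("<|im_start|>assistant\n<think>\n\n</think>\n\n")
    return "".join(parts)

prompt = format_chat([{"role": "user", "content": "ಭಾರತದ ರಾಜಧಾನಿ ಯಾವುದು?"}])
print(prompt)
```

Printing the real output of `tokenizer.apply_chat_template(..., tokenize=False)` is the reliable way to see the exact prompt your tokenizer builds.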
### Project Structure

```text
KannadaGPT-0.6B/
├── adapter_config.json        # LoRA configuration
├── adapter_model.safetensors  # LoRA weights (38MB)
├── tokenizer.json             # Tokenizer
├── tokenizer_config.json      # Tokenizer config
├── vocab.json                 # Vocabulary
├── merges.txt                 # BPE merges
├── special_tokens_map.json    # Special tokens
├── added_tokens.json          # Added tokens
├── chat_template.jinja        # Chat template
├── KannadaGPT_Inference.ipynb # Colab inference notebook
├── KannadaGPT_Merge.ipynb     # Colab merge notebook
└── README.md                  # This file
```
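The `adapter_config.json` in the tree above records the LoRA hyperparameters PEFT needs to reattach the adapter to the base model. An illustrative fragment is shown below; the field names follow PEFT's standard `LoraConfig` serialization, but the values here are placeholders, not this model's actual settings:

```json
{
  "peft_type": "LORA",
  "base_model_name_or_path": "Qwen/Qwen3-0.6B",
  "r": 16,
  "lora_alpha": 32,
  "lora_dropout": 0.05,
  "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"]
}
```

Inspecting this file tells you the adapter's rank (`r`) and which attention/MLP projections were fine-tuned.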
### License

Apache-2.0

### Citation

```bibtex
@misc{kannadagpt-0.6b,
  author = {Mithun501},
  title = {KannadaGPT-0.6B: A Kannada Language Model},
  year = {2025},
  publisher = {GitHub},
  url = {https://github.com/mithun50/KannadaGPT-0.6B}
}
```
