doopoom-general-chat-agent-1B-hybrid-think

License: apache-2.0
Author: AmanPriyanshu
Type: Language Model
Parameters: 1B
Edge AI: Mobile, Laptop, Server (3GB+ RAM)
Status: Early-stage
Quick Summary

A 1B-parameter general-purpose chat agent model with a hybrid thinking mode and tool-calling support.

Device Compatibility

Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 1GB+ RAM
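The RAM figures above follow from simple parameter-count arithmetic: a 1B-parameter model stored in bfloat16 takes about 2 GB for the weights alone, before activations and runtime overhead. A rough sketch (the function name is illustrative, not part of any library):

```python
def model_memory_gb(params: int, bytes_per_param: int) -> float:
    """Estimate raw weight memory in GiB for a given parameter count."""
    return params * bytes_per_param / 1024**3

params = 1_000_000_000  # 1B parameters

# bfloat16 uses 2 bytes per parameter; int8 quantization uses 1.
print(f"bf16 weights: {model_memory_gb(params, 2):.2f} GB")  # ~1.86 GB
print(f"int8 weights: {model_memory_gb(params, 1):.2f} GB")  # ~0.93 GB
```

This is why the 3GB+ RAM guideline is plausible for bf16 inference, and why int8 quantization can bring the model within reach of lower-memory devices.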

Code Examples

Python (transformers)
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

repo_id = "AmanPriyanshu/doopoom-general-chat-agent-1B-hybrid-think"
subfolder = "epoch-2"  # or "epoch-1", "epoch-1.5"

tokenizer = AutoTokenizer.from_pretrained(repo_id, subfolder=subfolder)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, subfolder=subfolder, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant with tools:\n- get_weather(location: str)\n- calculate(expression: str)\n\nUse <tool_call> tags to call tools."},
    {"role": "user", "content": "What is the weather in Tokyo?"}
]
inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to(model.device)
# do_sample=True is required for temperature and top_p to take effect
output = model.generate(inputs, max_new_tokens=1024, do_sample=True, temperature=0.7, top_p=0.9)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=False))
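The system prompt above instructs the model to emit tool calls inside <tool_call> tags. A minimal sketch for extracting such calls from the decoded output, assuming the model emits a JSON payload of the form {"name": ..., "arguments": ...} inside each tag (the exact payload format is an assumption and should be verified against real model outputs):

```python
import re
import json

def extract_tool_calls(text: str) -> list:
    """Pull JSON payloads out of <tool_call>...</tool_call> spans.

    Assumes each span contains a JSON object such as
    {"name": "get_weather", "arguments": {"location": "Tokyo"}};
    malformed payloads are skipped rather than raised.
    """
    calls = []
    for payload in re.findall(r"<tool_call>(.*?)</tool_call>", text, re.DOTALL):
        try:
            calls.append(json.loads(payload))
        except json.JSONDecodeError:
            pass  # ignore spans that are not valid JSON
    return calls

sample = '<tool_call>{"name": "get_weather", "arguments": {"location": "Tokyo"}}</tool_call>'
print(extract_tool_calls(sample))
# → [{'name': 'get_weather', 'arguments': {'location': 'Tokyo'}}]
```

Parsed calls can then be dispatched to real implementations of get_weather or calculate, with the results appended to the conversation as tool messages for a follow-up generation.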
