QwQ-Buddy-32B-Alpha

by FINGU-AI
Language Model · 32B params · MIT license
New · 2 downloads
Early-stage · Edge AI targets: Mobile, Laptop, Server (72GB+ RAM)
Quick Summary

QwQ-Buddy-32B-Alpha is a 32B-parameter language model created by merging two high-performing models.
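The card does not name the two source models or the merge method. As a general illustration of the technique, here is a minimal linear weight-merge sketch in plain PyTorch/transformers; the checkpoint names and blend ratio are placeholders, not the actual recipe.

import torch
from transformers import AutoModelForCausalLM

# Placeholder checkpoints: a linear merge requires two models with
# identical architectures (e.g., two finetunes of the same 32B base).
model_a = AutoModelForCausalLM.from_pretrained("org/model-a", torch_dtype=torch.bfloat16)
model_b = AutoModelForCausalLM.from_pretrained("org/model-b", torch_dtype=torch.bfloat16)

alpha = 0.5  # blend ratio (assumed for illustration)
state_b = model_b.state_dict()
merged = {
    name: alpha * tensor + (1.0 - alpha) * state_b[name]
    for name, tensor in model_a.state_dict().items()
}

model_a.load_state_dict(merged)
model_a.save_pretrained("merged-32b-sketch")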

Device Compatibility

Mobile
4-6GB RAM
Laptop
16GB RAM
Server
GPU
Minimum Recommended
30GB+ RAM
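These figures follow from simple arithmetic on the parameter count. A back-of-the-envelope sketch (the precision-to-bytes mapping is standard; runtime overhead beyond weights is not included):

PARAMS = 32e9  # 32B parameters

# Weight footprint alone; KV cache and activations add more at runtime.
for label, bytes_per_param in [("bf16/fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    gb = PARAMS * bytes_per_param / 1e9
    print(f"{label}: ~{gb:.0f} GB of weights")

At bf16 the weights alone are ~64GB (consistent with the 72GB+ server figure above), int8 lands near the 30GB+ minimum, and 4-bit quantization (~16GB) is what would make a 16GB laptop plausible.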

Code Examples

How to Use (Python, transformers)

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "FINGU-AI/QwQ-Buddy-32B-Alpha"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# bf16 weights alone are ~64GB for 32B params; device_map="auto" lets
# accelerate place layers across the available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Write a Python function to compute Fibonacci numbers:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
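For hardware below the bf16 footprint, 4-bit quantized loading is a common route. A sketch using bitsandbytes; the settings are typical defaults, not values published for this model:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization brings 32B weights to roughly 16-20GB.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_name = "FINGU-AI/QwQ-Buddy-32B-Alpha"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=quant_config,
    device_map="auto",
)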

Deploy This Model

Production-ready deployment in minutes

Together.ai

Instant API access to this model

Fastest API

Production-ready inference API. Start free, scale to millions.

Try Free API
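Together exposes an OpenAI-compatible endpoint, so hosted models can be called with the standard openai client. Whether this particular model is listed on Together is not confirmed; the model id below is a placeholder:

from openai import OpenAI

client = OpenAI(
    base_url="https://api.together.xyz/v1",
    api_key="YOUR_TOGETHER_API_KEY",  # placeholder key
)

response = client.chat.completions.create(
    model="FINGU-AI/QwQ-Buddy-32B-Alpha",  # placeholder: confirm the hosted id
    messages=[{"role": "user", "content": "Write a Python function to compute Fibonacci numbers."}],
    max_tokens=200,
)
print(response.choices[0].message.content)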

Replicate

One-click model deployment

Easiest Setup

Run models in the cloud with simple API. No DevOps required.

Deploy Now
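Replicate's Python client follows the same run-a-model pattern. The model reference below is hypothetical; this model's availability on Replicate is not confirmed:

import replicate

# Hypothetical model reference; replace with the actual Replicate id if published.
output = replicate.run(
    "fingu-ai/qwq-buddy-32b-alpha",
    input={"prompt": "Write a Python function to compute Fibonacci numbers:"},
)
print("".join(output))  # language models stream output as string chunks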

Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.