qwen3-30m-fp16

by Mostafa8Mehrabi
Language Model · 0.6B params · license: apache-2.0
New · Early-stage · 8 downloads
Edge AI: Mobile, Laptop, Server (2GB+ RAM)

Quick Summary

qwen3-30m-fp16 is a small causal language model (0.6B parameters, Apache-2.0) published by Mostafa8Mehrabi and distributed in fp16, targeting edge deployments on mobile, laptop, and server hardware.

Device Compatibility

Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU recommended
Minimum recommended: 1GB+ RAM
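
As a sanity check on these figures, the weights alone of a 0.6B-parameter model in fp16 come to roughly 1.2GB (2 bytes per parameter), which is why 1GB+ is the bare minimum and 2GB+ is the more realistic floor once activations and the KV cache are counted. A quick back-of-envelope sketch, using the parameter count from the header above:

params = 0.6e9        # parameter count from the model card
bytes_per_param = 2   # fp16 stores each weight in 2 bytes
weights_gb = params * bytes_per_param / 1e9
print(f"weights alone: ~{weights_gb:.1f} GB")  # ~1.2 GB
# Activations, KV cache, and runtime overhead push the practical
# requirement toward the 2GB+ figure listed in the header.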

Code Examples

Usage (Python, transformers)
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the fp16 checkpoint
tokenizer = AutoTokenizer.from_pretrained("Mostafa8Mehrabi/qwen3-30m-fp16")
model = AutoModelForCausalLM.from_pretrained(
    "Mostafa8Mehrabi/qwen3-30m-fp16",
    torch_dtype=torch.float16,  # explicitly load weights in fp16
    device_map="auto"  # place on GPU if available, else CPU (requires accelerate)
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
# Move inputs to whatever device the model was placed on
inputs = {k: v.to(model.device) for k, v in inputs.items()}

outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
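
If you prefer not to manage tokenization and device placement by hand, the same generation can be driven through the higher-level transformers pipeline API; a minimal sketch:

from transformers import pipeline
import torch

# The pipeline wraps tokenization, generation, and decoding in one call.
generator = pipeline(
    "text-generation",
    model="Mostafa8Mehrabi/qwen3-30m-fp16",
    torch_dtype=torch.float16,
    device_map="auto",
)
result = generator("Hello, how are you?", max_new_tokens=50, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])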

Deploy This Model

Production-ready deployment in minutes

Together.ai

Instant API access to this model

Fastest API

Production-ready inference API. Start free, scale to millions.

Try Free API
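
Together's inference API is OpenAI-compatible, so calling a hosted model from Python looks roughly like the sketch below. To be clear, this is an assumption for illustration: whether this particular community checkpoint is actually served there is not confirmed, and the model slug is hypothetical.

from openai import OpenAI

# Together exposes an OpenAI-compatible endpoint; the model slug below is
# hypothetical -- substitute the identifier the provider actually assigns.
client = OpenAI(
    base_url="https://api.together.xyz/v1",
    api_key="YOUR_TOGETHER_API_KEY",
)
resp = client.chat.completions.create(
    model="Mostafa8Mehrabi/qwen3-30m-fp16",  # hypothetical slug
    messages=[{"role": "user", "content": "Hello, how are you?"}],
    max_tokens=50,
)
print(resp.choices[0].message.content)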

Replicate

One-click model deployment

Easiest Setup

Run models in the cloud with simple API. No DevOps required.

Deploy Now
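
Replicate's Python client follows one pattern for every hosted model. The sketch below assumes this checkpoint had been pushed to Replicate; the owner/name slug and the input keys are hypothetical and would need to match the actual model page.

import replicate  # requires REPLICATE_API_TOKEN in the environment

# Hypothetical slug and input schema -- check the model's Replicate page.
output = replicate.run(
    "mostafa8mehrabi/qwen3-30m-fp16",
    input={"prompt": "Hello, how are you?", "max_new_tokens": 50},
)
# Language models on Replicate typically stream back chunks of text.
print("".join(output))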

Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.