Qwen3-Coder-30B-A3B-Instruct-NVFP4

by kleinpanic93

License: apache-2.0
Type: Language Model
Parameters: 30B
Edge AI targets: Mobile · Laptop · Server (68GB+ RAM)
Quick Summary

NVFP4-quantized build of Qwen/Qwen3-Coder-30B-A3B-Instruct, a 30B-parameter coding model. The quantization was performed with NVIDIA ModelOpt on Blackwell hardware; see the provenance record below for details.

Device Compatibility

| Device | Requirement |
|--------|-------------|
| Mobile | 4-6GB RAM   |
| Laptop | 16GB RAM    |
| Server | GPU         |

Minimum recommended: 28GB+ RAM
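As a rough sanity check on the memory figures above, here is a back-of-envelope estimate of the weight-only footprint. It assumes NVFP4 stores each weight in 4 bits plus one 1-byte scale per block of 16 elements (the block size is an assumption about the format; activations, KV cache, and runtime buffers are not counted, which is why the recommended figure is higher):

```python
def nvfp4_weight_gib(n_params: float, block_size: int = 16) -> float:
    """Rough weight-only footprint for an NVFP4 checkpoint:
    4-bit values plus one 1-byte scale per `block_size` elements."""
    data_bytes = n_params * 0.5            # 4 bits per parameter
    scale_bytes = n_params / block_size    # one scale per block
    return (data_bytes + scale_bytes) / 2**30

print(f"{nvfp4_weight_gib(30e9):.1f} GiB")  # → 15.7 GiB
```

About 16 GiB of weights alone, consistent with the 28GB+ RAM recommendation once activations and cache are added.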

Code Examples

With vLLM (Recommended)

```bash
vllm serve kleinpanic93/Qwen3-Coder-30B-A3B-Instruct-NVFP4 \
  --quantization modelopt \
  --trust-remote-code \
  --max-model-len 32768
```

With Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "kleinpanic93/Qwen3-Coder-30B-A3B-Instruct-NVFP4",
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    "kleinpanic93/Qwen3-Coder-30B-A3B-Instruct-NVFP4"
)
```
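Once the `vllm serve` command above is running, the model is reachable through vLLM's OpenAI-compatible HTTP API (port 8000 by default). A minimal sketch of a chat request, assuming a local server; `build_chat_request` is a hypothetical helper, not part of any library:

```python
import json

def build_chat_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat-completions payload for the
    locally served model (helper name is illustrative)."""
    return {
        "model": "kleinpanic93/Qwen3-Coder-30B-A3B-Instruct-NVFP4",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": 0.2,  # low temperature suits code generation
    }

payload = build_chat_request("Write a Python function that reverses a string.")
print(json.dumps(payload, indent=2))

# To actually send it (requires the vLLM server to be running):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8000/v1/chat/completions",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```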
Provenance

```json
{
  "source_model": "Qwen/Qwen3-Coder-30B-A3B-Instruct",
  "quantization": "NVFP4",
  "tool": "nvidia-modelopt 0.41.0",
  "export_method": "save_pretrained_manual",
  "calib_size": 512,
  "calib_dataset": "synthetic-random",
  "hardware": "NVIDIA GB10 (Blackwell)",
  "elapsed_sec": 472
}
```
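The provenance record is plain JSON, so it can be consumed programmatically, e.g. to summarize the quantization run (the record is reproduced inline here for illustration):

```python
import json

# Provenance record from the model card, reproduced for illustration.
record = json.loads("""
{
  "source_model": "Qwen/Qwen3-Coder-30B-A3B-Instruct",
  "quantization": "NVFP4",
  "tool": "nvidia-modelopt 0.41.0",
  "export_method": "save_pretrained_manual",
  "calib_size": 512,
  "calib_dataset": "synthetic-random",
  "hardware": "NVIDIA GB10 (Blackwell)",
  "elapsed_sec": 472
}
""")

minutes = record["elapsed_sec"] / 60
print(f"Quantized from {record['source_model']} "
      f"with {record['calib_size']} calibration samples "
      f"in {minutes:.1f} min on {record['hardware']}.")
```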
